diff --git a/.github/ISSUE_TEMPLATE.md b/.github/ISSUE_TEMPLATE.md
index 261adedaa40b..c220bce3ba0d 100644
--- a/.github/ISSUE_TEMPLATE.md
+++ b/.github/ISSUE_TEMPLATE.md
@@ -32,7 +32,7 @@ What should have happened?
What actually happened?
### Steps to Reproduce
-Please list the steps requires to reproduce the issue, for example:
+Please list the steps required to reproduce the issue, for example:
1. `terraform apply`
### Important Factoids
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 47d640911183..be6aca17542d 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,62 +1,165 @@
-## 0.6.15 (Unreleased)
+## 0.6.16 (Unreleased)
FEATURES:
- * **New command:** `terraform fmt` to automatically normalize config file style [GH-4955]
- * **New interpolation function:** `jsonencode` [GH-5890]
- * **New provider:** `fastly` [GH-5814]
- * **New resource:** `aws_iam_user_ssh_key` [GH-5774]
- * **New resource:** `aws_s3_bucket_notification` [GH-5473]
- * **New resource:** `cloudstack_static_nat` [GH-6004]
- * **New resource:** `consul_key_prefix` [GH-5988]
- * **New resource:** `triton_fabric` [GH-5920]
- * **New resource:** `triton_vlan` [GH-5920]
+ * **New provider:** `librato` [GH-3371]
+ * **New provider:** `softlayer` [GH-4327]
+ * **New resource:** `aws_api_gateway_account` [GH-6321]
+ * **New resource:** `aws_api_gateway_authorizer` [GH-6320]
+ * **New resource:** `openstack_networking_secgroup_v2` [GH-6410]
+ * **New resource:** `openstack_networking_secgroup_rule_v2` [GH-6410]
+ * **New resource:** `vsphere_file` [GH-6401]
IMPROVEMENTS:
- * provider/aws: Add support for Step Scaling in `aws_autoscaling_policy` [GH-4277]
- * provider/aws: Add support for `cname_prefix` to `aws_elastic_beanstalk_environment` resource [GH-5966]
- * provider/aws: Adding outputs for elastic_beanstalk_environment resource [GH-5915]
- * provider/aws: Adds `wait_for_ready_timeout` option to `aws_elastic_beanstalk_environment` [GH-5967]
- * provider/aws: Allow `aws_db_subnet_group` description to be updated [GH-5921]
- * provider/aws: Change `aws_elb` access_logs to list type [GH-5065]
- * provider/aws: Making the Cloudwatch Event Rule Target `target_id` optional [GH-5787]
- * provider/aws: Timeouts for `elasticsearch_domain` are increased [GH-5910]
- * provider/aws: `aws_codecommit_repository` set `default_branch` only if defined [GH-5904]
- * provider/aws: `aws_redshift_cluster` allows usernames with underscore in it [GH-5935]
- * provider/aws: normalize json for `aws_cloudwatch_event_rule` [GH-6025]
- * provider/aws: normalise json for `aws_sns_topic` [GH-6089]
- * provider/aws: Allow multiple EIPs to associate to single ENI [GH-6070]
- * provider/clc: Override default `account` alias in provider config [GH-5785]
- * provider/datadog: Add heredoc support to message, escalation_message, and query [GH-5788]
- * provider/docker: Add support for docker run --user option [GH-5300]
- * provider/google: Accept GOOGLE_CLOUD_KEYFILE_JSON env var for credentials [GH-6007]
- * provider/google: Make "project" attribute on provider configuration optional [GH-6112]
- * provider/google: Add "project" argument and attribute to all GCP compute resources which inherit from the provider's value [GH-6112]
- *provider/google: Deprecate unused "region" attribute in `global_forwarding_rule`; this attribute was never used anywhere in the computation of the resource [GH-6112]
- * provider/github: Add support for privacy to `github_team` [GH-6116]
- * provider/cloudstack: Deprecate `ipaddress` in favour of `ip_address` in all resources [GH-6010]
- * provider/openstack: Allow subnets with no gateway [GH-6060]
+ * core: update HCL dependency to improve whitespace handling in `terraform fmt` [GH-6347]
+ * provider/azurerm: Increase timeout for ARM Template deployments to 40 minutes [GH-6319]
+ * provider/cloudflare: Add proxied option to `cloudflare_record` [GH-5508]
+ * provider/docker: Add ability to keep docker image locally on terraform destroy [GH-6376]
+ * provider/fastly: Add S3 Log Streaming to Fastly Service [GH-6378]
+ * provider/aws: Improve error handling in IAM Server Certificates [GH-6442]
+ * provider/aws: Add support for response parameters to `api_gateway_method_response` & `api_gateway_integration_response` [GH-6344]
+ * provider/triton: Add support for specifying network interfaces on `triton_machine` resources [GH-6418]
+ * provider/vsphere: Add `skip_customization` option to `vsphere_virtual_machine` resources [GH-6355]
+ * provider/vsphere: Add ability to specify and mount bootable vmdk in `vsphere_virtual_machine` [GH-6146]
+ * provider/vsphere: Add support for `memory_reservation` to `vsphere_virtual_machine` [GH-6036]
+ * provider/vsphere: Checking for empty diskPath in `vsphere_virtual_machine` before creating [GH-6400]
+ * provider/vsphere: Support updates to vcpu and memory on `vsphere_virtual_machine` [GH-6356]
+ * provider/vsphere: Add support for IPv6 to `vsphere_virtual_machine` [GH-6457]
BUG FIXES:
- * provider/aws: Convert protocols to standard format for Security Groups [GH-5881]
- * provider/aws: Fix `aws_route panic` when destination CIDR block is nil [GH-5781]
- * provider/aws: Fix issue re-creating deleted VPC peering connections [GH-5959]
- * provider/aws: Fix issue with changing iops when also changing storage type to io1 on RDS [GH-5676]
- * provider/aws: Fix issue with retrying deletion of Network ACLs [GH-5954]
- * provider/aws: Fix potential crash when receiving malformed `aws_route` API responses [GH-5867]
- * provider/aws: Guard against empty responses from Lambda Permissions [GH-5838]
- * provider/aws: Normalize and compact SQS Redrive, Policy JSON [GH-5888]
- * provider/aws: Remove CloudTrail Trail from state if not found [GH-6024]
- * provider/aws: Report better error message in `aws_route53_record` when `set_identifier` is required [GH-5777]
- * provider/aws: set ASG `health_check_grace_period` default to 300 [GH-5830]
- * provider/aws: Show human-readable error message when failing to read an EBS volume [GH-6038]
- * provider/azurerm: Fix detection of `azurerm_storage_account` resources removed manually [GH-5878]
- * provider/docker: Docker Image will be deleted on destroy [GH-5801]
- * provider/openstack: Fix resizing when Flavor Name changes [GH-6020]
- * provider/openstack: Fix Disabling DHCP on Subnets [GH-6052]
- * provider/vsphere: Add error handling to `vsphere_folder` [GH-6095]
+ * provider/aws: Allow account ID checks on EC2 instances and with federated accounts [GH-5030]
+ * provider/aws: Fix bug where `aws_elastic_beanstalk_environment` ignored `wait_for_ready_timeout` [GH-6358]
+ * provider/aws: Fix bug where `aws_elastic_beanstalk_environment` update config template didn't work [GH-6342]
+ * provider/aws: Fix issue with KMS Alias keys and name prefixes [GH-6328]
+ * provider/aws: Fix read of `aws_cloudwatch_log_group` after an update is applied [GH-6384]
+ * provider/aws: Fix updating `number_of_nodes` on `aws_redshift_cluster` [GH-6333]
+ * provider/aws: Omit `aws_cloudfront_distribution` custom_error fields when not explicitly set [GH-6382]
+ * provider/aws: Refresh state on `aws_sqs_queue` not found [GH-6381]
+ * provider/aws: Fix issue in updating CloudFront distribution LoggingConfig [GH-6407]
+ * provider/aws: Fix an eventual consistency issue with `aws_security_group_rule` and possible duplications [GH-6325]
+ * provider/aws: Respect `selection_pattern` in `aws_api_gateway_integration_response` (previously ignored field) [GH-5893]
+ * provider/aws: Fix `aws_route` crash when used with `aws_vpc_endpoint` [GH-6338]
+ * provider/aws: Fix issue replacing Network ACL Relationship [GH-6421]
+ * provider/aws: validate `cluster_id` length for `aws_elasticache_cluster` [GH-6330]
+ * provider/aws: Fix issue with encrypted snapshots of block devices in `aws_launch_configuration` resources [GH-6452]
+ * provider/cloudflare: can manage apex records [GH-6449]
+ * provider/cloudflare: won't refresh with incorrect record if names match [GH-6449]
+ * provider/docker: Fix crash when using empty string in the `command` list in `docker_container` resources [GH-6424]
+ * provider/vsphere: Memory reservations are now set correctly in `vsphere_virtual_machine` resources [GH-6482]
+
+## 0.6.15 (April 22, 2016)
+
+FEATURES:
+
+ * **New command:** `terraform fmt` to automatically normalize config file style ([#4955](https://github.com/hashicorp/terraform/issues/4955))
+ * **New interpolation function:** `jsonencode` ([#5890](https://github.com/hashicorp/terraform/issues/5890))
+ * **New provider:** `cobbler` ([#5969](https://github.com/hashicorp/terraform/issues/5969))
+ * **New provider:** `fastly` ([#5814](https://github.com/hashicorp/terraform/issues/5814))
+ * **New resource:** `aws_cloudfront_distribution` ([#5221](https://github.com/hashicorp/terraform/issues/5221))
+ * **New resource:** `aws_cloudfront_origin_access_identity` ([#5221](https://github.com/hashicorp/terraform/issues/5221))
+ * **New resource:** `aws_iam_user_ssh_key` ([#5774](https://github.com/hashicorp/terraform/issues/5774))
+ * **New resource:** `aws_s3_bucket_notification` ([#5473](https://github.com/hashicorp/terraform/issues/5473))
+ * **New resource:** `cloudstack_static_nat` ([#6004](https://github.com/hashicorp/terraform/issues/6004))
+ * **New resource:** `consul_key_prefix` ([#5988](https://github.com/hashicorp/terraform/issues/5988))
+ * **New resource:** `aws_default_network_acl` ([#6165](https://github.com/hashicorp/terraform/issues/6165))
+ * **New resource:** `triton_fabric` ([#5920](https://github.com/hashicorp/terraform/issues/5920))
+ * **New resource:** `triton_vlan` ([#5920](https://github.com/hashicorp/terraform/issues/5920))
+ * **New resource:** `aws_opsworks_application` ([#4419](https://github.com/hashicorp/terraform/issues/4419))
+ * **New resource:** `aws_opsworks_instance` ([#4276](https://github.com/hashicorp/terraform/issues/4276))
+ * **New resource:** `aws_cloudwatch_log_subscription_filter` ([#5996](https://github.com/hashicorp/terraform/issues/5996))
+ * **New resource:** `openstack_networking_router_route_v2` ([#6207](https://github.com/hashicorp/terraform/issues/6207))
+
+IMPROVEMENTS:
+
+ * command/apply: Output will now show periodic status updates of slow resources. ([#6163](https://github.com/hashicorp/terraform/issues/6163))
+ * core: Variables passed between modules are now type checked ([#6185](https://github.com/hashicorp/terraform/issues/6185))
+ * core: Smaller release binaries by stripping debug information ([#6238](https://github.com/hashicorp/terraform/issues/6238))
+ * provider/aws: Add support for Step Scaling in `aws_autoscaling_policy` ([#4277](https://github.com/hashicorp/terraform/issues/4277))
+ * provider/aws: Add support for `cname_prefix` to `aws_elastic_beanstalk_environment` resource ([#5966](https://github.com/hashicorp/terraform/issues/5966))
+ * provider/aws: Add support for trigger_configuration to `aws_codedeploy_deployment_group` ([#5599](https://github.com/hashicorp/terraform/issues/5599))
+ * provider/aws: Adding outputs for elastic_beanstalk_environment resource ([#5915](https://github.com/hashicorp/terraform/issues/5915))
+ * provider/aws: Adds `wait_for_ready_timeout` option to `aws_elastic_beanstalk_environment` ([#5967](https://github.com/hashicorp/terraform/issues/5967))
+ * provider/aws: Allow `aws_db_subnet_group` description to be updated ([#5921](https://github.com/hashicorp/terraform/issues/5921))
+ * provider/aws: Allow multiple EIPs to associate to single ENI ([#6070](https://github.com/hashicorp/terraform/issues/6070))
+ * provider/aws: Change `aws_elb` access_logs to list type ([#5065](https://github.com/hashicorp/terraform/issues/5065))
+ * provider/aws: Check that InternetGateway exists before returning from creation ([#6105](https://github.com/hashicorp/terraform/issues/6105))
+ * provider/aws: Don't Base64-encode EC2 userdata if it is already Base64 encoded ([#6140](https://github.com/hashicorp/terraform/issues/6140))
+ * provider/aws: Making the Cloudwatch Event Rule Target `target_id` optional ([#5787](https://github.com/hashicorp/terraform/issues/5787))
+ * provider/aws: Timeouts for `elasticsearch_domain` are increased ([#5910](https://github.com/hashicorp/terraform/issues/5910))
+ * provider/aws: `aws_codecommit_repository` set `default_branch` only if defined ([#5904](https://github.com/hashicorp/terraform/issues/5904))
+ * provider/aws: `aws_redshift_cluster` allows usernames with underscore in it ([#5935](https://github.com/hashicorp/terraform/issues/5935))
+ * provider/aws: normalise json for `aws_sns_topic` ([#6089](https://github.com/hashicorp/terraform/issues/6089))
+ * provider/aws: normalize json for `aws_cloudwatch_event_rule` ([#6025](https://github.com/hashicorp/terraform/issues/6025))
+ * provider/aws: Increase timeout for `aws_redshift_cluster` ([#6305](https://github.com/hashicorp/terraform/issues/6305))
+ * provider/aws: Opsworks layers now support `custom_json` argument ([#4272](https://github.com/hashicorp/terraform/issues/4272))
+ * provider/aws: Added migration for `tier` attribute in `aws_elastic_beanstalk_environment` ([#6167](https://github.com/hashicorp/terraform/issues/6167))
+ * provider/aws: Use resource.Retry for route creation and deletion ([#6225](https://github.com/hashicorp/terraform/issues/6225))
+ * provider/aws: Add support for S3 Bucket Lifecycle Rules ([#6220](https://github.com/hashicorp/terraform/issues/6220))
+ * provider/clc: Override default `account` alias in provider config ([#5785](https://github.com/hashicorp/terraform/issues/5785))
+ * provider/cloudstack: Deprecate `ipaddress` in favour of `ip_address` in all resources ([#6010](https://github.com/hashicorp/terraform/issues/6010))
+ * provider/cloudstack: Deprecate allowing names (instead of IDs) for parameters that reference other resources ([#6123](https://github.com/hashicorp/terraform/issues/6123))
+ * provider/datadog: Add heredoc support to message, escalation_message, and query ([#5788](https://github.com/hashicorp/terraform/issues/5788))
+ * provider/docker: Add support for docker run --user option ([#5300](https://github.com/hashicorp/terraform/issues/5300))
+ * provider/github: Add support for privacy to `github_team` ([#6116](https://github.com/hashicorp/terraform/issues/6116))
+ * provider/google: Accept GOOGLE_CLOUD_KEYFILE_JSON env var for credentials ([#6007](https://github.com/hashicorp/terraform/issues/6007))
+ * provider/google: Add "project" argument and attribute to all GCP compute resources which inherit from the provider's value ([#6112](https://github.com/hashicorp/terraform/issues/6112))
+ * provider/google: Make "project" attribute on provider configuration optional ([#6112](https://github.com/hashicorp/terraform/issues/6112))
+ * provider/google: Read more common configuration values from the environment and clarify precedence ordering ([#6114](https://github.com/hashicorp/terraform/issues/6114))
+ * provider/google: `addons_config` and `subnetwork` added as attributes to `google_container_cluster` ([#5871](https://github.com/hashicorp/terraform/issues/5871))
+ * provider/fastly: Add support for Request Headers ([#6197](https://github.com/hashicorp/terraform/issues/6197))
+ * provider/fastly: Add support for Gzip rules ([#6247](https://github.com/hashicorp/terraform/issues/6247))
+ * provider/openstack: Add value_specs argument and attribute for routers ([#4898](https://github.com/hashicorp/terraform/issues/4898))
+ * provider/openstack: Allow subnets with no gateway ([#6060](https://github.com/hashicorp/terraform/issues/6060))
+ * provider/openstack: Enable Token Authentication ([#6081](https://github.com/hashicorp/terraform/issues/6081))
+ * provider/postgresql: New `ssl_mode` argument allowing different SSL usage tradeoffs ([#6008](https://github.com/hashicorp/terraform/issues/6008))
+ * provider/vsphere: Support for linked clones and Windows-specific guest config options ([#6087](https://github.com/hashicorp/terraform/issues/6087))
+ * provider/vsphere: Checking for Powered Off State before `vsphere_virtual_machine` deletion ([#6283](https://github.com/hashicorp/terraform/issues/6283))
+ * provider/vsphere: Support mounting ISO images to virtual cdrom drives ([#4243](https://github.com/hashicorp/terraform/issues/4243))
+ * provider/vsphere: Fix missing ssh connection info ([#4283](https://github.com/hashicorp/terraform/issues/4283))
+ * provider/google: Deprecate unused "region" attribute in `global_forwarding_rule`; this attribute was never used anywhere in the computation of the resource ([#6112](https://github.com/hashicorp/terraform/issues/6112))
+ * provider/cloudstack: Add group attribute to `cloudstack_instance` resource ([#6023](https://github.com/hashicorp/terraform/issues/6023))
+ * provider/azurerm: Provide a meaningful error message when credentials are not correct ([#6290](https://github.com/hashicorp/terraform/issues/6290))
+ * provider/cloudstack: Improve support for using projects ([#6282](https://github.com/hashicorp/terraform/issues/6282))
+
+BUG FIXES:
+
+ * core: Providers are now correctly inherited down a nested module tree ([#6186](https://github.com/hashicorp/terraform/issues/6186))
+ * provider/aws: Convert protocols to standard format for Security Groups ([#5881](https://github.com/hashicorp/terraform/issues/5881))
+ * provider/aws: Fix Lambda VPC integration (missing `vpc_id` field in schema) ([#6157](https://github.com/hashicorp/terraform/issues/6157))
+ * provider/aws: Fix `aws_route panic` when destination CIDR block is nil ([#5781](https://github.com/hashicorp/terraform/issues/5781))
+ * provider/aws: Fix issue re-creating deleted VPC peering connections ([#5959](https://github.com/hashicorp/terraform/issues/5959))
+ * provider/aws: Fix issue with changing iops when also changing storage type to io1 on RDS ([#5676](https://github.com/hashicorp/terraform/issues/5676))
+ * provider/aws: Fix issue with retrying deletion of Network ACLs ([#5954](https://github.com/hashicorp/terraform/issues/5954))
+ * provider/aws: Fix potential crash when receiving malformed `aws_route` API responses ([#5867](https://github.com/hashicorp/terraform/issues/5867))
+ * provider/aws: Guard against empty responses from Lambda Permissions ([#5838](https://github.com/hashicorp/terraform/issues/5838))
+ * provider/aws: Normalize and compact SQS Redrive, Policy JSON ([#5888](https://github.com/hashicorp/terraform/issues/5888))
+ * provider/aws: Fix issue updating ElasticBeanstalk Configuration Templates ([#6307](https://github.com/hashicorp/terraform/issues/6307))
+ * provider/aws: Remove CloudTrail Trail from state if not found ([#6024](https://github.com/hashicorp/terraform/issues/6024))
+ * provider/aws: Fix crash in AWS S3 Bucket when website index/error is empty ([#6269](https://github.com/hashicorp/terraform/issues/6269))
+ * provider/aws: Report better error message in `aws_route53_record` when `set_identifier` is required ([#5777](https://github.com/hashicorp/terraform/issues/5777))
+ * provider/aws: Show human-readable error message when failing to read an EBS volume ([#6038](https://github.com/hashicorp/terraform/issues/6038))
+ * provider/aws: set ASG `health_check_grace_period` default to 300 ([#5830](https://github.com/hashicorp/terraform/issues/5830))
+ * provider/aws: Fix issue with Opsworks and empty Custom Cook Book sources ([#6078](https://github.com/hashicorp/terraform/issues/6078))
+ * provider/aws: wait for IAM instance profile to propagate when creating Opsworks stacks ([#6049](https://github.com/hashicorp/terraform/issues/6049))
+ * provider/aws: Don't read back `aws_opsworks_stack` cookbooks source password ([#6203](https://github.com/hashicorp/terraform/issues/6203))
+ * provider/aws: Resolves DefaultOS and ConfigurationManager conflict on `aws_opsworks_stack` ([#6244](https://github.com/hashicorp/terraform/issues/6244))
+ * provider/aws: Renaming `aws_elastic_beanstalk_configuration_template` `option_settings` to `setting` ([#6043](https://github.com/hashicorp/terraform/issues/6043))
+ * provider/aws: `aws_customer_gateway` will properly populate `bgp_asn` on refresh. [no issue]
+ * provider/aws: Refresh state on `aws_directory_service_directory` not found ([#6294](https://github.com/hashicorp/terraform/issues/6294))
+ * provider/aws: `aws_elb` `cross_zone_load_balancing` is not refreshed in the state file ([#6295](https://github.com/hashicorp/terraform/issues/6295))
+ * provider/aws: `aws_autoscaling_group` will properly populate `tag` on refresh. [no issue]
+ * provider/azurerm: Fix detection of `azurerm_storage_account` resources removed manually ([#5878](https://github.com/hashicorp/terraform/issues/5878))
+ * provider/docker: Docker Image will be deleted on destroy ([#5801](https://github.com/hashicorp/terraform/issues/5801))
+ * provider/openstack: Fix Disabling DHCP on Subnets ([#6052](https://github.com/hashicorp/terraform/issues/6052))
+ * provider/openstack: Fix resizing when Flavor Name changes ([#6020](https://github.com/hashicorp/terraform/issues/6020))
+ * provider/openstack: Fix Access Address Detection ([#6181](https://github.com/hashicorp/terraform/issues/6181))
+ * provider/openstack: Fix admin_state_up on openstack_lb_member_v1 ([#6267](https://github.com/hashicorp/terraform/issues/6267))
+ * provider/triton: Firewall status on `triton_machine` resources is reflected correctly ([#6119](https://github.com/hashicorp/terraform/issues/6119))
+ * provider/triton: Fix time out when applying updates to Triton machine metadata ([#6149](https://github.com/hashicorp/terraform/issues/6149))
+ * provider/vsphere: Add error handling to `vsphere_folder` ([#6095](https://github.com/hashicorp/terraform/issues/6095))
+ * provider/cloudstack: Fix marshalling errors when using CloudStack 4.7.x (or newer) [GH-#226]
## 0.6.14 (March 21, 2016)
@@ -176,7 +279,7 @@ BUG FIXES:
* provider/aws: Fix a bug where listener protocol on `aws_elb` resources was case insensitive ([#5376](https://github.com/hashicorp/terraform/issues/5376))
* provider/aws: Fix a bug which caused panics creating rules on security groups in EC2 Classic ([#5329](https://github.com/hashicorp/terraform/issues/5329))
* provider/aws: Fix crash when `aws_lambda_function` VpcId is nil ([#5182](https://github.com/hashicorp/terraform/issues/5182))
- * provider/aws: Fix error with parsing JSON in `aws_s3_bucket` policy attribute ([#5474](https://github.com/hashicorp/terraform/issues/5474))
+ * provider/aws: Fix error with parsing JSON in `aws_s3_bucket` policy attribute ([#5474](https://github.com/hashicorp/terraform/issues/5474))
* provider/aws: `aws_lambda_function` can be properly updated, either via `s3_object_version` or via `filename` & `source_code_hash` as described in docs ([#5239](https://github.com/hashicorp/terraform/issues/5239))
* provider/google: Fix managed instance group preemptible instance creation ([#4834](https://github.com/hashicorp/terraform/issues/4834))
* provider/openstack: Account for a 403 reply when os-tenant-networks is disabled ([#5432](https://github.com/hashicorp/terraform/issues/5432))
@@ -1710,7 +1813,7 @@ BUG FIXES:
* providers/aws: Retry deleting subnet for some time while AWS eventually
destroys dependencies. ([#357](https://github.com/hashicorp/terraform/issues/357))
* providers/aws: More robust destroy for route53 records. ([#342](https://github.com/hashicorp/terraform/issues/342))
- * providers/aws: ELB generates much more correct plans without extranneous
+ * providers/aws: ELB generates much more correct plans without extraneous
data.
* providers/aws: ELB works properly with dynamically changing
count of instances.
diff --git a/Godeps/Godeps.json b/Godeps/Godeps.json
index 45a22d950f1e..39346dc3dadb 100644
--- a/Godeps/Godeps.json
+++ b/Godeps/Godeps.json
@@ -1,6 +1,7 @@
{
"ImportPath": "github.com/hashicorp/terraform",
"GoVersion": "go1.6",
+ "GodepVersion": "v63",
"Packages": [
"./..."
],
@@ -226,288 +227,293 @@
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/awserr",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/awsutil",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/client",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/client/metadata",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/corehandlers",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/credentials",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/defaults",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/ec2metadata",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/request",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/session",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/private/endpoints",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/private/protocol",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/private/protocol/ec2query",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/private/protocol/json/jsonutil",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/private/protocol/jsonrpc",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/private/protocol/query",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/private/protocol/query/queryutil",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/private/protocol/rest",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/private/protocol/restjson",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/private/protocol/restxml",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/private/signer/v4",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/private/waiter",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/apigateway",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/autoscaling",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/cloudformation",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
+ },
+ {
+ "ImportPath": "github.com/aws/aws-sdk-go/service/cloudfront",
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/cloudtrail",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/cloudwatch",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/cloudwatchevents",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/cloudwatchlogs",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/codecommit",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/codedeploy",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/directoryservice",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/dynamodb",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/ec2",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/ecr",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/ecs",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/efs",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/elasticache",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/elasticbeanstalk",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/elasticsearchservice",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/elb",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/firehose",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/glacier",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/iam",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/kinesis",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/kms",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/lambda",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/opsworks",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/rds",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/redshift",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/route53",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/s3",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/sns",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/sqs",
- "Comment": "v1.1.14",
- "Rev": "6876e9922ff299adf36e43e04c94820077968b3b"
+ "Comment": "v1.1.15",
+ "Rev": "e7cf1e5986499eea7d4a87868f1eb578c8f2045a"
},
{
"ImportPath": "github.com/bgentry/speakeasy",
@@ -546,6 +552,10 @@
"ImportPath": "github.com/cyberdelia/heroku-go/v3",
"Rev": "81c5afa1abcf69cc18ccc24fa3716b5a455c9208"
},
+ {
+ "ImportPath": "github.com/davecgh/go-spew/spew",
+ "Rev": "5215b55f46b2b919f50a1df0eaa5886afe4e3b3d"
+ },
{
"ImportPath": "github.com/digitalocean/godo",
"Comment": "v0.9.0-20-gf75d769",
@@ -559,6 +569,10 @@
"ImportPath": "github.com/dylanmei/winrmtest",
"Rev": "025617847eb2cf9bd1d851bc3b22ed28e6245ce5"
},
+ {
+ "ImportPath": "github.com/fatih/structs",
+ "Rev": "73c4e3dc02a78deaba8640d5f3a8c236ec1352bf"
+ },
{
"ImportPath": "github.com/fsouza/go-dockerclient",
"Rev": "bf97c77db7c945cbcdbf09d56c6f87a66f54537b"
@@ -655,13 +669,13 @@
},
{
"ImportPath": "github.com/hashicorp/atlas-go/archive",
- "Comment": "20141209094003-90-g0008886",
- "Rev": "0008886ebfa3b424bed03e2a5cbe4a2568ea0ff6"
+ "Comment": "20141209094003-92-g95fa852",
+ "Rev": "95fa852edca41c06c4ce526af4bb7dec4eaad434"
},
{
"ImportPath": "github.com/hashicorp/atlas-go/v1",
- "Comment": "20141209094003-90-g0008886",
- "Rev": "0008886ebfa3b424bed03e2a5cbe4a2568ea0ff6"
+ "Comment": "20141209094003-92-g95fa852",
+ "Rev": "95fa852edca41c06c4ce526af4bb7dec4eaad434"
},
{
"ImportPath": "github.com/hashicorp/consul/api",
@@ -696,6 +710,10 @@
"ImportPath": "github.com/hashicorp/go-retryablehttp",
"Rev": "24fda80b7c713c52649e57ce20100d453f7bdb24"
},
+ {
+ "ImportPath": "github.com/hashicorp/go-rootcerts",
+ "Rev": "6bb64b370b90e7ef1fa532be9e591a81c3493e00"
+ },
{
"ImportPath": "github.com/hashicorp/go-uuid",
"Rev": "36289988d83ca270bc07c234c36f364b0dd9c9a7"
@@ -706,55 +724,55 @@
},
{
"ImportPath": "github.com/hashicorp/hcl",
- "Rev": "2604f3bda7e8960c1be1063709e7d7f0765048d0"
+ "Rev": "9a905a34e6280ce905da1a32344b25e81011197a"
},
{
"ImportPath": "github.com/hashicorp/hcl/hcl/ast",
- "Rev": "2604f3bda7e8960c1be1063709e7d7f0765048d0"
+ "Rev": "9a905a34e6280ce905da1a32344b25e81011197a"
},
{
"ImportPath": "github.com/hashicorp/hcl/hcl/fmtcmd",
- "Rev": "2604f3bda7e8960c1be1063709e7d7f0765048d0"
+ "Rev": "9a905a34e6280ce905da1a32344b25e81011197a"
},
{
"ImportPath": "github.com/hashicorp/hcl/hcl/parser",
- "Rev": "2604f3bda7e8960c1be1063709e7d7f0765048d0"
+ "Rev": "9a905a34e6280ce905da1a32344b25e81011197a"
},
{
"ImportPath": "github.com/hashicorp/hcl/hcl/printer",
- "Rev": "2604f3bda7e8960c1be1063709e7d7f0765048d0"
+ "Rev": "9a905a34e6280ce905da1a32344b25e81011197a"
},
{
"ImportPath": "github.com/hashicorp/hcl/hcl/scanner",
- "Rev": "2604f3bda7e8960c1be1063709e7d7f0765048d0"
+ "Rev": "9a905a34e6280ce905da1a32344b25e81011197a"
},
{
"ImportPath": "github.com/hashicorp/hcl/hcl/strconv",
- "Rev": "2604f3bda7e8960c1be1063709e7d7f0765048d0"
+ "Rev": "9a905a34e6280ce905da1a32344b25e81011197a"
},
{
"ImportPath": "github.com/hashicorp/hcl/hcl/token",
- "Rev": "2604f3bda7e8960c1be1063709e7d7f0765048d0"
+ "Rev": "9a905a34e6280ce905da1a32344b25e81011197a"
},
{
"ImportPath": "github.com/hashicorp/hcl/json/parser",
- "Rev": "2604f3bda7e8960c1be1063709e7d7f0765048d0"
+ "Rev": "9a905a34e6280ce905da1a32344b25e81011197a"
},
{
"ImportPath": "github.com/hashicorp/hcl/json/scanner",
- "Rev": "2604f3bda7e8960c1be1063709e7d7f0765048d0"
+ "Rev": "9a905a34e6280ce905da1a32344b25e81011197a"
},
{
"ImportPath": "github.com/hashicorp/hcl/json/token",
- "Rev": "2604f3bda7e8960c1be1063709e7d7f0765048d0"
+ "Rev": "9a905a34e6280ce905da1a32344b25e81011197a"
},
{
"ImportPath": "github.com/hashicorp/hil",
- "Rev": "59cce4313fb7be2d9064afbdb3cacd76737cfa3c"
+ "Rev": "0640fefa3817883b16b77bf760c4c3a6f2589545"
},
{
"ImportPath": "github.com/hashicorp/hil/ast",
- "Rev": "59cce4313fb7be2d9064afbdb3cacd76737cfa3c"
+ "Rev": "0640fefa3817883b16b77bf760c4c3a6f2589545"
},
{
"ImportPath": "github.com/hashicorp/logutils",
@@ -769,6 +787,10 @@
"ImportPath": "github.com/hashicorp/yamux",
"Rev": "df949784da9ed028ee76df44652e42d37a09d7e4"
},
+ {
+ "ImportPath": "github.com/henrikhodne/go-librato/librato",
+ "Rev": "613abdebf4922c4d9d46bcb4bcf14ee18c08d7de"
+ },
{
"ImportPath": "github.com/hmrc/vmware-govcd",
"Comment": "v0.0.2-37-g5cd82f0",
@@ -842,16 +864,25 @@
},
{
"ImportPath": "github.com/joyent/gosdc/cloudapi",
- "Rev": "d0f3bf74903550b93aa817695001d4607cc632f3"
+ "Rev": "0697a5c4f39a71a4f9e3b154380b47dbfcc3da6e"
},
{
"ImportPath": "github.com/joyent/gosign/auth",
"Rev": "a1f3aa7d52213987117e47d721bcc9a499994d5f"
},
+ {
+ "ImportPath": "github.com/jtopjian/cobblerclient",
+ "Comment": "v0.3.0-33-g53d1c0a",
+ "Rev": "53d1c0a0b003aabfa7ecfa848d856606cb481196"
+ },
{
"ImportPath": "github.com/kardianos/osext",
"Rev": "29ae4ffbc9a6fe9fb2bc5029050ce6996ea1d3bc"
},
+ {
+ "ImportPath": "github.com/kolo/xmlrpc",
+ "Rev": "0826b98aaa29c0766956cb40d45cf7482a597671"
+ },
{
"ImportPath": "github.com/lib/pq",
"Comment": "go1.0-cutoff-74-g8ad2b29",
@@ -890,10 +921,39 @@
"ImportPath": "github.com/mattn/go-isatty",
"Rev": "56b76bdf51f7708750eac80fa38b952bb9f32639"
},
+ {
+ "ImportPath": "github.com/maximilien/softlayer-go/client",
+ "Comment": "v0.6.0",
+ "Rev": "85659debe44fab5792fc92cf755c04b115b9dc19"
+ },
+ {
+ "ImportPath": "github.com/maximilien/softlayer-go/common",
+ "Comment": "v0.6.0",
+ "Rev": "85659debe44fab5792fc92cf755c04b115b9dc19"
+ },
+ {
+ "ImportPath": "github.com/maximilien/softlayer-go/data_types",
+ "Comment": "v0.6.0",
+ "Rev": "85659debe44fab5792fc92cf755c04b115b9dc19"
+ },
+ {
+ "ImportPath": "github.com/maximilien/softlayer-go/services",
+ "Comment": "v0.6.0",
+ "Rev": "85659debe44fab5792fc92cf755c04b115b9dc19"
+ },
+ {
+ "ImportPath": "github.com/maximilien/softlayer-go/softlayer",
+ "Comment": "v0.6.0",
+ "Rev": "85659debe44fab5792fc92cf755c04b115b9dc19"
+ },
{
"ImportPath": "github.com/mitchellh/cli",
"Rev": "cb6853d606ea4a12a15ac83cc43503df99fd28fb"
},
+ {
+ "ImportPath": "github.com/mitchellh/cloudflare-go",
+ "Rev": "84c7a0993a06d555dbfddd2b32f5fa9b92fa1dc1"
+ },
{
"ImportPath": "github.com/mitchellh/colorstring",
"Rev": "8631ce90f28644f54aeedcb3e389a85174e067d1"
@@ -956,10 +1016,6 @@
"ImportPath": "github.com/pborman/uuid",
"Rev": "dee7705ef7b324f27ceb85a121c61f2c2e8ce988"
},
- {
- "ImportPath": "github.com/pearkes/cloudflare",
- "Rev": "765ac1828a78ba49e6dc48309d56415c61806ac3"
- },
{
"ImportPath": "github.com/pearkes/dnsimple",
"Rev": "78996265f576c7580ff75d0cb2c606a61883ceb8"
@@ -968,185 +1024,200 @@
"ImportPath": "github.com/pearkes/mailgun",
"Rev": "b88605989c4141d22a6d874f78800399e5bb7ac2"
},
+ {
+ "ImportPath": "github.com/pkg/errors",
+ "Comment": "v0.3.0",
+ "Rev": "42fa80f2ac6ed17a977ce826074bd3009593fa9d"
+ },
{
"ImportPath": "github.com/rackspace/gophercloud",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/openstack",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/openstack/blockstorage/v1/volumes",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/openstack/compute/v2/extensions/bootfromvolume",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/openstack/compute/v2/extensions/floatingip",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/openstack/compute/v2/extensions/keypairs",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/openstack/compute/v2/extensions/schedulerhints",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/openstack/compute/v2/extensions/secgroups",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/openstack/compute/v2/extensions/servergroups",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/openstack/compute/v2/extensions/tenantnetworks",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/openstack/compute/v2/extensions/volumeattach",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/openstack/compute/v2/flavors",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/openstack/compute/v2/images",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/openstack/compute/v2/servers",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/openstack/identity/v2/tenants",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/openstack/identity/v2/tokens",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/openstack/identity/v3/tokens",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/fwaas/firewalls",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/fwaas/policies",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/fwaas/rules",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/layer3/floatingips",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/layer3/routers",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/lbaas/members",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/lbaas/monitors",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/lbaas/pools",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/lbaas/vips",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
+ },
+ {
+ "ImportPath": "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/security/groups",
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
+ },
+ {
+ "ImportPath": "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/security/rules",
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/openstack/networking/v2/networks",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/openstack/networking/v2/ports",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/openstack/networking/v2/subnets",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/openstack/objectstorage/v1/accounts",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/openstack/objectstorage/v1/containers",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/openstack/objectstorage/v1/objects",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/openstack/utils",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/pagination",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/testhelper",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/rackspace/gophercloud/testhelper/client",
- "Comment": "v1.0.0-868-ga09b5b4",
- "Rev": "a09b5b4eb58195b6fb3898496586b8d6aeb558e0"
+ "Comment": "v1.0.0-884-gc54bbac",
+ "Rev": "c54bbac81d19eb4df3ad167764dbb6ff2e7194de"
},
{
"ImportPath": "github.com/satori/go.uuid",
@@ -1154,7 +1225,7 @@
},
{
"ImportPath": "github.com/sethvargo/go-fastly",
- "Rev": "382fee1e5e1adf3cc112fadf4f0a8a98e269ea3c"
+ "Rev": "6566b161e807516f4a45bc3054eac291a120e217"
},
{
"ImportPath": "github.com/soniah/dnsmadeeasy",
@@ -1246,8 +1317,8 @@
},
{
"ImportPath": "github.com/xanzy/go-cloudstack/cloudstack",
- "Comment": "v1.2.0-61-g252eb1b",
- "Rev": "252eb1b665d77aa31dedd435fab0a7da57b2d8c1"
+ "Comment": "2.0.0-2-gcfbfb48",
+ "Rev": "cfbfb481e04c131cb89df1c6141b082f2714bc29"
},
{
"ImportPath": "github.com/xanzy/ssh-agent",
@@ -1294,23 +1365,23 @@
},
{
"ImportPath": "golang.org/x/oauth2",
- "Rev": "8a57ed94ffd43444c0879fe75701732a38afc985"
+ "Rev": "2897dcade18a126645f1368de827f1e613a60049"
},
{
"ImportPath": "golang.org/x/oauth2/google",
- "Rev": "8a57ed94ffd43444c0879fe75701732a38afc985"
+ "Rev": "2897dcade18a126645f1368de827f1e613a60049"
},
{
"ImportPath": "golang.org/x/oauth2/internal",
- "Rev": "8a57ed94ffd43444c0879fe75701732a38afc985"
+ "Rev": "2897dcade18a126645f1368de827f1e613a60049"
},
{
"ImportPath": "golang.org/x/oauth2/jws",
- "Rev": "8a57ed94ffd43444c0879fe75701732a38afc985"
+ "Rev": "2897dcade18a126645f1368de827f1e613a60049"
},
{
"ImportPath": "golang.org/x/oauth2/jwt",
- "Rev": "8a57ed94ffd43444c0879fe75701732a38afc985"
+ "Rev": "2897dcade18a126645f1368de827f1e613a60049"
},
{
"ImportPath": "golang.org/x/sys/unix",
@@ -1318,39 +1389,39 @@
},
{
"ImportPath": "google.golang.org/api/compute/v1",
- "Rev": "61d74df3f9f3a66898c8e08aa7e702337b34dda3"
+ "Rev": "43c645d4bcf9251ced36c823a93b6d198764aae4"
},
{
"ImportPath": "google.golang.org/api/container/v1",
- "Rev": "61d74df3f9f3a66898c8e08aa7e702337b34dda3"
+ "Rev": "43c645d4bcf9251ced36c823a93b6d198764aae4"
},
{
"ImportPath": "google.golang.org/api/dns/v1",
- "Rev": "61d74df3f9f3a66898c8e08aa7e702337b34dda3"
+ "Rev": "43c645d4bcf9251ced36c823a93b6d198764aae4"
},
{
"ImportPath": "google.golang.org/api/gensupport",
- "Rev": "61d74df3f9f3a66898c8e08aa7e702337b34dda3"
+ "Rev": "43c645d4bcf9251ced36c823a93b6d198764aae4"
},
{
"ImportPath": "google.golang.org/api/googleapi",
- "Rev": "61d74df3f9f3a66898c8e08aa7e702337b34dda3"
+ "Rev": "43c645d4bcf9251ced36c823a93b6d198764aae4"
},
{
"ImportPath": "google.golang.org/api/googleapi/internal/uritemplates",
- "Rev": "61d74df3f9f3a66898c8e08aa7e702337b34dda3"
+ "Rev": "43c645d4bcf9251ced36c823a93b6d198764aae4"
},
{
"ImportPath": "google.golang.org/api/pubsub/v1",
- "Rev": "61d74df3f9f3a66898c8e08aa7e702337b34dda3"
+ "Rev": "43c645d4bcf9251ced36c823a93b6d198764aae4"
},
{
"ImportPath": "google.golang.org/api/sqladmin/v1beta4",
- "Rev": "61d74df3f9f3a66898c8e08aa7e702337b34dda3"
+ "Rev": "43c645d4bcf9251ced36c823a93b6d198764aae4"
},
{
"ImportPath": "google.golang.org/api/storage/v1",
- "Rev": "61d74df3f9f3a66898c8e08aa7e702337b34dda3"
+ "Rev": "43c645d4bcf9251ced36c823a93b6d198764aae4"
},
{
"ImportPath": "google.golang.org/appengine",
diff --git a/Makefile b/Makefile
index f98d6a5f0c58..de7aa27e92fc 100644
--- a/Makefile
+++ b/Makefile
@@ -5,7 +5,7 @@ default: test vet
# bin generates the releaseable binaries for Terraform
bin: fmtcheck generate
- @sh -c "'$(CURDIR)/scripts/build.sh'"
+ @TF_RELEASE=1 sh -c "'$(CURDIR)/scripts/build.sh'"
# dev creates binaries for testing Terraform locally. These are put
# into ./bin/ as well as $GOPATH/bin
@@ -18,7 +18,7 @@ quickdev: generate
# Shorthand for quickly building the core of Terraform. Note that some
# changes will require a rebuild of everything, in which case the dev
# target should be used.
-core-dev: fmtcheck generate
+core-dev: generate
go install github.com/hashicorp/terraform
# Shorthand for quickly testing the core of Terraform (i.e. "not providers")
diff --git a/README.md b/README.md
index a7cdf90bc538..1ae8c6a55ef7 100644
--- a/README.md
+++ b/README.md
@@ -31,7 +31,7 @@ Developing Terraform
If you wish to work on Terraform itself or any of its built-in providers, you'll first need [Go](http://www.golang.org) installed on your machine (version 1.6+ is *required*). Alternatively, you can use the Vagrantfile in the root of this repo to stand up a virtual machine with the appropriate dev tooling already set up for you.
-For local dev first make sure Go is properly installed, including setting up a [GOPATH](http://golang.org/doc/code.html#GOPATH). You will also need to add `$GOPATH/bin` to your `$PATH`. Next, install the following software packages, which are needed for some dependencies:
+For local dev first make sure Go is properly installed, including setting up a [GOPATH](http://golang.org/doc/code.html#GOPATH). You will also need to add `$GOPATH/bin` to your `$PATH`.
Next, using [Git](https://git-scm.com/), clone this repository into `$GOPATH/src/github.com/hashicorp/terraform`. All the necessary dependencies are either vendored or automatically installed, so you just need to type `make`. This will compile the code and then run the tests. If this exits with exit status 0, then everything is working!
@@ -144,7 +144,7 @@ git push origin my-feature-branch
Terraform has a comprehensive [acceptance
test](http://en.wikipedia.org/wiki/Acceptance_testing) suite covering the
-built-in providers. Our [Contributing Guide](https://github.com/hashicorp/terraform/blob/master/CONTRIBUTING.md) includes details about how and when to write and run acceptance tests in order to help contributions get accepted quickly.
+built-in providers. Our [Contributing Guide](https://github.com/hashicorp/terraform/blob/master/.github/CONTRIBUTING.md) includes details about how and when to write and run acceptance tests in order to help contributions get accepted quickly.
### Cross Compilation and Building for Distribution
diff --git a/builtin/bins/provider-cobbler/main.go b/builtin/bins/provider-cobbler/main.go
new file mode 100644
index 000000000000..73d46b96ee20
--- /dev/null
+++ b/builtin/bins/provider-cobbler/main.go
@@ -0,0 +1,12 @@
+package main
+
+import (
+ "github.com/hashicorp/terraform/builtin/providers/cobbler"
+ "github.com/hashicorp/terraform/plugin"
+)
+
+func main() {
+ plugin.Serve(&plugin.ServeOpts{
+ ProviderFunc: cobbler.Provider,
+ })
+}
diff --git a/builtin/bins/provider-librato/main.go b/builtin/bins/provider-librato/main.go
new file mode 100644
index 000000000000..557e973a1c5c
--- /dev/null
+++ b/builtin/bins/provider-librato/main.go
@@ -0,0 +1,12 @@
+package main
+
+import (
+ "github.com/hashicorp/terraform/builtin/providers/librato"
+ "github.com/hashicorp/terraform/plugin"
+)
+
+func main() {
+ plugin.Serve(&plugin.ServeOpts{
+ ProviderFunc: librato.Provider,
+ })
+}
diff --git a/builtin/bins/provider-librato/main_test.go b/builtin/bins/provider-librato/main_test.go
new file mode 100644
index 000000000000..06ab7d0f9a35
--- /dev/null
+++ b/builtin/bins/provider-librato/main_test.go
@@ -0,0 +1 @@
+package main
diff --git a/builtin/bins/provider-softlayer/main.go b/builtin/bins/provider-softlayer/main.go
new file mode 100644
index 000000000000..c3fdb4bcb31a
--- /dev/null
+++ b/builtin/bins/provider-softlayer/main.go
@@ -0,0 +1,12 @@
+package main
+
+import (
+ "github.com/hashicorp/terraform/builtin/providers/softlayer"
+ "github.com/hashicorp/terraform/plugin"
+)
+
+func main() {
+ plugin.Serve(&plugin.ServeOpts{
+ ProviderFunc: softlayer.Provider,
+ })
+}
diff --git a/builtin/bins/provider-softlayer/main_test.go b/builtin/bins/provider-softlayer/main_test.go
new file mode 100644
index 000000000000..06ab7d0f9a35
--- /dev/null
+++ b/builtin/bins/provider-softlayer/main_test.go
@@ -0,0 +1 @@
+package main
diff --git a/builtin/providers/aws/auth_helpers.go b/builtin/providers/aws/auth_helpers.go
new file mode 100644
index 000000000000..914c7e97174d
--- /dev/null
+++ b/builtin/providers/aws/auth_helpers.go
@@ -0,0 +1,134 @@
+package aws
+
+import (
+ "fmt"
+ "log"
+ "os"
+ "strings"
+ "time"
+
+ "github.com/aws/aws-sdk-go/aws"
+ "github.com/aws/aws-sdk-go/aws/awserr"
+ awsCredentials "github.com/aws/aws-sdk-go/aws/credentials"
+ "github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds"
+ "github.com/aws/aws-sdk-go/aws/ec2metadata"
+ "github.com/aws/aws-sdk-go/aws/session"
+ "github.com/aws/aws-sdk-go/service/iam"
+ "github.com/hashicorp/go-cleanhttp"
+)
+
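+// GetAccountId returns the AWS account ID for the current credentials. When
+// running with an EC2 instance profile it reads the account ID from the
+// metadata API; otherwise it tries iam:GetUser and, for federated
+// credentials, falls back to iam:ListRoles.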
+func GetAccountId(iamconn *iam.IAM, authProviderName string) (string, error) {
+ // If we have creds from instance profile, we can use metadata API
+ if authProviderName == ec2rolecreds.ProviderName {
+ log.Println("[DEBUG] Trying to get account ID via AWS Metadata API")
+
+ cfg := &aws.Config{}
+ setOptionalEndpoint(cfg)
+ metadataClient := ec2metadata.New(session.New(cfg))
+ info, err := metadataClient.IAMInfo()
+ if err != nil {
+ // This can be triggered when no IAM Role is assigned
+ // or AWS just happens to return invalid response
+ return "", fmt.Errorf("Failed getting EC2 IAM info: %s", err)
+ }
+
+ return parseAccountIdFromArn(info.InstanceProfileArn)
+ }
+
+ // Then try IAM GetUser
+ log.Println("[DEBUG] Trying to get account ID via iam:GetUser")
+ outUser, err := iamconn.GetUser(nil)
+ if err == nil {
+ return parseAccountIdFromArn(*outUser.User.Arn)
+ }
+
+ // Then try IAM ListRoles
+ awsErr, ok := err.(awserr.Error)
+ // AccessDenied and ValidationError can be raised
+	// if credentials belong to a federated profile, so we ignore these
+ if !ok || (awsErr.Code() != "AccessDenied" && awsErr.Code() != "ValidationError") {
+ return "", fmt.Errorf("Failed getting account ID via 'iam:GetUser': %s", err)
+ }
+
+ log.Printf("[DEBUG] Getting account ID via iam:GetUser failed: %s", err)
+ log.Println("[DEBUG] Trying to get account ID via iam:ListRoles instead")
+ outRoles, err := iamconn.ListRoles(&iam.ListRolesInput{
+ MaxItems: aws.Int64(int64(1)),
+ })
+ if err != nil {
+ return "", fmt.Errorf("Failed getting account ID via 'iam:ListRoles': %s", err)
+ }
+
+ if len(outRoles.Roles) < 1 {
+ return "", fmt.Errorf("Failed getting account ID via 'iam:ListRoles': No roles available")
+ }
+
+ return parseAccountIdFromArn(*outRoles.Roles[0].Arn)
+}
+
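+// parseAccountIdFromArn extracts the account ID (the fifth colon-separated
+// field) from an ARN such as "arn:aws:iam::123456789012:user/Bob".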
+func parseAccountIdFromArn(arn string) (string, error) {
+ parts := strings.Split(arn, ":")
+ if len(parts) < 5 {
+ return "", fmt.Errorf("Unable to parse ID from invalid ARN: %q", arn)
+ }
+ return parts[4], nil
+}
+
+// GetCredentials builds the chain of credential providers used by the
+// provider, reading credentials from the environment and the EC2 metadata API
+// when they are not explicitly specified in the Terraform configuration.
+func GetCredentials(key, secret, token, profile, credsfile string) *awsCredentials.Credentials {
+	// build a chain of credential providers, lazy-evaluated by aws-sdk
+ providers := []awsCredentials.Provider{
+ &awsCredentials.StaticProvider{Value: awsCredentials.Value{
+ AccessKeyID: key,
+ SecretAccessKey: secret,
+ SessionToken: token,
+ }},
+ &awsCredentials.EnvProvider{},
+ &awsCredentials.SharedCredentialsProvider{
+ Filename: credsfile,
+ Profile: profile,
+ },
+ }
+
+ // Build isolated HTTP client to avoid issues with globally-shared settings
+ client := cleanhttp.DefaultClient()
+
+ // Keep the timeout low as we don't want to wait in non-EC2 environments
+ client.Timeout = 100 * time.Millisecond
+ cfg := &aws.Config{
+ HTTPClient: client,
+ }
+ usedEndpoint := setOptionalEndpoint(cfg)
+
+ // Real AWS should reply to a simple metadata request.
+ // We check it actually does to ensure something else didn't just
+ // happen to be listening on the same IP:Port
+ metadataClient := ec2metadata.New(session.New(cfg))
+ if metadataClient.Available() {
+ providers = append(providers, &ec2rolecreds.EC2RoleProvider{
+ Client: metadataClient,
+ })
+ log.Printf("[INFO] AWS EC2 instance detected via default metadata" +
+ " API endpoint, EC2RoleProvider added to the auth chain")
+ } else {
+ if usedEndpoint == "" {
+ usedEndpoint = "default location"
+ }
+ log.Printf("[WARN] Ignoring AWS metadata API endpoint at %s "+
+ "as it doesn't return any instance-id", usedEndpoint)
+ }
+
+ return awsCredentials.NewChainCredentials(providers)
+}
+
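+// setOptionalEndpoint overrides the metadata API endpoint on the given config
+// when the AWS_METADATA_URL environment variable is set, returning the
+// endpoint that was applied (or an empty string).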
+func setOptionalEndpoint(cfg *aws.Config) string {
+ endpoint := os.Getenv("AWS_METADATA_URL")
+ if endpoint != "" {
+ log.Printf("[INFO] Setting custom metadata endpoint: %q", endpoint)
+ cfg.Endpoint = aws.String(endpoint)
+ return endpoint
+ }
+ return ""
+}
diff --git a/builtin/providers/aws/auth_helpers_test.go b/builtin/providers/aws/auth_helpers_test.go
new file mode 100644
index 000000000000..a5fcf8f1636b
--- /dev/null
+++ b/builtin/providers/aws/auth_helpers_test.go
@@ -0,0 +1,757 @@
+package aws
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "io/ioutil"
+ "log"
+ "net/http"
+ "net/http/httptest"
+ "os"
+ "testing"
+ "time"
+
+ "github.com/aws/aws-sdk-go/aws"
+ "github.com/aws/aws-sdk-go/aws/awserr"
+ awsCredentials "github.com/aws/aws-sdk-go/aws/credentials"
+ "github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds"
+ "github.com/aws/aws-sdk-go/aws/session"
+ "github.com/aws/aws-sdk-go/service/iam"
+)
+
+func TestAWSGetAccountId_shouldBeValid_fromEC2Role(t *testing.T) {
+ resetEnv := unsetEnv(t)
+ defer resetEnv()
+ // capture the test server's close method, to call after the test returns
+ awsTs := awsEnv(t)
+ defer awsTs()
+
+ iamEndpoints := []*iamEndpoint{}
+ ts, iamConn := getMockedAwsIamApi(iamEndpoints)
+ defer ts()
+
+ id, err := GetAccountId(iamConn, ec2rolecreds.ProviderName)
+ if err != nil {
+ t.Fatalf("Getting account ID from EC2 metadata API failed: %s", err)
+ }
+
+ expectedAccountId := "123456789013"
+ if id != expectedAccountId {
+ t.Fatalf("Expected account ID: %s, given: %s", expectedAccountId, id)
+ }
+}
+
+func TestAWSGetAccountId_shouldBeValid_EC2RoleHasPriority(t *testing.T) {
+ resetEnv := unsetEnv(t)
+ defer resetEnv()
+ // capture the test server's close method, to call after the test returns
+ awsTs := awsEnv(t)
+ defer awsTs()
+
+ iamEndpoints := []*iamEndpoint{
+ &iamEndpoint{
+ Request: &iamRequest{"POST", "/", "Action=GetUser&Version=2010-05-08"},
+ Response: &iamResponse{200, iamResponse_GetUser_valid, "text/xml"},
+ },
+ }
+ ts, iamConn := getMockedAwsIamApi(iamEndpoints)
+ defer ts()
+
+ id, err := GetAccountId(iamConn, ec2rolecreds.ProviderName)
+ if err != nil {
+ t.Fatalf("Getting account ID from EC2 metadata API failed: %s", err)
+ }
+
+ expectedAccountId := "123456789013"
+ if id != expectedAccountId {
+ t.Fatalf("Expected account ID: %s, given: %s", expectedAccountId, id)
+ }
+}
+
+func TestAWSGetAccountId_shouldBeValid_fromIamUser(t *testing.T) {
+ iamEndpoints := []*iamEndpoint{
+ &iamEndpoint{
+ Request: &iamRequest{"POST", "/", "Action=GetUser&Version=2010-05-08"},
+ Response: &iamResponse{200, iamResponse_GetUser_valid, "text/xml"},
+ },
+ }
+ ts, iamConn := getMockedAwsIamApi(iamEndpoints)
+ defer ts()
+
+ id, err := GetAccountId(iamConn, "")
+ if err != nil {
+ t.Fatalf("Getting account ID via GetUser failed: %s", err)
+ }
+
+ expectedAccountId := "123456789012"
+ if id != expectedAccountId {
+ t.Fatalf("Expected account ID: %s, given: %s", expectedAccountId, id)
+ }
+}
+
+func TestAWSGetAccountId_shouldBeValid_fromIamListRoles(t *testing.T) {
+ iamEndpoints := []*iamEndpoint{
+ &iamEndpoint{
+ Request: &iamRequest{"POST", "/", "Action=GetUser&Version=2010-05-08"},
+ Response: &iamResponse{403, iamResponse_GetUser_unauthorized, "text/xml"},
+ },
+ &iamEndpoint{
+ Request: &iamRequest{"POST", "/", "Action=ListRoles&MaxItems=1&Version=2010-05-08"},
+ Response: &iamResponse{200, iamResponse_ListRoles_valid, "text/xml"},
+ },
+ }
+ ts, iamConn := getMockedAwsIamApi(iamEndpoints)
+ defer ts()
+
+ id, err := GetAccountId(iamConn, "")
+ if err != nil {
+ t.Fatalf("Getting account ID via ListRoles failed: %s", err)
+ }
+
+ expectedAccountId := "123456789012"
+ if id != expectedAccountId {
+ t.Fatalf("Expected account ID: %s, given: %s", expectedAccountId, id)
+ }
+}
+
+func TestAWSGetAccountId_shouldBeValid_federatedRole(t *testing.T) {
+ iamEndpoints := []*iamEndpoint{
+ &iamEndpoint{
+ Request: &iamRequest{"POST", "/", "Action=GetUser&Version=2010-05-08"},
+ Response: &iamResponse{400, iamResponse_GetUser_federatedFailure, "text/xml"},
+ },
+ &iamEndpoint{
+ Request: &iamRequest{"POST", "/", "Action=ListRoles&MaxItems=1&Version=2010-05-08"},
+ Response: &iamResponse{200, iamResponse_ListRoles_valid, "text/xml"},
+ },
+ }
+ ts, iamConn := getMockedAwsIamApi(iamEndpoints)
+ defer ts()
+
+ id, err := GetAccountId(iamConn, "")
+ if err != nil {
+ t.Fatalf("Getting account ID via ListRoles failed: %s", err)
+ }
+
+ expectedAccountId := "123456789012"
+ if id != expectedAccountId {
+ t.Fatalf("Expected account ID: %s, given: %s", expectedAccountId, id)
+ }
+}
+
+func TestAWSGetAccountId_shouldError_unauthorizedFromIam(t *testing.T) {
+ iamEndpoints := []*iamEndpoint{
+ &iamEndpoint{
+ Request: &iamRequest{"POST", "/", "Action=GetUser&Version=2010-05-08"},
+ Response: &iamResponse{403, iamResponse_GetUser_unauthorized, "text/xml"},
+ },
+ &iamEndpoint{
+ Request: &iamRequest{"POST", "/", "Action=ListRoles&MaxItems=1&Version=2010-05-08"},
+ Response: &iamResponse{403, iamResponse_ListRoles_unauthorized, "text/xml"},
+ },
+ }
+ ts, iamConn := getMockedAwsIamApi(iamEndpoints)
+ defer ts()
+
+ id, err := GetAccountId(iamConn, "")
+ if err == nil {
+ t.Fatal("Expected error when getting account ID")
+ }
+
+ if id != "" {
+ t.Fatalf("Expected no account ID, given: %s", id)
+ }
+}
+
+func TestAWSParseAccountIdFromArn(t *testing.T) {
+ validArn := "arn:aws:iam::101636750127:instance-profile/aws-elasticbeanstalk-ec2-role"
+ expectedId := "101636750127"
+ id, err := parseAccountIdFromArn(validArn)
+ if err != nil {
+ t.Fatalf("Expected no error when parsing valid ARN: %s", err)
+ }
+ if id != expectedId {
+ t.Fatalf("Parsed id doesn't match with expected (%q != %q)", id, expectedId)
+ }
+
+ invalidArn := "blablah"
+ id, err = parseAccountIdFromArn(invalidArn)
+ if err == nil {
+ t.Fatalf("Expected error when parsing invalid ARN (%q)", invalidArn)
+ }
+}
+
+func TestAWSGetCredentials_shouldError(t *testing.T) {
+ resetEnv := unsetEnv(t)
+ defer resetEnv()
+ cfg := Config{}
+
+ c := GetCredentials(cfg.AccessKey, cfg.SecretKey, cfg.Token, cfg.Profile, cfg.CredsFilename)
+ _, err := c.Get()
+ if awsErr, ok := err.(awserr.Error); ok {
+ if awsErr.Code() != "NoCredentialProviders" {
+ t.Fatalf("Expected NoCredentialProviders error")
+ }
+ }
+ if err == nil {
+ t.Fatalf("Expected an error with empty env, keys, and IAM in AWS Config")
+ }
+}
+
+func TestAWSGetCredentials_shouldBeStatic(t *testing.T) {
+ simple := []struct {
+ Key, Secret, Token string
+ }{
+ {
+ Key: "test",
+ Secret: "secret",
+ }, {
+ Key: "test",
+ Secret: "test",
+ Token: "test",
+ },
+ }
+
+ for _, c := range simple {
+ cfg := Config{
+ AccessKey: c.Key,
+ SecretKey: c.Secret,
+ Token: c.Token,
+ }
+
+ creds := GetCredentials(cfg.AccessKey, cfg.SecretKey, cfg.Token, cfg.Profile, cfg.CredsFilename)
+ if creds == nil {
+ t.Fatalf("Expected a static creds provider to be returned")
+ }
+ v, err := creds.Get()
+ if err != nil {
+			t.Fatalf("Error getting creds: %s", err)
+ }
+ if v.AccessKeyID != c.Key {
+ t.Fatalf("AccessKeyID mismatch, expected: (%s), got (%s)", c.Key, v.AccessKeyID)
+ }
+ if v.SecretAccessKey != c.Secret {
+ t.Fatalf("SecretAccessKey mismatch, expected: (%s), got (%s)", c.Secret, v.SecretAccessKey)
+ }
+ if v.SessionToken != c.Token {
+ t.Fatalf("SessionToken mismatch, expected: (%s), got (%s)", c.Token, v.SessionToken)
+ }
+ }
+}
+
+// TestAWSGetCredentials_shouldIAM is designed to test the scenario of running Terraform
+// from an EC2 instance, without environment variables or manually supplied
+// credentials.
+func TestAWSGetCredentials_shouldIAM(t *testing.T) {
+ // clear AWS_* environment variables
+ resetEnv := unsetEnv(t)
+ defer resetEnv()
+
+ // capture the test server's close method, to call after the test returns
+ ts := awsEnv(t)
+ defer ts()
+
+ // An empty config, no key supplied
+ cfg := Config{}
+
+ creds := GetCredentials(cfg.AccessKey, cfg.SecretKey, cfg.Token, cfg.Profile, cfg.CredsFilename)
+ if creds == nil {
+ t.Fatalf("Expected a static creds provider to be returned")
+ }
+
+ v, err := creds.Get()
+ if err != nil {
+		t.Fatalf("Error getting creds: %s", err)
+ }
+ if v.AccessKeyID != "somekey" {
+ t.Fatalf("AccessKeyID mismatch, expected: (somekey), got (%s)", v.AccessKeyID)
+ }
+ if v.SecretAccessKey != "somesecret" {
+ t.Fatalf("SecretAccessKey mismatch, expected: (somesecret), got (%s)", v.SecretAccessKey)
+ }
+ if v.SessionToken != "sometoken" {
+ t.Fatalf("SessionToken mismatch, expected: (sometoken), got (%s)", v.SessionToken)
+ }
+}
+
+// TestAWSGetCredentials_shouldIgnoreIAM is designed to test the scenario of
+// running Terraform from an EC2 instance while static credentials are
+// supplied; the statically supplied credentials should take priority over the
+// instance profile.
+func TestAWSGetCredentials_shouldIgnoreIAM(t *testing.T) {
+ resetEnv := unsetEnv(t)
+ defer resetEnv()
+ // capture the test server's close method, to call after the test returns
+ ts := awsEnv(t)
+ defer ts()
+ simple := []struct {
+ Key, Secret, Token string
+ }{
+ {
+ Key: "test",
+ Secret: "secret",
+ }, {
+ Key: "test",
+ Secret: "test",
+ Token: "test",
+ },
+ }
+
+ for _, c := range simple {
+ cfg := Config{
+ AccessKey: c.Key,
+ SecretKey: c.Secret,
+ Token: c.Token,
+ }
+
+ creds := GetCredentials(cfg.AccessKey, cfg.SecretKey, cfg.Token, cfg.Profile, cfg.CredsFilename)
+ if creds == nil {
+ t.Fatalf("Expected a static creds provider to be returned")
+ }
+ v, err := creds.Get()
+ if err != nil {
+			t.Fatalf("Error getting creds: %s", err)
+ }
+ if v.AccessKeyID != c.Key {
+ t.Fatalf("AccessKeyID mismatch, expected: (%s), got (%s)", c.Key, v.AccessKeyID)
+ }
+ if v.SecretAccessKey != c.Secret {
+ t.Fatalf("SecretAccessKey mismatch, expected: (%s), got (%s)", c.Secret, v.SecretAccessKey)
+ }
+ if v.SessionToken != c.Token {
+ t.Fatalf("SessionToken mismatch, expected: (%s), got (%s)", c.Token, v.SessionToken)
+ }
+ }
+}
+
+func TestAWSGetCredentials_shouldErrorWithInvalidEndpoint(t *testing.T) {
+ resetEnv := unsetEnv(t)
+ defer resetEnv()
+ // capture the test server's close method, to call after the test returns
+ ts := invalidAwsEnv(t)
+ defer ts()
+
+ creds := GetCredentials("", "", "", "", "")
+ v, err := creds.Get()
+ if err == nil {
+ t.Fatal("Expected error returned when getting creds w/ invalid EC2 endpoint")
+ }
+
+ if v.ProviderName != "" {
+ t.Fatalf("Expected provider name to be empty, %q given", v.ProviderName)
+ }
+}
+
+func TestAWSGetCredentials_shouldIgnoreInvalidEndpoint(t *testing.T) {
+ resetEnv := unsetEnv(t)
+ defer resetEnv()
+ // capture the test server's close method, to call after the test returns
+ ts := invalidAwsEnv(t)
+ defer ts()
+
+ creds := GetCredentials("accessKey", "secretKey", "", "", "")
+ v, err := creds.Get()
+ if err != nil {
+ t.Fatalf("Getting static credentials w/ invalid EC2 endpoint failed: %s", err)
+ }
+
+ if v.ProviderName != "StaticProvider" {
+ t.Fatalf("Expected provider name to be %q, %q given", "StaticProvider", v.ProviderName)
+ }
+
+ if v.AccessKeyID != "accessKey" {
+ t.Fatalf("Static Access Key %q doesn't match: %s", "accessKey", v.AccessKeyID)
+ }
+
+ if v.SecretAccessKey != "secretKey" {
+ t.Fatalf("Static Secret Key %q doesn't match: %s", "secretKey", v.SecretAccessKey)
+ }
+}
+
+func TestAWSGetCredentials_shouldCatchEC2RoleProvider(t *testing.T) {
+ resetEnv := unsetEnv(t)
+ defer resetEnv()
+ // capture the test server's close method, to call after the test returns
+ ts := awsEnv(t)
+ defer ts()
+
+ creds := GetCredentials("", "", "", "", "")
+ if creds == nil {
+ t.Fatalf("Expected an EC2Role creds provider to be returned")
+ }
+ v, err := creds.Get()
+ if err != nil {
+ t.Fatalf("Expected no error when getting creds: %s", err)
+ }
+ expectedProvider := "EC2RoleProvider"
+ if v.ProviderName != expectedProvider {
+ t.Fatalf("Expected provider name to be %q, %q given",
+ expectedProvider, v.ProviderName)
+ }
+}
+
+var credentialsFileContents = `[myprofile]
+aws_access_key_id = accesskey
+aws_secret_access_key = secretkey
+`
+
+func TestAWSGetCredentials_shouldBeShared(t *testing.T) {
+ file, err := ioutil.TempFile(os.TempDir(), "terraform_aws_cred")
+ if err != nil {
+ t.Fatalf("Error writing temporary credentials file: %s", err)
+ }
+ _, err = file.WriteString(credentialsFileContents)
+ if err != nil {
+ t.Fatalf("Error writing temporary credentials to file: %s", err)
+ }
+ err = file.Close()
+ if err != nil {
+ t.Fatalf("Error closing temporary credentials file: %s", err)
+ }
+
+ defer os.Remove(file.Name())
+
+ resetEnv := unsetEnv(t)
+ defer resetEnv()
+
+ if err := os.Setenv("AWS_PROFILE", "myprofile"); err != nil {
+ t.Fatalf("Error resetting env var AWS_PROFILE: %s", err)
+ }
+ if err := os.Setenv("AWS_SHARED_CREDENTIALS_FILE", file.Name()); err != nil {
+ t.Fatalf("Error resetting env var AWS_SHARED_CREDENTIALS_FILE: %s", err)
+ }
+
+ creds := GetCredentials("", "", "", "myprofile", file.Name())
+ if creds == nil {
+ t.Fatalf("Expected a provider chain to be returned")
+ }
+ v, err := creds.Get()
+ if err != nil {
+		t.Fatalf("Error getting creds: %s", err)
+ }
+
+ if v.AccessKeyID != "accesskey" {
+ t.Fatalf("AccessKeyID mismatch, expected (%s), got (%s)", "accesskey", v.AccessKeyID)
+ }
+
+ if v.SecretAccessKey != "secretkey" {
+		t.Fatalf("SecretAccessKey mismatch, expected (%s), got (%s)", "secretkey", v.SecretAccessKey)
+ }
+}
+
+func TestAWSGetCredentials_shouldBeENV(t *testing.T) {
+ // need to set the environment variables to a dummy string, as we don't know
+ // what they may be at runtime without hardcoding here
+ s := "some_env"
+ resetEnv := setEnv(s, t)
+
+ defer resetEnv()
+
+ cfg := Config{}
+ creds := GetCredentials(cfg.AccessKey, cfg.SecretKey, cfg.Token, cfg.Profile, cfg.CredsFilename)
+ if creds == nil {
+ t.Fatalf("Expected a static creds provider to be returned")
+ }
+ v, err := creds.Get()
+ if err != nil {
+		t.Fatalf("Error getting creds: %s", err)
+ }
+ if v.AccessKeyID != s {
+ t.Fatalf("AccessKeyID mismatch, expected: (%s), got (%s)", s, v.AccessKeyID)
+ }
+ if v.SecretAccessKey != s {
+ t.Fatalf("SecretAccessKey mismatch, expected: (%s), got (%s)", s, v.SecretAccessKey)
+ }
+ if v.SessionToken != s {
+ t.Fatalf("SessionToken mismatch, expected: (%s), got (%s)", s, v.SessionToken)
+ }
+}
+
+// unsetEnv unsets environment variables for testing a "clean slate" with no
+// credentials in the environment
+func unsetEnv(t *testing.T) func() {
+ // Grab any existing AWS keys and preserve. In some tests we'll unset these, so
+ // we need to have them and restore them after
+ e := getEnv()
+ if err := os.Unsetenv("AWS_ACCESS_KEY_ID"); err != nil {
+ t.Fatalf("Error unsetting env var AWS_ACCESS_KEY_ID: %s", err)
+ }
+ if err := os.Unsetenv("AWS_SECRET_ACCESS_KEY"); err != nil {
+ t.Fatalf("Error unsetting env var AWS_SECRET_ACCESS_KEY: %s", err)
+ }
+ if err := os.Unsetenv("AWS_SESSION_TOKEN"); err != nil {
+ t.Fatalf("Error unsetting env var AWS_SESSION_TOKEN: %s", err)
+ }
+ if err := os.Unsetenv("AWS_PROFILE"); err != nil {
+ t.Fatalf("Error unsetting env var AWS_PROFILE: %s", err)
+ }
+ if err := os.Unsetenv("AWS_SHARED_CREDENTIALS_FILE"); err != nil {
+ t.Fatalf("Error unsetting env var AWS_SHARED_CREDENTIALS_FILE: %s", err)
+ }
+
+ return func() {
+ // re-set all the envs we unset above
+ if err := os.Setenv("AWS_ACCESS_KEY_ID", e.Key); err != nil {
+ t.Fatalf("Error resetting env var AWS_ACCESS_KEY_ID: %s", err)
+ }
+ if err := os.Setenv("AWS_SECRET_ACCESS_KEY", e.Secret); err != nil {
+ t.Fatalf("Error resetting env var AWS_SECRET_ACCESS_KEY: %s", err)
+ }
+ if err := os.Setenv("AWS_SESSION_TOKEN", e.Token); err != nil {
+ t.Fatalf("Error resetting env var AWS_SESSION_TOKEN: %s", err)
+ }
+ if err := os.Setenv("AWS_PROFILE", e.Profile); err != nil {
+ t.Fatalf("Error resetting env var AWS_PROFILE: %s", err)
+ }
+ if err := os.Setenv("AWS_SHARED_CREDENTIALS_FILE", e.CredsFilename); err != nil {
+ t.Fatalf("Error resetting env var AWS_SHARED_CREDENTIALS_FILE: %s", err)
+ }
+ }
+}
+
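+// setEnv sets all relevant AWS_* environment variables to the given dummy
+// value and returns a function that restores the previously captured values.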
+func setEnv(s string, t *testing.T) func() {
+ e := getEnv()
+ // Set all the envs to a dummy value
+ if err := os.Setenv("AWS_ACCESS_KEY_ID", s); err != nil {
+ t.Fatalf("Error setting env var AWS_ACCESS_KEY_ID: %s", err)
+ }
+ if err := os.Setenv("AWS_SECRET_ACCESS_KEY", s); err != nil {
+ t.Fatalf("Error setting env var AWS_SECRET_ACCESS_KEY: %s", err)
+ }
+ if err := os.Setenv("AWS_SESSION_TOKEN", s); err != nil {
+ t.Fatalf("Error setting env var AWS_SESSION_TOKEN: %s", err)
+ }
+ if err := os.Setenv("AWS_PROFILE", s); err != nil {
+ t.Fatalf("Error setting env var AWS_PROFILE: %s", err)
+ }
+ if err := os.Setenv("AWS_SHARED_CREDENTIALS_FILE", s); err != nil {
+		t.Fatalf("Error setting env var AWS_SHARED_CREDENTIALS_FILE: %s", err)
+ }
+
+ return func() {
+ // re-set all the envs we unset above
+ if err := os.Setenv("AWS_ACCESS_KEY_ID", e.Key); err != nil {
+ t.Fatalf("Error resetting env var AWS_ACCESS_KEY_ID: %s", err)
+ }
+ if err := os.Setenv("AWS_SECRET_ACCESS_KEY", e.Secret); err != nil {
+ t.Fatalf("Error resetting env var AWS_SECRET_ACCESS_KEY: %s", err)
+ }
+ if err := os.Setenv("AWS_SESSION_TOKEN", e.Token); err != nil {
+ t.Fatalf("Error resetting env var AWS_SESSION_TOKEN: %s", err)
+ }
+		if err := os.Setenv("AWS_PROFILE", e.Profile); err != nil {
+			t.Fatalf("Error resetting env var AWS_PROFILE: %s", err)
+		}
+		if err := os.Setenv("AWS_SHARED_CREDENTIALS_FILE", e.CredsFilename); err != nil {
+			t.Fatalf("Error resetting env var AWS_SHARED_CREDENTIALS_FILE: %s", err)
+		}
+ }
+}
+
+// awsEnv establishes an httptest server to mock out the internal AWS Metadata
+// service. IAM Credentials are retrieved by the EC2RoleProvider, which makes
+// API calls to this internal URL. By replacing the server with a test server,
+// we can simulate an AWS environment
+func awsEnv(t *testing.T) func() {
+ routes := routes{}
+ if err := json.Unmarshal([]byte(metadataApiRoutes), &routes); err != nil {
+ t.Fatalf("Failed to unmarshal JSON in AWS ENV test: %s", err)
+ }
+ ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+ w.Header().Set("Content-Type", "text/plain")
+ w.Header().Add("Server", "MockEC2")
+		log.Printf("[DEBUG] Mock server received request to %q", r.RequestURI)
+ for _, e := range routes.Endpoints {
+ if r.RequestURI == e.Uri {
+				w.WriteHeader(200)
+				fmt.Fprintln(w, e.Body)
+ return
+ }
+ }
+ w.WriteHeader(400)
+ }))
+
+ os.Setenv("AWS_METADATA_URL", ts.URL+"/latest")
+ return ts.Close
+}
+
+// invalidAwsEnv establishes an httptest server to simulate behaviour
+// when the metadata endpoint doesn't respond as expected
+func invalidAwsEnv(t *testing.T) func() {
+ ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+ w.WriteHeader(400)
+ }))
+
+ os.Setenv("AWS_METADATA_URL", ts.URL+"/latest")
+ return ts.Close
+}
+
+// getMockedAwsIamApi establishes an httptest server to simulate the behaviour
+// of a real AWS IAM server
+func getMockedAwsIamApi(endpoints []*iamEndpoint) (func(), *iam.IAM) {
+ ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+ buf := new(bytes.Buffer)
+ buf.ReadFrom(r.Body)
+ requestBody := buf.String()
+
+ log.Printf("[DEBUG] Received IAM API %q request to %q: %s",
+ r.Method, r.RequestURI, requestBody)
+
+ for _, e := range endpoints {
+ if r.Method == e.Request.Method && r.RequestURI == e.Request.Uri && requestBody == e.Request.Body {
+ log.Printf("[DEBUG] Mock API responding with %d: %s", e.Response.StatusCode, e.Response.Body)
+
+				w.Header().Set("Content-Type", e.Response.ContentType)
+				w.Header().Set("X-Amzn-Requestid", "1b206dd1-f9a8-11e5-becf-051c60f11c4a")
+				w.Header().Set("Date", time.Now().Format(time.RFC1123))
+				w.WriteHeader(e.Response.StatusCode)
+
+ fmt.Fprintln(w, e.Response.Body)
+ return
+ }
+ }
+
+ w.WriteHeader(400)
+ return
+ }))
+
+ sc := awsCredentials.NewStaticCredentials("accessKey", "secretKey", "")
+
+ sess := session.New(&aws.Config{
+ Credentials: sc,
+ Region: aws.String("us-east-1"),
+ Endpoint: aws.String(ts.URL),
+ CredentialsChainVerboseErrors: aws.Bool(true),
+ })
+ iamConn := iam.New(sess)
+
+ return ts.Close, iamConn
+}
+
+func getEnv() *currentEnv {
+ // Grab any existing AWS keys and preserve. In some tests we'll unset these, so
+ // we need to have them and restore them after
+	return &currentEnv{
+ Key: os.Getenv("AWS_ACCESS_KEY_ID"),
+ Secret: os.Getenv("AWS_SECRET_ACCESS_KEY"),
+ Token: os.Getenv("AWS_SESSION_TOKEN"),
+ Profile: os.Getenv("AWS_PROFILE"),
+ CredsFilename: os.Getenv("AWS_SHARED_CREDENTIALS_FILE"),
+ }
+}
+
+// struct to preserve the current environment
+type currentEnv struct {
+ Key, Secret, Token, Profile, CredsFilename string
+}
+
+type routes struct {
+ Endpoints []*endpoint `json:"endpoints"`
+}
+type endpoint struct {
+ Uri string `json:"uri"`
+ Body string `json:"body"`
+}
+
+const metadataApiRoutes = `
+{
+ "endpoints": [
+ {
+ "uri": "/latest/meta-data/instance-id",
+ "body": "mock-instance-id"
+ },
+ {
+ "uri": "/latest/meta-data/iam/info",
+ "body": "{\"Code\": \"Success\",\"LastUpdated\": \"2016-03-17T12:27:32Z\",\"InstanceProfileArn\": \"arn:aws:iam::123456789013:instance-profile/my-instance-profile\",\"InstanceProfileId\": \"AIPAABCDEFGHIJKLMN123\"}"
+ },
+ {
+ "uri": "/latest/meta-data/iam/security-credentials",
+ "body": "test_role"
+ },
+ {
+ "uri": "/latest/meta-data/iam/security-credentials/test_role",
+ "body": "{\"Code\":\"Success\",\"LastUpdated\":\"2015-12-11T17:17:25Z\",\"Type\":\"AWS-HMAC\",\"AccessKeyId\":\"somekey\",\"SecretAccessKey\":\"somesecret\",\"Token\":\"sometoken\"}"
+ }
+ ]
+}
+`
+
+type iamEndpoint struct {
+ Request *iamRequest
+ Response *iamResponse
+}
+
+type iamRequest struct {
+ Method string
+ Uri string
+ Body string
+}
+
+type iamResponse struct {
+ StatusCode int
+ Body string
+ ContentType string
+}
+
+const iamResponse_GetUser_valid = `<GetUserResponse xmlns="https://iam.amazonaws.com/doc/2010-05-08/">
+  <GetUserResult>
+    <User>
+      <UserId>AIDACKCEVSQ6C2EXAMPLE</UserId>
+      <Path>/division_abc/subdivision_xyz/</Path>
+      <UserName>Bob</UserName>
+      <Arn>arn:aws:iam::123456789012:user/division_abc/subdivision_xyz/Bob</Arn>
+      <CreateDate>2013-10-02T17:01:44Z</CreateDate>
+      <PasswordLastUsed>2014-10-10T14:37:51Z</PasswordLastUsed>
+    </User>
+  </GetUserResult>
+  <ResponseMetadata>
+    <RequestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</RequestId>
+  </ResponseMetadata>
+</GetUserResponse>`
+
+const iamResponse_GetUser_unauthorized = `<ErrorResponse xmlns="https://iam.amazonaws.com/doc/2010-05-08/">
+  <Error>
+    <Type>Sender</Type>
+    <Code>AccessDenied</Code>
+    <Message>User: arn:aws:iam::123456789012:user/Bob is not authorized to perform: iam:GetUser on resource: arn:aws:iam::123456789012:user/Bob</Message>
+  </Error>
+  <RequestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</RequestId>
+</ErrorResponse>`
+
+const iamResponse_GetUser_federatedFailure = `<ErrorResponse xmlns="https://iam.amazonaws.com/doc/2010-05-08/">
+  <Error>
+    <Type>Sender</Type>
+    <Code>ValidationError</Code>
+    <Message>Must specify userName when calling with non-User credentials</Message>
+  </Error>
+  <RequestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</RequestId>
+</ErrorResponse>`
+
+const iamResponse_ListRoles_valid = `<ListRolesResponse xmlns="https://iam.amazonaws.com/doc/2010-05-08/">
+  <ListRolesResult>
+    <IsTruncated>true</IsTruncated>
+    <Marker>AWceSSsKsazQ4IEplT9o4hURCzBs00iavlEvEXAMPLE</Marker>
+    <Roles>
+      <member>
+        <Path>/</Path>
+        <AssumeRolePolicyDocument>%7B%22Version%22%3A%222008-10-17%22%2C%22Statement%22%3A%5B%7B%22Sid%22%3A%22%22%2C%22Effect%22%3A%22Allow%22%2C%22Principal%22%3A%7B%22Service%22%3A%22ec2.amazonaws.com%22%7D%2C%22Action%22%3A%22sts%3AAssumeRole%22%7D%5D%7D</AssumeRolePolicyDocument>
+        <RoleId>AROACKCEVSQ6C2EXAMPLE</RoleId>
+        <RoleName>elasticbeanstalk-role</RoleName>
+        <Arn>arn:aws:iam::123456789012:role/elasticbeanstalk-role</Arn>
+        <CreateDate>2013-10-02T17:01:44Z</CreateDate>
+      </member>
+    </Roles>
+  </ListRolesResult>
+  <ResponseMetadata>
+    <RequestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</RequestId>
+  </ResponseMetadata>
+</ListRolesResponse>`
+
+const iamResponse_ListRoles_unauthorized = `<ErrorResponse xmlns="https://iam.amazonaws.com/doc/2010-05-08/">
+  <Error>
+    <Type>Sender</Type>
+    <Code>AccessDenied</Code>
+    <Message>User: arn:aws:iam::123456789012:user/Bob is not authorized to perform: iam:ListRoles on resource: arn:aws:iam::123456789012:role/</Message>
+  </Error>
+  <RequestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</RequestId>
+</ErrorResponse>`
diff --git a/builtin/providers/aws/autoscaling_tags.go b/builtin/providers/aws/autoscaling_tags.go
index ecc4164a0bd5..e6eef8ec6df0 100644
--- a/builtin/providers/aws/autoscaling_tags.go
+++ b/builtin/providers/aws/autoscaling_tags.go
@@ -159,6 +159,20 @@ func autoscalingTagDescriptionsToMap(ts *[]*autoscaling.TagDescription) map[stri
return tags
}
+// autoscalingTagDescriptionsToSlice turns the list of tags into a slice.
+func autoscalingTagDescriptionsToSlice(ts []*autoscaling.TagDescription) []map[string]interface{} {
+ tags := make([]map[string]interface{}, 0, len(ts))
+ for _, t := range ts {
+ tags = append(tags, map[string]interface{}{
+ "key": *t.Key,
+ "value": *t.Value,
+ "propagate_at_launch": *t.PropagateAtLaunch,
+ })
+ }
+
+ return tags
+}
+
func setToMapByKey(s *schema.Set, key string) map[string]interface{} {
result := make(map[string]interface{})
for _, rawData := range s.List() {
diff --git a/builtin/providers/aws/cloudfront_distribution_configuration_structure.go b/builtin/providers/aws/cloudfront_distribution_configuration_structure.go
new file mode 100644
index 000000000000..dfd86e211839
--- /dev/null
+++ b/builtin/providers/aws/cloudfront_distribution_configuration_structure.go
@@ -0,0 +1,983 @@
+// CloudFront DistributionConfig structure helpers.
+//
+// These functions assist in pulling in data from Terraform resource
+// configuration for the aws_cloudfront_distribution resource, as there are
+// several sub-fields that require their own data types and do not necessarily
+// translate 1:1 to the resource configuration.
+
+package aws
+
+import (
+ "bytes"
+ "fmt"
+ "reflect"
+ "strconv"
+ "time"
+
+ "github.com/aws/aws-sdk-go/aws"
+ "github.com/aws/aws-sdk-go/service/cloudfront"
+ "github.com/hashicorp/terraform/flatmap"
+ "github.com/hashicorp/terraform/helper/hashcode"
+ "github.com/hashicorp/terraform/helper/schema"
+)
+
+// Assemble the *cloudfront.DistributionConfig variable. Calls out to various
+// expander functions to convert attributes and sub-attributes to the various
+// complex structures which are necessary to properly build the
+// DistributionConfig structure.
+//
+// Used by the aws_cloudfront_distribution Create and Update functions.
+func expandDistributionConfig(d *schema.ResourceData) *cloudfront.DistributionConfig {
+ distributionConfig := &cloudfront.DistributionConfig{
+ CacheBehaviors: expandCacheBehaviors(d.Get("cache_behavior").(*schema.Set)),
+ CustomErrorResponses: expandCustomErrorResponses(d.Get("custom_error_response").(*schema.Set)),
+ DefaultCacheBehavior: expandDefaultCacheBehavior(d.Get("default_cache_behavior").(*schema.Set).List()[0].(map[string]interface{})),
+ Enabled: aws.Bool(d.Get("enabled").(bool)),
+ Origins: expandOrigins(d.Get("origin").(*schema.Set)),
+ PriceClass: aws.String(d.Get("price_class").(string)),
+ }
+ // This sets CallerReference if it's still pending computation (ie: new resource)
+ if v, ok := d.GetOk("caller_reference"); ok == false {
+ distributionConfig.CallerReference = aws.String(time.Now().Format(time.RFC3339Nano))
+ } else {
+ distributionConfig.CallerReference = aws.String(v.(string))
+ }
+ if v, ok := d.GetOk("comment"); ok {
+ distributionConfig.Comment = aws.String(v.(string))
+ } else {
+ distributionConfig.Comment = aws.String("")
+ }
+ if v, ok := d.GetOk("default_root_object"); ok {
+ distributionConfig.DefaultRootObject = aws.String(v.(string))
+ } else {
+ distributionConfig.DefaultRootObject = aws.String("")
+ }
+ if v, ok := d.GetOk("logging_config"); ok {
+ distributionConfig.Logging = expandLoggingConfig(v.(*schema.Set).List()[0].(map[string]interface{}))
+ } else {
+ distributionConfig.Logging = expandLoggingConfig(nil)
+ }
+ if v, ok := d.GetOk("aliases"); ok {
+ distributionConfig.Aliases = expandAliases(v.(*schema.Set))
+ } else {
+ distributionConfig.Aliases = expandAliases(schema.NewSet(aliasesHash, []interface{}{}))
+ }
+ if v, ok := d.GetOk("restrictions"); ok {
+ distributionConfig.Restrictions = expandRestrictions(v.(*schema.Set).List()[0].(map[string]interface{}))
+ }
+ if v, ok := d.GetOk("viewer_certificate"); ok {
+ distributionConfig.ViewerCertificate = expandViewerCertificate(v.(*schema.Set).List()[0].(map[string]interface{}))
+ }
+ if v, ok := d.GetOk("web_acl_id"); ok {
+ distributionConfig.WebACLId = aws.String(v.(string))
+ } else {
+ distributionConfig.WebACLId = aws.String("")
+ }
+ return distributionConfig
+}
+
+// Unpack the *cloudfront.DistributionConfig variable and set resource data.
+// Calls out to flatten functions to convert the DistributionConfig
+// sub-structures to their respective attributes in the
+// aws_cloudfront_distribution resource.
+//
+// Used by the aws_cloudfront_distribution Read function.
+func flattenDistributionConfig(d *schema.ResourceData, distributionConfig *cloudfront.DistributionConfig) error {
+ var err error
+
+ d.Set("enabled", distributionConfig.Enabled)
+ d.Set("price_class", distributionConfig.PriceClass)
+
+ err = d.Set("default_cache_behavior", flattenDefaultCacheBehavior(distributionConfig.DefaultCacheBehavior))
+ if err != nil {
+ return err
+ }
+ err = d.Set("viewer_certificate", flattenViewerCertificate(distributionConfig.ViewerCertificate))
+ if err != nil {
+ return err
+ }
+
+ if distributionConfig.CallerReference != nil {
+ d.Set("caller_reference", distributionConfig.CallerReference)
+ }
+ if distributionConfig.Comment != nil {
+ if *distributionConfig.Comment != "" {
+ d.Set("comment", distributionConfig.Comment)
+ }
+ }
+ if distributionConfig.DefaultRootObject != nil {
+ d.Set("default_root_object", distributionConfig.DefaultRootObject)
+ }
+ if distributionConfig.WebACLId != nil {
+ d.Set("web_acl_id", distributionConfig.WebACLId)
+ }
+
+ if distributionConfig.CustomErrorResponses != nil {
+ err = d.Set("custom_error_response", flattenCustomErrorResponses(distributionConfig.CustomErrorResponses))
+ if err != nil {
+ return err
+ }
+ }
+ if distributionConfig.CacheBehaviors != nil {
+ err = d.Set("cache_behavior", flattenCacheBehaviors(distributionConfig.CacheBehaviors))
+ if err != nil {
+ return err
+ }
+ }
+
+ if distributionConfig.Logging != nil && *distributionConfig.Logging.Enabled {
+ err = d.Set("logging_config", flattenLoggingConfig(distributionConfig.Logging))
+ } else {
+ err = d.Set("logging_config", schema.NewSet(loggingConfigHash, []interface{}{}))
+ }
+ if err != nil {
+ return err
+ }
+
+ if distributionConfig.Aliases != nil {
+ err = d.Set("aliases", flattenAliases(distributionConfig.Aliases))
+ if err != nil {
+ return err
+ }
+ }
+ if distributionConfig.Restrictions != nil {
+ err = d.Set("restrictions", flattenRestrictions(distributionConfig.Restrictions))
+ if err != nil {
+ return err
+ }
+ }
+ if *distributionConfig.Origins.Quantity > 0 {
+ err = d.Set("origin", flattenOrigins(distributionConfig.Origins))
+ if err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+func expandDefaultCacheBehavior(m map[string]interface{}) *cloudfront.DefaultCacheBehavior {
+ cb := expandCacheBehavior(m)
+ var dcb cloudfront.DefaultCacheBehavior
+
+ simpleCopyStruct(cb, &dcb)
+ return &dcb
+}
+
+func flattenDefaultCacheBehavior(dcb *cloudfront.DefaultCacheBehavior) *schema.Set {
+ m := make(map[string]interface{})
+ var cb cloudfront.CacheBehavior
+
+ simpleCopyStruct(dcb, &cb)
+ m = flattenCacheBehavior(&cb)
+ return schema.NewSet(defaultCacheBehaviorHash, []interface{}{m})
+}
+
+// Assemble the hash for the aws_cloudfront_distribution default_cache_behavior
+// TypeSet attribute.
+func defaultCacheBehaviorHash(v interface{}) int {
+ var buf bytes.Buffer
+ m := v.(map[string]interface{})
+ buf.WriteString(fmt.Sprintf("%t-", m["compress"].(bool)))
+ buf.WriteString(fmt.Sprintf("%s-", m["viewer_protocol_policy"].(string)))
+ buf.WriteString(fmt.Sprintf("%s-", m["target_origin_id"].(string)))
+ buf.WriteString(fmt.Sprintf("%d-", forwardedValuesHash(m["forwarded_values"].(*schema.Set).List()[0].(map[string]interface{}))))
+ buf.WriteString(fmt.Sprintf("%d-", m["min_ttl"].(int)))
+ if d, ok := m["trusted_signers"]; ok {
+ for _, e := range sortInterfaceSlice(d.([]interface{})) {
+ buf.WriteString(fmt.Sprintf("%s-", e.(string)))
+ }
+ }
+ if d, ok := m["max_ttl"]; ok {
+ buf.WriteString(fmt.Sprintf("%d-", d.(int)))
+ }
+ if d, ok := m["smooth_streaming"]; ok {
+ buf.WriteString(fmt.Sprintf("%t-", d.(bool)))
+ }
+ if d, ok := m["default_ttl"]; ok {
+ buf.WriteString(fmt.Sprintf("%d-", d.(int)))
+ }
+ if d, ok := m["allowed_methods"]; ok {
+ for _, e := range sortInterfaceSlice(d.([]interface{})) {
+ buf.WriteString(fmt.Sprintf("%s-", e.(string)))
+ }
+ }
+ if d, ok := m["cached_methods"]; ok {
+ for _, e := range sortInterfaceSlice(d.([]interface{})) {
+ buf.WriteString(fmt.Sprintf("%s-", e.(string)))
+ }
+ }
+ return hashcode.String(buf.String())
+}
+
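+// expandCacheBehaviors converts the cache_behavior set from the resource
+// configuration into a cloudfront.CacheBehaviors structure, recording the
+// number of behaviors in Quantity.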
+func expandCacheBehaviors(s *schema.Set) *cloudfront.CacheBehaviors {
+ var qty int64
+ var items []*cloudfront.CacheBehavior
+ for _, v := range s.List() {
+ items = append(items, expandCacheBehavior(v.(map[string]interface{})))
+ qty++
+ }
+ return &cloudfront.CacheBehaviors{
+ Quantity: aws.Int64(qty),
+ Items: items,
+ }
+}
+
+func flattenCacheBehaviors(cbs *cloudfront.CacheBehaviors) *schema.Set {
+ s := []interface{}{}
+ for _, v := range cbs.Items {
+ s = append(s, flattenCacheBehavior(v))
+ }
+ return schema.NewSet(cacheBehaviorHash, s)
+}
+
+func expandCacheBehavior(m map[string]interface{}) *cloudfront.CacheBehavior {
+ cb := &cloudfront.CacheBehavior{
+ Compress: aws.Bool(m["compress"].(bool)),
+ ViewerProtocolPolicy: aws.String(m["viewer_protocol_policy"].(string)),
+ TargetOriginId: aws.String(m["target_origin_id"].(string)),
+ ForwardedValues: expandForwardedValues(m["forwarded_values"].(*schema.Set).List()[0].(map[string]interface{})),
+ MinTTL: aws.Int64(int64(m["min_ttl"].(int))),
+ MaxTTL: aws.Int64(int64(m["max_ttl"].(int))),
+ DefaultTTL: aws.Int64(int64(m["default_ttl"].(int))),
+ }
+ if v, ok := m["trusted_signers"]; ok {
+ cb.TrustedSigners = expandTrustedSigners(v.([]interface{}))
+ } else {
+ cb.TrustedSigners = expandTrustedSigners([]interface{}{})
+ }
+ if v, ok := m["smooth_streaming"]; ok {
+ cb.SmoothStreaming = aws.Bool(v.(bool))
+ }
+ if v, ok := m["allowed_methods"]; ok {
+ cb.AllowedMethods = expandAllowedMethods(v.([]interface{}))
+ }
+ if v, ok := m["cached_methods"]; ok {
+ cb.AllowedMethods.CachedMethods = expandCachedMethods(v.([]interface{}))
+ }
+ if v, ok := m["path_pattern"]; ok {
+ cb.PathPattern = aws.String(v.(string))
+ }
+ return cb
+}
+
+func flattenCacheBehavior(cb *cloudfront.CacheBehavior) map[string]interface{} {
+ m := make(map[string]interface{})
+
+ m["compress"] = *cb.Compress
+ m["viewer_protocol_policy"] = *cb.ViewerProtocolPolicy
+ m["target_origin_id"] = *cb.TargetOriginId
+ m["forwarded_values"] = schema.NewSet(forwardedValuesHash, []interface{}{flattenForwardedValues(cb.ForwardedValues)})
+ m["min_ttl"] = int(*cb.MinTTL)
+
+ if len(cb.TrustedSigners.Items) > 0 {
+ m["trusted_signers"] = flattenTrustedSigners(cb.TrustedSigners)
+ }
+ if cb.MaxTTL != nil {
+ m["max_ttl"] = int(*cb.MaxTTL)
+ }
+ if cb.SmoothStreaming != nil {
+ m["smooth_streaming"] = *cb.SmoothStreaming
+ }
+ if cb.DefaultTTL != nil {
+ m["default_ttl"] = int(*cb.DefaultTTL)
+ }
+ if cb.AllowedMethods != nil {
+ m["allowed_methods"] = flattenAllowedMethods(cb.AllowedMethods)
+ }
+ if cb.AllowedMethods.CachedMethods != nil {
+ m["cached_methods"] = flattenCachedMethods(cb.AllowedMethods.CachedMethods)
+ }
+ if cb.PathPattern != nil {
+ m["path_pattern"] = *cb.PathPattern
+ }
+ return m
+}
+
+// Assemble the hash for the aws_cloudfront_distribution cache_behavior
+// TypeSet attribute.
+func cacheBehaviorHash(v interface{}) int {
+ var buf bytes.Buffer
+ m := v.(map[string]interface{})
+ buf.WriteString(fmt.Sprintf("%t-", m["compress"].(bool)))
+ buf.WriteString(fmt.Sprintf("%s-", m["viewer_protocol_policy"].(string)))
+ buf.WriteString(fmt.Sprintf("%s-", m["target_origin_id"].(string)))
+ buf.WriteString(fmt.Sprintf("%d-", forwardedValuesHash(m["forwarded_values"].(*schema.Set).List()[0].(map[string]interface{}))))
+ buf.WriteString(fmt.Sprintf("%d-", m["min_ttl"].(int)))
+ if d, ok := m["trusted_signers"]; ok {
+ for _, e := range sortInterfaceSlice(d.([]interface{})) {
+ buf.WriteString(fmt.Sprintf("%s-", e.(string)))
+ }
+ }
+ if d, ok := m["max_ttl"]; ok {
+ buf.WriteString(fmt.Sprintf("%d-", d.(int)))
+ }
+ if d, ok := m["smooth_streaming"]; ok {
+ buf.WriteString(fmt.Sprintf("%t-", d.(bool)))
+ }
+ if d, ok := m["default_ttl"]; ok {
+ buf.WriteString(fmt.Sprintf("%d-", d.(int)))
+ }
+ if d, ok := m["allowed_methods"]; ok {
+ for _, e := range sortInterfaceSlice(d.([]interface{})) {
+ buf.WriteString(fmt.Sprintf("%s-", e.(string)))
+ }
+ }
+ if d, ok := m["cached_methods"]; ok {
+ for _, e := range sortInterfaceSlice(d.([]interface{})) {
+ buf.WriteString(fmt.Sprintf("%s-", e.(string)))
+ }
+ }
+ if d, ok := m["path_pattern"]; ok {
+ buf.WriteString(fmt.Sprintf("%s-", d))
+ }
+ return hashcode.String(buf.String())
+}
+
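+// expandTrustedSigners builds a cloudfront.TrustedSigners structure; signing
+// is only enabled when at least one trusted signer is configured.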
+func expandTrustedSigners(s []interface{}) *cloudfront.TrustedSigners {
+ var ts cloudfront.TrustedSigners
+ if len(s) > 0 {
+ ts.Quantity = aws.Int64(int64(len(s)))
+ ts.Items = expandStringList(s)
+ ts.Enabled = aws.Bool(true)
+ } else {
+ ts.Quantity = aws.Int64(0)
+ ts.Enabled = aws.Bool(false)
+ }
+ return &ts
+}
+
+func flattenTrustedSigners(ts *cloudfront.TrustedSigners) []interface{} {
+ if ts.Items != nil {
+ return flattenStringList(ts.Items)
+ }
+ return []interface{}{}
+}
+
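+// expandForwardedValues maps the forwarded_values attribute (query string,
+// cookies and headers) onto a cloudfront.ForwardedValues structure.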
+func expandForwardedValues(m map[string]interface{}) *cloudfront.ForwardedValues {
+ fv := &cloudfront.ForwardedValues{
+ QueryString: aws.Bool(m["query_string"].(bool)),
+ }
+ if v, ok := m["cookies"]; ok {
+ fv.Cookies = expandCookiePreference(v.(*schema.Set).List()[0].(map[string]interface{}))
+ }
+ if v, ok := m["headers"]; ok {
+ fv.Headers = expandHeaders(v.([]interface{}))
+ }
+ return fv
+}
+
+func flattenForwardedValues(fv *cloudfront.ForwardedValues) map[string]interface{} {
+ m := make(map[string]interface{})
+ m["query_string"] = *fv.QueryString
+ if fv.Cookies != nil {
+ m["cookies"] = schema.NewSet(cookiePreferenceHash, []interface{}{flattenCookiePreference(fv.Cookies)})
+ }
+ if fv.Headers != nil {
+ m["headers"] = flattenHeaders(fv.Headers)
+ }
+ return m
+}
+
+// Assemble the hash for the aws_cloudfront_distribution forwarded_values
+// TypeSet attribute.
+func forwardedValuesHash(v interface{}) int {
+ var buf bytes.Buffer
+ m := v.(map[string]interface{})
+ buf.WriteString(fmt.Sprintf("%t-", m["query_string"].(bool)))
+ if d, ok := m["cookies"]; ok {
+ buf.WriteString(fmt.Sprintf("%d-", cookiePreferenceHash(d.(*schema.Set).List()[0].(map[string]interface{}))))
+ }
+ if d, ok := m["headers"]; ok {
+ for _, e := range sortInterfaceSlice(d.([]interface{})) {
+ buf.WriteString(fmt.Sprintf("%s-", e.(string)))
+ }
+ }
+ return hashcode.String(buf.String())
+}
+
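+// expandHeaders converts a list of header names into a cloudfront.Headers
+// structure with its quantity.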
+func expandHeaders(d []interface{}) *cloudfront.Headers {
+ return &cloudfront.Headers{
+ Quantity: aws.Int64(int64(len(d))),
+ Items: expandStringList(d),
+ }
+}
+
+func flattenHeaders(h *cloudfront.Headers) []interface{} {
+ if h.Items != nil {
+ return flattenStringList(h.Items)
+ }
+ return []interface{}{}
+}
+
+func expandCookiePreference(m map[string]interface{}) *cloudfront.CookiePreference {
+ cp := &cloudfront.CookiePreference{
+ Forward: aws.String(m["forward"].(string)),
+ }
+ if v, ok := m["whitelisted_names"]; ok {
+ cp.WhitelistedNames = expandCookieNames(v.([]interface{}))
+ }
+ return cp
+}
+
+func flattenCookiePreference(cp *cloudfront.CookiePreference) map[string]interface{} {
+ m := make(map[string]interface{})
+ m["forward"] = *cp.Forward
+ if cp.WhitelistedNames != nil {
+ m["whitelisted_names"] = flattenCookieNames(cp.WhitelistedNames)
+ }
+ return m
+}
+
+// Assemble the hash for the aws_cloudfront_distribution cookies
+// TypeSet attribute.
+func cookiePreferenceHash(v interface{}) int {
+ var buf bytes.Buffer
+ m := v.(map[string]interface{})
+ buf.WriteString(fmt.Sprintf("%s-", m["forward"].(string)))
+ if d, ok := m["whitelisted_names"]; ok {
+ for _, e := range sortInterfaceSlice(d.([]interface{})) {
+ buf.WriteString(fmt.Sprintf("%s-", e.(string)))
+ }
+ }
+ return hashcode.String(buf.String())
+}
+
+func expandCookieNames(d []interface{}) *cloudfront.CookieNames {
+ return &cloudfront.CookieNames{
+ Quantity: aws.Int64(int64(len(d))),
+ Items: expandStringList(d),
+ }
+}
+
+func flattenCookieNames(cn *cloudfront.CookieNames) []interface{} {
+ if cn.Items != nil {
+ return flattenStringList(cn.Items)
+ }
+ return []interface{}{}
+}
+
+func expandAllowedMethods(s []interface{}) *cloudfront.AllowedMethods {
+ return &cloudfront.AllowedMethods{
+ Quantity: aws.Int64(int64(len(s))),
+ Items: expandStringList(s),
+ }
+}
+
+func flattenAllowedMethods(am *cloudfront.AllowedMethods) []interface{} {
+ if am.Items != nil {
+ return flattenStringList(am.Items)
+ }
+ return []interface{}{}
+}
+
+func expandCachedMethods(s []interface{}) *cloudfront.CachedMethods {
+ return &cloudfront.CachedMethods{
+ Quantity: aws.Int64(int64(len(s))),
+ Items: expandStringList(s),
+ }
+}
+
+func flattenCachedMethods(cm *cloudfront.CachedMethods) []interface{} {
+ if cm.Items != nil {
+ return flattenStringList(cm.Items)
+ }
+ return []interface{}{}
+}
+
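+// expandOrigins converts the origin set into a cloudfront.Origins structure,
+// expanding each configured origin and recording the total quantity.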
+func expandOrigins(s *schema.Set) *cloudfront.Origins {
+ qty := 0
+ items := []*cloudfront.Origin{}
+ for _, v := range s.List() {
+ items = append(items, expandOrigin(v.(map[string]interface{})))
+ qty++
+ }
+ return &cloudfront.Origins{
+ Quantity: aws.Int64(int64(qty)),
+ Items: items,
+ }
+}
+
+func flattenOrigins(ors *cloudfront.Origins) *schema.Set {
+ s := []interface{}{}
+ for _, v := range ors.Items {
+ s = append(s, flattenOrigin(v))
+ }
+ return schema.NewSet(originHash, s)
+}
+
+func expandOrigin(m map[string]interface{}) *cloudfront.Origin {
+ origin := &cloudfront.Origin{
+ Id: aws.String(m["origin_id"].(string)),
+ DomainName: aws.String(m["domain_name"].(string)),
+ }
+ if v, ok := m["custom_header"]; ok {
+ origin.CustomHeaders = expandCustomHeaders(v.(*schema.Set))
+ }
+ if v, ok := m["custom_origin_config"]; ok {
+ if s := v.(*schema.Set).List(); len(s) > 0 {
+ origin.CustomOriginConfig = expandCustomOriginConfig(s[0].(map[string]interface{}))
+ }
+ }
+ if v, ok := m["origin_path"]; ok {
+ origin.OriginPath = aws.String(v.(string))
+ }
+ if v, ok := m["s3_origin_config"]; ok {
+ if s := v.(*schema.Set).List(); len(s) > 0 {
+ origin.S3OriginConfig = expandS3OriginConfig(s[0].(map[string]interface{}))
+ }
+ }
+ return origin
+}
+
+func flattenOrigin(or *cloudfront.Origin) map[string]interface{} {
+ m := make(map[string]interface{})
+ m["origin_id"] = *or.Id
+ m["domain_name"] = *or.DomainName
+ if or.CustomHeaders != nil {
+ m["custom_header"] = flattenCustomHeaders(or.CustomHeaders)
+ }
+ if or.CustomOriginConfig != nil {
+ m["custom_origin_config"] = schema.NewSet(customOriginConfigHash, []interface{}{flattenCustomOriginConfig(or.CustomOriginConfig)})
+ }
+ if or.OriginPath != nil {
+ m["origin_path"] = *or.OriginPath
+ }
+ if or.S3OriginConfig != nil {
+ m["s3_origin_config"] = schema.NewSet(s3OriginConfigHash, []interface{}{flattenS3OriginConfig(or.S3OriginConfig)})
+ }
+ return m
+}
+
+// Assemble the hash for the aws_cloudfront_distribution origin
+// TypeSet attribute.
+func originHash(v interface{}) int {
+ var buf bytes.Buffer
+ m := v.(map[string]interface{})
+ buf.WriteString(fmt.Sprintf("%s-", m["origin_id"].(string)))
+ buf.WriteString(fmt.Sprintf("%s-", m["domain_name"].(string)))
+ if v, ok := m["custom_header"]; ok {
+ buf.WriteString(fmt.Sprintf("%d-", customHeadersHash(v.(*schema.Set))))
+ }
+ if v, ok := m["custom_origin_config"]; ok {
+ if s := v.(*schema.Set).List(); len(s) > 0 {
+ buf.WriteString(fmt.Sprintf("%d-", customOriginConfigHash((s[0].(map[string]interface{})))))
+ }
+ }
+ if v, ok := m["origin_path"]; ok {
+ buf.WriteString(fmt.Sprintf("%s-", v.(string)))
+ }
+ if v, ok := m["s3_origin_config"]; ok {
+ if s := v.(*schema.Set).List(); len(s) > 0 {
+ buf.WriteString(fmt.Sprintf("%d-", s3OriginConfigHash((s[0].(map[string]interface{})))))
+ }
+ }
+ return hashcode.String(buf.String())
+}
+
+func expandCustomHeaders(s *schema.Set) *cloudfront.CustomHeaders {
+ qty := 0
+ items := []*cloudfront.OriginCustomHeader{}
+ for _, v := range s.List() {
+ items = append(items, expandOriginCustomHeader(v.(map[string]interface{})))
+ qty++
+ }
+ return &cloudfront.CustomHeaders{
+ Quantity: aws.Int64(int64(qty)),
+ Items: items,
+ }
+}
+
+func flattenCustomHeaders(chs *cloudfront.CustomHeaders) *schema.Set {
+ s := []interface{}{}
+ for _, v := range chs.Items {
+ s = append(s, flattenOriginCustomHeader(v))
+ }
+ return schema.NewSet(originCustomHeaderHash, s)
+}
+
+func expandOriginCustomHeader(m map[string]interface{}) *cloudfront.OriginCustomHeader {
+ return &cloudfront.OriginCustomHeader{
+ HeaderName: aws.String(m["name"].(string)),
+ HeaderValue: aws.String(m["value"].(string)),
+ }
+}
+
+func flattenOriginCustomHeader(och *cloudfront.OriginCustomHeader) map[string]interface{} {
+ return map[string]interface{}{
+ "name": *och.HeaderName,
+ "value": *och.HeaderValue,
+ }
+}
+
+// Helper function used by originHash to get a composite hash for all
+// aws_cloudfront_distribution custom_header attributes.
+func customHeadersHash(s *schema.Set) int {
+ var buf bytes.Buffer
+ for _, v := range s.List() {
+ buf.WriteString(fmt.Sprintf("%d-", originCustomHeaderHash(v)))
+ }
+ return hashcode.String(buf.String())
+}
+
+// Assemble the hash for the aws_cloudfront_distribution custom_header
+// TypeSet attribute.
+func originCustomHeaderHash(v interface{}) int {
+ var buf bytes.Buffer
+ m := v.(map[string]interface{})
+ buf.WriteString(fmt.Sprintf("%s-", m["name"].(string)))
+ buf.WriteString(fmt.Sprintf("%s-", m["value"].(string)))
+ return hashcode.String(buf.String())
+}
+
+func expandCustomOriginConfig(m map[string]interface{}) *cloudfront.CustomOriginConfig {
+ return &cloudfront.CustomOriginConfig{
+ OriginProtocolPolicy: aws.String(m["origin_protocol_policy"].(string)),
+ HTTPPort: aws.Int64(int64(m["http_port"].(int))),
+ HTTPSPort: aws.Int64(int64(m["https_port"].(int))),
+ OriginSslProtocols: expandCustomOriginConfigSSL(m["origin_ssl_protocols"].([]interface{})),
+ }
+}
+
+func flattenCustomOriginConfig(cor *cloudfront.CustomOriginConfig) map[string]interface{} {
+ return map[string]interface{}{
+ "origin_protocol_policy": *cor.OriginProtocolPolicy,
+ "http_port": int(*cor.HTTPPort),
+ "https_port": int(*cor.HTTPSPort),
+ "origin_ssl_protocols": flattenCustomOriginConfigSSL(cor.OriginSslProtocols),
+ }
+}
+
+// Assemble the hash for the aws_cloudfront_distribution custom_origin_config
+// TypeSet attribute.
+func customOriginConfigHash(v interface{}) int {
+ var buf bytes.Buffer
+ m := v.(map[string]interface{})
+ buf.WriteString(fmt.Sprintf("%s-", m["origin_protocol_policy"].(string)))
+ buf.WriteString(fmt.Sprintf("%d-", m["http_port"].(int)))
+ buf.WriteString(fmt.Sprintf("%d-", m["https_port"].(int)))
+ for _, v := range sortInterfaceSlice(m["origin_ssl_protocols"].([]interface{})) {
+ buf.WriteString(fmt.Sprintf("%s-", v.(string)))
+ }
+ return hashcode.String(buf.String())
+}
+
+func expandCustomOriginConfigSSL(s []interface{}) *cloudfront.OriginSslProtocols {
+ items := expandStringList(s)
+ return &cloudfront.OriginSslProtocols{
+ Quantity: aws.Int64(int64(len(items))),
+ Items: items,
+ }
+}
+
+func flattenCustomOriginConfigSSL(osp *cloudfront.OriginSslProtocols) []interface{} {
+ return flattenStringList(osp.Items)
+}
+
+func expandS3OriginConfig(m map[string]interface{}) *cloudfront.S3OriginConfig {
+ return &cloudfront.S3OriginConfig{
+ OriginAccessIdentity: aws.String(m["origin_access_identity"].(string)),
+ }
+}
+
+func flattenS3OriginConfig(s3o *cloudfront.S3OriginConfig) map[string]interface{} {
+ return map[string]interface{}{
+ "origin_access_identity": *s3o.OriginAccessIdentity,
+ }
+}
+
+// Assemble the hash for the aws_cloudfront_distribution s3_origin_config
+// TypeSet attribute.
+func s3OriginConfigHash(v interface{}) int {
+ var buf bytes.Buffer
+ m := v.(map[string]interface{})
+ buf.WriteString(fmt.Sprintf("%s-", m["origin_access_identity"].(string)))
+ return hashcode.String(buf.String())
+}
+
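+// expandCustomErrorResponses converts the custom_error_response set into a
+// cloudfront.CustomErrorResponses structure.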
+func expandCustomErrorResponses(s *schema.Set) *cloudfront.CustomErrorResponses {
+ qty := 0
+ items := []*cloudfront.CustomErrorResponse{}
+ for _, v := range s.List() {
+ items = append(items, expandCustomErrorResponse(v.(map[string]interface{})))
+ qty++
+ }
+ return &cloudfront.CustomErrorResponses{
+ Quantity: aws.Int64(int64(qty)),
+ Items: items,
+ }
+}
+
+func flattenCustomErrorResponses(ers *cloudfront.CustomErrorResponses) *schema.Set {
+ s := []interface{}{}
+ for _, v := range ers.Items {
+ s = append(s, flattenCustomErrorResponse(v))
+ }
+ return schema.NewSet(customErrorResponseHash, s)
+}
+
+func expandCustomErrorResponse(m map[string]interface{}) *cloudfront.CustomErrorResponse {
+ er := cloudfront.CustomErrorResponse{
+ ErrorCode: aws.Int64(int64(m["error_code"].(int))),
+ }
+ if v, ok := m["error_caching_min_ttl"]; ok {
+ er.ErrorCachingMinTTL = aws.Int64(int64(v.(int)))
+ }
+ if v, ok := m["response_code"]; ok && v.(int) != 0 {
+ er.ResponseCode = aws.String(strconv.Itoa(v.(int)))
+ } else {
+ er.ResponseCode = aws.String("")
+ }
+ if v, ok := m["response_page_path"]; ok {
+ er.ResponsePagePath = aws.String(v.(string))
+ }
+
+ return &er
+}
+
+func flattenCustomErrorResponse(er *cloudfront.CustomErrorResponse) map[string]interface{} {
+ m := make(map[string]interface{})
+ m["error_code"] = int(*er.ErrorCode)
+ if er.ErrorCachingMinTTL != nil {
+ m["error_caching_min_ttl"] = int(*er.ErrorCachingMinTTL)
+ }
+ if er.ResponseCode != nil {
+ m["response_code"], _ = strconv.Atoi(*er.ResponseCode)
+ }
+ if er.ResponsePagePath != nil {
+ m["response_page_path"] = *er.ResponsePagePath
+ }
+ return m
+}
+
+// Assemble the hash for the aws_cloudfront_distribution custom_error_response
+// TypeSet attribute.
+func customErrorResponseHash(v interface{}) int {
+ var buf bytes.Buffer
+ m := v.(map[string]interface{})
+ buf.WriteString(fmt.Sprintf("%d-", m["error_code"].(int)))
+ if v, ok := m["error_caching_min_ttl"]; ok {
+ buf.WriteString(fmt.Sprintf("%d-", v.(int)))
+ }
+ if v, ok := m["response_code"]; ok {
+ buf.WriteString(fmt.Sprintf("%d-", v.(int)))
+ }
+ if v, ok := m["response_page_path"]; ok {
+ buf.WriteString(fmt.Sprintf("%s-", v.(string)))
+ }
+ return hashcode.String(buf.String())
+}
+
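+// Expand the logging_config attribute into the cloudfront.LoggingConfig type.
+// A nil map yields a disabled configuration with empty bucket and prefix
+// values.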
+func expandLoggingConfig(m map[string]interface{}) *cloudfront.LoggingConfig {
+ var lc cloudfront.LoggingConfig
+ if m != nil {
+ lc.Prefix = aws.String(m["prefix"].(string))
+ lc.Bucket = aws.String(m["bucket"].(string))
+ lc.IncludeCookies = aws.Bool(m["include_cookies"].(bool))
+ lc.Enabled = aws.Bool(true)
+ } else {
+ lc.Prefix = aws.String("")
+ lc.Bucket = aws.String("")
+ lc.IncludeCookies = aws.Bool(false)
+ lc.Enabled = aws.Bool(false)
+ }
+ return &lc
+}
+
+func flattenLoggingConfig(lc *cloudfront.LoggingConfig) *schema.Set {
+ m := make(map[string]interface{})
+ m["prefix"] = *lc.Prefix
+ m["bucket"] = *lc.Bucket
+ m["include_cookies"] = *lc.IncludeCookies
+ return schema.NewSet(loggingConfigHash, []interface{}{m})
+}
+
+// Assemble the hash for the aws_cloudfront_distribution logging_config
+// TypeSet attribute.
+func loggingConfigHash(v interface{}) int {
+ var buf bytes.Buffer
+ m := v.(map[string]interface{})
+ buf.WriteString(fmt.Sprintf("%s-", m["prefix"].(string)))
+ buf.WriteString(fmt.Sprintf("%s-", m["bucket"].(string)))
+ buf.WriteString(fmt.Sprintf("%t-", m["include_cookies"].(bool)))
+ return hashcode.String(buf.String())
+}
+
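+// Expand the aliases TypeSet into the cloudfront.Aliases type; Items is left
+// unset when no aliases are configured.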
+func expandAliases(as *schema.Set) *cloudfront.Aliases {
+ s := as.List()
+ var aliases cloudfront.Aliases
+ if len(s) > 0 {
+ aliases.Quantity = aws.Int64(int64(len(s)))
+ aliases.Items = expandStringList(s)
+ } else {
+ aliases.Quantity = aws.Int64(0)
+ }
+ return &aliases
+}
+
+func flattenAliases(aliases *cloudfront.Aliases) *schema.Set {
+ if aliases.Items != nil {
+ return schema.NewSet(aliasesHash, flattenStringList(aliases.Items))
+ }
+ return schema.NewSet(aliasesHash, []interface{}{})
+}
+
+// Assemble the hash for the aws_cloudfront_distribution aliases
+// TypeSet attribute.
+func aliasesHash(v interface{}) int {
+ return hashcode.String(v.(string))
+}
+
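+// Wrap the single geo_restriction sub-block in the cloudfront.Restrictions
+// type expected by the distribution configuration.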
+func expandRestrictions(m map[string]interface{}) *cloudfront.Restrictions {
+ return &cloudfront.Restrictions{
+ GeoRestriction: expandGeoRestriction(m["geo_restriction"].(*schema.Set).List()[0].(map[string]interface{})),
+ }
+}
+
+func flattenRestrictions(r *cloudfront.Restrictions) *schema.Set {
+ m := make(map[string]interface{})
+ s := schema.NewSet(geoRestrictionHash, []interface{}{flattenGeoRestriction(r.GeoRestriction)})
+ m["geo_restriction"] = s
+ return schema.NewSet(restrictionsHash, []interface{}{m})
+}
+
+// Assemble the hash for the aws_cloudfront_distribution restrictions
+// TypeSet attribute.
+func restrictionsHash(v interface{}) int {
+ var buf bytes.Buffer
+ m := v.(map[string]interface{})
+ buf.WriteString(fmt.Sprintf("%d-", geoRestrictionHash(m["geo_restriction"].(*schema.Set).List()[0].(map[string]interface{}))))
+ return hashcode.String(buf.String())
+}
+
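+// Expand the geo_restriction block into the cloudfront.GeoRestriction type,
+// using the number of configured locations as Quantity.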
+func expandGeoRestriction(m map[string]interface{}) *cloudfront.GeoRestriction {
+ gr := cloudfront.GeoRestriction{
+ RestrictionType: aws.String(m["restriction_type"].(string)),
+ }
+ if v, ok := m["locations"]; ok {
+ gr.Quantity = aws.Int64(int64(len(v.([]interface{}))))
+ gr.Items = expandStringList(v.([]interface{}))
+ } else {
+ gr.Quantity = aws.Int64(0)
+ }
+ return &gr
+}
+
+func flattenGeoRestriction(gr *cloudfront.GeoRestriction) map[string]interface{} {
+ m := make(map[string]interface{})
+
+ m["restriction_type"] = *gr.RestrictionType
+ if gr.Items != nil {
+ m["locations"] = flattenStringList(gr.Items)
+ }
+ return m
+}
+
+// Assemble the hash for the aws_cloudfront_distribution geo_restriction
+// TypeSet attribute.
+func geoRestrictionHash(v interface{}) int {
+ var buf bytes.Buffer
+ m := v.(map[string]interface{})
+ // All keys added in alphabetical order.
+ buf.WriteString(fmt.Sprintf("%s-", m["restriction_type"].(string)))
+ if v, ok := m["locations"]; ok {
+ for _, w := range sortInterfaceSlice(v.([]interface{})) {
+ buf.WriteString(fmt.Sprintf("%s-", w.(string)))
+ }
+ }
+ return hashcode.String(buf.String())
+}
+
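+// Expand the viewer_certificate block into the cloudfront.ViewerCertificate
+// type, preferring an IAM certificate ID, then an ACM certificate ARN, and
+// falling back to the default CloudFront certificate when neither is set.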
+func expandViewerCertificate(m map[string]interface{}) *cloudfront.ViewerCertificate {
+ var vc cloudfront.ViewerCertificate
+ if v, ok := m["iam_certificate_id"]; ok && v != "" {
+ vc.IAMCertificateId = aws.String(v.(string))
+ vc.SSLSupportMethod = aws.String(m["ssl_support_method"].(string))
+ } else if v, ok := m["acm_certificate_arn"]; ok && v != "" {
+ vc.ACMCertificateArn = aws.String(v.(string))
+ vc.SSLSupportMethod = aws.String(m["ssl_support_method"].(string))
+ } else {
+ vc.CloudFrontDefaultCertificate = aws.Bool(m["cloudfront_default_certificate"].(bool))
+ }
+ if v, ok := m["minimum_protocol_version"]; ok && v != "" {
+ vc.MinimumProtocolVersion = aws.String(v.(string))
+ }
+ return &vc
+}
+
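+// Flatten cloudfront.ViewerCertificate back into a TypeSet, emitting only the
+// fields present on the API response.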
+func flattenViewerCertificate(vc *cloudfront.ViewerCertificate) *schema.Set {
+ m := make(map[string]interface{})
+
+ if vc.IAMCertificateId != nil {
+ m["iam_certificate_id"] = *vc.IAMCertificateId
+ m["ssl_support_method"] = *vc.SSLSupportMethod
+ }
+ if vc.ACMCertificateArn != nil {
+ m["acm_certificate_arn"] = *vc.ACMCertificateArn
+ m["ssl_support_method"] = *vc.SSLSupportMethod
+ }
+ if vc.CloudFrontDefaultCertificate != nil {
+ m["cloudfront_default_certificate"] = *vc.CloudFrontDefaultCertificate
+ }
+ if vc.MinimumProtocolVersion != nil {
+ m["minimum_protocol_version"] = *vc.MinimumProtocolVersion
+ }
+ return schema.NewSet(viewerCertificateHash, []interface{}{m})
+}
+
+// Assemble the hash for the aws_cloudfront_distribution viewer_certificate
+// TypeSet attribute.
+func viewerCertificateHash(v interface{}) int {
+ var buf bytes.Buffer
+ m := v.(map[string]interface{})
+ if v, ok := m["iam_certificate_id"]; ok && v.(string) != "" {
+ buf.WriteString(fmt.Sprintf("%s-", v.(string)))
+ buf.WriteString(fmt.Sprintf("%s-", m["ssl_support_method"].(string)))
+ } else if v, ok := m["acm_certificate_arn"]; ok && v.(string) != "" {
+ buf.WriteString(fmt.Sprintf("%s-", v.(string)))
+ buf.WriteString(fmt.Sprintf("%s-", m["ssl_support_method"].(string)))
+ } else {
+ buf.WriteString(fmt.Sprintf("%t-", m["cloudfront_default_certificate"].(bool)))
+ }
+ if v, ok := m["minimum_protocol_version"]; ok && v.(string) != "" {
+ buf.WriteString(fmt.Sprintf("%s-", v.(string)))
+ }
+ return hashcode.String(buf.String())
+}
+
+// Do a top-level copy of struct fields from one struct to another. Used to
+// copy fields between CacheBehavior and DefaultCacheBehavior structs.
+func simpleCopyStruct(src, dst interface{}) {
+ s := reflect.ValueOf(src).Elem()
+ d := reflect.ValueOf(dst).Elem()
+
+ for i := 0; i < s.NumField(); i++ {
+ if s.Field(i).CanSet() {
+ if s.Field(i).Interface() != nil {
+ for j := 0; j < d.NumField(); j++ {
+ if d.Type().Field(j).Name == s.Type().Field(i).Name {
+ d.Field(j).Set(s.Field(i))
+ }
+ }
+ }
+ }
+ }
+}
+
+// Convert *cloudfront.ActiveTrustedSigners to a flatmap.Map type, which ensures
+// it can properly be inserted into the schema.TypeMap type used by the
+// active_trusted_signers attribute.
+func flattenActiveTrustedSigners(ats *cloudfront.ActiveTrustedSigners) flatmap.Map {
+ m := make(map[string]interface{})
+ s := []interface{}{}
+ m["enabled"] = *ats.Enabled
+
+ for _, v := range ats.Items {
+ signer := make(map[string]interface{})
+ signer["aws_account_number"] = *v.AwsAccountNumber
+ signer["key_pair_ids"] = aws.StringValueSlice(v.KeyPairIds.Items)
+ s = append(s, signer)
+ }
+ m["items"] = s
+ return flatmap.Flatten(m)
+}
diff --git a/builtin/providers/aws/cloudfront_distribution_configuration_structure_test.go b/builtin/providers/aws/cloudfront_distribution_configuration_structure_test.go
new file mode 100644
index 000000000000..e788c80b707f
--- /dev/null
+++ b/builtin/providers/aws/cloudfront_distribution_configuration_structure_test.go
@@ -0,0 +1,1045 @@
+package aws
+
+import (
+ "reflect"
+ "testing"
+
+ "github.com/aws/aws-sdk-go/aws"
+ "github.com/hashicorp/terraform/helper/schema"
+)
+
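+// The helpers below build fixture data for the aws_cloudfront_distribution
+// attributes exercised by the structure tests in this file.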
+func defaultCacheBehaviorConf() map[string]interface{} {
+ return map[string]interface{}{
+ "viewer_protocol_policy": "allow-all",
+ "target_origin_id": "myS3Origin",
+ "forwarded_values": schema.NewSet(forwardedValuesHash, []interface{}{forwardedValuesConf()}),
+ "min_ttl": 86400,
+ "trusted_signers": trustedSignersConf(),
+ "max_ttl": 365000000,
+ "smooth_streaming": false,
+ "default_ttl": 86400,
+ "allowed_methods": allowedMethodsConf(),
+ "cached_methods": cachedMethodsConf(),
+ "compress": true,
+ }
+}
+
+func cacheBehaviorConf1() map[string]interface{} {
+ cb := defaultCacheBehaviorConf()
+ cb["path_pattern"] = "/path1"
+ return cb
+}
+
+func cacheBehaviorConf2() map[string]interface{} {
+ cb := defaultCacheBehaviorConf()
+ cb["path_pattern"] = "/path2"
+ return cb
+}
+
+func cacheBehaviorsConf() *schema.Set {
+ return schema.NewSet(cacheBehaviorHash, []interface{}{cacheBehaviorConf1(), cacheBehaviorConf2()})
+}
+
+func trustedSignersConf() []interface{} {
+ return []interface{}{"1234567890EX", "1234567891EX"}
+}
+
+func forwardedValuesConf() map[string]interface{} {
+ return map[string]interface{}{
+ "query_string": true,
+ "cookies": schema.NewSet(cookiePreferenceHash, []interface{}{cookiePreferenceConf()}),
+ "headers": headersConf(),
+ }
+}
+
+func headersConf() []interface{} {
+ return []interface{}{"X-Example1", "X-Example2"}
+}
+
+func cookiePreferenceConf() map[string]interface{} {
+ return map[string]interface{}{
+ "forward": "whitelist",
+ "whitelisted_names": cookieNamesConf(),
+ }
+}
+
+func cookieNamesConf() []interface{} {
+ return []interface{}{"Example1", "Example2"}
+}
+
+func allowedMethodsConf() []interface{} {
+ return []interface{}{"DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"}
+}
+
+func cachedMethodsConf() []interface{} {
+ return []interface{}{"GET", "HEAD", "OPTIONS"}
+}
+
+func originCustomHeadersConf() *schema.Set {
+ return schema.NewSet(originCustomHeaderHash, []interface{}{originCustomHeaderConf1(), originCustomHeaderConf2()})
+}
+
+func originCustomHeaderConf1() map[string]interface{} {
+ return map[string]interface{}{
+ "name": "X-Custom-Header1",
+ "value": "samplevalue",
+ }
+}
+
+func originCustomHeaderConf2() map[string]interface{} {
+ return map[string]interface{}{
+ "name": "X-Custom-Header2",
+ "value": "samplevalue",
+ }
+}
+
+func customOriginConf() map[string]interface{} {
+ return map[string]interface{}{
+ "origin_protocol_policy": "http-only",
+ "http_port": 80,
+ "https_port": 443,
+ "origin_ssl_protocols": customOriginSslProtocolsConf(),
+ }
+}
+
+func customOriginSslProtocolsConf() []interface{} {
+ return []interface{}{"SSLv3", "TLSv1", "TLSv1.1", "TLSv1.2"}
+}
+
+func s3OriginConf() map[string]interface{} {
+ return map[string]interface{}{
+ "origin_access_identity": "origin-access-identity/cloudfront/E127EXAMPLE51Z",
+ }
+}
+
+func originWithCustomConf() map[string]interface{} {
+ return map[string]interface{}{
+ "origin_id": "CustomOrigin",
+ "domain_name": "www.example.com",
+ "origin_path": "/",
+ "custom_origin_config": schema.NewSet(customOriginConfigHash, []interface{}{customOriginConf()}),
+ "custom_header": originCustomHeadersConf(),
+ }
+}
+
+func originWithS3Conf() map[string]interface{} {
+ return map[string]interface{}{
+ "origin_id": "S3Origin",
+ "domain_name": "s3.example.com",
+ "origin_path": "/",
+ "s3_origin_config": schema.NewSet(s3OriginConfigHash, []interface{}{s3OriginConf()}),
+ "custom_header": originCustomHeadersConf(),
+ }
+}
+
+func multiOriginConf() *schema.Set {
+ return schema.NewSet(originHash, []interface{}{originWithCustomConf(), originWithS3Conf()})
+}
+
+func geoRestrictionWhitelistConf() map[string]interface{} {
+ return map[string]interface{}{
+ "restriction_type": "whitelist",
+ "locations": []interface{}{"CA", "GB", "US"},
+ }
+}
+
+func geoRestrictionsConf() map[string]interface{} {
+ return map[string]interface{}{
+ "geo_restriction": schema.NewSet(geoRestrictionHash, []interface{}{geoRestrictionWhitelistConf()}),
+ }
+}
+
+func geoRestrictionConfNoItems() map[string]interface{} {
+ return map[string]interface{}{
+ "restriction_type": "none",
+ }
+}
+
+func customErrorResponsesConf() []interface{} {
+ return []interface{}{
+ map[string]interface{}{
+ "error_code": 404,
+ "error_caching_min_ttl": 30,
+ "response_code": 200,
+ "response_page_path": "/error-pages/404.html",
+ },
+ map[string]interface{}{
+ "error_code": 403,
+ "error_caching_min_ttl": 15,
+ "response_code": 404,
+ "response_page_path": "/error-pages/404.html",
+ },
+ }
+}
+
+func aliasesConf() *schema.Set {
+ return schema.NewSet(aliasesHash, []interface{}{"example.com", "www.example.com"})
+}
+
+func loggingConfigConf() map[string]interface{} {
+ return map[string]interface{}{
+ "include_cookies": false,
+ "bucket": "mylogs.s3.amazonaws.com",
+ "prefix": "myprefix",
+ }
+}
+
+func customErrorResponsesConfSet() *schema.Set {
+ return schema.NewSet(customErrorResponseHash, customErrorResponsesConf())
+}
+
+func customErrorResponsesConfFirst() map[string]interface{} {
+ return customErrorResponsesConf()[0].(map[string]interface{})
+}
+
+func customErrorResponseConfNoResponseCode() map[string]interface{} {
+ er := customErrorResponsesConf()[0].(map[string]interface{})
+ er["response_code"] = 0
+ er["response_page_path"] = ""
+ return er
+}
+
+func viewerCertificateConfSetCloudFrontDefault() map[string]interface{} {
+ return map[string]interface{}{
+ "acm_certificate_arn": "",
+ "cloudfront_default_certificate": true,
+ "iam_certificate_id": "",
+ "minimum_protocol_version": "",
+ "ssl_support_method": "",
+ }
+}
+
+func viewerCertificateConfSetIAM() map[string]interface{} {
+ return map[string]interface{}{
+ "acm_certificate_arn": "",
+ "cloudfront_default_certificate": false,
+ "iam_certificate_id": "iamcert-01234567",
+ "ssl_support_method": "vip",
+ "minimum_protocol_version": "TLSv1",
+ }
+}
+
+func viewerCertificateConfSetACM() map[string]interface{} {
+ return map[string]interface{}{
+ "acm_certificate_arn": "arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012",
+ "cloudfront_default_certificate": false,
+ "iam_certificate_id": "",
+ "ssl_support_method": "sni-only",
+ "minimum_protocol_version": "TLSv1",
+ }
+}
+
+func TestCloudFrontStructure_expandDefaultCacheBehavior(t *testing.T) {
+ data := defaultCacheBehaviorConf()
+ dcb := expandDefaultCacheBehavior(data)
+ if *dcb.Compress != true {
+ t.Fatalf("Expected Compress to be true, got %v", *dcb.Compress)
+ }
+ if *dcb.ViewerProtocolPolicy != "allow-all" {
+ t.Fatalf("Expected ViewerProtocolPolicy to be allow-all, got %v", *dcb.ViewerProtocolPolicy)
+ }
+ if *dcb.TargetOriginId != "myS3Origin" {
+ t.Fatalf("Expected TargetOriginId to be allow-all, got %v", *dcb.TargetOriginId)
+ }
+ if reflect.DeepEqual(dcb.ForwardedValues.Headers.Items, expandStringList(headersConf())) != true {
+ t.Fatalf("Expected Items to be %v, got %v", headersConf(), dcb.ForwardedValues.Headers.Items)
+ }
+ if *dcb.MinTTL != 86400 {
+ t.Fatalf("Expected MinTTL to be 86400, got %v", *dcb.MinTTL)
+ }
+ if reflect.DeepEqual(dcb.TrustedSigners.Items, expandStringList(trustedSignersConf())) != true {
+ t.Fatalf("Expected TrustedSigners.Items to be %v, got %v", trustedSignersConf(), dcb.TrustedSigners.Items)
+ }
+ if *dcb.MaxTTL != 365000000 {
+ t.Fatalf("Expected MaxTTL to be 86400, got %v", *dcb.MaxTTL)
+ }
+ if *dcb.SmoothStreaming != false {
+ t.Fatalf("Expected SmoothStreaming to be false, got %v", *dcb.SmoothStreaming)
+ }
+ if *dcb.DefaultTTL != 86400 {
+ t.Fatalf("Expected DefaultTTL to be 86400, got %v", *dcb.DefaultTTL)
+ }
+ if reflect.DeepEqual(dcb.AllowedMethods.Items, expandStringList(allowedMethodsConf())) != true {
+ t.Fatalf("Expected TrustedSigners.Items to be %v, got %v", allowedMethodsConf(), dcb.AllowedMethods.Items)
+ }
+ if reflect.DeepEqual(dcb.AllowedMethods.CachedMethods.Items, expandStringList(cachedMethodsConf())) != true {
+ t.Fatalf("Expected TrustedSigners.Items to be %v, got %v", cachedMethodsConf(), dcb.AllowedMethods.CachedMethods.Items)
+ }
+}
+
+func TestCloudFrontStructure_flattenDefaultCacheBehavior(t *testing.T) {
+ in := defaultCacheBehaviorConf()
+ dcb := expandDefaultCacheBehavior(in)
+ out := flattenDefaultCacheBehavior(dcb)
+ diff := schema.NewSet(defaultCacheBehaviorHash, []interface{}{in}).Difference(out)
+
+ if len(diff.List()) > 0 {
+ t.Fatalf("Expected out to be %v, got %v, diff: %v", in, out, diff)
+ }
+}
+
+func TestCloudFrontStructure_expandCacheBehavior(t *testing.T) {
+ data := cacheBehaviorConf1()
+ cb := expandCacheBehavior(data)
+ if *cb.Compress != true {
+ t.Fatalf("Expected Compress to be true, got %v", *cb.Compress)
+ }
+ if *cb.ViewerProtocolPolicy != "allow-all" {
+ t.Fatalf("Expected ViewerProtocolPolicy to be allow-all, got %v", *cb.ViewerProtocolPolicy)
+ }
+ if *cb.TargetOriginId != "myS3Origin" {
+ t.Fatalf("Expected TargetOriginId to be myS3Origin, got %v", *cb.TargetOriginId)
+ }
+ if reflect.DeepEqual(cb.ForwardedValues.Headers.Items, expandStringList(headersConf())) != true {
+ t.Fatalf("Expected Items to be %v, got %v", headersConf(), cb.ForwardedValues.Headers.Items)
+ }
+ if *cb.MinTTL != 86400 {
+ t.Fatalf("Expected MinTTL to be 86400, got %v", *cb.MinTTL)
+ }
+ if reflect.DeepEqual(cb.TrustedSigners.Items, expandStringList(trustedSignersConf())) != true {
+ t.Fatalf("Expected TrustedSigners.Items to be %v, got %v", trustedSignersConf(), cb.TrustedSigners.Items)
+ }
+ if *cb.MaxTTL != 365000000 {
+ t.Fatalf("Expected MaxTTL to be 365000000, got %v", *cb.MaxTTL)
+ }
+ if *cb.SmoothStreaming != false {
+ t.Fatalf("Expected SmoothStreaming to be false, got %v", *cb.SmoothStreaming)
+ }
+ if *cb.DefaultTTL != 86400 {
+ t.Fatalf("Expected DefaultTTL to be 86400, got %v", *cb.DefaultTTL)
+ }
+ if reflect.DeepEqual(cb.AllowedMethods.Items, expandStringList(allowedMethodsConf())) != true {
+ t.Fatalf("Expected AllowedMethods.Items to be %v, got %v", allowedMethodsConf(), cb.AllowedMethods.Items)
+ }
+ if reflect.DeepEqual(cb.AllowedMethods.CachedMethods.Items, expandStringList(cachedMethodsConf())) != true {
+ t.Fatalf("Expected AllowedMethods.CachedMethods.Items to be %v, got %v", cachedMethodsConf(), cb.AllowedMethods.CachedMethods.Items)
+ }
+ if *cb.PathPattern != "/path1" {
+ t.Fatalf("Expected PathPattern to be /path1, got %v", *cb.PathPattern)
+ }
+}
+
+func TestCloudFrontStructure_flattenCacheBehavior(t *testing.T) {
+ in := cacheBehaviorConf1()
+ cb := expandCacheBehavior(in)
+ out := flattenCacheBehavior(cb)
+ var diff *schema.Set
+ if out["compress"] != true {
+ t.Fatalf("Expected out[compress] to be true, got %v", out["compress"])
+ }
+ if out["viewer_protocol_policy"] != "allow-all" {
+ t.Fatalf("Expected out[viewer_protocol_policy] to be allow-all, got %v", out["viewer_protocol_policy"])
+ }
+ if out["target_origin_id"] != "myS3Origin" {
+ t.Fatalf("Expected out[target_origin_id] to be myS3Origin, got %v", out["target_origin_id"])
+ }
+ diff = out["forwarded_values"].(*schema.Set).Difference(in["forwarded_values"].(*schema.Set))
+ if len(diff.List()) > 0 {
+ t.Fatalf("Expected out[forwarded_values] to be %v, got %v, diff: %v", out["forwarded_values"], in["forwarded_values"], diff)
+ }
+ if out["min_ttl"] != int(86400) {
+ t.Fatalf("Expected out[min_ttl] to be 86400 (int), got %v", out["forwarded_values"])
+ }
+ if reflect.DeepEqual(out["trusted_signers"], in["trusted_signers"]) != true {
+ t.Fatalf("Expected out[trusted_signers] to be %v, got %v", in["trusted_signers"], out["trusted_signers"])
+ }
+ if out["max_ttl"] != int(365000000) {
+ t.Fatalf("Expected out[max_ttl] to be 365000000 (int), got %v", out["max_ttl"])
+ }
+ if out["smooth_streaming"] != false {
+ t.Fatalf("Expected out[smooth_streaming] to be false, got %v", out["smooth_streaming"])
+ }
+ if out["default_ttl"] != int(86400) {
+ t.Fatalf("Expected out[default_ttl] to be 86400 (int), got %v", out["default_ttl"])
+ }
+ if reflect.DeepEqual(out["allowed_methods"], in["allowed_methods"]) != true {
+ t.Fatalf("Expected out[allowed_methods] to be %v, got %v", in["allowed_methods"], out["allowed_methods"])
+ }
+ if reflect.DeepEqual(out["cached_methods"], in["cached_methods"]) != true {
+ t.Fatalf("Expected out[cached_methods] to be %v, got %v", in["cached_methods"], out["cached_methods"])
+ }
+ if out["path_pattern"] != "/path1" {
+ t.Fatalf("Expected out[path_pattern] to be /path1, got %v", out["path_pattern"])
+ }
+}
+
+func TestCloudFrontStructure_expandCacheBehaviors(t *testing.T) {
+ data := cacheBehaviorsConf()
+ cbs := expandCacheBehaviors(data)
+ if *cbs.Quantity != 2 {
+ t.Fatalf("Expected Quantity to be 2, got %v", *cbs.Quantity)
+ }
+ if *cbs.Items[0].TargetOriginId != "myS3Origin" {
+ t.Fatalf("Expected first Item's TargetOriginId to be myS3Origin, got %v", *cbs.Items[0].TargetOriginId)
+ }
+}
+
+func TestCloudFrontStructure_flattenCacheBehaviors(t *testing.T) {
+ in := cacheBehaviorsConf()
+ cbs := expandCacheBehaviors(in)
+ out := flattenCacheBehaviors(cbs)
+ diff := in.Difference(out)
+
+ if len(diff.List()) > 0 {
+ t.Fatalf("Expected out to be %v, got %v, diff: %v", in, out, diff)
+ }
+}
+
+func TestCloudFrontStructure_expandTrustedSigners(t *testing.T) {
+ data := trustedSignersConf()
+ ts := expandTrustedSigners(data)
+ if *ts.Quantity != 2 {
+ t.Fatalf("Expected Quantity to be 2, got %v", *ts.Quantity)
+ }
+ if *ts.Enabled != true {
+ t.Fatalf("Expected Enabled to be true, got %v", *ts.Enabled)
+ }
+ if reflect.DeepEqual(ts.Items, expandStringList(data)) != true {
+ t.Fatalf("Expected Items to be %v, got %v", data, ts.Items)
+ }
+}
+
+func TestCloudFrontStructure_flattenTrustedSigners(t *testing.T) {
+ in := trustedSignersConf()
+ ts := expandTrustedSigners(in)
+ out := flattenTrustedSigners(ts)
+
+ if reflect.DeepEqual(in, out) != true {
+ t.Fatalf("Expected out to be %v, got %v", in, out)
+ }
+}
+
+func TestCloudFrontStructure_expandTrustedSigners_empty(t *testing.T) {
+ data := []interface{}{}
+ ts := expandTrustedSigners(data)
+ if *ts.Quantity != 0 {
+ t.Fatalf("Expected Quantity to be 2, got %v", *ts.Quantity)
+ }
+ if *ts.Enabled != false {
+ t.Fatalf("Expected Enabled to be true, got %v", *ts.Enabled)
+ }
+ if ts.Items != nil {
+ t.Fatalf("Expected Items to be nil, got %v", ts.Items)
+ }
+}
+
+func TestCloudFrontStructure_expandForwardedValues(t *testing.T) {
+ data := forwardedValuesConf()
+ fv := expandForwardedValues(data)
+ if *fv.QueryString != true {
+ t.Fatalf("Expected QueryString to be true, got %v", *fv.QueryString)
+ }
+ if reflect.DeepEqual(fv.Cookies.WhitelistedNames.Items, expandStringList(cookieNamesConf())) != true {
+ t.Fatalf("Expected Cookies.WhitelistedNames.Items to be %v, got %v", cookieNamesConf(), fv.Cookies.WhitelistedNames.Items)
+ }
+ if reflect.DeepEqual(fv.Headers.Items, expandStringList(headersConf())) != true {
+ t.Fatalf("Expected Headers.Items to be %v, got %v", headersConf(), fv.Headers.Items)
+ }
+}
+
+func TestCloudFrontStructure_flattenForwardedValues(t *testing.T) {
+ in := forwardedValuesConf()
+ fv := expandForwardedValues(in)
+ out := flattenForwardedValues(fv)
+
+ if out["query_string"] != true {
+ t.Fatalf("Expected out[query_string] to be true, got %v", out["query_string"])
+ }
+ if out["cookies"].(*schema.Set).Equal(in["cookies"].(*schema.Set)) != true {
+ t.Fatalf("Expected out[cookies] to be %v, got %v", in["cookies"], out["cookies"])
+ }
+ if reflect.DeepEqual(out["headers"], in["headers"]) != true {
+ t.Fatalf("Expected out[headers] to be %v, got %v", in["headers"], out["headers"])
+ }
+}
+
+func TestCloudFrontStructure_expandHeaders(t *testing.T) {
+ data := headersConf()
+ h := expandHeaders(data)
+ if *h.Quantity != 2 {
+ t.Fatalf("Expected Quantity to be 2, got %v", *h.Quantity)
+ }
+ if reflect.DeepEqual(h.Items, expandStringList(data)) != true {
+ t.Fatalf("Expected Items to be %v, got %v", data, h.Items)
+ }
+}
+
+func TestCloudFrontStructure_flattenHeaders(t *testing.T) {
+ in := headersConf()
+ h := expandHeaders(in)
+ out := flattenHeaders(h)
+
+ if reflect.DeepEqual(in, out) != true {
+ t.Fatalf("Expected out to be %v, got %v", in, out)
+ }
+}
+
+func TestCloudFrontStructure_expandCookiePreference(t *testing.T) {
+ data := cookiePreferenceConf()
+ cp := expandCookiePreference(data)
+ if *cp.Forward != "whitelist" {
+ t.Fatalf("Expected Forward to be whitelist, got %v", *cp.Forward)
+ }
+ if reflect.DeepEqual(cp.WhitelistedNames.Items, expandStringList(cookieNamesConf())) != true {
+ t.Fatalf("Expected WhitelistedNames.Items to be %v, got %v", cookieNamesConf(), cp.WhitelistedNames.Items)
+ }
+}
+
+func TestCloudFrontStructure_flattenCookiePreference(t *testing.T) {
+ in := cookiePreferenceConf()
+ cp := expandCookiePreference(in)
+ out := flattenCookiePreference(cp)
+
+ if reflect.DeepEqual(in, out) != true {
+ t.Fatalf("Expected out to be %v, got %v", in, out)
+ }
+}
+
+func TestCloudFrontStructure_expandCookieNames(t *testing.T) {
+ data := cookieNamesConf()
+ cn := expandCookieNames(data)
+ if *cn.Quantity != 2 {
+ t.Fatalf("Expected Quantity to be 2, got %v", *cn.Quantity)
+ }
+ if reflect.DeepEqual(cn.Items, expandStringList(data)) != true {
+ t.Fatalf("Expected Items to be %v, got %v", data, cn.Items)
+ }
+}
+
+func TestCloudFrontStructure_flattenCookieNames(t *testing.T) {
+ in := cookieNamesConf()
+ cn := expandCookieNames(in)
+ out := flattenCookieNames(cn)
+
+ if reflect.DeepEqual(in, out) != true {
+ t.Fatalf("Expected out to be %v, got %v", in, out)
+ }
+}
+
+func TestCloudFrontStructure_expandAllowedMethods(t *testing.T) {
+ data := allowedMethodsConf()
+ am := expandAllowedMethods(data)
+ if *am.Quantity != 7 {
+ t.Fatalf("Expected Quantity to be 3, got %v", *am.Quantity)
+ }
+ if reflect.DeepEqual(am.Items, expandStringList(data)) != true {
+ t.Fatalf("Expected Items to be %v, got %v", data, am.Items)
+ }
+}
+
+func TestCloudFrontStructure_flattenAllowedMethods(t *testing.T) {
+ in := allowedMethodsConf()
+ am := expandAllowedMethods(in)
+ out := flattenAllowedMethods(am)
+
+ if reflect.DeepEqual(in, out) != true {
+ t.Fatalf("Expected out to be %v, got %v", in, out)
+ }
+}
+
+func TestCloudFrontStructure_expandCachedMethods(t *testing.T) {
+ data := cachedMethodsConf()
+ cm := expandCachedMethods(data)
+ if *cm.Quantity != 3 {
+ t.Fatalf("Expected Quantity to be 3, got %v", *cm.Quantity)
+ }
+ if reflect.DeepEqual(cm.Items, expandStringList(data)) != true {
+ t.Fatalf("Expected Items to be %v, got %v", data, cm.Items)
+ }
+}
+
+func TestCloudFrontStructure_flattenCachedMethods(t *testing.T) {
+ in := cachedMethodsConf()
+ cm := expandCachedMethods(in)
+ out := flattenCachedMethods(cm)
+
+ if reflect.DeepEqual(in, out) != true {
+ t.Fatalf("Expected out to be %v, got %v", in, out)
+ }
+}
+
+func TestCloudFrontStructure_expandOrigins(t *testing.T) {
+ data := multiOriginConf()
+ origins := expandOrigins(data)
+ if *origins.Quantity != 2 {
+ t.Fatalf("Expected Quantity to be 2, got %v", *origins.Quantity)
+ }
+ if *origins.Items[0].OriginPath != "/" {
+ t.Fatalf("Expected first Item's OriginPath to be /, got %v", *origins.Items[0].OriginPath)
+ }
+}
+
+func TestCloudFrontStructure_flattenOrigins(t *testing.T) {
+ in := multiOriginConf()
+ origins := expandOrigins(in)
+ out := flattenOrigins(origins)
+ diff := in.Difference(out)
+
+ if len(diff.List()) > 0 {
+ t.Fatalf("Expected out to be %v, got %v, diff: %v", in, out, diff)
+ }
+}
+
+func TestCloudFrontStructure_expandOrigin(t *testing.T) {
+ data := originWithCustomConf()
+ or := expandOrigin(data)
+ if *or.Id != "CustomOrigin" {
+ t.Fatalf("Expected Id to be CustomOrigin, got %v", *or.Id)
+ }
+ if *or.DomainName != "www.example.com" {
+ t.Fatalf("Expected DomainName to be www.example.com, got %v", *or.DomainName)
+ }
+ if *or.OriginPath != "/" {
+ t.Fatalf("Expected OriginPath to be /, got %v", *or.OriginPath)
+ }
+ if *or.CustomOriginConfig.OriginProtocolPolicy != "http-only" {
+ t.Fatalf("Expected CustomOriginConfig.OriginProtocolPolicy to be http-only, got %v", *or.CustomOriginConfig.OriginProtocolPolicy)
+ }
+ if *or.CustomHeaders.Items[0].HeaderValue != "samplevalue" {
+ t.Fatalf("Expected CustomHeaders.Items[0].HeaderValue to be samplevalue, got %v", *or.CustomHeaders.Items[0].HeaderValue)
+ }
+}
+
+func TestCloudFrontStructure_flattenOrigin(t *testing.T) {
+ in := originWithCustomConf()
+ or := expandOrigin(in)
+ out := flattenOrigin(or)
+
+ if out["origin_id"] != "CustomOrigin" {
+ t.Fatalf("Expected out[origin_id] to be CustomOrigin, got %v", out["origin_id"])
+ }
+ if out["domain_name"] != "www.example.com" {
+ t.Fatalf("Expected out[domain_name] to be www.example.com, got %v", out["domain_name"])
+ }
+ if out["origin_path"] != "/" {
+ t.Fatalf("Expected out[origin_path] to be /, got %v", out["origin_path"])
+ }
+ if out["custom_origin_config"].(*schema.Set).Equal(in["custom_origin_config"].(*schema.Set)) != true {
+ t.Fatalf("Expected out[custom_origin_config] to be %v, got %v", in["custom_origin_config"], out["custom_origin_config"])
+ }
+}
+
+func TestCloudFrontStructure_expandCustomHeaders(t *testing.T) {
+ in := originCustomHeadersConf()
+ chs := expandCustomHeaders(in)
+ if *chs.Quantity != 2 {
+ t.Fatalf("Expected Quantity to be 2, got %v", *chs.Quantity)
+ }
+ if *chs.Items[0].HeaderValue != "samplevalue" {
+ t.Fatalf("Expected first Item's HeaderValue to be samplevalue, got %v", *chs.Items[0].HeaderValue)
+ }
+}
+
+func TestCloudFrontStructure_flattenCustomHeaders(t *testing.T) {
+ in := originCustomHeadersConf()
+ chs := expandCustomHeaders(in)
+ out := flattenCustomHeaders(chs)
+ diff := in.Difference(out)
+
+ if len(diff.List()) > 0 {
+ t.Fatalf("Expected out to be %v, got %v, diff: %v", in, out, diff)
+ }
+}
+
+func TestCloudFrontStructure_flattenOriginCustomHeader(t *testing.T) {
+ in := originCustomHeaderConf1()
+ och := expandOriginCustomHeader(in)
+ out := flattenOriginCustomHeader(och)
+
+ if out["name"] != "X-Custom-Header1" {
+ t.Fatalf("Expected out[name] to be X-Custom-Header1, got %v", out["name"])
+ }
+ if out["value"] != "samplevalue" {
+ t.Fatalf("Expected out[value] to be samplevalue, got %v", out["value"])
+ }
+}
+
+func TestCloudFrontStructure_expandOriginCustomHeader(t *testing.T) {
+ in := originCustomHeaderConf1()
+ och := expandOriginCustomHeader(in)
+
+ if *och.HeaderName != "X-Custom-Header1" {
+ t.Fatalf("Expected HeaderName to be X-Custom-Header1, got %v", *och.HeaderName)
+ }
+ if *och.HeaderValue != "samplevalue" {
+ t.Fatalf("Expected HeaderValue to be samplevalue, got %v", *och.HeaderValue)
+ }
+}
+
+func TestCloudFrontStructure_expandCustomOriginConfig(t *testing.T) {
+ data := customOriginConf()
+ co := expandCustomOriginConfig(data)
+ if *co.OriginProtocolPolicy != "http-only" {
+ t.Fatalf("Expected OriginProtocolPolicy to be http-only, got %v", *co.OriginProtocolPolicy)
+ }
+ if *co.HTTPPort != 80 {
+ t.Fatalf("Expected HTTPPort to be 80, got %v", *co.HTTPPort)
+ }
+ if *co.HTTPSPort != 443 {
+ t.Fatalf("Expected HTTPSPort to be 443, got %v", *co.HTTPSPort)
+ }
+}
+
+func TestCloudFrontStructure_flattenCustomOriginConfig(t *testing.T) {
+ in := customOriginConf()
+ co := expandCustomOriginConfig(in)
+ out := flattenCustomOriginConfig(co)
+
+ if reflect.DeepEqual(in, out) != true {
+ t.Fatalf("Expected out to be %v, got %v", in, out)
+ }
+}
+
+func TestCloudFrontStructure_expandCustomOriginConfigSSL(t *testing.T) {
+ in := customOriginSslProtocolsConf()
+ ocs := expandCustomOriginConfigSSL(in)
+ if *ocs.Quantity != 4 {
+ t.Fatalf("Expected Quantity to be 4, got %v", *ocs.Quantity)
+ }
+ if *ocs.Items[0] != "SSLv3" {
+ t.Fatalf("Expected first Item to be SSLv3, got %v", *ocs.Items[0])
+ }
+}
+
+func TestCloudFrontStructure_flattenCustomOriginConfigSSL(t *testing.T) {
+ in := customOriginSslProtocolsConf()
+ ocs := expandCustomOriginConfigSSL(in)
+ out := flattenCustomOriginConfigSSL(ocs)
+
+ if reflect.DeepEqual(in, out) != true {
+ t.Fatalf("Expected out to be %v, got %v", in, out)
+ }
+}
+
+func TestCloudFrontStructure_expandS3OriginConfig(t *testing.T) {
+ data := s3OriginConf()
+ s3o := expandS3OriginConfig(data)
+ if *s3o.OriginAccessIdentity != "origin-access-identity/cloudfront/E127EXAMPLE51Z" {
+ t.Fatalf("Expected OriginAccessIdentity to be origin-access-identity/cloudfront/E127EXAMPLE51Z, got %v", *s3o.OriginAccessIdentity)
+ }
+}
+
+func TestCloudFrontStructure_flattenS3OriginConfig(t *testing.T) {
+ in := s3OriginConf()
+ s3o := expandS3OriginConfig(in)
+ out := flattenS3OriginConfig(s3o)
+
+ if reflect.DeepEqual(in, out) != true {
+ t.Fatalf("Expected out to be %v, got %v", in, out)
+ }
+}
+
+func TestCloudFrontStructure_expandCustomErrorResponses(t *testing.T) {
+ data := customErrorResponsesConfSet()
+ ers := expandCustomErrorResponses(data)
+ if *ers.Quantity != 2 {
+ t.Fatalf("Expected Quantity to be 2, got %v", *ers.Quantity)
+ }
+ if *ers.Items[0].ResponsePagePath != "/error-pages/404.html" {
+ t.Fatalf("Expected ResponsePagePath in first Item to be /error-pages/404.html, got %v", *ers.Items[0].ResponsePagePath)
+ }
+}
+
+func TestCloudFrontStructure_flattenCustomErrorResponses(t *testing.T) {
+ in := customErrorResponsesConfSet()
+ ers := expandCustomErrorResponses(in)
+ out := flattenCustomErrorResponses(ers)
+
+ if in.Equal(out) != true {
+ t.Fatalf("Expected out to be %v, got %v", in, out)
+ }
+}
+
+func TestCloudFrontStructure_expandCustomErrorResponse(t *testing.T) {
+ data := customErrorResponsesConfFirst()
+ er := expandCustomErrorResponse(data)
+ if *er.ErrorCode != 404 {
+ t.Fatalf("Expected ErrorCode to be 404, got %v", *er.ErrorCode)
+ }
+ if *er.ErrorCachingMinTTL != 30 {
+ t.Fatalf("Expected ErrorCachingMinTTL to be 30, got %v", *er.ErrorCachingMinTTL)
+ }
+ if *er.ResponseCode != "200" {
+ t.Fatalf("Expected ResponseCode to be 200 (as string), got %v", *er.ResponseCode)
+ }
+ if *er.ResponsePagePath != "/error-pages/404.html" {
+ t.Fatalf("Expected ResponsePagePath to be /error-pages/404.html, got %v", *er.ResponsePagePath)
+ }
+}
+
+func TestCloudFrontStructure_expandCustomErrorResponse_emptyResponseCode(t *testing.T) {
+ data := customErrorResponseConfNoResponseCode()
+ er := expandCustomErrorResponse(data)
+ if *er.ResponseCode != "" {
+ t.Fatalf("Expected ResponseCode to be empty string, got %v", *er.ResponseCode)
+ }
+ if *er.ResponsePagePath != "" {
+ t.Fatalf("Expected ResponsePagePath to be empty string, got %v", *er.ResponsePagePath)
+ }
+}
+
+func TestCloudFrontStructure_flattenCustomErrorResponse(t *testing.T) {
+ in := customErrorResponsesConfFirst()
+ er := expandCustomErrorResponse(in)
+ out := flattenCustomErrorResponse(er)
+
+ if reflect.DeepEqual(in, out) != true {
+ t.Fatalf("Expected out to be %v, got %v", in, out)
+ }
+}
+
+func TestCloudFrontStructure_expandLoggingConfig(t *testing.T) {
+ data := loggingConfigConf()
+
+ lc := expandLoggingConfig(data)
+ if *lc.Enabled != true {
+ t.Fatalf("Expected Enabled to be true, got %v", *lc.Enabled)
+ }
+ if *lc.Prefix != "myprefix" {
+ t.Fatalf("Expected Prefix to be myprefix, got %v", *lc.Prefix)
+ }
+ if *lc.Bucket != "mylogs.s3.amazonaws.com" {
+ t.Fatalf("Expected Bucket to be mylogs.s3.amazonaws.com, got %v", *lc.Bucket)
+ }
+ if *lc.IncludeCookies != false {
+ t.Fatalf("Expected IncludeCookies to be false, got %v", *lc.IncludeCookies)
+ }
+}
+
+func TestCloudFrontStructure_expandLoggingConfig_nilValue(t *testing.T) {
+ lc := expandLoggingConfig(nil)
+ if *lc.Enabled != false {
+ t.Fatalf("Expected Enabled to be false, got %v", *lc.Enabled)
+ }
+ if *lc.Prefix != "" {
+ t.Fatalf("Expected Prefix to be blank, got %v", *lc.Prefix)
+ }
+ if *lc.Bucket != "" {
+ t.Fatalf("Expected Bucket to be blank, got %v", *lc.Bucket)
+ }
+ if *lc.IncludeCookies != false {
+ t.Fatalf("Expected IncludeCookies to be false, got %v", *lc.IncludeCookies)
+ }
+}
+
+func TestCloudFrontStructure_flattenLoggingConfig(t *testing.T) {
+ in := loggingConfigConf()
+ lc := expandLoggingConfig(in)
+ out := flattenLoggingConfig(lc)
+ diff := schema.NewSet(loggingConfigHash, []interface{}{in}).Difference(out)
+
+ if len(diff.List()) > 0 {
+ t.Fatalf("Expected out to be %v, got %v, diff: %v", in, out, diff)
+ }
+}
+
+func TestCloudFrontStructure_expandAliases(t *testing.T) {
+ data := aliasesConf()
+ a := expandAliases(data)
+ if *a.Quantity != 2 {
+ t.Fatalf("Expected Quantity to be 2, got %v", *a.Quantity)
+ }
+ if reflect.DeepEqual(a.Items, expandStringList(data.List())) != true {
+ t.Fatalf("Expected Items to be [example.com www.example.com], got %v", a.Items)
+ }
+}
+
+func TestCloudFrontStructure_flattenAliases(t *testing.T) {
+ in := aliasesConf()
+ a := expandAliases(in)
+ out := flattenAliases(a)
+ diff := in.Difference(out)
+
+ if len(diff.List()) > 0 {
+ t.Fatalf("Expected out to be %v, got %v, diff: %v", in, out, diff)
+ }
+}
+
+func TestCloudFrontStructure_expandRestrictions(t *testing.T) {
+ data := geoRestrictionsConf()
+ r := expandRestrictions(data)
+ if *r.GeoRestriction.RestrictionType != "whitelist" {
+ t.Fatalf("Expected GeoRestriction.RestrictionType to be whitelist, got %v", *r.GeoRestriction.RestrictionType)
+ }
+}
+
+func TestCloudFrontStructure_flattenRestrictions(t *testing.T) {
+ in := geoRestrictionsConf()
+ r := expandRestrictions(in)
+ out := flattenRestrictions(r)
+ diff := schema.NewSet(restrictionsHash, []interface{}{in}).Difference(out)
+
+ if len(diff.List()) > 0 {
+ t.Fatalf("Expected out to be %v, got %v, diff: %v", in, out, diff)
+ }
+}
+
+func TestCloudFrontStructure_expandGeoRestriction_whitelist(t *testing.T) {
+ data := geoRestrictionWhitelistConf()
+ gr := expandGeoRestriction(data)
+ if *gr.RestrictionType != "whitelist" {
+ t.Fatalf("Expected RestrictionType to be whitelist, got %v", *gr.RestrictionType)
+ }
+ if *gr.Quantity != 3 {
+ t.Fatalf("Expected Quantity to be 3, got %v", *gr.Quantity)
+ }
+ if reflect.DeepEqual(gr.Items, aws.StringSlice([]string{"CA", "GB", "US"})) != true {
+ t.Fatalf("Expected Items be [CA, GB, US], got %v", gr.Items)
+ }
+}
+
+func TestCloudFrontStructure_flattenGeoRestriction_whitelist(t *testing.T) {
+ in := geoRestrictionWhitelistConf()
+ gr := expandGeoRestriction(in)
+ out := flattenGeoRestriction(gr)
+
+ if reflect.DeepEqual(in, out) != true {
+ t.Fatalf("Expected out to be %v, got %v", in, out)
+ }
+}
+
+func TestCloudFrontStructure_expandGeoRestriction_no_items(t *testing.T) {
+ data := geoRestrictionConfNoItems()
+ gr := expandGeoRestriction(data)
+ if *gr.RestrictionType != "none" {
+ t.Fatalf("Expected RestrictionType to be none, got %v", *gr.RestrictionType)
+ }
+ if *gr.Quantity != 0 {
+ t.Fatalf("Expected Quantity to be 0, got %v", *gr.Quantity)
+ }
+ if gr.Items != nil {
+ t.Fatalf("Expected Items to not be set, got %v", gr.Items)
+ }
+}
+
+func TestCloudFrontStructure_flattenGeoRestriction_no_items(t *testing.T) {
+ in := geoRestrictionConfNoItems()
+ gr := expandGeoRestriction(in)
+ out := flattenGeoRestriction(gr)
+
+ if reflect.DeepEqual(in, out) != true {
+ t.Fatalf("Expected out to be %v, got %v", in, out)
+ }
+}
+
+func TestCloudFrontStructure_expandViewerCertificate_cloudfront_default_certificate(t *testing.T) {
+ data := viewerCertificateConfSetCloudFrontDefault()
+ vc := expandViewerCertificate(data)
+ if vc.ACMCertificateArn != nil {
+ t.Fatalf("Expected ACMCertificateArn to be unset, got %v", *vc.ACMCertificateArn)
+ }
+ if *vc.CloudFrontDefaultCertificate != true {
+ t.Fatalf("Expected CloudFrontDefaultCertificate to be true, got %v", *vc.CloudFrontDefaultCertificate)
+ }
+ if vc.IAMCertificateId != nil {
+ t.Fatalf("Expected IAMCertificateId to not be set, got %v", *vc.IAMCertificateId)
+ }
+ if vc.SSLSupportMethod != nil {
+ t.Fatalf("Expected IAMCertificateId to not be set, got %v", *vc.SSLSupportMethod)
+ }
+ if vc.MinimumProtocolVersion != nil {
+ t.Fatalf("Expected IAMCertificateId to not be set, got %v", *vc.MinimumProtocolVersion)
+ }
+}
+
+func TestCloudFrontStructure_flattenViewerCertificate_cloudfront_default_certificate(t *testing.T) {
+ in := viewerCertificateConfSetCloudFrontDefault()
+ vc := expandViewerCertificate(in)
+ out := flattenViewerCertificate(vc)
+ diff := schema.NewSet(viewerCertificateHash, []interface{}{in}).Difference(out)
+
+ if len(diff.List()) > 0 {
+ t.Fatalf("Expected out to be %v, got %v, diff: %v", in, out, diff)
+ }
+}
+
+func TestCloudFrontStructure_expandViewerCertificate_iam_certificate_id(t *testing.T) {
+ data := viewerCertificateConfSetIAM()
+ vc := expandViewerCertificate(data)
+ if vc.ACMCertificateArn != nil {
+ t.Fatalf("Expected ACMCertificateArn to be unset, got %v", *vc.ACMCertificateArn)
+ }
+ if vc.CloudFrontDefaultCertificate != nil {
+ t.Fatalf("Expected CloudFrontDefaultCertificate to be unset, got %v", *vc.CloudFrontDefaultCertificate)
+ }
+ if *vc.IAMCertificateId != "iamcert-01234567" {
+ t.Fatalf("Expected IAMCertificateId to be iamcert-01234567, got %v", *vc.IAMCertificateId)
+ }
+ if *vc.SSLSupportMethod != "vip" {
+ t.Fatalf("Expected IAMCertificateId to be vip, got %v", *vc.SSLSupportMethod)
+ }
+ if *vc.MinimumProtocolVersion != "TLSv1" {
+ t.Fatalf("Expected IAMCertificateId to be TLSv1, got %v", *vc.MinimumProtocolVersion)
+ }
+}
+
+func TestCloudFrontStructure_expandViewerCertificate_acm_certificate_arn(t *testing.T) {
+ data := viewerCertificateConfSetACM()
+ vc := expandViewerCertificate(data)
+ if *vc.ACMCertificateArn != "arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012" {
+ t.Fatalf("Expected ACMCertificateArn to be arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012, got %v", *vc.ACMCertificateArn)
+ }
+ if vc.CloudFrontDefaultCertificate != nil {
+ t.Fatalf("Expected CloudFrontDefaultCertificate to be unset, got %v", *vc.CloudFrontDefaultCertificate)
+ }
+ if vc.IAMCertificateId != nil {
+ t.Fatalf("Expected IAMCertificateId to be unset, got %v", *vc.IAMCertificateId)
+ }
+ if *vc.SSLSupportMethod != "sni-only" {
+ t.Fatalf("Expected IAMCertificateId to be sni-only, got %v", *vc.SSLSupportMethod)
+ }
+ if *vc.MinimumProtocolVersion != "TLSv1" {
+ t.Fatalf("Expected IAMCertificateId to be TLSv1, got %v", *vc.MinimumProtocolVersion)
+ }
+}
+
+func TestCloudFrontStructure_flattenViewerCertificate_iam_certificate_id(t *testing.T) {
+ in := viewerCertificateConfSetIAM()
+ vc := expandViewerCertificate(in)
+ out := flattenViewerCertificate(vc)
+ diff := schema.NewSet(viewerCertificateHash, []interface{}{in}).Difference(out)
+
+ if len(diff.List()) > 0 {
+ t.Fatalf("Expected out to be %v, got %v, diff: %v", in, out, diff)
+ }
+}
+
+func TestCloudFrontStructure_flattenViewerCertificate_acm_certificate_arn(t *testing.T) {
+ in := viewerCertificateConfSetACM()
+ vc := expandViewerCertificate(in)
+ out := flattenViewerCertificate(vc)
+ diff := schema.NewSet(viewerCertificateHash, []interface{}{in}).Difference(out)
+
+ if len(diff.List()) > 0 {
+ t.Fatalf("Expected out to be %v, got %v, diff: %v", in, out, diff)
+ }
+}
+
+func TestCloudFrontStructure_viewerCertificateHash_IAM(t *testing.T) {
+ in := viewerCertificateConfSetIAM()
+ out := viewerCertificateHash(in)
+ expected := 1157261784
+
+ if expected != out {
+ t.Fatalf("Expected %v, got %v", expected, out)
+ }
+}
+
+func TestCloudFrontStructure_viewerCertificateHash_ACM(t *testing.T) {
+ in := viewerCertificateConfSetACM()
+ out := viewerCertificateHash(in)
+ expected := 2883600425
+
+ if expected != out {
+ t.Fatalf("Expected %v, got %v", expected, out)
+ }
+}
+
+func TestCloudFrontStructure_viewerCertificateHash_default(t *testing.T) {
+ in := viewerCertificateConfSetCloudFrontDefault()
+ out := viewerCertificateHash(in)
+ expected := 69840937
+
+ if expected != out {
+ t.Fatalf("Expected %v, got %v", expected, out)
+ }
+}
diff --git a/builtin/providers/aws/config.go b/builtin/providers/aws/config.go
index 8861b260f42c..82a82e016f49 100644
--- a/builtin/providers/aws/config.go
+++ b/builtin/providers/aws/config.go
@@ -4,9 +4,7 @@ import (
"fmt"
"log"
"net/http"
- "os"
"strings"
- "time"
"github.com/hashicorp/go-cleanhttp"
"github.com/hashicorp/go-multierror"
@@ -17,14 +15,12 @@ import (
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awserr"
- awsCredentials "github.com/aws/aws-sdk-go/aws/credentials"
- "github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds"
- "github.com/aws/aws-sdk-go/aws/ec2metadata"
"github.com/aws/aws-sdk-go/aws/request"
"github.com/aws/aws-sdk-go/aws/session"
"github.com/aws/aws-sdk-go/service/apigateway"
"github.com/aws/aws-sdk-go/service/autoscaling"
"github.com/aws/aws-sdk-go/service/cloudformation"
+ "github.com/aws/aws-sdk-go/service/cloudfront"
"github.com/aws/aws-sdk-go/service/cloudtrail"
"github.com/aws/aws-sdk-go/service/cloudwatch"
"github.com/aws/aws-sdk-go/service/cloudwatchevents"
@@ -78,6 +74,7 @@ type Config struct {
type AWSClient struct {
cfconn *cloudformation.CloudFormation
+ cloudfrontconn *cloudfront.CloudFront
cloudtrailconn *cloudtrail.CloudTrail
cloudwatchconn *cloudwatch.CloudWatch
cloudwatchlogsconn *cloudwatchlogs.CloudWatchLogs
@@ -131,20 +128,23 @@ func (c *Config) Client() (interface{}, error) {
client.region = c.Region
log.Println("[INFO] Building AWS auth structure")
- creds := getCreds(c.AccessKey, c.SecretKey, c.Token, c.Profile, c.CredsFilename)
+ creds := GetCredentials(c.AccessKey, c.SecretKey, c.Token, c.Profile, c.CredsFilename)
// Call Get to check for credential provider. If nothing found, we'll get an
// error, and we can present it nicely to the user
- _, err = creds.Get()
+ cp, err := creds.Get()
if err != nil {
if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "NoCredentialProviders" {
- errs = append(errs, fmt.Errorf(`No valid credential sources found for AWS Provider.
- Please see https://terraform.io/docs/providers/aws/index.html for more information on
+ errs = append(errs, fmt.Errorf(`No valid credential sources found for AWS Provider.
+ Please see https://terraform.io/docs/providers/aws/index.html for more information on
providing credentials for the AWS Provider`))
} else {
errs = append(errs, fmt.Errorf("Error loading credentials for AWS Provider: %s", err))
}
return nil, &multierror.Error{Errors: errs}
}
+
+ log.Printf("[INFO] AWS Auth provider used: %q", cp.ProviderName)
+
awsConfig := &aws.Config{
Credentials: creds,
Region: aws.String(c.Region),
@@ -175,6 +175,7 @@ func (c *Config) Client() (interface{}, error) {
err = c.ValidateCredentials(client.iamconn)
if err != nil {
errs = append(errs, err)
+ return nil, &multierror.Error{Errors: errs}
}
// Some services exist only in us-east-1, e.g. because they manage
@@ -188,6 +189,9 @@ func (c *Config) Client() (interface{}, error) {
dynamoSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.DynamoDBEndpoint)})
client.dynamodbconn = dynamodb.New(dynamoSess)
+ log.Println("[INFO] Initializing Cloudfront connection")
+ client.cloudfrontconn = cloudfront.New(sess)
+
log.Println("[INFO] Initializing ELB connection")
awsElbSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.ElbEndpoint)})
client.elbconn = elb.New(awsElbSess)
@@ -211,7 +215,7 @@ func (c *Config) Client() (interface{}, error) {
log.Println("[INFO] Initializing Elastic Beanstalk Connection")
client.elasticbeanstalkconn = elasticbeanstalk.New(sess)
- authErr := c.ValidateAccountId(client.iamconn)
+ authErr := c.ValidateAccountId(client.iamconn, cp.ProviderName)
if authErr != nil {
errs = append(errs, authErr)
}
@@ -318,7 +322,7 @@ func (c *Config) ValidateCredentials(iamconn *iam.IAM) error {
if awsErr, ok := err.(awserr.Error); ok {
if awsErr.Code() == "AccessDenied" || awsErr.Code() == "ValidationError" {
- log.Printf("[WARN] AccessDenied Error with iam.GetUser, assuming IAM profile")
+ log.Printf("[WARN] AccessDenied Error with iam.GetUser, assuming IAM role")
// User may be an IAM instance profile, or otherwise IAM role without the
// GetUser permissions, so fail silently
return nil
@@ -334,31 +338,17 @@ func (c *Config) ValidateCredentials(iamconn *iam.IAM) error {
// ValidateAccountId returns a context-specific error if the configured account
// id is explicitly forbidden or not authorised; and nil if it is authorised.
-func (c *Config) ValidateAccountId(iamconn *iam.IAM) error {
+func (c *Config) ValidateAccountId(iamconn *iam.IAM, authProviderName string) error {
if c.AllowedAccountIds == nil && c.ForbiddenAccountIds == nil {
return nil
}
log.Printf("[INFO] Validating account ID")
-
- out, err := iamconn.GetUser(nil)
-
+ account_id, err := GetAccountId(iamconn, authProviderName)
if err != nil {
- awsErr, _ := err.(awserr.Error)
- if awsErr.Code() == "ValidationError" {
- log.Printf("[WARN] ValidationError with iam.GetUser, assuming its an IAM profile")
- // User may be an IAM instance profile, so fail silently.
- // If it is an IAM instance profile
- // validating account might be superfluous
- return nil
- } else {
- return fmt.Errorf("Failed getting account ID from IAM: %s", err)
- // return error if the account id is explicitly not authorised
- }
+ return err
}
- account_id := strings.Split(*out.User.Arn, ":")[4]
-
if c.ForbiddenAccountIds != nil {
for _, id := range c.ForbiddenAccountIds {
if id == account_id {
@@ -379,59 +369,6 @@ func (c *Config) ValidateAccountId(iamconn *iam.IAM) error {
return nil
}
-// This function is responsible for reading credentials from the
-// environment in the case that they're not explicitly specified
-// in the Terraform configuration.
-func getCreds(key, secret, token, profile, credsfile string) *awsCredentials.Credentials {
- // build a chain provider, lazy-evaulated by aws-sdk
- providers := []awsCredentials.Provider{
- &awsCredentials.StaticProvider{Value: awsCredentials.Value{
- AccessKeyID: key,
- SecretAccessKey: secret,
- SessionToken: token,
- }},
- &awsCredentials.EnvProvider{},
- &awsCredentials.SharedCredentialsProvider{
- Filename: credsfile,
- Profile: profile,
- },
- }
-
- // We only look in the EC2 metadata API if we can connect
- // to the metadata service within a reasonable amount of time
- metadataURL := os.Getenv("AWS_METADATA_URL")
- if metadataURL == "" {
- metadataURL = "http://169.254.169.254:80/latest"
- }
- c := http.Client{
- Timeout: 100 * time.Millisecond,
- }
-
- r, err := c.Get(metadataURL)
- // Flag to determine if we should add the EC2Meta data provider. Default false
- var useIAM bool
- if err == nil {
- // AWS will add a "Server: EC2ws" header value for the metadata request. We
- // check the headers for this value to ensure something else didn't just
- // happent to be listening on that IP:Port
- if r.Header["Server"] != nil && strings.Contains(r.Header["Server"][0], "EC2") {
- useIAM = true
- }
- }
-
- if useIAM {
- log.Printf("[DEBUG] EC2 Metadata service found, adding EC2 Role Credential Provider")
- providers = append(providers, &ec2rolecreds.EC2RoleProvider{
- Client: ec2metadata.New(session.New(&aws.Config{
- Endpoint: aws.String(metadataURL),
- })),
- })
- } else {
- log.Printf("[DEBUG] EC2 Metadata service not found, not adding EC2 Role Credential Provider")
- }
- return awsCredentials.NewChainCredentials(providers)
-}
-
// addTerraformVersionToUserAgent is a named handler that will add Terraform's
// version information to requests made by the AWS SDK.
var addTerraformVersionToUserAgent = request.NamedHandler{
diff --git a/builtin/providers/aws/config_test.go b/builtin/providers/aws/config_test.go
deleted file mode 100644
index 5c58a57290fb..000000000000
--- a/builtin/providers/aws/config_test.go
+++ /dev/null
@@ -1,376 +0,0 @@
-package aws
-
-import (
- "encoding/json"
- "fmt"
- "io/ioutil"
- "net/http"
- "net/http/httptest"
- "os"
- "testing"
-
- "github.com/aws/aws-sdk-go/aws/awserr"
-)
-
-func TestAWSConfig_shouldError(t *testing.T) {
- resetEnv := unsetEnv(t)
- defer resetEnv()
- cfg := Config{}
-
- c := getCreds(cfg.AccessKey, cfg.SecretKey, cfg.Token, cfg.Profile, cfg.CredsFilename)
- _, err := c.Get()
- if awsErr, ok := err.(awserr.Error); ok {
- if awsErr.Code() != "NoCredentialProviders" {
- t.Fatalf("Expected NoCredentialProviders error")
- }
- }
- if err == nil {
- t.Fatalf("Expected an error with empty env, keys, and IAM in AWS Config")
- }
-}
-
-func TestAWSConfig_shouldBeStatic(t *testing.T) {
- simple := []struct {
- Key, Secret, Token string
- }{
- {
- Key: "test",
- Secret: "secret",
- }, {
- Key: "test",
- Secret: "test",
- Token: "test",
- },
- }
-
- for _, c := range simple {
- cfg := Config{
- AccessKey: c.Key,
- SecretKey: c.Secret,
- Token: c.Token,
- }
-
- creds := getCreds(cfg.AccessKey, cfg.SecretKey, cfg.Token, cfg.Profile, cfg.CredsFilename)
- if creds == nil {
- t.Fatalf("Expected a static creds provider to be returned")
- }
- v, err := creds.Get()
- if err != nil {
- t.Fatalf("Error gettings creds: %s", err)
- }
- if v.AccessKeyID != c.Key {
- t.Fatalf("AccessKeyID mismatch, expected: (%s), got (%s)", c.Key, v.AccessKeyID)
- }
- if v.SecretAccessKey != c.Secret {
- t.Fatalf("SecretAccessKey mismatch, expected: (%s), got (%s)", c.Secret, v.SecretAccessKey)
- }
- if v.SessionToken != c.Token {
- t.Fatalf("SessionToken mismatch, expected: (%s), got (%s)", c.Token, v.SessionToken)
- }
- }
-}
-
-// TestAWSConfig_shouldIAM is designed to test the scenario of running Terraform
-// from an EC2 instance, without environment variables or manually supplied
-// credentials.
-func TestAWSConfig_shouldIAM(t *testing.T) {
- // clear AWS_* environment variables
- resetEnv := unsetEnv(t)
- defer resetEnv()
-
- // capture the test server's close method, to call after the test returns
- ts := awsEnv(t)
- defer ts()
-
- // An empty config, no key supplied
- cfg := Config{}
-
- creds := getCreds(cfg.AccessKey, cfg.SecretKey, cfg.Token, cfg.Profile, cfg.CredsFilename)
- if creds == nil {
- t.Fatalf("Expected a static creds provider to be returned")
- }
-
- v, err := creds.Get()
- if err != nil {
- t.Fatalf("Error gettings creds: %s", err)
- }
- if v.AccessKeyID != "somekey" {
- t.Fatalf("AccessKeyID mismatch, expected: (somekey), got (%s)", v.AccessKeyID)
- }
- if v.SecretAccessKey != "somesecret" {
- t.Fatalf("SecretAccessKey mismatch, expected: (somesecret), got (%s)", v.SecretAccessKey)
- }
- if v.SessionToken != "sometoken" {
- t.Fatalf("SessionToken mismatch, expected: (sometoken), got (%s)", v.SessionToken)
- }
-}
-
-// TestAWSConfig_shouldIAM is designed to test the scenario of running Terraform
-// from an EC2 instance, without environment variables or manually supplied
-// credentials.
-func TestAWSConfig_shouldIgnoreIAM(t *testing.T) {
- resetEnv := unsetEnv(t)
- defer resetEnv()
- // capture the test server's close method, to call after the test returns
- ts := awsEnv(t)
- defer ts()
- simple := []struct {
- Key, Secret, Token string
- }{
- {
- Key: "test",
- Secret: "secret",
- }, {
- Key: "test",
- Secret: "test",
- Token: "test",
- },
- }
-
- for _, c := range simple {
- cfg := Config{
- AccessKey: c.Key,
- SecretKey: c.Secret,
- Token: c.Token,
- }
-
- creds := getCreds(cfg.AccessKey, cfg.SecretKey, cfg.Token, cfg.Profile, cfg.CredsFilename)
- if creds == nil {
- t.Fatalf("Expected a static creds provider to be returned")
- }
- v, err := creds.Get()
- if err != nil {
- t.Fatalf("Error gettings creds: %s", err)
- }
- if v.AccessKeyID != c.Key {
- t.Fatalf("AccessKeyID mismatch, expected: (%s), got (%s)", c.Key, v.AccessKeyID)
- }
- if v.SecretAccessKey != c.Secret {
- t.Fatalf("SecretAccessKey mismatch, expected: (%s), got (%s)", c.Secret, v.SecretAccessKey)
- }
- if v.SessionToken != c.Token {
- t.Fatalf("SessionToken mismatch, expected: (%s), got (%s)", c.Token, v.SessionToken)
- }
- }
-}
-
-var credentialsFileContents = `[myprofile]
-aws_access_key_id = accesskey
-aws_secret_access_key = secretkey
-`
-
-func TestAWSConfig_shouldBeShared(t *testing.T) {
- file, err := ioutil.TempFile(os.TempDir(), "terraform_aws_cred")
- if err != nil {
- t.Fatalf("Error writing temporary credentials file: %s", err)
- }
- _, err = file.WriteString(credentialsFileContents)
- if err != nil {
- t.Fatalf("Error writing temporary credentials to file: %s", err)
- }
- err = file.Close()
- if err != nil {
- t.Fatalf("Error closing temporary credentials file: %s", err)
- }
-
- defer os.Remove(file.Name())
-
- resetEnv := unsetEnv(t)
- defer resetEnv()
-
- if err := os.Setenv("AWS_PROFILE", "myprofile"); err != nil {
- t.Fatalf("Error resetting env var AWS_PROFILE: %s", err)
- }
- if err := os.Setenv("AWS_SHARED_CREDENTIALS_FILE", file.Name()); err != nil {
- t.Fatalf("Error resetting env var AWS_SHARED_CREDENTIALS_FILE: %s", err)
- }
-
- creds := getCreds("", "", "", "myprofile", file.Name())
- if creds == nil {
- t.Fatalf("Expected a provider chain to be returned")
- }
- v, err := creds.Get()
- if err != nil {
- t.Fatalf("Error gettings creds: %s", err)
- }
-
- if v.AccessKeyID != "accesskey" {
- t.Fatalf("AccessKeyID mismatch, expected (%s), got (%s)", "accesskey", v.AccessKeyID)
- }
-
- if v.SecretAccessKey != "secretkey" {
- t.Fatalf("SecretAccessKey mismatch, expected (%s), got (%s)", "accesskey", v.AccessKeyID)
- }
-}
-
-func TestAWSConfig_shouldBeENV(t *testing.T) {
- // need to set the environment variables to a dummy string, as we don't know
- // what they may be at runtime without hardcoding here
- s := "some_env"
- resetEnv := setEnv(s, t)
-
- defer resetEnv()
-
- cfg := Config{}
- creds := getCreds(cfg.AccessKey, cfg.SecretKey, cfg.Token, cfg.Profile, cfg.CredsFilename)
- if creds == nil {
- t.Fatalf("Expected a static creds provider to be returned")
- }
- v, err := creds.Get()
- if err != nil {
- t.Fatalf("Error gettings creds: %s", err)
- }
- if v.AccessKeyID != s {
- t.Fatalf("AccessKeyID mismatch, expected: (%s), got (%s)", s, v.AccessKeyID)
- }
- if v.SecretAccessKey != s {
- t.Fatalf("SecretAccessKey mismatch, expected: (%s), got (%s)", s, v.SecretAccessKey)
- }
- if v.SessionToken != s {
- t.Fatalf("SessionToken mismatch, expected: (%s), got (%s)", s, v.SessionToken)
- }
-}
-
-// unsetEnv unsets enviornment variables for testing a "clean slate" with no
-// credentials in the environment
-func unsetEnv(t *testing.T) func() {
- // Grab any existing AWS keys and preserve. In some tests we'll unset these, so
- // we need to have them and restore them after
- e := getEnv()
- if err := os.Unsetenv("AWS_ACCESS_KEY_ID"); err != nil {
- t.Fatalf("Error unsetting env var AWS_ACCESS_KEY_ID: %s", err)
- }
- if err := os.Unsetenv("AWS_SECRET_ACCESS_KEY"); err != nil {
- t.Fatalf("Error unsetting env var AWS_SECRET_ACCESS_KEY: %s", err)
- }
- if err := os.Unsetenv("AWS_SESSION_TOKEN"); err != nil {
- t.Fatalf("Error unsetting env var AWS_SESSION_TOKEN: %s", err)
- }
- if err := os.Unsetenv("AWS_PROFILE"); err != nil {
- t.Fatalf("Error unsetting env var AWS_TOKEN: %s", err)
- }
- if err := os.Unsetenv("AWS_SHARED_CREDENTIALS_FILE"); err != nil {
- t.Fatalf("Error unsetting env var AWS_SHARED_CREDENTIALS_FILE: %s", err)
- }
-
- return func() {
- // re-set all the envs we unset above
- if err := os.Setenv("AWS_ACCESS_KEY_ID", e.Key); err != nil {
- t.Fatalf("Error resetting env var AWS_ACCESS_KEY_ID: %s", err)
- }
- if err := os.Setenv("AWS_SECRET_ACCESS_KEY", e.Secret); err != nil {
- t.Fatalf("Error resetting env var AWS_SECRET_ACCESS_KEY: %s", err)
- }
- if err := os.Setenv("AWS_SESSION_TOKEN", e.Token); err != nil {
- t.Fatalf("Error resetting env var AWS_SESSION_TOKEN: %s", err)
- }
- if err := os.Setenv("AWS_PROFILE", e.Profile); err != nil {
- t.Fatalf("Error resetting env var AWS_PROFILE: %s", err)
- }
- if err := os.Setenv("AWS_SHARED_CREDENTIALS_FILE", e.CredsFilename); err != nil {
- t.Fatalf("Error resetting env var AWS_SHARED_CREDENTIALS_FILE: %s", err)
- }
- }
-}
-
-func setEnv(s string, t *testing.T) func() {
- e := getEnv()
- // Set all the envs to a dummy value
- if err := os.Setenv("AWS_ACCESS_KEY_ID", s); err != nil {
- t.Fatalf("Error setting env var AWS_ACCESS_KEY_ID: %s", err)
- }
- if err := os.Setenv("AWS_SECRET_ACCESS_KEY", s); err != nil {
- t.Fatalf("Error setting env var AWS_SECRET_ACCESS_KEY: %s", err)
- }
- if err := os.Setenv("AWS_SESSION_TOKEN", s); err != nil {
- t.Fatalf("Error setting env var AWS_SESSION_TOKEN: %s", err)
- }
- if err := os.Setenv("AWS_PROFILE", s); err != nil {
- t.Fatalf("Error setting env var AWS_PROFILE: %s", err)
- }
- if err := os.Setenv("AWS_SHARED_CREDENTIALS_FILE", s); err != nil {
- t.Fatalf("Error setting env var AWS_SHARED_CREDENTIALS_FLE: %s", err)
- }
-
- return func() {
- // re-set all the envs we unset above
- if err := os.Setenv("AWS_ACCESS_KEY_ID", e.Key); err != nil {
- t.Fatalf("Error resetting env var AWS_ACCESS_KEY_ID: %s", err)
- }
- if err := os.Setenv("AWS_SECRET_ACCESS_KEY", e.Secret); err != nil {
- t.Fatalf("Error resetting env var AWS_SECRET_ACCESS_KEY: %s", err)
- }
- if err := os.Setenv("AWS_SESSION_TOKEN", e.Token); err != nil {
- t.Fatalf("Error resetting env var AWS_SESSION_TOKEN: %s", err)
- }
- if err := os.Setenv("AWS_PROFILE", e.Profile); err != nil {
- t.Fatalf("Error setting env var AWS_PROFILE: %s", err)
- }
- if err := os.Setenv("AWS_SHARED_CREDENTIALS_FILE", s); err != nil {
- t.Fatalf("Error setting env var AWS_SHARED_CREDENTIALS_FLE: %s", err)
- }
- }
-}
-
-// awsEnv establishes a httptest server to mock out the internal AWS Metadata
-// service. IAM Credentials are retrieved by the EC2RoleProvider, which makes
-// API calls to this internal URL. By replacing the server with a test server,
-// we can simulate an AWS environment
-func awsEnv(t *testing.T) func() {
- routes := routes{}
- if err := json.Unmarshal([]byte(aws_routes), &routes); err != nil {
- t.Fatalf("Failed to unmarshal JSON in AWS ENV test: %s", err)
- }
- ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
- w.Header().Set("Content-Type", "text/plain")
- w.Header().Add("Server", "MockEC2")
- for _, e := range routes.Endpoints {
- if r.RequestURI == e.Uri {
- fmt.Fprintln(w, e.Body)
- }
- }
- }))
-
- os.Setenv("AWS_METADATA_URL", ts.URL+"/latest")
- return ts.Close
-}
-
-func getEnv() *currentEnv {
- // Grab any existing AWS keys and preserve. In some tests we'll unset these, so
- // we need to have them and restore them after
- return &currentEnv{
- Key: os.Getenv("AWS_ACCESS_KEY_ID"),
- Secret: os.Getenv("AWS_SECRET_ACCESS_KEY"),
- Token: os.Getenv("AWS_SESSION_TOKEN"),
- Profile: os.Getenv("AWS_TOKEN"),
- CredsFilename: os.Getenv("AWS_SHARED_CREDENTIALS_FILE"),
- }
-}
-
-// struct to preserve the current environment
-type currentEnv struct {
- Key, Secret, Token, Profile, CredsFilename string
-}
-
-type routes struct {
- Endpoints []*endpoint `json:"endpoints"`
-}
-type endpoint struct {
- Uri string `json:"uri"`
- Body string `json:"body"`
-}
-
-const aws_routes = `
-{
- "endpoints": [
- {
- "uri": "/latest/meta-data/iam/security-credentials",
- "body": "test_role"
- },
- {
- "uri": "/latest/meta-data/iam/security-credentials/test_role",
- "body": "{\"Code\":\"Success\",\"LastUpdated\":\"2015-12-11T17:17:25Z\",\"Type\":\"AWS-HMAC\",\"AccessKeyId\":\"somekey\",\"SecretAccessKey\":\"somesecret\",\"Token\":\"sometoken\"}"
- }
- ]
-}
-`
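
The awsEnv helper above works by pointing the AWS SDK at a local httptest server through AWS_METADATA_URL, so the EC2RoleProvider reads canned credentials instead of the real instance metadata service. A minimal, self-contained sketch of that pattern (the handler responses and credential values below are illustrative, not the provider's fixtures):

package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"os"
)

// newMockMetadata answers the two endpoints the EC2RoleProvider queries:
// the role listing and that role's credential document.
func newMockMetadata() *httptest.Server {
	return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		switch r.RequestURI {
		case "/latest/meta-data/iam/security-credentials":
			fmt.Fprintln(w, "test_role")
		case "/latest/meta-data/iam/security-credentials/test_role":
			fmt.Fprintln(w, `{"Code":"Success","Type":"AWS-HMAC","AccessKeyId":"somekey","SecretAccessKey":"somesecret","Token":"sometoken"}`)
		default:
			http.NotFound(w, r)
		}
	}))
}

func main() {
	ts := newMockMetadata()
	defer ts.Close()

	// Point the SDK at the mock instead of the real 169.254.169.254 endpoint.
	os.Setenv("AWS_METADATA_URL", ts.URL+"/latest")
	fmt.Println("mock metadata service listening at", ts.URL)
}
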
diff --git a/builtin/providers/aws/opsworks_layers.go b/builtin/providers/aws/opsworks_layers.go
index 6eb6d1bddeab..7959c61a9828 100644
--- a/builtin/providers/aws/opsworks_layers.go
+++ b/builtin/providers/aws/opsworks_layers.go
@@ -109,6 +109,12 @@ func (lt *opsworksLayerType) SchemaResource() *schema.Resource {
Set: schema.HashString,
},
+ "custom_json": &schema.Schema{
+ Type: schema.TypeString,
+ StateFunc: normalizeJson,
+ Optional: true,
+ },
+
"auto_healing": &schema.Schema{
Type: schema.TypeBool,
Optional: true,
@@ -288,6 +294,14 @@ func (lt *opsworksLayerType) Read(d *schema.ResourceData, client *opsworks.OpsWo
d.Set("short_name", layer.Shortname)
}
+ if v := layer.CustomJson; v == nil {
+ if err := d.Set("custom_json", ""); err != nil {
+ return err
+ }
+ } else if err := d.Set("custom_json", normalizeJson(*v)); err != nil {
+ return err
+ }
+
lt.SetAttributeMap(d, layer.Attributes)
lt.SetLifecycleEventConfiguration(d, layer.LifecycleEventConfiguration)
lt.SetCustomRecipes(d, layer.CustomRecipes)
@@ -342,6 +356,8 @@ func (lt *opsworksLayerType) Create(d *schema.ResourceData, client *opsworks.Ops
req.Shortname = aws.String(lt.TypeName)
}
+ req.CustomJson = aws.String(d.Get("custom_json").(string))
+
log.Printf("[DEBUG] Creating OpsWorks layer: %s", d.Id())
resp, err := client.CreateLayer(req)
@@ -393,6 +409,8 @@ func (lt *opsworksLayerType) Update(d *schema.ResourceData, client *opsworks.Ops
req.Shortname = aws.String(lt.TypeName)
}
+ req.CustomJson = aws.String(d.Get("custom_json").(string))
+
log.Printf("[DEBUG] Updating OpsWorks layer: %s", d.Id())
if d.HasChange("elastic_load_balancer") {
diff --git a/builtin/providers/aws/provider.go b/builtin/providers/aws/provider.go
index 01d55a8992d9..343c5015a756 100644
--- a/builtin/providers/aws/provider.go
+++ b/builtin/providers/aws/provider.go
@@ -114,7 +114,9 @@ func Provider() terraform.ResourceProvider {
"aws_ami": resourceAwsAmi(),
"aws_ami_copy": resourceAwsAmiCopy(),
"aws_ami_from_instance": resourceAwsAmiFromInstance(),
+ "aws_api_gateway_account": resourceAwsApiGatewayAccount(),
"aws_api_gateway_api_key": resourceAwsApiGatewayApiKey(),
+ "aws_api_gateway_authorizer": resourceAwsApiGatewayAuthorizer(),
"aws_api_gateway_deployment": resourceAwsApiGatewayDeployment(),
"aws_api_gateway_integration": resourceAwsApiGatewayIntegration(),
"aws_api_gateway_integration_response": resourceAwsApiGatewayIntegrationResponse(),
@@ -129,11 +131,14 @@ func Provider() terraform.ResourceProvider {
"aws_autoscaling_policy": resourceAwsAutoscalingPolicy(),
"aws_autoscaling_schedule": resourceAwsAutoscalingSchedule(),
"aws_cloudformation_stack": resourceAwsCloudFormationStack(),
+ "aws_cloudfront_distribution": resourceAwsCloudFrontDistribution(),
+ "aws_cloudfront_origin_access_identity": resourceAwsCloudFrontOriginAccessIdentity(),
"aws_cloudtrail": resourceAwsCloudTrail(),
"aws_cloudwatch_event_rule": resourceAwsCloudWatchEventRule(),
"aws_cloudwatch_event_target": resourceAwsCloudWatchEventTarget(),
"aws_cloudwatch_log_group": resourceAwsCloudWatchLogGroup(),
"aws_cloudwatch_log_metric_filter": resourceAwsCloudWatchLogMetricFilter(),
+ "aws_cloudwatch_log_subscription_filter": resourceAwsCloudwatchLogSubscriptionFilter(),
"aws_autoscaling_lifecycle_hook": resourceAwsAutoscalingLifecycleHook(),
"aws_cloudwatch_metric_alarm": resourceAwsCloudWatchMetricAlarm(),
"aws_codedeploy_app": resourceAwsCodeDeployApp(),
@@ -197,8 +202,10 @@ func Provider() terraform.ResourceProvider {
"aws_main_route_table_association": resourceAwsMainRouteTableAssociation(),
"aws_nat_gateway": resourceAwsNatGateway(),
"aws_network_acl": resourceAwsNetworkAcl(),
+ "aws_default_network_acl": resourceAwsDefaultNetworkAcl(),
"aws_network_acl_rule": resourceAwsNetworkAclRule(),
"aws_network_interface": resourceAwsNetworkInterface(),
+ "aws_opsworks_application": resourceAwsOpsworksApplication(),
"aws_opsworks_stack": resourceAwsOpsworksStack(),
"aws_opsworks_java_app_layer": resourceAwsOpsworksJavaAppLayer(),
"aws_opsworks_haproxy_layer": resourceAwsOpsworksHaproxyLayer(),
@@ -210,6 +217,7 @@ func Provider() terraform.ResourceProvider {
"aws_opsworks_mysql_layer": resourceAwsOpsworksMysqlLayer(),
"aws_opsworks_ganglia_layer": resourceAwsOpsworksGangliaLayer(),
"aws_opsworks_custom_layer": resourceAwsOpsworksCustomLayer(),
+ "aws_opsworks_instance": resourceAwsOpsworksInstance(),
"aws_placement_group": resourceAwsPlacementGroup(),
"aws_proxy_protocol_policy": resourceAwsProxyProtocolPolicy(),
"aws_rds_cluster": resourceAwsRDSCluster(),
diff --git a/builtin/providers/aws/provider_test.go b/builtin/providers/aws/provider_test.go
index 77cd931075bf..8712e5c9ad47 100644
--- a/builtin/providers/aws/provider_test.go
+++ b/builtin/providers/aws/provider_test.go
@@ -30,11 +30,13 @@ func TestProvider_impl(t *testing.T) {
}
func testAccPreCheck(t *testing.T) {
- if v := os.Getenv("AWS_ACCESS_KEY_ID"); v == "" {
- t.Fatal("AWS_ACCESS_KEY_ID must be set for acceptance tests")
- }
- if v := os.Getenv("AWS_SECRET_ACCESS_KEY"); v == "" {
- t.Fatal("AWS_SECRET_ACCESS_KEY must be set for acceptance tests")
+ if v := os.Getenv("AWS_PROFILE"); v == "" {
+ if v := os.Getenv("AWS_ACCESS_KEY_ID"); v == "" {
+ t.Fatal("AWS_ACCESS_KEY_ID must be set for acceptance tests")
+ }
+ if v := os.Getenv("AWS_SECRET_ACCESS_KEY"); v == "" {
+ t.Fatal("AWS_SECRET_ACCESS_KEY must be set for acceptance tests")
+ }
}
if v := os.Getenv("AWS_DEFAULT_REGION"); v == "" {
log.Println("[INFO] Test: Using us-west-2 as test region")
diff --git a/builtin/providers/aws/resource_aws_api_gateway_account.go b/builtin/providers/aws/resource_aws_api_gateway_account.go
new file mode 100644
index 000000000000..2f562e63b584
--- /dev/null
+++ b/builtin/providers/aws/resource_aws_api_gateway_account.go
@@ -0,0 +1,124 @@
+package aws
+
+import (
+ "fmt"
+ "log"
+ "time"
+
+ "github.com/aws/aws-sdk-go/aws"
+ "github.com/aws/aws-sdk-go/aws/awserr"
+ "github.com/aws/aws-sdk-go/service/apigateway"
+ "github.com/hashicorp/terraform/helper/resource"
+ "github.com/hashicorp/terraform/helper/schema"
+)
+
+func resourceAwsApiGatewayAccount() *schema.Resource {
+ return &schema.Resource{
+ Create: resourceAwsApiGatewayAccountUpdate,
+ Read: resourceAwsApiGatewayAccountRead,
+ Update: resourceAwsApiGatewayAccountUpdate,
+ Delete: resourceAwsApiGatewayAccountDelete,
+
+ Schema: map[string]*schema.Schema{
+ "cloudwatch_role_arn": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ },
+ "throttle_settings": &schema.Schema{
+ Type: schema.TypeList,
+ Computed: true,
+ MaxItems: 1,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "burst_limit": &schema.Schema{
+ Type: schema.TypeInt,
+ Computed: true,
+ },
+ "rate_limit": &schema.Schema{
+ Type: schema.TypeFloat,
+ Computed: true,
+ },
+ },
+ },
+ },
+ },
+ }
+}
+
+func resourceAwsApiGatewayAccountRead(d *schema.ResourceData, meta interface{}) error {
+ conn := meta.(*AWSClient).apigateway
+
+ log.Printf("[INFO] Reading API Gateway Account %s", d.Id())
+ account, err := conn.GetAccount(&apigateway.GetAccountInput{})
+ if err != nil {
+ return err
+ }
+
+ log.Printf("[DEBUG] Received API Gateway Account: %s", account)
+
+ if _, ok := d.GetOk("cloudwatch_role_arn"); ok {
+ // CloudwatchRoleArn cannot be empty nor made empty via API
+ // This resource can, however, be useful without defining cloudwatch_role_arn
+ // (e.g. for referencing throttle_settings)
+ d.Set("cloudwatch_role_arn", account.CloudwatchRoleArn)
+ }
+ d.Set("throttle_settings", flattenApiGatewayThrottleSettings(account.ThrottleSettings))
+
+ return nil
+}
+
+func resourceAwsApiGatewayAccountUpdate(d *schema.ResourceData, meta interface{}) error {
+ conn := meta.(*AWSClient).apigateway
+
+ input := apigateway.UpdateAccountInput{}
+ operations := make([]*apigateway.PatchOperation, 0)
+
+ if d.HasChange("cloudwatch_role_arn") {
+ arn := d.Get("cloudwatch_role_arn").(string)
+ if len(arn) > 0 {
+ // Unfortunately the AWS API doesn't allow empty ARNs,
+ // even though that's the default setting for new AWS accounts
+ // BadRequestException: The role ARN is not well formed
+ operations = append(operations, &apigateway.PatchOperation{
+ Op: aws.String("replace"),
+ Path: aws.String("/cloudwatchRoleArn"),
+ Value: aws.String(arn),
+ })
+ }
+ }
+ input.PatchOperations = operations
+
+ log.Printf("[INFO] Updating API Gateway Account: %s", input)
+
+ // Retry due to eventual consistency of IAM
+ expectedErrMsg := "The role ARN does not have required permissions set to API Gateway"
+ var out *apigateway.Account
+ var err error
+ err = resource.Retry(2*time.Minute, func() *resource.RetryError {
+ out, err = conn.UpdateAccount(&input)
+
+ if err != nil {
+ if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "BadRequestException" &&
+ awsErr.Message() == expectedErrMsg {
+ log.Printf("[DEBUG] Retrying API Gateway Account update: %s", awsErr)
+ return resource.RetryableError(err)
+ }
+ return resource.NonRetryableError(err)
+ }
+
+ return nil
+ })
+ if err != nil {
+ return fmt.Errorf("Updating API Gateway Account failed: %s", err)
+ }
+ log.Printf("[DEBUG] API Gateway Account updated: %s", out)
+
+ d.SetId("api-gateway-account")
+ return resourceAwsApiGatewayAccountRead(d, meta)
+}
+
+func resourceAwsApiGatewayAccountDelete(d *schema.ResourceData, meta interface{}) error {
+ // There is no API for "deleting" account or resetting it to "default" settings
+ d.SetId("")
+ return nil
+}
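
The update above wraps UpdateAccount in resource.Retry because a freshly created IAM role can take a moment to become visible to API Gateway, and only that specific BadRequestException is treated as retryable. The same retry-on-transient-error shape, reduced to the essentials (the error message matched here is a stand-in, not the real API Gateway text):

package main

import (
	"errors"
	"log"
	"strings"
	"time"

	"github.com/hashicorp/terraform/helper/resource"
)

// updateWithRetry retries a call whose failure is known to be transient
// (eventual consistency) and gives up immediately on anything else.
func updateWithRetry(call func() error) error {
	return resource.Retry(2*time.Minute, func() *resource.RetryError {
		err := call()
		if err == nil {
			return nil
		}
		// Assumed transient marker; the real code matches an awserr code and message.
		if strings.Contains(err.Error(), "not yet propagated") {
			log.Printf("[DEBUG] transient error, retrying: %s", err)
			return resource.RetryableError(err)
		}
		return resource.NonRetryableError(err)
	})
}

func main() {
	attempts := 0
	_ = updateWithRetry(func() error {
		attempts++
		if attempts < 3 {
			return errors.New("role not yet propagated")
		}
		return nil
	})
	log.Printf("succeeded after %d attempts", attempts)
}
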
diff --git a/builtin/providers/aws/resource_aws_api_gateway_account_test.go b/builtin/providers/aws/resource_aws_api_gateway_account_test.go
new file mode 100644
index 000000000000..c50339f7edb7
--- /dev/null
+++ b/builtin/providers/aws/resource_aws_api_gateway_account_test.go
@@ -0,0 +1,205 @@
+package aws
+
+import (
+ "fmt"
+ "regexp"
+ "testing"
+
+ "github.com/aws/aws-sdk-go/service/apigateway"
+ "github.com/hashicorp/terraform/helper/resource"
+ "github.com/hashicorp/terraform/terraform"
+)
+
+func TestAccAWSAPIGatewayAccount_basic(t *testing.T) {
+ var conf apigateway.Account
+
+ expectedRoleArn_first := regexp.MustCompile("[0-9]+")
+ expectedRoleArn_second := regexp.MustCompile("[0-9]+")
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSAPIGatewayAccountDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: testAccAWSAPIGatewayAccountConfig_updated,
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckAWSAPIGatewayAccountExists("aws_api_gateway_account.test", &conf),
+ testAccCheckAWSAPIGatewayAccountCloudwatchRoleArn(&conf, expectedRoleArn_first),
+ resource.TestMatchResourceAttr("aws_api_gateway_account.test", "cloudwatch_role_arn", expectedRoleArn_first),
+ ),
+ },
+ resource.TestStep{
+ Config: testAccAWSAPIGatewayAccountConfig_updated2,
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckAWSAPIGatewayAccountExists("aws_api_gateway_account.test", &conf),
+ testAccCheckAWSAPIGatewayAccountCloudwatchRoleArn(&conf, expectedRoleArn_second),
+ resource.TestMatchResourceAttr("aws_api_gateway_account.test", "cloudwatch_role_arn", expectedRoleArn_second),
+ ),
+ },
+ resource.TestStep{
+ Config: testAccAWSAPIGatewayAccountConfig_empty,
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckAWSAPIGatewayAccountExists("aws_api_gateway_account.test", &conf),
+ testAccCheckAWSAPIGatewayAccountCloudwatchRoleArn(&conf, expectedRoleArn_second),
+ ),
+ },
+ },
+ })
+}
+
+func testAccCheckAWSAPIGatewayAccountCloudwatchRoleArn(conf *apigateway.Account, expectedArn *regexp.Regexp) resource.TestCheckFunc {
+ return func(s *terraform.State) error {
+ if expectedArn == nil && conf.CloudwatchRoleArn == nil {
+ return nil
+ }
+ if expectedArn == nil && conf.CloudwatchRoleArn != nil {
+ return fmt.Errorf("Expected empty CloudwatchRoleArn, given: %q", *conf.CloudwatchRoleArn)
+ }
+ if expectedArn != nil && conf.CloudwatchRoleArn == nil {
+ return fmt.Errorf("Empty CloudwatchRoleArn, expected: %q", expectedArn)
+ }
+ if !expectedArn.MatchString(*conf.CloudwatchRoleArn) {
+ return fmt.Errorf("CloudwatchRoleArn didn't match. Expected: %q, Given: %q", expectedArn, *conf.CloudwatchRoleArn)
+ }
+ return nil
+ }
+}
+
+func testAccCheckAWSAPIGatewayAccountExists(n string, res *apigateway.Account) resource.TestCheckFunc {
+ return func(s *terraform.State) error {
+ rs, ok := s.RootModule().Resources[n]
+ if !ok {
+ return fmt.Errorf("Not found: %s", n)
+ }
+
+ if rs.Primary.ID == "" {
+ return fmt.Errorf("No API Gateway Account ID is set")
+ }
+
+ conn := testAccProvider.Meta().(*AWSClient).apigateway
+
+ req := &apigateway.GetAccountInput{}
+ describe, err := conn.GetAccount(req)
+ if err != nil {
+ return err
+ }
+ if describe == nil {
+ return fmt.Errorf("Got nil account ?!")
+ }
+
+ *res = *describe
+
+ return nil
+ }
+}
+
+func testAccCheckAWSAPIGatewayAccountDestroy(s *terraform.State) error {
+ // Intentionally noop
+ // as there is no API method for deleting or resetting account settings
+ return nil
+}
+
+const testAccAWSAPIGatewayAccountConfig_empty = `
+resource "aws_api_gateway_account" "test" {
+}
+`
+
+const testAccAWSAPIGatewayAccountConfig_updated = `
+resource "aws_api_gateway_account" "test" {
+ cloudwatch_role_arn = "${aws_iam_role.cloudwatch.arn}"
+}
+
+resource "aws_iam_role" "cloudwatch" {
+ name = "api_gateway_cloudwatch_global"
+ assume_role_policy = < 0 {
+ //
+ // NO-OP
+ //
+ // Subnets *must* belong to a Network ACL. Subnets are not "removed" from
+ // Network ACLs, instead their association is replaced. In a normal
+ // Network ACL, any removal of a Subnet is done by replacing the
+ // Subnet/ACL association with an association between the Subnet and the
+ // Default Network ACL. Because we're managing the default here, we cannot
+ // do that, so we simply log a NO-OP. In order to remove the Subnet here,
+ // it must be destroyed, or assigned to a different Network ACL. Those
+ // operations are not handled here
+ log.Printf("[WARN] Cannot remove subnets from the Default Network ACL. They must be re-assigned or destroyed")
+ }
+
+ if len(add) > 0 {
+ for _, a := range add {
+ association, err := findNetworkAclAssociation(a.(string), conn)
+ if err != nil {
+ return fmt.Errorf("Failed to find acl association: acl %s with subnet %s: %s", d.Id(), a, err)
+ }
+ log.Printf("[DEBUG] Updating Network Association for Default Network ACL (%s) and Subnet (%s)", d.Id(), a.(string))
+ _, err = conn.ReplaceNetworkAclAssociation(&ec2.ReplaceNetworkAclAssociationInput{
+ AssociationId: association.NetworkAclAssociationId,
+ NetworkAclId: aws.String(d.Id()),
+ })
+ if err != nil {
+ return err
+ }
+ }
+ }
+ }
+
+ if err := setTags(conn, d); err != nil {
+ return err
+ } else {
+ d.SetPartial("tags")
+ }
+
+ d.Partial(false)
+ // Re-use the existing Network ACL resource's READ method
+ return resourceAwsNetworkAclRead(d, meta)
+}
+
+func resourceAwsDefaultNetworkAclDelete(d *schema.ResourceData, meta interface{}) error {
+ log.Printf("[WARN] Cannot destroy Default Network ACL. Terraform will remove this resource from the state file, however resources may remain.")
+ d.SetId("")
+ return nil
+}
+
+// revokeAllNetworkACLEntries revoke all ingress and egress rules that the Default
+// Network ACL currently has
+func revokeAllNetworkACLEntries(netaclId string, meta interface{}) error {
+ conn := meta.(*AWSClient).ec2conn
+
+ resp, err := conn.DescribeNetworkAcls(&ec2.DescribeNetworkAclsInput{
+ NetworkAclIds: []*string{aws.String(netaclId)},
+ })
+
+ if err != nil {
+ log.Printf("[DEBUG] Error looking up Network ACL: %s", err)
+ return err
+ }
+
+ if resp == nil {
+ return fmt.Errorf("[ERR] Error looking up Default Network ACL Entries: No results")
+ }
+
+ networkAcl := resp.NetworkAcls[0]
+ for _, e := range networkAcl.Entries {
+ // Skip the default rules added by AWS. They can be neither
+ // configured nor deleted by users. See http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html#default-network-acl
+ if *e.RuleNumber == awsDefaultAclRuleNumber {
+ continue
+ }
+
+ // track if this is an egress or ingress rule, for logging purposes
+ rt := "ingress"
+ if *e.Egress == true {
+ rt = "egress"
+ }
+
+ log.Printf("[DEBUG] Destroying Network ACL (%s) Entry number (%d)", rt, int(*e.RuleNumber))
+ _, err := conn.DeleteNetworkAclEntry(&ec2.DeleteNetworkAclEntryInput{
+ NetworkAclId: aws.String(netaclId),
+ RuleNumber: e.RuleNumber,
+ Egress: e.Egress,
+ })
+ if err != nil {
+ return fmt.Errorf("Error deleting entry (%s): %s", e, err)
+ }
+ }
+
+ return nil
+}
diff --git a/builtin/providers/aws/resource_aws_default_network_acl_test.go b/builtin/providers/aws/resource_aws_default_network_acl_test.go
new file mode 100644
index 000000000000..628943634ba0
--- /dev/null
+++ b/builtin/providers/aws/resource_aws_default_network_acl_test.go
@@ -0,0 +1,428 @@
+package aws
+
+import (
+ "fmt"
+ "testing"
+
+ "github.com/aws/aws-sdk-go/aws"
+ "github.com/aws/aws-sdk-go/service/ec2"
+ "github.com/hashicorp/terraform/helper/resource"
+ "github.com/hashicorp/terraform/terraform"
+)
+
+var defaultEgressAcl = &ec2.NetworkAclEntry{
+ CidrBlock: aws.String("0.0.0.0/0"),
+ Egress: aws.Bool(true),
+ Protocol: aws.String("-1"),
+ RuleAction: aws.String("allow"),
+ RuleNumber: aws.Int64(100),
+}
+var defaultIngressAcl = &ec2.NetworkAclEntry{
+ CidrBlock: aws.String("0.0.0.0/0"),
+ Egress: aws.Bool(false),
+ Protocol: aws.String("-1"),
+ RuleAction: aws.String("allow"),
+ RuleNumber: aws.Int64(100),
+}
+
+func TestAccAWSDefaultNetworkAcl_basic(t *testing.T) {
+ var networkAcl ec2.NetworkAcl
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSDefaultNetworkAclDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: testAccAWSDefaultNetworkConfig_basic,
+ Check: resource.ComposeTestCheckFunc(
+ testAccGetWSDefaultNetworkAcl("aws_default_network_acl.default", &networkAcl),
+ testAccCheckAWSDefaultACLAttributes(&networkAcl, []*ec2.NetworkAclEntry{}, 0),
+ ),
+ },
+ },
+ })
+}
+
+func TestAccAWSDefaultNetworkAcl_deny_ingress(t *testing.T) {
+ // TestAccAWSDefaultNetworkAcl_deny_ingress will deny all Ingress rules, but
+ // not Egress. We then expect there to be 3 rules, 2 AWS defaults and 1
+ // additional Egress.
+ var networkAcl ec2.NetworkAcl
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSDefaultNetworkAclDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: testAccAWSDefaultNetworkConfig_deny_ingress,
+ Check: resource.ComposeTestCheckFunc(
+ testAccGetWSDefaultNetworkAcl("aws_default_network_acl.default", &networkAcl),
+ testAccCheckAWSDefaultACLAttributes(&networkAcl, []*ec2.NetworkAclEntry{defaultEgressAcl}, 0),
+ ),
+ },
+ },
+ })
+}
+
+func TestAccAWSDefaultNetworkAcl_SubnetRemoval(t *testing.T) {
+ var networkAcl ec2.NetworkAcl
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSDefaultNetworkAclDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: testAccAWSDefaultNetworkConfig_Subnets,
+ Check: resource.ComposeTestCheckFunc(
+ testAccGetWSDefaultNetworkAcl("aws_default_network_acl.default", &networkAcl),
+ testAccCheckAWSDefaultACLAttributes(&networkAcl, []*ec2.NetworkAclEntry{}, 2),
+ ),
+ },
+
+ // Here the Subnets have been removed from the Default Network ACL Config,
+ // but have not been reassigned. The result is that the Subnets are still
+ // there, and we have a non-empty plan
+ resource.TestStep{
+ Config: testAccAWSDefaultNetworkConfig_Subnets_remove,
+ Check: resource.ComposeTestCheckFunc(
+ testAccGetWSDefaultNetworkAcl("aws_default_network_acl.default", &networkAcl),
+ testAccCheckAWSDefaultACLAttributes(&networkAcl, []*ec2.NetworkAclEntry{}, 2),
+ ),
+ ExpectNonEmptyPlan: true,
+ },
+ },
+ })
+}
+
+func TestAccAWSDefaultNetworkAcl_SubnetReassign(t *testing.T) {
+ var networkAcl ec2.NetworkAcl
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSDefaultNetworkAclDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: testAccAWSDefaultNetworkConfig_Subnets,
+ Check: resource.ComposeTestCheckFunc(
+ testAccGetWSDefaultNetworkAcl("aws_default_network_acl.default", &networkAcl),
+ testAccCheckAWSDefaultACLAttributes(&networkAcl, []*ec2.NetworkAclEntry{}, 2),
+ ),
+ },
+
+ // Here we've reassigned the subnets to a different ACL.
+ // Without any other association between the `aws_network_acl` and
+ // `aws_default_network_acl` resources, we cannot guarantee that the
+ // reassignment of the two subnets to the `aws_network_acl` will happen
+ // before the update/read on the `aws_default_network_acl` resource.
+ // Because of this, there could be a non-empty plan if a READ is done on
+ // the default before the reassignment occurs on the other resource.
+ //
+ // For the sake of testing, here we introduce a depends_on attribute from
+ // the default resource to the other acl resource, to ensure the latter's
+ // update occurs first, and the former's READ will correctly read zero
+ // subnets
+ resource.TestStep{
+ Config: testAccAWSDefaultNetworkConfig_Subnets_move,
+ Check: resource.ComposeTestCheckFunc(
+ testAccGetWSDefaultNetworkAcl("aws_default_network_acl.default", &networkAcl),
+ testAccCheckAWSDefaultACLAttributes(&networkAcl, []*ec2.NetworkAclEntry{}, 0),
+ ),
+ },
+ },
+ })
+}
+
+func testAccCheckAWSDefaultNetworkAclDestroy(s *terraform.State) error {
+ // We can't destroy this resource; it comes and goes with the VPC itself.
+ return nil
+}
+
+func testAccCheckAWSDefaultACLAttributes(acl *ec2.NetworkAcl, rules []*ec2.NetworkAclEntry, subnetCount int) resource.TestCheckFunc {
+ return func(s *terraform.State) error {
+
+ aclEntriesCount := len(acl.Entries)
+ ruleCount := len(rules)
+
+ // Default ACL has 2 hidden rules we can't do anything about
+ ruleCount = ruleCount + 2
+
+ if ruleCount != aclEntriesCount {
+ return fmt.Errorf("Expected (%d) Rules, got (%d)", ruleCount, aclEntriesCount)
+ }
+
+ if len(acl.Associations) != subnetCount {
+ return fmt.Errorf("Expected (%d) Subnets, got (%d)", subnetCount, len(acl.Associations))
+ }
+
+ return nil
+ }
+}
+
+func testAccGetWSDefaultNetworkAcl(n string, networkAcl *ec2.NetworkAcl) resource.TestCheckFunc {
+ return func(s *terraform.State) error {
+ rs, ok := s.RootModule().Resources[n]
+ if !ok {
+ return fmt.Errorf("Not found: %s", n)
+ }
+
+ if rs.Primary.ID == "" {
+ return fmt.Errorf("No Network ACL is set")
+ }
+ conn := testAccProvider.Meta().(*AWSClient).ec2conn
+
+ resp, err := conn.DescribeNetworkAcls(&ec2.DescribeNetworkAclsInput{
+ NetworkAclIds: []*string{aws.String(rs.Primary.ID)},
+ })
+ if err != nil {
+ return err
+ }
+
+ if len(resp.NetworkAcls) > 0 && *resp.NetworkAcls[0].NetworkAclId == rs.Primary.ID {
+ *networkAcl = *resp.NetworkAcls[0]
+ return nil
+ }
+
+ return fmt.Errorf("Network Acls not found")
+ }
+}
+
+const testAccAWSDefaultNetworkConfig_basic = `
+resource "aws_vpc" "tftestvpc" {
+ cidr_block = "10.1.0.0/16"
+
+ tags {
+ Name = "TestAccAWSDefaultNetworkAcl_basic"
+ }
+}
+
+resource "aws_default_network_acl" "default" {
+ default_network_acl_id = "${aws_vpc.tftestvpc.default_network_acl_id}"
+
+ tags {
+ Name = "TestAccAWSDefaultNetworkAcl_basic"
+ }
+}
+`
+
+const testAccAWSDefaultNetworkConfig_basicDefaultRules = `
+resource "aws_vpc" "tftestvpc" {
+ cidr_block = "10.1.0.0/16"
+
+ tags {
+ Name = "TestAccAWSDefaultNetworkAcl_basic"
+ }
+}
+
+resource "aws_default_network_acl" "default" {
+ default_network_acl_id = "${aws_vpc.tftestvpc.default_network_acl_id}"
+
+ ingress {
+ protocol = -1
+ rule_no = 100
+ action = "allow"
+ cidr_block = "0.0.0.0/0"
+ from_port = 0
+ to_port = 0
+ }
+
+ egress {
+ protocol = -1
+ rule_no = 100
+ action = "allow"
+ cidr_block = "0.0.0.0/0"
+ from_port = 0
+ to_port = 0
+ }
+
+ tags {
+ Name = "TestAccAWSDefaultNetworkAcl_basic"
+ }
+}
+`
+
+const testAccAWSDefaultNetworkConfig_deny = `
+resource "aws_vpc" "tftestvpc" {
+ cidr_block = "10.1.0.0/16"
+
+ tags {
+ Name = "TestAccAWSDefaultNetworkAcl_basic"
+ }
+}
+
+resource "aws_default_network_acl" "default" {
+ default_network_acl_id = "${aws_vpc.tftestvpc.default_network_acl_id}"
+
+ tags {
+ Name = "TestAccAWSDefaultNetworkAcl_basic"
+ }
+}
+`
+
+const testAccAWSDefaultNetworkConfig_deny_ingress = `
+resource "aws_vpc" "tftestvpc" {
+ cidr_block = "10.1.0.0/16"
+
+ tags {
+ Name = "TestAccAWSDefaultNetworkAcl_basic"
+ }
+}
+
+resource "aws_default_network_acl" "default" {
+ default_network_acl_id = "${aws_vpc.tftestvpc.default_network_acl_id}"
+
+ egress {
+ protocol = -1
+ rule_no = 100
+ action = "allow"
+ cidr_block = "0.0.0.0/0"
+ from_port = 0
+ to_port = 0
+ }
+
+ tags {
+ Name = "TestAccAWSDefaultNetworkAcl_basic"
+ }
+}
+`
+
+const testAccAWSDefaultNetworkConfig_Subnets = `
+resource "aws_vpc" "foo" {
+ cidr_block = "10.1.0.0/16"
+
+ tags {
+ Name = "TestAccAWSDefaultNetworkAcl_SubnetRemoval"
+ }
+}
+
+resource "aws_subnet" "one" {
+ cidr_block = "10.1.111.0/24"
+ vpc_id = "${aws_vpc.foo.id}"
+
+ tags {
+ Name = "TestAccAWSDefaultNetworkAcl_SubnetRemoval"
+ }
+}
+
+resource "aws_subnet" "two" {
+ cidr_block = "10.1.1.0/24"
+ vpc_id = "${aws_vpc.foo.id}"
+
+ tags {
+ Name = "TestAccAWSDefaultNetworkAcl_SubnetRemoval"
+ }
+}
+
+resource "aws_network_acl" "bar" {
+ vpc_id = "${aws_vpc.foo.id}"
+
+ tags {
+ Name = "TestAccAWSDefaultNetworkAcl_SubnetRemoval"
+ }
+}
+
+resource "aws_default_network_acl" "default" {
+ default_network_acl_id = "${aws_vpc.foo.default_network_acl_id}"
+
+ subnet_ids = ["${aws_subnet.one.id}", "${aws_subnet.two.id}"]
+
+ tags {
+ Name = "TestAccAWSDefaultNetworkAcl_SubnetRemoval"
+ }
+}
+`
+
+const testAccAWSDefaultNetworkConfig_Subnets_remove = `
+resource "aws_vpc" "foo" {
+ cidr_block = "10.1.0.0/16"
+
+ tags {
+ Name = "TestAccAWSDefaultNetworkAcl_SubnetRemoval"
+ }
+}
+
+resource "aws_subnet" "one" {
+ cidr_block = "10.1.111.0/24"
+ vpc_id = "${aws_vpc.foo.id}"
+
+ tags {
+ Name = "TestAccAWSDefaultNetworkAcl_SubnetRemoval"
+ }
+}
+
+resource "aws_subnet" "two" {
+ cidr_block = "10.1.1.0/24"
+ vpc_id = "${aws_vpc.foo.id}"
+
+ tags {
+ Name = "TestAccAWSDefaultNetworkAcl_SubnetRemoval"
+ }
+}
+
+resource "aws_network_acl" "bar" {
+ vpc_id = "${aws_vpc.foo.id}"
+
+ tags {
+ Name = "TestAccAWSDefaultNetworkAcl_SubnetRemoval"
+ }
+}
+
+resource "aws_default_network_acl" "default" {
+ default_network_acl_id = "${aws_vpc.foo.default_network_acl_id}"
+
+ tags {
+ Name = "TestAccAWSDefaultNetworkAcl_SubnetRemoval"
+ }
+}
+`
+
+const testAccAWSDefaultNetworkConfig_Subnets_move = `
+resource "aws_vpc" "foo" {
+ cidr_block = "10.1.0.0/16"
+
+ tags {
+ Name = "TestAccAWSDefaultNetworkAcl_SubnetRemoval"
+ }
+}
+
+resource "aws_subnet" "one" {
+ cidr_block = "10.1.111.0/24"
+ vpc_id = "${aws_vpc.foo.id}"
+
+ tags {
+ Name = "TestAccAWSDefaultNetworkAcl_SubnetRemoval"
+ }
+}
+
+resource "aws_subnet" "two" {
+ cidr_block = "10.1.1.0/24"
+ vpc_id = "${aws_vpc.foo.id}"
+
+ tags {
+ Name = "TestAccAWSDefaultNetworkAcl_SubnetRemoval"
+ }
+}
+
+resource "aws_network_acl" "bar" {
+ vpc_id = "${aws_vpc.foo.id}"
+
+ subnet_ids = ["${aws_subnet.one.id}", "${aws_subnet.two.id}"]
+
+ tags {
+ Name = "TestAccAWSDefaultNetworkAcl_SubnetRemoval"
+ }
+}
+
+resource "aws_default_network_acl" "default" {
+ default_network_acl_id = "${aws_vpc.foo.default_network_acl_id}"
+
+ depends_on = ["aws_network_acl.bar"]
+
+ tags {
+ Name = "TestAccAWSDefaultNetworkAcl_SubnetRemoval"
+ }
+}
+`
diff --git a/builtin/providers/aws/resource_aws_directory_service_directory.go b/builtin/providers/aws/resource_aws_directory_service_directory.go
index 4f241f48a29c..711617a60e1f 100644
--- a/builtin/providers/aws/resource_aws_directory_service_directory.go
+++ b/builtin/providers/aws/resource_aws_directory_service_directory.go
@@ -401,6 +401,13 @@ func resourceAwsDirectoryServiceDirectoryRead(d *schema.ResourceData, meta inter
out, err := dsconn.DescribeDirectories(&input)
if err != nil {
return err
+
+ }
+
+ if len(out.DirectoryDescriptions) == 0 {
+ log.Printf("[WARN] Directory %s not found", d.Id())
+ d.SetId("")
+ return nil
}
dir := out.DirectoryDescriptions[0]
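
The added length check follows the usual Terraform Read convention: when the remote object has disappeared, clear the ID and return nil so the resource drops out of state on refresh instead of erroring. A small standalone sketch of that convention (the lookup function and the tiny state type are illustrative stand-ins for ResourceData):

package main

import (
	"fmt"
	"log"
)

// state mimics the small part of Terraform's ResourceData this pattern needs.
type state struct{ id string }

func (s *state) Id() string      { return s.id }
func (s *state) SetId(id string) { s.id = id }

// readWithDriftHandling: if the remote object is gone, clear the ID and
// return nil so the resource is removed from state rather than failing.
func readWithDriftHandling(s *state, lookup func(id string) (bool, error)) error {
	found, err := lookup(s.Id())
	if err != nil {
		return err
	}
	if !found {
		log.Printf("[WARN] Directory %s not found, removing from state", s.Id())
		s.SetId("")
		return nil
	}
	// ... set attributes from the found object here ...
	return nil
}

func main() {
	s := &state{id: "d-1234567890"}
	_ = readWithDriftHandling(s, func(string) (bool, error) { return false, nil })
	fmt.Printf("id after read: %q\n", s.Id()) // ""
}
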
diff --git a/builtin/providers/aws/resource_aws_ebs_volume_test.go b/builtin/providers/aws/resource_aws_ebs_volume_test.go
index 940c8157cabf..f161e32bbf4f 100644
--- a/builtin/providers/aws/resource_aws_ebs_volume_test.go
+++ b/builtin/providers/aws/resource_aws_ebs_volume_test.go
@@ -13,8 +13,9 @@ import (
func TestAccAWSEBSVolume_basic(t *testing.T) {
var v ec2.Volume
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_ebs_volume.test",
+ Providers: testAccProviders,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAwsEbsVolumeConfig,
@@ -45,8 +46,9 @@ func TestAccAWSEBSVolume_NoIops(t *testing.T) {
func TestAccAWSEBSVolume_withTags(t *testing.T) {
var v ec2.Volume
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_ebs_volume.tags_test",
+ Providers: testAccProviders,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAwsEbsVolumeConfigWithTags,
diff --git a/builtin/providers/aws/resource_aws_eip.go b/builtin/providers/aws/resource_aws_eip.go
index ee1aec8bc80a..00033289e1a9 100644
--- a/builtin/providers/aws/resource_aws_eip.go
+++ b/builtin/providers/aws/resource_aws_eip.go
@@ -158,6 +158,14 @@ func resourceAwsEipRead(d *schema.ResourceData, meta interface{}) error {
d.Set("private_ip", address.PrivateIpAddress)
d.Set("public_ip", address.PublicIp)
+ // On import (domain never set, which it must've been if we created),
+ // set the 'vpc' attribute depending on if we're in a VPC.
+ if _, ok := d.GetOk("domain"); !ok {
+ d.Set("vpc", *address.Domain == "vpc")
+ }
+
+ d.Set("domain", address.Domain)
+
return nil
}
diff --git a/builtin/providers/aws/resource_aws_eip_test.go b/builtin/providers/aws/resource_aws_eip_test.go
index ef3e8113bd5a..9c0064e074db 100644
--- a/builtin/providers/aws/resource_aws_eip_test.go
+++ b/builtin/providers/aws/resource_aws_eip_test.go
@@ -16,9 +16,10 @@ func TestAccAWSEIP_basic(t *testing.T) {
var conf ec2.Address
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSEIPDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_eip.bar",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSEIPDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSEIPConfig,
@@ -35,9 +36,10 @@ func TestAccAWSEIP_instance(t *testing.T) {
var conf ec2.Address
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSEIPDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_eip.bar",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSEIPDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSEIPInstanceConfig,
@@ -62,9 +64,10 @@ func TestAccAWSEIP_network_interface(t *testing.T) {
var conf ec2.Address
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSEIPDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_eip.bar",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSEIPDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSEIPNetworkInterfaceConfig,
@@ -82,9 +85,10 @@ func TestAccAWSEIP_twoEIPsOneNetworkInterface(t *testing.T) {
var one, two ec2.Address
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSEIPDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_eip.one",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSEIPDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSEIPMultiNetworkInterfaceConfig,
diff --git a/builtin/providers/aws/resource_aws_elastic_beanstalk_configuration_template.go b/builtin/providers/aws/resource_aws_elastic_beanstalk_configuration_template.go
index 15cb8543a59d..346fcd5ff354 100644
--- a/builtin/providers/aws/resource_aws_elastic_beanstalk_configuration_template.go
+++ b/builtin/providers/aws/resource_aws_elastic_beanstalk_configuration_template.go
@@ -3,10 +3,12 @@ package aws
import (
"fmt"
"log"
+ "strings"
"github.com/hashicorp/terraform/helper/schema"
"github.com/aws/aws-sdk-go/aws"
+ "github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/service/elasticbeanstalk"
)
@@ -101,17 +103,16 @@ func resourceAwsElasticBeanstalkConfigurationTemplateRead(d *schema.ResourceData
})
if err != nil {
+ if awsErr, ok := err.(awserr.Error); ok {
+ if awsErr.Code() == "InvalidParameterValue" && strings.Contains(awsErr.Message(), "No Configuration Template named") {
+ log.Printf("[WARN] No Configuration Template named (%s) found", d.Id())
+ d.SetId("")
+ return nil
+ }
+ }
return err
}
- // if len(resp.ConfigurationSettings) > 1 {
-
- // settings := make(map[string]map[string]string)
- // for _, setting := range resp.ConfigurationSettings {
- // k := fmt.Sprintf("%s.%s", setting.)
- // }
- // }
-
if len(resp.ConfigurationSettings) != 1 {
log.Printf("[DEBUG] Elastic Beanstalk unexpected describe configuration template response: %+v", resp)
return fmt.Errorf("Error reading application properties: found %d applications, expected 1", len(resp.ConfigurationSettings))
@@ -171,11 +172,29 @@ func resourceAwsElasticBeanstalkConfigurationTemplateOptionSettingsUpdate(conn *
}
os := o.(*schema.Set)
- ns := o.(*schema.Set)
+ ns := n.(*schema.Set)
- remove := extractOptionSettings(os.Difference(ns))
+ rm := extractOptionSettings(os.Difference(ns))
add := extractOptionSettings(ns.Difference(os))
+ // Additions and removals of options are done in a single API call, so we
+ // can't do our normal "remove these" and then later "add these", re-adding
+ // any updated settings.
+ // Because of this, we need to remove any settings in the "removable"
+ // settings that are also found in the "add" settings, otherwise they
+ // conflict. Here we loop through all the initial removables from the set
+ // difference, and we build up a slice of settings not found in the "add"
+ // set
+ var remove []*elasticbeanstalk.ConfigurationOptionSetting
+ for _, r := range rm {
+ for _, a := range add {
+ if *r.Namespace == *a.Namespace && *r.OptionName == *a.OptionName {
+ continue
+ }
+ remove = append(remove, r)
+ }
+ }
+
req := &elasticbeanstalk.UpdateConfigurationTemplateInput{
ApplicationName: aws.String(d.Get("application").(string)),
TemplateName: aws.String(d.Get("name").(string)),
@@ -189,6 +208,7 @@ func resourceAwsElasticBeanstalkConfigurationTemplateOptionSettingsUpdate(conn *
})
}
+ log.Printf("[DEBUG] Update Configuration Template request: %s", req)
if _, err := conn.UpdateConfigurationTemplate(req); err != nil {
return err
}
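
The comment in the hunk above explains why settings queued for removal have to be filtered against settings being re-added, since UpdateConfigurationTemplate accepts both lists in one call. A standalone sketch of that filtering, keyed on namespace plus option name (types simplified; the provider works with *elasticbeanstalk.ConfigurationOptionSetting values):

package main

import "fmt"

type optionSetting struct {
	Namespace, OptionName, Value string
}

// filterRemovals drops from rm any setting that also appears in add, so a
// single update request does not both remove and re-add the same option.
func filterRemovals(rm, add []optionSetting) []optionSetting {
	keep := make(map[string]bool, len(add))
	for _, a := range add {
		keep[a.Namespace+"/"+a.OptionName] = true
	}
	var remove []optionSetting
	for _, r := range rm {
		if keep[r.Namespace+"/"+r.OptionName] {
			continue // being re-added with a new value; don't also remove it
		}
		remove = append(remove, r)
	}
	return remove
}

func main() {
	rm := []optionSetting{
		{"aws:elasticbeanstalk:application:environment", "TEMPLATE", "1"},
		{"aws:elasticbeanstalk:application:environment", "OLD", "x"},
	}
	add := []optionSetting{
		{"aws:elasticbeanstalk:application:environment", "TEMPLATE", "2"},
	}
	// Only OLD survives as a removal; TEMPLATE is superseded by the new value.
	fmt.Println(filterRemovals(rm, add))
}
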
diff --git a/builtin/providers/aws/resource_aws_elastic_beanstalk_environment.go b/builtin/providers/aws/resource_aws_elastic_beanstalk_environment.go
index 204790eef9f4..b6ce29ca4fc3 100644
--- a/builtin/providers/aws/resource_aws_elastic_beanstalk_environment.go
+++ b/builtin/providers/aws/resource_aws_elastic_beanstalk_environment.go
@@ -43,6 +43,9 @@ func resourceAwsElasticBeanstalkEnvironment() *schema.Resource {
Update: resourceAwsElasticBeanstalkEnvironmentUpdate,
Delete: resourceAwsElasticBeanstalkEnvironmentDelete,
+ SchemaVersion: 1,
+ MigrateState: resourceAwsElasticBeanstalkEnvironmentMigrateState,
+
Schema: map[string]*schema.Schema{
"name": &schema.Schema{
Type: schema.TypeString,
@@ -253,50 +256,22 @@ func resourceAwsElasticBeanstalkEnvironmentCreate(d *schema.ResourceData, meta i
func resourceAwsElasticBeanstalkEnvironmentUpdate(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).elasticbeanstalkconn
- if d.HasChange("description") {
- if err := resourceAwsElasticBeanstalkEnvironmentDescriptionUpdate(conn, d); err != nil {
- return err
- }
+ envId := d.Id()
+ waitForReadyTimeOut, err := time.ParseDuration(d.Get("wait_for_ready_timeout").(string))
+ if err != nil {
+ return err
}
- if d.HasChange("solution_stack_name") {
- if err := resourceAwsElasticBeanstalkEnvironmentSolutionStackUpdate(conn, d); err != nil {
- return err
- }
+ updateOpts := elasticbeanstalk.UpdateEnvironmentInput{
+ EnvironmentId: aws.String(envId),
}
- if d.HasChange("setting") {
- if err := resourceAwsElasticBeanstalkEnvironmentOptionSettingsUpdate(conn, d); err != nil {
- return err
- }
+ if d.HasChange("description") {
+ updateOpts.Description = aws.String(d.Get("description").(string))
}
- return resourceAwsElasticBeanstalkEnvironmentRead(d, meta)
-}
-
-func resourceAwsElasticBeanstalkEnvironmentDescriptionUpdate(conn *elasticbeanstalk.ElasticBeanstalk, d *schema.ResourceData) error {
- name := d.Get("name").(string)
- desc := d.Get("description").(string)
- envId := d.Id()
-
- log.Printf("[DEBUG] Elastic Beanstalk application: %s, update description: %s", name, desc)
-
- _, err := conn.UpdateEnvironment(&elasticbeanstalk.UpdateEnvironmentInput{
- EnvironmentId: aws.String(envId),
- Description: aws.String(desc),
- })
-
- return err
-}
-
-func resourceAwsElasticBeanstalkEnvironmentOptionSettingsUpdate(conn *elasticbeanstalk.ElasticBeanstalk, d *schema.ResourceData) error {
- name := d.Get("name").(string)
- envId := d.Id()
-
- log.Printf("[DEBUG] Elastic Beanstalk application: %s, update options", name)
-
- req := &elasticbeanstalk.UpdateEnvironmentInput{
- EnvironmentId: aws.String(envId),
+ if d.HasChange("solution_stack_name") {
+ updateOpts.SolutionStackName = aws.String(d.Get("solution_stack_name").(string))
}
if d.HasChange("setting") {
@@ -311,29 +286,36 @@ func resourceAwsElasticBeanstalkEnvironmentOptionSettingsUpdate(conn *elasticbea
os := o.(*schema.Set)
ns := n.(*schema.Set)
- req.OptionSettings = extractOptionSettings(ns.Difference(os))
+ updateOpts.OptionSettings = extractOptionSettings(ns.Difference(os))
}
- if _, err := conn.UpdateEnvironment(req); err != nil {
- return err
+ if d.HasChange("template_name") {
+ updateOpts.TemplateName = aws.String(d.Get("template_name").(string))
}
- return nil
-}
-
-func resourceAwsElasticBeanstalkEnvironmentSolutionStackUpdate(conn *elasticbeanstalk.ElasticBeanstalk, d *schema.ResourceData) error {
- name := d.Get("name").(string)
- solutionStack := d.Get("solution_stack_name").(string)
- envId := d.Id()
+ log.Printf("[DEBUG] Elastic Beanstalk Environment update opts: %s", updateOpts)
+ _, err = conn.UpdateEnvironment(&updateOpts)
+ if err != nil {
+ return err
+ }
- log.Printf("[DEBUG] Elastic Beanstalk application: %s, update solution_stack_name: %s", name, solutionStack)
+ stateConf := &resource.StateChangeConf{
+ Pending: []string{"Launching", "Updating"},
+ Target: []string{"Ready"},
+ Refresh: environmentStateRefreshFunc(conn, d.Id()),
+ Timeout: waitForReadyTimeOut,
+ Delay: 10 * time.Second,
+ MinTimeout: 3 * time.Second,
+ }
- _, err := conn.UpdateEnvironment(&elasticbeanstalk.UpdateEnvironmentInput{
- EnvironmentId: aws.String(envId),
- SolutionStackName: aws.String(solutionStack),
- })
+ _, err = stateConf.WaitForState()
+ if err != nil {
+ return fmt.Errorf(
+ "Error waiting for Elastic Beanstalk Environment (%s) to become ready: %s",
+ d.Id(), err)
+ }
- return err
+ return resourceAwsElasticBeanstalkEnvironmentRead(d, meta)
}
func resourceAwsElasticBeanstalkEnvironmentRead(d *schema.ResourceData, meta interface{}) error {
diff --git a/builtin/providers/aws/resource_aws_elastic_beanstalk_environment_migrate.go b/builtin/providers/aws/resource_aws_elastic_beanstalk_environment_migrate.go
new file mode 100644
index 000000000000..31cd5c7777bd
--- /dev/null
+++ b/builtin/providers/aws/resource_aws_elastic_beanstalk_environment_migrate.go
@@ -0,0 +1,35 @@
+package aws
+
+import (
+ "fmt"
+ "log"
+
+ "github.com/hashicorp/terraform/terraform"
+)
+
+func resourceAwsElasticBeanstalkEnvironmentMigrateState(
+ v int, is *terraform.InstanceState, meta interface{}) (*terraform.InstanceState, error) {
+ switch v {
+ case 0:
+ log.Println("[INFO] Found AWS Elastic Beanstalk Environment State v0; migrating to v1")
+ return migrateBeanstalkEnvironmentStateV0toV1(is)
+ default:
+ return is, fmt.Errorf("Unexpected schema version: %d", v)
+ }
+}
+
+func migrateBeanstalkEnvironmentStateV0toV1(is *terraform.InstanceState) (*terraform.InstanceState, error) {
+ if is.Empty() || is.Attributes == nil {
+ log.Println("[DEBUG] Empty Elastic Beanstalk Environment State; nothing to migrate.")
+ return is, nil
+ }
+
+ log.Printf("[DEBUG] Attributes before migration: %#v", is.Attributes)
+
+ if is.Attributes["tier"] == "" {
+ is.Attributes["tier"] = "WebServer"
+ }
+
+ log.Printf("[DEBUG] Attributes after migration: %#v", is.Attributes)
+ return is, nil
+}
diff --git a/builtin/providers/aws/resource_aws_elastic_beanstalk_environment_migrate_test.go b/builtin/providers/aws/resource_aws_elastic_beanstalk_environment_migrate_test.go
new file mode 100644
index 000000000000..6b7603894bbe
--- /dev/null
+++ b/builtin/providers/aws/resource_aws_elastic_beanstalk_environment_migrate_test.go
@@ -0,0 +1,57 @@
+package aws
+
+import (
+ "testing"
+
+ "github.com/hashicorp/terraform/terraform"
+)
+
+func TestAWSElasticBeanstalkEnvironmentMigrateState(t *testing.T) {
+ cases := map[string]struct {
+ StateVersion int
+ Attributes map[string]string
+ Expected map[string]string
+ Meta interface{}
+ }{
+ "v0_1_web": {
+ StateVersion: 0,
+ Attributes: map[string]string{
+ "tier": "",
+ },
+ Expected: map[string]string{
+ "tier": "WebServer",
+ },
+ },
+ "v0_1_web_explicit": {
+ StateVersion: 0,
+ Attributes: map[string]string{
+ "tier": "WebServer",
+ },
+ Expected: map[string]string{
+ "tier": "WebServer",
+ },
+ },
+ "v0_1_worker": {
+ StateVersion: 0,
+ Attributes: map[string]string{
+ "tier": "Worker",
+ },
+ Expected: map[string]string{
+ "tier": "Worker",
+ },
+ },
+ }
+
+ for tn, tc := range cases {
+ is := &terraform.InstanceState{
+ ID: "e-abcde12345",
+ Attributes: tc.Attributes,
+ }
+ is, err := resourceAwsElasticBeanstalkEnvironmentMigrateState(
+ tc.StateVersion, is, tc.Meta)
+
+ if err != nil {
+ t.Fatalf("bad: %s, err: %#v", tn, err)
+ }
+ }
+}
diff --git a/builtin/providers/aws/resource_aws_elastic_beanstalk_environment_test.go b/builtin/providers/aws/resource_aws_elastic_beanstalk_environment_test.go
index 5a9d14379329..ee4a3acfdad9 100644
--- a/builtin/providers/aws/resource_aws_elastic_beanstalk_environment_test.go
+++ b/builtin/providers/aws/resource_aws_elastic_beanstalk_environment_test.go
@@ -105,6 +105,41 @@ func TestAccAWSBeanstalkEnv_cname_prefix(t *testing.T) {
})
}
+func TestAccAWSBeanstalkEnv_config(t *testing.T) {
+ var app elasticbeanstalk.EnvironmentDescription
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckBeanstalkEnvDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: testAccBeanstalkConfigTemplate,
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckBeanstalkEnvExists("aws_elastic_beanstalk_environment.tftest", &app),
+ testAccCheckBeanstalkEnvConfigValue("aws_elastic_beanstalk_environment.tftest", "1"),
+ ),
+ },
+
+ resource.TestStep{
+ Config: testAccBeanstalkConfigTemplateUpdate,
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckBeanstalkEnvExists("aws_elastic_beanstalk_environment.tftest", &app),
+ testAccCheckBeanstalkEnvConfigValue("aws_elastic_beanstalk_environment.tftest", "2"),
+ ),
+ },
+
+ resource.TestStep{
+ Config: testAccBeanstalkConfigTemplateUpdate,
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckBeanstalkEnvExists("aws_elastic_beanstalk_environment.tftest", &app),
+ testAccCheckBeanstalkEnvConfigValue("aws_elastic_beanstalk_environment.tftest", "3"),
+ ),
+ },
+ },
+ })
+}
+
func testAccCheckBeanstalkEnvDestroy(s *terraform.State) error {
conn := testAccProvider.Meta().(*AWSClient).elasticbeanstalkconn
@@ -192,6 +227,49 @@ func testAccCheckBeanstalkEnvTier(n string, app *elasticbeanstalk.EnvironmentDes
}
}
+func testAccCheckBeanstalkEnvConfigValue(n string, expectedValue string) resource.TestCheckFunc {
+ return func(s *terraform.State) error {
+ conn := testAccProvider.Meta().(*AWSClient).elasticbeanstalkconn
+
+ rs, ok := s.RootModule().Resources[n]
+ if !ok {
+ return fmt.Errorf("Not found: %s", n)
+ }
+
+ if rs.Primary.ID == "" {
+ return fmt.Errorf("Elastic Beanstalk ENV is not set")
+ }
+
+ resp, err := conn.DescribeConfigurationOptions(&elasticbeanstalk.DescribeConfigurationOptionsInput{
+ ApplicationName: aws.String(rs.Primary.Attributes["application"]),
+ EnvironmentName: aws.String(rs.Primary.Attributes["name"]),
+ Options: []*elasticbeanstalk.OptionSpecification{
+ {
+ Namespace: aws.String("aws:elasticbeanstalk:application:environment"),
+ OptionName: aws.String("TEMPLATE"),
+ },
+ },
+ })
+ if err != nil {
+ return err
+ }
+
+ if len(resp.Options) != 1 {
+ return fmt.Errorf("Found %d options, expected 1.", len(resp.Options))
+ }
+
+ log.Printf("[DEBUG] %d Elastic Beanstalk Option values returned.", len(resp.Options[0].ValueOptions))
+
+ for _, value := range resp.Options[0].ValueOptions {
+ if *value != expectedValue {
+ return fmt.Errorf("Option setting value: %s. Expected %s", *value, expectedValue)
+ }
+ }
+
+ return nil
+ }
+}
+
func describeBeanstalkEnv(conn *elasticbeanstalk.ElasticBeanstalk,
envID *string) (*elasticbeanstalk.EnvironmentDescription, error) {
describeBeanstalkEnvOpts := &elasticbeanstalk.DescribeEnvironmentsInput{
@@ -255,3 +333,84 @@ solution_stack_name = "64bit Amazon Linux running Python"
}
`, randString)
}
+
+const testAccBeanstalkConfigTemplate = `
+resource "aws_elastic_beanstalk_application" "tftest" {
+ name = "tf-test-name"
+ description = "tf-test-desc"
+}
+
+resource "aws_elastic_beanstalk_environment" "tftest" {
+ name = "tf-test-name"
+ application = "${aws_elastic_beanstalk_application.tftest.name}"
+ template_name = "${aws_elastic_beanstalk_configuration_template.tftest.name}"
+}
+
+resource "aws_elastic_beanstalk_configuration_template" "tftest" {
+ name = "tf-test-original"
+ application = "${aws_elastic_beanstalk_application.tftest.name}"
+ solution_stack_name = "64bit Amazon Linux running Python"
+
+ setting {
+ namespace = "aws:elasticbeanstalk:application:environment"
+ name = "TEMPLATE"
+ value = "1"
+ }
+}
+`
+
+const testAccBeanstalkConfigTemplateUpdate = `
+resource "aws_elastic_beanstalk_application" "tftest" {
+ name = "tf-test-name"
+ description = "tf-test-desc"
+}
+
+resource "aws_elastic_beanstalk_environment" "tftest" {
+ name = "tf-test-name"
+ application = "${aws_elastic_beanstalk_application.tftest.name}"
+ template_name = "${aws_elastic_beanstalk_configuration_template.tftest.name}"
+}
+
+resource "aws_elastic_beanstalk_configuration_template" "tftest" {
+ name = "tf-test-updated"
+ application = "${aws_elastic_beanstalk_application.tftest.name}"
+ solution_stack_name = "64bit Amazon Linux running Python"
+
+ setting {
+ namespace = "aws:elasticbeanstalk:application:environment"
+ name = "TEMPLATE"
+ value = "2"
+ }
+}
+`
+
+const testAccBeanstalkConfigTemplateOverride = `
+resource "aws_elastic_beanstalk_application" "tftest" {
+ name = "tf-test-name"
+ description = "tf-test-desc"
+}
+
+resource "aws_elastic_beanstalk_environment" "tftest" {
+ name = "tf-test-name"
+ application = "${aws_elastic_beanstalk_application.tftest.name}"
+ template_name = "${aws_elastic_beanstalk_configuration_template.tftest.name}"
+
+ setting {
+ namespace = "aws:elasticbeanstalk:application:environment"
+ name = "TEMPLATE"
+ value = "3"
+ }
+}
+
+resource "aws_elastic_beanstalk_configuration_template" "tftest" {
+ name = "tf-test-updated"
+ application = "${aws_elastic_beanstalk_application.tftest.name}"
+ solution_stack_name = "64bit Amazon Linux running Python"
+
+ setting {
+ namespace = "aws:elasticbeanstalk:application:environment"
+ name = "TEMPLATE"
+ value = "2"
+ }
+}
+`
diff --git a/builtin/providers/aws/resource_aws_elasticache_cluster.go b/builtin/providers/aws/resource_aws_elasticache_cluster.go
index f8c422484fc5..7ff086a2654c 100644
--- a/builtin/providers/aws/resource_aws_elasticache_cluster.go
+++ b/builtin/providers/aws/resource_aws_elasticache_cluster.go
@@ -33,6 +33,7 @@ func resourceAwsElasticacheCluster() *schema.Resource {
// with non-converging diffs.
return strings.ToLower(val.(string))
},
+ ValidateFunc: validateElastiCacheClusterId,
},
"configuration_endpoint": &schema.Schema{
Type: schema.TypeString,
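
Attaching validateElastiCacheClusterId as a ValidateFunc surfaces bad cluster names at plan time rather than as an API error. The function itself is defined elsewhere in the provider; the sketch below only illustrates the kind of checks such a validator performs, based on the documented ElastiCache naming constraints (the exact rules the provider enforces may differ):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// validateClusterIDSketch mirrors the checks a cluster_id ValidateFunc might
// perform; the signature matches helper/schema's validate function shape.
func validateClusterIDSketch(v interface{}, k string) (ws []string, errs []error) {
	value := v.(string)
	if len(value) < 1 || len(value) > 20 {
		errs = append(errs, fmt.Errorf("%q must be 1-20 characters, got %d", k, len(value)))
	}
	if !regexp.MustCompile(`^[a-z]`).MatchString(value) {
		errs = append(errs, fmt.Errorf("%q must start with a lowercase letter", k))
	}
	if !regexp.MustCompile(`^[a-z0-9-]*$`).MatchString(value) {
		errs = append(errs, fmt.Errorf("%q may only contain lowercase letters, digits, and hyphens", k))
	}
	if strings.HasSuffix(value, "-") || strings.Contains(value, "--") {
		errs = append(errs, fmt.Errorf("%q may not end with a hyphen or contain consecutive hyphens", k))
	}
	return
}

func main() {
	_, errs := validateClusterIDSketch("My_Cluster!", "cluster_id")
	for _, e := range errs {
		fmt.Println(e)
	}
}
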
diff --git a/builtin/providers/aws/resource_aws_elb.go b/builtin/providers/aws/resource_aws_elb.go
index 174e5149e795..0042f5c061c9 100644
--- a/builtin/providers/aws/resource_aws_elb.go
+++ b/builtin/providers/aws/resource_aws_elb.go
@@ -362,6 +362,7 @@ func resourceAwsElbRead(d *schema.ResourceData, meta interface{}) error {
d.Set("idle_timeout", lbAttrs.ConnectionSettings.IdleTimeout)
d.Set("connection_draining", lbAttrs.ConnectionDraining.Enabled)
d.Set("connection_draining_timeout", lbAttrs.ConnectionDraining.Timeout)
+ d.Set("cross_zone_load_balancing", lbAttrs.CrossZoneLoadBalancing.Enabled)
if lbAttrs.AccessLog != nil {
if err := d.Set("access_logs", flattenAccessLog(lbAttrs.AccessLog)); err != nil {
return err
diff --git a/builtin/providers/aws/resource_aws_elb_test.go b/builtin/providers/aws/resource_aws_elb_test.go
index 855d8c5ad69c..ff5d3c4e91f4 100644
--- a/builtin/providers/aws/resource_aws_elb_test.go
+++ b/builtin/providers/aws/resource_aws_elb_test.go
@@ -21,9 +21,10 @@ func TestAccAWSELB_basic(t *testing.T) {
var conf elb.LoadBalancerDescription
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSELBDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_elb.bar",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSELBDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSELBConfig,
@@ -64,9 +65,10 @@ func TestAccAWSELB_fullCharacterRange(t *testing.T) {
rand.New(rand.NewSource(time.Now().UnixNano())).Int())
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSELBDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_elb.foo",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSELBDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: fmt.Sprintf(testAccAWSELBFullRangeOfCharacters, lbName),
@@ -84,9 +86,10 @@ func TestAccAWSELB_AccessLogs(t *testing.T) {
var conf elb.LoadBalancerDescription
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSELBDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_elb.foo",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSELBDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSELBAccessLogs,
@@ -125,9 +128,10 @@ func TestAccAWSELB_generatedName(t *testing.T) {
generatedNameRegexp := regexp.MustCompile("^tf-lb-")
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSELBDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_elb.foo",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSELBDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSELBGeneratedName,
@@ -145,9 +149,10 @@ func TestAccAWSELB_availabilityZones(t *testing.T) {
var conf elb.LoadBalancerDescription
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSELBDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_elb.bar",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSELBDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSELBConfig,
@@ -185,9 +190,10 @@ func TestAccAWSELB_tags(t *testing.T) {
var td elb.TagDescription
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSELBDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_elb.bar",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSELBDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSELBConfig,
@@ -225,9 +231,10 @@ func TestAccAWSELB_iam_server_cert(t *testing.T) {
return nil
}
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSELBDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_elb.bar",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSELBDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccELBIAMServerCertConfig(
@@ -272,9 +279,10 @@ func TestAccAWSELB_InstanceAttaching(t *testing.T) {
}
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSELBDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_elb.bar",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSELBDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSELBConfig,
@@ -299,9 +307,10 @@ func TestAccAWSELBUpdate_Listener(t *testing.T) {
var conf elb.LoadBalancerDescription
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSELBDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_elb.bar",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSELBDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSELBConfig,
@@ -329,9 +338,10 @@ func TestAccAWSELB_HealthCheck(t *testing.T) {
var conf elb.LoadBalancerDescription
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSELBDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_elb.bar",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSELBDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSELBConfigHealthCheck,
@@ -356,9 +366,10 @@ func TestAccAWSELB_HealthCheck(t *testing.T) {
func TestAccAWSELBUpdate_HealthCheck(t *testing.T) {
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSELBDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_elb.bar",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSELBDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSELBConfigHealthCheck,
@@ -382,9 +393,10 @@ func TestAccAWSELB_Timeout(t *testing.T) {
var conf elb.LoadBalancerDescription
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSELBDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_elb.bar",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSELBDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSELBConfigIdleTimeout,
@@ -401,9 +413,10 @@ func TestAccAWSELB_Timeout(t *testing.T) {
func TestAccAWSELBUpdate_Timeout(t *testing.T) {
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSELBDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_elb.bar",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSELBDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSELBConfigIdleTimeout,
@@ -427,9 +440,10 @@ func TestAccAWSELBUpdate_Timeout(t *testing.T) {
func TestAccAWSELB_ConnectionDraining(t *testing.T) {
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSELBDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_elb.bar",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSELBDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSELBConfigConnectionDraining,
@@ -448,9 +462,10 @@ func TestAccAWSELB_ConnectionDraining(t *testing.T) {
func TestAccAWSELBUpdate_ConnectionDraining(t *testing.T) {
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSELBDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_elb.bar",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSELBDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSELBConfigConnectionDraining,
@@ -488,9 +503,10 @@ func TestAccAWSELBUpdate_ConnectionDraining(t *testing.T) {
func TestAccAWSELB_SecurityGroups(t *testing.T) {
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSELBDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_elb.bar",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSELBDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSELBConfig,
diff --git a/builtin/providers/aws/resource_aws_flow_log.go b/builtin/providers/aws/resource_aws_flow_log.go
index 8580378c7499..c02868f1db91 100644
--- a/builtin/providers/aws/resource_aws_flow_log.go
+++ b/builtin/providers/aws/resource_aws_flow_log.go
@@ -3,6 +3,7 @@ package aws
import (
"fmt"
"log"
+ "strings"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/service/ec2"
@@ -129,11 +130,22 @@ func resourceAwsLogFlowRead(d *schema.ResourceData, meta interface{}) error {
}
fl := resp.FlowLogs[0]
-
d.Set("traffic_type", fl.TrafficType)
d.Set("log_group_name", fl.LogGroupName)
d.Set("iam_role_arn", fl.DeliverLogsPermissionArn)
+ var resourceKey string
+ if strings.HasPrefix(*fl.ResourceId, "vpc-") {
+ resourceKey = "vpc_id"
+ } else if strings.HasPrefix(*fl.ResourceId, "subnet-") {
+ resourceKey = "subnet_id"
+ } else if strings.HasPrefix(*fl.ResourceId, "eni-") {
+ resourceKey = "eni_id"
+ }
+ if resourceKey != "" {
+ d.Set(resourceKey, fl.ResourceId)
+ }
+
return nil
}
diff --git a/builtin/providers/aws/resource_aws_flow_log_test.go b/builtin/providers/aws/resource_aws_flow_log_test.go
index 061643e9454b..1b44aafeee7a 100644
--- a/builtin/providers/aws/resource_aws_flow_log_test.go
+++ b/builtin/providers/aws/resource_aws_flow_log_test.go
@@ -14,9 +14,10 @@ func TestAccAWSFlowLog_basic(t *testing.T) {
var flowLog ec2.FlowLog
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckFlowLogDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_flow_log.test_flow_log",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckFlowLogDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccFlowLogConfig_basic,
@@ -33,9 +34,10 @@ func TestAccAWSFlowLog_subnet(t *testing.T) {
var flowLog ec2.FlowLog
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckFlowLogDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_flow_log.test_flow_log_subnet",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckFlowLogDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccFlowLogConfig_subnet,
diff --git a/builtin/providers/aws/resource_aws_iam_server_certificate.go b/builtin/providers/aws/resource_aws_iam_server_certificate.go
index 678f13d07c33..a3f170c17eba 100644
--- a/builtin/providers/aws/resource_aws_iam_server_certificate.go
+++ b/builtin/providers/aws/resource_aws_iam_server_certificate.go
@@ -138,6 +138,11 @@ func resourceAwsIAMServerCertificateRead(d *schema.ResourceData, meta interface{
if err != nil {
if awsErr, ok := err.(awserr.Error); ok {
+ if awsErr.Code() == "NoSuchEntity" {
+ log.Printf("[WARN] IAM Server Cert (%s) not found, removing from state", d.Id())
+ d.SetId("")
+ return nil
+ }
return fmt.Errorf("[WARN] Error reading IAM Server Certificate: %s: %s", awsErr.Code(), awsErr.Message())
}
return fmt.Errorf("[WARN] Error reading IAM Server Certificate: %s", err)
@@ -161,7 +166,7 @@ func resourceAwsIAMServerCertificateRead(d *schema.ResourceData, meta interface{
func resourceAwsIAMServerCertificateDelete(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).iamconn
log.Printf("[INFO] Deleting IAM Server Certificate: %s", d.Id())
- err := resource.Retry(1*time.Minute, func() *resource.RetryError {
+ err := resource.Retry(3*time.Minute, func() *resource.RetryError {
_, err := conn.DeleteServerCertificate(&iam.DeleteServerCertificateInput{
ServerCertificateName: aws.String(d.Get("name").(string)),
})
@@ -172,6 +177,11 @@ func resourceAwsIAMServerCertificateDelete(d *schema.ResourceData, meta interfac
log.Printf("[WARN] Conflict deleting server certificate: %s, retrying", awsErr.Message())
return resource.RetryableError(err)
}
+ if awsErr.Code() == "NoSuchEntity" {
+ log.Printf("[WARN] IAM Server Certificate (%s) not found, removing from state", d.Id())
+ d.SetId("")
+ return nil
+ }
}
return resource.NonRetryableError(err)
}
diff --git a/builtin/providers/aws/resource_aws_iam_server_certificate_test.go b/builtin/providers/aws/resource_aws_iam_server_certificate_test.go
index 11780ded79d3..c848bd37e25d 100644
--- a/builtin/providers/aws/resource_aws_iam_server_certificate_test.go
+++ b/builtin/providers/aws/resource_aws_iam_server_certificate_test.go
@@ -51,6 +51,45 @@ func TestAccAWSIAMServerCertificate_name_prefix(t *testing.T) {
})
}
+func TestAccAWSIAMServerCertificate_disappears(t *testing.T) {
+ var cert iam.ServerCertificate
+
+ testDestroyCert := func(*terraform.State) error {
+ // reach out and DELETE the Cert
+ conn := testAccProvider.Meta().(*AWSClient).iamconn
+ _, err := conn.DeleteServerCertificate(&iam.DeleteServerCertificateInput{
+ ServerCertificateName: cert.ServerCertificateMetadata.ServerCertificateName,
+ })
+
+ if err != nil {
+ return fmt.Errorf("Error destorying cert in test: %s", err)
+ }
+
+ return nil
+ }
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckIAMServerCertificateDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: testAccIAMServerCertConfig_random,
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckCertExists("aws_iam_server_certificate.test_cert", &cert),
+ testAccCheckAWSServerCertAttributes(&cert),
+ testDestroyCert,
+ ),
+ ExpectNonEmptyPlan: true,
+ },
+ // Follow up plan w/ empty config should be empty, since the Cert is gone
+ resource.TestStep{
+ Config: "",
+ },
+ },
+ })
+}
+
func testAccCheckCertExists(n string, cert *iam.ServerCertificate) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
diff --git a/builtin/providers/aws/resource_aws_instance.go b/builtin/providers/aws/resource_aws_instance.go
index bacf975aa769..e62df1cf34bf 100644
--- a/builtin/providers/aws/resource_aws_instance.go
+++ b/builtin/providers/aws/resource_aws_instance.go
@@ -533,6 +533,9 @@ func resourceAwsInstanceRead(d *schema.ResourceData, meta interface{}) error {
if err := d.Set("vpc_security_group_ids", sgs); err != nil {
return err
}
+ if err := d.Set("security_groups", []string{}); err != nil {
+ return err
+ }
} else {
for _, sg := range instance.SecurityGroups {
sgs = append(sgs, *sg.GroupName)
@@ -541,11 +544,29 @@ func resourceAwsInstanceRead(d *schema.ResourceData, meta interface{}) error {
if err := d.Set("security_groups", sgs); err != nil {
return err
}
+ if err := d.Set("vpc_security_group_ids", []string{}); err != nil {
+ return err
+ }
}
if err := readBlockDevices(d, instance, conn); err != nil {
return err
}
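+ // Default ephemeral_block_device to an empty list when it isn't configured; this handles the import case, which needs the attribute defaulted to empty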
+ if _, ok := d.GetOk("ephemeral_block_device"); !ok {
+ d.Set("ephemeral_block_device", []interface{}{})
+ }
+
+ // Instance attributes
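+ // disableApiTermination isn't returned by DescribeInstances, so fetch it with a separate DescribeInstanceAttribute call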
+ {
+ attr, err := conn.DescribeInstanceAttribute(&ec2.DescribeInstanceAttributeInput{
+ Attribute: aws.String("disableApiTermination"),
+ InstanceId: aws.String(d.Id()),
+ })
+ if err != nil {
+ return err
+ }
+ d.Set("disable_api_termination", attr.DisableApiTermination.Value)
+ }
return nil
}
@@ -696,8 +717,17 @@ func readBlockDevices(d *schema.ResourceData, instance *ec2.Instance, conn *ec2.
if err := d.Set("ebs_block_device", ibds["ebs"]); err != nil {
return err
}
+
+ // This handles the import case which needs to be defaulted to empty
+ if _, ok := d.GetOk("root_block_device"); !ok {
+ if err := d.Set("root_block_device", []interface{}{}); err != nil {
+ return err
+ }
+ }
+
if ibds["root"] != nil {
- if err := d.Set("root_block_device", []interface{}{ibds["root"]}); err != nil {
+ roots := []interface{}{ibds["root"]}
+ if err := d.Set("root_block_device", roots); err != nil {
return err
}
}
@@ -964,8 +994,16 @@ func buildAwsInstanceOpts(
Name: aws.String(d.Get("iam_instance_profile").(string)),
}
- opts.UserData64 = aws.String(
- base64.StdEncoding.EncodeToString([]byte(d.Get("user_data").(string))))
+ user_data := d.Get("user_data").(string)
+
+ // Check whether the user_data is already Base64 encoded; don't double-encode
+ _, base64DecodeError := base64.StdEncoding.DecodeString(user_data)
+
+ if base64DecodeError == nil {
+ opts.UserData64 = aws.String(user_data)
+ } else {
+ opts.UserData64 = aws.String(base64.StdEncoding.EncodeToString([]byte(user_data)))
+ }
// check for non-default Subnet, and cast it to a String
subnet, hasSubnet := d.GetOk("subnet_id")
diff --git a/builtin/providers/aws/resource_aws_instance_test.go b/builtin/providers/aws/resource_aws_instance_test.go
index 7eef3ce2d1a6..c19ddc0e43e8 100644
--- a/builtin/providers/aws/resource_aws_instance_test.go
+++ b/builtin/providers/aws/resource_aws_instance_test.go
@@ -33,7 +33,14 @@ func TestAccAWSInstance_basic(t *testing.T) {
}
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
+ PreCheck: func() { testAccPreCheck(t) },
+
+ // We ignore security groups because even with EC2 classic
+ // we'll import as VPC security groups, which is fine. We verify
+ // VPC security group import in other tests
+ IDRefreshName: "aws_instance.foo",
+ IDRefreshIgnore: []string{"user_data", "security_groups", "vpc_security_group_ids"},
+
Providers: testAccProviders,
CheckDestroy: testAccCheckInstanceDestroy,
Steps: []resource.TestStep{
@@ -135,7 +142,10 @@ func TestAccAWSInstance_blockDevices(t *testing.T) {
}
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_instance.foo",
+ IDRefreshIgnore: []string{
+ "ephemeral_block_device", "user_data", "security_groups", "vpc_security_groups"},
Providers: testAccProviders,
CheckDestroy: testAccCheckInstanceDestroy,
Steps: []resource.TestStep{
@@ -202,9 +212,10 @@ func TestAccAWSInstance_sourceDestCheck(t *testing.T) {
}
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckInstanceDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_instance.foo",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckInstanceDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccInstanceConfigSourceDestDisable,
@@ -255,9 +266,10 @@ func TestAccAWSInstance_disableApiTermination(t *testing.T) {
}
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckInstanceDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_instance.foo",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckInstanceDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccInstanceConfigDisableAPITermination(true),
@@ -282,15 +294,21 @@ func TestAccAWSInstance_vpc(t *testing.T) {
var v ec2.Instance
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckInstanceDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_instance.foo",
+ IDRefreshIgnore: []string{"associate_public_ip_address", "user_data"},
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckInstanceDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccInstanceConfigVPC,
Check: resource.ComposeTestCheckFunc(
testAccCheckInstanceExists(
"aws_instance.foo", &v),
+ resource.TestCheckResourceAttr(
+ "aws_instance.foo",
+ "user_data",
+ "2fad308761514d9d73c3c7fdc877607e06cf950d"),
),
},
},
@@ -333,9 +351,11 @@ func TestAccAWSInstance_NetworkInstanceSecurityGroups(t *testing.T) {
var v ec2.Instance
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckInstanceDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_instance.foo_instance",
+ IDRefreshIgnore: []string{"associate_public_ip_address"},
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckInstanceDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccInstanceNetworkInstanceSecurityGroups,
@@ -352,9 +372,10 @@ func TestAccAWSInstance_NetworkInstanceVPCSecurityGroupIDs(t *testing.T) {
var v ec2.Instance
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckInstanceDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_instance.foo_instance",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckInstanceDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccInstanceNetworkInstanceVPCSecurityGroupIDs,
@@ -415,9 +436,10 @@ func TestAccAWSInstance_privateIP(t *testing.T) {
}
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckInstanceDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_instance.foo",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckInstanceDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccInstanceConfigPrivateIP,
@@ -444,9 +466,11 @@ func TestAccAWSInstance_associatePublicIPAndPrivateIP(t *testing.T) {
}
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckInstanceDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_instance.foo",
+ IDRefreshIgnore: []string{"associate_public_ip_address"},
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckInstanceDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccInstanceConfigAssociatePublicIPAndPrivateIP,
@@ -478,9 +502,11 @@ func TestAccAWSInstance_keyPairCheck(t *testing.T) {
}
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckInstanceDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_instance.foo",
+ IDRefreshIgnore: []string{"source_dest_check"},
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckInstanceDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccInstanceConfigKeyPair,
@@ -526,9 +552,10 @@ func TestAccAWSInstance_forceNewAndTagsDrift(t *testing.T) {
var v ec2.Instance
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckInstanceDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_instance.foo",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckInstanceDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccInstanceConfigForceNewAndTagsDrift,
@@ -825,6 +852,8 @@ resource "aws_instance" "foo" {
subnet_id = "${aws_subnet.foo.id}"
associate_public_ip_address = true
tenancy = "dedicated"
+ # pre-encoded base64 data
+ user_data = "3dc39dda39be1205215e776bad998da361a5955d"
}
`
diff --git a/builtin/providers/aws/resource_aws_internet_gateway.go b/builtin/providers/aws/resource_aws_internet_gateway.go
index c2561a7b8d23..dacb02a56ad3 100644
--- a/builtin/providers/aws/resource_aws_internet_gateway.go
+++ b/builtin/providers/aws/resource_aws_internet_gateway.go
@@ -45,6 +45,18 @@ func resourceAwsInternetGatewayCreate(d *schema.ResourceData, meta interface{})
d.SetId(*ig.InternetGatewayId)
log.Printf("[INFO] InternetGateway ID: %s", d.Id())
+ resource.Retry(5*time.Minute, func() *resource.RetryError {
+ igRaw, _, err := IGStateRefreshFunc(conn, d.Id())()
+ if igRaw != nil {
+ return nil
+ }
+ if err == nil {
+ return resource.RetryableError(err)
+ } else {
+ return resource.NonRetryableError(err)
+ }
+ })
+
err = setTags(conn, d)
if err != nil {
return err
diff --git a/builtin/providers/aws/resource_aws_internet_gateway_test.go b/builtin/providers/aws/resource_aws_internet_gateway_test.go
index 3fe9711af3c5..9131d6b16e9f 100644
--- a/builtin/providers/aws/resource_aws_internet_gateway_test.go
+++ b/builtin/providers/aws/resource_aws_internet_gateway_test.go
@@ -32,9 +32,10 @@ func TestAccAWSInternetGateway_basic(t *testing.T) {
}
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckInternetGatewayDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_internet_gateway.foo",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckInternetGatewayDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccInternetGatewayConfig,
@@ -70,9 +71,10 @@ func TestAccAWSInternetGateway_delete(t *testing.T) {
}
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckInternetGatewayDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_internet_gateway.foo",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckInternetGatewayDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccInternetGatewayConfig,
@@ -91,9 +93,10 @@ func TestAccAWSInternetGateway_tags(t *testing.T) {
var v ec2.InternetGateway
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckInternetGatewayDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_internet_gateway.foo",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckInternetGatewayDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccCheckInternetGatewayConfigTags,
diff --git a/builtin/providers/aws/resource_aws_kms_alias.go b/builtin/providers/aws/resource_aws_kms_alias.go
index 23bbf0b377a2..64eec56a66fd 100644
--- a/builtin/providers/aws/resource_aws_kms_alias.go
+++ b/builtin/providers/aws/resource_aws_kms_alias.go
@@ -89,14 +89,13 @@ func resourceAwsKmsAliasCreate(d *schema.ResourceData, meta interface{}) error {
func resourceAwsKmsAliasRead(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).kmsconn
- name := d.Get("name").(string)
- alias, err := findKmsAliasByName(conn, name, nil)
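+ // The alias name is used as the resource ID, so look it up by d.Id() rather than the "name" attribute (which may not be populated, e.g. on import)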
+ alias, err := findKmsAliasByName(conn, d.Id(), nil)
if err != nil {
return err
}
if alias == nil {
- log.Printf("[DEBUG] Removing KMS Alias %q as it's already gone", name)
+ log.Printf("[DEBUG] Removing KMS Alias (%s) as it's already gone", d.Id())
d.SetId("")
return nil
}
@@ -138,17 +137,16 @@ func resourceAwsKmsAliasTargetUpdate(conn *kms.KMS, d *schema.ResourceData) erro
func resourceAwsKmsAliasDelete(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).kmsconn
- name := d.Get("name").(string)
req := &kms.DeleteAliasInput{
- AliasName: aws.String(name),
+ AliasName: aws.String(d.Id()),
}
_, err := conn.DeleteAlias(req)
if err != nil {
return err
}
- log.Printf("[DEBUG] KMS Alias: %s deleted.", name)
+ log.Printf("[DEBUG] KMS Alias: (%s) deleted.", d.Id())
d.SetId("")
return nil
}
diff --git a/builtin/providers/aws/resource_aws_lambda_function.go b/builtin/providers/aws/resource_aws_lambda_function.go
index 1c6e706b1565..f54bc1a2e56b 100644
--- a/builtin/providers/aws/resource_aws_lambda_function.go
+++ b/builtin/providers/aws/resource_aws_lambda_function.go
@@ -98,6 +98,10 @@ func resourceAwsLambdaFunction() *schema.Resource {
Elem: &schema.Schema{Type: schema.TypeString},
Set: schema.HashString,
},
+ "vpc_id": &schema.Schema{
+ Type: schema.TypeString,
+ Computed: true,
+ },
},
},
},
@@ -249,7 +253,11 @@ func resourceAwsLambdaFunctionRead(d *schema.ResourceData, meta interface{}) err
d.Set("runtime", function.Runtime)
d.Set("timeout", function.Timeout)
if config := flattenLambdaVpcConfigResponse(function.VpcConfig); len(config) > 0 {
- d.Set("vpc_config", config)
+ log.Printf("[INFO] Setting Lambda %s VPC config %#v from API", d.Id(), config)
+ err := d.Set("vpc_config", config)
+ if err != nil {
+ return fmt.Errorf("Failed setting vpc_config: %s", err)
+ }
}
d.Set("source_code_hash", function.CodeSha256)
diff --git a/builtin/providers/aws/resource_aws_lambda_function_test.go b/builtin/providers/aws/resource_aws_lambda_function_test.go
index 1530ec34afe0..abfc40ae8248 100644
--- a/builtin/providers/aws/resource_aws_lambda_function_test.go
+++ b/builtin/providers/aws/resource_aws_lambda_function_test.go
@@ -6,6 +6,7 @@ import (
"io/ioutil"
"os"
"path/filepath"
+ "regexp"
"strings"
"testing"
@@ -51,6 +52,10 @@ func TestAccAWSLambdaFunction_VPC(t *testing.T) {
testAccCheckAwsLambdaFunctionName(&conf, "example_lambda_name"),
testAccCheckAwsLambdaFunctionArnHasSuffix(&conf, ":example_lambda_name"),
testAccCheckAWSLambdaFunctionVersion(&conf, "$LATEST"),
+ resource.TestCheckResourceAttr("aws_lambda_function.lambda_function_test", "vpc_config.#", "1"),
+ resource.TestCheckResourceAttr("aws_lambda_function.lambda_function_test", "vpc_config.0.subnet_ids.#", "1"),
+ resource.TestCheckResourceAttr("aws_lambda_function.lambda_function_test", "vpc_config.0.security_group_ids.#", "1"),
+ resource.TestMatchResourceAttr("aws_lambda_function.lambda_function_test", "vpc_config.0.vpc_id", regexp.MustCompile("^vpc-")),
),
},
},
diff --git a/builtin/providers/aws/resource_aws_launch_configuration.go b/builtin/providers/aws/resource_aws_launch_configuration.go
index 9607446ddd48..55256b9de825 100644
--- a/builtin/providers/aws/resource_aws_launch_configuration.go
+++ b/builtin/providers/aws/resource_aws_launch_configuration.go
@@ -334,13 +334,16 @@ func resourceAwsLaunchConfigurationCreate(d *schema.ResourceData, meta interface
bd := v.(map[string]interface{})
ebs := &autoscaling.Ebs{
DeleteOnTermination: aws.Bool(bd["delete_on_termination"].(bool)),
- Encrypted: aws.Bool(bd["encrypted"].(bool)),
}
if v, ok := bd["snapshot_id"].(string); ok && v != "" {
ebs.SnapshotId = aws.String(v)
}
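+ // Only send Encrypted when it's explicitly true; passing it alongside a snapshot_id can be rejected by the API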
+ if v, ok := bd["encrypted"].(bool); ok && v {
+ ebs.Encrypted = aws.Bool(v)
+ }
+
if v, ok := bd["volume_size"].(int); ok && v != 0 {
ebs.VolumeSize = aws.Int64(int64(v))
}
diff --git a/builtin/providers/aws/resource_aws_nat_gateway.go b/builtin/providers/aws/resource_aws_nat_gateway.go
index c8c46ff3222a..c57fb9f649e0 100644
--- a/builtin/providers/aws/resource_aws_nat_gateway.go
+++ b/builtin/providers/aws/resource_aws_nat_gateway.go
@@ -106,7 +106,11 @@ func resourceAwsNatGatewayRead(d *schema.ResourceData, meta interface{}) error {
// Set NAT Gateway attributes
ng := ngRaw.(*ec2.NatGateway)
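+ // Set subnet_id and the address fields below on read so a refreshed or imported gateway is fully populated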
+ d.Set("subnet_id", ng.SubnetId)
+
+ // Address
address := ng.NatGatewayAddresses[0]
+ d.Set("allocation_id", address.AllocationId)
d.Set("network_interface_id", address.NetworkInterfaceId)
d.Set("private_ip", address.PrivateIp)
d.Set("public_ip", address.PublicIp)
diff --git a/builtin/providers/aws/resource_aws_nat_gateway_test.go b/builtin/providers/aws/resource_aws_nat_gateway_test.go
index 40b6f77c29eb..c4dd8b6f6820 100644
--- a/builtin/providers/aws/resource_aws_nat_gateway_test.go
+++ b/builtin/providers/aws/resource_aws_nat_gateway_test.go
@@ -16,9 +16,10 @@ func TestAccAWSNatGateway_basic(t *testing.T) {
var natGateway ec2.NatGateway
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckNatGatewayDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_nat_gateway.gateway",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckNatGatewayDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccNatGatewayConfig,
diff --git a/builtin/providers/aws/resource_aws_network_acl.go b/builtin/providers/aws/resource_aws_network_acl.go
index b8fe88021544..e946bb9327e4 100644
--- a/builtin/providers/aws/resource_aws_network_acl.go
+++ b/builtin/providers/aws/resource_aws_network_acl.go
@@ -190,7 +190,7 @@ func resourceAwsNetworkAclRead(d *schema.ResourceData, meta interface{}) error {
for _, e := range networkAcl.Entries {
// Skip the default rules added by AWS. They can be neither
// configured or deleted by users.
- if *e.RuleNumber == 32767 {
+ if *e.RuleNumber == awsDefaultAclRuleNumber {
continue
}
@@ -285,6 +285,7 @@ func resourceAwsNetworkAclUpdate(d *schema.ResourceData, meta interface{}) error
if err != nil {
return fmt.Errorf("Failed to find acl association: acl %s with subnet %s: %s", d.Id(), r, err)
}
+ log.Printf("DEBUG] Replacing Network Acl Association (%s) with Default Network ACL ID (%s)", *association.NetworkAclAssociationId, *defaultAcl.NetworkAclId)
_, err = conn.ReplaceNetworkAclAssociation(&ec2.ReplaceNetworkAclAssociationInput{
AssociationId: association.NetworkAclAssociationId,
NetworkAclId: defaultAcl.NetworkAclId,
@@ -324,7 +325,6 @@ func resourceAwsNetworkAclUpdate(d *schema.ResourceData, meta interface{}) error
}
func updateNetworkAclEntries(d *schema.ResourceData, entryType string, conn *ec2.EC2) error {
-
if d.HasChange(entryType) {
o, n := d.GetChange(entryType)
@@ -343,16 +343,16 @@ func updateNetworkAclEntries(d *schema.ResourceData, entryType string, conn *ec2
return err
}
for _, remove := range toBeDeleted {
-
// AWS includes default rules with all network ACLs that can be
// neither modified nor destroyed. They have a custom rule
// number that is out of bounds for any other rule. If we
// encounter it, just continue. There's no work to be done.
- if *remove.RuleNumber == 32767 {
+ if *remove.RuleNumber == awsDefaultAclRuleNumber {
continue
}
// Delete old Acl
+ log.Printf("[DEBUG] Destroying Network ACL Entry number (%d)", int(*remove.RuleNumber))
_, err := conn.DeleteNetworkAclEntry(&ec2.DeleteNetworkAclEntryInput{
NetworkAclId: aws.String(d.Id()),
RuleNumber: remove.RuleNumber,
@@ -455,12 +455,30 @@ func resourceAwsNetworkAclDelete(d *schema.ResourceData, meta interface{}) error
}
for _, a := range associations {
+ log.Printf("DEBUG] Replacing Network Acl Association (%s) with Default Network ACL ID (%s)", *a.NetworkAclAssociationId, *defaultAcl.NetworkAclId)
_, replaceErr := conn.ReplaceNetworkAclAssociation(&ec2.ReplaceNetworkAclAssociationInput{
AssociationId: a.NetworkAclAssociationId,
NetworkAclId: defaultAcl.NetworkAclId,
})
if replaceErr != nil {
- log.Printf("[ERR] Non retryable error in replacing associtions for Network ACL (%s): %s", d.Id(), replaceErr)
+ if replaceEc2err, ok := replaceErr.(awserr.Error); ok {
+ // It's possible that during an attempt to replace this
+ // association, the Subnet in question has already been moved to
+ // another ACL. This can happen if you're destroying a network acl
+ // and simultaneously re-associating its subnet(s) with another
+ // ACL; Terraform may have already re-associated the subnet(s) by
+ // the time we attempt to destroy them, even between the time we
+ // list them and then try to destroy them. In this case, the
+ // association we're trying to replace will no longer exist and
+ // this call will fail. Here we trap that error and fail
+ // gracefully; the association we tried to replace is gone, and we
+ // trust that something else has taken ownership.
+ if replaceEc2err.Code() == "InvalidAssociationID.NotFound" {
+ log.Printf("[WARN] Network Association (%s) no longer found; Network Association likely updated or removed externally, removing from state", *a.NetworkAclAssociationId)
+ continue
+ }
+ }
+ log.Printf("[ERR] Non retry-able error in replacing associations for Network ACL (%s): %s", d.Id(), replaceErr)
return resource.NonRetryableError(replaceErr)
}
}
diff --git a/builtin/providers/aws/resource_aws_network_acl_test.go b/builtin/providers/aws/resource_aws_network_acl_test.go
index bce803a96933..4c54c84de361 100644
--- a/builtin/providers/aws/resource_aws_network_acl_test.go
+++ b/builtin/providers/aws/resource_aws_network_acl_test.go
@@ -15,9 +15,10 @@ func TestAccAWSNetworkAcl_EgressAndIngressRules(t *testing.T) {
var networkAcl ec2.NetworkAcl
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSNetworkAclDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_network_acl.bar",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSNetworkAclDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSNetworkAclEgressNIngressConfig,
@@ -57,9 +58,10 @@ func TestAccAWSNetworkAcl_OnlyIngressRules_basic(t *testing.T) {
var networkAcl ec2.NetworkAcl
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSNetworkAclDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_network_acl.foos",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSNetworkAclDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSNetworkAclIngressConfig,
@@ -88,9 +90,10 @@ func TestAccAWSNetworkAcl_OnlyIngressRules_update(t *testing.T) {
var networkAcl ec2.NetworkAcl
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSNetworkAclDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_network_acl.foos",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSNetworkAclDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSNetworkAclIngressConfig,
@@ -142,9 +145,10 @@ func TestAccAWSNetworkAcl_OnlyEgressRules(t *testing.T) {
var networkAcl ec2.NetworkAcl
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSNetworkAclDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_network_acl.bond",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSNetworkAclDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSNetworkAclEgressConfig,
@@ -160,9 +164,10 @@ func TestAccAWSNetworkAcl_OnlyEgressRules(t *testing.T) {
func TestAccAWSNetworkAcl_SubnetChange(t *testing.T) {
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSNetworkAclDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_network_acl.bar",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSNetworkAclDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSNetworkAclSubnetConfig,
@@ -196,9 +201,10 @@ func TestAccAWSNetworkAcl_Subnets(t *testing.T) {
}
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSNetworkAclDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_network_acl.bar",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSNetworkAclDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSNetworkAclSubnet_SubnetIds,
@@ -385,7 +391,8 @@ resource "aws_network_acl" "foos" {
from_port = 443
to_port = 443
}
- subnet_id = "${aws_subnet.blob.id}"
+
+ subnet_ids = ["${aws_subnet.blob.id}"]
}
`
const testAccAWSNetworkAclIngressConfigChange = `
@@ -410,7 +417,7 @@ resource "aws_network_acl" "foos" {
from_port = 0
to_port = 22
}
- subnet_id = "${aws_subnet.blob.id}"
+ subnet_ids = ["${aws_subnet.blob.id}"]
}
`
@@ -522,11 +529,11 @@ resource "aws_subnet" "new" {
}
resource "aws_network_acl" "roll" {
vpc_id = "${aws_vpc.foo.id}"
- subnet_id = "${aws_subnet.new.id}"
+ subnet_ids = ["${aws_subnet.new.id}"]
}
resource "aws_network_acl" "bar" {
vpc_id = "${aws_vpc.foo.id}"
- subnet_id = "${aws_subnet.old.id}"
+ subnet_ids = ["${aws_subnet.old.id}"]
}
`
@@ -549,7 +556,7 @@ resource "aws_subnet" "new" {
}
resource "aws_network_acl" "bar" {
vpc_id = "${aws_vpc.foo.id}"
- subnet_id = "${aws_subnet.new.id}"
+ subnet_ids = ["${aws_subnet.new.id}"]
}
`
@@ -622,7 +629,7 @@ resource "aws_subnet" "four" {
resource "aws_network_acl" "bar" {
vpc_id = "${aws_vpc.foo.id}"
subnet_ids = [
- "${aws_subnet.one.id}",
+ "${aws_subnet.one.id}",
"${aws_subnet.three.id}",
"${aws_subnet.four.id}",
]
diff --git a/builtin/providers/aws/resource_aws_network_interface_test.go b/builtin/providers/aws/resource_aws_network_interface_test.go
index b19d6948db77..f7d72ec00b28 100644
--- a/builtin/providers/aws/resource_aws_network_interface_test.go
+++ b/builtin/providers/aws/resource_aws_network_interface_test.go
@@ -15,9 +15,10 @@ func TestAccAWSENI_basic(t *testing.T) {
var conf ec2.NetworkInterface
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSENIDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_network_interface.bar",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSENIDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSENIConfig,
@@ -40,9 +41,10 @@ func TestAccAWSENI_updatedDescription(t *testing.T) {
var conf ec2.NetworkInterface
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSENIDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_network_interface.bar",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSENIDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSENIConfig,
@@ -69,9 +71,10 @@ func TestAccAWSENI_attached(t *testing.T) {
var conf ec2.NetworkInterface
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSENIDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_network_interface.bar",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSENIDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSENIConfigWithAttachment,
@@ -92,9 +95,10 @@ func TestAccAWSENI_ignoreExternalAttachment(t *testing.T) {
var conf ec2.NetworkInterface
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSENIDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_network_interface.bar",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSENIDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSENIConfigExternalAttachment,
@@ -112,9 +116,10 @@ func TestAccAWSENI_sourceDestCheck(t *testing.T) {
var conf ec2.NetworkInterface
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSENIDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_network_interface.bar",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSENIDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSENIConfigWithSourceDestCheck,
@@ -132,9 +137,10 @@ func TestAccAWSENI_computedIPs(t *testing.T) {
var conf ec2.NetworkInterface
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSENIDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_network_interface.bar",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSENIDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSENIConfigWithNoPrivateIPs,
diff --git a/builtin/providers/aws/resource_aws_opsworks_application.go b/builtin/providers/aws/resource_aws_opsworks_application.go
new file mode 100644
index 000000000000..cf63c3b2344e
--- /dev/null
+++ b/builtin/providers/aws/resource_aws_opsworks_application.go
@@ -0,0 +1,603 @@
+package aws
+
+import (
+ "fmt"
+ "log"
+ "strings"
+ "time"
+
+ "github.com/aws/aws-sdk-go/aws"
+ "github.com/aws/aws-sdk-go/aws/awserr"
+ "github.com/aws/aws-sdk-go/service/opsworks"
+ "github.com/hashicorp/terraform/helper/resource"
+ "github.com/hashicorp/terraform/helper/schema"
+)
+
+func resourceAwsOpsworksApplication() *schema.Resource {
+ return &schema.Resource{
+
+ Create: resourceAwsOpsworksApplicationCreate,
+ Read: resourceAwsOpsworksApplicationRead,
+ Update: resourceAwsOpsworksApplicationUpdate,
+ Delete: resourceAwsOpsworksApplicationDelete,
+ Schema: map[string]*schema.Schema{
+ "id": &schema.Schema{
+ Type: schema.TypeString,
+ Computed: true,
+ },
+ "name": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ },
+ "short_name": &schema.Schema{
+ Type: schema.TypeString,
+ Computed: true,
+ Optional: true,
+ },
+ // aws-flow-ruby | java | rails | php | nodejs | static | other
+ "type": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ },
+ "stack_id": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ },
+ // TODO: the following 4 vals are really part of the Attributes array. We should validate that only ones relevant to the chosen type are set, perhaps. (what is the default type? how do they map?)
+ "document_root": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ //Default: "public",
+ },
+ "rails_env": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ //Default: "production",
+ },
+ "auto_bundle_on_deploy": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ //Default: true,
+ },
+ "aws_flow_ruby_settings": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ },
+ "app_source": &schema.Schema{
+ Type: schema.TypeList,
+ Optional: true,
+ Computed: true,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "type": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ },
+
+ "url": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ },
+
+ "username": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ },
+
+ "password": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ },
+
+ "revision": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ },
+
+ "ssh_key": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ },
+ },
+ },
+ },
+ // AutoSelectOpsworksMysqlInstance, OpsworksMysqlInstance, or RdsDbInstance.
+ // anything besides auto select will fail if the instance doesn't exist
+ // XXX: validation?
+ "data_source_type": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ },
+ "data_source_database_name": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ },
+ "data_source_arn": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ },
+ "description": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ },
+ "domains": &schema.Schema{
+ Type: schema.TypeList,
+ Optional: true,
+ Elem: &schema.Schema{Type: schema.TypeString},
+ },
+ "environment": &schema.Schema{
+ Type: schema.TypeSet,
+ Optional: true,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "key": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ },
+ "value": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ },
+ "secure": &schema.Schema{
+ Type: schema.TypeBool,
+ Optional: true,
+ Default: true,
+ },
+ },
+ },
+ },
+ "enable_ssl": &schema.Schema{
+ Type: schema.TypeBool,
+ Optional: true,
+ Default: false,
+ },
+ "ssl_configuration": &schema.Schema{
+ Type: schema.TypeList,
+ Optional: true,
+ //Computed: true,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "certificate": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ StateFunc: func(v interface{}) string {
+ switch v.(type) {
+ case string:
+ return strings.TrimSpace(v.(string))
+ default:
+ return ""
+ }
+ },
+ },
+ "private_key": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ StateFunc: func(v interface{}) string {
+ switch v.(type) {
+ case string:
+ return strings.TrimSpace(v.(string))
+ default:
+ return ""
+ }
+ },
+ },
+ "chain": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ StateFunc: func(v interface{}) string {
+ switch v.(type) {
+ case string:
+ return strings.TrimSpace(v.(string))
+ default:
+ return ""
+ }
+ },
+ },
+ },
+ },
+ },
+ },
+ }
+}
+
+func resourceAwsOpsworksApplicationValidate(d *schema.ResourceData) error {
+ appSourceCount := d.Get("app_source.#").(int)
+ if appSourceCount > 1 {
+ return fmt.Errorf("Only one app_source is permitted.")
+ }
+
+ sslCount := d.Get("ssl_configuration.#").(int)
+ if sslCount > 1 {
+ return fmt.Errorf("Only one ssl_configuration is permitted.")
+ }
+
+ if d.Get("type").(string) == opsworks.AppTypeRails {
+ if _, ok := d.GetOk("rails_env"); !ok {
+ return fmt.Errorf("Set rails_env must be set if type is set to rails.")
+ }
+ }
+ switch d.Get("type").(string) {
+ case opsworks.AppTypeStatic:
+ case opsworks.AppTypeRails:
+ case opsworks.AppTypePhp:
+ case opsworks.AppTypeOther:
+ case opsworks.AppTypeNodejs:
+ case opsworks.AppTypeJava:
+ case opsworks.AppTypeAwsFlowRuby:
+ log.Printf("[DEBUG] type supported")
+ default:
+ return fmt.Errorf("opsworks_application.type must be one of %s, %s, %s, %s, %s, %s, %s",
+ opsworks.AppTypeStatic,
+ opsworks.AppTypeRails,
+ opsworks.AppTypePhp,
+ opsworks.AppTypeOther,
+ opsworks.AppTypeNodejs,
+ opsworks.AppTypeJava,
+ opsworks.AppTypeAwsFlowRuby)
+ }
+
+ return nil
+}
+
+func resourceAwsOpsworksApplicationRead(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*AWSClient).opsworksconn
+
+ req := &opsworks.DescribeAppsInput{
+ AppIds: []*string{
+ aws.String(d.Id()),
+ },
+ }
+
+ log.Printf("[DEBUG] Reading OpsWorks app: %s", d.Id())
+
+ resp, err := client.DescribeApps(req)
+ if err != nil {
+ if awserr, ok := err.(awserr.Error); ok {
+ if awserr.Code() == "ResourceNotFoundException" {
+ log.Printf("[INFO] App not found: %s", d.Id())
+ d.SetId("")
+ return nil
+ }
+ }
+ return err
+ }
+
+ app := resp.Apps[0]
+
+ d.Set("name", app.Name)
+ d.Set("stack_id", app.StackId)
+ d.Set("type", app.Type)
+ d.Set("description", app.Description)
+ d.Set("domains", flattenStringList(app.Domains))
+ d.Set("enable_ssl", app.EnableSsl)
+ resourceAwsOpsworksSetApplicationSsl(d, app.SslConfiguration)
+ resourceAwsOpsworksSetApplicationSource(d, app.AppSource)
+ resourceAwsOpsworksSetApplicationDataSources(d, app.DataSources)
+ resourceAwsOpsworksSetApplicationEnvironmentVariable(d, app.Environment)
+ resourceAwsOpsworksSetApplicationAttributes(d, app.Attributes)
+ return nil
+}
+
+func resourceAwsOpsworksApplicationCreate(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*AWSClient).opsworksconn
+
+ err := resourceAwsOpsworksApplicationValidate(d)
+ if err != nil {
+ return err
+ }
+
+ req := &opsworks.CreateAppInput{
+ Name: aws.String(d.Get("name").(string)),
+ Shortname: aws.String(d.Get("short_name").(string)),
+ StackId: aws.String(d.Get("stack_id").(string)),
+ Type: aws.String(d.Get("type").(string)),
+ Description: aws.String(d.Get("description").(string)),
+ Domains: expandStringList(d.Get("domains").([]interface{})),
+ EnableSsl: aws.Bool(d.Get("enable_ssl").(bool)),
+ SslConfiguration: resourceAwsOpsworksApplicationSsl(d),
+ AppSource: resourceAwsOpsworksApplicationSource(d),
+ DataSources: resourceAwsOpsworksApplicationDataSources(d),
+ Environment: resourceAwsOpsworksApplicationEnvironmentVariable(d),
+ Attributes: resourceAwsOpsworksApplicationAttributes(d),
+ }
+
+ var resp *opsworks.CreateAppOutput
+ err = resource.Retry(2*time.Minute, func() *resource.RetryError {
+ var cerr error
+ resp, cerr = client.CreateApp(req)
+ if cerr != nil {
+ log.Printf("[INFO] client error")
+ if opserr, ok := cerr.(awserr.Error); ok {
+ // XXX: handle errors
+ log.Printf("[ERROR] OpsWorks error: %s message: %s", opserr.Code(), opserr.Message())
+ return resource.RetryableError(cerr)
+ }
+ return resource.NonRetryableError(cerr)
+ }
+ return nil
+ })
+
+ if err != nil {
+ return err
+ }
+
+ appID := *resp.AppId
+ d.SetId(appID)
+ d.Set("id", appID)
+
+ return resourceAwsOpsworksApplicationRead(d, meta)
+}
+
+func resourceAwsOpsworksApplicationUpdate(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*AWSClient).opsworksconn
+
+ req := &opsworks.UpdateAppInput{
+ AppId: aws.String(d.Id()),
+ Name: aws.String(d.Get("name").(string)),
+ Type: aws.String(d.Get("type").(string)),
+ Description: aws.String(d.Get("description").(string)),
+ Domains: expandStringList(d.Get("domains").([]interface{})),
+ EnableSsl: aws.Bool(d.Get("enable_ssl").(bool)),
+ SslConfiguration: resourceAwsOpsworksApplicationSsl(d),
+ AppSource: resourceAwsOpsworksApplicationSource(d),
+ DataSources: resourceAwsOpsworksApplicationDataSources(d),
+ Environment: resourceAwsOpsworksApplicationEnvironmentVariable(d),
+ Attributes: resourceAwsOpsworksApplicationAttributes(d),
+ }
+
+ log.Printf("[DEBUG] Updating OpsWorks layer: %s", d.Id())
+
+ var resp *opsworks.UpdateAppOutput
+ err := resource.Retry(2*time.Minute, func() *resource.RetryError {
+ var cerr error
+ resp, cerr = client.UpdateApp(req)
+ if cerr != nil {
+ log.Printf("[INFO] client error")
+ if opserr, ok := cerr.(awserr.Error); ok {
+ // XXX: handle errors
+ log.Printf("[ERROR] OpsWorks error: %s message: %s", opserr.Code(), opserr.Message())
+ return resource.NonRetryableError(cerr)
+ }
+ return resource.RetryableError(cerr)
+ }
+ return nil
+ })
+
+ if err != nil {
+ return err
+ }
+ return resourceAwsOpsworksApplicationRead(d, meta)
+}
+
+func resourceAwsOpsworksApplicationDelete(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*AWSClient).opsworksconn
+
+ req := &opsworks.DeleteAppInput{
+ AppId: aws.String(d.Id()),
+ }
+
+ log.Printf("[DEBUG] Deleting OpsWorks application: %s", d.Id())
+
+ _, err := client.DeleteApp(req)
+ return err
+}
+
+func resourceAwsOpsworksSetApplicationEnvironmentVariable(d *schema.ResourceData, v []*opsworks.EnvironmentVariable) {
+ log.Printf("[DEBUG] envs: %s %d", v, len(v))
+ if len(v) == 0 {
+ d.Set("environment", nil)
+ return
+ }
+ newValue := make([]*map[string]interface{}, len(v))
+
+ for i := 0; i < len(v); i++ {
+ config := v[i]
+ data := make(map[string]interface{})
+ newValue[i] = &data
+
+ if config.Key != nil {
+ data["key"] = *config.Key
+ }
+ if config.Value != nil {
+ data["value"] = *config.Value
+ }
+ if config.Secure != nil {
+
+ if bool(*config.Secure) {
+ data["secure"] = &opsworksTrueString
+ } else {
+ data["secure"] = &opsworksFalseString
+ }
+ }
+ log.Printf("[DEBUG] v: %s", data)
+ }
+
+ d.Set("environment", newValue)
+}
+
+func resourceAwsOpsworksApplicationEnvironmentVariable(d *schema.ResourceData) []*opsworks.EnvironmentVariable {
+ environmentVariables := d.Get("environment").(*schema.Set).List()
+ result := make([]*opsworks.EnvironmentVariable, len(environmentVariables))
+
+ for i := 0; i < len(environmentVariables); i++ {
+ env := environmentVariables[i].(map[string]interface{})
+
+ result[i] = &opsworks.EnvironmentVariable{
+ Key: aws.String(env["key"].(string)),
+ Value: aws.String(env["value"].(string)),
+ Secure: aws.Bool(env["secure"].(bool)),
+ }
+ }
+ return result
+}
+
+func resourceAwsOpsworksApplicationSource(d *schema.ResourceData) *opsworks.Source {
+ count := d.Get("app_source.#").(int)
+ if count == 0 {
+ return nil
+ }
+
+ return &opsworks.Source{
+ Type: aws.String(d.Get("app_source.0.type").(string)),
+ Url: aws.String(d.Get("app_source.0.url").(string)),
+ Username: aws.String(d.Get("app_source.0.username").(string)),
+ Password: aws.String(d.Get("app_source.0.password").(string)),
+ Revision: aws.String(d.Get("app_source.0.revision").(string)),
+ SshKey: aws.String(d.Get("app_source.0.ssh_key").(string)),
+ }
+}
+
+func resourceAwsOpsworksSetApplicationSource(d *schema.ResourceData, v *opsworks.Source) {
+ nv := make([]interface{}, 0, 1)
+ if v != nil {
+ m := make(map[string]interface{})
+ if v.Type != nil {
+ m["type"] = *v.Type
+ }
+ if v.Url != nil {
+ m["url"] = *v.Url
+ }
+ if v.Username != nil {
+ m["username"] = *v.Username
+ }
+ if v.Password != nil {
+ m["password"] = *v.Password
+ }
+ if v.Revision != nil {
+ m["revision"] = *v.Revision
+ }
+ if v.SshKey != nil {
+ m["ssh_key"] = *v.SshKey
+ }
+ nv = append(nv, m)
+ }
+
+ err := d.Set("app_source", nv)
+ if err != nil {
+ // should never happen
+ panic(err)
+ }
+}
+
+func resourceAwsOpsworksApplicationDataSources(d *schema.ResourceData) []*opsworks.DataSource {
+ arn := d.Get("data_source_arn").(string)
+ databaseName := d.Get("data_source_database_name").(string)
+ databaseType := d.Get("data_source_type").(string)
+
+ result := make([]*opsworks.DataSource, 1)
+
+ if len(arn) > 0 || len(databaseName) > 0 || len(databaseType) > 0 {
+ result[0] = &opsworks.DataSource{
+ Arn: aws.String(arn),
+ DatabaseName: aws.String(databaseName),
+ Type: aws.String(databaseType),
+ }
+ }
+ return result
+}
+
+func resourceAwsOpsworksSetApplicationDataSources(d *schema.ResourceData, v []*opsworks.DataSource) {
+ d.Set("data_source_arn", nil)
+ d.Set("data_source_database_name", nil)
+ d.Set("data_source_type", nil)
+
+ if len(v) == 0 {
+ return
+ }
+
+ d.Set("data_source_arn", v[0].Arn)
+ d.Set("data_source_database_name", v[0].DatabaseName)
+ d.Set("data_source_type", v[0].Type)
+}
+
+func resourceAwsOpsworksApplicationSsl(d *schema.ResourceData) *opsworks.SslConfiguration {
+ count := d.Get("ssl_configuration.#").(int)
+ if count == 0 {
+ return nil
+ }
+
+ return &opsworks.SslConfiguration{
+ PrivateKey: aws.String(d.Get("ssl_configuration.0.private_key").(string)),
+ Certificate: aws.String(d.Get("ssl_configuration.0.certificate").(string)),
+ Chain: aws.String(d.Get("ssl_configuration.0.chain").(string)),
+ }
+}
+
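+// resourceAwsOpsworksSetApplicationSsl writes the SSL configuration returned
+// by the API back into the "ssl_configuration" attribute in state.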
+func resourceAwsOpsworksSetApplicationSsl(d *schema.ResourceData, v *opsworks.SslConfiguration) {
+ nv := make([]interface{}, 0, 1)
+ set := false
+ if v != nil {
+ m := make(map[string]interface{})
+ if v.PrivateKey != nil {
+ m["private_key"] = *v.PrivateKey
+ set = true
+ }
+ if v.Certificate != nil {
+ m["certificate"] = *v.Certificate
+ set = true
+ }
+ if v.Chain != nil {
+ m["chain"] = *v.Chain
+ set = true
+ }
+ if set {
+ nv = append(nv, m)
+ }
+ }
+
+ err := d.Set("ssl_configuration", nv)
+ if err != nil {
+ // should never happen
+ panic(err)
+ }
+}
+
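+// resourceAwsOpsworksApplicationAttributes collects the Rails-specific app
+// attributes; it returns nil for non-Rails application types.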
+func resourceAwsOpsworksApplicationAttributes(d *schema.ResourceData) map[string]*string {
+ if d.Get("type") != opsworks.AppTypeRails {
+ return nil
+ }
+ attributes := make(map[string]*string)
+
+ if val := d.Get("document_root").(string); len(val) > 0 {
+ attributes[opsworks.AppAttributesKeysDocumentRoot] = aws.String(val)
+ }
+ if val := d.Get("aws_flow_ruby_settings").(string); len(val) > 0 {
+ attributes[opsworks.AppAttributesKeysAwsFlowRubySettings] = aws.String(val)
+ }
+ if val := d.Get("rails_env").(string); len(val) > 0 {
+ attributes[opsworks.AppAttributesKeysRailsEnv] = aws.String(val)
+ }
+ if val := d.Get("auto_bundle_on_deploy").(string); len(val) > 0 {
+ if val == "1" {
+ val = "true"
+ } else if val == "0" {
+ val = "false"
+ }
+ attributes[opsworks.AppAttributesKeysAutoBundleOnDeploy] = aws.String(val)
+ }
+
+ return attributes
+}
+
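+// resourceAwsOpsworksSetApplicationAttributes writes the Rails-specific
+// attributes returned by the API back into state, clearing them first.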
+func resourceAwsOpsworksSetApplicationAttributes(d *schema.ResourceData, v map[string]*string) {
+ d.Set("document_root", nil)
+ d.Set("rails_env", nil)
+ d.Set("aws_flow_ruby_settings", nil)
+ d.Set("auto_bundle_on_deploy", nil)
+
+ if d.Get("type") != opsworks.AppTypeRails {
+ return
+ }
+ if val, ok := v[opsworks.AppAttributesKeysDocumentRoot]; ok {
+ d.Set("document_root", val)
+ }
+ if val, ok := v[opsworks.AppAttributesKeysAwsFlowRubySettings]; ok {
+ d.Set("aws_flow_ruby_settings", val)
+ }
+ if val, ok := v[opsworks.AppAttributesKeysRailsEnv]; ok {
+ d.Set("rails_env", val)
+ }
+ if val, ok := v[opsworks.AppAttributesKeysAutoBundleOnDeploy]; ok {
+ d.Set("auto_bundle_on_deploy", val)
+ }
+}
diff --git a/builtin/providers/aws/resource_aws_opsworks_application_test.go b/builtin/providers/aws/resource_aws_opsworks_application_test.go
new file mode 100644
index 000000000000..58a37a2371a5
--- /dev/null
+++ b/builtin/providers/aws/resource_aws_opsworks_application_test.go
@@ -0,0 +1,221 @@
+package aws
+
+import (
+ "fmt"
+ "testing"
+
+ "github.com/aws/aws-sdk-go/aws"
+ "github.com/aws/aws-sdk-go/aws/awserr"
+ "github.com/aws/aws-sdk-go/service/opsworks"
+ "github.com/hashicorp/terraform/helper/resource"
+ "github.com/hashicorp/terraform/terraform"
+)
+
+func TestAccAWSOpsworksApplication(t *testing.T) {
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAwsOpsworksApplicationDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: testAccAwsOpsworksApplicationCreate,
+ Check: resource.ComposeTestCheckFunc(
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_application.tf-acc-app", "name", "tf-ops-acc-application",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_application.tf-acc-app", "type", "other",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_application.tf-acc-app", "enable_ssl", "false",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_application.tf-acc-app", "ssl_configuration", "",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_application.tf-acc-app", "domains", "",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_application.tf-acc-app", "app_source", "",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_application.tf-acc-app", "environment.3077298702.key", "key1",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_application.tf-acc-app", "environment.3077298702.value", "value1",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_application.tf-acc-app", "environment.3077298702.secret", "",
+ ),
+ ),
+ },
+ resource.TestStep{
+ Config: testAccAwsOpsworksApplicationUpdate,
+ Check: resource.ComposeTestCheckFunc(
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_application.tf-acc-app", "name", "tf-ops-acc-application",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_application.tf-acc-app", "type", "rails",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_application.tf-acc-app", "enable_ssl", "true",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_application.tf-acc-app", "ssl_configuration.0.certificate", "-----BEGIN CERTIFICATE-----\nMIIBkDCB+gIJALoScFD0sJq3MA0GCSqGSIb3DQEBBQUAMA0xCzAJBgNVBAYTAkRF\nMB4XDTE1MTIxOTIwMzU1MVoXDTE2MDExODIwMzU1MVowDTELMAkGA1UEBhMCREUw\ngZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAKKQKbTTH/Julz16xY7ArYlzJYCP\nedTCx1bopuryCx/+d1gC94MtRdlPSpQl8mfc9iBdtXbJppp73Qh/DzLzO9Ns25xZ\n+kUQMhbIyLsaCBzuEGLgAaVdGpNvRBw++UoYtd0U7QczFAreTGLH8n8+FIzuI5Mc\n+MJ1TKbbt5gFfRSzAgMBAAEwDQYJKoZIhvcNAQEFBQADgYEALARo96wCDmaHKCaX\nS0IGLGnZCfiIUfCmBxOXBSJxDBwter95QHR0dMGxYIujee5n4vvavpVsqZnfMC3I\nOZWPlwiUJbNIpK+04Bg2vd5m/NMMrvi75RfmyeMtSfq/NrIX2Q3+nyWI7DLq7yZI\nV/YEvOqdAiy5NEWBztHx8HvB9G4=\n-----END CERTIFICATE-----",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_application.tf-acc-app", "ssl_configuration.0.private_key", "-----BEGIN RSA PRIVATE KEY-----\nMIICXQIBAAKBgQCikCm00x/ybpc9esWOwK2JcyWAj3nUwsdW6Kbq8gsf/ndYAveD\nLUXZT0qUJfJn3PYgXbV2yaaae90Ifw8y8zvTbNucWfpFEDIWyMi7Gggc7hBi4AGl\nXRqTb0QcPvlKGLXdFO0HMxQK3kxix/J/PhSM7iOTHPjCdUym27eYBX0UswIDAQAB\nAoGBAIYcrvuqDboguI8U4TUjCkfSAgds1pLLWk79wu8jXkA329d1IyNKT0y3WIye\nPbyoEzmidZmZROQ/+ZsPz8c12Y0DrX73WSVzKNyJeP7XMk9HSzA1D9RX0U0S+5Kh\nFAMc2NEVVFIfQtVtoVmHdKDpnRYtOCHLW9rRpvqOOjd4mYk5AkEAzeiFr1mtlnsa\n67shMxzDaOTAFMchRz6G7aSovvCztxcB63ulFI/w9OTUMdTQ7ff7pet+lVihLc2W\nefIL0HvsjQJBAMocNTKaR/TnsV5GSk2kPAdR+zFP5sQy8sfMy0lEXTylc7zN4ajX\nMeHVoxp+GZgpfDcZ3ya808H1umyXh+xA1j8CQE9x9ZKQYT98RAjL7KVR5btk9w+N\nPTPF1j1+mHUDXfO4ds8qp6jlWKzEVXLcj7ghRADiebaZuaZ4eiSW1SQdjEkCQQC4\nwDhQ3X9RfEpCp3ZcqvjEqEg6t5N3XitYQPjDLN8eBRBbUsgpEy3iBuxl10eGNMX7\niIbYXlwkPYAArDPv3wT5AkAwp4vym+YKmDqh6gseKfRDuJqRiW9yD5A8VGr/w88k\n5rkuduVGP7tK3uIp00Its3aEyKF8mLGWYszVGeeLxAMH\n-----END RSA PRIVATE KEY-----",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_application.tf-acc-app", "domains.0", "example.com",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_application.tf-acc-app", "domains.1", "sub.example.com",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_application.tf-acc-app", "app_source.0.password", "",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_application.tf-acc-app", "app_source.0.revision", "master",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_application.tf-acc-app", "app_source.0.ssh_key", "",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_application.tf-acc-app", "app_source.0.type", "git",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_application.tf-acc-app", "app_source.0.url", "https://github.com/aws/example.git",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_application.tf-acc-app", "app_source.0.username", "",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_application.tf-acc-app", "environment.2107898637.key", "key2",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_application.tf-acc-app", "environment.2107898637.value", "value2",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_application.tf-acc-app", "environment.2107898637.secure", "true",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_application.tf-acc-app", "environment.3077298702.key", "key1",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_application.tf-acc-app", "environment.3077298702.value", "value1",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_application.tf-acc-app", "environment.3077298702.secret", "",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_application.tf-acc-app", "document_root", "root",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_application.tf-acc-app", "auto_bundle_on_deploy", "true",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_application.tf-acc-app", "rails_env", "staging",
+ ),
+ ),
+ },
+ },
+ })
+}
+
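+// testAccCheckAwsOpsworksApplicationDestroy verifies that no OpsWorks apps
+// created by the test remain after destroy.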
+func testAccCheckAwsOpsworksApplicationDestroy(s *terraform.State) error {
+ client := testAccProvider.Meta().(*AWSClient).opsworksconn
+
+ for _, rs := range s.RootModule().Resources {
+ if rs.Type != "aws_opsworks_application" {
+ continue
+ }
+
+ req := &opsworks.DescribeAppsInput{
+ AppIds: []*string{
+ aws.String(rs.Primary.ID),
+ },
+ }
+
+ resp, err := client.DescribeApps(req)
+ if err == nil {
+ if len(resp.Apps) > 0 {
+ return fmt.Errorf("OpsWorks App still exist.")
+ }
+ }
+
+ if awserr, ok := err.(awserr.Error); ok {
+ if awserr.Code() != "ResourceNotFoundException" {
+ return err
+ }
+ }
+ }
+
+ return nil
+}
+
+var testAccAwsOpsworksApplicationCreate = testAccAwsOpsworksStackConfigNoVpcCreate("tf-ops-acc-application") + `
+resource "aws_opsworks_application" "tf-acc-app" {
+ stack_id = "${aws_opsworks_stack.tf-acc.id}"
+ name = "tf-ops-acc-application"
+ type = "other"
+ enable_ssl = false
+ app_source ={
+ type = "other"
+ }
+ environment = { key = "key1" value = "value1" secure = false}
+}
+`
+
+var testAccAwsOpsworksApplicationUpdate = testAccAwsOpsworksStackConfigNoVpcCreate("tf-ops-acc-application") + `
+resource "aws_opsworks_application" "tf-acc-app" {
+ stack_id = "${aws_opsworks_stack.tf-acc.id}"
+ name = "tf-ops-acc-application"
+ type = "rails"
+ domains = ["example.com", "sub.example.com"]
+ enable_ssl = true
+ ssl_configuration = {
+ private_key = < 0 {
+ ebs.Iops = aws.Int64(int64(v))
+ }
+
+ blockDevices = append(blockDevices, &opsworks.BlockDeviceMapping{
+ DeviceName: aws.String(bd["device_name"].(string)),
+ Ebs: ebs,
+ })
+ }
+ }
+
+ if v, ok := d.GetOk("ephemeral_block_device"); ok {
+ vL := v.(*schema.Set).List()
+ for _, v := range vL {
+ bd := v.(map[string]interface{})
+ blockDevices = append(blockDevices, &opsworks.BlockDeviceMapping{
+ DeviceName: aws.String(bd["device_name"].(string)),
+ VirtualName: aws.String(bd["virtual_name"].(string)),
+ })
+ }
+ }
+
+ if v, ok := d.GetOk("root_block_device"); ok {
+ vL := v.(*schema.Set).List()
+ if len(vL) > 1 {
+ return fmt.Errorf("Cannot specify more than one root_block_device.")
+ }
+ for _, v := range vL {
+ bd := v.(map[string]interface{})
+ ebs := &opsworks.EbsBlockDevice{
+ DeleteOnTermination: aws.Bool(bd["delete_on_termination"].(bool)),
+ }
+
+ if v, ok := bd["volume_size"].(int); ok && v != 0 {
+ ebs.VolumeSize = aws.Int64(int64(v))
+ }
+
+ if v, ok := bd["volume_type"].(string); ok && v != "" {
+ ebs.VolumeType = aws.String(v)
+ }
+
+ if v, ok := bd["iops"].(int); ok && v > 0 {
+ ebs.Iops = aws.Int64(int64(v))
+ }
+
+ blockDevices = append(blockDevices, &opsworks.BlockDeviceMapping{
+ DeviceName: aws.String("ROOT_DEVICE"),
+ Ebs: ebs,
+ })
+ }
+ }
+
+ if len(blockDevices) > 0 {
+ req.BlockDeviceMappings = blockDevices
+ }
+
+ log.Printf("[DEBUG] Creating OpsWorks instance")
+
+ var resp *opsworks.CreateInstanceOutput
+
+ resp, err = client.CreateInstance(req)
+ if err != nil {
+ return err
+ }
+
+ if resp.InstanceId == nil {
+ return fmt.Errorf("Error launching instance: no instance returned in response")
+ }
+
+ instanceId := *resp.InstanceId
+ d.SetId(instanceId)
+ d.Set("id", instanceId)
+
+ if v, ok := d.GetOk("state"); ok && v.(string) == "running" {
+ err := startOpsworksInstance(d, meta, false)
+ if err != nil {
+ return err
+ }
+ }
+
+ return resourceAwsOpsworksInstanceRead(d, meta)
+}
+
+func resourceAwsOpsworksInstanceUpdate(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*AWSClient).opsworksconn
+
+ err := resourceAwsOpsworksInstanceValidate(d)
+ if err != nil {
+ return err
+ }
+
+ req := &opsworks.UpdateInstanceInput{
+ AgentVersion: aws.String(d.Get("agent_version").(string)),
+ Architecture: aws.String(d.Get("architecture").(string)),
+ InstanceId: aws.String(d.Get("id").(string)),
+ InstallUpdatesOnBoot: aws.Bool(d.Get("install_updates_on_boot").(bool)),
+ }
+
+ if v, ok := d.GetOk("ami_id"); ok {
+ req.AmiId = aws.String(v.(string))
+ req.Os = aws.String("Custom")
+ }
+
+ if v, ok := d.GetOk("auto_scaling_type"); ok {
+ req.AutoScalingType = aws.String(v.(string))
+ }
+
+ if v, ok := d.GetOk("hostname"); ok {
+ req.Hostname = aws.String(v.(string))
+ }
+
+ if v, ok := d.GetOk("instance_type"); ok {
+ req.InstanceType = aws.String(v.(string))
+ }
+
+ if v, ok := d.GetOk("layer_ids"); ok {
+ req.LayerIds = expandStringList(v.([]interface{}))
+
+ }
+
+ if v, ok := d.GetOk("os"); ok {
+ req.Os = aws.String(v.(string))
+ }
+
+ if v, ok := d.GetOk("ssh_key_name"); ok {
+ req.SshKeyName = aws.String(v.(string))
+ }
+
+ log.Printf("[DEBUG] Updating OpsWorks instance: %s", d.Id())
+
+ _, err = client.UpdateInstance(req)
+ if err != nil {
+ return err
+ }
+
+ var status string
+
+ if v, ok := d.GetOk("status"); ok {
+ status = v.(string)
+ } else {
+ status = "stopped"
+ }
+
+ if v, ok := d.GetOk("state"); ok {
+ state := v.(string)
+ if state == "running" {
+ if status == "stopped" || status == "stopping" || status == "shutting_down" {
+ err := startOpsworksInstance(d, meta, false)
+ if err != nil {
+ return err
+ }
+ }
+ } else {
+ if status != "stopped" && status != "stopping" && status != "shutting_down" {
+ err := stopOpsworksInstance(d, meta, false)
+ if err != nil {
+ return err
+ }
+ }
+ }
+ }
+
+ return resourceAwsOpsworksInstanceRead(d, meta)
+}
+
+func resourceAwsOpsworksInstanceDelete(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*AWSClient).opsworksconn
+
+ if v, ok := d.GetOk("status"); ok && v.(string) != "stopped" {
+ err := stopOpsworksInstance(d, meta, true)
+ if err != nil {
+ return err
+ }
+ }
+
+ req := &opsworks.DeleteInstanceInput{
+ InstanceId: aws.String(d.Id()),
+ DeleteElasticIp: aws.Bool(d.Get("delete_eip").(bool)),
+ DeleteVolumes: aws.Bool(d.Get("delete_ebs").(bool)),
+ }
+
+ log.Printf("[DEBUG] Deleting OpsWorks instance: %s", d.Id())
+
+ _, err := client.DeleteInstance(req)
+ if err != nil {
+ return err
+ }
+
+ d.SetId("")
+ return nil
+}
+
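+// startOpsworksInstance starts the instance and, if wait is true, blocks
+// until it reaches the "online" status.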
+func startOpsworksInstance(d *schema.ResourceData, meta interface{}, wait bool) error {
+ client := meta.(*AWSClient).opsworksconn
+
+ instanceId := d.Get("id").(string)
+
+ req := &opsworks.StartInstanceInput{
+ InstanceId: aws.String(instanceId),
+ }
+
+ log.Printf("[DEBUG] Starting OpsWorks instance: %s", instanceId)
+
+ _, err := client.StartInstance(req)
+
+ if err != nil {
+ return err
+ }
+
+ if wait {
+ log.Printf("[DEBUG] Waiting for instance (%s) to become running", instanceId)
+
+ stateConf := &resource.StateChangeConf{
+ Pending: []string{"requested", "pending", "booting", "running_setup"},
+ Target: []string{"online"},
+ Refresh: OpsworksInstanceStateRefreshFunc(client, instanceId),
+ Timeout: 10 * time.Minute,
+ Delay: 10 * time.Second,
+ MinTimeout: 3 * time.Second,
+ }
+ _, err = stateConf.WaitForState()
+ if err != nil {
+ return fmt.Errorf("Error waiting for instance (%s) to become stopped: %s",
+ instanceId, err)
+ }
+ }
+
+ return nil
+}
+
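+// stopOpsworksInstance stops the instance and, if wait is true, blocks until
+// it reaches the "stopped" status.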
+func stopOpsworksInstance(d *schema.ResourceData, meta interface{}, wait bool) error {
+ client := meta.(*AWSClient).opsworksconn
+
+ instanceId := d.Get("id").(string)
+
+ req := &opsworks.StopInstanceInput{
+ InstanceId: aws.String(instanceId),
+ }
+
+ log.Printf("[DEBUG] Stopping OpsWorks instance: %s", instanceId)
+
+ _, err := client.StopInstance(req)
+
+ if err != nil {
+ return err
+ }
+
+ if wait {
+ log.Printf("[DEBUG] Waiting for instance (%s) to become stopped", instanceId)
+
+ stateConf := &resource.StateChangeConf{
+ Pending: []string{"stopping", "terminating", "shutting_down", "terminated"},
+ Target: []string{"stopped"},
+ Refresh: OpsworksInstanceStateRefreshFunc(client, instanceId),
+ Timeout: 10 * time.Minute,
+ Delay: 10 * time.Second,
+ MinTimeout: 3 * time.Second,
+ }
+ _, err = stateConf.WaitForState()
+ if err != nil {
+ return fmt.Errorf("Error waiting for instance (%s) to become stopped: %s",
+ instanceId, err)
+ }
+ }
+
+ return nil
+}
+
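+// readOpsworksBlockDevices flattens the instance's block device mappings into
+// the "root", "ebs" and "ephemeral" groups used by the schema.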
+func readOpsworksBlockDevices(d *schema.ResourceData, instance *opsworks.Instance, meta interface{}) (
+ map[string]interface{}, error) {
+
+ blockDevices := make(map[string]interface{})
+ blockDevices["ebs"] = make([]map[string]interface{}, 0)
+ blockDevices["ephemeral"] = make([]map[string]interface{}, 0)
+ blockDevices["root"] = nil
+
+ if len(instance.BlockDeviceMappings) == 0 {
+ return nil, nil
+ }
+
+ for _, bdm := range instance.BlockDeviceMappings {
+ bd := make(map[string]interface{})
+ if bdm.Ebs != nil && bdm.Ebs.DeleteOnTermination != nil {
+ bd["delete_on_termination"] = *bdm.Ebs.DeleteOnTermination
+ }
+ if bdm.Ebs != nil && bdm.Ebs.VolumeSize != nil {
+ bd["volume_size"] = *bdm.Ebs.VolumeSize
+ }
+ if bdm.Ebs != nil && bdm.Ebs.VolumeType != nil {
+ bd["volume_type"] = *bdm.Ebs.VolumeType
+ }
+ if bdm.Ebs != nil && bdm.Ebs.Iops != nil {
+ bd["iops"] = *bdm.Ebs.Iops
+ }
+ if bdm.DeviceName != nil && *bdm.DeviceName == "ROOT_DEVICE" {
+ blockDevices["root"] = bd
+ } else {
+ if bdm.DeviceName != nil {
+ bd["device_name"] = *bdm.DeviceName
+ }
+ if bdm.VirtualName != nil {
+ bd["virtual_name"] = *bdm.VirtualName
+ blockDevices["ephemeral"] = append(blockDevices["ephemeral"].([]map[string]interface{}), bd)
+ } else {
+ if bdm.Ebs != nil && bdm.Ebs.SnapshotId != nil {
+ bd["snapshot_id"] = *bdm.Ebs.SnapshotId
+ }
+ blockDevices["ebs"] = append(blockDevices["ebs"].([]map[string]interface{}), bd)
+ }
+ }
+ }
+ return blockDevices, nil
+}
+
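+// OpsworksInstanceStateRefreshFunc returns a StateRefreshFunc that reports
+// the current OpsWorks status of the given instance.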
+func OpsworksInstanceStateRefreshFunc(conn *opsworks.OpsWorks, instanceID string) resource.StateRefreshFunc {
+ return func() (interface{}, string, error) {
+ resp, err := conn.DescribeInstances(&opsworks.DescribeInstancesInput{
+ InstanceIds: []*string{aws.String(instanceID)},
+ })
+ if err != nil {
+ if awserr, ok := err.(awserr.Error); ok && awserr.Code() == "ResourceNotFoundException" {
+ // Set this to nil as if we didn't find anything.
+ resp = nil
+ } else {
+ log.Printf("Error on OpsworksInstanceStateRefresh: %s", err)
+ return nil, "", err
+ }
+ }
+
+ if resp == nil || len(resp.Instances) == 0 {
+ // Sometimes AWS just has consistency issues and doesn't see
+ // our instance yet. Return an empty state.
+ return nil, "", nil
+ }
+
+ i := resp.Instances[0]
+ return i, *i.Status, nil
+ }
+}
diff --git a/builtin/providers/aws/resource_aws_opsworks_instance_test.go b/builtin/providers/aws/resource_aws_opsworks_instance_test.go
new file mode 100644
index 000000000000..e79f8bb45dfc
--- /dev/null
+++ b/builtin/providers/aws/resource_aws_opsworks_instance_test.go
@@ -0,0 +1,274 @@
+package aws
+
+import (
+ "fmt"
+ "testing"
+
+ "github.com/aws/aws-sdk-go/aws"
+ "github.com/aws/aws-sdk-go/aws/awserr"
+ "github.com/aws/aws-sdk-go/service/opsworks"
+ "github.com/hashicorp/terraform/helper/acctest"
+ "github.com/hashicorp/terraform/helper/resource"
+ "github.com/hashicorp/terraform/terraform"
+)
+
+func TestAccAWSOpsworksInstance(t *testing.T) {
+ stackName := fmt.Sprintf("tf-%d", acctest.RandInt())
+ var opsinst opsworks.Instance
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAwsOpsworksInstanceDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: testAccAwsOpsworksInstanceConfigCreate(stackName),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckAWSOpsworksInstanceExists(
+ "aws_opsworks_instance.tf-acc", &opsinst),
+ testAccCheckAWSOpsworksInstanceAttributes(&opsinst),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_instance.tf-acc", "hostname", "tf-acc1",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_instance.tf-acc", "instance_type", "t2.micro",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_instance.tf-acc", "state", "stopped",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_instance.tf-acc", "layer_ids.#", "1",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_instance.tf-acc", "install_updates_on_boot", "true",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_instance.tf-acc", "architecture", "x86_64",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_instance.tf-acc", "os", "Amazon Linux 2014.09", // inherited from opsworks_stack_test
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_instance.tf-acc", "root_device_type", "ebs", // inherited from opsworks_stack_test
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_instance.tf-acc", "availability_zone", "us-west-2a", // inherited from opsworks_stack_test
+ ),
+ ),
+ },
+ resource.TestStep{
+ Config: testAccAwsOpsworksInstanceConfigUpdate(stackName),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckAWSOpsworksInstanceExists(
+ "aws_opsworks_instance.tf-acc", &opsinst),
+ testAccCheckAWSOpsworksInstanceAttributes(&opsinst),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_instance.tf-acc", "hostname", "tf-acc1",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_instance.tf-acc", "instance_type", "t2.small",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_instance.tf-acc", "layer_ids.#", "2",
+ ),
+ resource.TestCheckResourceAttr(
+ "aws_opsworks_instance.tf-acc", "os", "Amazon Linux 2015.09",
+ ),
+ ),
+ },
+ },
+ })
+}
+
+func testAccCheckAWSOpsworksInstanceExists(
+ n string, opsinst *opsworks.Instance) resource.TestCheckFunc {
+ return func(s *terraform.State) error {
+ rs, ok := s.RootModule().Resources[n]
+ if !ok {
+ return fmt.Errorf("Not found: %s", n)
+ }
+
+ if rs.Primary.ID == "" {
+ return fmt.Errorf("No Opsworks Instance is set")
+ }
+
+ conn := testAccProvider.Meta().(*AWSClient).opsworksconn
+
+ params := &opsworks.DescribeInstancesInput{
+ InstanceIds: []*string{&rs.Primary.ID},
+ }
+ resp, err := conn.DescribeInstances(params)
+
+ if err != nil {
+ return err
+ }
+
+ if v := len(resp.Instances); v != 1 {
+ return fmt.Errorf("Expected 1 request returned, got %d", v)
+ }
+
+ *opsinst = *resp.Instances[0]
+
+ return nil
+ }
+}
+
+func testAccCheckAWSOpsworksInstanceAttributes(
+ opsinst *opsworks.Instance) resource.TestCheckFunc {
+ return func(s *terraform.State) error {
+ // Depending on the timing, the state could be requested or stopped
+ if *opsinst.Status != "stopped" && *opsinst.Status != "requested" {
+ return fmt.Errorf("Unexpected request status: %s", *opsinst.Status)
+ }
+ if *opsinst.AvailabilityZone != "us-west-2a" {
+ return fmt.Errorf("Unexpected availability zone: %s", *opsinst.AvailabilityZone)
+ }
+ if *opsinst.Architecture != "x86_64" {
+ return fmt.Errorf("Unexpected architecture: %s", *opsinst.Architecture)
+ }
+ if *opsinst.InfrastructureClass != "ec2" {
+ return fmt.Errorf("Unexpected infrastructure class: %s", *opsinst.InfrastructureClass)
+ }
+ if *opsinst.RootDeviceType != "ebs" {
+ return fmt.Errorf("Unexpected root device type: %s", *opsinst.RootDeviceType)
+ }
+ if *opsinst.VirtualizationType != "hvm" {
+ return fmt.Errorf("Unexpected virtualization type: %s", *opsinst.VirtualizationType)
+ }
+ return nil
+ }
+}
+
+func testAccCheckAwsOpsworksInstanceDestroy(s *terraform.State) error {
+ opsworksconn := testAccProvider.Meta().(*AWSClient).opsworksconn
+ for _, rs := range s.RootModule().Resources {
+ if rs.Type != "aws_opsworks_instance" {
+ continue
+ }
+ req := &opsworks.DescribeInstancesInput{
+ InstanceIds: []*string{
+ aws.String(rs.Primary.ID),
+ },
+ }
+
+ _, err := opsworksconn.DescribeInstances(req)
+ if err != nil {
+ if awserr, ok := err.(awserr.Error); ok {
+ if awserr.Code() == "ResourceNotFoundException" {
+ // not found, good to go
+ return nil
+ }
+ }
+ return err
+ }
+ }
+
+ return fmt.Errorf("Fall through error on OpsWorks instance test")
+}
+
+func testAccAwsOpsworksInstanceConfigCreate(name string) string {
+ return fmt.Sprintf(`
+resource "aws_security_group" "tf-ops-acc-web" {
+ name = "%s-web"
+ ingress {
+ from_port = 80
+ to_port = 80
+ protocol = "tcp"
+ cidr_blocks = ["0.0.0.0/0"]
+ }
+}
+
+resource "aws_security_group" "tf-ops-acc-php" {
+ name = "%s-php"
+ ingress {
+ from_port = 8080
+ to_port = 8080
+ protocol = "tcp"
+ cidr_blocks = ["0.0.0.0/0"]
+ }
+}
+
+resource "aws_opsworks_static_web_layer" "tf-acc" {
+ stack_id = "${aws_opsworks_stack.tf-acc.id}"
+
+ custom_security_group_ids = [
+ "${aws_security_group.tf-ops-acc-web.id}",
+ ]
+}
+
+resource "aws_opsworks_php_app_layer" "tf-acc" {
+ stack_id = "${aws_opsworks_stack.tf-acc.id}"
+
+ custom_security_group_ids = [
+ "${aws_security_group.tf-ops-acc-php.id}",
+ ]
+}
+
+resource "aws_opsworks_instance" "tf-acc" {
+ stack_id = "${aws_opsworks_stack.tf-acc.id}"
+ layer_ids = [
+ "${aws_opsworks_static_web_layer.tf-acc.id}",
+ ]
+ instance_type = "t2.micro"
+ state = "stopped"
+ hostname = "tf-acc1"
+}
+
+%s
+
+`, name, name, testAccAwsOpsworksStackConfigVpcCreate(name))
+}
+
+func testAccAwsOpsworksInstanceConfigUpdate(name string) string {
+ return fmt.Sprintf(`
+resource "aws_security_group" "tf-ops-acc-web" {
+ name = "%s-web"
+ ingress {
+ from_port = 80
+ to_port = 80
+ protocol = "tcp"
+ cidr_blocks = ["0.0.0.0/0"]
+ }
+}
+
+resource "aws_security_group" "tf-ops-acc-php" {
+ name = "%s-php"
+ ingress {
+ from_port = 8080
+ to_port = 8080
+ protocol = "tcp"
+ cidr_blocks = ["0.0.0.0/0"]
+ }
+}
+
+resource "aws_opsworks_static_web_layer" "tf-acc" {
+ stack_id = "${aws_opsworks_stack.tf-acc.id}"
+
+ custom_security_group_ids = [
+ "${aws_security_group.tf-ops-acc-web.id}",
+ ]
+}
+
+resource "aws_opsworks_php_app_layer" "tf-acc" {
+ stack_id = "${aws_opsworks_stack.tf-acc.id}"
+
+ custom_security_group_ids = [
+ "${aws_security_group.tf-ops-acc-php.id}",
+ ]
+}
+
+resource "aws_opsworks_instance" "tf-acc" {
+ stack_id = "${aws_opsworks_stack.tf-acc.id}"
+ layer_ids = [
+ "${aws_opsworks_static_web_layer.tf-acc.id}",
+ "${aws_opsworks_php_app_layer.tf-acc.id}",
+ ]
+ instance_type = "t2.small"
+ state = "stopped"
+ hostname = "tf-acc1"
+ os = "Amazon Linux 2015.09"
+}
+
+%s
+
+`, name, name, testAccAwsOpsworksStackConfigVpcCreate(name))
+}
diff --git a/builtin/providers/aws/resource_aws_opsworks_stack.go b/builtin/providers/aws/resource_aws_opsworks_stack.go
index c021f16a999f..44748984d286 100644
--- a/builtin/providers/aws/resource_aws_opsworks_stack.go
+++ b/builtin/providers/aws/resource_aws_opsworks_stack.go
@@ -214,7 +214,7 @@ func resourceAwsOpsworksStackCustomCookbooksSource(d *schema.ResourceData) *opsw
func resourceAwsOpsworksSetStackCustomCookbooksSource(d *schema.ResourceData, v *opsworks.Source) {
nv := make([]interface{}, 0, 1)
- if v != nil {
+ if v != nil && v.Type != nil && *v.Type != "" {
m := make(map[string]interface{})
if v.Type != nil {
m["type"] = *v.Type
@@ -225,12 +225,12 @@ func resourceAwsOpsworksSetStackCustomCookbooksSource(d *schema.ResourceData, v
if v.Username != nil {
m["username"] = *v.Username
}
- if v.Password != nil {
- m["password"] = *v.Password
- }
if v.Revision != nil {
m["revision"] = *v.Revision
}
+ // v.Password will, on read, contain the placeholder string
+ // "*****FILTERED*****", so we ignore it on read and let persist
+ // the value already in the state.
nv = append(nv, m)
}
@@ -310,6 +310,10 @@ func resourceAwsOpsworksStackCreate(d *schema.ResourceData, meta interface{}) er
DefaultOs: aws.String(d.Get("default_os").(string)),
UseOpsworksSecurityGroups: aws.Bool(d.Get("use_opsworks_security_groups").(bool)),
}
+ req.ConfigurationManager = &opsworks.StackConfigurationManager{
+ Name: aws.String(d.Get("configuration_manager_name").(string)),
+ Version: aws.String(d.Get("configuration_manager_version").(string)),
+ }
inVpc := false
if vpcId, ok := d.GetOk("vpc_id"); ok {
req.VpcId = aws.String(vpcId.(string))
@@ -341,7 +345,8 @@ func resourceAwsOpsworksStackCreate(d *schema.ResourceData, meta interface{}) er
// Service Role Arn: [...] is not yet propagated, please try again in a couple of minutes
propErr := "not yet propagated"
trustErr := "not the necessary trust relationship"
- if opserr.Code() == "ValidationException" && (strings.Contains(opserr.Message(), trustErr) || strings.Contains(opserr.Message(), propErr)) {
+ validateErr := "validate IAM role permission"
+ if opserr.Code() == "ValidationException" && (strings.Contains(opserr.Message(), trustErr) || strings.Contains(opserr.Message(), propErr) || strings.Contains(opserr.Message(), validateErr)) {
log.Printf("[INFO] Waiting for service IAM role to propagate")
return resource.RetryableError(cerr)
}
diff --git a/builtin/providers/aws/resource_aws_opsworks_stack_test.go b/builtin/providers/aws/resource_aws_opsworks_stack_test.go
index d3e8334fd353..0a23273df012 100644
--- a/builtin/providers/aws/resource_aws_opsworks_stack_test.go
+++ b/builtin/providers/aws/resource_aws_opsworks_stack_test.go
@@ -329,6 +329,8 @@ resource "aws_opsworks_stack" "tf-acc" {
type = "git"
revision = "master"
url = "https://github.com/aws/opsworks-example-cookbooks.git"
+ username = "example"
+ password = "example"
}
resource "aws_iam_role" "opsworks_service" {
name = "%s_opsworks_service"
diff --git a/builtin/providers/aws/resource_aws_rds_cluster.go b/builtin/providers/aws/resource_aws_rds_cluster.go
index 190b3e275c04..2d2b0fdf42f5 100644
--- a/builtin/providers/aws/resource_aws_rds_cluster.go
+++ b/builtin/providers/aws/resource_aws_rds_cluster.go
@@ -368,7 +368,7 @@ func resourceAwsRDSClusterDelete(d *schema.ResourceData, meta interface{}) error
_, err := conn.DeleteDBCluster(&deleteOpts)
stateConf := &resource.StateChangeConf{
- Pending: []string{"deleting", "backing-up", "modifying"},
+ Pending: []string{"available", "deleting", "backing-up", "modifying"},
Target: []string{"destroyed"},
Refresh: resourceAwsRDSClusterStateRefreshFunc(d, meta),
Timeout: 5 * time.Minute,
diff --git a/builtin/providers/aws/resource_aws_redshift_cluster.go b/builtin/providers/aws/resource_aws_redshift_cluster.go
index f648a95ebf12..3e39561ed597 100644
--- a/builtin/providers/aws/resource_aws_redshift_cluster.go
+++ b/builtin/providers/aws/resource_aws_redshift_cluster.go
@@ -265,8 +265,8 @@ func resourceAwsRedshiftClusterCreate(d *schema.ResourceData, meta interface{})
Pending: []string{"creating", "backing-up", "modifying"},
Target: []string{"available"},
Refresh: resourceAwsRedshiftClusterStateRefreshFunc(d, meta),
- Timeout: 5 * time.Minute,
- MinTimeout: 3 * time.Second,
+ Timeout: 40 * time.Minute,
+ MinTimeout: 10 * time.Second,
}
_, err = stateConf.WaitForState()
@@ -375,6 +375,7 @@ func resourceAwsRedshiftClusterUpdate(d *schema.ResourceData, meta interface{})
} else {
req.ClusterType = aws.String("single-node")
}
+ req.NodeType = aws.String(d.Get("node_type").(string))
}
if d.HasChange("cluster_security_groups") {
@@ -424,8 +425,8 @@ func resourceAwsRedshiftClusterUpdate(d *schema.ResourceData, meta interface{})
Pending: []string{"creating", "deleting", "rebooting", "resizing", "renaming", "modifying"},
Target: []string{"available"},
Refresh: resourceAwsRedshiftClusterStateRefreshFunc(d, meta),
- Timeout: 10 * time.Minute,
- MinTimeout: 5 * time.Second,
+ Timeout: 40 * time.Minute,
+ MinTimeout: 10 * time.Second,
}
// Wait, catching any errors
diff --git a/builtin/providers/aws/resource_aws_redshift_cluster_test.go b/builtin/providers/aws/resource_aws_redshift_cluster_test.go
index 400b031b3c35..1c3c1ef8547a 100644
--- a/builtin/providers/aws/resource_aws_redshift_cluster_test.go
+++ b/builtin/providers/aws/resource_aws_redshift_cluster_test.go
@@ -71,6 +71,39 @@ func TestAccAWSRedshiftCluster_publiclyAccessible(t *testing.T) {
})
}
+func TestAccAWSRedshiftCluster_updateNodeCount(t *testing.T) {
+ var v redshift.Cluster
+
+ ri := rand.New(rand.NewSource(time.Now().UnixNano())).Int()
+ preConfig := fmt.Sprintf(testAccAWSRedshiftClusterConfig_basic, ri)
+ postConfig := fmt.Sprintf(testAccAWSRedshiftClusterConfig_updateNodeCount, ri)
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSRedshiftClusterDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: preConfig,
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckAWSRedshiftClusterExists("aws_redshift_cluster.default", &v),
+ resource.TestCheckResourceAttr(
+ "aws_redshift_cluster.default", "number_of_nodes", "1"),
+ ),
+ },
+
+ resource.TestStep{
+ Config: postConfig,
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckAWSRedshiftClusterExists("aws_redshift_cluster.default", &v),
+ resource.TestCheckResourceAttr(
+ "aws_redshift_cluster.default", "number_of_nodes", "2"),
+ ),
+ },
+ },
+ })
+}
+
func testAccCheckAWSRedshiftClusterDestroy(s *terraform.State) error {
for _, rs := range s.RootModule().Resources {
if rs.Type != "aws_redshift_cluster" {
@@ -272,6 +305,24 @@ func TestResourceAWSRedshiftClusterMasterUsernameValidation(t *testing.T) {
}
}
+var testAccAWSRedshiftClusterConfig_updateNodeCount = `
+provider "aws" {
+ region = "us-west-2"
+}
+
+resource "aws_redshift_cluster" "default" {
+ cluster_identifier = "tf-redshift-cluster-%d"
+ availability_zone = "us-west-2a"
+ database_name = "mydb"
+ master_username = "foo_test"
+ master_password = "Mustbe8characters"
+ node_type = "dc1.large"
+ automated_snapshot_retention_period = 0
+ allow_version_upgrade = false
+ number_of_nodes = 2
+}
+`
+
var testAccAWSRedshiftClusterConfig_basic = `
provider "aws" {
region = "us-west-2"
@@ -284,7 +335,7 @@ resource "aws_redshift_cluster" "default" {
master_username = "foo_test"
master_password = "Mustbe8characters"
node_type = "dc1.large"
- automated_snapshot_retention_period = 7
+ automated_snapshot_retention_period = 0
allow_version_upgrade = false
}`
@@ -344,7 +395,7 @@ resource "aws_redshift_cluster" "default" {
master_username = "foo"
master_password = "Mustbe8characters"
node_type = "dc1.large"
- automated_snapshot_retention_period = 7
+ automated_snapshot_retention_period = 0
allow_version_upgrade = false
cluster_subnet_group_name = "${aws_redshift_subnet_group.foo.name}"
publicly_accessible = false
@@ -406,7 +457,7 @@ resource "aws_redshift_cluster" "default" {
master_username = "foo"
master_password = "Mustbe8characters"
node_type = "dc1.large"
- automated_snapshot_retention_period = 7
+ automated_snapshot_retention_period = 0
allow_version_upgrade = false
cluster_subnet_group_name = "${aws_redshift_subnet_group.foo.name}"
publicly_accessible = true
diff --git a/builtin/providers/aws/resource_aws_route.go b/builtin/providers/aws/resource_aws_route.go
index 1232067c4fbf..f6e0a85896ef 100644
--- a/builtin/providers/aws/resource_aws_route.go
+++ b/builtin/providers/aws/resource_aws_route.go
@@ -4,10 +4,13 @@ import (
"errors"
"fmt"
"log"
+ "time"
"github.com/aws/aws-sdk-go/aws"
+ "github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/service/ec2"
"github.com/hashicorp/terraform/helper/hashcode"
+ "github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/helper/schema"
)
@@ -152,7 +155,26 @@ func resourceAwsRouteCreate(d *schema.ResourceData, meta interface{}) error {
log.Printf("[DEBUG] Route create config: %s", createOpts)
// Create the route
- _, err := conn.CreateRoute(createOpts)
+ var err error
+
+ err = resource.Retry(2*time.Minute, func() *resource.RetryError {
+ _, err = conn.CreateRoute(createOpts)
+
+ if err != nil {
+ ec2err, ok := err.(awserr.Error)
+ if !ok {
+ return resource.NonRetryableError(err)
+ }
+ if ec2err.Code() == "InvalidParameterException" {
+ log.Printf("[DEBUG] Trying to create route again: %q", ec2err.Message())
+ return resource.RetryableError(err)
+ }
+
+ return resource.NonRetryableError(err)
+ }
+
+ return nil
+ })
if err != nil {
return fmt.Errorf("Error creating route: %s", err)
}
@@ -269,8 +291,29 @@ func resourceAwsRouteDelete(d *schema.ResourceData, meta interface{}) error {
}
log.Printf("[DEBUG] Route delete opts: %s", deleteOpts)
- resp, err := conn.DeleteRoute(deleteOpts)
- log.Printf("[DEBUG] Route delete result: %s", resp)
+ var err error
+ err = resource.Retry(5*time.Minute, func() *resource.RetryError {
+ log.Printf("[DEBUG] Trying to delete route with opts %s", deleteOpts)
+ resp, err := conn.DeleteRoute(deleteOpts)
+ log.Printf("[DEBUG] Route delete result: %s", resp)
+
+ if err == nil {
+ return nil
+ }
+
+ ec2err, ok := err.(awserr.Error)
+ if !ok {
+ return resource.NonRetryableError(err)
+ }
+ if ec2err.Code() == "InvalidParameterException" {
+ log.Printf("[DEBUG] Trying to delete route again: %q",
+ ec2err.Message())
+ return resource.RetryableError(err)
+ }
+
+ return resource.NonRetryableError(err)
+ })
+
if err != nil {
return err
}
@@ -332,7 +375,7 @@ func findResourceRoute(conn *ec2.EC2, rtbid string, cidr string) (*ec2.Route, er
}
for _, route := range (*resp.RouteTables[0]).Routes {
- if *route.DestinationCidrBlock == cidr {
+ if route.DestinationCidrBlock != nil && *route.DestinationCidrBlock == cidr {
return route, nil
}
}
diff --git a/builtin/providers/aws/resource_aws_route53_delegation_set_test.go b/builtin/providers/aws/resource_aws_route53_delegation_set_test.go
index 9af9e8dd1bb1..26e88f60d614 100644
--- a/builtin/providers/aws/resource_aws_route53_delegation_set_test.go
+++ b/builtin/providers/aws/resource_aws_route53_delegation_set_test.go
@@ -15,9 +15,11 @@ import (
func TestAccAWSRoute53DelegationSet_basic(t *testing.T) {
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckRoute53ZoneDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_route53_delegation_set.test",
+ IDRefreshIgnore: []string{"reference_name"},
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckRoute53ZoneDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccRoute53DelegationSetConfig,
@@ -33,9 +35,11 @@ func TestAccAWSRoute53DelegationSet_withZones(t *testing.T) {
var zone route53.GetHostedZoneOutput
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckRoute53ZoneDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_route53_delegation_set.main",
+ IDRefreshIgnore: []string{"reference_name"},
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckRoute53ZoneDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccRoute53DelegationSetWithZonesConfig,
diff --git a/builtin/providers/aws/resource_aws_route53_health_check.go b/builtin/providers/aws/resource_aws_route53_health_check.go
index 4034996a9aa7..479dc32fcf00 100644
--- a/builtin/providers/aws/resource_aws_route53_health_check.go
+++ b/builtin/providers/aws/resource_aws_route53_health_check.go
@@ -246,7 +246,7 @@ func resourceAwsRoute53HealthCheckRead(d *schema.ResourceData, meta interface{})
d.Set("port", updated.Port)
d.Set("resource_path", updated.ResourcePath)
d.Set("measure_latency", updated.MeasureLatency)
- d.Set("invent_healthcheck", updated.Inverted)
+ d.Set("invert_healthcheck", updated.Inverted)
d.Set("child_healthchecks", updated.ChildHealthChecks)
d.Set("child_health_threshold", updated.HealthThreshold)
diff --git a/builtin/providers/aws/resource_aws_route53_health_check_test.go b/builtin/providers/aws/resource_aws_route53_health_check_test.go
index 3e27bc102301..9792ac10fd0f 100644
--- a/builtin/providers/aws/resource_aws_route53_health_check_test.go
+++ b/builtin/providers/aws/resource_aws_route53_health_check_test.go
@@ -12,9 +12,10 @@ import (
func TestAccAWSRoute53HealthCheck_basic(t *testing.T) {
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckRoute53HealthCheckDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_route53_health_check.foo",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckRoute53HealthCheckDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccRoute53HealthCheckConfig,
diff --git a/builtin/providers/aws/resource_aws_route53_record.go b/builtin/providers/aws/resource_aws_route53_record.go
index ee33842584d8..cd8cbf7c8102 100644
--- a/builtin/providers/aws/resource_aws_route53_record.go
+++ b/builtin/providers/aws/resource_aws_route53_record.go
@@ -68,7 +68,7 @@ func resourceAwsRoute53Record() *schema.Resource {
ConflictsWith: []string{"alias"},
},
- // Weight uses a special sentinel value to indicate it's presense.
+ // Weight uses a special sentinel value to indicate its presence.
// Because 0 is a valid value for Weight, we default to -1 so that any
// inclusion of a weight (zero or not) will be a usable value
"weight": &schema.Schema{
@@ -246,6 +246,19 @@ func resourceAwsRoute53RecordCreate(d *schema.ResourceData, meta interface{}) er
}
func resourceAwsRoute53RecordRead(d *schema.ResourceData, meta interface{}) error {
+ // If we don't have a zone ID we're doing an import. Parse it from the ID.
+ if _, ok := d.GetOk("zone_id"); !ok {
+ parts := strings.Split(d.Id(), "_")
+ d.Set("zone_id", parts[0])
+ d.Set("name", parts[1])
+ d.Set("type", parts[2])
+ if len(parts) > 3 {
+ d.Set("set_identifier", parts[3])
+ }
+
+ d.Set("weight", -1)
+ }
+
record, err := findRecord(d, meta)
if err != nil {
switch err {
@@ -263,6 +276,18 @@ func resourceAwsRoute53RecordRead(d *schema.ResourceData, meta interface{}) erro
return fmt.Errorf("[DEBUG] Error setting records for: %s, error: %#v", d.Id(), err)
}
+ if alias := record.AliasTarget; alias != nil {
+ if _, ok := d.GetOk("alias"); !ok {
+ d.Set("alias", []interface{}{
+ map[string]interface{}{
+ "zone_id": *alias.HostedZoneId,
+ "name": *alias.DNSName,
+ "evaluate_target_health": *alias.EvaluateTargetHealth,
+ },
+ })
+ }
+ }
+
d.Set("ttl", record.TTL)
// Only set the weight if it's non-nil, otherwise we end up with a 0 weight
// which has actual contextual meaning with Route 53 records
diff --git a/builtin/providers/aws/resource_aws_route53_record_test.go b/builtin/providers/aws/resource_aws_route53_record_test.go
index 65df31729ada..32acf9abfd30 100644
--- a/builtin/providers/aws/resource_aws_route53_record_test.go
+++ b/builtin/providers/aws/resource_aws_route53_record_test.go
@@ -53,9 +53,10 @@ func TestExpandRecordName(t *testing.T) {
func TestAccAWSRoute53Record_basic(t *testing.T) {
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckRoute53RecordDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_route53_record.default",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckRoute53RecordDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccRoute53RecordConfig,
@@ -69,9 +70,11 @@ func TestAccAWSRoute53Record_basic(t *testing.T) {
func TestAccAWSRoute53Record_txtSupport(t *testing.T) {
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckRoute53RecordDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_route53_record.default",
+ IDRefreshIgnore: []string{"zone_id"}, // just for this test
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckRoute53RecordDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccRoute53RecordConfigTXT,
@@ -85,9 +88,10 @@ func TestAccAWSRoute53Record_txtSupport(t *testing.T) {
func TestAccAWSRoute53Record_spfSupport(t *testing.T) {
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckRoute53RecordDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_route53_record.default",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckRoute53RecordDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccRoute53RecordConfigSPF,
@@ -102,9 +106,10 @@ func TestAccAWSRoute53Record_spfSupport(t *testing.T) {
}
func TestAccAWSRoute53Record_generatesSuffix(t *testing.T) {
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckRoute53RecordDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_route53_record.default",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckRoute53RecordDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccRoute53RecordConfigSuffix,
@@ -118,9 +123,10 @@ func TestAccAWSRoute53Record_generatesSuffix(t *testing.T) {
func TestAccAWSRoute53Record_wildcard(t *testing.T) {
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckRoute53RecordDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_route53_record.wildcard",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckRoute53RecordDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccRoute53WildCardRecordConfig,
@@ -142,9 +148,10 @@ func TestAccAWSRoute53Record_wildcard(t *testing.T) {
func TestAccAWSRoute53Record_failover(t *testing.T) {
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckRoute53RecordDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_route53_record.www-primary",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckRoute53RecordDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccRoute53FailoverCNAMERecord,
@@ -159,9 +166,10 @@ func TestAccAWSRoute53Record_failover(t *testing.T) {
func TestAccAWSRoute53Record_weighted_basic(t *testing.T) {
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckRoute53RecordDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_route53_record.www-live",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckRoute53RecordDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccRoute53WeightedCNAMERecord,
@@ -177,9 +185,10 @@ func TestAccAWSRoute53Record_weighted_basic(t *testing.T) {
func TestAccAWSRoute53Record_alias(t *testing.T) {
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckRoute53RecordDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_route53_record.alias",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckRoute53RecordDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccRoute53ElbAliasRecord,
@@ -209,9 +218,10 @@ func TestAccAWSRoute53Record_s3_alias(t *testing.T) {
func TestAccAWSRoute53Record_weighted_alias(t *testing.T) {
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckRoute53RecordDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_route53_record.elb_weighted_alias_live",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckRoute53RecordDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccRoute53WeightedElbAliasRecord,
@@ -236,9 +246,10 @@ func TestAccAWSRoute53Record_weighted_alias(t *testing.T) {
func TestAccAWSRoute53Record_TypeChange(t *testing.T) {
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckRoute53RecordDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_route53_record.sample",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckRoute53RecordDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccRoute53RecordTypeChangePre,
diff --git a/builtin/providers/aws/resource_aws_route53_zone.go b/builtin/providers/aws/resource_aws_route53_zone.go
index 2b2930ac1d7a..4cbd24d2de0d 100644
--- a/builtin/providers/aws/resource_aws_route53_zone.go
+++ b/builtin/providers/aws/resource_aws_route53_zone.go
@@ -138,6 +138,14 @@ func resourceAwsRoute53ZoneRead(d *schema.ResourceData, meta interface{}) error
return err
}
+ // In the import case this will be empty
+ if _, ok := d.GetOk("zone_id"); !ok {
+ d.Set("zone_id", d.Id())
+ }
+ if _, ok := d.GetOk("name"); !ok {
+ d.Set("name", zone.HostedZone.Name)
+ }
+
if !*zone.HostedZone.Config.PrivateZone {
ns := make([]string, len(zone.DelegationSet.NameServers))
for i := range zone.DelegationSet.NameServers {
@@ -156,10 +164,24 @@ func resourceAwsRoute53ZoneRead(d *schema.ResourceData, meta interface{}) error
return fmt.Errorf("[DEBUG] Error setting name servers for: %s, error: %#v", d.Id(), err)
}
+ // In the import case we just associate it with the first VPC
+ if _, ok := d.GetOk("vpc_id"); !ok {
+ if len(zone.VPCs) > 1 {
+ return fmt.Errorf(
+ "Can't import a route53_zone with more than one VPC attachment")
+ }
+
+ if len(zone.VPCs) > 0 {
+ d.Set("vpc_id", zone.VPCs[0].VPCId)
+ d.Set("vpc_region", zone.VPCs[0].VPCRegion)
+ }
+ }
+
var associatedVPC *route53.VPC
for _, vpc := range zone.VPCs {
if *vpc.VPCId == d.Get("vpc_id") {
associatedVPC = vpc
+ break
}
}
if associatedVPC == nil {
diff --git a/builtin/providers/aws/resource_aws_route53_zone_test.go b/builtin/providers/aws/resource_aws_route53_zone_test.go
index 9f9ef0006695..bea1d93200fd 100644
--- a/builtin/providers/aws/resource_aws_route53_zone_test.go
+++ b/builtin/providers/aws/resource_aws_route53_zone_test.go
@@ -69,9 +69,10 @@ func TestAccAWSRoute53Zone_basic(t *testing.T) {
var td route53.ResourceTagSet
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckRoute53ZoneDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_route53_zone.main",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckRoute53ZoneDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccRoute53ZoneConfig,
@@ -90,9 +91,10 @@ func TestAccAWSRoute53Zone_updateComment(t *testing.T) {
var td route53.ResourceTagSet
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckRoute53ZoneDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_route53_zone.main",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckRoute53ZoneDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccRoute53ZoneConfig,
@@ -122,9 +124,10 @@ func TestAccAWSRoute53Zone_private_basic(t *testing.T) {
var zone route53.GetHostedZoneOutput
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckRoute53ZoneDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_route53_zone.main",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckRoute53ZoneDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccRoute53PrivateZoneConfig,
@@ -153,6 +156,7 @@ func TestAccAWSRoute53Zone_private_region(t *testing.T) {
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_route53_zone.main",
ProviderFactories: providerFactories,
CheckDestroy: testAccCheckRoute53ZoneDestroyWithProviders(&providers),
Steps: []resource.TestStep{
@@ -310,7 +314,7 @@ func testAccLoadTagsR53(zone *route53.GetHostedZoneOutput, td *route53.ResourceT
const testAccRoute53ZoneConfig = `
resource "aws_route53_zone" "main" {
- name = "hashicorp.com"
+ name = "hashicorp.com."
comment = "Custom comment"
tags {
@@ -322,7 +326,7 @@ resource "aws_route53_zone" "main" {
const testAccRoute53ZoneConfigUpdateComment = `
resource "aws_route53_zone" "main" {
- name = "hashicorp.com"
+ name = "hashicorp.com."
comment = "Change Custom Comment"
tags {
@@ -341,7 +345,7 @@ resource "aws_vpc" "main" {
}
resource "aws_route53_zone" "main" {
- name = "hashicorp.com"
+ name = "hashicorp.com."
vpc_id = "${aws_vpc.main.id}"
}
`
@@ -367,7 +371,7 @@ resource "aws_vpc" "main" {
resource "aws_route53_zone" "main" {
provider = "aws.west"
- name = "hashicorp.com"
+ name = "hashicorp.com."
vpc_id = "${aws_vpc.main.id}"
vpc_region = "us-east-1"
}
diff --git a/builtin/providers/aws/resource_aws_route_table_test.go b/builtin/providers/aws/resource_aws_route_table_test.go
index 5c74a57ddb0b..d81ce05f7046 100644
--- a/builtin/providers/aws/resource_aws_route_table_test.go
+++ b/builtin/providers/aws/resource_aws_route_table_test.go
@@ -59,9 +59,10 @@ func TestAccAWSRouteTable_basic(t *testing.T) {
}
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckRouteTableDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_route_table.foo",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckRouteTableDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccRouteTableConfig,
@@ -108,9 +109,10 @@ func TestAccAWSRouteTable_instance(t *testing.T) {
}
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckRouteTableDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_route_table.foo",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckRouteTableDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccRouteTableConfigInstance,
@@ -128,9 +130,10 @@ func TestAccAWSRouteTable_tags(t *testing.T) {
var route_table ec2.RouteTable
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckRouteTableDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_route_table.foo",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckRouteTableDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccRouteTableConfigTags,
diff --git a/builtin/providers/aws/resource_aws_route_test.go b/builtin/providers/aws/resource_aws_route_test.go
index a63d91acb80e..cf0ef0781fc7 100644
--- a/builtin/providers/aws/resource_aws_route_test.go
+++ b/builtin/providers/aws/resource_aws_route_test.go
@@ -158,6 +158,24 @@ func TestAccAWSRoute_noopdiff(t *testing.T) {
})
}
+func TestAccAWSRoute_doesNotCrashWithVPCEndpoint(t *testing.T) {
+ var route ec2.Route
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSRouteDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: testAccAWSRouteWithVPCEndpoint,
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckAWSRouteExists("aws_route.bar", &route),
+ ),
+ },
+ },
+ })
+}
+
// Acceptance test if mixed inline and external routes are implemented
/*
func TestAccAWSRoute_mix(t *testing.T) {
@@ -365,3 +383,32 @@ resource "aws_instance" "nat" {
subnet_id = "${aws_subnet.test.id}"
}
`)
+
+var testAccAWSRouteWithVPCEndpoint = fmt.Sprint(`
+resource "aws_vpc" "foo" {
+ cidr_block = "10.1.0.0/16"
+}
+
+resource "aws_internet_gateway" "foo" {
+ vpc_id = "${aws_vpc.foo.id}"
+}
+
+resource "aws_route_table" "foo" {
+ vpc_id = "${aws_vpc.foo.id}"
+}
+
+resource "aws_route" "bar" {
+ route_table_id = "${aws_route_table.foo.id}"
+ destination_cidr_block = "10.3.0.0/16"
+ gateway_id = "${aws_internet_gateway.foo.id}"
+
+ # Force the endpoint to be created before the route; without this, reproducing the crash is a race.
+ depends_on = ["aws_vpc_endpoint.baz"]
+}
+
+resource "aws_vpc_endpoint" "baz" {
+ vpc_id = "${aws_vpc.foo.id}"
+ service_name = "com.amazonaws.us-west-2.s3"
+ route_table_ids = ["${aws_route_table.foo.id}"]
+}
+`)
diff --git a/builtin/providers/aws/resource_aws_s3_bucket.go b/builtin/providers/aws/resource_aws_s3_bucket.go
index 6cb98db8e557..d4e384b64ac6 100644
--- a/builtin/providers/aws/resource_aws_s3_bucket.go
+++ b/builtin/providers/aws/resource_aws_s3_bucket.go
@@ -183,6 +183,109 @@ func resourceAwsS3Bucket() *schema.Resource {
},
},
+ "lifecycle_rule": &schema.Schema{
+ Type: schema.TypeList,
+ Optional: true,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "id": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ Computed: true,
+ ValidateFunc: validateS3BucketLifecycleRuleId,
+ },
+ "prefix": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ },
+ "enabled": &schema.Schema{
+ Type: schema.TypeBool,
+ Required: true,
+ },
+ "abort_incomplete_multipart_upload_days": &schema.Schema{
+ Type: schema.TypeInt,
+ Optional: true,
+ },
+ "expiration": &schema.Schema{
+ Type: schema.TypeSet,
+ Optional: true,
+ Set: expirationHash,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "date": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ValidateFunc: validateS3BucketLifecycleTimestamp,
+ },
+ "days": &schema.Schema{
+ Type: schema.TypeInt,
+ Optional: true,
+ },
+ "expired_object_delete_marker": &schema.Schema{
+ Type: schema.TypeBool,
+ Optional: true,
+ },
+ },
+ },
+ },
+ "noncurrent_version_expiration": &schema.Schema{
+ Type: schema.TypeSet,
+ Optional: true,
+ Set: expirationHash,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "days": &schema.Schema{
+ Type: schema.TypeInt,
+ Optional: true,
+ },
+ },
+ },
+ },
+ "transition": &schema.Schema{
+ Type: schema.TypeSet,
+ Optional: true,
+ Set: transitionHash,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "date": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ValidateFunc: validateS3BucketLifecycleTimestamp,
+ },
+ "days": &schema.Schema{
+ Type: schema.TypeInt,
+ Optional: true,
+ },
+ "storage_class": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ ValidateFunc: validateS3BucketLifecycleStorageClass,
+ },
+ },
+ },
+ },
+ "noncurrent_version_transition": &schema.Schema{
+ Type: schema.TypeSet,
+ Optional: true,
+ Set: transitionHash,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "days": &schema.Schema{
+ Type: schema.TypeInt,
+ Optional: true,
+ },
+ "storage_class": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ ValidateFunc: validateS3BucketLifecycleStorageClass,
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+
"tags": tagsSchema(),
"force_destroy": &schema.Schema{
@@ -286,6 +389,12 @@ func resourceAwsS3BucketUpdate(d *schema.ResourceData, meta interface{}) error {
}
}
+ if d.HasChange("lifecycle_rule") {
+ if err := resourceAwsS3BucketLifecycleUpdate(s3conn, d); err != nil {
+ return err
+ }
+ }
+
return resourceAwsS3BucketRead(d, meta)
}
@@ -308,6 +417,11 @@ func resourceAwsS3BucketRead(d *schema.ResourceData, meta interface{}) error {
}
}
+ // In the import case the bucket name won't be in config yet, so derive it from the ID
+ if _, ok := d.GetOk("bucket"); !ok {
+ d.Set("bucket", d.Id())
+ }
+
// Read the policy
pol, err := s3conn.GetBucketPolicy(&s3.GetBucketPolicyInput{
Bucket: aws.String(d.Id()),
@@ -434,6 +548,110 @@ func resourceAwsS3BucketRead(d *schema.ResourceData, meta interface{}) error {
}
}
+ // Read the lifecycle configuration
+ lifecycle, err := s3conn.GetBucketLifecycleConfiguration(&s3.GetBucketLifecycleConfigurationInput{
+ Bucket: aws.String(d.Id()),
+ })
+ if err != nil {
+ if awsError, ok := err.(awserr.RequestFailure); ok && awsError.StatusCode() != 404 {
+ return err
+ }
+ }
+ log.Printf("[DEBUG] S3 Bucket: %s, lifecycle: %v", d.Id(), lifecycle)
+ if len(lifecycle.Rules) > 0 {
+ rules := make([]map[string]interface{}, 0, len(lifecycle.Rules))
+
+ for _, lifecycleRule := range lifecycle.Rules {
+ rule := make(map[string]interface{})
+
+ // ID
+ if lifecycleRule.ID != nil && *lifecycleRule.ID != "" {
+ rule["id"] = *lifecycleRule.ID
+ }
+ // Prefix
+ if lifecycleRule.Prefix != nil && *lifecycleRule.Prefix != "" {
+ rule["prefix"] = *lifecycleRule.Prefix
+ }
+ // Enabled
+ if lifecycleRule.Status != nil {
+ if *lifecycleRule.Status == s3.ExpirationStatusEnabled {
+ rule["enabled"] = true
+ } else {
+ rule["enabled"] = false
+ }
+ }
+
+ // AbortIncompleteMultipartUploadDays
+ if lifecycleRule.AbortIncompleteMultipartUpload != nil {
+ if lifecycleRule.AbortIncompleteMultipartUpload.DaysAfterInitiation != nil {
+ rule["abort_incomplete_multipart_upload_days"] = int(*lifecycleRule.AbortIncompleteMultipartUpload.DaysAfterInitiation)
+ }
+ }
+
+ // expiration
+ if lifecycleRule.Expiration != nil {
+ e := make(map[string]interface{})
+ if lifecycleRule.Expiration.Date != nil {
+ e["date"] = (*lifecycleRule.Expiration.Date).Format("2006-01-02")
+ }
+ if lifecycleRule.Expiration.Days != nil {
+ e["days"] = int(*lifecycleRule.Expiration.Days)
+ }
+ if lifecycleRule.Expiration.ExpiredObjectDeleteMarker != nil {
+ e["expired_object_delete_marker"] = *lifecycleRule.Expiration.ExpiredObjectDeleteMarker
+ }
+ rule["expiration"] = schema.NewSet(expirationHash, []interface{}{e})
+ }
+ // noncurrent_version_expiration
+ if lifecycleRule.NoncurrentVersionExpiration != nil {
+ e := make(map[string]interface{})
+ if lifecycleRule.NoncurrentVersionExpiration.NoncurrentDays != nil {
+ e["days"] = int(*lifecycleRule.NoncurrentVersionExpiration.NoncurrentDays)
+ }
+ rule["noncurrent_version_expiration"] = schema.NewSet(expirationHash, []interface{}{e})
+ }
+ // transition
+ if len(lifecycleRule.Transitions) > 0 {
+ transitions := make([]interface{}, 0, len(lifecycleRule.Transitions))
+ for _, v := range lifecycleRule.Transitions {
+ t := make(map[string]interface{})
+ if v.Date != nil {
+ t["date"] = (*v.Date).Format("2006-01-02")
+ }
+ if v.Days != nil {
+ t["days"] = int(*v.Days)
+ }
+ if v.StorageClass != nil {
+ t["storage_class"] = *v.StorageClass
+ }
+ transitions = append(transitions, t)
+ }
+ rule["transition"] = schema.NewSet(transitionHash, transitions)
+ }
+ // noncurrent_version_transition
+ if len(lifecycleRule.NoncurrentVersionTransitions) > 0 {
+ transitions := make([]interface{}, 0, len(lifecycleRule.NoncurrentVersionTransitions))
+ for _, v := range lifecycleRule.NoncurrentVersionTransitions {
+ t := make(map[string]interface{})
+ if v.NoncurrentDays != nil {
+ t["days"] = int(*v.NoncurrentDays)
+ }
+ if v.StorageClass != nil {
+ t["storage_class"] = *v.StorageClass
+ }
+ transitions = append(transitions, t)
+ }
+ rule["noncurrent_version_transition"] = schema.NewSet(transitionHash, transitions)
+ }
+
+ rules = append(rules, rule)
+ }
+
+ if err := d.Set("lifecycle_rule", rules); err != nil {
+ return err
+ }
+ }
+
// Add the region as an attribute
location, err := s3conn.GetBucketLocation(
&s3.GetBucketLocationInput{
@@ -658,7 +876,12 @@ func resourceAwsS3BucketWebsiteUpdate(s3conn *s3.S3, d *schema.ResourceData) err
ws := d.Get("website").([]interface{})
if len(ws) == 1 {
- w := ws[0].(map[string]interface{})
+ var w map[string]interface{}
+ if ws[0] != nil {
+ w = ws[0].(map[string]interface{})
+ } else {
+ w = make(map[string]interface{})
+ }
return resourceAwsS3BucketWebsitePut(s3conn, d, w)
} else if len(ws) == 0 {
return resourceAwsS3BucketWebsiteDelete(s3conn, d)
@@ -670,10 +893,19 @@ func resourceAwsS3BucketWebsiteUpdate(s3conn *s3.S3, d *schema.ResourceData) err
func resourceAwsS3BucketWebsitePut(s3conn *s3.S3, d *schema.ResourceData, website map[string]interface{}) error {
bucket := d.Get("bucket").(string)
- indexDocument := website["index_document"].(string)
- errorDocument := website["error_document"].(string)
- redirectAllRequestsTo := website["redirect_all_requests_to"].(string)
- routingRules := website["routing_rules"].(string)
+ var indexDocument, errorDocument, redirectAllRequestsTo, routingRules string
+ if v, ok := website["index_document"]; ok {
+ indexDocument = v.(string)
+ }
+ if v, ok := website["error_document"]; ok {
+ errorDocument = v.(string)
+ }
+ if v, ok := website["redirect_all_requests_to"]; ok {
+ redirectAllRequestsTo = v.(string)
+ }
+ if v, ok := website["routing_rules"]; ok {
+ routingRules = v.(string)
+ }
if indexDocument == "" && redirectAllRequestsTo == "" {
return fmt.Errorf("Must specify either index_document or redirect_all_requests_to.")
@@ -863,6 +1095,137 @@ func resourceAwsS3BucketLoggingUpdate(s3conn *s3.S3, d *schema.ResourceData) err
return nil
}
+func resourceAwsS3BucketLifecycleUpdate(s3conn *s3.S3, d *schema.ResourceData) error {
+ bucket := d.Get("bucket").(string)
+
+ lifecycleRules := d.Get("lifecycle_rule").([]interface{})
+
+ rules := make([]*s3.LifecycleRule, 0, len(lifecycleRules))
+
+ for i, lifecycleRule := range lifecycleRules {
+ r := lifecycleRule.(map[string]interface{})
+
+ rule := &s3.LifecycleRule{
+ Prefix: aws.String(r["prefix"].(string)),
+ }
+
+ // ID
+ if val, ok := r["id"].(string); ok && val != "" {
+ rule.ID = aws.String(val)
+ } else {
+ rule.ID = aws.String(resource.PrefixedUniqueId("tf-s3-lifecycle-"))
+ }
+
+ // Enabled
+ if val, ok := r["enabled"].(bool); ok && val {
+ rule.Status = aws.String(s3.ExpirationStatusEnabled)
+ } else {
+ rule.Status = aws.String(s3.ExpirationStatusDisabled)
+ }
+
+ // AbortIncompleteMultipartUpload
+ if val, ok := r["abort_incomplete_multipart_upload_days"].(int); ok && val > 0 {
+ rule.AbortIncompleteMultipartUpload = &s3.AbortIncompleteMultipartUpload{
+ DaysAfterInitiation: aws.Int64(int64(val)),
+ }
+ }
+
+ // Expiration
+ expiration := d.Get(fmt.Sprintf("lifecycle_rule.%d.expiration", i)).(*schema.Set).List()
+ if len(expiration) > 0 {
+ e := expiration[0].(map[string]interface{})
+ i := &s3.LifecycleExpiration{}
+
+ if val, ok := e["date"].(string); ok && val != "" {
+ t, err := time.Parse(time.RFC3339, fmt.Sprintf("%sT00:00:00Z", val))
+ if err != nil {
+ return fmt.Errorf("Error Parsing AWS S3 Bucket Lifecycle Expiration Date: %s", err.Error())
+ }
+ i.Date = aws.Time(t)
+ } else if val, ok := e["days"].(int); ok && val > 0 {
+ i.Days = aws.Int64(int64(val))
+ } else if val, ok := e["expired_object_delete_marker"].(bool); ok {
+ i.ExpiredObjectDeleteMarker = aws.Bool(val)
+ }
+ rule.Expiration = i
+ }
+
+ // NoncurrentVersionExpiration
+ nc_expiration := d.Get(fmt.Sprintf("lifecycle_rule.%d.noncurrent_version_expiration", i)).(*schema.Set).List()
+ if len(nc_expiration) > 0 {
+ e := nc_expiration[0].(map[string]interface{})
+
+ if val, ok := e["days"].(int); ok && val > 0 {
+ rule.NoncurrentVersionExpiration = &s3.NoncurrentVersionExpiration{
+ NoncurrentDays: aws.Int64(int64(val)),
+ }
+ }
+ }
+
+ // Transitions
+ transitions := d.Get(fmt.Sprintf("lifecycle_rule.%d.transition", i)).(*schema.Set).List()
+ if len(transitions) > 0 {
+ rule.Transitions = make([]*s3.Transition, 0, len(transitions))
+ for _, transition := range transitions {
+ transition := transition.(map[string]interface{})
+ i := &s3.Transition{}
+ if val, ok := transition["date"].(string); ok && val != "" {
+ t, err := time.Parse(time.RFC3339, fmt.Sprintf("%sT00:00:00Z", val))
+ if err != nil {
+ return fmt.Errorf("Error Parsing AWS S3 Bucket Lifecycle Expiration Date: %s", err.Error())
+ }
+ i.Date = aws.Time(t)
+ } else if val, ok := transition["days"].(int); ok && val > 0 {
+ i.Days = aws.Int64(int64(val))
+ }
+ if val, ok := transition["storage_class"].(string); ok && val != "" {
+ i.StorageClass = aws.String(val)
+ }
+
+ rule.Transitions = append(rule.Transitions, i)
+ }
+ }
+ // NoncurrentVersionTransitions
+ nc_transitions := d.Get(fmt.Sprintf("lifecycle_rule.%d.noncurrent_version_transition", i)).(*schema.Set).List()
+ if len(nc_transitions) > 0 {
+ rule.NoncurrentVersionTransitions = make([]*s3.NoncurrentVersionTransition, 0, len(nc_transitions))
+ for _, transition := range nc_transitions {
+ transition := transition.(map[string]interface{})
+ i := &s3.NoncurrentVersionTransition{}
+ if val, ok := transition["days"].(int); ok && val > 0 {
+ i.NoncurrentDays = aws.Int64(int64(val))
+ }
+ if val, ok := transition["storage_class"].(string); ok && val != "" {
+ i.StorageClass = aws.String(val)
+ }
+
+ rule.NoncurrentVersionTransitions = append(rule.NoncurrentVersionTransitions, i)
+ }
+ }
+
+ rules = append(rules, rule)
+ }
+
+ i := &s3.PutBucketLifecycleConfigurationInput{
+ Bucket: aws.String(bucket),
+ LifecycleConfiguration: &s3.BucketLifecycleConfiguration{
+ Rules: rules,
+ },
+ }
+
+ err := resource.Retry(1*time.Minute, func() *resource.RetryError {
+ if _, err := s3conn.PutBucketLifecycleConfiguration(i); err != nil {
+ return resource.NonRetryableError(err)
+ }
+ return nil
+ })
+ if err != nil {
+ return fmt.Errorf("Error putting S3 lifecycle: %s", err)
+ }
+
+ return nil
+}
+
func normalizeRoutingRules(w []*s3.RoutingRule) (string, error) {
withNulls, err := json.Marshal(w)
if err != nil {
@@ -927,6 +1290,36 @@ func normalizeRegion(region string) string {
return region
}
+func expirationHash(v interface{}) int {
+ var buf bytes.Buffer
+ m := v.(map[string]interface{})
+ if v, ok := m["date"]; ok {
+ buf.WriteString(fmt.Sprintf("%s-", v.(string)))
+ }
+ if v, ok := m["days"]; ok {
+ buf.WriteString(fmt.Sprintf("%d-", v.(int)))
+ }
+ if v, ok := m["expired_object_delete_marker"]; ok {
+ buf.WriteString(fmt.Sprintf("%t-", v.(bool)))
+ }
+ return hashcode.String(buf.String())
+}
+
+func transitionHash(v interface{}) int {
+ var buf bytes.Buffer
+ m := v.(map[string]interface{})
+ if v, ok := m["date"]; ok {
+ buf.WriteString(fmt.Sprintf("%s-", v.(string)))
+ }
+ if v, ok := m["days"]; ok {
+ buf.WriteString(fmt.Sprintf("%d-", v.(int)))
+ }
+ if v, ok := m["storage_class"]; ok {
+ buf.WriteString(fmt.Sprintf("%s-", v.(string)))
+ }
+ return hashcode.String(buf.String())
+}
+
type S3Website struct {
Endpoint, Domain string
}
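For reference, the new `lifecycle_rule` support above ultimately reduces to one `PutBucketLifecycleConfiguration` call per bucket. A minimal, standalone sketch of the input the update path assembles - one enabled rule with a 365-day expiration and a 30-day STANDARD_IA transition; the bucket name and values are illustrative:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	// Illustrative only: resourceAwsS3BucketLifecycleUpdate builds an
	// equivalent input from the configured lifecycle_rule blocks.
	input := &s3.PutBucketLifecycleConfigurationInput{
		Bucket: aws.String("tf-test-bucket"), // hypothetical bucket name
		LifecycleConfiguration: &s3.BucketLifecycleConfiguration{
			Rules: []*s3.LifecycleRule{
				{
					ID:     aws.String("id1"),
					Prefix: aws.String("path1/"),
					Status: aws.String(s3.ExpirationStatusEnabled),
					Expiration: &s3.LifecycleExpiration{
						Days: aws.Int64(365),
					},
					Transitions: []*s3.Transition{
						{
							Days:         aws.Int64(30),
							StorageClass: aws.String(s3.TransitionStorageClassStandardIa),
						},
					},
				},
			},
		},
	}

	// A real provider run hands this to s3conn.PutBucketLifecycleConfiguration(input).
	fmt.Println(input)
}
```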
diff --git a/builtin/providers/aws/resource_aws_s3_bucket_object_test.go b/builtin/providers/aws/resource_aws_s3_bucket_object_test.go
index 60ca49081698..63ccf68618b1 100644
--- a/builtin/providers/aws/resource_aws_s3_bucket_object_test.go
+++ b/builtin/providers/aws/resource_aws_s3_bucket_object_test.go
@@ -354,7 +354,7 @@ resource "aws_s3_bucket_object" "object" {
bucket = "${aws_s3_bucket.object_bucket_2.bucket}"
key = "test-key"
content = "stuff"
- kms_key_id = "${aws_kms_key.kms_key_1.key_id}"
+ kms_key_id = "${aws_kms_key.kms_key_1.arn}"
}
`, randInt)
}
diff --git a/builtin/providers/aws/resource_aws_s3_bucket_test.go b/builtin/providers/aws/resource_aws_s3_bucket_test.go
index 43b7a8ad6b99..c6a893e7efd0 100644
--- a/builtin/providers/aws/resource_aws_s3_bucket_test.go
+++ b/builtin/providers/aws/resource_aws_s3_bucket_test.go
@@ -23,7 +23,11 @@ func TestAccAWSS3Bucket_basic(t *testing.T) {
"^arn:aws:s3:::")
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
+ PreCheck: func() { testAccPreCheck(t) },
+ /*
+ IDRefreshName: "aws_s3_bucket.bucket",
+ IDRefreshIgnore: []string{"force_destroy"},
+ */
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSS3BucketDestroy,
Steps: []resource.TestStep{
@@ -343,6 +347,85 @@ func TestAccAWSS3Bucket_Logging(t *testing.T) {
})
}
+func TestAccAWSS3Bucket_Lifecycle(t *testing.T) {
+ rInt := acctest.RandInt()
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSS3BucketDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: testAccAWSS3BucketConfigWithLifecycle(rInt),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckAWSS3BucketExists("aws_s3_bucket.bucket"),
+ resource.TestCheckResourceAttr(
+ "aws_s3_bucket.bucket", "lifecycle_rule.0.id", "id1"),
+ resource.TestCheckResourceAttr(
+ "aws_s3_bucket.bucket", "lifecycle_rule.0.prefix", "path1/"),
+ resource.TestCheckResourceAttr(
+ "aws_s3_bucket.bucket", "lifecycle_rule.0.expiration.2613713285.days", "365"),
+ resource.TestCheckResourceAttr(
+ "aws_s3_bucket.bucket", "lifecycle_rule.0.expiration.2613713285.date", ""),
+ resource.TestCheckResourceAttr(
+ "aws_s3_bucket.bucket", "lifecycle_rule.0.expiration.2613713285.expired_object_delete_marker", "false"),
+ resource.TestCheckResourceAttr(
+ "aws_s3_bucket.bucket", "lifecycle_rule.0.transition.2000431762.date", ""),
+ resource.TestCheckResourceAttr(
+ "aws_s3_bucket.bucket", "lifecycle_rule.0.transition.2000431762.days", "30"),
+ resource.TestCheckResourceAttr(
+ "aws_s3_bucket.bucket", "lifecycle_rule.0.transition.2000431762.storage_class", "STANDARD_IA"),
+ resource.TestCheckResourceAttr(
+ "aws_s3_bucket.bucket", "lifecycle_rule.0.transition.6450812.date", ""),
+ resource.TestCheckResourceAttr(
+ "aws_s3_bucket.bucket", "lifecycle_rule.0.transition.6450812.days", "60"),
+ resource.TestCheckResourceAttr(
+ "aws_s3_bucket.bucket", "lifecycle_rule.0.transition.6450812.storage_class", "GLACIER"),
+ resource.TestCheckResourceAttr(
+ "aws_s3_bucket.bucket", "lifecycle_rule.1.id", "id2"),
+ resource.TestCheckResourceAttr(
+ "aws_s3_bucket.bucket", "lifecycle_rule.1.prefix", "path2/"),
+ resource.TestCheckResourceAttr(
+ "aws_s3_bucket.bucket", "lifecycle_rule.1.expiration.2855832418.date", "2016-01-12"),
+ resource.TestCheckResourceAttr(
+ "aws_s3_bucket.bucket", "lifecycle_rule.1.expiration.2855832418.days", "0"),
+ resource.TestCheckResourceAttr(
+ "aws_s3_bucket.bucket", "lifecycle_rule.1.expiration.2855832418.expired_object_delete_marker", "false"),
+ ),
+ },
+ resource.TestStep{
+ Config: testAccAWSS3BucketConfigWithVersioningLifecycle(rInt),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckAWSS3BucketExists("aws_s3_bucket.bucket"),
+ resource.TestCheckResourceAttr(
+ "aws_s3_bucket.bucket", "lifecycle_rule.0.id", "id1"),
+ resource.TestCheckResourceAttr(
+ "aws_s3_bucket.bucket", "lifecycle_rule.0.prefix", "path1/"),
+ resource.TestCheckResourceAttr(
+ "aws_s3_bucket.bucket", "lifecycle_rule.0.enabled", "true"),
+ resource.TestCheckResourceAttr(
+ "aws_s3_bucket.bucket", "lifecycle_rule.0.noncurrent_version_expiration.80908210.days", "365"),
+ resource.TestCheckResourceAttr(
+ "aws_s3_bucket.bucket", "lifecycle_rule.0.noncurrent_version_transition.1377917700.days", "30"),
+ resource.TestCheckResourceAttr(
+ "aws_s3_bucket.bucket", "lifecycle_rule.0.noncurrent_version_transition.1377917700.storage_class", "STANDARD_IA"),
+ resource.TestCheckResourceAttr(
+ "aws_s3_bucket.bucket", "lifecycle_rule.0.noncurrent_version_transition.2528035817.days", "60"),
+ resource.TestCheckResourceAttr(
+ "aws_s3_bucket.bucket", "lifecycle_rule.0.noncurrent_version_transition.2528035817.storage_class", "GLACIER"),
+ resource.TestCheckResourceAttr(
+ "aws_s3_bucket.bucket", "lifecycle_rule.1.id", "id2"),
+ resource.TestCheckResourceAttr(
+ "aws_s3_bucket.bucket", "lifecycle_rule.1.prefix", "path2/"),
+ resource.TestCheckResourceAttr(
+ "aws_s3_bucket.bucket", "lifecycle_rule.1.enabled", "false"),
+ resource.TestCheckResourceAttr(
+ "aws_s3_bucket.bucket", "lifecycle_rule.1.noncurrent_version_expiration.80908210.days", "365"),
+ ),
+ },
+ },
+ })
+}
+
func testAccCheckAWSS3BucketDestroy(s *terraform.State) error {
conn := testAccProvider.Meta().(*AWSClient).s3conn
@@ -812,3 +895,77 @@ resource "aws_s3_bucket" "bucket" {
}
`, randInt, randInt)
}
+
+func testAccAWSS3BucketConfigWithLifecycle(randInt int) string {
+ return fmt.Sprintf(`
+resource "aws_s3_bucket" "bucket" {
+ bucket = "tf-test-bucket-%d"
+ acl = "private"
+ lifecycle_rule {
+ id = "id1"
+ prefix = "path1/"
+ enabled = true
+
+ expiration {
+ days = 365
+ }
+
+ transition {
+ days = 30
+ storage_class = "STANDARD_IA"
+ }
+ transition {
+ days = 60
+ storage_class = "GLACIER"
+ }
+ }
+ lifecycle_rule {
+ id = "id2"
+ prefix = "path2/"
+ enabled = true
+
+ expiration {
+ date = "2016-01-12"
+ }
+ }
+}
+`, randInt)
+}
+
+func testAccAWSS3BucketConfigWithVersioningLifecycle(randInt int) string {
+ return fmt.Sprintf(`
+resource "aws_s3_bucket" "bucket" {
+ bucket = "tf-test-bucket-%d"
+ acl = "private"
+ versioning {
+ enabled = false
+ }
+ lifecycle_rule {
+ id = "id1"
+ prefix = "path1/"
+ enabled = true
+
+ noncurrent_version_expiration {
+ days = 365
+ }
+ noncurrent_version_transition {
+ days = 30
+ storage_class = "STANDARD_IA"
+ }
+ noncurrent_version_transition {
+ days = 60
+ storage_class = "GLACIER"
+ }
+ }
+ lifecycle_rule {
+ id = "id2"
+ prefix = "path2/"
+ enabled = false
+
+ noncurrent_version_expiration {
+ days = 365
+ }
+ }
+}
+`, randInt)
+}
diff --git a/builtin/providers/aws/resource_aws_security_group_rule.go b/builtin/providers/aws/resource_aws_security_group_rule.go
index 715a0a5cd42d..f1f3883642c1 100644
--- a/builtin/providers/aws/resource_aws_security_group_rule.go
+++ b/builtin/providers/aws/resource_aws_security_group_rule.go
@@ -6,11 +6,13 @@ import (
"log"
"sort"
"strings"
+ "time"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/service/ec2"
"github.com/hashicorp/terraform/helper/hashcode"
+ "github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/helper/schema"
)
@@ -100,6 +102,7 @@ func resourceAwsSecurityGroupRuleCreate(d *schema.ResourceData, meta interface{}
}
ruleType := d.Get("type").(string)
+ isVPC := sg.VpcId != nil && *sg.VpcId != ""
var autherr error
switch ruleType {
@@ -112,7 +115,7 @@ func resourceAwsSecurityGroupRuleCreate(d *schema.ResourceData, meta interface{}
IpPermissions: []*ec2.IpPermission{perm},
}
- if sg.VpcId == nil || *sg.VpcId == "" {
+ if !isVPC {
req.GroupId = nil
req.GroupName = sg.GroupName
}
@@ -137,11 +140,11 @@ func resourceAwsSecurityGroupRuleCreate(d *schema.ResourceData, meta interface{}
if autherr != nil {
if awsErr, ok := autherr.(awserr.Error); ok {
if awsErr.Code() == "InvalidPermission.Duplicate" {
- return fmt.Errorf(`[WARN] A duplicate Security Group rule was found. This may be
+ return fmt.Errorf(`[WARN] A duplicate Security Group rule was found on (%s). This may be
a side effect of a now-fixed Terraform issue causing two security groups with
identical attributes but different source_security_group_ids to overwrite each
other in the state. See https://github.com/hashicorp/terraform/pull/2376 for more
-information and instructions for recovery. Error message: %s`, awsErr.Message())
+information and instructions for recovery. Error message: %s`, sg_id, awsErr.Message())
}
}
@@ -151,10 +154,44 @@ information and instructions for recovery. Error message: %s`, awsErr.Message())
}
id := ipPermissionIDHash(sg_id, ruleType, perm)
- d.SetId(id)
- log.Printf("[DEBUG] Security group rule ID set to %s", id)
+ log.Printf("[DEBUG] Computed group rule ID %s", id)
+
+ retErr := resource.Retry(5*time.Minute, func() *resource.RetryError {
+ sg, err := findResourceSecurityGroup(conn, sg_id)
+
+ if err != nil {
+ log.Printf("[DEBUG] Error finding Secuirty Group (%s) for Rule (%s): %s", sg_id, id, err)
+ return resource.NonRetryableError(err)
+ }
+
+ var rules []*ec2.IpPermission
+ switch ruleType {
+ case "ingress":
+ rules = sg.IpPermissions
+ default:
+ rules = sg.IpPermissionsEgress
+ }
- return resourceAwsSecurityGroupRuleRead(d, meta)
+ rule := findRuleMatch(perm, rules, isVPC)
+
+ if rule == nil {
+ log.Printf("[DEBUG] Unable to find matching %s Security Group Rule (%s) for Group %s",
+ ruleType, id, sg_id)
+ return resource.RetryableError(fmt.Errorf("No match found"))
+ }
+
+ log.Printf("[DEBUG] Found rule for Security Group Rule (%s): %s", id, rule)
+ return nil
+ })
+
+ if retErr != nil {
+ log.Printf("[DEBUG] Error finding matching %s Security Group Rule (%s) for Group %s -- NO STATE WILL BE SAVED",
+ ruleType, id, sg_id)
+ return nil
+ }
+
+ d.SetId(id)
+ return nil
}
func resourceAwsSecurityGroupRuleRead(d *schema.ResourceData, meta interface{}) error {
@@ -191,54 +228,7 @@ func resourceAwsSecurityGroupRuleRead(d *schema.ResourceData, meta interface{})
return nil
}
- for _, r := range rules {
- if r.ToPort != nil && *p.ToPort != *r.ToPort {
- continue
- }
-
- if r.FromPort != nil && *p.FromPort != *r.FromPort {
- continue
- }
-
- if r.IpProtocol != nil && *p.IpProtocol != *r.IpProtocol {
- continue
- }
-
- remaining := len(p.IpRanges)
- for _, ip := range p.IpRanges {
- for _, rip := range r.IpRanges {
- if *ip.CidrIp == *rip.CidrIp {
- remaining--
- }
- }
- }
-
- if remaining > 0 {
- continue
- }
-
- remaining = len(p.UserIdGroupPairs)
- for _, ip := range p.UserIdGroupPairs {
- for _, rip := range r.UserIdGroupPairs {
- if isVPC {
- if *ip.GroupId == *rip.GroupId {
- remaining--
- }
- } else {
- if *ip.GroupName == *rip.GroupName {
- remaining--
- }
- }
- }
- }
-
- if remaining > 0 {
- continue
- }
-
- log.Printf("[DEBUG] Found rule for Security Group Rule (%s): %s", d.Id(), r)
- rule = r
- }
+ rule = findRuleMatch(p, rules, isVPC)
if rule == nil {
log.Printf("[DEBUG] Unable to find matching %s Security Group Rule (%s) for Group %s",
@@ -247,6 +237,8 @@ func resourceAwsSecurityGroupRuleRead(d *schema.ResourceData, meta interface{})
return nil
}
+ log.Printf("[DEBUG] Found rule for Security Group Rule (%s): %s", d.Id(), rule)
+
d.Set("from_port", rule.FromPort)
d.Set("to_port", rule.ToPort)
d.Set("protocol", rule.IpProtocol)
@@ -362,6 +354,58 @@ func (b ByGroupPair) Less(i, j int) bool {
panic("mismatched security group rules, may be a terraform bug")
}
+func findRuleMatch(p *ec2.IpPermission, rules []*ec2.IpPermission, isVPC bool) *ec2.IpPermission {
+ var rule *ec2.IpPermission
+ for _, r := range rules {
+ if r.ToPort != nil && *p.ToPort != *r.ToPort {
+ continue
+ }
+
+ if r.FromPort != nil && *p.FromPort != *r.FromPort {
+ continue
+ }
+
+ if r.IpProtocol != nil && *p.IpProtocol != *r.IpProtocol {
+ continue
+ }
+
+ remaining := len(p.IpRanges)
+ for _, ip := range p.IpRanges {
+ for _, rip := range r.IpRanges {
+ if *ip.CidrIp == *rip.CidrIp {
+ remaining--
+ }
+ }
+ }
+
+ if remaining > 0 {
+ continue
+ }
+
+ remaining = len(p.UserIdGroupPairs)
+ for _, ip := range p.UserIdGroupPairs {
+ for _, rip := range r.UserIdGroupPairs {
+ if isVPC {
+ if *ip.GroupId == *rip.GroupId {
+ remaining--
+ }
+ } else {
+ if *ip.GroupName == *rip.GroupName {
+ remaining--
+ }
+ }
+ }
+ }
+
+ if remaining > 0 {
+ continue
+ }
+
+ rule = r
+ }
+ return rule
+}
+
func ipPermissionIDHash(sg_id, ruleType string, ip *ec2.IpPermission) string {
var buf bytes.Buffer
buf.WriteString(fmt.Sprintf("%s-", sg_id))
diff --git a/builtin/providers/aws/resource_aws_security_group_test.go b/builtin/providers/aws/resource_aws_security_group_test.go
index 2b5f97140809..23bdb0622944 100644
--- a/builtin/providers/aws/resource_aws_security_group_test.go
+++ b/builtin/providers/aws/resource_aws_security_group_test.go
@@ -236,9 +236,10 @@ func TestAccAWSSecurityGroup_basic(t *testing.T) {
var group ec2.SecurityGroup
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSSecurityGroupDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_security_group.web",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSSecurityGroupDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSSecurityGroupConfig,
@@ -269,9 +270,11 @@ func TestAccAWSSecurityGroup_namePrefix(t *testing.T) {
var group ec2.SecurityGroup
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSSecurityGroupDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_security_group.baz",
+ IDRefreshIgnore: []string{"name_prefix"},
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSSecurityGroupDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSSecurityGroupPrefixNameConfig,
@@ -303,9 +306,10 @@ func TestAccAWSSecurityGroup_self(t *testing.T) {
}
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSSecurityGroupDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_security_group.web",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSSecurityGroupDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSSecurityGroupConfigSelf,
@@ -342,9 +346,10 @@ func TestAccAWSSecurityGroup_vpc(t *testing.T) {
}
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSSecurityGroupDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_security_group.web",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSSecurityGroupDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSSecurityGroupConfigVpc,
@@ -394,9 +399,10 @@ func TestAccAWSSecurityGroup_vpcNegOneIngress(t *testing.T) {
}
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSSecurityGroupDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_security_group.web",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSSecurityGroupDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSSecurityGroupConfigVpcNegOneIngress,
@@ -427,9 +433,10 @@ func TestAccAWSSecurityGroup_MultiIngress(t *testing.T) {
var group ec2.SecurityGroup
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSSecurityGroupDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_security_group.web",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSSecurityGroupDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSSecurityGroupConfigMultiIngress,
@@ -445,9 +452,10 @@ func TestAccAWSSecurityGroup_Change(t *testing.T) {
var group ec2.SecurityGroup
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSSecurityGroupDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_security_group.web",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSSecurityGroupDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSSecurityGroupConfig,
@@ -470,9 +478,10 @@ func TestAccAWSSecurityGroup_generatedName(t *testing.T) {
var group ec2.SecurityGroup
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSSecurityGroupDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_security_group.web",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSSecurityGroupDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSSecurityGroupConfig_generatedName,
@@ -499,9 +508,10 @@ func TestAccAWSSecurityGroup_DefaultEgress(t *testing.T) {
// VPC
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSSecurityGroupDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_security_group.worker",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSSecurityGroupDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSSecurityGroupConfigDefaultEgress,
@@ -515,9 +525,10 @@ func TestAccAWSSecurityGroup_DefaultEgress(t *testing.T) {
// Classic
var group ec2.SecurityGroup
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSSecurityGroupDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_security_group.web",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSSecurityGroupDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSSecurityGroupConfigClassic,
@@ -958,9 +969,14 @@ func testAccCheckAWSSecurityGroupExistsWithoutDefault(n string) resource.TestChe
}
const testAccAWSSecurityGroupConfig = `
+resource "aws_vpc" "foo" {
+ cidr_block = "10.1.0.0/16"
+}
+
resource "aws_security_group" "web" {
name = "terraform_acceptance_test_example"
description = "Used in the terraform acceptance tests"
+ vpc_id = "${aws_vpc.foo.id}"
ingress {
protocol = "6"
@@ -983,9 +999,14 @@ resource "aws_security_group" "web" {
`
const testAccAWSSecurityGroupConfigChange = `
+resource "aws_vpc" "foo" {
+ cidr_block = "10.1.0.0/16"
+}
+
resource "aws_security_group" "web" {
name = "terraform_acceptance_test_example"
description = "Used in the terraform acceptance tests"
+ vpc_id = "${aws_vpc.foo.id}"
ingress {
protocol = "tcp"
@@ -1011,9 +1032,14 @@ resource "aws_security_group" "web" {
`
const testAccAWSSecurityGroupConfigSelf = `
+resource "aws_vpc" "foo" {
+ cidr_block = "10.1.0.0/16"
+}
+
resource "aws_security_group" "web" {
name = "terraform_acceptance_test_example"
description = "Used in the terraform acceptance tests"
+ vpc_id = "${aws_vpc.foo.id}"
ingress {
protocol = "tcp"
@@ -1076,9 +1102,14 @@ resource "aws_security_group" "web" {
}
`
const testAccAWSSecurityGroupConfigMultiIngress = `
+resource "aws_vpc" "foo" {
+ cidr_block = "10.1.0.0/16"
+}
+
resource "aws_security_group" "worker" {
name = "terraform_acceptance_test_example_1"
description = "Used in the terraform acceptance tests"
+ vpc_id = "${aws_vpc.foo.id}"
ingress {
protocol = "tcp"
@@ -1098,6 +1129,7 @@ resource "aws_security_group" "worker" {
resource "aws_security_group" "web" {
name = "terraform_acceptance_test_example_2"
description = "Used in the terraform acceptance tests"
+ vpc_id = "${aws_vpc.foo.id}"
ingress {
protocol = "tcp"
@@ -1130,9 +1162,14 @@ resource "aws_security_group" "web" {
`
const testAccAWSSecurityGroupConfigTags = `
+resource "aws_vpc" "foo" {
+ cidr_block = "10.1.0.0/16"
+}
+
resource "aws_security_group" "foo" {
- name = "terraform_acceptance_test_example"
+ name = "terraform_acceptance_test_example"
description = "Used in the terraform acceptance tests"
+ vpc_id = "${aws_vpc.foo.id}"
ingress {
protocol = "tcp"
@@ -1155,9 +1192,14 @@ resource "aws_security_group" "foo" {
`
const testAccAWSSecurityGroupConfigTagsUpdate = `
+resource "aws_vpc" "foo" {
+ cidr_block = "10.1.0.0/16"
+}
+
resource "aws_security_group" "foo" {
name = "terraform_acceptance_test_example"
description = "Used in the terraform acceptance tests"
+ vpc_id = "${aws_vpc.foo.id}"
ingress {
protocol = "tcp"
@@ -1180,7 +1222,13 @@ resource "aws_security_group" "foo" {
`
const testAccAWSSecurityGroupConfig_generatedName = `
+resource "aws_vpc" "foo" {
+ cidr_block = "10.1.0.0/16"
+}
+
resource "aws_security_group" "web" {
+ vpc_id = "${aws_vpc.foo.id}"
+
ingress {
protocol = "tcp"
from_port = 80
@@ -1274,14 +1322,20 @@ resource "aws_security_group" "web" {
func testAccAWSSecurityGroupConfig_drift_complex() string {
return fmt.Sprintf(`
+resource "aws_vpc" "foo" {
+ cidr_block = "10.1.0.0/16"
+}
+
resource "aws_security_group" "otherweb" {
name = "tf_acc_%d"
description = "Used in the terraform acceptance tests"
+ vpc_id = "${aws_vpc.foo.id}"
}
resource "aws_security_group" "web" {
name = "tf_acc_%d"
description = "Used in the terraform acceptance tests"
+ vpc_id = "${aws_vpc.foo.id}"
ingress {
protocol = "tcp"
@@ -1332,8 +1386,13 @@ resource "aws_security_group" "web" {
}
const testAccAWSSecurityGroupCombindCIDRandGroups = `
+resource "aws_vpc" "foo" {
+ cidr_block = "10.1.0.0/16"
+}
+
resource "aws_security_group" "two" {
name = "tf-test-1"
+ vpc_id = "${aws_vpc.foo.id}"
tags {
Name = "tf-test-1"
}
@@ -1341,6 +1400,7 @@ resource "aws_security_group" "two" {
resource "aws_security_group" "one" {
name = "tf-test-2"
+ vpc_id = "${aws_vpc.foo.id}"
tags {
Name = "tf-test-w"
}
@@ -1348,6 +1408,7 @@ resource "aws_security_group" "one" {
resource "aws_security_group" "three" {
name = "tf-test-3"
+ vpc_id = "${aws_vpc.foo.id}"
tags {
Name = "tf-test-3"
}
@@ -1355,6 +1416,7 @@ resource "aws_security_group" "three" {
resource "aws_security_group" "mixed" {
name = "tf-mix-test"
+ vpc_id = "${aws_vpc.foo.id}"
ingress {
from_port = 80
@@ -1376,9 +1438,14 @@ resource "aws_security_group" "mixed" {
`
const testAccAWSSecurityGroupConfig_ingressWithCidrAndSGs = `
+resource "aws_vpc" "foo" {
+ cidr_block = "10.1.0.0/16"
+}
+
resource "aws_security_group" "other_web" {
name = "tf_other_acc_tests"
description = "Used in the terraform acceptance tests"
+ vpc_id = "${aws_vpc.foo.id}"
tags {
Name = "tf-acc-test"
@@ -1388,6 +1455,7 @@ resource "aws_security_group" "other_web" {
resource "aws_security_group" "web" {
name = "terraform_acceptance_test_example"
description = "Used in the terraform acceptance tests"
+ vpc_id = "${aws_vpc.foo.id}"
ingress {
protocol = "tcp"
diff --git a/builtin/providers/aws/resource_aws_sns_topic.go b/builtin/providers/aws/resource_aws_sns_topic.go
index 4174e8732c94..62f2450c08e7 100644
--- a/builtin/providers/aws/resource_aws_sns_topic.go
+++ b/builtin/providers/aws/resource_aws_sns_topic.go
@@ -18,6 +18,7 @@ import (
// Mutable attributes
var SNSAttributeMap = map[string]string{
+ "arn": "TopicArn",
"display_name": "DisplayName",
"policy": "Policy",
"delivery_policy": "DeliveryPolicy",
@@ -163,7 +164,6 @@ func resourceAwsSnsTopicRead(d *schema.ResourceData, meta interface{}) error {
attributeOutput, err := snsconn.GetTopicAttributes(&sns.GetTopicAttributesInput{
TopicArn: aws.String(d.Id()),
})
-
if err != nil {
if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "NotFound" {
log.Printf("[WARN] SNS Topic (%s) not found, error code (404)", d.Id())
@@ -198,6 +198,17 @@ func resourceAwsSnsTopicRead(d *schema.ResourceData, meta interface{}) error {
}
}
+ // If we have no name set (import) then determine it from the ARN.
+ // This is a bit of a heuristic for now since AWS provides no other
+ // way to get it.
+ if _, ok := d.GetOk("name"); !ok {
+ arn := d.Get("arn").(string)
+ idx := strings.LastIndex(arn, ":")
+ if idx > -1 {
+ d.Set("name", arn[idx+1:])
+ }
+ }
+
return nil
}
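The import heuristic above recovers the topic name from the ARN because the SNS API exposes no dedicated attribute for it. A tiny standalone sketch of the same string handling, with a made-up example ARN:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Mirrors the read-path heuristic: an SNS topic name is the final
	// ARN segment, after the last ":".
	arn := "arn:aws:sns:us-west-2:123456789012:my-topic" // example ARN only
	if idx := strings.LastIndex(arn, ":"); idx > -1 {
		fmt.Println(arn[idx+1:]) // my-topic
	}
}
```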
diff --git a/builtin/providers/aws/resource_aws_sns_topic_test.go b/builtin/providers/aws/resource_aws_sns_topic_test.go
index 2852c36fb2c3..2b74c7abb77f 100644
--- a/builtin/providers/aws/resource_aws_sns_topic_test.go
+++ b/builtin/providers/aws/resource_aws_sns_topic_test.go
@@ -13,9 +13,10 @@ import (
func TestAccAWSSNSTopic_basic(t *testing.T) {
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSSNSTopicDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_sns_topic.test_topic",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSSNSTopicDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSSNSTopicConfig,
@@ -29,9 +30,10 @@ func TestAccAWSSNSTopic_basic(t *testing.T) {
func TestAccAWSSNSTopic_withIAMRole(t *testing.T) {
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckAWSSNSTopicDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_sns_topic.test_topic",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckAWSSNSTopicDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSSNSTopicConfig_withIAMRole,
diff --git a/builtin/providers/aws/resource_aws_sqs_queue.go b/builtin/providers/aws/resource_aws_sqs_queue.go
index fb3833072997..7d7733bf323a 100644
--- a/builtin/providers/aws/resource_aws_sqs_queue.go
+++ b/builtin/providers/aws/resource_aws_sqs_queue.go
@@ -10,6 +10,7 @@ import (
"github.com/hashicorp/terraform/helper/schema"
"github.com/aws/aws-sdk-go/aws"
+ "github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/service/sqs"
)
@@ -177,6 +178,14 @@ func resourceAwsSqsQueueRead(d *schema.ResourceData, meta interface{}) error {
})
if err != nil {
+ if awsErr, ok := err.(awserr.Error); ok {
+ log.Printf("ERROR Found %s", awsErr.Code())
+ if "AWS.SimpleQueueService.NonExistentQueue" == awsErr.Code() {
+ d.SetId("")
+ log.Printf("[DEBUG] SQS Queue (%s) not found", d.Get("name").(string))
+ return nil
+ }
+ }
return err
}
diff --git a/builtin/providers/aws/resource_aws_subnet_test.go b/builtin/providers/aws/resource_aws_subnet_test.go
index 5b80b84890c5..2a4e3259dc95 100644
--- a/builtin/providers/aws/resource_aws_subnet_test.go
+++ b/builtin/providers/aws/resource_aws_subnet_test.go
@@ -27,9 +27,10 @@ func TestAccAWSSubnet_basic(t *testing.T) {
}
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckSubnetDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_subnet.foo",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckSubnetDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccSubnetConfig,
diff --git a/builtin/providers/aws/resource_aws_vpc.go b/builtin/providers/aws/resource_aws_vpc.go
index d4689c5ad6f8..e6e3b94a5472 100644
--- a/builtin/providers/aws/resource_aws_vpc.go
+++ b/builtin/providers/aws/resource_aws_vpc.go
@@ -31,6 +31,7 @@ func resourceAwsVpc() *schema.Resource {
Type: schema.TypeString,
Optional: true,
ForceNew: true,
+ Computed: true,
},
"enable_dns_hostnames": &schema.Schema{
@@ -140,6 +141,7 @@ func resourceAwsVpcRead(d *schema.ResourceData, meta interface{}) error {
vpcid := d.Id()
d.Set("cidr_block", vpc.CidrBlock)
d.Set("dhcp_options_id", vpc.DhcpOptionsId)
+ d.Set("instance_tenancy", vpc.InstanceTenancy)
// Tags
d.Set("tags", tagsToMap(vpc.Tags))
@@ -154,7 +156,7 @@ func resourceAwsVpcRead(d *schema.ResourceData, meta interface{}) error {
if err != nil {
return err
}
- d.Set("enable_dns_support", *resp.EnableDnsSupport)
+ d.Set("enable_dns_support", *resp.EnableDnsSupport.Value)
attribute = "enableDnsHostnames"
DescribeAttrOpts = &ec2.DescribeVpcAttributeInput{
Attribute: &attribute,
@@ -164,7 +166,7 @@ func resourceAwsVpcRead(d *schema.ResourceData, meta interface{}) error {
if err != nil {
return err
}
- d.Set("enable_dns_hostnames", *resp.EnableDnsHostnames)
+ d.Set("enable_dns_hostnames", *resp.EnableDnsHostnames.Value)
DescribeClassiclinkOpts := &ec2.DescribeVpcClassicLinkInput{
VpcIds: []*string{&vpcid},
diff --git a/builtin/providers/aws/resource_aws_vpc_endpoint_test.go b/builtin/providers/aws/resource_aws_vpc_endpoint_test.go
index 4a081b69c0c7..c39162588ff2 100644
--- a/builtin/providers/aws/resource_aws_vpc_endpoint_test.go
+++ b/builtin/providers/aws/resource_aws_vpc_endpoint_test.go
@@ -16,9 +16,10 @@ func TestAccAWSVpcEndpoint_basic(t *testing.T) {
var endpoint ec2.VpcEndpoint
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckVpcEndpointDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_vpc_endpoint.second-private-s3",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckVpcEndpointDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccVpcEndpointWithRouteTableAndPolicyConfig,
@@ -35,9 +36,10 @@ func TestAccAWSVpcEndpoint_withRouteTableAndPolicy(t *testing.T) {
var routeTable ec2.RouteTable
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckVpcEndpointDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_vpc_endpoint.second-private-s3",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckVpcEndpointDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccVpcEndpointWithRouteTableAndPolicyConfig,
diff --git a/builtin/providers/aws/resource_aws_vpc_peering_connection.go b/builtin/providers/aws/resource_aws_vpc_peering_connection.go
index a8ae86dce765..8c48b0e96ec2 100644
--- a/builtin/providers/aws/resource_aws_vpc_peering_connection.go
+++ b/builtin/providers/aws/resource_aws_vpc_peering_connection.go
@@ -147,7 +147,6 @@ func resourceAwsVPCPeeringUpdate(d *schema.ResourceData, meta interface{}) error
}
if _, ok := d.GetOk("auto_accept"); ok {
-
pcRaw, _, err := resourceAwsVPCPeeringConnectionStateRefreshFunc(conn, d.Id())()
if err != nil {
@@ -160,7 +159,6 @@ func resourceAwsVPCPeeringUpdate(d *schema.ResourceData, meta interface{}) error
pc := pcRaw.(*ec2.VpcPeeringConnection)
if pc.Status != nil && *pc.Status.Code == "pending-acceptance" {
-
status, err := resourceVPCPeeringConnectionAccept(conn, d.Id())
if err != nil {
return err
diff --git a/builtin/providers/aws/resource_aws_vpc_peering_connection_test.go b/builtin/providers/aws/resource_aws_vpc_peering_connection_test.go
index 4318bda688c7..5aefdd6a9c03 100644
--- a/builtin/providers/aws/resource_aws_vpc_peering_connection_test.go
+++ b/builtin/providers/aws/resource_aws_vpc_peering_connection_test.go
@@ -22,6 +22,10 @@ func TestAccAWSVPCPeeringConnection_basic(t *testing.T) {
t.Fatal("AWS_ACCOUNT_ID must be set")
}
},
+
+ IDRefreshName: "aws_vpc_peering_connection.foo",
+ IDRefreshIgnore: []string{"auto_accept"},
+
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSVpcPeeringConnectionDestroy,
Steps: []resource.TestStep{
@@ -82,7 +86,11 @@ func TestAccAWSVPCPeeringConnection_tags(t *testing.T) {
}
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
+ PreCheck: func() { testAccPreCheck(t) },
+
+ IDRefreshName: "aws_vpc_peering_connection.foo",
+ IDRefreshIgnore: []string{"auto_accept"},
+
Providers: testAccProviders,
CheckDestroy: testAccCheckVpcDestroy,
Steps: []resource.TestStep{
diff --git a/builtin/providers/aws/resource_aws_vpn_connection.go b/builtin/providers/aws/resource_aws_vpn_connection.go
index 60fb33e2804f..2cdd3adf976e 100644
--- a/builtin/providers/aws/resource_aws_vpn_connection.go
+++ b/builtin/providers/aws/resource_aws_vpn_connection.go
@@ -316,10 +316,8 @@ func resourceAwsVpnConnectionRead(d *schema.ResourceData, meta interface{}) erro
if err := d.Set("vgw_telemetry", telemetryToMapList(vpnConnection.VgwTelemetry)); err != nil {
return err
}
- if vpnConnection.Routes != nil {
- if err := d.Set("routes", routesToMapList(vpnConnection.Routes)); err != nil {
- return err
- }
+ if err := d.Set("routes", routesToMapList(vpnConnection.Routes)); err != nil {
+ return err
}
return nil
diff --git a/builtin/providers/aws/resource_aws_vpn_connection_test.go b/builtin/providers/aws/resource_aws_vpn_connection_test.go
index a26f2769e29b..ceb0553a8a7e 100644
--- a/builtin/providers/aws/resource_aws_vpn_connection_test.go
+++ b/builtin/providers/aws/resource_aws_vpn_connection_test.go
@@ -14,9 +14,10 @@ import (
func TestAccAWSVpnConnection_basic(t *testing.T) {
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccAwsVpnConnectionDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_vpn_connection.foo",
+ Providers: testAccProviders,
+ CheckDestroy: testAccAwsVpnConnectionDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAwsVpnConnectionConfig,
diff --git a/builtin/providers/aws/resource_aws_vpn_gateway_test.go b/builtin/providers/aws/resource_aws_vpn_gateway_test.go
index 3a4bb1747247..beca547209e3 100644
--- a/builtin/providers/aws/resource_aws_vpn_gateway_test.go
+++ b/builtin/providers/aws/resource_aws_vpn_gateway_test.go
@@ -32,9 +32,10 @@ func TestAccAWSVpnGateway_basic(t *testing.T) {
}
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckVpnGatewayDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_vpn_gateway.foo",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckVpnGatewayDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccVpnGatewayConfig,
@@ -70,9 +71,10 @@ func TestAccAWSVpnGateway_delete(t *testing.T) {
}
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckVpnGatewayDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_vpn_gateway.foo",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckVpnGatewayDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccVpnGatewayConfig,
@@ -91,9 +93,10 @@ func TestAccAWSVpnGateway_tags(t *testing.T) {
var v ec2.VpnGateway
resource.Test(t, resource.TestCase{
- PreCheck: func() { testAccPreCheck(t) },
- Providers: testAccProviders,
- CheckDestroy: testAccCheckVpnGatewayDestroy,
+ PreCheck: func() { testAccPreCheck(t) },
+ IDRefreshName: "aws_vpn_gateway.foo",
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckVpnGatewayDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccCheckVpnGatewayConfigTags,
diff --git a/builtin/providers/aws/structure.go b/builtin/providers/aws/structure.go
index a23b1ef6ca85..b0c84b9ec561 100644
--- a/builtin/providers/aws/structure.go
+++ b/builtin/providers/aws/structure.go
@@ -948,6 +948,59 @@ func expandApiGatewayRequestResponseModelOperations(d *schema.ResourceData, key
return operations
}
+func expandApiGatewayMethodResponseParametersJSONOperations(d *schema.ResourceData, key string, prefix string) ([]*apigateway.PatchOperation, error) {
+ operations := make([]*apigateway.PatchOperation, 0)
+
+ oldParameters, newParameters := d.GetChange(key)
+ oldParametersMap := make(map[string]interface{})
+ newParametersMap := make(map[string]interface{})
+
+ if err := json.Unmarshal([]byte(oldParameters.(string)), &oldParametersMap); err != nil {
+ err := fmt.Errorf("Error unmarshaling old response_parameters_in_json: %s", err)
+ return operations, err
+ }
+
+ if err := json.Unmarshal([]byte(newParameters.(string)), &newParametersMap); err != nil {
+ err := fmt.Errorf("Error unmarshaling new response_parameters_in_json: %s", err)
+ return operations, err
+ }
+
+ for k, _ := range oldParametersMap {
+ operation := apigateway.PatchOperation{
+ Op: aws.String("remove"),
+ Path: aws.String(fmt.Sprintf("/%s/%s", prefix, k)),
+ }
+
+ for nK, nV := range newParametersMap {
+ if nK == k {
+ operation.Op = aws.String("replace")
+ operation.Value = aws.String(strconv.FormatBool(nV.(bool)))
+ }
+ }
+
+ operations = append(operations, &operation)
+ }
+
+ for nK, nV := range newParametersMap {
+ exists := false
+ for k, _ := range oldParametersMap {
+ if k == nK {
+ exists = true
+ }
+ }
+ if !exists {
+ operation := apigateway.PatchOperation{
+ Op: aws.String("add"),
+ Path: aws.String(fmt.Sprintf("/%s/%s", prefix, nK)),
+ Value: aws.String(strconv.FormatBool(nV.(bool))),
+ }
+ operations = append(operations, &operation)
+ }
+ }
+
+ return operations, nil
+}
+
func expandApiGatewayStageKeyOperations(d *schema.ResourceData) []*apigateway.PatchOperation {
operations := make([]*apigateway.PatchOperation, 0)
@@ -1077,3 +1130,42 @@ func flattenBeanstalkTrigger(list []*elasticbeanstalk.Trigger) []string {
}
return strs
}
+
+// There are several parts of the AWS API that will sort lists of strings,
+// causing diffs in between resources that use lists. This avoids a bit of
+// code duplication for pre-sorts that can be used for things like hash
+// functions, etc.
+func sortInterfaceSlice(in []interface{}) []interface{} {
+ a := []string{}
+ b := []interface{}{}
+ for _, v := range in {
+ a = append(a, v.(string))
+ }
+
+ sort.Strings(a)
+
+ for _, v := range a {
+ b = append(b, v)
+ }
+
+ return b
+}
+
+func flattenApiGatewayThrottleSettings(settings *apigateway.ThrottleSettings) []map[string]interface{} {
+ result := make([]map[string]interface{}, 0, 1)
+
+ if settings != nil {
+ r := make(map[string]interface{})
+ if settings.BurstLimit != nil {
+ r["burst_limit"] = *settings.BurstLimit
+ }
+
+ if settings.RateLimit != nil {
+ r["rate_limit"] = *settings.RateLimit
+ }
+
+ result = append(result, r)
+ }
+
+ return result
+}
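Because several AWS APIs return string lists pre-sorted, a set-hash computed over the unsorted values from config would disagree with the hash computed after refresh and produce spurious diffs. A hedged sketch of how `sortInterfaceSlice` is meant to be used inside a hash function (the hash body and values are illustrative; the helper is copied from above so the example compiles on its own):

```go
package main

import (
	"bytes"
	"fmt"
	"sort"

	"github.com/hashicorp/terraform/helper/hashcode"
)

// Same behavior as sortInterfaceSlice in structure.go: copy to a string
// slice, sort, and copy back, so hashes never depend on API ordering.
func sortInterfaceSlice(in []interface{}) []interface{} {
	a := []string{}
	for _, v := range in {
		a = append(a, v.(string))
	}
	sort.Strings(a)
	b := []interface{}{}
	for _, v := range a {
		b = append(b, v)
	}
	return b
}

// exampleHash is a hypothetical set-hash that pre-sorts before hashing,
// so {"b", "a"} from config and {"a", "b"} from the API hash identically.
func exampleHash(values []interface{}) int {
	var buf bytes.Buffer
	for _, v := range sortInterfaceSlice(values) {
		buf.WriteString(fmt.Sprintf("%s-", v.(string)))
	}
	return hashcode.String(buf.String())
}

func main() {
	fmt.Println(exampleHash([]interface{}{"b", "a"}) == exampleHash([]interface{}{"a", "b"})) // true
}
```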
diff --git a/builtin/providers/aws/structure_test.go b/builtin/providers/aws/structure_test.go
index aa656710d93f..80b3711aa59b 100644
--- a/builtin/providers/aws/structure_test.go
+++ b/builtin/providers/aws/structure_test.go
@@ -6,6 +6,7 @@ import (
"testing"
"github.com/aws/aws-sdk-go/aws"
+ "github.com/aws/aws-sdk-go/service/apigateway"
"github.com/aws/aws-sdk-go/service/autoscaling"
"github.com/aws/aws-sdk-go/service/ec2"
"github.com/aws/aws-sdk-go/service/elasticache"
@@ -902,3 +903,42 @@ func TestFlattenSecurityGroups(t *testing.T) {
}
}
}
+
+func TestFlattenApiGatewayThrottleSettings(t *testing.T) {
+ expectedBurstLimit := int64(140)
+ expectedRateLimit := 120.0
+
+ ts := &apigateway.ThrottleSettings{
+ BurstLimit: aws.Int64(expectedBurstLimit),
+ RateLimit: aws.Float64(expectedRateLimit),
+ }
+ result := flattenApiGatewayThrottleSettings(ts)
+
+ if len(result) != 1 {
+ t.Fatalf("Expected map to have exactly 1 element, got %d", len(result))
+ }
+
+ burstLimit, ok := result[0]["burst_limit"]
+ if !ok {
+ t.Fatal("Expected 'burst_limit' key in the map")
+ }
+ burstLimitInt, ok := burstLimit.(int64)
+ if !ok {
+ t.Fatal("Expected 'burst_limit' to be int")
+ }
+ if burstLimitInt != expectedBurstLimit {
+ t.Fatalf("Expected 'burst_limit' to equal %d, got %d", expectedBurstLimit, burstLimitInt)
+ }
+
+ rateLimit, ok := result[0]["rate_limit"]
+ if !ok {
+ t.Fatal("Expected 'rate_limit' key in the map")
+ }
+ rateLimitFloat, ok := rateLimit.(float64)
+ if !ok {
+ t.Fatal("Expected 'rate_limit' to be float64")
+ }
+ if rateLimitFloat != expectedRateLimit {
+ t.Fatalf("Expected 'rate_limit' to equal %f, got %f", expectedRateLimit, rateLimitFloat)
+ }
+}
diff --git a/builtin/providers/aws/validators.go b/builtin/providers/aws/validators.go
index be19c483bb62..4cb31b3e78e5 100644
--- a/builtin/providers/aws/validators.go
+++ b/builtin/providers/aws/validators.go
@@ -6,6 +6,7 @@ import (
"regexp"
"time"
+ "github.com/aws/aws-sdk-go/service/s3"
"github.com/hashicorp/terraform/helper/schema"
)
@@ -30,6 +31,31 @@ func validateRdsId(v interface{}, k string) (ws []string, errors []error) {
return
}
+func validateElastiCacheClusterId(v interface{}, k string) (ws []string, errors []error) {
+ value := v.(string)
+ if (len(value) < 1) || (len(value) > 20) {
+ errors = append(errors, fmt.Errorf(
+ "%q must contain from 1 to 20 alphanumeric characters or hyphens", k))
+ }
+ if !regexp.MustCompile(`^[0-9a-z-]+$`).MatchString(value) {
+ errors = append(errors, fmt.Errorf(
+ "only lowercase alphanumeric characters and hyphens allowed in %q", k))
+ }
+ if !regexp.MustCompile(`^[a-z]`).MatchString(value) {
+ errors = append(errors, fmt.Errorf(
+ "first character of %q must be a letter", k))
+ }
+ if regexp.MustCompile(`--`).MatchString(value) {
+ errors = append(errors, fmt.Errorf(
+ "%q cannot contain two consecutive hyphens", k))
+ }
+ if regexp.MustCompile(`-$`).MatchString(value) {
+ errors = append(errors, fmt.Errorf(
+ "%q cannot end with a hyphen", k))
+ }
+ return
+}
+
func validateASGScheduleTimestamp(v interface{}, k string) (ws []string, errors []error) {
value := v.(string)
_, err := time.Parse(awsAutoscalingScheduleTimeLayout, value)
@@ -167,6 +193,21 @@ func validateMaxLength(length int) schema.SchemaValidateFunc {
}
}
+func validateIntegerInRange(min, max int) schema.SchemaValidateFunc {
+ return func(v interface{}, k string) (ws []string, errors []error) {
+ value := v.(int)
+ if value < min {
+ errors = append(errors, fmt.Errorf(
+ "%q cannot be lower than %d: %d", k, min, value))
+ }
+ if value > max {
+ errors = append(errors, fmt.Errorf(
+ "%q cannot be higher than %d: %d", k, max, value))
+ }
+ return
+ }
+}
+
func validateCloudWatchEventTargetId(v interface{}, k string) (ws []string, errors []error) {
value := v.(string)
if len(value) > 64 {
@@ -367,3 +408,33 @@ func validateLogGroupName(v interface{}, k string) (ws []string, errors []error)
return
}
+
+func validateS3BucketLifecycleTimestamp(v interface{}, k string) (ws []string, errors []error) {
+ value := v.(string)
+ _, err := time.Parse(time.RFC3339, fmt.Sprintf("%sT00:00:00Z", value))
+ if err != nil {
+ errors = append(errors, fmt.Errorf(
+ "%q cannot be parsed as RFC3339 Timestamp Format", value))
+ }
+
+ return
+}
+
+func validateS3BucketLifecycleStorageClass(v interface{}, k string) (ws []string, errors []error) {
+ value := v.(string)
+ if value != s3.TransitionStorageClassStandardIa && value != s3.TransitionStorageClassGlacier {
+ errors = append(errors, fmt.Errorf(
+ "%q must be one of '%q', '%q'", k, s3.TransitionStorageClassStandardIa, s3.TransitionStorageClassGlacier))
+ }
+
+ return
+}
+
+func validateS3BucketLifecycleRuleId(v interface{}, k string) (ws []string, errors []error) {
+ value := v.(string)
+ if len(value) > 255 {
+ errors = append(errors, fmt.Errorf(
+ "%q cannot exceed 255 characters", k))
+ }
+ return
+}
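Validators like `validateIntegerInRange` return a `schema.SchemaValidateFunc` and plug straight into a field's `ValidateFunc`. A minimal sketch of wiring one in and exercising it directly; the `retention_days` field name is made up for the demo, and the validator body is copied from above so the example runs standalone:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/helper/schema"
)

// validateIntegerInRange mirrors the validator added in validators.go.
func validateIntegerInRange(min, max int) schema.SchemaValidateFunc {
	return func(v interface{}, k string) (ws []string, errors []error) {
		value := v.(int)
		if value < min {
			errors = append(errors, fmt.Errorf("%q cannot be lower than %d: %d", k, min, value))
		}
		if value > max {
			errors = append(errors, fmt.Errorf("%q cannot be higher than %d: %d", k, max, value))
		}
		return
	}
}

func main() {
	// In a schema this would appear as:
	//   "retention_days": &schema.Schema{Type: schema.TypeInt, Optional: true,
	//       ValidateFunc: validateIntegerInRange(1, 365)},
	// ("retention_days" is a hypothetical field name.)
	validate := validateIntegerInRange(1, 365)
	_, errs := validate(400, "retention_days")
	fmt.Println(errs) // one error: 400 is above the maximum
}
```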
diff --git a/builtin/providers/aws/validators_test.go b/builtin/providers/aws/validators_test.go
index 972a9cbf2a55..96b391f41b58 100644
--- a/builtin/providers/aws/validators_test.go
+++ b/builtin/providers/aws/validators_test.go
@@ -382,3 +382,141 @@ func TestValidateLogGroupName(t *testing.T) {
}
}
}
+
+func TestValidateS3BucketLifecycleTimestamp(t *testing.T) {
+ validDates := []string{
+ "2016-01-01",
+ "2006-01-02",
+ }
+
+ for _, v := range validDates {
+ _, errors := validateS3BucketLifecycleTimestamp(v, "date")
+ if len(errors) != 0 {
+ t.Fatalf("%q should be valid date: %q", v, errors)
+ }
+ }
+
+ invalidDates := []string{
+ "Jan 01 2016",
+ "20160101",
+ }
+
+ for _, v := range invalidDates {
+ _, errors := validateS3BucketLifecycleTimestamp(v, "date")
+ if len(errors) == 0 {
+ t.Fatalf("%q should be invalid date", v)
+ }
+ }
+}
+
+func TestValidateS3BucketLifecycleStorageClass(t *testing.T) {
+ validStorageClass := []string{
+ "STANDARD_IA",
+ "GLACIER",
+ }
+
+ for _, v := range validStorageClass {
+ _, errors := validateS3BucketLifecycleStorageClass(v, "storage_class")
+ if len(errors) != 0 {
+ t.Fatalf("%q should be valid storage class: %q", v, errors)
+ }
+ }
+
+ invalidStorageClass := []string{
+ "STANDARD",
+ "1234",
+ }
+ for _, v := range invalidStorageClass {
+ _, errors := validateS3BucketLifecycleStorageClass(v, "storage_class")
+ if len(errors) == 0 {
+ t.Fatalf("%q should be invalid storage class", v)
+ }
+ }
+}
+
+func TestValidateS3BucketLifecycleRuleId(t *testing.T) {
+ validId := []string{
+ "YadaHereAndThere",
+ "Valid-5Rule_ID",
+ "This . is also %% valid@!)+*(:ID",
+ "1234",
+ strings.Repeat("W", 255),
+ }
+ for _, v := range validId {
+ _, errors := validateS3BucketLifecycleRuleId(v, "id")
+ if len(errors) != 0 {
+ t.Fatalf("%q should be a valid lifecycle rule id: %q", v, errors)
+ }
+ }
+
+ invalidId := []string{
+ // length > 255
+ strings.Repeat("W", 256),
+ }
+ for _, v := range invalidId {
+ _, errors := validateS3BucketLifecycleRuleId(v, "id")
+ if len(errors) == 0 {
+ t.Fatalf("%q should be an invalid lifecycle rule id", v)
+ }
+ }
+}
+
+func TestValidateIntegerInRange(t *testing.T) {
+ validIntegers := []int{-259, 0, 1, 5, 999}
+ min := -259
+ max := 999
+ for _, v := range validIntegers {
+ _, errors := validateIntegerInRange(min, max)(v, "name")
+ if len(errors) != 0 {
+ t.Fatalf("%q should be an integer in range (%d, %d): %q", v, min, max, errors)
+ }
+ }
+
+ invalidIntegers := []int{-260, -99999, 1000, 25678}
+ for _, v := range invalidIntegers {
+ _, errors := validateIntegerInRange(min, max)(v, "name")
+ if len(errors) == 0 {
+ t.Fatalf("%q should be an integer outside range (%d, %d)", v, min, max)
+ }
+ }
+}
+
+func TestResourceAWSElastiCacheClusterIdValidation(t *testing.T) {
+ cases := []struct {
+ Value string
+ ErrCount int
+ }{
+ {
+ Value: "tEsting",
+ ErrCount: 1,
+ },
+ {
+ Value: "t.sting",
+ ErrCount: 1,
+ },
+ {
+ Value: "t--sting",
+ ErrCount: 1,
+ },
+ {
+ Value: "1testing",
+ ErrCount: 1,
+ },
+ {
+ Value: "testing-",
+ ErrCount: 1,
+ },
+ {
+ Value: randomString(65),
+ ErrCount: 1,
+ },
+ }
+
+ for _, tc := range cases {
+ _, errors := validateElastiCacheClusterId(tc.Value, "aws_elasticache_cluster_cluster_id")
+
+ if len(errors) != tc.ErrCount {
+ t.Fatalf("Expected the ElastiCache Cluster cluster_id to trigger a validation error")
+ }
+ }
+}
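
`randomString(65)` in the ElastiCache test above refers to a test helper defined elsewhere in the provider package and not shown in this hunk. A plausible stand-in, assuming the helper only needs to produce an n-character string, might look like the following; the implementation below is hypothetical and for illustration only.

```go
package main

import (
	"fmt"
	"math/rand"
)

// randomString is a hypothetical stand-in for the provider's test helper used
// above (randomString(65)); the real helper lives elsewhere in the package.
// Here it simply returns n pseudo-random lowercase letters.
func randomString(n int) string {
	letters := []rune("abcdefghijklmnopqrstuvwxyz")
	b := make([]rune, n)
	for i := range b {
		b[i] = letters[rand.Intn(len(letters))]
	}
	return string(b)
}

func main() {
	fmt.Println(len(randomString(65))) // 65: just past the cluster id length limit
}
```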
diff --git a/builtin/providers/azure/resource_azure_dns_server_test.go b/builtin/providers/azure/resource_azure_dns_server_test.go
index ac87ebc262b0..ef5188ecb952 100644
--- a/builtin/providers/azure/resource_azure_dns_server_test.go
+++ b/builtin/providers/azure/resource_azure_dns_server_test.go
@@ -5,6 +5,7 @@ import (
"testing"
"github.com/Azure/azure-sdk-for-go/management"
+ "github.com/hashicorp/terraform/helper/acctest"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
)
@@ -12,16 +13,20 @@ import (
func TestAccAzureDnsServerBasic(t *testing.T) {
name := "azure_dns_server.foo"
+ random := acctest.RandInt()
+ config := testAccAzureDnsServerBasic(random)
+ serverName := fmt.Sprintf("tf-dns-server-%d", random)
+
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAzureDnsServerDestroy,
Steps: []resource.TestStep{
resource.TestStep{
- Config: testAccAzureDnsServerBasic,
+ Config: config,
Check: resource.ComposeTestCheckFunc(
testAccCheckAzureDnsServerExists(name),
- resource.TestCheckResourceAttr(name, "name", "terraform-dns-server"),
+ resource.TestCheckResourceAttr(name, "name", serverName),
resource.TestCheckResourceAttr(name, "dns_address", "8.8.8.8"),
),
},
@@ -32,25 +37,30 @@ func TestAccAzureDnsServerBasic(t *testing.T) {
func TestAccAzureDnsServerUpdate(t *testing.T) {
name := "azure_dns_server.foo"
+ random := acctest.RandInt()
+ basicConfig := testAccAzureDnsServerBasic(random)
+ updateConfig := testAccAzureDnsServerUpdate(random)
+ serverName := fmt.Sprintf("tf-dns-server-%d", random)
+
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAzureDnsServerDestroy,
Steps: []resource.TestStep{
resource.TestStep{
- Config: testAccAzureDnsServerBasic,
+ Config: basicConfig,
Check: resource.ComposeTestCheckFunc(
testAccCheckAzureDnsServerExists(name),
- resource.TestCheckResourceAttr(name, "name", "terraform-dns-server"),
+ resource.TestCheckResourceAttr(name, "name", serverName),
resource.TestCheckResourceAttr(name, "dns_address", "8.8.8.8"),
),
},
resource.TestStep{
- Config: testAccAzureDnsServerUpdate,
+ Config: updateConfig,
Check: resource.ComposeTestCheckFunc(
testAccCheckAzureDnsServerExists(name),
- resource.TestCheckResourceAttr(name, "name", "terraform-dns-server"),
+ resource.TestCheckResourceAttr(name, "name", serverName),
resource.TestCheckResourceAttr(name, "dns_address", "8.8.4.4"),
),
},
@@ -116,16 +126,20 @@ func testAccCheckAzureDnsServerDestroy(s *terraform.State) error {
return nil
}
-const testAccAzureDnsServerBasic = `
+func testAccAzureDnsServerBasic(random int) string {
+ return fmt.Sprintf(`
resource "azure_dns_server" "foo" {
- name = "terraform-dns-server"
+ name = "tf-dns-server-%d"
dns_address = "8.8.8.8"
}
-`
+`, random)
+}
-const testAccAzureDnsServerUpdate = `
+func testAccAzureDnsServerUpdate(random int) string {
+ return fmt.Sprintf(`
resource "azure_dns_server" "foo" {
- name = "terraform-dns-server"
+ name = "tf-dns-server-%d"
dns_address = "8.8.4.4"
}
-`
+`, random)
+}
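
The Azure DNS server test change above converts fixed config strings into functions parameterized by a random integer from helper/acctest, so repeated or concurrent acceptance runs never collide on the same resource name. A standalone sketch of that pattern, using math/rand as a stand-in for `acctest.RandInt` (names below are illustrative, not part of this changeset):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// randInt stands in for helper/acctest's RandInt: a random suffix so that
// concurrently running acceptance tests never reuse the same resource name.
func randInt() int {
	return rand.New(rand.NewSource(time.Now().UnixNano())).Int()
}

// testAccDnsServerConfig builds the HCL for a test step from the random suffix,
// mirroring testAccAzureDnsServerBasic/Update in the hunk above.
func testAccDnsServerConfig(random int, address string) string {
	return fmt.Sprintf(`
resource "azure_dns_server" "foo" {
    name        = "tf-dns-server-%d"
    dns_address = "%s"
}
`, random, address)
}

func main() {
	r := randInt()
	fmt.Print(testAccDnsServerConfig(r, "8.8.8.8"))
	fmt.Print(testAccDnsServerConfig(r, "8.8.4.4")) // update step keeps the same name
}
```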
diff --git a/builtin/providers/azurerm/config.go b/builtin/providers/azurerm/config.go
index 73f0f612064b..eb6b0d4dc0c5 100644
--- a/builtin/providers/azurerm/config.go
+++ b/builtin/providers/azurerm/config.go
@@ -107,11 +107,6 @@ func setUserAgent(client *autorest.Client) {
// getArmClient is a helper method which returns a fully instantiated
// *ArmClient based on the Config's current settings.
func (c *Config) getArmClient() (*ArmClient, error) {
- spt, err := azure.NewServicePrincipalToken(c.ClientID, c.ClientSecret, c.TenantID, azure.AzureResourceManagerScope)
- if err != nil {
- return nil, err
- }
-
// client declarations:
client := ArmClient{}
@@ -125,8 +120,21 @@ func (c *Config) getArmClient() (*ArmClient, error) {
return nil, fmt.Errorf("Error creating Riviera client: %s", err)
}
+ // validate that the credentials are correct using Riviera. Note that this must be
+ // done _before_ using the Microsoft SDK, because Riviera handles errors. Using a
+ // namespace registration instead of a simple OAuth token refresh guarantees that
+ // service delegation is correct. This has the effect of registering Microsoft.Compute
+ // which is necessary anyway.
+ if err := registerProviderWithSubscription("Microsoft.Compute", rivieraClient); err != nil {
+ return nil, err
+ }
client.rivieraClient = rivieraClient
+ spt, err := azure.NewServicePrincipalToken(c.ClientID, c.ClientSecret, c.TenantID, azure.AzureResourceManagerScope)
+ if err != nil {
+ return nil, err
+ }
+
 // NOTE: these declarations should be left separate for clarity should the
 // clients need to be configured with custom Responders/PollingModes etc...
asc := compute.NewAvailabilitySetsClient(c.SubscriptionID)
diff --git a/builtin/providers/azurerm/provider.go b/builtin/providers/azurerm/provider.go
index 179c2a2d24a2..f0852ee8af86 100644
--- a/builtin/providers/azurerm/provider.go
+++ b/builtin/providers/azurerm/provider.go
@@ -14,6 +14,7 @@ import (
"github.com/hashicorp/terraform/helper/schema"
"github.com/hashicorp/terraform/terraform"
riviera "github.com/jen20/riviera/azure"
+ "sync"
)
// Provider returns a terraform.ResourceProvider.
@@ -91,9 +92,11 @@ type Config struct {
ClientID string
ClientSecret string
TenantID string
+
+ validateCredentialsOnce sync.Once
}
-func (c Config) validate() error {
+func (c *Config) validate() error {
var err *multierror.Error
if c.SubscriptionID == "" {
@@ -113,7 +116,7 @@ func (c Config) validate() error {
}
func providerConfigure(d *schema.ResourceData) (interface{}, error) {
- config := Config{
+ config := &Config{
SubscriptionID: d.Get("subscription_id").(string),
ClientID: d.Get("client_id").(string),
ClientSecret: d.Get("client_secret").(string),
@@ -129,7 +132,7 @@ func providerConfigure(d *schema.ResourceData) (interface{}, error) {
return nil, err
}
- err = registerAzureResourceProvidersWithSubscription(&config, client)
+ err = registerAzureResourceProvidersWithSubscription(client.rivieraClient)
if err != nil {
return nil, err
}
@@ -137,27 +140,52 @@ func providerConfigure(d *schema.ResourceData) (interface{}, error) {
return client, nil
}
+func registerProviderWithSubscription(providerName string, client *riviera.Client) error {
+ request := client.NewRequest()
+ request.Command = riviera.RegisterResourceProvider{
+ Namespace: providerName,
+ }
+
+ response, err := request.Execute()
+ if err != nil {
+ return fmt.Errorf("Cannot request provider registration for Azure Resource Manager: %s.", err)
+ }
+
+ if !response.IsSuccessful() {
+ return fmt.Errorf("Credentials for acessing the Azure Resource Manager API are likely " +
+ "to be incorrect, or\n the service principal does not have permission to use " +
+ "the Azure Service Management\n API.")
+ }
+
+ return nil
+}
+
+var providerRegistrationOnce sync.Once
+
// registerAzureResourceProvidersWithSubscription uses the providers client to register
// all Azure resource providers which the Terraform provider may require (regardless of
// whether they are actually used by the configuration or not). It was confirmed by Microsoft
// that this is the approach their own internal tools also take.
-func registerAzureResourceProvidersWithSubscription(config *Config, client *ArmClient) error {
- providerClient := client.providers
-
- providers := []string{"Microsoft.Network", "Microsoft.Compute", "Microsoft.Cdn", "Microsoft.Storage", "Microsoft.Sql", "Microsoft.Search", "Microsoft.Resources"}
-
- for _, v := range providers {
- res, err := providerClient.Register(v)
- if err != nil {
- return err
+func registerAzureResourceProvidersWithSubscription(client *riviera.Client) error {
+ var err error
+ providerRegistrationOnce.Do(func() {
+ // We register Microsoft.Compute during client initialization
+ providers := []string{"Microsoft.Network", "Microsoft.Cdn", "Microsoft.Storage", "Microsoft.Sql", "Microsoft.Search", "Microsoft.Resources"}
+
+ var wg sync.WaitGroup
+ wg.Add(len(providers))
+ for _, providerName := range providers {
+ go func(p string) {
+ defer wg.Done()
+ if innerErr := registerProviderWithSubscription(p, client); innerErr != nil {
+ err = innerErr
+ }
+ }(providerName)
}
+ wg.Wait()
+ })
- if res.StatusCode != http.StatusOK {
- return fmt.Errorf("Error registering provider %q with subscription %q", v, config.SubscriptionID)
- }
- }
-
- return nil
+ return err
}
// azureRMNormalizeLocation is a function which normalises human-readable region/location
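
The provider.go change above makes resource provider registration run at most once per process (`sync.Once`), fans the namespace registrations out across goroutines with a `sync.WaitGroup`, and surfaces a failure from any of them. A standalone sketch of that once-guarded parallel fan-out follows; the names are illustrative, and the sketch additionally guards the shared error with a mutex, which the hunk above does not do, so the error capture in the sketch is race-free.

```go
package main

import (
	"fmt"
	"sync"
)

var registerOnce sync.Once

// registerAll mirrors the shape of registerAzureResourceProvidersWithSubscription:
// run at most once per process, fan the namespaces out across goroutines, wait for
// all of them, and surface one of the failures (if any).
func registerAll(namespaces []string, register func(string) error) error {
	var (
		mu  sync.Mutex
		err error
	)
	registerOnce.Do(func() {
		var wg sync.WaitGroup
		wg.Add(len(namespaces))
		for _, ns := range namespaces {
			go func(n string) {
				defer wg.Done()
				if innerErr := register(n); innerErr != nil {
					mu.Lock()
					err = innerErr // check innerErr, not the outer err
					mu.Unlock()
				}
			}(ns)
		}
		wg.Wait()
	})
	return err
}

func main() {
	providers := []string{"Microsoft.Network", "Microsoft.Cdn", "Microsoft.Storage"}
	fmt.Println(registerAll(providers, func(ns string) error {
		fmt.Println("registering", ns)
		return nil
	}))
}
```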
diff --git a/builtin/providers/azurerm/resource_arm_cdn_endpoint_test.go b/builtin/providers/azurerm/resource_arm_cdn_endpoint_test.go
index d76ec91686bc..f7a7d86ab85f 100644
--- a/builtin/providers/azurerm/resource_arm_cdn_endpoint_test.go
+++ b/builtin/providers/azurerm/resource_arm_cdn_endpoint_test.go
@@ -145,6 +145,8 @@ resource "azurerm_cdn_endpoint" "test" {
origin {
name = "acceptanceTestCdnOrigin1"
host_name = "www.example.com"
+ https_port = 443
+ http_port = 80
}
}
`
@@ -170,6 +172,8 @@ resource "azurerm_cdn_endpoint" "test" {
origin {
name = "acceptanceTestCdnOrigin2"
host_name = "www.example.com"
+ https_port = 443
+ http_port = 80
}
tags {
@@ -200,6 +204,8 @@ resource "azurerm_cdn_endpoint" "test" {
origin {
name = "acceptanceTestCdnOrigin2"
host_name = "www.example.com"
+ https_port = 443
+ http_port = 80
}
tags {
diff --git a/builtin/providers/azurerm/resource_arm_template_deployment.go b/builtin/providers/azurerm/resource_arm_template_deployment.go
index fe425af6247f..33b8b0f8f320 100644
--- a/builtin/providers/azurerm/resource_arm_template_deployment.go
+++ b/builtin/providers/azurerm/resource_arm_template_deployment.go
@@ -110,7 +110,7 @@ func resourceArmTemplateDeploymentCreate(d *schema.ResourceData, meta interface{
Pending: []string{"creating", "updating", "accepted", "running"},
Target: []string{"succeeded"},
Refresh: templateDeploymentStateRefreshFunc(client, resGroup, name),
- Timeout: 10 * time.Minute,
+ Timeout: 40 * time.Minute,
}
if _, err := stateConf.WaitForState(); err != nil {
return fmt.Errorf("Error waiting for Template Deployment (%s) to become available: %s", name, err)
diff --git a/builtin/providers/cloudflare/config.go b/builtin/providers/cloudflare/config.go
index 4e4bb6d2f28e..e11fa8ec1690 100644
--- a/builtin/providers/cloudflare/config.go
+++ b/builtin/providers/cloudflare/config.go
@@ -1,10 +1,10 @@
package cloudflare
import (
- "fmt"
"log"
- "github.com/pearkes/cloudflare"
+ // NOTE: Temporary until they merge my PR:
+ "github.com/mitchellh/cloudflare-go"
)
type Config struct {
@@ -13,14 +13,8 @@ type Config struct {
}
// Client() returns a new client for accessing cloudflare.
-func (c *Config) Client() (*cloudflare.Client, error) {
- client, err := cloudflare.NewClient(c.Email, c.Token)
-
- if err != nil {
- return nil, fmt.Errorf("Error setting up client: %s", err)
- }
-
- log.Printf("[INFO] CloudFlare Client configured for user: %s", client.Email)
-
+func (c *Config) Client() (*cloudflare.API, error) {
+ client := cloudflare.New(c.Token, c.Email)
+ log.Printf("[INFO] CloudFlare Client configured for user: %s", c.Email)
return client, nil
}
diff --git a/builtin/providers/cloudflare/provider_test.go b/builtin/providers/cloudflare/provider_test.go
index 3306633cfb89..e8cd4ffafccf 100644
--- a/builtin/providers/cloudflare/provider_test.go
+++ b/builtin/providers/cloudflare/provider_test.go
@@ -38,6 +38,6 @@ func testAccPreCheck(t *testing.T) {
}
if v := os.Getenv("CLOUDFLARE_DOMAIN"); v == "" {
- t.Fatal("CLOUDFLARE_DOMAIN must be set for acceptance tests. The domain is used to ` and destroy record against.")
+ t.Fatal("CLOUDFLARE_DOMAIN must be set for acceptance tests. The domain is used to create and destroy record against.")
}
}
diff --git a/builtin/providers/cloudflare/resource_cloudflare_record.go b/builtin/providers/cloudflare/resource_cloudflare_record.go
index d27cdf6c256b..ad478dc7ddd8 100644
--- a/builtin/providers/cloudflare/resource_cloudflare_record.go
+++ b/builtin/providers/cloudflare/resource_cloudflare_record.go
@@ -3,10 +3,11 @@ package cloudflare
import (
"fmt"
"log"
- "strings"
"github.com/hashicorp/terraform/helper/schema"
- "github.com/pearkes/cloudflare"
+
+ // NOTE: Temporary until they merge my PR:
+ "github.com/mitchellh/cloudflare-go"
)
func resourceCloudFlareRecord() *schema.Resource {
@@ -44,97 +45,130 @@ func resourceCloudFlareRecord() *schema.Resource {
},
"ttl": &schema.Schema{
- Type: schema.TypeString,
+ Type: schema.TypeInt,
Optional: true,
Computed: true,
},
"priority": &schema.Schema{
- Type: schema.TypeString,
+ Type: schema.TypeInt,
+ Optional: true,
+ },
+
+ "proxied": &schema.Schema{
+ Default: false,
Optional: true,
+ Type: schema.TypeBool,
+ },
+
+ "zone_id": &schema.Schema{
+ Type: schema.TypeString,
+ Computed: true,
},
},
}
}
func resourceCloudFlareRecordCreate(d *schema.ResourceData, meta interface{}) error {
- client := meta.(*cloudflare.Client)
+ client := meta.(*cloudflare.API)
+
+ newRecord := cloudflare.DNSRecord{
+ Type: d.Get("type").(string),
+ Name: d.Get("name").(string),
+ Content: d.Get("value").(string),
+ Proxied: d.Get("proxied").(bool),
+ ZoneName: d.Get("domain").(string),
+ }
- // Create the new record
- newRecord := &cloudflare.CreateRecord{
- Name: d.Get("name").(string),
- Type: d.Get("type").(string),
- Content: d.Get("value").(string),
+ if priority, ok := d.GetOk("priority"); ok {
+ newRecord.Priority = priority.(int)
}
if ttl, ok := d.GetOk("ttl"); ok {
- newRecord.Ttl = ttl.(string)
+ newRecord.TTL = ttl.(int)
}
- if priority, ok := d.GetOk("priority"); ok {
- newRecord.Priority = priority.(string)
+ zoneId, err := client.ZoneIDByName(newRecord.ZoneName)
+ if err != nil {
+ return fmt.Errorf("Error finding zone %q: %s", newRecord.ZoneName, err)
}
- log.Printf("[DEBUG] CloudFlare Record create configuration: %#v", newRecord)
+ d.Set("zone_id", zoneId)
+ newRecord.ZoneID = zoneId
- rec, err := client.CreateRecord(d.Get("domain").(string), newRecord)
+ log.Printf("[DEBUG] CloudFlare Record create configuration: %#v", newRecord)
+ r, err := client.CreateDNSRecord(zoneId, newRecord)
if err != nil {
- return fmt.Errorf("Failed to create CloudFlare Record: %s", err)
+ return fmt.Errorf("Failed to create record: %s", err)
}
- d.SetId(rec.Id)
+ d.SetId(r.ID)
+
log.Printf("[INFO] CloudFlare Record ID: %s", d.Id())
return resourceCloudFlareRecordRead(d, meta)
}
func resourceCloudFlareRecordRead(d *schema.ResourceData, meta interface{}) error {
- client := meta.(*cloudflare.Client)
+ client := meta.(*cloudflare.API)
+ domain := d.Get("domain").(string)
- rec, err := client.RetrieveRecord(d.Get("domain").(string), d.Id())
+ zoneId, err := client.ZoneIDByName(domain)
if err != nil {
- if strings.Contains(err.Error(), "not found") {
- d.SetId("")
- return nil
- }
+ return fmt.Errorf("Error finding zone %q: %s", domain, err)
+ }
- return fmt.Errorf(
- "Couldn't find CloudFlare Record ID (%s) for domain (%s): %s",
- d.Id(), d.Get("domain").(string), err)
+ record, err := client.DNSRecord(zoneId, d.Id())
+ if err != nil {
+ return err
}
- d.Set("name", rec.Name)
- d.Set("hostname", rec.FullName)
- d.Set("type", rec.Type)
- d.Set("value", rec.Value)
- d.Set("ttl", rec.Ttl)
- d.Set("priority", rec.Priority)
+ d.SetId(record.ID)
+ d.Set("hostname", record.Name)
+ d.Set("type", record.Type)
+ d.Set("value", record.Content)
+ d.Set("ttl", record.TTL)
+ d.Set("priority", record.Priority)
+ d.Set("proxied", record.Proxied)
+ d.Set("zone_id", zoneId)
return nil
}
func resourceCloudFlareRecordUpdate(d *schema.ResourceData, meta interface{}) error {
- client := meta.(*cloudflare.Client)
+ client := meta.(*cloudflare.API)
+
+ updateRecord := cloudflare.DNSRecord{
+ ID: d.Id(),
+ Type: d.Get("type").(string),
+ Name: d.Get("name").(string),
+ Content: d.Get("value").(string),
+ ZoneName: d.Get("domain").(string),
+ Proxied: false,
+ }
+
+ if priority, ok := d.GetOk("priority"); ok {
+ updateRecord.Priority = priority.(int)
+ }
- // CloudFlare requires we send all values for an update request
- updateRecord := &cloudflare.UpdateRecord{
- Name: d.Get("name").(string),
- Type: d.Get("type").(string),
- Content: d.Get("value").(string),
+ if proxied, ok := d.GetOk("proxied"); ok {
+ updateRecord.Proxied = proxied.(bool)
}
if ttl, ok := d.GetOk("ttl"); ok {
- updateRecord.Ttl = ttl.(string)
+ updateRecord.TTL = ttl.(int)
}
- if priority, ok := d.GetOk("priority"); ok {
- updateRecord.Priority = priority.(string)
+ zoneId, err := client.ZoneIDByName(updateRecord.ZoneName)
+ if err != nil {
+ return fmt.Errorf("Error finding zone %q: %s", updateRecord.ZoneName, err)
}
- log.Printf("[DEBUG] CloudFlare Record update configuration: %#v", updateRecord)
+ updateRecord.ZoneID = zoneId
- err := client.UpdateRecord(d.Get("domain").(string), d.Id(), updateRecord)
+ log.Printf("[DEBUG] CloudFlare Record update configuration: %#v", updateRecord)
+ err = client.UpdateDNSRecord(zoneId, d.Id(), updateRecord)
if err != nil {
return fmt.Errorf("Failed to update CloudFlare Record: %s", err)
}
@@ -143,12 +177,17 @@ func resourceCloudFlareRecordUpdate(d *schema.ResourceData, meta interface{}) er
}
func resourceCloudFlareRecordDelete(d *schema.ResourceData, meta interface{}) error {
- client := meta.(*cloudflare.Client)
+ client := meta.(*cloudflare.API)
+ domain := d.Get("domain").(string)
- log.Printf("[INFO] Deleting CloudFlare Record: %s, %s", d.Get("domain").(string), d.Id())
+ zoneId, err := client.ZoneIDByName(domain)
+ if err != nil {
+ return fmt.Errorf("Error finding zone %q: %s", domain, err)
+ }
- err := client.DestroyRecord(d.Get("domain").(string), d.Id())
+ log.Printf("[INFO] Deleting CloudFlare Record: %s, %s", domain, d.Id())
+ err = client.DeleteDNSRecord(zoneId, d.Id())
if err != nil {
return fmt.Errorf("Error deleting CloudFlare Record: %s", err)
}
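
Every CRUD function in the rewritten CloudFlare record resource now resolves the zone ID from the domain name first and then operates on records inside that zone. A standalone sketch of that lookup-then-act flow is below, with a fake client so it runs without the cloudflare-go dependency; the `dnsAPI` interface only mirrors the two calls visible in the hunk and is not the library's API.

```go
package main

import (
	"errors"
	"fmt"
)

// dnsAPI captures the two calls the resource now makes on every operation:
// resolve the zone ID from the domain, then act on records inside that zone.
type dnsAPI interface {
	ZoneIDByName(domain string) (string, error)
	DeleteDNSRecord(zoneID, recordID string) error
}

// deleteRecord sketches the flow of resourceCloudFlareRecordDelete above.
func deleteRecord(api dnsAPI, domain, recordID string) error {
	zoneID, err := api.ZoneIDByName(domain)
	if err != nil {
		return fmt.Errorf("Error finding zone %q: %s", domain, err)
	}
	return api.DeleteDNSRecord(zoneID, recordID)
}

// fakeAPI is a stand-in used only to make the sketch executable.
type fakeAPI struct{ zones map[string]string }

func (f fakeAPI) ZoneIDByName(domain string) (string, error) {
	if id, ok := f.zones[domain]; ok {
		return id, nil
	}
	return "", errors.New("zone not found")
}

func (f fakeAPI) DeleteDNSRecord(zoneID, recordID string) error {
	fmt.Printf("deleted record %s in zone %s\n", recordID, zoneID)
	return nil
}

func main() {
	api := fakeAPI{zones: map[string]string{"example.com": "zone-123"}}
	if err := deleteRecord(api, "example.com", "record-456"); err != nil {
		fmt.Println(err)
	}
}
```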
diff --git a/builtin/providers/cloudflare/resource_cloudflare_record_test.go b/builtin/providers/cloudflare/resource_cloudflare_record_test.go
index 6c3a13c63966..8044eef064cd 100644
--- a/builtin/providers/cloudflare/resource_cloudflare_record_test.go
+++ b/builtin/providers/cloudflare/resource_cloudflare_record_test.go
@@ -7,23 +7,25 @@ import (
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
- "github.com/pearkes/cloudflare"
+
+ // NOTE: Temporary until they merge my PR:
+ "github.com/mitchellh/cloudflare-go"
)
-func TestAccCLOudflareRecord_Basic(t *testing.T) {
- var record cloudflare.Record
+func TestAccCloudFlareRecord_Basic(t *testing.T) {
+ var record cloudflare.DNSRecord
domain := os.Getenv("CLOUDFLARE_DOMAIN")
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
- CheckDestroy: testAccCheckCLOudflareRecordDestroy,
+ CheckDestroy: testAccCheckCloudFlareRecordDestroy,
Steps: []resource.TestStep{
resource.TestStep{
- Config: fmt.Sprintf(testAccCheckCLoudFlareRecordConfig_basic, domain),
+ Config: fmt.Sprintf(testAccCheckCloudFlareRecordConfigBasic, domain),
Check: resource.ComposeTestCheckFunc(
- testAccCheckCLOudflareRecordExists("cloudflare_record.foobar", &record),
- testAccCheckCLOudflareRecordAttributes(&record),
+ testAccCheckCloudFlareRecordExists("cloudflare_record.foobar", &record),
+ testAccCheckCloudFlareRecordAttributes(&record),
resource.TestCheckResourceAttr(
"cloudflare_record.foobar", "name", "terraform"),
resource.TestCheckResourceAttr(
@@ -36,20 +38,75 @@ func TestAccCLOudflareRecord_Basic(t *testing.T) {
})
}
-func TestAccCLOudflareRecord_Updated(t *testing.T) {
- var record cloudflare.Record
+func TestAccCloudFlareRecord_Apex(t *testing.T) {
+ var record cloudflare.DNSRecord
+ domain := os.Getenv("CLOUDFLARE_DOMAIN")
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckCloudFlareRecordDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: fmt.Sprintf(testAccCheckCloudFlareRecordConfigApex, domain),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckCloudFlareRecordExists("cloudflare_record.foobar", &record),
+ testAccCheckCloudFlareRecordAttributes(&record),
+ resource.TestCheckResourceAttr(
+ "cloudflare_record.foobar", "name", "@"),
+ resource.TestCheckResourceAttr(
+ "cloudflare_record.foobar", "domain", domain),
+ resource.TestCheckResourceAttr(
+ "cloudflare_record.foobar", "value", "192.168.0.10"),
+ ),
+ },
+ },
+ })
+}
+
+func TestAccCloudFlareRecord_Proxied(t *testing.T) {
+ var record cloudflare.DNSRecord
+ domain := os.Getenv("CLOUDFLARE_DOMAIN")
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckCloudFlareRecordDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: fmt.Sprintf(testAccCheckCloudFlareRecordConfigProxied, domain, domain),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckCloudFlareRecordExists("cloudflare_record.foobar", &record),
+ resource.TestCheckResourceAttr(
+ "cloudflare_record.foobar", "domain", domain),
+ resource.TestCheckResourceAttr(
+ "cloudflare_record.foobar", "name", "terraform"),
+ resource.TestCheckResourceAttr(
+ "cloudflare_record.foobar", "proxied", "true"),
+ resource.TestCheckResourceAttr(
+ "cloudflare_record.foobar", "type", "CNAME"),
+ resource.TestCheckResourceAttr(
+ "cloudflare_record.foobar", "value", domain),
+ ),
+ },
+ },
+ })
+}
+
+func TestAccCloudFlareRecord_Updated(t *testing.T) {
+ var record cloudflare.DNSRecord
domain := os.Getenv("CLOUDFLARE_DOMAIN")
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
- CheckDestroy: testAccCheckCLOudflareRecordDestroy,
+ CheckDestroy: testAccCheckCloudFlareRecordDestroy,
Steps: []resource.TestStep{
resource.TestStep{
- Config: fmt.Sprintf(testAccCheckCLoudFlareRecordConfig_basic, domain),
+ Config: fmt.Sprintf(testAccCheckCloudFlareRecordConfigBasic, domain),
Check: resource.ComposeTestCheckFunc(
- testAccCheckCLOudflareRecordExists("cloudflare_record.foobar", &record),
- testAccCheckCLOudflareRecordAttributes(&record),
+ testAccCheckCloudFlareRecordExists("cloudflare_record.foobar", &record),
+ testAccCheckCloudFlareRecordAttributes(&record),
resource.TestCheckResourceAttr(
"cloudflare_record.foobar", "name", "terraform"),
resource.TestCheckResourceAttr(
@@ -59,10 +116,10 @@ func TestAccCLOudflareRecord_Updated(t *testing.T) {
),
},
resource.TestStep{
- Config: fmt.Sprintf(testAccCheckCloudFlareRecordConfig_new_value, domain),
+ Config: fmt.Sprintf(testAccCheckCloudFlareRecordConfigNewValue, domain),
Check: resource.ComposeTestCheckFunc(
- testAccCheckCLOudflareRecordExists("cloudflare_record.foobar", &record),
- testAccCheckCLOudflareRecordAttributesUpdated(&record),
+ testAccCheckCloudFlareRecordExists("cloudflare_record.foobar", &record),
+ testAccCheckCloudFlareRecordAttributesUpdated(&record),
resource.TestCheckResourceAttr(
"cloudflare_record.foobar", "name", "terraform"),
resource.TestCheckResourceAttr(
@@ -75,25 +132,25 @@ func TestAccCLOudflareRecord_Updated(t *testing.T) {
})
}
-func TestAccCLOudflareRecord_forceNewRecord(t *testing.T) {
- var afterCreate, afterUpdate cloudflare.Record
+func TestAccCloudFlareRecord_forceNewRecord(t *testing.T) {
+ var afterCreate, afterUpdate cloudflare.DNSRecord
domain := os.Getenv("CLOUDFLARE_DOMAIN")
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
- CheckDestroy: testAccCheckCLOudflareRecordDestroy,
+ CheckDestroy: testAccCheckCloudFlareRecordDestroy,
Steps: []resource.TestStep{
resource.TestStep{
- Config: fmt.Sprintf(testAccCheckCLoudFlareRecordConfig_basic, domain),
+ Config: fmt.Sprintf(testAccCheckCloudFlareRecordConfigBasic, domain),
Check: resource.ComposeTestCheckFunc(
- testAccCheckCLOudflareRecordExists("cloudflare_record.foobar", &afterCreate),
+ testAccCheckCloudFlareRecordExists("cloudflare_record.foobar", &afterCreate),
),
},
resource.TestStep{
- Config: fmt.Sprintf(testAccCheckCloudFlareRecordConfig_forceNew, domain, domain),
+ Config: fmt.Sprintf(testAccCheckCloudFlareRecordConfigForceNew, domain, domain),
Check: resource.ComposeTestCheckFunc(
- testAccCheckCLOudflareRecordExists("cloudflare_record.foobar", &afterUpdate),
+ testAccCheckCloudFlareRecordExists("cloudflare_record.foobar", &afterUpdate),
testAccCheckCloudFlareRecordRecreated(t, &afterCreate, &afterUpdate),
),
},
@@ -102,25 +159,24 @@ func TestAccCLOudflareRecord_forceNewRecord(t *testing.T) {
}
func testAccCheckCloudFlareRecordRecreated(t *testing.T,
- before, after *cloudflare.Record) resource.TestCheckFunc {
+ before, after *cloudflare.DNSRecord) resource.TestCheckFunc {
return func(s *terraform.State) error {
- if before.Id == after.Id {
- t.Fatalf("Expected change of Record Ids, but both were %v", before.Id)
+ if before.ID == after.ID {
+ t.Fatalf("Expected change of Record Ids, but both were %v", before.ID)
}
return nil
}
}
-func testAccCheckCLOudflareRecordDestroy(s *terraform.State) error {
- client := testAccProvider.Meta().(*cloudflare.Client)
+func testAccCheckCloudFlareRecordDestroy(s *terraform.State) error {
+ client := testAccProvider.Meta().(*cloudflare.API)
for _, rs := range s.RootModule().Resources {
if rs.Type != "cloudflare_record" {
continue
}
- _, err := client.RetrieveRecord(rs.Primary.Attributes["domain"], rs.Primary.ID)
-
+ _, err := client.DNSRecord(rs.Primary.Attributes["zone_id"], rs.Primary.ID)
if err == nil {
return fmt.Errorf("Record still exists")
}
@@ -129,32 +185,31 @@ func testAccCheckCLOudflareRecordDestroy(s *terraform.State) error {
return nil
}
-func testAccCheckCLOudflareRecordAttributes(record *cloudflare.Record) resource.TestCheckFunc {
+func testAccCheckCloudFlareRecordAttributes(record *cloudflare.DNSRecord) resource.TestCheckFunc {
return func(s *terraform.State) error {
- if record.Value != "192.168.0.10" {
- return fmt.Errorf("Bad value: %s", record.Value)
+ if record.Content != "192.168.0.10" {
+ return fmt.Errorf("Bad content: %s", record.Content)
}
return nil
}
}
-func testAccCheckCLOudflareRecordAttributesUpdated(record *cloudflare.Record) resource.TestCheckFunc {
+func testAccCheckCloudFlareRecordAttributesUpdated(record *cloudflare.DNSRecord) resource.TestCheckFunc {
return func(s *terraform.State) error {
- if record.Value != "192.168.0.11" {
- return fmt.Errorf("Bad value: %s", record.Value)
+ if record.Content != "192.168.0.11" {
+ return fmt.Errorf("Bad content: %s", record.Content)
}
return nil
}
}
-func testAccCheckCLOudflareRecordExists(n string, record *cloudflare.Record) resource.TestCheckFunc {
+func testAccCheckCloudFlareRecordExists(n string, record *cloudflare.DNSRecord) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
-
if !ok {
return fmt.Errorf("Not found: %s", n)
}
@@ -163,25 +218,23 @@ func testAccCheckCLOudflareRecordExists(n string, record *cloudflare.Record) res
return fmt.Errorf("No Record ID is set")
}
- client := testAccProvider.Meta().(*cloudflare.Client)
-
- foundRecord, err := client.RetrieveRecord(rs.Primary.Attributes["domain"], rs.Primary.ID)
-
+ client := testAccProvider.Meta().(*cloudflare.API)
+ foundRecord, err := client.DNSRecord(rs.Primary.Attributes["zone_id"], rs.Primary.ID)
if err != nil {
return err
}
- if foundRecord.Id != rs.Primary.ID {
+ if foundRecord.ID != rs.Primary.ID {
return fmt.Errorf("Record not found")
}
- *record = *foundRecord
+ *record = foundRecord
return nil
}
}
-const testAccCheckCLoudFlareRecordConfig_basic = `
+const testAccCheckCloudFlareRecordConfigBasic = `
resource "cloudflare_record" "foobar" {
domain = "%s"
@@ -191,7 +244,26 @@ resource "cloudflare_record" "foobar" {
ttl = 3600
}`
-const testAccCheckCloudFlareRecordConfig_new_value = `
+const testAccCheckCloudFlareRecordConfigApex = `
+resource "cloudflare_record" "foobar" {
+ domain = "%s"
+ name = "@"
+ value = "192.168.0.10"
+ type = "A"
+ ttl = 3600
+}`
+
+const testAccCheckCloudFlareRecordConfigProxied = `
+resource "cloudflare_record" "foobar" {
+ domain = "%s"
+
+ name = "terraform"
+ value = "%s"
+ type = "CNAME"
+ proxied = true
+}`
+
+const testAccCheckCloudFlareRecordConfigNewValue = `
resource "cloudflare_record" "foobar" {
domain = "%s"
@@ -201,7 +273,7 @@ resource "cloudflare_record" "foobar" {
ttl = 3600
}`
-const testAccCheckCloudFlareRecordConfig_forceNew = `
+const testAccCheckCloudFlareRecordConfigForceNew = `
resource "cloudflare_record" "foobar" {
domain = "%s"
diff --git a/builtin/providers/cloudstack/resource_cloudstack_disk.go b/builtin/providers/cloudstack/resource_cloudstack_disk.go
index 63a788f66237..fd4807ae5aeb 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_disk.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_disk.go
@@ -78,7 +78,8 @@ func resourceCloudStackDiskCreate(d *schema.ResourceData, meta interface{}) erro
name := d.Get("name").(string)
// Create a new parameter struct
- p := cs.Volume.NewCreateVolumeParams(name)
+ p := cs.Volume.NewCreateVolumeParams()
+ p.SetName(name)
// Retrieve the disk_offering ID
diskofferingid, e := retrieveID(cs, "disk_offering", d.Get("disk_offering").(string))
@@ -94,14 +95,8 @@ func resourceCloudStackDiskCreate(d *schema.ResourceData, meta interface{}) erro
}
// If there is a project supplied, we retrieve and set the project id
- if project, ok := d.GetOk("project"); ok {
- // Retrieve the project ID
- projectid, e := retrieveID(cs, "project", project.(string))
- if e != nil {
- return e.Error()
- }
- // Set the default project ID
- p.SetProjectid(projectid)
+ if err := setProjectid(p, cs, d); err != nil {
+ return err
}
// Retrieve the zone ID
@@ -146,7 +141,10 @@ func resourceCloudStackDiskRead(d *schema.ResourceData, meta interface{}) error
cs := meta.(*cloudstack.CloudStackClient)
// Get the volume details
- v, count, err := cs.Volume.GetVolumeByID(d.Id())
+ v, count, err := cs.Volume.GetVolumeByID(
+ d.Id(),
+ cloudstack.WithProject(d.Get("project").(string)),
+ )
if err != nil {
if count == 0 {
d.SetId("")
@@ -157,7 +155,7 @@ func resourceCloudStackDiskRead(d *schema.ResourceData, meta interface{}) error
}
d.Set("name", v.Name)
- d.Set("attach", v.Attached != "") // If attached this will contain a timestamp when attached
+ d.Set("attach", v.Attached != "") // If attached this contains a timestamp when attached
d.Set("size", int(v.Size/(1024*1024*1024))) // Needed to get GB's again
setValueOrID(d, "disk_offering", v.Diskofferingname, v.Diskofferingid)
@@ -166,7 +164,10 @@ func resourceCloudStackDiskRead(d *schema.ResourceData, meta interface{}) error
if v.Attached != "" {
// Get the virtual machine details
- vm, _, err := cs.VirtualMachine.GetVirtualMachineByID(v.Virtualmachineid)
+ vm, _, err := cs.VirtualMachine.GetVirtualMachineByID(
+ v.Virtualmachineid,
+ cloudstack.WithProject(d.Get("project").(string)),
+ )
if err != nil {
return err
}
@@ -295,12 +296,17 @@ func resourceCloudStackDiskAttach(d *schema.ResourceData, meta interface{}) erro
cs := meta.(*cloudstack.CloudStackClient)
// First check if the disk isn't already attached
- if attached, err := isAttached(cs, d.Id()); err != nil || attached {
+ if attached, err := isAttached(d, meta); err != nil || attached {
return err
}
// Retrieve the virtual_machine ID
- virtualmachineid, e := retrieveID(cs, "virtual_machine", d.Get("virtual_machine").(string))
+ virtualmachineid, e := retrieveID(
+ cs,
+ "virtual_machine",
+ d.Get("virtual_machine").(string),
+ cloudstack.WithProject(d.Get("project").(string)),
+ )
if e != nil {
return e.Error()
}
@@ -334,7 +340,7 @@ func resourceCloudStackDiskDetach(d *schema.ResourceData, meta interface{}) erro
cs := meta.(*cloudstack.CloudStackClient)
// Check if the volume is actually attached, before detaching
- if attached, err := isAttached(cs, d.Id()); err != nil || !attached {
+ if attached, err := isAttached(d, meta); err != nil || !attached {
return err
}
@@ -347,7 +353,12 @@ func resourceCloudStackDiskDetach(d *schema.ResourceData, meta interface{}) erro
// Detach the currently attached volume
if _, err := cs.Volume.DetachVolume(p); err != nil {
// Retrieve the virtual_machine ID
- virtualmachineid, e := retrieveID(cs, "virtual_machine", d.Get("virtual_machine").(string))
+ virtualmachineid, e := retrieveID(
+ cs,
+ "virtual_machine",
+ d.Get("virtual_machine").(string),
+ cloudstack.WithProject(d.Get("project").(string)),
+ )
if e != nil {
return e.Error()
}
@@ -377,9 +388,14 @@ func resourceCloudStackDiskDetach(d *schema.ResourceData, meta interface{}) erro
return nil
}
-func isAttached(cs *cloudstack.CloudStackClient, id string) (bool, error) {
+func isAttached(d *schema.ResourceData, meta interface{}) (bool, error) {
+ cs := meta.(*cloudstack.CloudStackClient)
+
// Get the volume details
- v, _, err := cs.Volume.GetVolumeByID(id)
+ v, _, err := cs.Volume.GetVolumeByID(
+ d.Id(),
+ cloudstack.WithProject(d.Get("project").(string)),
+ )
if err != nil {
return false, err
}
diff --git a/builtin/providers/cloudstack/resource_cloudstack_disk_test.go b/builtin/providers/cloudstack/resource_cloudstack_disk_test.go
index 5eee8ed8dd4c..e22c649f8abc 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_disk_test.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_disk_test.go
@@ -175,7 +175,7 @@ resource "cloudstack_instance" "foobar" {
name = "terraform-test"
display_name = "terraform"
service_offering= "%s"
- network = "%s"
+ network_id = "%s"
template = "%s"
zone = "%s"
expunge = true
@@ -200,7 +200,7 @@ resource "cloudstack_instance" "foobar" {
name = "terraform-test"
display_name = "terraform"
service_offering= "%s"
- network = "%s"
+ network_id = "%s"
template = "%s"
zone = "%s"
expunge = true
@@ -224,7 +224,7 @@ resource "cloudstack_instance" "foobar" {
name = "terraform-test"
display_name = "terraform"
service_offering= "%s"
- network = "%s"
+ network_id = "%s"
template = "%s"
zone = "%s"
expunge = true
diff --git a/builtin/providers/cloudstack/resource_cloudstack_egress_firewall.go b/builtin/providers/cloudstack/resource_cloudstack_egress_firewall.go
index 0ff330ef40a2..3744cf8fdbf3 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_egress_firewall.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_egress_firewall.go
@@ -1,6 +1,7 @@
package cloudstack
import (
+ "errors"
"fmt"
"strconv"
"strings"
@@ -20,10 +21,19 @@ func resourceCloudStackEgressFirewall() *schema.Resource {
Delete: resourceCloudStackEgressFirewallDelete,
Schema: map[string]*schema.Schema{
+ "network_id": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ ConflictsWith: []string{"network"},
+ },
+
"network": &schema.Schema{
- Type: schema.TypeString,
- Required: true,
- ForceNew: true,
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Deprecated: "Please use the `network_id` field instead",
+ ConflictsWith: []string{"network_id"},
},
"managed": &schema.Schema{
@@ -99,8 +109,16 @@ func resourceCloudStackEgressFirewallCreate(d *schema.ResourceData, meta interfa
return err
}
+ network, ok := d.GetOk("network_id")
+ if !ok {
+ network, ok = d.GetOk("network")
+ }
+ if !ok {
+ return errors.New("Either `network_id` or [deprecated] `network` must be provided.")
+ }
+
// Retrieve the network ID
- networkid, e := retrieveID(cs, "network", d.Get("network").(string))
+ networkid, e := retrieveID(cs, "network", network.(string))
if e != nil {
return e.Error()
}
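
The egress firewall resource keeps the deprecated `network` attribute working by reading the new `network_id` first, falling back to the old name, and failing only if neither is set; the same fallback recurs in the instance, ipaddress, and loadbalancer rule resources later in this patch. A minimal sketch of that fallback, using a map-backed stand-in for `*schema.ResourceData` (the `getter` and `mapData` types below are hypothetical, for illustration only):

```go
package main

import (
	"errors"
	"fmt"
)

// getter mimics schema.ResourceData.GetOk: return the value and whether it was set.
type getter interface {
	GetOk(key string) (interface{}, bool)
}

// resolveNetwork applies the fallback used above: prefer the new `network_id`
// field, fall back to the deprecated `network` field, and fail if neither is set.
func resolveNetwork(d getter) (string, error) {
	v, ok := d.GetOk("network_id")
	if !ok {
		v, ok = d.GetOk("network")
	}
	if !ok {
		return "", errors.New("Either `network_id` or [deprecated] `network` must be provided.")
	}
	return v.(string), nil
}

// mapData is a tiny stand-in for *schema.ResourceData, used only to run the sketch.
type mapData map[string]interface{}

func (m mapData) GetOk(key string) (interface{}, bool) {
	v, ok := m[key]
	return v, ok && v != ""
}

func main() {
	fmt.Println(resolveNetwork(mapData{"network": "legacy-net"}))
	fmt.Println(resolveNetwork(mapData{"network_id": "net-0123"}))
	fmt.Println(resolveNetwork(mapData{}))
}
```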
diff --git a/builtin/providers/cloudstack/resource_cloudstack_egress_firewall_test.go b/builtin/providers/cloudstack/resource_cloudstack_egress_firewall_test.go
index 07f4e0d8a247..cc640ac951f5 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_egress_firewall_test.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_egress_firewall_test.go
@@ -21,7 +21,7 @@ func TestAccCloudStackEgressFirewall_basic(t *testing.T) {
Check: resource.ComposeTestCheckFunc(
testAccCheckCloudStackEgressFirewallRulesExist("cloudstack_egress_firewall.foo"),
resource.TestCheckResourceAttr(
- "cloudstack_egress_firewall.foo", "network", CLOUDSTACK_NETWORK_1),
+ "cloudstack_egress_firewall.foo", "network_id", CLOUDSTACK_NETWORK_1),
resource.TestCheckResourceAttr(
"cloudstack_egress_firewall.foo", "rule.#", "2"),
resource.TestCheckResourceAttr(
@@ -59,7 +59,7 @@ func TestAccCloudStackEgressFirewall_update(t *testing.T) {
Check: resource.ComposeTestCheckFunc(
testAccCheckCloudStackEgressFirewallRulesExist("cloudstack_egress_firewall.foo"),
resource.TestCheckResourceAttr(
- "cloudstack_egress_firewall.foo", "network", CLOUDSTACK_NETWORK_1),
+ "cloudstack_egress_firewall.foo", "network_id", CLOUDSTACK_NETWORK_1),
resource.TestCheckResourceAttr(
"cloudstack_egress_firewall.foo", "rule.#", "2"),
resource.TestCheckResourceAttr(
@@ -88,7 +88,7 @@ func TestAccCloudStackEgressFirewall_update(t *testing.T) {
Check: resource.ComposeTestCheckFunc(
testAccCheckCloudStackEgressFirewallRulesExist("cloudstack_egress_firewall.foo"),
resource.TestCheckResourceAttr(
- "cloudstack_egress_firewall.foo", "network", CLOUDSTACK_NETWORK_1),
+ "cloudstack_egress_firewall.foo", "network_id", CLOUDSTACK_NETWORK_1),
resource.TestCheckResourceAttr(
"cloudstack_egress_firewall.foo", "rule.#", "3"),
resource.TestCheckResourceAttr(
@@ -188,7 +188,7 @@ func testAccCheckCloudStackEgressFirewallDestroy(s *terraform.State) error {
var testAccCloudStackEgressFirewall_basic = fmt.Sprintf(`
resource "cloudstack_egress_firewall" "foo" {
- network = "%s"
+ network_id = "%s"
rule {
cidr_list = ["%s/32"]
@@ -208,7 +208,7 @@ resource "cloudstack_egress_firewall" "foo" {
var testAccCloudStackEgressFirewall_update = fmt.Sprintf(`
resource "cloudstack_egress_firewall" "foo" {
- network = "%s"
+ network_id = "%s"
rule {
cidr_list = ["%s/32", "%s/32"]
diff --git a/builtin/providers/cloudstack/resource_cloudstack_firewall.go b/builtin/providers/cloudstack/resource_cloudstack_firewall.go
index f10f5a6384b6..3b8ebe13c118 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_firewall.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_firewall.go
@@ -21,7 +21,7 @@ func resourceCloudStackFirewall() *schema.Resource {
Delete: resourceCloudStackFirewallDelete,
Schema: map[string]*schema.Schema{
- "ip_address": &schema.Schema{
+ "ip_address_id": &schema.Schema{
Type: schema.TypeString,
Optional: true,
ForceNew: true,
@@ -32,8 +32,8 @@ func resourceCloudStackFirewall() *schema.Resource {
Type: schema.TypeString,
Optional: true,
ForceNew: true,
- Deprecated: "Please use the `ip_address` field instead",
- ConflictsWith: []string{"ip_address"},
+ Deprecated: "Please use the `ip_address_id` field instead",
+ ConflictsWith: []string{"ip_address_id"},
},
"managed": &schema.Schema{
@@ -109,12 +109,12 @@ func resourceCloudStackFirewallCreate(d *schema.ResourceData, meta interface{})
return err
}
- ipaddress, ok := d.GetOk("ip_address")
+ ipaddress, ok := d.GetOk("ip_address_id")
if !ok {
ipaddress, ok = d.GetOk("ipaddress")
}
if !ok {
- return errors.New("Either `ip_address` or [deprecated] `ipaddress` must be provided.")
+ return errors.New("Either `ip_address_id` or [deprecated] `ipaddress` must be provided.")
}
// Retrieve the ipaddress ID
diff --git a/builtin/providers/cloudstack/resource_cloudstack_firewall_test.go b/builtin/providers/cloudstack/resource_cloudstack_firewall_test.go
index f7fda8110bbf..1b4f48959b71 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_firewall_test.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_firewall_test.go
@@ -21,7 +21,7 @@ func TestAccCloudStackFirewall_basic(t *testing.T) {
Check: resource.ComposeTestCheckFunc(
testAccCheckCloudStackFirewallRulesExist("cloudstack_firewall.foo"),
resource.TestCheckResourceAttr(
- "cloudstack_firewall.foo", "ip_address", CLOUDSTACK_PUBLIC_IPADDRESS),
+ "cloudstack_firewall.foo", "ip_address_id", CLOUDSTACK_PUBLIC_IPADDRESS),
resource.TestCheckResourceAttr(
"cloudstack_firewall.foo", "rule.#", "2"),
resource.TestCheckResourceAttr(
@@ -55,7 +55,7 @@ func TestAccCloudStackFirewall_update(t *testing.T) {
Check: resource.ComposeTestCheckFunc(
testAccCheckCloudStackFirewallRulesExist("cloudstack_firewall.foo"),
resource.TestCheckResourceAttr(
- "cloudstack_firewall.foo", "ip_address", CLOUDSTACK_PUBLIC_IPADDRESS),
+ "cloudstack_firewall.foo", "ip_address_id", CLOUDSTACK_PUBLIC_IPADDRESS),
resource.TestCheckResourceAttr(
"cloudstack_firewall.foo", "rule.#", "2"),
resource.TestCheckResourceAttr(
@@ -80,7 +80,7 @@ func TestAccCloudStackFirewall_update(t *testing.T) {
Check: resource.ComposeTestCheckFunc(
testAccCheckCloudStackFirewallRulesExist("cloudstack_firewall.foo"),
resource.TestCheckResourceAttr(
- "cloudstack_firewall.foo", "ip_address", CLOUDSTACK_PUBLIC_IPADDRESS),
+ "cloudstack_firewall.foo", "ip_address_id", CLOUDSTACK_PUBLIC_IPADDRESS),
resource.TestCheckResourceAttr(
"cloudstack_firewall.foo", "rule.#", "3"),
resource.TestCheckResourceAttr(
@@ -174,7 +174,7 @@ func testAccCheckCloudStackFirewallDestroy(s *terraform.State) error {
var testAccCloudStackFirewall_basic = fmt.Sprintf(`
resource "cloudstack_firewall" "foo" {
- ip_address = "%s"
+ ip_address_id = "%s"
rule {
cidr_list = ["10.0.0.0/24"]
@@ -191,7 +191,7 @@ resource "cloudstack_firewall" "foo" {
var testAccCloudStackFirewall_update = fmt.Sprintf(`
resource "cloudstack_firewall" "foo" {
- ip_address = "%s"
+ ip_address_id = "%s"
rule {
cidr_list = ["10.0.0.0/24", "10.0.1.0/24"]
diff --git a/builtin/providers/cloudstack/resource_cloudstack_instance.go b/builtin/providers/cloudstack/resource_cloudstack_instance.go
index 6408faaa0f09..9c787be84d10 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_instance.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_instance.go
@@ -4,6 +4,7 @@ import (
"crypto/sha1"
"encoding/base64"
"encoding/hex"
+ "errors"
"fmt"
"log"
"strings"
@@ -37,12 +38,20 @@ func resourceCloudStackInstance() *schema.Resource {
Required: true,
},
- "network": &schema.Schema{
+ "network_id": &schema.Schema{
Type: schema.TypeString,
Optional: true,
+ Computed: true,
ForceNew: true,
},
+ "network": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Deprecated: "Please use the `network_id` field instead",
+ },
+
"ip_address": &schema.Schema{
Type: schema.TypeString,
Optional: true,
@@ -101,6 +110,12 @@ func resourceCloudStackInstance() *schema.Resource {
Optional: true,
Default: false,
},
+
+ "group": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ Computed: true,
+ },
},
}
}
@@ -149,11 +164,26 @@ func resourceCloudStackInstanceCreate(d *schema.ResourceData, meta interface{})
}
if zone.Networktype == "Advanced" {
+ network, ok := d.GetOk("network_id")
+ if !ok {
+ network, ok = d.GetOk("network")
+ }
+ if !ok {
+ return errors.New(
+ "Either `network_id` or [deprecated] `network` must be provided when using a zone with network type `advanced`.")
+ }
+
// Retrieve the network ID
- networkid, e := retrieveID(cs, "network", d.Get("network").(string))
+ networkid, e := retrieveID(
+ cs,
+ "network",
+ network.(string),
+ cloudstack.WithProject(d.Get("project").(string)),
+ )
if e != nil {
return e.Error()
}
+
// Set the default network ID
p.SetNetworkids([]string{networkid})
}
@@ -168,14 +198,8 @@ func resourceCloudStackInstanceCreate(d *schema.ResourceData, meta interface{})
}
// If there is a project supplied, we retrieve and set the project id
- if project, ok := d.GetOk("project"); ok {
- // Retrieve the project ID
- projectid, e := retrieveID(cs, "project", project.(string))
- if e != nil {
- return e.Error()
- }
- // Set the default project ID
- p.SetProjectid(projectid)
+ if err := setProjectid(p, cs, d); err != nil {
+ return err
}
// If a keypair is supplied, add it to the parameter struct
@@ -205,6 +229,11 @@ func resourceCloudStackInstanceCreate(d *schema.ResourceData, meta interface{})
p.SetUserdata(ud)
}
+ // If there is a group supplied, add it to the parameter struct
+ if group, ok := d.GetOk("group"); ok {
+ p.SetGroup(group.(string))
+ }
+
// Create the new instance
r, err := cs.VirtualMachine.DeployVirtualMachine(p)
if err != nil {
@@ -226,7 +255,10 @@ func resourceCloudStackInstanceRead(d *schema.ResourceData, meta interface{}) er
cs := meta.(*cloudstack.CloudStackClient)
// Get the virtual machine details
- vm, count, err := cs.VirtualMachine.GetVirtualMachineByID(d.Id())
+ vm, count, err := cs.VirtualMachine.GetVirtualMachineByID(
+ d.Id(),
+ cloudstack.WithProject(d.Get("project").(string)),
+ )
if err != nil {
if count == 0 {
log.Printf("[DEBUG] Instance %s does no longer exist", d.Get("name").(string))
@@ -240,10 +272,10 @@ func resourceCloudStackInstanceRead(d *schema.ResourceData, meta interface{}) er
// Update the config
d.Set("name", vm.Name)
d.Set("display_name", vm.Displayname)
+ d.Set("network_id", vm.Nic[0].Networkid)
d.Set("ip_address", vm.Nic[0].Ipaddress)
- //NB cloudstack sometimes sends back the wrong keypair name, so dont update it
+ d.Set("group", vm.Group)
- setValueOrID(d, "network", vm.Nic[0].Networkname, vm.Nic[0].Networkid)
setValueOrID(d, "service_offering", vm.Serviceofferingname, vm.Serviceofferingid)
setValueOrID(d, "template", vm.Templatename, vm.Templateid)
setValueOrID(d, "project", vm.Project, vm.Projectid)
@@ -278,6 +310,26 @@ func resourceCloudStackInstanceUpdate(d *schema.ResourceData, meta interface{})
d.SetPartial("display_name")
}
+ // Check if the group is changed and if so, update the virtual machine
+ if d.HasChange("group") {
+ log.Printf("[DEBUG] Group changed for %s, starting update", name)
+
+ // Create a new parameter struct
+ p := cs.VirtualMachine.NewUpdateVirtualMachineParams(d.Id())
+
+ // Set the new group
+ p.SetGroup(d.Get("group").(string))
+
+ // Update the virtual machine group
+ _, err := cs.VirtualMachine.UpdateVirtualMachine(p)
+ if err != nil {
+ return fmt.Errorf(
+ "Error updating the group for instance %s: %s", name, err)
+ }
+
+ d.SetPartial("group")
+ }
+
// Attributes that require reboot to update
if d.HasChange("name") || d.HasChange("service_offering") || d.HasChange("keypair") {
// Before we can actually make these changes, the virtual machine must be stopped
diff --git a/builtin/providers/cloudstack/resource_cloudstack_instance_test.go b/builtin/providers/cloudstack/resource_cloudstack_instance_test.go
index f6416b8cf211..2d9743d30d9e 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_instance_test.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_instance_test.go
@@ -180,8 +180,8 @@ func testAccCheckCloudStackInstanceAttributes(
return fmt.Errorf("Bad template: %s", instance.Templatename)
}
- if instance.Nic[0].Networkname != CLOUDSTACK_NETWORK_1 {
- return fmt.Errorf("Bad network: %s", instance.Nic[0].Networkname)
+ if instance.Nic[0].Networkid != CLOUDSTACK_NETWORK_1 {
+ return fmt.Errorf("Bad network ID: %s", instance.Nic[0].Networkid)
}
return nil
@@ -234,7 +234,7 @@ resource "cloudstack_instance" "foobar" {
name = "terraform-test"
display_name = "terraform-test"
service_offering= "%s"
- network = "%s"
+ network_id = "%s"
template = "%s"
zone = "%s"
user_data = "foobar\nfoo\nbar"
@@ -250,7 +250,7 @@ resource "cloudstack_instance" "foobar" {
name = "terraform-updated"
display_name = "terraform-updated"
service_offering= "%s"
- network = "%s"
+ network_id = "%s"
template = "%s"
zone = "%s"
user_data = "foobar\nfoo\nbar"
@@ -266,7 +266,7 @@ resource "cloudstack_instance" "foobar" {
name = "terraform-test"
display_name = "terraform-test"
service_offering= "%s"
- network = "%s"
+ network_id = "%s"
ip_address = "%s"
template = "%s"
zone = "%s"
@@ -287,7 +287,7 @@ resource "cloudstack_instance" "foobar" {
name = "terraform-test"
display_name = "terraform-test"
service_offering= "%s"
- network = "%s"
+ network_id = "%s"
ip_address = "%s"
template = "%s"
zone = "%s"
@@ -305,7 +305,7 @@ resource "cloudstack_instance" "foobar" {
name = "terraform-test"
display_name = "terraform-test"
service_offering= "%s"
- network = "%s"
+ network_id = "%s"
template = "%s"
project = "%s"
zone = "%s"
diff --git a/builtin/providers/cloudstack/resource_cloudstack_ipaddress.go b/builtin/providers/cloudstack/resource_cloudstack_ipaddress.go
index 4c140639a8f6..548d12dad8e1 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_ipaddress.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_ipaddress.go
@@ -16,18 +16,34 @@ func resourceCloudStackIPAddress() *schema.Resource {
Delete: resourceCloudStackIPAddressDelete,
Schema: map[string]*schema.Schema{
- "network": &schema.Schema{
+ "network_id": &schema.Schema{
Type: schema.TypeString,
Optional: true,
+ Computed: true,
ForceNew: true,
},
- "vpc": &schema.Schema{
+ "network": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Deprecated: "Please use the `network_id` field instead",
+ },
+
+ "vpc_id": &schema.Schema{
Type: schema.TypeString,
Optional: true,
+ Computed: true,
ForceNew: true,
},
+ "vpc": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Deprecated: "Please use the `vpc_id` field instead",
+ },
+
"project": &schema.Schema{
Type: schema.TypeString,
Optional: true,
@@ -52,9 +68,18 @@ func resourceCloudStackIPAddressCreate(d *schema.ResourceData, meta interface{})
// Create a new parameter struct
p := cs.Address.NewAssociateIpAddressParams()
- if network, ok := d.GetOk("network"); ok {
+ network, ok := d.GetOk("network_id")
+ if !ok {
+ network, ok = d.GetOk("network")
+ }
+ if ok {
// Retrieve the network ID
- networkid, e := retrieveID(cs, "network", network.(string))
+ networkid, e := retrieveID(
+ cs,
+ "network",
+ network.(string),
+ cloudstack.WithProject(d.Get("project").(string)),
+ )
if e != nil {
return e.Error()
}
@@ -63,9 +88,18 @@ func resourceCloudStackIPAddressCreate(d *schema.ResourceData, meta interface{})
p.SetNetworkid(networkid)
}
- if vpc, ok := d.GetOk("vpc"); ok {
+ vpc, ok := d.GetOk("vpc_id")
+ if !ok {
+ vpc, ok = d.GetOk("vpc")
+ }
+ if ok {
// Retrieve the vpc ID
- vpcid, e := retrieveID(cs, "vpc", vpc.(string))
+ vpcid, e := retrieveID(
+ cs,
+ "vpc",
+ vpc.(string),
+ cloudstack.WithProject(d.Get("project").(string)),
+ )
if e != nil {
return e.Error()
}
@@ -75,14 +109,8 @@ func resourceCloudStackIPAddressCreate(d *schema.ResourceData, meta interface{})
}
// If there is a project supplied, we retrieve and set the project id
- if project, ok := d.GetOk("project"); ok {
- // Retrieve the project ID
- projectid, e := retrieveID(cs, "project", project.(string))
- if e != nil {
- return e.Error()
- }
- // Set the default project ID
- p.SetProjectid(projectid)
+ if err := setProjectid(p, cs, d); err != nil {
+ return err
}
// Associate a new IP address
@@ -100,7 +128,10 @@ func resourceCloudStackIPAddressRead(d *schema.ResourceData, meta interface{}) e
cs := meta.(*cloudstack.CloudStackClient)
// Get the IP address details
- ip, count, err := cs.Address.GetPublicIpAddressByID(d.Id())
+ ip, count, err := cs.Address.GetPublicIpAddressByID(
+ d.Id(),
+ cloudstack.WithProject(d.Get("project").(string)),
+ )
if err != nil {
if count == 0 {
log.Printf(
@@ -115,24 +146,16 @@ func resourceCloudStackIPAddressRead(d *schema.ResourceData, meta interface{}) e
// Updated the IP address
d.Set("ip_address", ip.Ipaddress)
- if _, ok := d.GetOk("network"); ok {
- // Get the network details
- n, _, err := cs.Network.GetNetworkByID(ip.Associatednetworkid)
- if err != nil {
- return err
- }
-
- setValueOrID(d, "network", n.Name, ip.Associatednetworkid)
+ _, networkID := d.GetOk("network_id")
+ _, network := d.GetOk("network")
+ if networkID || network {
+ d.Set("network_id", ip.Associatednetworkid)
}
- if _, ok := d.GetOk("vpc"); ok {
- // Get the VPC details
- v, _, err := cs.VPC.GetVPCByID(ip.Vpcid)
- if err != nil {
- return err
- }
-
- setValueOrID(d, "vpc", v.Name, ip.Vpcid)
+ _, vpcID := d.GetOk("vpc_id")
+ _, vpc := d.GetOk("vpc")
+ if vpcID || vpc {
+ d.Set("vpc_id", ip.Vpcid)
}
setValueOrID(d, "project", ip.Project, ip.Projectid)
@@ -162,12 +185,14 @@ func resourceCloudStackIPAddressDelete(d *schema.ResourceData, meta interface{})
}
func verifyIPAddressParams(d *schema.ResourceData) error {
+ _, networkID := d.GetOk("network_id")
_, network := d.GetOk("network")
+ _, vpcID := d.GetOk("vpc_id")
_, vpc := d.GetOk("vpc")
- if network && vpc || !network && !vpc {
+ if (networkID || network) && (vpcID || vpc) || (!networkID && !network) && (!vpcID && !vpc) {
return fmt.Errorf(
- "You must supply a value for either (so not both) the 'network' or 'vpc' parameter")
+ "You must supply a value for either (so not both) the 'network_id' or 'vpc_id' parameter")
}
return nil
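
With the deprecated aliases in play, `verifyIPAddressParams` now treats `network_id`/`network` and `vpc_id`/`vpc` as two groups and errors when both groups are set or neither is, which is equivalent to requiring exactly one. A small sketch of the same check written as an exactly-one helper; the names are hypothetical and the error text echoes the message above.

```go
package main

import "fmt"

// exactlyOne reports whether exactly one of the two groups is populated,
// which is what verifyIPAddressParams enforces for network vs. vpc above.
func exactlyOne(network, vpc bool) bool {
	return network != vpc
}

func main() {
	cases := []struct{ network, vpc bool }{
		{true, false}, {false, true}, {true, true}, {false, false},
	}
	for _, c := range cases {
		if !exactlyOne(c.network, c.vpc) {
			fmt.Printf("network=%v vpc=%v: supply either 'network_id' or 'vpc_id' (not both)\n", c.network, c.vpc)
		}
	}
}
```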
diff --git a/builtin/providers/cloudstack/resource_cloudstack_ipaddress_test.go b/builtin/providers/cloudstack/resource_cloudstack_ipaddress_test.go
index edf120573f73..6b74e96922d9 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_ipaddress_test.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_ipaddress_test.go
@@ -42,8 +42,6 @@ func TestAccCloudStackIPAddress_vpc(t *testing.T) {
Check: resource.ComposeTestCheckFunc(
testAccCheckCloudStackIPAddressExists(
"cloudstack_ipaddress.foo", &ipaddr),
- resource.TestCheckResourceAttr(
- "cloudstack_ipaddress.foo", "vpc", "terraform-vpc"),
),
},
},
@@ -83,8 +81,8 @@ func testAccCheckCloudStackIPAddressAttributes(
ipaddr *cloudstack.PublicIpAddress) resource.TestCheckFunc {
return func(s *terraform.State) error {
- if ipaddr.Associatednetworkname != CLOUDSTACK_NETWORK_1 {
- return fmt.Errorf("Bad network: %s", ipaddr.Associatednetworkname)
+ if ipaddr.Associatednetworkid != CLOUDSTACK_NETWORK_1 {
+ return fmt.Errorf("Bad network ID: %s", ipaddr.Associatednetworkid)
}
return nil
@@ -114,7 +112,7 @@ func testAccCheckCloudStackIPAddressDestroy(s *terraform.State) error {
var testAccCloudStackIPAddress_basic = fmt.Sprintf(`
resource "cloudstack_ipaddress" "foo" {
- network = "%s"
+ network_id = "%s"
}`, CLOUDSTACK_NETWORK_1)
var testAccCloudStackIPAddress_vpc = fmt.Sprintf(`
@@ -126,7 +124,7 @@ resource "cloudstack_vpc" "foobar" {
}
resource "cloudstack_ipaddress" "foo" {
- vpc = "${cloudstack_vpc.foobar.name}"
+ vpc_id = "${cloudstack_vpc.foobar.id}"
}`,
CLOUDSTACK_VPC_CIDR_1,
CLOUDSTACK_VPC_OFFERING,
diff --git a/builtin/providers/cloudstack/resource_cloudstack_loadbalancer_rule.go b/builtin/providers/cloudstack/resource_cloudstack_loadbalancer_rule.go
index d4f3143ccc48..829d7296e766 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_loadbalancer_rule.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_loadbalancer_rule.go
@@ -29,27 +29,34 @@ func resourceCloudStackLoadBalancerRule() *schema.Resource {
Computed: true,
},
- "ip_address": &schema.Schema{
- Type: schema.TypeString,
- Optional: true,
- ForceNew: true,
- ConflictsWith: []string{"ipaddress"},
+ "ip_address_id": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ Computed: true,
+ ForceNew: true,
},
"ipaddress": &schema.Schema{
- Type: schema.TypeString,
- Optional: true,
- ForceNew: true,
- Deprecated: "Please use the `ip_address` field instead",
- ConflictsWith: []string{"ip_address"},
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Deprecated: "Please use the `ip_address_id` field instead",
},
- "network": &schema.Schema{
+ "network_id": &schema.Schema{
Type: schema.TypeString,
Optional: true,
+ Computed: true,
ForceNew: true,
},
+ "network": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Deprecated: "Please use the `network_id` field instead",
+ },
+
"algorithm": &schema.Schema{
Type: schema.TypeString,
Required: true,
@@ -67,11 +74,21 @@ func resourceCloudStackLoadBalancerRule() *schema.Resource {
ForceNew: true,
},
+ "member_ids": &schema.Schema{
+ Type: schema.TypeList,
+ Optional: true,
+ ForceNew: true,
+ Elem: &schema.Schema{Type: schema.TypeString},
+ ConflictsWith: []string{"members"},
+ },
+
"members": &schema.Schema{
- Type: schema.TypeList,
- Required: true,
- ForceNew: true,
- Elem: &schema.Schema{Type: schema.TypeString},
+ Type: schema.TypeList,
+ Optional: true,
+ ForceNew: true,
+ Elem: &schema.Schema{Type: schema.TypeString},
+ Deprecated: "Please use the `member_ids` field instead",
+ ConflictsWith: []string{"member_ids"},
},
},
}
@@ -99,23 +116,27 @@ func resourceCloudStackLoadBalancerRuleCreate(d *schema.ResourceData, meta inter
p.SetDescription(d.Get("name").(string))
}
- // Retrieve the network and the ID
- if network, ok := d.GetOk("network"); ok {
+ network, ok := d.GetOk("network_id")
+ if !ok {
+ network, ok = d.GetOk("network")
+ }
+ if ok {
+ // Retrieve the network ID
networkid, e := retrieveID(cs, "network", network.(string))
if e != nil {
return e.Error()
}
- // Set the default network ID
+ // Set the networkid
p.SetNetworkid(networkid)
}
- ipaddress, ok := d.GetOk("ip_address")
+ ipaddress, ok := d.GetOk("ip_address_id")
if !ok {
ipaddress, ok = d.GetOk("ipaddress")
}
if !ok {
- return errors.New("Either `ip_address` or [deprecated] `ipaddress` must be provided.")
+ return errors.New("Either `ip_address_id` or [deprecated] `ipaddress` must be provided.")
}
// Retrieve the ipaddress ID
@@ -135,8 +156,8 @@ func resourceCloudStackLoadBalancerRuleCreate(d *schema.ResourceData, meta inter
d.SetId(r.Id)
d.SetPartial("name")
d.SetPartial("description")
- d.SetPartial("ip_address")
- d.SetPartial("network")
+ d.SetPartial("ip_address_id")
+ d.SetPartial("network_id")
d.SetPartial("algorithm")
d.SetPartial("private_port")
d.SetPartial("public_port")
@@ -144,8 +165,16 @@ func resourceCloudStackLoadBalancerRuleCreate(d *schema.ResourceData, meta inter
// Create a new parameter struct
ap := cs.LoadBalancer.NewAssignToLoadBalancerRuleParams(r.Id)
+ members, ok := d.GetOk("member_ids")
+ if !ok {
+ members, ok = d.GetOk("members")
+ }
+ if !ok {
+ return errors.New("Either `member_ids` or [deprecated] `members` must be provided.")
+ }
+
var mbs []string
- for _, id := range d.Get("members").([]interface{}) {
+ for _, id := range members.([]interface{}) {
mbs = append(mbs, id.(string))
}
@@ -156,9 +185,10 @@ func resourceCloudStackLoadBalancerRuleCreate(d *schema.ResourceData, meta inter
return err
}
+ d.SetPartial("member_ids")
d.SetPartial("members")
-
d.Partial(false)
+
return resourceCloudStackLoadBalancerRuleRead(d, meta)
}
@@ -180,16 +210,13 @@ func resourceCloudStackLoadBalancerRuleRead(d *schema.ResourceData, meta interfa
d.Set("algorithm", lb.Algorithm)
d.Set("public_port", lb.Publicport)
d.Set("private_port", lb.Privateport)
-
- setValueOrID(d, "ip_address", lb.Publicip, lb.Publicipid)
+ d.Set("ip_address_id", lb.Publicipid)
// Only set network if user specified it to avoid spurious diffs
- if _, ok := d.GetOk("network"); ok {
- network, _, err := cs.Network.GetNetworkByID(lb.Networkid)
- if err != nil {
- return err
- }
- setValueOrID(d, "network", network.Name, lb.Networkid)
+ _, networkID := d.GetOk("network_id")
+ _, network := d.GetOk("network")
+ if networkID || network {
+ d.Set("network_id", lb.Networkid)
}
return nil
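
A hedged usage sketch of the renamed `cloudstack_loadbalancer_rule` arguments (`ip_address_id`, `network_id`, `member_ids`); all referenced resources and port values are placeholders:

resource "cloudstack_loadbalancer_rule" "example" {
  name          = "example-lb"
  ip_address_id = "${cloudstack_ipaddress.example.id}"
  # network_id is optional and typically only needed for VPC setups
  algorithm     = "roundrobin"
  public_port   = 80
  private_port  = 80
  member_ids    = ["${cloudstack_instance.example.id}"]
}
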
diff --git a/builtin/providers/cloudstack/resource_cloudstack_loadbalancer_rule_test.go b/builtin/providers/cloudstack/resource_cloudstack_loadbalancer_rule_test.go
index b34c4f555f59..9d3f6ec1e651 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_loadbalancer_rule_test.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_loadbalancer_rule_test.go
@@ -75,7 +75,7 @@ func TestAccCloudStackLoadBalancerRule_update(t *testing.T) {
})
}
-func TestAccCloudStackLoadBalancerRule_forcenew(t *testing.T) {
+func TestAccCloudStackLoadBalancerRule_forceNew(t *testing.T) {
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
@@ -138,7 +138,7 @@ func TestAccCloudStackLoadBalancerRule_vpc(t *testing.T) {
})
}
-func TestAccCloudStackLoadBalancerRule_vpc_update(t *testing.T) {
+func TestAccCloudStackLoadBalancerRule_vpcUpdate(t *testing.T) {
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
@@ -243,7 +243,7 @@ resource "cloudstack_instance" "foobar1" {
name = "terraform-server1"
display_name = "terraform"
service_offering= "%s"
- network = "%s"
+ network_id = "%s"
template = "%s"
zone = "%s"
expunge = true
@@ -251,11 +251,11 @@ resource "cloudstack_instance" "foobar1" {
resource "cloudstack_loadbalancer_rule" "foo" {
name = "terraform-lb"
- ip_address = "%s"
+ ip_address_id = "%s"
algorithm = "roundrobin"
public_port = 80
private_port = 80
- members = ["${cloudstack_instance.foobar1.id}"]
+ member_ids = ["${cloudstack_instance.foobar1.id}"]
}
`,
CLOUDSTACK_SERVICE_OFFERING_1,
@@ -269,7 +269,7 @@ resource "cloudstack_instance" "foobar1" {
name = "terraform-server1"
display_name = "terraform"
service_offering= "%s"
- network = "%s"
+ network_id = "%s"
template = "%s"
zone = "%s"
expunge = true
@@ -277,11 +277,11 @@ resource "cloudstack_instance" "foobar1" {
resource "cloudstack_loadbalancer_rule" "foo" {
name = "terraform-lb-update"
- ip_address = "%s"
+ ip_address_id = "%s"
algorithm = "leastconn"
public_port = 80
private_port = 80
- members = ["${cloudstack_instance.foobar1.id}"]
+ member_ids = ["${cloudstack_instance.foobar1.id}"]
}
`,
CLOUDSTACK_SERVICE_OFFERING_1,
@@ -295,7 +295,7 @@ resource "cloudstack_instance" "foobar1" {
name = "terraform-server1"
display_name = "terraform"
service_offering= "%s"
- network = "%s"
+ network_id = "%s"
template = "%s"
zone = "%s"
expunge = true
@@ -303,11 +303,11 @@ resource "cloudstack_instance" "foobar1" {
resource "cloudstack_loadbalancer_rule" "foo" {
name = "terraform-lb-update"
- ip_address = "%s"
+ ip_address_id = "%s"
algorithm = "leastconn"
public_port = 443
private_port = 443
- members = ["${cloudstack_instance.foobar1.id}"]
+ member_ids = ["${cloudstack_instance.foobar1.id}"]
}
`,
CLOUDSTACK_SERVICE_OFFERING_1,
@@ -328,19 +328,19 @@ resource "cloudstack_network" "foo" {
name = "terraform-network"
cidr = "%s"
network_offering = "%s"
- vpc = "${cloudstack_vpc.foobar.name}"
+ vpc_id = "${cloudstack_vpc.foobar.id}"
zone = "${cloudstack_vpc.foobar.zone}"
}
resource "cloudstack_ipaddress" "foo" {
- vpc = "${cloudstack_vpc.foobar.name}"
+ vpc_id = "${cloudstack_vpc.foobar.id}"
}
resource "cloudstack_instance" "foobar1" {
name = "terraform-server1"
display_name = "terraform"
service_offering= "%s"
- network = "${cloudstack_network.foo.name}"
+ network_id = "${cloudstack_network.foo.id}"
template = "%s"
zone = "${cloudstack_network.foo.zone}"
expunge = true
@@ -348,12 +348,12 @@ resource "cloudstack_instance" "foobar1" {
resource "cloudstack_loadbalancer_rule" "foo" {
name = "terraform-lb"
- ip_address = "${cloudstack_ipaddress.foo.ip_address}"
+ ip_address_id = "${cloudstack_ipaddress.foo.id}"
algorithm = "roundrobin"
- network = "${cloudstack_network.foo.id}"
+ network_id = "${cloudstack_network.foo.id}"
public_port = 80
private_port = 80
- members = ["${cloudstack_instance.foobar1.id}"]
+ member_ids = ["${cloudstack_instance.foobar1.id}"]
}`,
CLOUDSTACK_VPC_CIDR_1,
CLOUDSTACK_VPC_OFFERING,
@@ -375,19 +375,19 @@ resource "cloudstack_network" "foo" {
name = "terraform-network"
cidr = "%s"
network_offering = "%s"
- vpc = "${cloudstack_vpc.foobar.name}"
+ vpc_id = "${cloudstack_vpc.foobar.id}"
zone = "${cloudstack_vpc.foobar.zone}"
}
resource "cloudstack_ipaddress" "foo" {
- vpc = "${cloudstack_vpc.foobar.name}"
+ vpc_id = "${cloudstack_vpc.foobar.id}"
}
resource "cloudstack_instance" "foobar1" {
name = "terraform-server1"
display_name = "terraform"
service_offering= "%s"
- network = "${cloudstack_network.foo.name}"
+ network_id = "${cloudstack_network.foo.id}"
template = "%s"
zone = "${cloudstack_network.foo.zone}"
expunge = true
@@ -397,7 +397,7 @@ resource "cloudstack_instance" "foobar2" {
name = "terraform-server2"
display_name = "terraform"
service_offering= "%s"
- network = "${cloudstack_network.foo.name}"
+ network_id = "${cloudstack_network.foo.id}"
template = "%s"
zone = "${cloudstack_network.foo.zone}"
expunge = true
@@ -405,12 +405,12 @@ resource "cloudstack_instance" "foobar2" {
resource "cloudstack_loadbalancer_rule" "foo" {
name = "terraform-lb-update"
- ip_address = "${cloudstack_ipaddress.foo.ip_address}"
+ ip_address_id = "${cloudstack_ipaddress.foo.id}"
algorithm = "leastconn"
- network = "${cloudstack_network.foo.id}"
+ network_id = "${cloudstack_network.foo.id}"
public_port = 443
private_port = 443
- members = ["${cloudstack_instance.foobar1.id}", "${cloudstack_instance.foobar2.id}"]
+ member_ids = ["${cloudstack_instance.foobar1.id}", "${cloudstack_instance.foobar2.id}"]
}`,
CLOUDSTACK_VPC_CIDR_1,
CLOUDSTACK_VPC_OFFERING,
diff --git a/builtin/providers/cloudstack/resource_cloudstack_network.go b/builtin/providers/cloudstack/resource_cloudstack_network.go
index 261d0ec508d1..69dc27091a99 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_network.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_network.go
@@ -68,16 +68,33 @@ func resourceCloudStackNetwork() *schema.Resource {
ForceNew: true,
},
- "vpc": &schema.Schema{
+ "vpc_id": &schema.Schema{
Type: schema.TypeString,
Optional: true,
+ Computed: true,
ForceNew: true,
},
+ "vpc": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Deprecated: "Please use the `vpc_id` field instead",
+ },
+
+ "acl_id": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ Computed: true,
+ ForceNew: true,
+ ConflictsWith: []string{"aclid"},
+ },
+
"aclid": &schema.Schema{
- Type: schema.TypeString,
- Optional: true,
- ForceNew: true,
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Deprecated: "Please use the `acl_id` field instead",
},
"project": &schema.Schema{
@@ -138,34 +155,39 @@ func resourceCloudStackNetworkCreate(d *schema.ResourceData, meta interface{}) e
}
// Check is this network needs to be created in a VPC
- vpc := d.Get("vpc").(string)
- if vpc != "" {
+ vpc, ok := d.GetOk("vpc_id")
+ if !ok {
+ vpc, ok = d.GetOk("vpc")
+ }
+ if ok {
// Retrieve the vpc ID
- vpcid, e := retrieveID(cs, "vpc", vpc)
+ vpcid, e := retrieveID(
+ cs,
+ "vpc",
+ vpc.(string),
+ cloudstack.WithProject(d.Get("project").(string)),
+ )
if e != nil {
return e.Error()
}
- // Set the vpc ID
+ // Set the vpcid
p.SetVpcid(vpcid)
// Since we're in a VPC, check if we want to assiciate an ACL list
- aclid := d.Get("aclid").(string)
- if aclid != "" {
+ aclid, ok := d.GetOk("acl_id")
+ if !ok {
+		aclid, ok := d.GetOk("acl_id")
+		if !ok {
+			aclid, ok = d.GetOk("aclid")
+ }
+ if ok {
// Set the acl ID
- p.SetAclid(aclid)
+ p.SetAclid(aclid.(string))
}
}
// If there is a project supplied, we retrieve and set the project id
- if project, ok := d.GetOk("project"); ok {
- // Retrieve the project ID
- projectid, e := retrieveID(cs, "project", project.(string))
- if e != nil {
- return e.Error()
- }
- // Set the default project ID
- p.SetProjectid(projectid)
+ if err := setProjectid(p, cs, d); err != nil {
+ return err
}
// Create the new network
@@ -188,7 +210,10 @@ func resourceCloudStackNetworkRead(d *schema.ResourceData, meta interface{}) err
cs := meta.(*cloudstack.CloudStackClient)
// Get the virtual machine details
- n, count, err := cs.Network.GetNetworkByID(d.Id())
+ n, count, err := cs.Network.GetNetworkByID(
+ d.Id(),
+ cloudstack.WithProject(d.Get("project").(string)),
+ )
if err != nil {
if count == 0 {
log.Printf(
@@ -205,6 +230,18 @@ func resourceCloudStackNetworkRead(d *schema.ResourceData, meta interface{}) err
d.Set("cidr", n.Cidr)
d.Set("gateway", n.Gateway)
+ _, vpcID := d.GetOk("vpc_id")
+ _, vpc := d.GetOk("vpc")
+ if vpcID || vpc {
+ d.Set("vpc_id", n.Vpcid)
+ }
+
+ _, aclID := d.GetOk("acl_id")
+ _, acl := d.GetOk("aclid")
+ if aclID || acl {
+ d.Set("acl_id", n.Aclid)
+ }
+
// Read the tags and store them in a map
tags := make(map[string]interface{})
for item := range n.Tags {
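
The sketch below, assuming placeholder names, a placeholder CIDR, and a placeholder network offering, shows how a VPC network would reference the new `vpc_id` and optional `acl_id` arguments:

resource "cloudstack_network" "example" {
  name             = "example-network"
  cidr             = "10.0.10.0/24"                        # placeholder CIDR
  network_offering = "example-network-offering"            # placeholder offering name
  vpc_id           = "${cloudstack_vpc.example.id}"
  acl_id           = "${cloudstack_network_acl.example.id}" # optional
  zone             = "${cloudstack_vpc.example.zone}"
}
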
diff --git a/builtin/providers/cloudstack/resource_cloudstack_network_acl.go b/builtin/providers/cloudstack/resource_cloudstack_network_acl.go
index 2504b762bfab..c39c695d9b76 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_network_acl.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_network_acl.go
@@ -1,6 +1,7 @@
package cloudstack
import (
+ "errors"
"fmt"
"log"
"strings"
@@ -29,11 +30,19 @@ func resourceCloudStackNetworkACL() *schema.Resource {
ForceNew: true,
},
- "vpc": &schema.Schema{
+ "vpc_id": &schema.Schema{
Type: schema.TypeString,
- Required: true,
+ Optional: true,
+ Computed: true,
ForceNew: true,
},
+
+ "vpc": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Deprecated: "Please use the `vpc_id` field instead",
+ },
},
}
}
@@ -43,8 +52,16 @@ func resourceCloudStackNetworkACLCreate(d *schema.ResourceData, meta interface{}
name := d.Get("name").(string)
+ vpc, ok := d.GetOk("vpc_id")
+ if !ok {
+ vpc, ok = d.GetOk("vpc")
+ }
+ if !ok {
+ return errors.New("Either `vpc_id` or [deprecated] `vpc` must be provided.")
+ }
+
// Retrieve the vpc ID
- vpcid, e := retrieveID(cs, "vpc", d.Get("vpc").(string))
+ vpcid, e := retrieveID(cs, "vpc", vpc.(string))
if e != nil {
return e.Error()
}
@@ -88,14 +105,7 @@ func resourceCloudStackNetworkACLRead(d *schema.ResourceData, meta interface{})
d.Set("name", f.Name)
d.Set("description", f.Description)
-
- // Get the VPC details
- v, _, err := cs.VPC.GetVPCByID(f.Vpcid)
- if err != nil {
- return err
- }
-
- setValueOrID(d, "vpc", v.Name, v.Id)
+ d.Set("vpc_id", f.Vpcid)
return nil
}
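
A minimal sketch (placeholder names) of the `vpc_id` argument that replaces the deprecated name-based `vpc` reference:

resource "cloudstack_network_acl" "example" {
  name   = "example-acl"
  vpc_id = "${cloudstack_vpc.example.id}"
}
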
diff --git a/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule.go b/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule.go
index 14e39d99c9e7..88de58f911ff 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule.go
@@ -1,6 +1,7 @@
package cloudstack
import (
+ "errors"
"fmt"
"strconv"
"strings"
@@ -20,10 +21,19 @@ func resourceCloudStackNetworkACLRule() *schema.Resource {
Delete: resourceCloudStackNetworkACLRuleDelete,
Schema: map[string]*schema.Schema{
+ "acl_id": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ ConflictsWith: []string{"aclid"},
+ },
+
"aclid": &schema.Schema{
- Type: schema.TypeString,
- Required: true,
- ForceNew: true,
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Deprecated: "Please use the `acl_id` field instead",
+ ConflictsWith: []string{"acl_id"},
},
"managed": &schema.Schema{
@@ -109,8 +119,16 @@ func resourceCloudStackNetworkACLRuleCreate(d *schema.ResourceData, meta interfa
return err
}
+ aclid, ok := d.GetOk("acl_id")
+ if !ok {
+ aclid, ok = d.GetOk("aclid")
+ }
+ if !ok {
+ return errors.New("Either `acl_id` or [deprecated] `aclid` must be provided.")
+ }
+
// We need to set this upfront in order to be able to save a partial state
- d.SetId(d.Get("aclid").(string))
+ d.SetId(aclid.(string))
// Create all rules that are configured
if nrs := d.Get("rule").(*schema.Set); nrs.Len() > 0 {
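
And a corresponding rule sketch wiring `acl_id` to the ACL's ID; the rule arguments shown are illustrative only:

resource "cloudstack_network_acl_rule" "example" {
  acl_id = "${cloudstack_network_acl.example.id}"

  rule {
    action   = "allow"
    protocol = "tcp"
    # remaining rule arguments are unchanged by this patch
  }
}
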
diff --git a/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule_test.go b/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule_test.go
index 862418f704e3..3fb978172a75 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule_test.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule_test.go
@@ -219,11 +219,11 @@ resource "cloudstack_vpc" "foobar" {
resource "cloudstack_network_acl" "foo" {
name = "terraform-acl"
description = "terraform-acl-text"
- vpc = "${cloudstack_vpc.foobar.id}"
+ vpc_id = "${cloudstack_vpc.foobar.id}"
}
resource "cloudstack_network_acl_rule" "foo" {
- aclid = "${cloudstack_network_acl.foo.id}"
+ acl_id = "${cloudstack_network_acl.foo.id}"
rule {
action = "allow"
@@ -263,11 +263,11 @@ resource "cloudstack_vpc" "foobar" {
resource "cloudstack_network_acl" "foo" {
name = "terraform-acl"
description = "terraform-acl-text"
- vpc = "${cloudstack_vpc.foobar.id}"
+ vpc_id = "${cloudstack_vpc.foobar.id}"
}
resource "cloudstack_network_acl_rule" "foo" {
- aclid = "${cloudstack_network_acl.foo.id}"
+ acl_id = "${cloudstack_network_acl.foo.id}"
rule {
action = "deny"
diff --git a/builtin/providers/cloudstack/resource_cloudstack_network_acl_test.go b/builtin/providers/cloudstack/resource_cloudstack_network_acl_test.go
index c8a58a8fe6ee..d6431c39956b 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_network_acl_test.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_network_acl_test.go
@@ -22,8 +22,6 @@ func TestAccCloudStackNetworkACL_basic(t *testing.T) {
testAccCheckCloudStackNetworkACLExists(
"cloudstack_network_acl.foo", &acl),
testAccCheckCloudStackNetworkACLBasicAttributes(&acl),
- resource.TestCheckResourceAttr(
- "cloudstack_network_acl.foo", "vpc", "terraform-vpc"),
),
},
},
@@ -106,7 +104,7 @@ resource "cloudstack_vpc" "foobar" {
resource "cloudstack_network_acl" "foo" {
name = "terraform-acl"
description = "terraform-acl-text"
- vpc = "${cloudstack_vpc.foobar.name}"
+ vpc_id = "${cloudstack_vpc.foobar.id}"
}`,
CLOUDSTACK_VPC_CIDR_1,
CLOUDSTACK_VPC_OFFERING,
diff --git a/builtin/providers/cloudstack/resource_cloudstack_network_test.go b/builtin/providers/cloudstack/resource_cloudstack_network_test.go
index 3bc1744b9bf7..49400dad7e8b 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_network_test.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_network_test.go
@@ -44,8 +44,6 @@ func TestAccCloudStackNetwork_vpc(t *testing.T) {
testAccCheckCloudStackNetworkExists(
"cloudstack_network.foo", &network),
testAccCheckCloudStackNetworkVPCAttributes(&network),
- resource.TestCheckResourceAttr(
- "cloudstack_network.foo", "vpc", "terraform-vpc"),
),
},
},
@@ -187,7 +185,7 @@ resource "cloudstack_network" "foo" {
name = "terraform-network"
cidr = "%s"
network_offering = "%s"
- vpc = "${cloudstack_vpc.foobar.name}"
+ vpc_id = "${cloudstack_vpc.foobar.id}"
zone = "${cloudstack_vpc.foobar.zone}"
}`,
CLOUDSTACK_VPC_CIDR_1,
diff --git a/builtin/providers/cloudstack/resource_cloudstack_nic.go b/builtin/providers/cloudstack/resource_cloudstack_nic.go
index 6902f197e5fe..0baae852ead0 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_nic.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_nic.go
@@ -1,6 +1,7 @@
package cloudstack
import (
+ "errors"
"fmt"
"log"
"strings"
@@ -16,12 +17,20 @@ func resourceCloudStackNIC() *schema.Resource {
Delete: resourceCloudStackNICDelete,
Schema: map[string]*schema.Schema{
- "network": &schema.Schema{
+ "network_id": &schema.Schema{
Type: schema.TypeString,
- Required: true,
+ Optional: true,
+ Computed: true,
ForceNew: true,
},
+ "network": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Deprecated: "Please use the `network_id` field instead",
+ },
+
"ip_address": &schema.Schema{
Type: schema.TypeString,
Optional: true,
@@ -32,16 +41,23 @@ func resourceCloudStackNIC() *schema.Resource {
"ipaddress": &schema.Schema{
Type: schema.TypeString,
Optional: true,
- Computed: true,
ForceNew: true,
Deprecated: "Please use the `ip_address` field instead",
},
- "virtual_machine": &schema.Schema{
+ "virtual_machine_id": &schema.Schema{
Type: schema.TypeString,
- Required: true,
+ Optional: true,
+ Computed: true,
ForceNew: true,
},
+
+ "virtual_machine": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Deprecated: "Please use the `virtual_machine_id` field instead",
+ },
},
}
}
@@ -49,14 +65,31 @@ func resourceCloudStackNIC() *schema.Resource {
func resourceCloudStackNICCreate(d *schema.ResourceData, meta interface{}) error {
cs := meta.(*cloudstack.CloudStackClient)
+ network, ok := d.GetOk("network_id")
+ if !ok {
+ network, ok = d.GetOk("network")
+ }
+ if !ok {
+ return errors.New("Either `network_id` or [deprecated] `network` must be provided.")
+ }
+
// Retrieve the network ID
- networkid, e := retrieveID(cs, "network", d.Get("network").(string))
+ networkid, e := retrieveID(cs, "network", network.(string))
if e != nil {
return e.Error()
}
+ virtualmachine, ok := d.GetOk("virtual_machine_id")
+ if !ok {
+ virtualmachine, ok = d.GetOk("virtual_machine")
+ }
+ if !ok {
+ return errors.New(
+ "Either `virtual_machine_id` or [deprecated] `virtual_machine` must be provided.")
+ }
+
// Retrieve the virtual_machine ID
- virtualmachineid, e := retrieveID(cs, "virtual_machine", d.Get("virtual_machine").(string))
+ virtualmachineid, e := retrieveID(cs, "virtual_machine", virtualmachine.(string))
if e != nil {
return e.Error()
}
@@ -89,7 +122,7 @@ func resourceCloudStackNICCreate(d *schema.ResourceData, meta interface{}) error
}
if !found {
- return fmt.Errorf("Could not find NIC ID for network: %s", d.Get("network").(string))
+ return fmt.Errorf("Could not find NIC ID for network ID: %s", networkid)
}
return resourceCloudStackNICRead(d, meta)
@@ -98,8 +131,23 @@ func resourceCloudStackNICCreate(d *schema.ResourceData, meta interface{}) error
func resourceCloudStackNICRead(d *schema.ResourceData, meta interface{}) error {
cs := meta.(*cloudstack.CloudStackClient)
+ virtualmachine, ok := d.GetOk("virtual_machine_id")
+ if !ok {
+ virtualmachine, ok = d.GetOk("virtual_machine")
+ }
+ if !ok {
+ return errors.New(
+ "Either `virtual_machine_id` or [deprecated] `virtual_machine` must be provided.")
+ }
+
+ // Retrieve the virtual_machine ID
+ virtualmachineid, e := retrieveID(cs, "virtual_machine", virtualmachine.(string))
+ if e != nil {
+ return e.Error()
+ }
+
// Get the virtual machine details
- vm, count, err := cs.VirtualMachine.GetVirtualMachineByName(d.Get("virtual_machine").(string))
+ vm, count, err := cs.VirtualMachine.GetVirtualMachineByID(virtualmachineid)
if err != nil {
if count == 0 {
log.Printf("[DEBUG] Instance %s does no longer exist", d.Get("virtual_machine").(string))
@@ -115,15 +163,15 @@ func resourceCloudStackNICRead(d *schema.ResourceData, meta interface{}) error {
for _, n := range vm.Nic {
if n.Id == d.Id() {
d.Set("ip_address", n.Ipaddress)
- setValueOrID(d, "network", n.Networkname, n.Networkid)
- setValueOrID(d, "virtual_machine", vm.Name, vm.Id)
+ d.Set("network_id", n.Networkid)
+ d.Set("virtual_machine_id", vm.Id)
found = true
break
}
}
if !found {
- log.Printf("[DEBUG] NIC for network %s does no longer exist", d.Get("network").(string))
+ log.Printf("[DEBUG] NIC for network ID %s does no longer exist", d.Get("network_id").(string))
d.SetId("")
}
@@ -133,8 +181,17 @@ func resourceCloudStackNICRead(d *schema.ResourceData, meta interface{}) error {
func resourceCloudStackNICDelete(d *schema.ResourceData, meta interface{}) error {
cs := meta.(*cloudstack.CloudStackClient)
+ virtualmachine, ok := d.GetOk("virtual_machine_id")
+ if !ok {
+ virtualmachine, ok = d.GetOk("virtual_machine")
+ }
+ if !ok {
+ return errors.New(
+ "Either `virtual_machine_id` or [deprecated] `virtual_machine` must be provided.")
+ }
+
// Retrieve the virtual_machine ID
- virtualmachineid, e := retrieveID(cs, "virtual_machine", d.Get("virtual_machine").(string))
+ virtualmachineid, e := retrieveID(cs, "virtual_machine", virtualmachine.(string))
if e != nil {
return e.Error()
}
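
A quick sketch of attaching a NIC with the new ID-based arguments; the instance and network names are placeholders:

resource "cloudstack_nic" "example" {
  network_id         = "${cloudstack_network.example.id}"
  virtual_machine_id = "${cloudstack_instance.example.id}"
}
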
diff --git a/builtin/providers/cloudstack/resource_cloudstack_nic_test.go b/builtin/providers/cloudstack/resource_cloudstack_nic_test.go
index 249c02d89d9c..a7e6fcff6d30 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_nic_test.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_nic_test.go
@@ -103,8 +103,8 @@ func testAccCheckCloudStackNICAttributes(
nic *cloudstack.Nic) resource.TestCheckFunc {
return func(s *terraform.State) error {
- if nic.Networkname != CLOUDSTACK_2ND_NIC_NETWORK {
- return fmt.Errorf("Bad network: %s", nic.Networkname)
+ if nic.Networkid != CLOUDSTACK_2ND_NIC_NETWORK {
+ return fmt.Errorf("Bad network ID: %s", nic.Networkid)
}
return nil
@@ -115,8 +115,8 @@ func testAccCheckCloudStackNICIPAddress(
nic *cloudstack.Nic) resource.TestCheckFunc {
return func(s *terraform.State) error {
- if nic.Networkname != CLOUDSTACK_2ND_NIC_NETWORK {
- return fmt.Errorf("Bad network: %s", nic.Networkname)
+ if nic.Networkid != CLOUDSTACK_2ND_NIC_NETWORK {
+ return fmt.Errorf("Bad network ID: %s", nic.Networkname)
}
if nic.Ipaddress != CLOUDSTACK_2ND_NIC_IPADDRESS {
@@ -154,15 +154,15 @@ resource "cloudstack_instance" "foobar" {
name = "terraform-test"
display_name = "terraform"
service_offering= "%s"
- network = "%s"
+ network_id = "%s"
template = "%s"
zone = "%s"
expunge = true
}
resource "cloudstack_nic" "foo" {
- network = "%s"
- virtual_machine = "${cloudstack_instance.foobar.name}"
+ network_id = "%s"
+ virtual_machine_id = "${cloudstack_instance.foobar.id}"
}`,
CLOUDSTACK_SERVICE_OFFERING_1,
CLOUDSTACK_NETWORK_1,
@@ -175,16 +175,16 @@ resource "cloudstack_instance" "foobar" {
name = "terraform-test"
display_name = "terraform"
service_offering= "%s"
- network = "%s"
+ network_id = "%s"
template = "%s"
zone = "%s"
expunge = true
}
resource "cloudstack_nic" "foo" {
- network = "%s"
+ network_id = "%s"
ip_address = "%s"
- virtual_machine = "${cloudstack_instance.foobar.name}"
+ virtual_machine_id = "${cloudstack_instance.foobar.id}"
}`,
CLOUDSTACK_SERVICE_OFFERING_1,
CLOUDSTACK_NETWORK_1,
diff --git a/builtin/providers/cloudstack/resource_cloudstack_port_forward.go b/builtin/providers/cloudstack/resource_cloudstack_port_forward.go
index 64fd6a3bb95c..6615b82ae7bb 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_port_forward.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_port_forward.go
@@ -22,7 +22,7 @@ func resourceCloudStackPortForward() *schema.Resource {
Delete: resourceCloudStackPortForwardDelete,
Schema: map[string]*schema.Schema{
- "ip_address": &schema.Schema{
+ "ip_address_id": &schema.Schema{
Type: schema.TypeString,
Optional: true,
ForceNew: true,
@@ -33,8 +33,8 @@ func resourceCloudStackPortForward() *schema.Resource {
Type: schema.TypeString,
Optional: true,
ForceNew: true,
- Deprecated: "Please use the `ip_address` field instead",
- ConflictsWith: []string{"ip_address"},
+ Deprecated: "Please use the `ip_address_id` field instead",
+ ConflictsWith: []string{"ip_address_id"},
},
"managed": &schema.Schema{
@@ -69,9 +69,15 @@ func resourceCloudStackPortForward() *schema.Resource {
Required: true,
},
- "virtual_machine": &schema.Schema{
+ "virtual_machine_id": &schema.Schema{
Type: schema.TypeString,
- Required: true,
+ Optional: true,
+ },
+
+ "virtual_machine": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ Deprecated: "Please use the `virtual_machine_id` field instead",
},
"uuid": &schema.Schema{
@@ -88,16 +94,21 @@ func resourceCloudStackPortForward() *schema.Resource {
func resourceCloudStackPortForwardCreate(d *schema.ResourceData, meta interface{}) error {
cs := meta.(*cloudstack.CloudStackClient)
- ipaddress, ok := d.GetOk("ip_address")
+ ipaddress, ok := d.GetOk("ip_address_id")
if !ok {
ipaddress, ok = d.GetOk("ipaddress")
}
if !ok {
- return errors.New("Either `ip_address` or [deprecated] `ipaddress` must be provided.")
+ return errors.New("Either `ip_address_id` or [deprecated] `ipaddress` must be provided.")
}
// Retrieve the ipaddress ID
- ipaddressid, e := retrieveID(cs, "ip_address", ipaddress.(string))
+ ipaddressid, e := retrieveID(
+ cs,
+ "ip_address",
+ ipaddress.(string),
+ cloudstack.WithProject(d.Get("project").(string)),
+ )
if e != nil {
return e.Error()
}
@@ -173,13 +184,30 @@ func createPortForward(
return err
}
+ virtualmachine, ok := forward["virtual_machine_id"]
+ if !ok {
+ virtualmachine, ok = forward["virtual_machine"]
+ }
+ if !ok {
+ return errors.New(
+ "Either `virtual_machine_id` or [deprecated] `virtual_machine` must be provided.")
+ }
+
// Retrieve the virtual_machine ID
- virtualmachineid, e := retrieveID(cs, "virtual_machine", forward["virtual_machine"].(string))
+ virtualmachineid, e := retrieveID(
+ cs,
+ "virtual_machine",
+ virtualmachine.(string),
+ cloudstack.WithProject(d.Get("project").(string)),
+ )
if e != nil {
return e.Error()
}
- vm, _, err := cs.VirtualMachine.GetVirtualMachineByID(virtualmachineid)
+ vm, _, err := cs.VirtualMachine.GetVirtualMachineByID(
+ virtualmachineid,
+ cloudstack.WithProject(d.Get("project").(string)),
+ )
if err != nil {
return err
}
@@ -265,12 +293,7 @@ func resourceCloudStackPortForwardRead(d *schema.ResourceData, meta interface{})
forward["protocol"] = f.Protocol
forward["private_port"] = privPort
forward["public_port"] = pubPort
-
- if isID(forward["virtual_machine"].(string)) {
- forward["virtual_machine"] = f.Virtualmachineid
- } else {
- forward["virtual_machine"] = f.Virtualmachinename
- }
+ forward["virtual_machine_id"] = f.Virtualmachineid
forwards.Add(forward)
}
@@ -282,11 +305,11 @@ func resourceCloudStackPortForwardRead(d *schema.ResourceData, meta interface{})
for uuid := range forwardMap {
// Make a dummy forward to hold the unknown UUID
forward := map[string]interface{}{
- "protocol": uuid,
- "private_port": 0,
- "public_port": 0,
- "virtual_machine": uuid,
- "uuid": uuid,
+ "protocol": uuid,
+ "private_port": 0,
+ "public_port": 0,
+ "virtual_machine_id": uuid,
+ "uuid": uuid,
}
// Add the dummy forward to the forwards set
@@ -316,9 +339,9 @@ func resourceCloudStackPortForwardUpdate(d *schema.ResourceData, meta interface{
// set to make sure we end up in a consistent state
forwards := o.(*schema.Set).Intersection(n.(*schema.Set))
- // First loop through all the new forwards and create (before destroy) them
- if nrs.Len() > 0 {
- err := createPortForwards(d, meta, forwards, nrs)
+ // First loop through all the old forwards and delete them
+ if ors.Len() > 0 {
+ err := deletePortForwards(d, meta, forwards, ors)
// We need to update this first to preserve the correct state
d.Set("forward", forwards)
@@ -328,9 +351,9 @@ func resourceCloudStackPortForwardUpdate(d *schema.ResourceData, meta interface{
}
}
- // Then loop through all the old forwards and delete them
- if ors.Len() > 0 {
- err := deletePortForwards(d, meta, forwards, ors)
+ // Then loop through all the new forwards and create them
+ if nrs.Len() > 0 {
+ err := createPortForwards(d, meta, forwards, nrs)
// We need to update this first to preserve the correct state
d.Set("forward", forwards)
diff --git a/builtin/providers/cloudstack/resource_cloudstack_port_forward_test.go b/builtin/providers/cloudstack/resource_cloudstack_port_forward_test.go
index 8e9104ea13fd..f9038b21c839 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_port_forward_test.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_port_forward_test.go
@@ -21,15 +21,9 @@ func TestAccCloudStackPortForward_basic(t *testing.T) {
Check: resource.ComposeTestCheckFunc(
testAccCheckCloudStackPortForwardsExist("cloudstack_port_forward.foo"),
resource.TestCheckResourceAttr(
- "cloudstack_port_forward.foo", "ip_address", CLOUDSTACK_PUBLIC_IPADDRESS),
+ "cloudstack_port_forward.foo", "ip_address_id", CLOUDSTACK_PUBLIC_IPADDRESS),
resource.TestCheckResourceAttr(
- "cloudstack_port_forward.foo", "forward.952396423.protocol", "tcp"),
- resource.TestCheckResourceAttr(
- "cloudstack_port_forward.foo", "forward.952396423.private_port", "443"),
- resource.TestCheckResourceAttr(
- "cloudstack_port_forward.foo", "forward.952396423.public_port", "8443"),
- resource.TestCheckResourceAttr(
- "cloudstack_port_forward.foo", "forward.952396423.virtual_machine", "terraform-test"),
+ "cloudstack_port_forward.foo", "forward.#", "1"),
),
},
},
@@ -47,17 +41,9 @@ func TestAccCloudStackPortForward_update(t *testing.T) {
Check: resource.ComposeTestCheckFunc(
testAccCheckCloudStackPortForwardsExist("cloudstack_port_forward.foo"),
resource.TestCheckResourceAttr(
- "cloudstack_port_forward.foo", "ip_address", CLOUDSTACK_PUBLIC_IPADDRESS),
+ "cloudstack_port_forward.foo", "ip_address_id", CLOUDSTACK_PUBLIC_IPADDRESS),
resource.TestCheckResourceAttr(
"cloudstack_port_forward.foo", "forward.#", "1"),
- resource.TestCheckResourceAttr(
- "cloudstack_port_forward.foo", "forward.952396423.protocol", "tcp"),
- resource.TestCheckResourceAttr(
- "cloudstack_port_forward.foo", "forward.952396423.private_port", "443"),
- resource.TestCheckResourceAttr(
- "cloudstack_port_forward.foo", "forward.952396423.public_port", "8443"),
- resource.TestCheckResourceAttr(
- "cloudstack_port_forward.foo", "forward.952396423.virtual_machine", "terraform-test"),
),
},
@@ -66,25 +52,9 @@ func TestAccCloudStackPortForward_update(t *testing.T) {
Check: resource.ComposeTestCheckFunc(
testAccCheckCloudStackPortForwardsExist("cloudstack_port_forward.foo"),
resource.TestCheckResourceAttr(
- "cloudstack_port_forward.foo", "ip_address", CLOUDSTACK_PUBLIC_IPADDRESS),
+ "cloudstack_port_forward.foo", "ip_address_id", CLOUDSTACK_PUBLIC_IPADDRESS),
resource.TestCheckResourceAttr(
"cloudstack_port_forward.foo", "forward.#", "2"),
- resource.TestCheckResourceAttr(
- "cloudstack_port_forward.foo", "forward.260687715.protocol", "tcp"),
- resource.TestCheckResourceAttr(
- "cloudstack_port_forward.foo", "forward.260687715.private_port", "80"),
- resource.TestCheckResourceAttr(
- "cloudstack_port_forward.foo", "forward.260687715.public_port", "8080"),
- resource.TestCheckResourceAttr(
- "cloudstack_port_forward.foo", "forward.260687715.virtual_machine", "terraform-test"),
- resource.TestCheckResourceAttr(
- "cloudstack_port_forward.foo", "forward.952396423.protocol", "tcp"),
- resource.TestCheckResourceAttr(
- "cloudstack_port_forward.foo", "forward.952396423.private_port", "443"),
- resource.TestCheckResourceAttr(
- "cloudstack_port_forward.foo", "forward.952396423.public_port", "8443"),
- resource.TestCheckResourceAttr(
- "cloudstack_port_forward.foo", "forward.952396423.virtual_machine", "terraform-test"),
),
},
},
@@ -161,13 +131,13 @@ resource "cloudstack_instance" "foobar" {
}
resource "cloudstack_port_forward" "foo" {
- ip_address = "%s"
+ ip_address_id = "%s"
forward {
protocol = "tcp"
private_port = 443
public_port = 8443
- virtual_machine = "${cloudstack_instance.foobar.name}"
+ virtual_machine_id = "${cloudstack_instance.foobar.id}"
}
}`,
CLOUDSTACK_SERVICE_OFFERING_1,
@@ -187,20 +157,20 @@ resource "cloudstack_instance" "foobar" {
}
resource "cloudstack_port_forward" "foo" {
- ip_address = "%s"
+ ip_address_id = "%s"
forward {
protocol = "tcp"
private_port = 443
public_port = 8443
- virtual_machine = "${cloudstack_instance.foobar.name}"
+ virtual_machine_id = "${cloudstack_instance.foobar.id}"
}
forward {
protocol = "tcp"
private_port = 80
public_port = 8080
- virtual_machine = "${cloudstack_instance.foobar.name}"
+ virtual_machine_id = "${cloudstack_instance.foobar.id}"
}
}`,
CLOUDSTACK_SERVICE_OFFERING_1,
diff --git a/builtin/providers/cloudstack/resource_cloudstack_secondary_ipaddress.go b/builtin/providers/cloudstack/resource_cloudstack_secondary_ipaddress.go
index cac479791e9e..a9940fd4c14a 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_secondary_ipaddress.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_secondary_ipaddress.go
@@ -1,6 +1,7 @@
package cloudstack
import (
+ "errors"
"fmt"
"log"
"strings"
@@ -26,23 +27,37 @@ func resourceCloudStackSecondaryIPAddress() *schema.Resource {
"ipaddress": &schema.Schema{
Type: schema.TypeString,
Optional: true,
- Computed: true,
ForceNew: true,
Deprecated: "Please use the `ip_address` field instead",
},
- "nicid": &schema.Schema{
+ "nic_id": &schema.Schema{
Type: schema.TypeString,
Optional: true,
Computed: true,
ForceNew: true,
},
- "virtual_machine": &schema.Schema{
+ "nicid": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Deprecated: "Please use the `nic_id` field instead",
+ },
+
+ "virtual_machine_id": &schema.Schema{
Type: schema.TypeString,
- Required: true,
+ Optional: true,
+ Computed: true,
ForceNew: true,
},
+
+ "virtual_machine": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Deprecated: "Please use the `virtual_machine_id` field instead",
+ },
},
}
}
@@ -50,10 +65,22 @@ func resourceCloudStackSecondaryIPAddress() *schema.Resource {
func resourceCloudStackSecondaryIPAddressCreate(d *schema.ResourceData, meta interface{}) error {
cs := meta.(*cloudstack.CloudStackClient)
- nicid := d.Get("nicid").(string)
- if nicid == "" {
+ nicid, ok := d.GetOk("nic_id")
+ if !ok {
+ nicid, ok = d.GetOk("nicid")
+ }
+ if !ok {
+ virtualmachine, ok := d.GetOk("virtual_machine_id")
+ if !ok {
+ virtualmachine, ok = d.GetOk("virtual_machine")
+ }
+ if !ok {
+ return errors.New(
+ "Either `virtual_machine_id` or [deprecated] `virtual_machine` must be provided.")
+ }
+
// Retrieve the virtual_machine ID
- virtualmachineid, e := retrieveID(cs, "virtual_machine", d.Get("virtual_machine").(string))
+ virtualmachineid, e := retrieveID(cs, "virtual_machine", virtualmachine.(string))
if e != nil {
return e.Error()
}
@@ -62,7 +89,7 @@ func resourceCloudStackSecondaryIPAddressCreate(d *schema.ResourceData, meta int
vm, count, err := cs.VirtualMachine.GetVirtualMachineByID(virtualmachineid)
if err != nil {
if count == 0 {
- log.Printf("[DEBUG] Instance %s does no longer exist", d.Get("virtual_machine").(string))
+ log.Printf("[DEBUG] Virtual Machine %s does no longer exist", virtualmachineid)
d.SetId("")
return nil
}
@@ -73,7 +100,7 @@ func resourceCloudStackSecondaryIPAddressCreate(d *schema.ResourceData, meta int
}
// Create a new parameter struct
- p := cs.Nic.NewAddIpToNicParams(nicid)
+ p := cs.Nic.NewAddIpToNicParams(nicid.(string))
// If there is a ipaddres supplied, add it to the parameter struct
ipaddress, ok := d.GetOk("ip_address")
@@ -97,8 +124,17 @@ func resourceCloudStackSecondaryIPAddressCreate(d *schema.ResourceData, meta int
func resourceCloudStackSecondaryIPAddressRead(d *schema.ResourceData, meta interface{}) error {
cs := meta.(*cloudstack.CloudStackClient)
+ virtualmachine, ok := d.GetOk("virtual_machine_id")
+ if !ok {
+ virtualmachine, ok = d.GetOk("virtual_machine")
+ }
+ if !ok {
+ return errors.New(
+ "Either `virtual_machine_id` or [deprecated] `virtual_machine` must be provided.")
+ }
+
// Retrieve the virtual_machine ID
- virtualmachineid, e := retrieveID(cs, "virtual_machine", d.Get("virtual_machine").(string))
+ virtualmachineid, e := retrieveID(cs, "virtual_machine", virtualmachine.(string))
if e != nil {
return e.Error()
}
@@ -107,20 +143,23 @@ func resourceCloudStackSecondaryIPAddressRead(d *schema.ResourceData, meta inter
vm, count, err := cs.VirtualMachine.GetVirtualMachineByID(virtualmachineid)
if err != nil {
if count == 0 {
- log.Printf("[DEBUG] Instance %s does no longer exist", d.Get("virtual_machine").(string))
+ log.Printf("[DEBUG] Virtual Machine %s does no longer exist", virtualmachineid)
d.SetId("")
return nil
}
return err
}
- nicid := d.Get("nicid").(string)
- if nicid == "" {
+ nicid, ok := d.GetOk("nic_id")
+ if !ok {
+ nicid, ok = d.GetOk("nicid")
+ }
+ if !ok {
nicid = vm.Nic[0].Id
}
p := cs.Nic.NewListNicsParams(virtualmachineid)
- p.SetNicid(nicid)
+ p.SetNicid(nicid.(string))
l, err := cs.Nic.ListNics(p)
if err != nil {
@@ -140,7 +179,8 @@ func resourceCloudStackSecondaryIPAddressRead(d *schema.ResourceData, meta inter
for _, ip := range l.Nics[0].Secondaryip {
if ip.Id == d.Id() {
d.Set("ip_address", ip.Ipaddress)
- d.Set("nicid", l.Nics[0].Id)
+ d.Set("nic_id", l.Nics[0].Id)
+ d.Set("virtual_machine_id", l.Nics[0].Virtualmachineid)
return nil
}
}
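
A minimal sketch, assuming a placeholder instance, of the renamed `virtual_machine_id` argument; `nic_id` can be omitted, in which case the instance's first NIC is used:

resource "cloudstack_secondary_ipaddress" "example" {
  virtual_machine_id = "${cloudstack_instance.example.id}"
}
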
diff --git a/builtin/providers/cloudstack/resource_cloudstack_secondary_ipaddress_test.go b/builtin/providers/cloudstack/resource_cloudstack_secondary_ipaddress_test.go
index 8b9614831ea3..879ebd4a1e3e 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_secondary_ipaddress_test.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_secondary_ipaddress_test.go
@@ -64,9 +64,13 @@ func testAccCheckCloudStackSecondaryIPAddressExists(
cs := testAccProvider.Meta().(*cloudstack.CloudStackClient)
+ virtualmachine, ok := rs.Primary.Attributes["virtual_machine_id"]
+ if !ok {
+ virtualmachine, ok = rs.Primary.Attributes["virtual_machine"]
+ }
+
// Retrieve the virtual_machine ID
- virtualmachineid, e := retrieveID(
- cs, "virtual_machine", rs.Primary.Attributes["virtual_machine"])
+ virtualmachineid, e := retrieveID(cs, "virtual_machine", virtualmachine)
if e != nil {
return e.Error()
}
@@ -80,8 +84,11 @@ func testAccCheckCloudStackSecondaryIPAddressExists(
return err
}
- nicid := rs.Primary.Attributes["nicid"]
- if nicid == "" {
+ nicid, ok := rs.Primary.Attributes["nic_id"]
+ if !ok {
+ nicid, ok = rs.Primary.Attributes["nicid"]
+ }
+ if !ok {
nicid = vm.Nic[0].Id
}
@@ -136,9 +143,13 @@ func testAccCheckCloudStackSecondaryIPAddressDestroy(s *terraform.State) error {
return fmt.Errorf("No IP address ID is set")
}
+ virtualmachine, ok := rs.Primary.Attributes["virtual_machine_id"]
+ if !ok {
+ virtualmachine, ok = rs.Primary.Attributes["virtual_machine"]
+ }
+
// Retrieve the virtual_machine ID
- virtualmachineid, e := retrieveID(
- cs, "virtual_machine", rs.Primary.Attributes["virtual_machine"])
+ virtualmachineid, e := retrieveID(cs, "virtual_machine", virtualmachine)
if e != nil {
return e.Error()
}
@@ -152,8 +163,11 @@ func testAccCheckCloudStackSecondaryIPAddressDestroy(s *terraform.State) error {
return err
}
- nicid := rs.Primary.Attributes["nicid"]
- if nicid == "" {
+ nicid, ok := rs.Primary.Attributes["nic_id"]
+ if !ok {
+ nicid, ok = rs.Primary.Attributes["nicid"]
+ }
+ if !ok {
nicid = vm.Nic[0].Id
}
@@ -189,14 +203,14 @@ var testAccCloudStackSecondaryIPAddress_basic = fmt.Sprintf(`
resource "cloudstack_instance" "foobar" {
name = "terraform-test"
service_offering= "%s"
- network = "%s"
+ network_id = "%s"
template = "%s"
zone = "%s"
expunge = true
}
resource "cloudstack_secondary_ipaddress" "foo" {
- virtual_machine = "${cloudstack_instance.foobar.id}"
+ virtual_machine_id = "${cloudstack_instance.foobar.id}"
}
`,
CLOUDSTACK_SERVICE_OFFERING_1,
@@ -208,7 +222,7 @@ var testAccCloudStackSecondaryIPAddress_fixedIP = fmt.Sprintf(`
resource "cloudstack_instance" "foobar" {
name = "terraform-test"
service_offering= "%s"
- network = "%s"
+ network_id = "%s"
template = "%s"
zone = "%s"
expunge = true
@@ -216,7 +230,7 @@ resource "cloudstack_instance" "foobar" {
resource "cloudstack_secondary_ipaddress" "foo" {
ip_address = "%s"
- virtual_machine = "${cloudstack_instance.foobar.id}"
+ virtual_machine_id = "${cloudstack_instance.foobar.id}"
}`,
CLOUDSTACK_SERVICE_OFFERING_1,
CLOUDSTACK_NETWORK_1,
diff --git a/builtin/providers/cloudstack/resource_cloudstack_ssh_keypair.go b/builtin/providers/cloudstack/resource_cloudstack_ssh_keypair.go
index a418c4cf65ca..508077c4121d 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_ssh_keypair.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_ssh_keypair.go
@@ -63,6 +63,7 @@ func resourceCloudStackSSHKeyPairCreate(d *schema.ResourceData, meta interface{}
p := cs.SSH.NewRegisterSSHKeyPairParams(name, string(key))
+ // If there is a project supplied, we retrieve and set the project id
if err := setProjectid(p, cs, d); err != nil {
return err
}
@@ -75,6 +76,7 @@ func resourceCloudStackSSHKeyPairCreate(d *schema.ResourceData, meta interface{}
// No key supplied, must create one and return the private key
p := cs.SSH.NewCreateSSHKeyPairParams(name)
+ // If there is a project supplied, we retrieve and set the project id
if err := setProjectid(p, cs, d); err != nil {
return err
}
@@ -100,6 +102,7 @@ func resourceCloudStackSSHKeyPairRead(d *schema.ResourceData, meta interface{})
p := cs.SSH.NewListSSHKeyPairsParams()
p.SetName(d.Id())
+ // If there is a project supplied, we retrieve and set the project id
if err := setProjectid(p, cs, d); err != nil {
return err
}
@@ -127,6 +130,7 @@ func resourceCloudStackSSHKeyPairDelete(d *schema.ResourceData, meta interface{}
// Create a new parameter struct
p := cs.SSH.NewDeleteSSHKeyPairParams(d.Id())
+ // If there is a project supplied, we retrieve and set the project id
if err := setProjectid(p, cs, d); err != nil {
return err
}
diff --git a/builtin/providers/cloudstack/resource_cloudstack_ssh_keypair_test.go b/builtin/providers/cloudstack/resource_cloudstack_ssh_keypair_test.go
index ba70518d5b36..e367d1a73a39 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_ssh_keypair_test.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_ssh_keypair_test.go
@@ -146,12 +146,11 @@ func testAccCheckCloudStackSSHKeyPairDestroy(s *terraform.State) error {
if err != nil {
return err
}
- if list.Count != 1 {
- return fmt.Errorf("Found more Key pair %s still exists", rs.Primary.ID)
- }
- if list.SSHKeyPairs[0].Name == rs.Primary.ID {
- return fmt.Errorf("Key pair %s still exists", rs.Primary.ID)
+ for _, keyPair := range list.SSHKeyPairs {
+ if keyPair.Name == rs.Primary.ID {
+ return fmt.Errorf("Key pair %s still exists", rs.Primary.ID)
+ }
}
}
diff --git a/builtin/providers/cloudstack/resource_cloudstack_static_nat.go b/builtin/providers/cloudstack/resource_cloudstack_static_nat.go
index 0f7d7a439bd8..b96991eef0f2 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_static_nat.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_static_nat.go
@@ -17,20 +17,20 @@ func resourceCloudStackStaticNAT() *schema.Resource {
Delete: resourceCloudStackStaticNATDelete,
Schema: map[string]*schema.Schema{
- "ipaddress": &schema.Schema{
+ "ip_address_id": &schema.Schema{
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
- "network": &schema.Schema{
+ "network_id": &schema.Schema{
Type: schema.TypeString,
Optional: true,
Computed: true,
ForceNew: true,
},
- "virtual_machine": &schema.Schema{
+ "virtual_machine_id": &schema.Schema{
Type: schema.TypeString,
Required: true,
ForceNew: true,
@@ -49,29 +49,14 @@ func resourceCloudStackStaticNAT() *schema.Resource {
func resourceCloudStackStaticNATCreate(d *schema.ResourceData, meta interface{}) error {
cs := meta.(*cloudstack.CloudStackClient)
- // Retrieve the ipaddress ID
- ipaddressid, e := retrieveID(cs, "ipaddress", d.Get("ipaddress").(string))
- if e != nil {
- return e.Error()
- }
-
- // Retrieve the virtual_machine ID
- virtualmachineid, e := retrieveID(cs, "virtual_machine", d.Get("virtual_machine").(string))
- if e != nil {
- return e.Error()
- }
+ ipaddressid := d.Get("ip_address_id").(string)
+ virtualmachineid := d.Get("virtual_machine_id").(string)
// Create a new parameter struct
p := cs.NAT.NewEnableStaticNatParams(ipaddressid, virtualmachineid)
- if network, ok := d.GetOk("network"); ok {
- // Retrieve the network ID
- networkid, e := retrieveID(cs, "network", network.(string))
- if e != nil {
- return e.Error()
- }
-
- p.SetNetworkid(networkid)
+ if networkid, ok := d.GetOk("network_id"); ok {
+ p.SetNetworkid(networkid.(string))
}
if vmGuestIP, ok := d.GetOk("vm_guest_ip"); ok {
@@ -126,8 +111,8 @@ func resourceCloudStackStaticNATRead(d *schema.ResourceData, meta interface{}) e
return nil
}
- setValueOrID(d, "network", ip.Associatednetworkname, ip.Associatednetworkid)
- setValueOrID(d, "virtual_machine", ip.Virtualmachinename, ip.Virtualmachineid)
+ d.Set("network_id", ip.Associatednetworkid)
+ d.Set("virtual_machine_id", ip.Virtualmachineid)
d.Set("vm_guest_ip", ip.Vmipaddress)
return nil
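
A usage sketch of the now ID-only static NAT arguments (all referenced resources are placeholders):

resource "cloudstack_static_nat" "example" {
  ip_address_id      = "${cloudstack_ipaddress.example.id}"
  network_id         = "${cloudstack_ipaddress.example.network_id}"
  virtual_machine_id = "${cloudstack_instance.example.id}"
}
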
diff --git a/builtin/providers/cloudstack/resource_cloudstack_static_nat_test.go b/builtin/providers/cloudstack/resource_cloudstack_static_nat_test.go
index f6b86364f46e..be0bd6560b35 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_static_nat_test.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_static_nat_test.go
@@ -66,12 +66,8 @@ func testAccCheckCloudStackStaticNATAttributes(
ipaddr *cloudstack.PublicIpAddress) resource.TestCheckFunc {
return func(s *terraform.State) error {
- if ipaddr.Associatednetworkname != CLOUDSTACK_NETWORK_1 {
- return fmt.Errorf("Bad network: %s", ipaddr.Associatednetworkname)
- }
-
- if ipaddr.Virtualmachinename != "terraform-test" {
- return fmt.Errorf("Bad virtual_machine: %s", ipaddr.Virtualmachinename)
+ if ipaddr.Associatednetworkid != CLOUDSTACK_NETWORK_1 {
+ return fmt.Errorf("Bad network ID: %s", ipaddr.Associatednetworkid)
}
return nil
@@ -104,7 +100,7 @@ resource "cloudstack_instance" "foobar" {
name = "terraform-test"
display_name = "terraform-test"
service_offering= "%s"
- network = "%s"
+ network_id = "%s"
template = "%s"
zone = "%s"
user_data = "foobar\nfoo\nbar"
@@ -112,17 +108,16 @@ resource "cloudstack_instance" "foobar" {
}
resource "cloudstack_ipaddress" "foo" {
- network = "%s"
+ network_id = "${cloudstack_instance.foobar.network_id}"
}
resource "cloudstack_static_nat" "foo" {
- ipaddress = "${cloudstack_ipaddress.foo.id}"
- network = "${cloudstack_ipaddress.foo.network}"
- virtual_machine = "${cloudstack_instance.foobar.id}"
+ ip_address_id = "${cloudstack_ipaddress.foo.id}"
+ network_id = "${cloudstack_ipaddress.foo.network_id}"
+ virtual_machine_id = "${cloudstack_instance.foobar.id}"
}`,
CLOUDSTACK_SERVICE_OFFERING_1,
CLOUDSTACK_NETWORK_1,
CLOUDSTACK_TEMPLATE,
CLOUDSTACK_ZONE,
- CLOUDSTACK_NETWORK_1,
)
diff --git a/builtin/providers/cloudstack/resource_cloudstack_template.go b/builtin/providers/cloudstack/resource_cloudstack_template.go
index 04aaca22ede0..b3b3a45185df 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_template.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_template.go
@@ -168,14 +168,8 @@ func resourceCloudStackTemplateCreate(d *schema.ResourceData, meta interface{})
}
// If there is a project supplied, we retrieve and set the project id
- if project, ok := d.GetOk("project"); ok {
- // Retrieve the project ID
- projectid, e := retrieveID(cs, "project", project.(string))
- if e != nil {
- return e.Error()
- }
- // Set the default project ID
- p.SetProjectid(projectid)
+ if err := setProjectid(p, cs, d); err != nil {
+ return err
}
// Create the new template
@@ -213,7 +207,11 @@ func resourceCloudStackTemplateRead(d *schema.ResourceData, meta interface{}) er
cs := meta.(*cloudstack.CloudStackClient)
// Get the template details
- t, count, err := cs.Template.GetTemplateByID(d.Id(), "executable")
+ t, count, err := cs.Template.GetTemplateByID(
+ d.Id(),
+ "executable",
+ cloudstack.WithProject(d.Get("project").(string)),
+ )
if err != nil {
if count == 0 {
log.Printf(
diff --git a/builtin/providers/cloudstack/resource_cloudstack_vpc.go b/builtin/providers/cloudstack/resource_cloudstack_vpc.go
index d99a4042a523..16cf2ad11d28 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_vpc.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_vpc.go
@@ -106,14 +106,8 @@ func resourceCloudStackVPCCreate(d *schema.ResourceData, meta interface{}) error
}
// If there is a project supplied, we retrieve and set the project id
- if project, ok := d.GetOk("project"); ok {
- // Retrieve the project ID
- projectid, e := retrieveID(cs, "project", project.(string))
- if e != nil {
- return e.Error()
- }
- // Set the default project ID
- p.SetProjectid(projectid)
+ if err := setProjectid(p, cs, d); err != nil {
+ return err
}
// Create the new VPC
@@ -131,7 +125,10 @@ func resourceCloudStackVPCRead(d *schema.ResourceData, meta interface{}) error {
cs := meta.(*cloudstack.CloudStackClient)
// Get the VPC details
- v, count, err := cs.VPC.GetVPCByID(d.Id())
+ v, count, err := cs.VPC.GetVPCByID(
+ d.Id(),
+ cloudstack.WithProject(d.Get("project").(string)),
+ )
if err != nil {
if count == 0 {
log.Printf(
@@ -163,8 +160,9 @@ func resourceCloudStackVPCRead(d *schema.ResourceData, meta interface{}) error {
p.SetVpcid(d.Id())
p.SetIssourcenat(true)
- if _, ok := d.GetOk("project"); ok {
- p.SetProjectid(v.Projectid)
+ // If there is a project supplied, we retrieve and set the project id
+ if err := setProjectid(p, cs, d); err != nil {
+ return err
}
// Get the source NAT IP assigned to the VPC
diff --git a/builtin/providers/cloudstack/resource_cloudstack_vpn_connection.go b/builtin/providers/cloudstack/resource_cloudstack_vpn_connection.go
index 322f07a2c9d0..98fb27b9da09 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_vpn_connection.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_vpn_connection.go
@@ -1,6 +1,7 @@
package cloudstack
import (
+ "errors"
"fmt"
"log"
"strings"
@@ -16,17 +17,33 @@ func resourceCloudStackVPNConnection() *schema.Resource {
Delete: resourceCloudStackVPNConnectionDelete,
Schema: map[string]*schema.Schema{
- "customergatewayid": &schema.Schema{
+ "customer_gateway_id": &schema.Schema{
Type: schema.TypeString,
- Required: true,
+ Optional: true,
+ Computed: true,
ForceNew: true,
},
- "vpngatewayid": &schema.Schema{
+ "customergatewayid": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Deprecated: "Please use the `customer_gateway_id` field instead",
+ },
+
+ "vpn_gateway_id": &schema.Schema{
Type: schema.TypeString,
- Required: true,
+ Optional: true,
+ Computed: true,
ForceNew: true,
},
+
+ "vpngatewayid": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Deprecated: "Please use the `vpn_gateway_id` field instead",
+ },
},
}
}
@@ -34,10 +51,27 @@ func resourceCloudStackVPNConnection() *schema.Resource {
func resourceCloudStackVPNConnectionCreate(d *schema.ResourceData, meta interface{}) error {
cs := meta.(*cloudstack.CloudStackClient)
+ customergatewayid, ok := d.GetOk("customer_gateway_id")
+ if !ok {
+ customergatewayid, ok = d.GetOk("customergatewayid")
+ }
+ if !ok {
+ return errors.New(
+ "Either `customer_gateway_id` or [deprecated] `customergatewayid` must be provided.")
+ }
+
+ vpngatewayid, ok := d.GetOk("vpn_gateway_id")
+ if !ok {
+ vpngatewayid, ok = d.GetOk("vpngatewayid")
+ }
+ if !ok {
+ return errors.New("Either `vpn_gateway_id` or [deprecated] `vpngatewayid` must be provided.")
+ }
+
// Create a new parameter struct
p := cs.VPN.NewCreateVpnConnectionParams(
- d.Get("customergatewayid").(string),
- d.Get("vpngatewayid").(string),
+ customergatewayid.(string),
+ vpngatewayid.(string),
)
// Create the new VPN Connection
@@ -66,8 +100,8 @@ func resourceCloudStackVPNConnectionRead(d *schema.ResourceData, meta interface{
return err
}
- d.Set("customergatewayid", v.S2scustomergatewayid)
- d.Set("vpngatewayid", v.S2svpngatewayid)
+ d.Set("customer_gateway_id", v.S2scustomergatewayid)
+ d.Set("vpn_gateway_id", v.S2svpngatewayid)
return nil
}
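
A minimal sketch (placeholder gateways) of the renamed gateway ID arguments:

resource "cloudstack_vpn_connection" "example" {
  customer_gateway_id = "${cloudstack_vpn_customer_gateway.example.id}"
  vpn_gateway_id      = "${cloudstack_vpn_gateway.example.id}"
}
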
diff --git a/builtin/providers/cloudstack/resource_cloudstack_vpn_connection_test.go b/builtin/providers/cloudstack/resource_cloudstack_vpn_connection_test.go
index 7d09eea9bb5e..930866853901 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_vpn_connection_test.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_vpn_connection_test.go
@@ -96,11 +96,11 @@ resource "cloudstack_vpc" "bar" {
}
resource "cloudstack_vpn_gateway" "foo" {
- vpc = "${cloudstack_vpc.foo.name}"
+ vpc_id = "${cloudstack_vpc.foo.id}"
}
resource "cloudstack_vpn_gateway" "bar" {
- vpc = "${cloudstack_vpc.bar.name}"
+ vpc_id = "${cloudstack_vpc.bar.id}"
}
resource "cloudstack_vpn_customer_gateway" "foo" {
@@ -122,13 +122,13 @@ resource "cloudstack_vpn_customer_gateway" "bar" {
}
resource "cloudstack_vpn_connection" "foo-bar" {
- customergatewayid = "${cloudstack_vpn_customer_gateway.foo.id}"
- vpngatewayid = "${cloudstack_vpn_gateway.bar.id}"
+ customer_gateway_id = "${cloudstack_vpn_customer_gateway.foo.id}"
+ vpn_gateway_id = "${cloudstack_vpn_gateway.bar.id}"
}
resource "cloudstack_vpn_connection" "bar-foo" {
- customergatewayid = "${cloudstack_vpn_customer_gateway.bar.id}"
- vpngatewayid = "${cloudstack_vpn_gateway.foo.id}"
+ customer_gateway_id = "${cloudstack_vpn_customer_gateway.bar.id}"
+ vpn_gateway_id = "${cloudstack_vpn_gateway.foo.id}"
}`,
CLOUDSTACK_VPC_CIDR_1,
CLOUDSTACK_VPC_OFFERING,
diff --git a/builtin/providers/cloudstack/resource_cloudstack_vpn_customer_gateway_test.go b/builtin/providers/cloudstack/resource_cloudstack_vpn_customer_gateway_test.go
index b24eb356721c..acf181ace677 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_vpn_customer_gateway_test.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_vpn_customer_gateway_test.go
@@ -188,11 +188,11 @@ resource "cloudstack_vpc" "bar" {
}
resource "cloudstack_vpn_gateway" "foo" {
- vpc = "${cloudstack_vpc.foo.name}"
+ vpc_id = "${cloudstack_vpc.foo.id}"
}
resource "cloudstack_vpn_gateway" "bar" {
- vpc = "${cloudstack_vpc.bar.name}"
+ vpc_id = "${cloudstack_vpc.bar.id}"
}
resource "cloudstack_vpn_customer_gateway" "foo" {
@@ -235,11 +235,11 @@ resource "cloudstack_vpc" "bar" {
}
resource "cloudstack_vpn_gateway" "foo" {
- vpc = "${cloudstack_vpc.foo.name}"
+ vpc_id = "${cloudstack_vpc.foo.id}"
}
resource "cloudstack_vpn_gateway" "bar" {
- vpc = "${cloudstack_vpc.bar.name}"
+ vpc_id = "${cloudstack_vpc.bar.id}"
}
resource "cloudstack_vpn_customer_gateway" "foo" {
diff --git a/builtin/providers/cloudstack/resource_cloudstack_vpn_gateway.go b/builtin/providers/cloudstack/resource_cloudstack_vpn_gateway.go
index 17533a3a6250..b6a926dc128e 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_vpn_gateway.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_vpn_gateway.go
@@ -1,6 +1,7 @@
package cloudstack
import (
+ "errors"
"fmt"
"log"
"strings"
@@ -16,12 +17,20 @@ func resourceCloudStackVPNGateway() *schema.Resource {
Delete: resourceCloudStackVPNGatewayDelete,
Schema: map[string]*schema.Schema{
- "vpc": &schema.Schema{
+ "vpc_id": &schema.Schema{
Type: schema.TypeString,
- Required: true,
+ Optional: true,
+ Computed: true,
ForceNew: true,
},
+ "vpc": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Deprecated: "Please use the `vpc_id` field instead",
+ },
+
"public_ip": &schema.Schema{
Type: schema.TypeString,
Computed: true,
@@ -33,8 +42,16 @@ func resourceCloudStackVPNGateway() *schema.Resource {
func resourceCloudStackVPNGatewayCreate(d *schema.ResourceData, meta interface{}) error {
cs := meta.(*cloudstack.CloudStackClient)
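+ // Prefer the new `vpc_id` field, falling back to the deprecated `vpc`
+ // attribute so existing configurations keep working.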
+ vpc, ok := d.GetOk("vpc_id")
+ if !ok {
+ vpc, ok = d.GetOk("vpc")
+ }
+ if !ok {
+ return errors.New("Either `vpc_id` or [deprecated] `vpc` must be provided.")
+ }
+
// Retrieve the VPC ID
- vpcid, e := retrieveID(cs, "vpc", d.Get("vpc").(string))
+ vpcid, e := retrieveID(cs, "vpc", vpc.(string))
if e != nil {
return e.Error()
}
@@ -45,7 +62,7 @@ func resourceCloudStackVPNGatewayCreate(d *schema.ResourceData, meta interface{}
// Create the new VPN Gateway
v, err := cs.VPN.CreateVpnGateway(p)
if err != nil {
- return fmt.Errorf("Error creating VPN Gateway for VPC %s: %s", d.Get("vpc").(string), err)
+ return fmt.Errorf("Error creating VPN Gateway for VPC ID %s: %s", vpcid, err)
}
d.SetId(v.Id)
@@ -61,7 +78,7 @@ func resourceCloudStackVPNGatewayRead(d *schema.ResourceData, meta interface{})
if err != nil {
if count == 0 {
log.Printf(
- "[DEBUG] VPN Gateway for VPC %s does no longer exist", d.Get("vpc").(string))
+ "[DEBUG] VPN Gateway for VPC ID %s does no longer exist", d.Get("vpc_id").(string))
d.SetId("")
return nil
}
@@ -69,8 +86,7 @@ func resourceCloudStackVPNGatewayRead(d *schema.ResourceData, meta interface{})
return err
}
- setValueOrID(d, "vpc", d.Get("vpc").(string), v.Vpcid)
-
+ d.Set("vpc_id", v.Vpcid)
d.Set("public_ip", v.Publicip)
return nil
@@ -92,7 +108,7 @@ func resourceCloudStackVPNGatewayDelete(d *schema.ResourceData, meta interface{}
return nil
}
- return fmt.Errorf("Error deleting VPN Gateway for VPC %s: %s", d.Get("vpc").(string), err)
+ return fmt.Errorf("Error deleting VPN Gateway for VPC %s: %s", d.Get("vpc_id").(string), err)
}
return nil
diff --git a/builtin/providers/cloudstack/resource_cloudstack_vpn_gateway_test.go b/builtin/providers/cloudstack/resource_cloudstack_vpn_gateway_test.go
index 61fc151601b9..862daefe97aa 100644
--- a/builtin/providers/cloudstack/resource_cloudstack_vpn_gateway_test.go
+++ b/builtin/providers/cloudstack/resource_cloudstack_vpn_gateway_test.go
@@ -22,8 +22,6 @@ func TestAccCloudStackVPNGateway_basic(t *testing.T) {
Check: resource.ComposeTestCheckFunc(
testAccCheckCloudStackVPNGatewayExists(
"cloudstack_vpn_gateway.foo", &vpnGateway),
- resource.TestCheckResourceAttr(
- "cloudstack_vpn_gateway.foo", "vpc", "terraform-vpc"),
),
},
},
@@ -90,7 +88,7 @@ resource "cloudstack_vpc" "foo" {
}
resource "cloudstack_vpn_gateway" "foo" {
- vpc = "${cloudstack_vpc.foo.name}"
+ vpc_id = "${cloudstack_vpc.foo.id}"
}`,
CLOUDSTACK_VPC_CIDR_1,
CLOUDSTACK_VPC_OFFERING,
diff --git a/builtin/providers/cloudstack/resources.go b/builtin/providers/cloudstack/resources.go
index d404e38c6be9..a6fbb932539a 100644
--- a/builtin/providers/cloudstack/resources.go
+++ b/builtin/providers/cloudstack/resources.go
@@ -11,9 +11,6 @@ import (
"github.com/xanzy/go-cloudstack/cloudstack"
)
-// UnlimitedResourceID is a "special" ID to define an unlimited resource
-const UnlimitedResourceID = "-1"
-
// Define a regexp for parsing the port
var splitPorts = regexp.MustCompile(`^(\d+)(?:-(\d+))?$`)
@@ -28,11 +25,11 @@ func (e *retrieveError) Error() error {
}
func setValueOrID(d *schema.ResourceData, key string, value string, id string) {
- if isID(d.Get(key).(string)) {
+ if cloudstack.IsID(d.Get(key).(string)) {
// If the given id is an empty string, check if the configured value matches
// the UnlimitedResourceID in which case we set id to UnlimitedResourceID
- if id == "" && d.Get(key).(string) == UnlimitedResourceID {
- id = UnlimitedResourceID
+ if id == "" && d.Get(key).(string) == cloudstack.UnlimitedResourceID {
+ id = cloudstack.UnlimitedResourceID
}
d.Set(key, id)
@@ -41,9 +38,13 @@ func setValueOrID(d *schema.ResourceData, key string, value string, id string) {
}
}
-func retrieveID(cs *cloudstack.CloudStackClient, name, value string) (id string, e *retrieveError) {
+func retrieveID(
+ cs *cloudstack.CloudStackClient,
+ name string,
+ value string,
+ opts ...cloudstack.OptionFunc) (id string, e *retrieveError) {
// If the supplied value isn't a ID, try to retrieve the ID ourselves
- if isID(value) {
+ if cloudstack.IsID(value) {
return value, nil
}
@@ -54,7 +55,7 @@ func retrieveID(cs *cloudstack.CloudStackClient, name, value string) (id string,
case "disk_offering":
id, err = cs.DiskOffering.GetDiskOfferingID(value)
case "virtual_machine":
- id, err = cs.VirtualMachine.GetVirtualMachineID(value)
+ id, err = cs.VirtualMachine.GetVirtualMachineID(value, opts...)
case "service_offering":
id, err = cs.ServiceOffering.GetServiceOfferingID(value)
case "network_offering":
@@ -64,14 +65,20 @@ func retrieveID(cs *cloudstack.CloudStackClient, name, value string) (id string,
case "vpc_offering":
id, err = cs.VPC.GetVPCOfferingID(value)
case "vpc":
- id, err = cs.VPC.GetVPCID(value)
+ id, err = cs.VPC.GetVPCID(value, opts...)
case "network":
- id, err = cs.Network.GetNetworkID(value)
+ id, err = cs.Network.GetNetworkID(value, opts...)
case "zone":
id, err = cs.Zone.GetZoneID(value)
case "ip_address":
p := cs.Address.NewListPublicIpAddressesParams()
p.SetIpaddress(value)
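+ // Apply any supplied option functions (e.g. to scope the lookup to a project)
+ // to the list parameters before querying.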
+ for _, fn := range opts {
+ if e := fn(cs, p); e != nil {
+ err = e
+ break
+ }
+ }
l, e := cs.Address.ListPublicIpAddresses(p)
if e != nil {
err = e
@@ -109,7 +116,7 @@ func retrieveID(cs *cloudstack.CloudStackClient, name, value string) (id string,
func retrieveTemplateID(cs *cloudstack.CloudStackClient, zoneid, value string) (id string, e *retrieveError) {
// If the supplied value isn't a ID, try to retrieve the ID ourselves
- if isID(value) {
+ if cloudstack.IsID(value) {
return value, nil
}
@@ -123,12 +130,6 @@ func retrieveTemplateID(cs *cloudstack.CloudStackClient, zoneid, value string) (
return id, nil
}
-// ID can be either a UUID or a UnlimitedResourceID
-func isID(id string) bool {
- re := regexp.MustCompile(`^([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}|-1)$`)
- return re.MatchString(id)
-}
-
// RetryFunc is the function retried n times
type RetryFunc func() (interface{}, error)
@@ -183,12 +184,8 @@ func setCidrList(rule map[string]interface{}, cidrList string) {
rule["cidr_list"] = cidrs
}
-type projectidSetter interface {
- SetProjectid(string)
-}
-
// If there is a project supplied, we retrieve and set the project id
-func setProjectid(p projectidSetter, cs *cloudstack.CloudStackClient, d *schema.ResourceData) error {
+func setProjectid(p cloudstack.ProjectIDSetter, cs *cloudstack.CloudStackClient, d *schema.ResourceData) error {
if project, ok := d.GetOk("project"); ok {
projectid, e := retrieveID(cs, "project", project.(string))
if e != nil {
diff --git a/builtin/providers/cobbler/acceptance_env/deploy.sh b/builtin/providers/cobbler/acceptance_env/deploy.sh
new file mode 100644
index 000000000000..59563d110f2f
--- /dev/null
+++ b/builtin/providers/cobbler/acceptance_env/deploy.sh
@@ -0,0 +1,94 @@
+#!/bin/bash
+
+set -e
+
+# This script assumes Ubuntu 14.04 is being used.
+# It will create a standard Cobbler environment that can be used for acceptance testing.
+
+# With this environment spun up, the config should be:
+# COBBLER_URL=http://127.0.0.1:25151
+# COBBLER_USERNAME=cobbler
+# COBBLER_PASSWORD=cobbler
+
+sudo apt-get update
+sudo apt-get install -y build-essential git mercurial
+
+cd
+echo 'export PATH=$PATH:$HOME/terraform:$HOME/go/bin' >> ~/.bashrc
+export PATH=$PATH:$HOME/terraform:$HOME/go/bin
+
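+# Install gimme and use it to install and activate Go 1.6 for this shell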
+sudo wget -O /usr/local/bin/gimme https://raw.githubusercontent.com/travis-ci/gimme/master/gimme
+sudo chmod +x /usr/local/bin/gimme
+/usr/local/bin/gimme 1.6 >> ~/.bashrc
+eval "$(/usr/local/bin/gimme 1.6)"
+
+mkdir ~/go
+echo 'export GOPATH=$HOME/go' >> ~/.bashrc
+echo 'export GO15VENDOREXPERIMENT=1' >> ~/.bashrc
+export GOPATH=$HOME/go
+source ~/.bashrc
+
+go get github.com/tools/godep
+go get github.com/hashicorp/terraform
+cd $GOPATH/src/github.com/hashicorp/terraform
+godep restore
+
+# Cobbler
+sudo apt-get install -y cobbler cobbler-web debmirror dnsmasq
+
+sudo tee /etc/cobbler/modules.conf <
+ Default: "%h %l %u %t %r %>s",
+ Description: "Apache-style string or VCL variables to use for log formatting",
+ },
+ "timestamp_format": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ Default: "%Y-%m-%dT%H:%M:%S.000",
+ Description: "specified timestamp formatting (default `%Y-%m-%dT%H:%M:%S.000`)",
+ },
+ },
+ },
+ },
},
}
}
@@ -194,7 +401,15 @@ func resourceServiceV1Update(d *schema.ResourceData, meta interface{}) error {
// DefaultTTL, a new Version must be created first, and updates posted to that
// Version. Loop these attributes and determine if we need to create a new version first
var needsChange bool
- for _, v := range []string{"domain", "backend", "default_host", "default_ttl"} {
+ for _, v := range []string{
+ "domain",
+ "backend",
+ "default_host",
+ "default_ttl",
+ "header",
+ "gzip",
+ "s3logging",
+ } {
if d.HasChange(v) {
needsChange = true
}
@@ -369,6 +584,206 @@ func resourceServiceV1Update(d *schema.ResourceData, meta interface{}) error {
}
}
+ if d.HasChange("header") {
+ // Note: we don't utilize the PUT endpoint to update a Header, we simply
+ // destroy it and create a new one. This is how Terraform works with nested
+ // sub resources, we only get the full diff not a partial set item diff.
+ // Because this is done on a new version of the configuration, this is
+ // considered safe
+ oh, nh := d.GetChange("header")
+ if oh == nil {
+ oh = new(schema.Set)
+ }
+ if nh == nil {
+ nh = new(schema.Set)
+ }
+
+ ohs := oh.(*schema.Set)
+ nhs := nh.(*schema.Set)
+
+ remove := ohs.Difference(nhs).List()
+ add := nhs.Difference(ohs).List()
+
+ // Delete removed headers
+ for _, dRaw := range remove {
+ df := dRaw.(map[string]interface{})
+ opts := gofastly.DeleteHeaderInput{
+ Service: d.Id(),
+ Version: latestVersion,
+ Name: df["name"].(string),
+ }
+
+ log.Printf("[DEBUG] Fastly Header Removal opts: %#v", opts)
+ err := conn.DeleteHeader(&opts)
+ if err != nil {
+ return err
+ }
+ }
+
+ // POST new Headers
+ for _, dRaw := range add {
+ opts, err := buildHeader(dRaw.(map[string]interface{}))
+ if err != nil {
+ log.Printf("[DEBUG] Error building Header: %s", err)
+ return err
+ }
+ opts.Service = d.Id()
+ opts.Version = latestVersion
+
+ log.Printf("[DEBUG] Fastly Header Addition opts: %#v", opts)
+ _, err = conn.CreateHeader(opts)
+ if err != nil {
+ return err
+ }
+ }
+ }
+
+ // Find differences in Gzips
+ if d.HasChange("gzip") {
+ // Note: we don't utilize the PUT endpoint to update a Gzip rule, we simply
+ // destroy it and create a new one. This is how Terraform works with nested
+ // sub resources, we only get the full diff not a partial set item diff.
+ // Because this is done on a new version of the configuration, this is
+ // considered safe
+ og, ng := d.GetChange("gzip")
+ if og == nil {
+ og = new(schema.Set)
+ }
+ if ng == nil {
+ ng = new(schema.Set)
+ }
+
+ ogs := og.(*schema.Set)
+ ngs := ng.(*schema.Set)
+
+ remove := ogs.Difference(ngs).List()
+ add := ngs.Difference(ogs).List()
+
+ // Delete removed gzip rules
+ for _, dRaw := range remove {
+ df := dRaw.(map[string]interface{})
+ opts := gofastly.DeleteGzipInput{
+ Service: d.Id(),
+ Version: latestVersion,
+ Name: df["name"].(string),
+ }
+
+ log.Printf("[DEBUG] Fastly Gzip Removal opts: %#v", opts)
+ err := conn.DeleteGzip(&opts)
+ if err != nil {
+ return err
+ }
+ }
+
+ // POST new Gzips
+ for _, dRaw := range add {
+ df := dRaw.(map[string]interface{})
+ opts := gofastly.CreateGzipInput{
+ Service: d.Id(),
+ Version: latestVersion,
+ Name: df["name"].(string),
+ }
+
+ if v, ok := df["content_types"]; ok {
+ if len(v.(*schema.Set).List()) > 0 {
+ var cl []string
+ for _, c := range v.(*schema.Set).List() {
+ cl = append(cl, c.(string))
+ }
+ opts.ContentTypes = strings.Join(cl, " ")
+ }
+ }
+
+ if v, ok := df["extensions"]; ok {
+ if len(v.(*schema.Set).List()) > 0 {
+ var el []string
+ for _, e := range v.(*schema.Set).List() {
+ el = append(el, e.(string))
+ }
+ opts.Extensions = strings.Join(el, " ")
+ }
+ }
+
+ log.Printf("[DEBUG] Fastly Gzip Addition opts: %#v", opts)
+ _, err := conn.CreateGzip(&opts)
+ if err != nil {
+ return err
+ }
+ }
+ }
+
+ // find difference in s3logging
+ if d.HasChange("s3logging") {
+ // POST new Logging
+ // Note: we don't utilize the PUT endpoint to update S3 logging, we simply
+ // destroy it and create a new one. This is how Terraform works with nested
+ // sub resources, we only get the full diff not a partial set item diff.
+ // Because this is done on a new version of the configuration, this is
+ // considered safe
+ os, ns := d.GetChange("s3logging")
+ if os == nil {
+ os = new(schema.Set)
+ }
+ if ns == nil {
+ ns = new(schema.Set)
+ }
+
+ oss := os.(*schema.Set)
+ nss := ns.(*schema.Set)
+ removeS3Logging := oss.Difference(nss).List()
+ addS3Logging := nss.Difference(oss).List()
+
+ // DELETE old S3 Log configurations
+ for _, sRaw := range removeS3Logging {
+ sf := sRaw.(map[string]interface{})
+ opts := gofastly.DeleteS3Input{
+ Service: d.Id(),
+ Version: latestVersion,
+ Name: sf["name"].(string),
+ }
+
+ log.Printf("[DEBUG] Fastly S3 Logging Removal opts: %#v", opts)
+ err := conn.DeleteS3(&opts)
+ if err != nil {
+ return err
+ }
+ }
+
+ // POST new/updated S3 Logging
+ for _, sRaw := range addS3Logging {
+ sf := sRaw.(map[string]interface{})
+
+ // Fastly API will not error if these are omitted, so we throw an error
+ // if any of these are empty
+ for _, sk := range []string{"s3_access_key", "s3_secret_key"} {
+ if sf[sk].(string) == "" {
+ return fmt.Errorf("[ERR] No %s found for S3 Log stream setup for Service (%s)", sk, d.Id())
+ }
+ }
+
+ opts := gofastly.CreateS3Input{
+ Service: d.Id(),
+ Version: latestVersion,
+ Name: sf["name"].(string),
+ BucketName: sf["bucket_name"].(string),
+ AccessKey: sf["s3_access_key"].(string),
+ SecretKey: sf["s3_secret_key"].(string),
+ Period: uint(sf["period"].(int)),
+ GzipLevel: uint(sf["gzip_level"].(int)),
+ Domain: sf["domain"].(string),
+ Path: sf["path"].(string),
+ Format: sf["format"].(string),
+ TimestampFormat: sf["timestamp_format"].(string),
+ }
+
+ log.Printf("[DEBUG] Create S3 Logging Opts: %#v", opts)
+ _, err := conn.CreateS3(&opts)
+ if err != nil {
+ return err
+ }
+ }
+ }
+
// validate version
log.Printf("[DEBUG] Validating Fastly Service (%s), Version (%s)", d.Id(), latestVersion)
valid, msg, err := conn.ValidateVersion(&gofastly.ValidateVersionInput{
@@ -447,6 +862,7 @@ func resourceServiceV1Read(d *schema.ResourceData, meta interface{}) error {
// TODO: update go-fastly to support an ActiveVersion struct, which contains
// domain and backend info in the response. Here we do 2 additional queries
// to find out that info
+ log.Printf("[DEBUG] Refreshing Domains for (%s)", d.Id())
domainList, err := conn.ListDomains(&gofastly.ListDomainsInput{
Service: d.Id(),
Version: s.ActiveVersion.Number,
@@ -464,6 +880,7 @@ func resourceServiceV1Read(d *schema.ResourceData, meta interface{}) error {
}
// Refresh Backends
+ log.Printf("[DEBUG] Refreshing Backends for (%s)", d.Id())
backendList, err := conn.ListBackends(&gofastly.ListBackendsInput{
Service: d.Id(),
Version: s.ActiveVersion.Number,
@@ -478,6 +895,58 @@ func resourceServiceV1Read(d *schema.ResourceData, meta interface{}) error {
if err := d.Set("backend", bl); err != nil {
log.Printf("[WARN] Error setting Backends for (%s): %s", d.Id(), err)
}
+
+ // refresh headers
+ log.Printf("[DEBUG] Refreshing Headers for (%s)", d.Id())
+ headerList, err := conn.ListHeaders(&gofastly.ListHeadersInput{
+ Service: d.Id(),
+ Version: s.ActiveVersion.Number,
+ })
+
+ if err != nil {
+ return fmt.Errorf("[ERR] Error looking up Headers for (%s), version (%s): %s", d.Id(), s.ActiveVersion.Number, err)
+ }
+
+ hl := flattenHeaders(headerList)
+
+ if err := d.Set("header", hl); err != nil {
+ log.Printf("[WARN] Error setting Headers for (%s): %s", d.Id(), err)
+ }
+
+ // refresh gzips
+ log.Printf("[DEBUG] Refreshing Gzips for (%s)", d.Id())
+ gzipsList, err := conn.ListGzips(&gofastly.ListGzipsInput{
+ Service: d.Id(),
+ Version: s.ActiveVersion.Number,
+ })
+
+ if err != nil {
+ return fmt.Errorf("[ERR] Error looking up Gzips for (%s), version (%s): %s", d.Id(), s.ActiveVersion.Number, err)
+ }
+
+ gl := flattenGzips(gzipsList)
+
+ if err := d.Set("gzip", gl); err != nil {
+ log.Printf("[WARN] Error setting Gzips for (%s): %s", d.Id(), err)
+ }
+
+ // refresh S3 Logging
+ log.Printf("[DEBUG] Refreshing S3 Logging for (%s)", d.Id())
+ s3List, err := conn.ListS3s(&gofastly.ListS3sInput{
+ Service: d.Id(),
+ Version: s.ActiveVersion.Number,
+ })
+
+ if err != nil {
+ return fmt.Errorf("[ERR] Error looking up S3 Logging for (%s), version (%s): %s", d.Id(), s.ActiveVersion.Number, err)
+ }
+
+ sl := flattenS3s(s3List)
+
+ if err := d.Set("s3logging", sl); err != nil {
+ log.Printf("[WARN] Error setting S3 Logging for (%s): %s", d.Id(), err)
+ }
+
} else {
log.Printf("[DEBUG] Active Version for Service (%s) is empty, no state to refresh", d.Id())
}
@@ -590,7 +1059,7 @@ func findService(id string, meta interface{}) (*gofastly.Service, error) {
l, err := conn.ListServices(&gofastly.ListServicesInput{})
if err != nil {
- return nil, fmt.Errorf("[WARN] Error listing servcies when deleting Fastly Service (%s): %s", id, err)
+ return nil, fmt.Errorf("[WARN] Error listing services when deleting Fastly Service (%s): %s", id, err)
}
for _, s := range l {
@@ -602,3 +1071,147 @@ func findService(id string, meta interface{}) (*gofastly.Service, error) {
return nil, fastlyNoServiceFoundErr
}
+
+func flattenHeaders(headerList []*gofastly.Header) []map[string]interface{} {
+ var hl []map[string]interface{}
+ for _, h := range headerList {
+ // Convert Header to a map for saving to state.
+ nh := map[string]interface{}{
+ "name": h.Name,
+ "action": h.Action,
+ "ignore_if_set": h.IgnoreIfSet,
+ "type": h.Type,
+ "destination": h.Destination,
+ "source": h.Source,
+ "regex": h.Regex,
+ "substitution": h.Substitution,
+ "priority": int(h.Priority),
+ "request_condition": h.RequestCondition,
+ "cache_condition": h.CacheCondition,
+ "response_condition": h.ResponseCondition,
+ }
+
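+ // prune any empty values that come from the default string value in structs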
+ for k, v := range nh {
+ if v == "" {
+ delete(nh, k)
+ }
+ }
+
+ hl = append(hl, nh)
+ }
+ return hl
+}
+
+func buildHeader(headerMap interface{}) (*gofastly.CreateHeaderInput, error) {
+ df := headerMap.(map[string]interface{})
+ opts := gofastly.CreateHeaderInput{
+ Name: df["name"].(string),
+ IgnoreIfSet: df["ignore_if_set"].(bool),
+ Destination: df["destination"].(string),
+ Priority: uint(df["priority"].(int)),
+ Source: df["source"].(string),
+ Regex: df["regex"].(string),
+ Substitution: df["substitution"].(string),
+ RequestCondition: df["request_condition"].(string),
+ CacheCondition: df["cache_condition"].(string),
+ ResponseCondition: df["response_condition"].(string),
+ }
+
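+ // Map the configured action and type strings onto the gofastly constants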
+ act := strings.ToLower(df["action"].(string))
+ switch act {
+ case "set":
+ opts.Action = gofastly.HeaderActionSet
+ case "append":
+ opts.Action = gofastly.HeaderActionAppend
+ case "delete":
+ opts.Action = gofastly.HeaderActionDelete
+ case "regex":
+ opts.Action = gofastly.HeaderActionRegex
+ case "regex_repeat":
+ opts.Action = gofastly.HeaderActionRegexRepeat
+ }
+
+ ty := strings.ToLower(df["type"].(string))
+ switch ty {
+ case "request":
+ opts.Type = gofastly.HeaderTypeRequest
+ case "fetch":
+ opts.Type = gofastly.HeaderTypeFetch
+ case "cache":
+ opts.Type = gofastly.HeaderTypeCache
+ case "response":
+ opts.Type = gofastly.HeaderTypeResponse
+ }
+
+ return &opts, nil
+}
+
+func flattenGzips(gzipsList []*gofastly.Gzip) []map[string]interface{} {
+ var gl []map[string]interface{}
+ for _, g := range gzipsList {
+ // Convert Gzip to a map for saving to state.
+ ng := map[string]interface{}{
+ "name": g.Name,
+ "cache_condition": g.CacheCondition,
+ }
+
+ if g.Extensions != "" {
+ e := strings.Split(g.Extensions, " ")
+ var et []interface{}
+ for _, ev := range e {
+ et = append(et, ev)
+ }
+ ng["extensions"] = schema.NewSet(schema.HashString, et)
+ }
+
+ if g.ContentTypes != "" {
+ c := strings.Split(g.ContentTypes, " ")
+ var ct []interface{}
+ for _, cv := range c {
+ ct = append(ct, cv)
+ }
+ ng["content_types"] = schema.NewSet(schema.HashString, ct)
+ }
+
+ // prune any empty values that come from the default string value in structs
+ for k, v := range ng {
+ if v == "" {
+ delete(ng, k)
+ }
+ }
+
+ gl = append(gl, ng)
+ }
+
+ return gl
+}
+
+func flattenS3s(s3List []*gofastly.S3) []map[string]interface{} {
+ var sl []map[string]interface{}
+ for _, s := range s3List {
+ // Convert S3s to a map for saving to state.
+ ns := map[string]interface{}{
+ "name": s.Name,
+ "bucket_name": s.BucketName,
+ "s3_access_key": s.AccessKey,
+ "s3_secret_key": s.SecretKey,
+ "path": s.Path,
+ "period": s.Period,
+ "domain": s.Domain,
+ "gzip_level": s.GzipLevel,
+ "format": s.Format,
+ "timestamp_format": s.TimestampFormat,
+ }
+
+ // prune any empty values that come from the default string value in structs
+ for k, v := range ns {
+ if v == "" {
+ delete(ns, k)
+ }
+ }
+
+ sl = append(sl, ns)
+ }
+
+ return sl
+}
diff --git a/builtin/providers/fastly/resource_fastly_service_v1_gzip_test.go b/builtin/providers/fastly/resource_fastly_service_v1_gzip_test.go
new file mode 100644
index 000000000000..755d7b98b0e1
--- /dev/null
+++ b/builtin/providers/fastly/resource_fastly_service_v1_gzip_test.go
@@ -0,0 +1,238 @@
+package fastly
+
+import (
+ "fmt"
+ "testing"
+
+ "github.com/hashicorp/terraform/helper/acctest"
+ "github.com/hashicorp/terraform/helper/resource"
+ "github.com/hashicorp/terraform/helper/schema"
+ "github.com/hashicorp/terraform/terraform"
+ gofastly "github.com/sethvargo/go-fastly"
+)
+
+func TestFastlyServiceV1_FlattenGzips(t *testing.T) {
+ cases := []struct {
+ remote []*gofastly.Gzip
+ local []map[string]interface{}
+ }{
+ {
+ remote: []*gofastly.Gzip{
+ &gofastly.Gzip{
+ Name: "somegzip",
+ Extensions: "css",
+ },
+ },
+ local: []map[string]interface{}{
+ map[string]interface{}{
+ "name": "somegzip",
+ "extensions": schema.NewSet(schema.HashString, []interface{}{"css"}),
+ },
+ },
+ },
+ {
+ remote: []*gofastly.Gzip{
+ &gofastly.Gzip{
+ Name: "somegzip",
+ Extensions: "css json js",
+ ContentTypes: "text/html",
+ },
+ &gofastly.Gzip{
+ Name: "someothergzip",
+ Extensions: "css js",
+ ContentTypes: "text/html text/xml",
+ },
+ },
+ local: []map[string]interface{}{
+ map[string]interface{}{
+ "name": "somegzip",
+ "extensions": schema.NewSet(schema.HashString, []interface{}{"css", "json", "js"}),
+ "content_types": schema.NewSet(schema.HashString, []interface{}{"text/html"}),
+ },
+ map[string]interface{}{
+ "name": "someothergzip",
+ "extensions": schema.NewSet(schema.HashString, []interface{}{"css", "js"}),
+ "content_types": schema.NewSet(schema.HashString, []interface{}{"text/html", "text/xml"}),
+ },
+ },
+ },
+ }
+
+ for _, c := range cases {
+ out := flattenGzips(c.remote)
+ // loop, because reflect.DeepEqual won't work with our sets
+ expectedCount := len(c.local)
+ var found int
+ for _, o := range out {
+ for _, l := range c.local {
+ if o["name"].(string) == l["name"].(string) {
+ found++
+ if o["extensions"] == nil && l["extensions"] != nil {
+ t.Fatalf("output extensions are nil, local are not")
+ }
+
+ if o["extensions"] != nil {
+ oex := o["extensions"].(*schema.Set)
+ lex := l["extensions"].(*schema.Set)
+ if !oex.Equal(lex) {
+ t.Fatalf("Extensions don't match, expected: %#v, got: %#v", lex, oex)
+ }
+ }
+
+ if o["content_types"] == nil && l["content_types"] != nil {
+ t.Fatalf("output content types are nil, local are not")
+ }
+
+ if o["content_types"] != nil {
+ oct := o["content_types"].(*schema.Set)
+ lct := l["content_types"].(*schema.Set)
+ if !oct.Equal(lct) {
+ t.Fatalf("ContentTypes don't match, expected: %#v, got: %#v", lct, oct)
+ }
+ }
+
+ }
+ }
+ }
+
+ if found != expectedCount {
+ t.Fatalf("Found and expected mismatch: %d / %d", found, expectedCount)
+ }
+ }
+}
+
+func TestAccFastlyServiceV1_gzips_basic(t *testing.T) {
+ var service gofastly.ServiceDetail
+ name := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
+ domainName1 := fmt.Sprintf("%s.notadomain.com", acctest.RandString(10))
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckServiceV1Destroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: testAccServiceV1GzipsConfig(name, domainName1),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckServiceV1Exists("fastly_service_v1.foo", &service),
+ testAccCheckFastlyServiceV1GzipsAttributes(&service, name, 2),
+ resource.TestCheckResourceAttr(
+ "fastly_service_v1.foo", "name", name),
+ resource.TestCheckResourceAttr(
+ "fastly_service_v1.foo", "gzip.#", "2"),
+ resource.TestCheckResourceAttr(
+ "fastly_service_v1.foo", "gzip.3704620722.extensions.#", "2"),
+ resource.TestCheckResourceAttr(
+ "fastly_service_v1.foo", "gzip.3704620722.content_types.#", "0"),
+ resource.TestCheckResourceAttr(
+ "fastly_service_v1.foo", "gzip.3820313126.content_types.#", "2"),
+ resource.TestCheckResourceAttr(
+ "fastly_service_v1.foo", "gzip.3820313126.extensions.#", "0"),
+ ),
+ },
+
+ resource.TestStep{
+ Config: testAccServiceV1GzipsConfig_update(name, domainName1),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckServiceV1Exists("fastly_service_v1.foo", &service),
+ testAccCheckFastlyServiceV1GzipsAttributes(&service, name, 1),
+ resource.TestCheckResourceAttr(
+ "fastly_service_v1.foo", "name", name),
+ resource.TestCheckResourceAttr(
+ "fastly_service_v1.foo", "gzip.#", "1"),
+ resource.TestCheckResourceAttr(
+ "fastly_service_v1.foo", "gzip.3694165387.extensions.#", "3"),
+ resource.TestCheckResourceAttr(
+ "fastly_service_v1.foo", "gzip.3694165387.content_types.#", "5"),
+ ),
+ },
+ },
+ })
+}
+
+func testAccCheckFastlyServiceV1GzipsAttributes(service *gofastly.ServiceDetail, name string, gzipCount int) resource.TestCheckFunc {
+ return func(s *terraform.State) error {
+
+ if service.Name != name {
+ return fmt.Errorf("Bad name, expected (%s), got (%s)", name, service.Name)
+ }
+
+ conn := testAccProvider.Meta().(*FastlyClient).conn
+ gzipsList, err := conn.ListGzips(&gofastly.ListGzipsInput{
+ Service: service.ID,
+ Version: service.ActiveVersion.Number,
+ })
+
+ if err != nil {
+ return fmt.Errorf("[ERR] Error looking up Gzips for (%s), version (%s): %s", service.Name, service.ActiveVersion.Number, err)
+ }
+
+ if len(gzipsList) != gzipCount {
+ return fmt.Errorf("Gzip count mismatch, expected (%d), got (%d)", gzipCount, len(gzipsList))
+ }
+
+ return nil
+ }
+}
+
+func testAccServiceV1GzipsConfig(name, domain string) string {
+ return fmt.Sprintf(`
+resource "fastly_service_v1" "foo" {
+ name = "%s"
+
+ domain {
+ name = "%s"
+ comment = "tf-testing-domain"
+ }
+
+ backend {
+ address = "aws.amazon.com"
+ name = "amazon docs"
+ }
+
+ gzip {
+ name = "gzip file types"
+ extensions = ["css", "js"]
+ }
+
+ gzip {
+ name = "gzip extensions"
+ content_types = ["text/html", "text/css"]
+ }
+
+ force_destroy = true
+}`, name, domain)
+}
+
+func testAccServiceV1GzipsConfig_update(name, domain string) string {
+ return fmt.Sprintf(`
+resource "fastly_service_v1" "foo" {
+ name = "%s"
+
+ domain {
+ name = "%s"
+ comment = "tf-testing-domain"
+ }
+
+ backend {
+ address = "aws.amazon.com"
+ name = "amazon docs"
+ }
+
+ gzip {
+ name = "all"
+ extensions = ["css", "js", "html"]
+
+ content_types = [
+ "text/html",
+ "text/css",
+ "application/x-javascript",
+ "text/css",
+ "application/javascript",
+ "text/javascript",
+ ]
+ }
+
+ force_destroy = true
+}`, name, domain)
+}
diff --git a/builtin/providers/fastly/resource_fastly_service_v1_headers_test.go b/builtin/providers/fastly/resource_fastly_service_v1_headers_test.go
new file mode 100644
index 000000000000..306de61f458f
--- /dev/null
+++ b/builtin/providers/fastly/resource_fastly_service_v1_headers_test.go
@@ -0,0 +1,233 @@
+package fastly
+
+import (
+ "fmt"
+ "reflect"
+ "sort"
+ "testing"
+
+ "github.com/hashicorp/terraform/helper/acctest"
+ "github.com/hashicorp/terraform/helper/resource"
+ "github.com/hashicorp/terraform/terraform"
+ gofastly "github.com/sethvargo/go-fastly"
+)
+
+func TestFastlyServiceV1_BuildHeaders(t *testing.T) {
+ cases := []struct {
+ remote *gofastly.CreateHeaderInput
+ local map[string]interface{}
+ }{
+ {
+ remote: &gofastly.CreateHeaderInput{
+ Name: "someheadder",
+ Action: gofastly.HeaderActionDelete,
+ IgnoreIfSet: true,
+ Type: gofastly.HeaderTypeCache,
+ Destination: "http.aws-id",
+ Priority: uint(100),
+ },
+ local: map[string]interface{}{
+ "name": "someheadder",
+ "action": "delete",
+ "ignore_if_set": true,
+ "destination": "http.aws-id",
+ "priority": 100,
+ "source": "",
+ "regex": "",
+ "substitution": "",
+ "request_condition": "",
+ "cache_condition": "",
+ "response_condition": "",
+ "type": "cache",
+ },
+ },
+ {
+ remote: &gofastly.CreateHeaderInput{
+ Name: "someheadder",
+ Action: gofastly.HeaderActionSet,
+ Type: gofastly.HeaderTypeCache,
+ Destination: "http.aws-id",
+ Priority: uint(100),
+ Source: "http.server-name",
+ },
+ local: map[string]interface{}{
+ "name": "someheadder",
+ "action": "set",
+ "ignore_if_set": false,
+ "destination": "http.aws-id",
+ "priority": 100,
+ "source": "http.server-name",
+ "regex": "",
+ "substitution": "",
+ "request_condition": "",
+ "cache_condition": "",
+ "response_condition": "",
+ "type": "cache",
+ },
+ },
+ }
+
+ for _, c := range cases {
+ out, _ := buildHeader(c.local)
+ if !reflect.DeepEqual(out, c.remote) {
+ t.Fatalf("Error matching:\nexpected: %#v\ngot: %#v", c.remote, out)
+ }
+ }
+}
+
+func TestAccFastlyServiceV1_headers_basic(t *testing.T) {
+ var service gofastly.ServiceDetail
+ name := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
+ domainName1 := fmt.Sprintf("%s.notadomain.com", acctest.RandString(10))
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckServiceV1Destroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: testAccServiceV1HeadersConfig(name, domainName1),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckServiceV1Exists("fastly_service_v1.foo", &service),
+ testAccCheckFastlyServiceV1HeaderAttributes(&service, name, []string{"http.x-amz-request-id", "http.Server"}, nil),
+ resource.TestCheckResourceAttr(
+ "fastly_service_v1.foo", "name", name),
+ resource.TestCheckResourceAttr(
+ "fastly_service_v1.foo", "header.#", "2"),
+ ),
+ },
+
+ resource.TestStep{
+ Config: testAccServiceV1HeadersConfig_update(name, domainName1),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckServiceV1Exists("fastly_service_v1.foo", &service),
+ testAccCheckFastlyServiceV1HeaderAttributes(&service, name, []string{"http.x-amz-request-id", "http.Server"}, []string{"http.server-name"}),
+ resource.TestCheckResourceAttr(
+ "fastly_service_v1.foo", "name", name),
+ resource.TestCheckResourceAttr(
+ "fastly_service_v1.foo", "header.#", "3"),
+ resource.TestCheckResourceAttr(
+ "fastly_service_v1.foo", "header.1147514417.source", "server.identity"),
+ ),
+ },
+ },
+ })
+}
+
+func testAccCheckFastlyServiceV1HeaderAttributes(service *gofastly.ServiceDetail, name string, headersDeleted, headersAdded []string) resource.TestCheckFunc {
+ return func(s *terraform.State) error {
+
+ if service.Name != name {
+ return fmt.Errorf("Bad name, expected (%s), got (%s)", name, service.Name)
+ }
+
+ conn := testAccProvider.Meta().(*FastlyClient).conn
+ headersList, err := conn.ListHeaders(&gofastly.ListHeadersInput{
+ Service: service.ID,
+ Version: service.ActiveVersion.Number,
+ })
+
+ if err != nil {
+ return fmt.Errorf("[ERR] Error looking up Headers for (%s), version (%s): %s", service.Name, service.ActiveVersion.Number, err)
+ }
+
+ var deleted []string
+ var added []string
+ for _, h := range headersList {
+ if h.Action == gofastly.HeaderActionDelete {
+ deleted = append(deleted, h.Destination)
+ }
+ if h.Action == gofastly.HeaderActionSet {
+ added = append(added, h.Destination)
+ }
+ }
+
+ sort.Strings(headersAdded)
+ sort.Strings(headersDeleted)
+ sort.Strings(deleted)
+ sort.Strings(added)
+
+ if !reflect.DeepEqual(headersDeleted, deleted) {
+ return fmt.Errorf("Deleted Headers did not match.\n\tExpected: (%#v)\n\tGot: (%#v)", headersDeleted, deleted)
+ }
+ if !reflect.DeepEqual(headersAdded, added) {
+ return fmt.Errorf("Added Headers did not match.\n\tExpected: (%#v)\n\tGot: (%#v)", headersAdded, added)
+ }
+
+ return nil
+ }
+}
+
+func testAccServiceV1HeadersConfig(name, domain string) string {
+ return fmt.Sprintf(`
+resource "fastly_service_v1" "foo" {
+ name = "%s"
+
+ domain {
+ name = "%s"
+ comment = "tf-testing-domain"
+ }
+
+ backend {
+ address = "aws.amazon.com"
+ name = "amazon docs"
+ }
+
+ header {
+ destination = "http.x-amz-request-id"
+ type = "cache"
+ action = "delete"
+ name = "remove x-amz-request-id"
+ }
+
+ header {
+ destination = "http.Server"
+ type = "cache"
+ action = "delete"
+ name = "remove s3 server"
+ }
+
+ force_destroy = true
+}`, name, domain)
+}
+
+func testAccServiceV1HeadersConfig_update(name, domain string) string {
+ return fmt.Sprintf(`
+resource "fastly_service_v1" "foo" {
+ name = "%s"
+
+ domain {
+ name = "%s"
+ comment = "tf-testing-domain"
+ }
+
+ backend {
+ address = "aws.amazon.com"
+ name = "amazon docs"
+ }
+
+ header {
+ destination = "http.x-amz-request-id"
+ type = "cache"
+ action = "delete"
+ name = "remove x-amz-request-id"
+ }
+
+ header {
+ destination = "http.Server"
+ type = "cache"
+ action = "delete"
+ name = "DESTROY S3"
+ }
+
+ header {
+ destination = "http.server-name"
+ type = "request"
+ action = "set"
+ source = "server.identity"
+ name = "Add server name"
+ }
+
+ force_destroy = true
+}`, name, domain)
+}
diff --git a/builtin/providers/fastly/resource_fastly_service_v1_s3logging_test.go b/builtin/providers/fastly/resource_fastly_service_v1_s3logging_test.go
new file mode 100644
index 000000000000..193b48945f20
--- /dev/null
+++ b/builtin/providers/fastly/resource_fastly_service_v1_s3logging_test.go
@@ -0,0 +1,287 @@
+package fastly
+
+import (
+ "fmt"
+ "os"
+ "reflect"
+ "testing"
+
+ "github.com/hashicorp/terraform/helper/acctest"
+ "github.com/hashicorp/terraform/helper/resource"
+ "github.com/hashicorp/terraform/terraform"
+ gofastly "github.com/sethvargo/go-fastly"
+)
+
+func TestAccFastlyServiceV1_s3logging_basic(t *testing.T) {
+ var service gofastly.ServiceDetail
+ name := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
+ domainName1 := fmt.Sprintf("%s.notadomain.com", acctest.RandString(10))
+
+ log1 := gofastly.S3{
+ Version: "1",
+ Name: "somebucketlog",
+ BucketName: "fastlytestlogging",
+ Domain: "s3-us-west-2.amazonaws.com",
+ AccessKey: "somekey",
+ SecretKey: "somesecret",
+ Period: uint(3600),
+ GzipLevel: uint(0),
+ Format: "%h %l %u %t %r %>s",
+ TimestampFormat: "%Y-%m-%dT%H:%M:%S.000",
+ }
+
+ log2 := gofastly.S3{
+ Version: "1",
+ Name: "someotherbucketlog",
+ BucketName: "fastlytestlogging2",
+ Domain: "s3-us-west-2.amazonaws.com",
+ AccessKey: "someotherkey",
+ SecretKey: "someothersecret",
+ GzipLevel: uint(3),
+ Period: uint(60),
+ Format: "%h %l %u %t %r %>s",
+ TimestampFormat: "%Y-%m-%dT%H:%M:%S.000",
+ }
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckServiceV1Destroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: testAccServiceV1S3LoggingConfig(name, domainName1),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckServiceV1Exists("fastly_service_v1.foo", &service),
+ testAccCheckFastlyServiceV1S3LoggingAttributes(&service, []*gofastly.S3{&log1}),
+ resource.TestCheckResourceAttr(
+ "fastly_service_v1.foo", "name", name),
+ resource.TestCheckResourceAttr(
+ "fastly_service_v1.foo", "s3logging.#", "1"),
+ ),
+ },
+
+ resource.TestStep{
+ Config: testAccServiceV1S3LoggingConfig_update(name, domainName1),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckServiceV1Exists("fastly_service_v1.foo", &service),
+ testAccCheckFastlyServiceV1S3LoggingAttributes(&service, []*gofastly.S3{&log1, &log2}),
+ resource.TestCheckResourceAttr(
+ "fastly_service_v1.foo", "name", name),
+ resource.TestCheckResourceAttr(
+ "fastly_service_v1.foo", "s3logging.#", "2"),
+ ),
+ },
+ },
+ })
+}
+
+// Tests that s3_access_key and s3_secret_key are read from the env
+func TestAccFastlyServiceV1_s3logging_s3_env(t *testing.T) {
+ var service gofastly.ServiceDetail
+ name := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
+ domainName1 := fmt.Sprintf("%s.notadomain.com", acctest.RandString(10))
+
+ // set env Vars to something we expect
+ resetEnv := setEnv("someEnv", t)
+ defer resetEnv()
+
+ log3 := gofastly.S3{
+ Version: "1",
+ Name: "somebucketlog",
+ BucketName: "fastlytestlogging",
+ Domain: "s3-us-west-2.amazonaws.com",
+ AccessKey: "someEnv",
+ SecretKey: "someEnv",
+ Period: uint(3600),
+ GzipLevel: uint(0),
+ Format: "%h %l %u %t %r %>s",
+ TimestampFormat: "%Y-%m-%dT%H:%M:%S.000",
+ }
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckServiceV1Destroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: testAccServiceV1S3LoggingConfig_env(name, domainName1),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckServiceV1Exists("fastly_service_v1.foo", &service),
+ testAccCheckFastlyServiceV1S3LoggingAttributes(&service, []*gofastly.S3{&log3}),
+ resource.TestCheckResourceAttr(
+ "fastly_service_v1.foo", "name", name),
+ resource.TestCheckResourceAttr(
+ "fastly_service_v1.foo", "s3logging.#", "1"),
+ ),
+ },
+ },
+ })
+}
+
+func testAccCheckFastlyServiceV1S3LoggingAttributes(service *gofastly.ServiceDetail, s3s []*gofastly.S3) resource.TestCheckFunc {
+ return func(s *terraform.State) error {
+
+ conn := testAccProvider.Meta().(*FastlyClient).conn
+ s3List, err := conn.ListS3s(&gofastly.ListS3sInput{
+ Service: service.ID,
+ Version: service.ActiveVersion.Number,
+ })
+
+ if err != nil {
+ return fmt.Errorf("[ERR] Error looking up S3 Logging for (%s), version (%s): %s", service.Name, service.ActiveVersion.Number, err)
+ }
+
+ if len(s3List) != len(s3s) {
+ return fmt.Errorf("S3 List count mismatch, expected (%d), got (%d)", len(s3s), len(s3List))
+ }
+
+ var found int
+ for _, s := range s3s {
+ for _, ls := range s3List {
+ if s.Name == ls.Name {
+ // we don't know these things ahead of time, so populate them now
+ s.ServiceID = service.ID
+ s.Version = service.ActiveVersion.Number
+ // We don't track these, so clear them out because we also won't know
+ // these ahead of time
+ ls.CreatedAt = nil
+ ls.UpdatedAt = nil
+ if !reflect.DeepEqual(s, ls) {
+ return fmt.Errorf("Bad match S3 logging match, expected (%#v), got (%#v)", s, ls)
+ }
+ found++
+ }
+ }
+ }
+
+ if found != len(s3s) {
+ return fmt.Errorf("Error matching S3 Logging rules")
+ }
+
+ return nil
+ }
+}
+
+func testAccServiceV1S3LoggingConfig(name, domain string) string {
+ return fmt.Sprintf(`
+resource "fastly_service_v1" "foo" {
+ name = "%s"
+
+ domain {
+ name = "%s"
+ comment = "tf-testing-domain"
+ }
+
+ backend {
+ address = "aws.amazon.com"
+ name = "amazon docs"
+ }
+
+ s3logging {
+ name = "somebucketlog"
+ bucket_name = "fastlytestlogging"
+ domain = "s3-us-west-2.amazonaws.com"
+ s3_access_key = "somekey"
+ s3_secret_key = "somesecret"
+ }
+
+ force_destroy = true
+}`, name, domain)
+}
+
+func testAccServiceV1S3LoggingConfig_update(name, domain string) string {
+ return fmt.Sprintf(`
+resource "fastly_service_v1" "foo" {
+ name = "%s"
+
+ domain {
+ name = "%s"
+ comment = "tf-testing-domain"
+ }
+
+ backend {
+ address = "aws.amazon.com"
+ name = "amazon docs"
+ }
+
+ s3logging {
+ name = "somebucketlog"
+ bucket_name = "fastlytestlogging"
+ domain = "s3-us-west-2.amazonaws.com"
+ s3_access_key = "somekey"
+ s3_secret_key = "somesecret"
+ }
+
+ s3logging {
+ name = "someotherbucketlog"
+ bucket_name = "fastlytestlogging2"
+ domain = "s3-us-west-2.amazonaws.com"
+ s3_access_key = "someotherkey"
+ s3_secret_key = "someothersecret"
+ period = 60
+ gzip_level = 3
+ }
+
+ force_destroy = true
+}`, name, domain)
+}
+
+func testAccServiceV1S3LoggingConfig_env(name, domain string) string {
+ return fmt.Sprintf(`
+resource "fastly_service_v1" "foo" {
+ name = "%s"
+
+ domain {
+ name = "%s"
+ comment = "tf-testing-domain"
+ }
+
+ backend {
+ address = "aws.amazon.com"
+ name = "amazon docs"
+ }
+
+ s3logging {
+ name = "somebucketlog"
+ bucket_name = "fastlytestlogging"
+ domain = "s3-us-west-2.amazonaws.com"
+ }
+
+ force_destroy = true
+}`, name, domain)
+}
+
+func setEnv(s string, t *testing.T) func() {
+ e := getEnv()
+ // Set all the envs to a dummy value
+ if err := os.Setenv("FASTLY_S3_ACCESS_KEY", s); err != nil {
+ t.Fatalf("Error setting env var AWS_ACCESS_KEY_ID: %s", err)
+ }
+ if err := os.Setenv("FASTLY_S3_SECRET_KEY", s); err != nil {
+ t.Fatalf("Error setting env var FASTLY_S3_SECRET_KEY: %s", err)
+ }
+
+ return func() {
+ // restore the env vars to the values they had before the test
+ if err := os.Setenv("FASTLY_S3_ACCESS_KEY", e.Key); err != nil {
+ t.Fatalf("Error resetting env var AWS_ACCESS_KEY_ID: %s", err)
+ }
+ if err := os.Setenv("FASTLY_S3_SECRET_KEY", e.Secret); err != nil {
+ t.Fatalf("Error resetting env var FASTLY_S3_SECRET_KEY: %s", err)
+ }
+ }
+}
+
+// struct to preserve the current environment
+type currentEnv struct {
+ Key, Secret string
+}
+
+func getEnv() *currentEnv {
+ // Grab any existing Fastly AWS S3 keys and preserve, in the off chance
+ // they're actually set in the environment
+ return &currentEnv{
+ Key: os.Getenv("FASTLY_S3_ACCESS_KEY"),
+ Secret: os.Getenv("FASTLY_S3_SECRET_KEY"),
+ }
+}
diff --git a/builtin/providers/github/resource_github_membership_test.go b/builtin/providers/github/resource_github_membership_test.go
index 670ccb486c2d..cebad98da905 100644
--- a/builtin/providers/github/resource_github_membership_test.go
+++ b/builtin/providers/github/resource_github_membership_test.go
@@ -2,12 +2,12 @@ package github
import (
"fmt"
+ "os"
"testing"
"github.com/google/go-github/github"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
- "os"
)
func TestAccGithubMembership_basic(t *testing.T) {
diff --git a/builtin/providers/github/resource_github_team_membership_test.go b/builtin/providers/github/resource_github_team_membership_test.go
index 4a12e5c9fbf0..074112b4b499 100644
--- a/builtin/providers/github/resource_github_team_membership_test.go
+++ b/builtin/providers/github/resource_github_team_membership_test.go
@@ -2,12 +2,12 @@ package github
import (
"fmt"
+ "os"
"testing"
"github.com/google/go-github/github"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
- "os"
)
func TestAccGithubTeamMembership_basic(t *testing.T) {
diff --git a/builtin/providers/google/provider.go b/builtin/providers/google/provider.go
index 8fd5339f51b3..89e176979a62 100644
--- a/builtin/providers/google/provider.go
+++ b/builtin/providers/google/provider.go
@@ -27,20 +27,29 @@ func Provider() terraform.ResourceProvider {
DefaultFunc: schema.MultiEnvDefaultFunc([]string{
"GOOGLE_CREDENTIALS",
"GOOGLE_CLOUD_KEYFILE_JSON",
+ "GCLOUD_KEYFILE_JSON",
}, nil),
ValidateFunc: validateCredentials,
},
"project": &schema.Schema{
- Type: schema.TypeString,
- Optional: true,
- DefaultFunc: schema.EnvDefaultFunc("GOOGLE_PROJECT", ""),
+ Type: schema.TypeString,
+ Optional: true,
+ DefaultFunc: schema.MultiEnvDefaultFunc([]string{
+ "GOOGLE_PROJECT",
+ "GCLOUD_PROJECT",
+ "CLOUDSDK_CORE_PROJECT",
+ }, nil),
},
"region": &schema.Schema{
- Type: schema.TypeString,
- Required: true,
- DefaultFunc: schema.EnvDefaultFunc("GOOGLE_REGION", nil),
+ Type: schema.TypeString,
+ Required: true,
+ DefaultFunc: schema.MultiEnvDefaultFunc([]string{
+ "GOOGLE_REGION",
+ "GCLOUD_REGION",
+ "CLOUDSDK_COMPUTE_REGION",
+ }, nil),
},
},
diff --git a/builtin/providers/google/provider_test.go b/builtin/providers/google/provider_test.go
index 9bf5414b74e4..40bf1654efaa 100644
--- a/builtin/providers/google/provider_test.go
+++ b/builtin/providers/google/provider_test.go
@@ -3,6 +3,7 @@ package google
import (
"io/ioutil"
"os"
+ "strings"
"testing"
"github.com/hashicorp/terraform/helper/schema"
@@ -38,18 +39,40 @@ func testAccPreCheck(t *testing.T) {
os.Setenv("GOOGLE_CREDENTIALS", string(creds))
}
- if v := os.Getenv("GOOGLE_CREDENTIALS"); v == "" {
- if w := os.Getenv("GOOGLE_CLOUD_KEYFILE_JSON"); w == "" {
- t.Fatal("GOOGLE_CREDENTIALS or GOOGLE_CLOUD_KEYFILE_JSON must be set for acceptance tests")
+ multiEnvSearch := func(ks []string) string {
+ for _, k := range ks {
+ if v := os.Getenv(k); v != "" {
+ return v
+ }
}
+ return ""
}
- if v := os.Getenv("GOOGLE_PROJECT"); v == "" {
- t.Fatal("GOOGLE_PROJECT must be set for acceptance tests")
+ creds := []string{
+ "GOOGLE_CREDENTIALS",
+ "GOOGLE_CLOUD_KEYFILE_JSON",
+ "GCLOUD_KEYFILE_JSON",
+ }
+ if v := multiEnvSearch(creds); v == "" {
+ t.Fatalf("One of %s must be set for acceptance tests", strings.Join(creds, ", "))
}
- if v := os.Getenv("GOOGLE_REGION"); v != "us-central1" {
- t.Fatal("GOOGLE_REGION must be set to us-central1 for acceptance tests")
+ projs := []string{
+ "GOOGLE_PROJECT",
+ "GCLOUD_PROJECT",
+ "CLOUDSDK_CORE_PROJECT",
+ }
+ if v := multiEnvSearch(projs); v == "" {
+ t.Fatalf("One of %s must be set for acceptance tests", strings.Join(creds, ", "))
+ }
+
+ regs := []string{
+ "GOOGLE_REGION",
+ "GCLOUD_REGION",
+ "CLOUDSDK_COMPUTE_REGION",
+ }
+ if v := multiEnvSearch(regs); v != "us-central1" {
+ t.Fatalf("One of %s must be set to us-central1 for acceptance tests", strings.Join(regs, ", "))
}
}
diff --git a/builtin/providers/google/resource_compute_instance_template_test.go b/builtin/providers/google/resource_compute_instance_template_test.go
index f4b96eb7716e..ec8e2b72fd4f 100644
--- a/builtin/providers/google/resource_compute_instance_template_test.go
+++ b/builtin/providers/google/resource_compute_instance_template_test.go
@@ -2,13 +2,13 @@ package google
import (
"fmt"
+ "strings"
"testing"
"github.com/hashicorp/terraform/helper/acctest"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
"google.golang.org/api/compute/v1"
- "strings"
)
func TestAccComputeInstanceTemplate_basic(t *testing.T) {
diff --git a/builtin/providers/google/resource_container_cluster.go b/builtin/providers/google/resource_container_cluster.go
index e68fadff8489..6954fcfa2c18 100644
--- a/builtin/providers/google/resource_container_cluster.go
+++ b/builtin/providers/google/resource_container_cluster.go
@@ -146,7 +146,51 @@ func resourceContainerCluster() *schema.Resource {
Default: "default",
ForceNew: true,
},
-
+ "subnetwork": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ },
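+ // addons_config mirrors the GKE AddonsConfig API object: a single-item list
+ // with optional http_load_balancing and horizontal_pod_autoscaling blocks.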
+ "addons_config": &schema.Schema{
+ Type: schema.TypeList,
+ Optional: true,
+ ForceNew: true,
+ MaxItems: 1,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "http_load_balancing": &schema.Schema{
+ Type: schema.TypeList,
+ Optional: true,
+ ForceNew: true,
+ MaxItems: 1,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "disabled": &schema.Schema{
+ Type: schema.TypeBool,
+ Optional: true,
+ ForceNew: true,
+ },
+ },
+ },
+ },
+ "horizontal_pod_autoscaling": &schema.Schema{
+ Type: schema.TypeList,
+ Optional: true,
+ ForceNew: true,
+ MaxItems: 1,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "disabled": &schema.Schema{
+ Type: schema.TypeBool,
+ Optional: true,
+ ForceNew: true,
+ },
+ },
+ },
+ },
+ },
+ },
+ },
"node_config": &schema.Schema{
Type: schema.TypeList,
Optional: true,
@@ -249,6 +293,28 @@ func resourceContainerClusterCreate(d *schema.ResourceData, meta interface{}) er
cluster.Network = v.(string)
}
+ if v, ok := d.GetOk("subnetwork"); ok {
+ cluster.Subnetwork = v.(string)
+ }
+
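+ // addons_config is limited to a single item, so unpack the first element and
+ // map each addon block onto the corresponding API struct.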
+ if v, ok := d.GetOk("addons_config"); ok {
+ addonsConfig := v.([]interface{})[0].(map[string]interface{})
+ cluster.AddonsConfig = &container.AddonsConfig{}
+
+ if v, ok := addonsConfig["http_load_balancing"]; ok {
+ addon := v.([]interface{})[0].(map[string]interface{})
+ cluster.AddonsConfig.HttpLoadBalancing = &container.HttpLoadBalancing{
+ Disabled: addon["disabled"].(bool),
+ }
+ }
+
+ if v, ok := addonsConfig["horizontal_pod_autoscaling"]; ok {
+ addon := v.([]interface{})[0].(map[string]interface{})
+ cluster.AddonsConfig.HorizontalPodAutoscaling = &container.HorizontalPodAutoscaling{
+ Disabled: addon["disabled"].(bool),
+ }
+ }
+ }
if v, ok := d.GetOk("node_config"); ok {
nodeConfigs := v.([]interface{})
if len(nodeConfigs) > 1 {
@@ -360,6 +426,7 @@ func resourceContainerClusterRead(d *schema.ResourceData, meta interface{}) erro
d.Set("logging_service", cluster.LoggingService)
d.Set("monitoring_service", cluster.MonitoringService)
d.Set("network", cluster.Network)
+ d.Set("subnetwork", cluster.Subnetwork)
d.Set("node_config", flattenClusterNodeConfig(cluster.NodeConfig))
d.Set("instance_group_urls", cluster.InstanceGroupUrls)
diff --git a/builtin/providers/librato/provider.go b/builtin/providers/librato/provider.go
new file mode 100644
index 000000000000..0b7894f6fe76
--- /dev/null
+++ b/builtin/providers/librato/provider.go
@@ -0,0 +1,41 @@
+package librato
+
+import (
+ "github.com/hashicorp/terraform/helper/schema"
+ "github.com/hashicorp/terraform/terraform"
+ "github.com/henrikhodne/go-librato/librato"
+)
+
+// Provider returns a schema.Provider for Librato.
+func Provider() terraform.ResourceProvider {
+ return &schema.Provider{
+ Schema: map[string]*schema.Schema{
+ "email": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ DefaultFunc: schema.EnvDefaultFunc("LIBRATO_EMAIL", nil),
+ Description: "The email address for the Librato account.",
+ },
+
+ "token": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ DefaultFunc: schema.EnvDefaultFunc("LIBRATO_TOKEN", nil),
+ Description: "The auth token for the Librato account.",
+ },
+ },
+
+ ResourcesMap: map[string]*schema.Resource{
+ "librato_space": resourceLibratoSpace(),
+ "librato_space_chart": resourceLibratoSpaceChart(),
+ },
+
+ ConfigureFunc: providerConfigure,
+ }
+}
+
+func providerConfigure(d *schema.ResourceData) (interface{}, error) {
+ client := librato.NewClient(d.Get("email").(string), d.Get("token").(string))
+
+ return client, nil
+}
diff --git a/builtin/providers/librato/provider_test.go b/builtin/providers/librato/provider_test.go
new file mode 100644
index 000000000000..f25f17fe2ebe
--- /dev/null
+++ b/builtin/providers/librato/provider_test.go
@@ -0,0 +1,39 @@
+package librato
+
+import (
+ "os"
+ "testing"
+
+ "github.com/hashicorp/terraform/helper/schema"
+ "github.com/hashicorp/terraform/terraform"
+)
+
+var testAccProviders map[string]terraform.ResourceProvider
+var testAccProvider *schema.Provider
+
+func init() {
+ testAccProvider = Provider().(*schema.Provider)
+ testAccProviders = map[string]terraform.ResourceProvider{
+ "librato": testAccProvider,
+ }
+}
+
+func TestProvider(t *testing.T) {
+ if err := Provider().(*schema.Provider).InternalValidate(); err != nil {
+ t.Fatalf("err: %s", err)
+ }
+}
+
+func TestProvider_impl(t *testing.T) {
+ var _ terraform.ResourceProvider = Provider()
+}
+
+func testAccPreCheck(t *testing.T) {
+ if v := os.Getenv("LIBRATO_EMAIL"); v == "" {
+ t.Fatal("LIBRATO_EMAIL must be set for acceptance tests")
+ }
+
+ if v := os.Getenv("LIBRATO_TOKEN"); v == "" {
+ t.Fatal("LIBRATO_TOKEN must be set for acceptance tests")
+ }
+}
diff --git a/builtin/providers/librato/resource_librato_space.go b/builtin/providers/librato/resource_librato_space.go
new file mode 100644
index 000000000000..e0c1242a4081
--- /dev/null
+++ b/builtin/providers/librato/resource_librato_space.go
@@ -0,0 +1,134 @@
+package librato
+
+import (
+ "fmt"
+ "log"
+ "strconv"
+ "time"
+
+ "github.com/hashicorp/terraform/helper/resource"
+ "github.com/hashicorp/terraform/helper/schema"
+ "github.com/henrikhodne/go-librato/librato"
+)
+
+func resourceLibratoSpace() *schema.Resource {
+ return &schema.Resource{
+ Create: resourceLibratoSpaceCreate,
+ Read: resourceLibratoSpaceRead,
+ Update: resourceLibratoSpaceUpdate,
+ Delete: resourceLibratoSpaceDelete,
+
+ Schema: map[string]*schema.Schema{
+ "name": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: false,
+ },
+ "id": &schema.Schema{
+ Type: schema.TypeInt,
+ Computed: true,
+ },
+ },
+ }
+}
+
+func resourceLibratoSpaceCreate(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*librato.Client)
+
+ name := d.Get("name").(string)
+
+ space, _, err := client.Spaces.Create(&librato.Space{Name: librato.String(name)})
+ if err != nil {
+ return fmt.Errorf("Error creating Librato space %s: %s", name, err)
+ }
+
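+ // Wait for the new space to become readable; the API can briefly return a 404
+ // right after creation.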
+ resource.Retry(1*time.Minute, func() *resource.RetryError {
+ _, _, err := client.Spaces.Get(*space.ID)
+ if err != nil {
+ if errResp, ok := err.(*librato.ErrorResponse); ok && errResp.Response.StatusCode == 404 {
+ return resource.RetryableError(err)
+ }
+ return resource.NonRetryableError(err)
+ }
+ return nil
+ })
+
+ return resourceLibratoSpaceReadResult(d, space)
+}
+
+func resourceLibratoSpaceRead(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*librato.Client)
+
+ id, err := strconv.ParseUint(d.Id(), 10, 0)
+ if err != nil {
+ return err
+ }
+
+ space, _, err := client.Spaces.Get(uint(id))
+ if err != nil {
+ if errResp, ok := err.(*librato.ErrorResponse); ok && errResp.Response.StatusCode == 404 {
+ d.SetId("")
+ return nil
+ }
+ return fmt.Errorf("Error reading Librato Space %s: %s", d.Id(), err)
+ }
+
+ return resourceLibratoSpaceReadResult(d, space)
+}
+
+func resourceLibratoSpaceReadResult(d *schema.ResourceData, space *librato.Space) error {
+ d.SetId(strconv.FormatUint(uint64(*space.ID), 10))
+ if err := d.Set("id", *space.ID); err != nil {
+ return err
+ }
+ if err := d.Set("name", *space.Name); err != nil {
+ return err
+ }
+ return nil
+}
+
+func resourceLibratoSpaceUpdate(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*librato.Client)
+ id, err := strconv.ParseUint(d.Id(), 10, 0)
+ if err != nil {
+ return err
+ }
+
+ if d.HasChange("name") {
+ newName := d.Get("name").(string)
+ log.Printf("[INFO] Modifying name space attribute for %d: %#v", id, newName)
+ if _, err = client.Spaces.Edit(uint(id), &librato.Space{Name: &newName}); err != nil {
+ return err
+ }
+ }
+
+ return resourceLibratoSpaceRead(d, meta)
+}
+
+func resourceLibratoSpaceDelete(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*librato.Client)
+ id, err := strconv.ParseUint(d.Id(), 10, 0)
+ if err != nil {
+ return err
+ }
+
+ log.Printf("[INFO] Deleting Space: %d", id)
+ _, err = client.Spaces.Delete(uint(id))
+ if err != nil {
+ return fmt.Errorf("Error deleting space: %s", err)
+ }
+
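+ // Wait for the delete to propagate; keep polling until the API returns a 404
+ // for the space.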
+ resource.Retry(1*time.Minute, func() *resource.RetryError {
+ _, _, err := client.Spaces.Get(uint(id))
+ if err != nil {
+ if errResp, ok := err.(*librato.ErrorResponse); ok && errResp.Response.StatusCode == 404 {
+ return nil
+ }
+ return resource.NonRetryableError(err)
+ }
+ return resource.RetryableError(fmt.Errorf("space still exists"))
+ })
+ if retryErr != nil {
+ return fmt.Errorf("Error waiting for Librato space %d to be deleted: %s", id, retryErr)
+ }
+
+ d.SetId("")
+ return nil
+}
diff --git a/builtin/providers/librato/resource_librato_space_chart.go b/builtin/providers/librato/resource_librato_space_chart.go
new file mode 100644
index 000000000000..dea499974d0d
--- /dev/null
+++ b/builtin/providers/librato/resource_librato_space_chart.go
@@ -0,0 +1,447 @@
+package librato
+
+import (
+ "bytes"
+ "fmt"
+ "log"
+ "math"
+ "strconv"
+ "time"
+
+ "github.com/hashicorp/terraform/helper/hashcode"
+ "github.com/hashicorp/terraform/helper/resource"
+ "github.com/hashicorp/terraform/helper/schema"
+ "github.com/henrikhodne/go-librato/librato"
+)
+
+func resourceLibratoSpaceChart() *schema.Resource {
+ return &schema.Resource{
+ Create: resourceLibratoSpaceChartCreate,
+ Read: resourceLibratoSpaceChartRead,
+ Update: resourceLibratoSpaceChartUpdate,
+ Delete: resourceLibratoSpaceChartDelete,
+
+ Schema: map[string]*schema.Schema{
+ "space_id": &schema.Schema{
+ Type: schema.TypeInt,
+ Required: true,
+ ForceNew: true,
+ },
+ "name": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ },
+ "type": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ },
+ "min": &schema.Schema{
+ Type: schema.TypeFloat,
+ Default: math.NaN(),
+ Optional: true,
+ },
+ "max": &schema.Schema{
+ Type: schema.TypeFloat,
+ Default: math.NaN(),
+ Optional: true,
+ },
+ "label": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ },
+ "related_space": &schema.Schema{
+ Type: schema.TypeInt,
+ Optional: true,
+ },
+ "stream": &schema.Schema{
+ Type: schema.TypeSet,
+ Optional: true,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "metric": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ConflictsWith: []string{"stream.composite"},
+ },
+ "source": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ConflictsWith: []string{"stream.composite"},
+ },
+ "group_function": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ConflictsWith: []string{"stream.composite"},
+ },
+ "composite": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ConflictsWith: []string{"stream.metric", "stream.source", "stream.group_function"},
+ },
+ "summary_function": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ },
+ "name": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ },
+ "color": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ },
+ "units_short": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ },
+ "units_long": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ },
+ "min": &schema.Schema{
+ Type: schema.TypeFloat,
+ Default: math.NaN(),
+ Optional: true,
+ },
+ "max": &schema.Schema{
+ Type: schema.TypeFloat,
+ Default: math.NaN(),
+ Optional: true,
+ },
+ "transform_function": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ },
+ "period": &schema.Schema{
+ Type: schema.TypeInt,
+ Optional: true,
+ },
+ },
+ },
+ Set: resourceLibratoSpaceChartHash,
+ },
+ },
+ }
+}
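+// Note: "min" and "max" (top-level and per-stream) default to math.NaN() rather than 0
+// so that an unset value can be told apart from an explicit 0; the create/update code
+// translates NaN back to nil (omitted) before the chart is sent to the API.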
+
+func resourceLibratoSpaceChartHash(v interface{}) int {
+ var buf bytes.Buffer
+ m := v.(map[string]interface{})
+ buf.WriteString(fmt.Sprintf("%s-", m["metric"].(string)))
+ buf.WriteString(fmt.Sprintf("%s-", m["source"].(string)))
+ buf.WriteString(fmt.Sprintf("%s-", m["composite"].(string)))
+
+ return hashcode.String(buf.String())
+}
+
+func resourceLibratoSpaceChartCreate(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*librato.Client)
+
+ spaceID := uint(d.Get("space_id").(int))
+
+ spaceChart := new(librato.SpaceChart)
+ if v, ok := d.GetOk("name"); ok {
+ spaceChart.Name = librato.String(v.(string))
+ }
+ if v, ok := d.GetOk("type"); ok {
+ spaceChart.Type = librato.String(v.(string))
+ }
+ if v, ok := d.GetOk("min"); ok {
+ if math.IsNaN(v.(float64)) {
+ spaceChart.Min = nil
+ } else {
+ spaceChart.Min = librato.Float(v.(float64))
+ }
+ }
+ if v, ok := d.GetOk("max"); ok {
+ if math.IsNaN(v.(float64)) {
+ spaceChart.Max = nil
+ } else {
+ spaceChart.Max = librato.Float(v.(float64))
+ }
+ }
+ if v, ok := d.GetOk("label"); ok {
+ spaceChart.Label = librato.String(v.(string))
+ }
+ if v, ok := d.GetOk("related_space"); ok {
+ spaceChart.RelatedSpace = librato.Uint(uint(v.(int)))
+ }
+ if v, ok := d.GetOk("stream"); ok {
+ vs := v.(*schema.Set)
+ streams := make([]librato.SpaceChartStream, vs.Len())
+ for i, streamDataM := range vs.List() {
+ streamData := streamDataM.(map[string]interface{})
+ var stream librato.SpaceChartStream
+ if v, ok := streamData["metric"].(string); ok && v != "" {
+ stream.Metric = librato.String(v)
+ }
+ if v, ok := streamData["source"].(string); ok && v != "" {
+ stream.Source = librato.String(v)
+ }
+ if v, ok := streamData["composite"].(string); ok && v != "" {
+ stream.Composite = librato.String(v)
+ }
+ if v, ok := streamData["group_function"].(string); ok && v != "" {
+ stream.GroupFunction = librato.String(v)
+ }
+ if v, ok := streamData["summary_function"].(string); ok && v != "" {
+ stream.SummaryFunction = librato.String(v)
+ }
+ if v, ok := streamData["transform_function"].(string); ok && v != "" {
+ stream.TransformFunction = librato.String(v)
+ }
+ if v, ok := streamData["color"].(string); ok && v != "" {
+ stream.Color = librato.String(v)
+ }
+ if v, ok := streamData["units_short"].(string); ok && v != "" {
+ stream.UnitsShort = librato.String(v)
+ }
+ if v, ok := streamData["units_long"].(string); ok && v != "" {
+ stream.UnitsLong = librato.String(v)
+ }
+ if v, ok := streamData["min"].(float64); ok && !math.IsNaN(v) {
+ stream.Min = librato.Float(v)
+ }
+ if v, ok := streamData["max"].(float64); ok && !math.IsNaN(v) {
+ stream.Max = librato.Float(v)
+ }
+ streams[i] = stream
+ }
+ spaceChart.Streams = streams
+ }
+
+ spaceChartResult, _, err := client.Spaces.CreateChart(spaceID, spaceChart)
+ if err != nil {
+ return fmt.Errorf("Error creating Librato space chart %s: %s", *spaceChart.Name, err)
+ }
+
+ retryErr := resource.Retry(1*time.Minute, func() *resource.RetryError {
+ _, _, err := client.Spaces.GetChart(spaceID, *spaceChartResult.ID)
+ if err != nil {
+ if errResp, ok := err.(*librato.ErrorResponse); ok && errResp.Response.StatusCode == 404 {
+ return resource.RetryableError(err)
+ }
+ return resource.NonRetryableError(err)
+ }
+ return nil
+ })
+ if retryErr != nil {
+ return fmt.Errorf("Error waiting for Librato space chart %s to be created: %s", *spaceChart.Name, retryErr)
+ }
+
+ return resourceLibratoSpaceChartReadResult(d, spaceChartResult)
+}
+
+func resourceLibratoSpaceChartRead(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*librato.Client)
+
+ spaceID := uint(d.Get("space_id").(int))
+
+ id, err := strconv.ParseUint(d.Id(), 10, 0)
+ if err != nil {
+ return err
+ }
+
+ chart, _, err := client.Spaces.GetChart(spaceID, uint(id))
+ if err != nil {
+ if errResp, ok := err.(*librato.ErrorResponse); ok && errResp.Response.StatusCode == 404 {
+ d.SetId("")
+ return nil
+ }
+ return fmt.Errorf("Error reading Librato Space chart %s: %s", d.Id(), err)
+ }
+
+ return resourceLibratoSpaceChartReadResult(d, chart)
+}
+
+func resourceLibratoSpaceChartReadResult(d *schema.ResourceData, chart *librato.SpaceChart) error {
+ d.SetId(strconv.FormatUint(uint64(*chart.ID), 10))
+ if chart.Name != nil {
+ if err := d.Set("name", *chart.Name); err != nil {
+ return err
+ }
+ }
+ if chart.Type != nil {
+ if err := d.Set("type", *chart.Type); err != nil {
+ return err
+ }
+ }
+ if chart.Min != nil {
+ if err := d.Set("min", *chart.Min); err != nil {
+ return err
+ }
+ }
+ if chart.Max != nil {
+ if err := d.Set("max", *chart.Max); err != nil {
+ return err
+ }
+ }
+ if chart.Label != nil {
+ if err := d.Set("label", *chart.Label); err != nil {
+ return err
+ }
+ }
+ if chart.RelatedSpace != nil {
+ if err := d.Set("related_space", *chart.RelatedSpace); err != nil {
+ return err
+ }
+ }
+
+ streams := resourceLibratoSpaceChartStreamsGather(d, chart.Streams)
+ if err := d.Set("stream", streams); err != nil {
+ return err
+ }
+
+ return nil
+}
+
+func resourceLibratoSpaceChartStreamsGather(d *schema.ResourceData, streams []librato.SpaceChartStream) []map[string]interface{} {
+ retStreams := make([]map[string]interface{}, 0, len(streams))
+ for _, s := range streams {
+ stream := make(map[string]interface{})
+ if s.Metric != nil {
+ stream["metric"] = *s.Metric
+ }
+ if s.Source != nil {
+ stream["source"] = *s.Source
+ }
+ if s.Composite != nil {
+ stream["composite"] = *s.Composite
+ }
+ if s.GroupFunction != nil {
+ stream["group_function"] = *s.GroupFunction
+ }
+ if s.SummaryFunction != nil {
+ stream["summary_function"] = *s.SummaryFunction
+ }
+ if s.TransformFunction != nil {
+ stream["transform_function"] = *s.TransformFunction
+ }
+ if s.Color != nil {
+ stream["color"] = *s.Color
+ }
+ if s.UnitsShort != nil {
+ stream["units_short"] = *s.UnitsShort
+ }
+ if s.UnitsLong != nil {
+ stream["units_long"] = *s.UnitsLong
+ }
+ retStreams = append(retStreams, stream)
+ }
+
+ return retStreams
+}
+
+func resourceLibratoSpaceChartUpdate(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*librato.Client)
+
+ spaceID := uint(d.Get("space_id").(int))
+ chartID, err := strconv.ParseUint(d.Id(), 10, 0)
+ if err != nil {
+ return err
+ }
+
+ spaceChart := new(librato.SpaceChart)
+ if d.HasChange("name") {
+ spaceChart.Name = librato.String(d.Get("name").(string))
+ }
+ if d.HasChange("min") {
+ if math.IsNaN(d.Get("min").(float64)) {
+ spaceChart.Min = nil
+ } else {
+ spaceChart.Min = librato.Float(d.Get("min").(float64))
+ }
+ }
+ if d.HasChange("max") {
+ if math.IsNaN(d.Get("max").(float64)) {
+ spaceChart.Max = nil
+ } else {
+ spaceChart.Max = librato.Float(d.Get("max").(float64))
+ }
+ }
+ if d.HasChange("label") {
+ spaceChart.Label = librato.String(d.Get("label").(string))
+ }
+ if d.HasChange("related_space") {
+ spaceChart.RelatedSpace = librato.Uint(uint(d.Get("related_space").(int)))
+ }
+ if d.HasChange("stream") {
+ vs := d.Get("stream").(*schema.Set)
+ streams := make([]librato.SpaceChartStream, vs.Len())
+ for i, streamDataM := range vs.List() {
+ streamData := streamDataM.(map[string]interface{})
+ var stream librato.SpaceChartStream
+ if v, ok := streamData["metric"].(string); ok && v != "" {
+ stream.Metric = librato.String(v)
+ }
+ if v, ok := streamData["source"].(string); ok && v != "" {
+ stream.Source = librato.String(v)
+ }
+ if v, ok := streamData["composite"].(string); ok && v != "" {
+ stream.Composite = librato.String(v)
+ }
+ if v, ok := streamData["group_function"].(string); ok && v != "" {
+ stream.GroupFunction = librato.String(v)
+ }
+ if v, ok := streamData["summary_function"].(string); ok && v != "" {
+ stream.SummaryFunction = librato.String(v)
+ }
+ if v, ok := streamData["transform_function"].(string); ok && v != "" {
+ stream.TransformFunction = librato.String(v)
+ }
+ if v, ok := streamData["color"].(string); ok && v != "" {
+ stream.Color = librato.String(v)
+ }
+ if v, ok := streamData["units_short"].(string); ok && v != "" {
+ stream.UnitsShort = librato.String(v)
+ }
+ if v, ok := streamData["units_long"].(string); ok && v != "" {
+ stream.UnitsLong = librato.String(v)
+ }
+ if v, ok := streamData["min"].(float64); ok && !math.IsNaN(v) {
+ stream.Min = librato.Float(v)
+ }
+ if v, ok := streamData["max"].(float64); ok && !math.IsNaN(v) {
+ stream.Max = librato.Float(v)
+ }
+ streams[i] = stream
+ }
+ spaceChart.Streams = streams
+ }
+
+ _, err = client.Spaces.EditChart(spaceID, uint(chartID), spaceChart)
+ if err != nil {
+ return fmt.Errorf("Error updating Librato space chart %s: %s", d.Id(), err)
+ }
+
+ return resourceLibratoSpaceChartRead(d, meta)
+}
+
+func resourceLibratoSpaceChartDelete(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*librato.Client)
+
+ spaceID := uint(d.Get("space_id").(int))
+
+ id, err := strconv.ParseUint(d.Id(), 10, 0)
+ if err != nil {
+ return err
+ }
+
+ log.Printf("[INFO] Deleting Chart: %d/%d", spaceID, uint(id))
+ _, err = client.Spaces.DeleteChart(spaceID, uint(id))
+ if err != nil {
+ return fmt.Errorf("Error deleting space chart: %s", err)
+ }
+
+ retryErr := resource.Retry(1*time.Minute, func() *resource.RetryError {
+ _, _, err := client.Spaces.GetChart(spaceID, uint(id))
+ if err != nil {
+ if errResp, ok := err.(*librato.ErrorResponse); ok && errResp.Response.StatusCode == 404 {
+ return nil
+ }
+ return resource.NonRetryableError(err)
+ }
+ return resource.RetryableError(fmt.Errorf("space chart still exists"))
+ })
+ if retryErr != nil {
+ return fmt.Errorf("Error waiting for Librato space chart %d to be deleted: %s", id, retryErr)
+ }
+
+ d.SetId("")
+ return nil
+}
diff --git a/builtin/providers/librato/resource_librato_space_chart_test.go b/builtin/providers/librato/resource_librato_space_chart_test.go
new file mode 100644
index 000000000000..e087b0647e8c
--- /dev/null
+++ b/builtin/providers/librato/resource_librato_space_chart_test.go
@@ -0,0 +1,230 @@
+package librato
+
+import (
+ "fmt"
+ "strconv"
+ "testing"
+
+ "github.com/hashicorp/terraform/helper/resource"
+ "github.com/hashicorp/terraform/terraform"
+ "github.com/henrikhodne/go-librato/librato"
+)
+
+func TestAccLibratoSpaceChart_Basic(t *testing.T) {
+ var spaceChart librato.SpaceChart
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckLibratoSpaceChartDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: testAccCheckLibratoSpaceChartConfig_basic,
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckLibratoSpaceChartExists("librato_space_chart.foobar", &spaceChart),
+ testAccCheckLibratoSpaceChartName(&spaceChart, "Foo Bar"),
+ resource.TestCheckResourceAttr(
+ "librato_space_chart.foobar", "name", "Foo Bar"),
+ ),
+ },
+ },
+ })
+}
+
+func TestAccLibratoSpaceChart_Full(t *testing.T) {
+ var spaceChart librato.SpaceChart
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckLibratoSpaceChartDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: testAccCheckLibratoSpaceChartConfig_full,
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckLibratoSpaceChartExists("librato_space_chart.foobar", &spaceChart),
+ testAccCheckLibratoSpaceChartName(&spaceChart, "Foo Bar"),
+ resource.TestCheckResourceAttr(
+ "librato_space_chart.foobar", "name", "Foo Bar"),
+ ),
+ },
+ },
+ })
+}
+
+func TestAccLibratoSpaceChart_Updated(t *testing.T) {
+ var spaceChart librato.SpaceChart
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckLibratoSpaceChartDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: testAccCheckLibratoSpaceChartConfig_basic,
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckLibratoSpaceChartExists("librato_space_chart.foobar", &spaceChart),
+ testAccCheckLibratoSpaceChartName(&spaceChart, "Foo Bar"),
+ resource.TestCheckResourceAttr(
+ "librato_space_chart.foobar", "name", "Foo Bar"),
+ ),
+ },
+ resource.TestStep{
+ Config: testAccCheckLibratoSpaceChartConfig_new_value,
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckLibratoSpaceChartExists("librato_space_chart.foobar", &spaceChart),
+ testAccCheckLibratoSpaceChartName(&spaceChart, "Bar Baz"),
+ resource.TestCheckResourceAttr(
+ "librato_space_chart.foobar", "name", "Bar Baz"),
+ ),
+ },
+ },
+ })
+}
+
+func testAccCheckLibratoSpaceChartDestroy(s *terraform.State) error {
+ client := testAccProvider.Meta().(*librato.Client)
+
+ for _, rs := range s.RootModule().Resources {
+ if rs.Type != "librato_space_chart" {
+ continue
+ }
+
+ id, err := strconv.ParseUint(rs.Primary.ID, 10, 0)
+ if err != nil {
+ return fmt.Errorf("ID not a number")
+ }
+
+ spaceID, err := strconv.ParseUint(rs.Primary.Attributes["space_id"], 10, 0)
+ if err != nil {
+ return fmt.Errorf("Space ID not a number")
+ }
+
+ _, _, err = client.Spaces.GetChart(uint(spaceID), uint(id))
+
+ if err == nil {
+ return fmt.Errorf("Space Chart still exists")
+ }
+ }
+
+ return nil
+}
+
+func testAccCheckLibratoSpaceChartName(spaceChart *librato.SpaceChart, name string) resource.TestCheckFunc {
+ return func(s *terraform.State) error {
+
+ if spaceChart.Name == nil {
+ return fmt.Errorf("Bad name: name is not set")
+ }
+ if *spaceChart.Name != name {
+ return fmt.Errorf("Bad name: %s", *spaceChart.Name)
+ }
+
+ return nil
+ }
+}
+
+func testAccCheckLibratoSpaceChartExists(n string, spaceChart *librato.SpaceChart) resource.TestCheckFunc {
+ return func(s *terraform.State) error {
+ rs, ok := s.RootModule().Resources[n]
+
+ if !ok {
+ return fmt.Errorf("Not found: %s", n)
+ }
+
+ if rs.Primary.ID == "" {
+ return fmt.Errorf("No Space Chart ID is set")
+ }
+
+ client := testAccProvider.Meta().(*librato.Client)
+
+ id, err := strconv.ParseUint(rs.Primary.ID, 10, 0)
+ if err != nil {
+ return fmt.Errorf("ID not a number")
+ }
+
+ spaceID, err := strconv.ParseUint(rs.Primary.Attributes["space_id"], 10, 0)
+ if err != nil {
+ return fmt.Errorf("Space ID not a number")
+ }
+
+ foundSpaceChart, _, err := client.Spaces.GetChart(uint(spaceID), uint(id))
+
+ if err != nil {
+ return err
+ }
+
+ if foundSpaceChart.ID == nil || *foundSpaceChart.ID != uint(id) {
+ return fmt.Errorf("Space Chart not found")
+ }
+
+ *spaceChart = *foundSpaceChart
+
+ return nil
+ }
+}
+
+const testAccCheckLibratoSpaceChartConfig_basic = `
+resource "librato_space" "foobar" {
+ name = "Foo Bar"
+}
+
+resource "librato_space_chart" "foobar" {
+ space_id = "${librato_space.foobar.id}"
+ name = "Foo Bar"
+ type = "line"
+}`
+
+const testAccCheckLibratoSpaceChartConfig_new_value = `
+resource "librato_space" "foobar" {
+ name = "Foo Bar"
+}
+
+resource "librato_space_chart" "foobar" {
+ space_id = "${librato_space.foobar.id}"
+ name = "Bar Baz"
+ type = "line"
+}`
+
+const testAccCheckLibratoSpaceChartConfig_full = `
+resource "librato_space" "foobar" {
+ name = "Foo Bar"
+}
+
+resource "librato_space" "barbaz" {
+ name = "Bar Baz"
+}
+
+resource "librato_space_chart" "foobar" {
+ space_id = "${librato_space.foobar.id}"
+ name = "Foo Bar"
+ type = "line"
+ min = 0
+ max = 100
+ label = "Percent"
+ related_space = "${librato_space.barbaz.id}"
+
+ # Minimal metric stream
+ stream {
+ metric = "librato.cpu.percent.idle"
+ source = "*"
+ }
+
+ # Minimal composite stream
+ stream {
+ composite = "s(\"cpu\", \"*\")"
+ }
+
+ # Full metric stream
+ stream {
+ metric = "librato.cpu.percent.idle"
+ source = "*"
+ group_function = "average"
+ summary_function = "max"
+ name = "CPU usage"
+ color = "#990000"
+ units_short = "%"
+ units_long = "percent"
+ min = 0
+ max = 100
+ transform_function = "x * 100"
+ period = 60
+ }
+}`
diff --git a/builtin/providers/librato/resource_librato_space_test.go b/builtin/providers/librato/resource_librato_space_test.go
new file mode 100644
index 000000000000..ce055ccd2372
--- /dev/null
+++ b/builtin/providers/librato/resource_librato_space_test.go
@@ -0,0 +1,106 @@
+package librato
+
+import (
+ "fmt"
+ "strconv"
+ "testing"
+
+ "github.com/hashicorp/terraform/helper/resource"
+ "github.com/hashicorp/terraform/terraform"
+ "github.com/henrikhodne/go-librato/librato"
+)
+
+func TestAccLibratoSpace_Basic(t *testing.T) {
+ var space librato.Space
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckLibratoSpaceDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: testAccCheckLibratoSpaceConfig_basic,
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckLibratoSpaceExists("librato_space.foobar", &space),
+ testAccCheckLibratoSpaceAttributes(&space),
+ resource.TestCheckResourceAttr(
+ "librato_space.foobar", "name", "Foo Bar"),
+ ),
+ },
+ },
+ })
+}
+
+func testAccCheckLibratoSpaceDestroy(s *terraform.State) error {
+ client := testAccProvider.Meta().(*librato.Client)
+
+ for _, rs := range s.RootModule().Resources {
+ if rs.Type != "librato_space" {
+ continue
+ }
+
+ id, err := strconv.ParseUint(rs.Primary.ID, 10, 0)
+ if err != nil {
+ return fmt.Errorf("ID not a number")
+ }
+
+ _, _, err = client.Spaces.Get(uint(id))
+
+ if err == nil {
+ return fmt.Errorf("Space still exists")
+ }
+ }
+
+ return nil
+}
+
+func testAccCheckLibratoSpaceAttributes(space *librato.Space) resource.TestCheckFunc {
+ return func(s *terraform.State) error {
+
+ if space.Name == nil {
+ return fmt.Errorf("Bad name: name is not set")
+ }
+ if *space.Name != "Foo Bar" {
+ return fmt.Errorf("Bad name: %s", *space.Name)
+ }
+
+ return nil
+ }
+}
+
+func testAccCheckLibratoSpaceExists(n string, space *librato.Space) resource.TestCheckFunc {
+ return func(s *terraform.State) error {
+ rs, ok := s.RootModule().Resources[n]
+
+ if !ok {
+ return fmt.Errorf("Not found: %s", n)
+ }
+
+ if rs.Primary.ID == "" {
+ return fmt.Errorf("No Space ID is set")
+ }
+
+ client := testAccProvider.Meta().(*librato.Client)
+
+ id, err := strconv.ParseUint(rs.Primary.ID, 10, 0)
+ if err != nil {
+ return fmt.Errorf("ID not a number")
+ }
+
+ foundSpace, _, err := client.Spaces.Get(uint(id))
+
+ if err != nil {
+ return err
+ }
+
+ if foundSpace.ID == nil || *foundSpace.ID != uint(id) {
+ return fmt.Errorf("Space not found")
+ }
+
+ *space = *foundSpace
+
+ return nil
+ }
+}
+
+const testAccCheckLibratoSpaceConfig_basic = `
+resource "librato_space" "foobar" {
+ name = "Foo Bar"
+}`
diff --git a/builtin/providers/openstack/config.go b/builtin/providers/openstack/config.go
index 47ba00f855eb..5001c8ecad98 100644
--- a/builtin/providers/openstack/config.go
+++ b/builtin/providers/openstack/config.go
@@ -15,6 +15,7 @@ type Config struct {
Username string
UserID string
Password string
+ Token string
APIKey string
IdentityEndpoint string
TenantID string
@@ -41,6 +42,7 @@ func (c *Config) loadAndValidate() error {
Username: c.Username,
UserID: c.UserID,
Password: c.Password,
+ TokenID: c.Token,
APIKey: c.APIKey,
IdentityEndpoint: c.IdentityEndpoint,
TenantID: c.TenantID,
diff --git a/builtin/providers/openstack/devstack/deploy.sh b/builtin/providers/openstack/devstack/deploy.sh
index 2225478e1fe2..6c85a4795412 100644
--- a/builtin/providers/openstack/devstack/deploy.sh
+++ b/builtin/providers/openstack/devstack/deploy.sh
@@ -1,30 +1,36 @@
#!/bin/bash
+set -e
+
+cd
sudo apt-get update
sudo apt-get install -y git make mercurial
-GOPKG=go1.5.2.linux-amd64.tar.gz
-wget https://storage.googleapis.com/golang/$GOPKG
-sudo tar -xvf $GOPKG -C /usr/local/
+sudo wget -O /usr/local/bin/gimme https://raw.githubusercontent.com/travis-ci/gimme/master/gimme
+sudo chmod +x /usr/local/bin/gimme
+gimme 1.6 >> .bashrc
mkdir ~/go
+eval "$(/usr/local/bin/gimme 1.6)"
echo 'export GOPATH=$HOME/go' >> .bashrc
-echo 'export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin' >> .bashrc
-source .bashrc
export GOPATH=$HOME/go
-export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin
+export PATH=$PATH:$HOME/terraform:$HOME/go/bin
+echo 'export PATH=$PATH:$HOME/terraform:$HOME/go/bin' >> .bashrc
+source .bashrc
+
+go get github.com/tools/godep
go get github.com/hashicorp/terraform
cd $GOPATH/src/github.com/hashicorp/terraform
-make updatedeps
+godep restore
cd
-git clone https://git.openstack.org/openstack-dev/devstack -b stable/liberty
+git clone https://git.openstack.org/openstack-dev/devstack -b stable/mitaka
cd devstack
cat >local.conf <> openrc
echo export OS_FLAVOR_ID=99 >> openrc
source openrc demo
-cd $GOPATH/src/github.com/hashicorp/terraform
-make updatedeps
-
# Replace the below lines with the repo/branch you want to test
#git remote add jtopjian https://github.com/jtopjian/terraform
#git fetch jtopjian
-#git checkout --track jtopjian/openstack-acctest-fixes
+#git checkout --track jtopjian/openstack-secgroup-safe-delete
#make testacc TEST=./builtin/providers/openstack TESTARGS='-run=AccBlockStorageV1'
#make testacc TEST=./builtin/providers/openstack TESTARGS='-run=AccCompute'
#make testacc TEST=./builtin/providers/openstack
diff --git a/builtin/providers/openstack/provider.go b/builtin/providers/openstack/provider.go
index 2e5a1f8e74d3..8b72ba22a6fd 100644
--- a/builtin/providers/openstack/provider.go
+++ b/builtin/providers/openstack/provider.go
@@ -1,10 +1,14 @@
package openstack
import (
+ "github.com/hashicorp/terraform/helper/mutexkv"
"github.com/hashicorp/terraform/helper/schema"
"github.com/hashicorp/terraform/terraform"
)
+// This is a global MutexKV for use within this plugin.
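+// It serializes changes that touch a shared parent object; the router route resource,
+// for example, locks on the router ID before mutating the router's route table:
+//
+//	osMutexKV.Lock(routerId)
+//	defer osMutexKV.Unlock(routerId)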
+var osMutexKV = mutexkv.NewMutexKV()
+
// Provider returns a schema.Provider for OpenStack.
func Provider() terraform.ResourceProvider {
return &schema.Provider{
@@ -17,7 +21,7 @@ func Provider() terraform.ResourceProvider {
"user_name": &schema.Schema{
Type: schema.TypeString,
Optional: true,
- DefaultFunc: schema.EnvDefaultFunc("OS_USERNAME", nil),
+ DefaultFunc: schema.EnvDefaultFunc("OS_USERNAME", ""),
},
"user_id": &schema.Schema{
Type: schema.TypeString,
@@ -37,13 +41,18 @@ func Provider() terraform.ResourceProvider {
"password": &schema.Schema{
Type: schema.TypeString,
Optional: true,
- DefaultFunc: schema.EnvDefaultFunc("OS_PASSWORD", nil),
+ DefaultFunc: schema.EnvDefaultFunc("OS_PASSWORD", ""),
},
- "api_key": &schema.Schema{
+ "token": &schema.Schema{
Type: schema.TypeString,
Optional: true,
DefaultFunc: schema.EnvDefaultFunc("OS_AUTH_TOKEN", ""),
},
+ "api_key": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ DefaultFunc: schema.EnvDefaultFunc("OS_API_KEY", ""),
+ },
"domain_id": &schema.Schema{
Type: schema.TypeString,
Optional: true,
@@ -91,6 +100,9 @@ func Provider() terraform.ResourceProvider {
"openstack_networking_port_v2": resourceNetworkingPortV2(),
"openstack_networking_router_v2": resourceNetworkingRouterV2(),
"openstack_networking_router_interface_v2": resourceNetworkingRouterInterfaceV2(),
+ "openstack_networking_router_route_v2": resourceNetworkingRouterRouteV2(),
+ "openstack_networking_secgroup_v2": resourceNetworkingSecGroupV2(),
+ "openstack_networking_secgroup_rule_v2": resourceNetworkingSecGroupRuleV2(),
"openstack_objectstorage_container_v1": resourceObjectStorageContainerV1(),
},
@@ -104,6 +116,7 @@ func configureProvider(d *schema.ResourceData) (interface{}, error) {
Username: d.Get("user_name").(string),
UserID: d.Get("user_id").(string),
Password: d.Get("password").(string),
+ Token: d.Get("token").(string),
APIKey: d.Get("api_key").(string),
TenantID: d.Get("tenant_id").(string),
TenantName: d.Get("tenant_name").(string),
diff --git a/builtin/providers/openstack/resource_openstack_compute_instance_v2.go b/builtin/providers/openstack/resource_openstack_compute_instance_v2.go
index 87cd4e5800c0..14233c6a26e6 100644
--- a/builtin/providers/openstack/resource_openstack_compute_instance_v2.go
+++ b/builtin/providers/openstack/resource_openstack_compute_instance_v2.go
@@ -1009,25 +1009,33 @@ func getInstanceAccessAddresses(d *schema.ResourceData, networks []map[string]in
hostv4 = floatingIP
}
- // Loop through all networks and check for the following:
- // * If the network is set as an access network.
- // * If the network has a floating IP.
- // * If the network has a v4/v6 fixed IP.
+ // Loop through all networks
+ // If the network has a valid floating, fixed v4, or fixed v6 address
+ // and hostv4 or hostv6 is not set, set hostv4/hostv6.
+ // If the network is an "access_network" overwrite hostv4/hostv6.
for _, n := range networks {
- if n["floating_ip"] != nil {
- hostv4 = n["floating_ip"].(string)
- } else {
- if hostv4 == "" && n["fixed_ip_v4"] != nil {
- hostv4 = n["fixed_ip_v4"].(string)
+ var accessNetwork bool
+
+ if an, ok := n["access_network"].(bool); ok && an {
+ accessNetwork = true
+ }
+
+ if fixedIPv4, ok := n["fixed_ip_v4"].(string); ok && fixedIPv4 != "" {
+ if hostv4 == "" || accessNetwork {
+ hostv4 = fixedIPv4
}
}
- if hostv6 == "" && n["fixed_ip_v6"] != nil {
- hostv6 = n["fixed_ip_v6"].(string)
+ if floatingIP, ok := n["floating_ip"].(string); ok && floatingIP != "" {
+ if hostv4 == "" || accessNetwork {
+ hostv4 = floatingIP
+ }
}
- if an, ok := n["access_network"].(bool); ok && an {
- break
+ if fixedIPv6, ok := n["fixed_ip_v6"].(string); ok && fixedIPv6 != "" {
+ if hostv6 == "" || accessNetwork {
+ hostv6 = fixedIPv6
+ }
}
}
diff --git a/builtin/providers/openstack/resource_openstack_compute_instance_v2_test.go b/builtin/providers/openstack/resource_openstack_compute_instance_v2_test.go
index 1627693ba351..b87e807572bd 100644
--- a/builtin/providers/openstack/resource_openstack_compute_instance_v2_test.go
+++ b/builtin/providers/openstack/resource_openstack_compute_instance_v2_test.go
@@ -503,6 +503,60 @@ func TestAccComputeV2Instance_multiEphemeral(t *testing.T) {
})
}
+func TestAccComputeV2Instance_accessIPv4(t *testing.T) {
+ var instance servers.Server
+ var testAccComputeV2Instance_accessIPv4 = fmt.Sprintf(`
+ resource "openstack_compute_floatingip_v2" "myip" {
+ }
+
+ resource "openstack_networking_network_v2" "network_1" {
+ name = "network_1"
+ }
+
+ resource "openstack_networking_subnet_v2" "subnet_1" {
+ name = "subnet_1"
+ network_id = "${openstack_networking_network_v2.network_1.id}"
+ cidr = "192.168.1.0/24"
+ ip_version = 4
+ enable_dhcp = true
+ no_gateway = true
+ }
+
+ resource "openstack_compute_instance_v2" "instance_1" {
+ depends_on = ["openstack_networking_subnet_v2.subnet_1"]
+
+ name = "instance_1"
+ security_groups = ["default"]
+ floating_ip = "${openstack_compute_floatingip_v2.myip.address}"
+
+ network {
+ uuid = "%s"
+ }
+
+ network {
+ uuid = "${openstack_networking_network_v2.network_1.id}"
+ fixed_ip_v4 = "192.168.1.100"
+ access_network = true
+ }
+ }`, os.Getenv("OS_NETWORK_ID"))
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckComputeV2InstanceDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: testAccComputeV2Instance_accessIPv4,
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckComputeV2InstanceExists(t, "openstack_compute_instance_v2.instance_1", &instance),
+ resource.TestCheckResourceAttr(
+ "openstack_compute_instance_v2.instance_1", "access_ip_v4", "192.168.1.100"),
+ ),
+ },
+ },
+ })
+}
+
func testAccCheckComputeV2InstanceDestroy(s *terraform.State) error {
config := testAccProvider.Meta().(*Config)
computeClient, err := config.computeV2Client(OS_REGION_NAME)
diff --git a/builtin/providers/openstack/resource_openstack_fw_firewall_v1_test.go b/builtin/providers/openstack/resource_openstack_fw_firewall_v1_test.go
index 43318db19844..e5b814fd1625 100644
--- a/builtin/providers/openstack/resource_openstack_fw_firewall_v1_test.go
+++ b/builtin/providers/openstack/resource_openstack_fw_firewall_v1_test.go
@@ -128,9 +128,10 @@ resource "openstack_fw_policy_v1" "accept_test_policy_1" {
const testFirewallConfigUpdated = `
resource "openstack_fw_firewall_v1" "accept_test" {
- name = "accept_test"
- description = "terraform acceptance test"
- policy_id = "${openstack_fw_policy_v1.accept_test_policy_2.id}"
+ name = "accept_test"
+ description = "terraform acceptance test"
+ policy_id = "${openstack_fw_policy_v1.accept_test_policy_2.id}"
+ admin_state_up = true
}
resource "openstack_fw_policy_v1" "accept_test_policy_2" {
diff --git a/builtin/providers/openstack/resource_openstack_lb_member_v1.go b/builtin/providers/openstack/resource_openstack_lb_member_v1.go
index 4fbf3dcca53a..d6d467c1344b 100644
--- a/builtin/providers/openstack/resource_openstack_lb_member_v1.go
+++ b/builtin/providers/openstack/resource_openstack_lb_member_v1.go
@@ -75,7 +75,7 @@ func resourceLBMemberV1Create(d *schema.ResourceData, meta interface{}) error {
ProtocolPort: d.Get("port").(int),
}
- log.Printf("[DEBUG] Create Options: %#v", createOpts)
+ log.Printf("[DEBUG] OpenStack LB Member Create Options: %#v", createOpts)
m, err := members.Create(networkingClient, createOpts).Extract()
if err != nil {
return fmt.Errorf("Error creating OpenStack LB member: %s", err)
@@ -86,7 +86,7 @@ func resourceLBMemberV1Create(d *schema.ResourceData, meta interface{}) error {
stateConf := &resource.StateChangeConf{
Pending: []string{"PENDING_CREATE"},
- Target: []string{"ACTIVE"},
+ Target: []string{"ACTIVE", "INACTIVE"},
Refresh: waitForLBMemberActive(networkingClient, m.ID),
Timeout: 2 * time.Minute,
Delay: 5 * time.Second,
@@ -100,6 +100,17 @@ func resourceLBMemberV1Create(d *schema.ResourceData, meta interface{}) error {
d.SetId(m.ID)
+ // Due to the way Gophercloud is currently set up, AdminStateUp must be set post-create
+ updateOpts := members.UpdateOpts{
+ AdminStateUp: d.Get("admin_state_up").(bool),
+ }
+
+ log.Printf("[DEBUG] OpenStack LB Member Update Options: %#v", updateOpts)
+ m, err = members.Update(networkingClient, m.ID, updateOpts).Extract()
+ if err != nil {
+ return fmt.Errorf("Error updating OpenStack LB member: %s", err)
+ }
+
return resourceLBMemberV1Read(d, meta)
}
diff --git a/builtin/providers/openstack/resource_openstack_lb_member_v1_test.go b/builtin/providers/openstack/resource_openstack_lb_member_v1_test.go
index 292659d64a39..fc4ca0baeadc 100644
--- a/builtin/providers/openstack/resource_openstack_lb_member_v1_test.go
+++ b/builtin/providers/openstack/resource_openstack_lb_member_v1_test.go
@@ -109,6 +109,7 @@ var testAccLBV1Member_basic = fmt.Sprintf(`
pool_id = "${openstack_lb_pool_v1.pool_1.id}"
address = "192.168.199.10"
port = 80
+ admin_state_up = true
}`)
var testAccLBV1Member_update = fmt.Sprintf(`
diff --git a/builtin/providers/openstack/resource_openstack_lb_pool_v1_test.go b/builtin/providers/openstack/resource_openstack_lb_pool_v1_test.go
index fcb11b7db4d3..c1fb60c25ee4 100644
--- a/builtin/providers/openstack/resource_openstack_lb_pool_v1_test.go
+++ b/builtin/providers/openstack/resource_openstack_lb_pool_v1_test.go
@@ -237,12 +237,14 @@ var testAccLBV1Pool_fullstack = fmt.Sprintf(`
pool_id = "${openstack_lb_pool_v1.pool_1.id}"
address = "${openstack_compute_instance_v2.instance_1.access_ip_v4}"
port = 80
+ admin_state_up = true
}
resource "openstack_lb_member_v1" "member_2" {
pool_id = "${openstack_lb_pool_v1.pool_1.id}"
address = "${openstack_compute_instance_v2.instance_2.access_ip_v4}"
port = 80
+ admin_state_up = true
}
resource "openstack_lb_vip_v1" "vip_1" {
@@ -251,4 +253,5 @@ var testAccLBV1Pool_fullstack = fmt.Sprintf(`
protocol = "TCP"
port = 80
pool_id = "${openstack_lb_pool_v1.pool_1.id}"
+ admin_state_up = true
}`)
diff --git a/builtin/providers/openstack/resource_openstack_lb_vip_v1_test.go b/builtin/providers/openstack/resource_openstack_lb_vip_v1_test.go
index 0ef369a4e4cd..6a106e1c97e8 100644
--- a/builtin/providers/openstack/resource_openstack_lb_vip_v1_test.go
+++ b/builtin/providers/openstack/resource_openstack_lb_vip_v1_test.go
@@ -116,6 +116,7 @@ var testAccLBV1VIP_basic = fmt.Sprintf(`
protocol = "HTTP"
port = 80
pool_id = "${openstack_lb_pool_v1.pool_1.id}"
+ admin_state_up = true
persistence {
type = "SOURCE_IP"
}
@@ -154,5 +155,6 @@ var testAccLBV1VIP_update = fmt.Sprintf(`
persistence {
type = "SOURCE_IP"
}
+ admin_state_up = true
}`,
OS_REGION_NAME, OS_REGION_NAME, OS_REGION_NAME)
diff --git a/builtin/providers/openstack/resource_openstack_networking_router_route_v2.go b/builtin/providers/openstack/resource_openstack_networking_router_route_v2.go
new file mode 100644
index 000000000000..fbf3bcca4b57
--- /dev/null
+++ b/builtin/providers/openstack/resource_openstack_networking_router_route_v2.go
@@ -0,0 +1,214 @@
+package openstack
+
+import (
+ "fmt"
+ "log"
+
+ "github.com/hashicorp/terraform/helper/schema"
+
+ "github.com/rackspace/gophercloud"
+ "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/layer3/routers"
+)
+
+func resourceNetworkingRouterRouteV2() *schema.Resource {
+ return &schema.Resource{
+ Create: resourceNetworkingRouterRouteV2Create,
+ Read: resourceNetworkingRouterRouteV2Read,
+ Delete: resourceNetworkingRouterRouteV2Delete,
+
+ Schema: map[string]*schema.Schema{
+ "region": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ DefaultFunc: schema.EnvDefaultFunc("OS_REGION_NAME", ""),
+ },
+ "router_id": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ },
+ "destination_cidr": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ },
+ "next_hop": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ },
+ },
+ }
+}
+
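+// resourceNetworkingRouterRouteV2Create takes the per-router lock, reads the router's
+// current route list, appends the requested route if it is not already present, and
+// stores a synthetic ID of the form "<router_id>-route-<destination_cidr>-<next_hop>".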
+func resourceNetworkingRouterRouteV2Create(d *schema.ResourceData, meta interface{}) error {
+
+ routerId := d.Get("router_id").(string)
+ osMutexKV.Lock(routerId)
+ defer osMutexKV.Unlock(routerId)
+
+ destCidr := d.Get("destination_cidr").(string)
+ nextHop := d.Get("next_hop").(string)
+
+ config := meta.(*Config)
+ networkingClient, err := config.networkingV2Client(d.Get("region").(string))
+ if err != nil {
+ return fmt.Errorf("Error creating OpenStack networking client: %s", err)
+ }
+
+ n, err := routers.Get(networkingClient, routerId).Extract()
+ if err != nil {
+ httpError, ok := err.(*gophercloud.UnexpectedResponseCodeError)
+ if !ok {
+ return fmt.Errorf("Error retrieving OpenStack Neutron Router: %s", err)
+ }
+
+ if httpError.Actual == 404 {
+ d.SetId("")
+ return nil
+ }
+ return fmt.Errorf("Error retrieving OpenStack Neutron Router: %s", err)
+ }
+
+ var updateOpts routers.UpdateOpts
+ routeExists := false
+
+ rts := n.Routes
+ for _, r := range rts {
+
+ if r.DestinationCIDR == destCidr && r.NextHop == nextHop {
+ routeExists = true
+ break
+ }
+ }
+
+ if !routeExists {
+
+ if destCidr != "" && nextHop != "" {
+ r := routers.Route{DestinationCIDR: destCidr, NextHop: nextHop}
+ log.Printf("[INFO] Adding route %+v", r)
+ rts = append(rts, r)
+ }
+
+ updateOpts.Routes = rts
+
+ log.Printf("[DEBUG] Updating Router %s with options: %+v", routerId, updateOpts)
+
+ _, err = routers.Update(networkingClient, routerId, updateOpts).Extract()
+ if err != nil {
+ return fmt.Errorf("Error updating OpenStack Neutron Router: %s", err)
+ }
+ d.SetId(fmt.Sprintf("%s-route-%s-%s", routerId, destCidr, nextHop))
+
+ } else {
+ log.Printf("[DEBUG] Router %s has route already", routerId)
+ }
+
+ return resourceNetworkingRouterRouteV2Read(d, meta)
+}
+
+func resourceNetworkingRouterRouteV2Read(d *schema.ResourceData, meta interface{}) error {
+
+ routerId := d.Get("router_id").(string)
+
+ config := meta.(*Config)
+ networkingClient, err := config.networkingV2Client(d.Get("region").(string))
+ if err != nil {
+ return fmt.Errorf("Error creating OpenStack networking client: %s", err)
+ }
+
+ n, err := routers.Get(networkingClient, routerId).Extract()
+ if err != nil {
+ httpError, ok := err.(*gophercloud.UnexpectedResponseCodeError)
+ if !ok {
+ return fmt.Errorf("Error retrieving OpenStack Neutron Router: %s", err)
+ }
+
+ if httpError.Actual == 404 {
+ d.SetId("")
+ return nil
+ }
+ return fmt.Errorf("Error retrieving OpenStack Neutron Router: %s", err)
+ }
+
+ log.Printf("[DEBUG] Retrieved Router %s: %+v", routerId, n)
+
+ destCidr := d.Get("destination_cidr").(string)
+ nextHop := d.Get("next_hop").(string)
+
+ d.Set("next_hop", "")
+ d.Set("destination_cidr", "")
+
+ for _, r := range n.Routes {
+
+ if r.DestinationCIDR == destCidr && r.NextHop == nextHop {
+ d.Set("destination_cidr", destCidr)
+ d.Set("next_hop", nextHop)
+ break
+ }
+ }
+
+ return nil
+}
+
+func resourceNetworkingRouterRouteV2Delete(d *schema.ResourceData, meta interface{}) error {
+
+ routerId := d.Get("router_id").(string)
+ osMutexKV.Lock(routerId)
+ defer osMutexKV.Unlock(routerId)
+
+ config := meta.(*Config)
+
+ networkingClient, err := config.networkingV2Client(d.Get("region").(string))
+ if err != nil {
+ return fmt.Errorf("Error creating OpenStack networking client: %s", err)
+ }
+
+ n, err := routers.Get(networkingClient, routerId).Extract()
+ if err != nil {
+ httpError, ok := err.(*gophercloud.UnexpectedResponseCodeError)
+ if !ok {
+ return fmt.Errorf("Error retrieving OpenStack Neutron Router: %s", err)
+ }
+
+ if httpError.Actual == 404 {
+ return nil
+ }
+ return fmt.Errorf("Error retrieving OpenStack Neutron Router: %s", err)
+ }
+
+ var updateOpts routers.UpdateOpts
+
+ destCidr := d.Get("destination_cidr").(string)
+ nextHop := d.Get("next_hop").(string)
+
+ oldRts := n.Routes
+ var newRts []routers.Route
+
+ for _, r := range oldRts {
+
+ if r.DestinationCIDR != destCidr || r.NextHop != nextHop {
+ newRts = append(newRts, r)
+ }
+ }
+
+ if len(oldRts) != len(newRts) {
+ r := routers.Route{DestinationCIDR: destCidr, NextHop: nextHop}
+ log.Printf("[INFO] Deleting route %+v", r)
+ updateOpts.Routes = newRts
+
+ log.Printf("[DEBUG] Updating Router %s with options: %+v", routerId, updateOpts)
+
+ _, err = routers.Update(networkingClient, routerId, updateOpts).Extract()
+ if err != nil {
+ return fmt.Errorf("Error updating OpenStack Neutron Router: %s", err)
+ }
+ } else {
+ return fmt.Errorf("Route not found on router %s", routerId)
+ }
+
+ return nil
+}
diff --git a/builtin/providers/openstack/resource_openstack_networking_router_route_v2_test.go b/builtin/providers/openstack/resource_openstack_networking_router_route_v2_test.go
new file mode 100644
index 000000000000..44f13869917d
--- /dev/null
+++ b/builtin/providers/openstack/resource_openstack_networking_router_route_v2_test.go
@@ -0,0 +1,324 @@
+package openstack
+
+import (
+ "fmt"
+ "testing"
+
+ "github.com/hashicorp/terraform/helper/resource"
+ "github.com/hashicorp/terraform/terraform"
+
+ "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/layer3/routers"
+ "github.com/rackspace/gophercloud/openstack/networking/v2/networks"
+ "github.com/rackspace/gophercloud/openstack/networking/v2/subnets"
+)
+
+func TestAccNetworkingV2RouterRoute_basic(t *testing.T) {
+ var router routers.Router
+ var network [2]networks.Network
+ var subnet [2]subnets.Subnet
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: testAccNetworkingV2RouterRoute_create,
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckNetworkingV2RouterExists(t, "openstack_networking_router_v2.router_1", &router),
+ testAccCheckNetworkingV2NetworkExists(t, "openstack_networking_network_v2.network_1", &network[0]),
+ testAccCheckNetworkingV2SubnetExists(t, "openstack_networking_subnet_v2.subnet_1", &subnet[0]),
+ testAccCheckNetworkingV2NetworkExists(t, "openstack_networking_network_v2.network_1", &network[1]),
+ testAccCheckNetworkingV2SubnetExists(t, "openstack_networking_subnet_v2.subnet_1", &subnet[1]),
+ testAccCheckNetworkingV2RouterInterfaceExists(t, "openstack_networking_router_interface_v2.int_1"),
+ testAccCheckNetworkingV2RouterInterfaceExists(t, "openstack_networking_router_interface_v2.int_2"),
+ testAccCheckNetworkingV2RouterRouteExists(t, "openstack_networking_router_route_v2.router_route_1"),
+ ),
+ },
+ resource.TestStep{
+ Config: testAccNetworkingV2RouterRoute_update,
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckNetworkingV2RouterRouteExists(t, "openstack_networking_router_route_v2.router_route_1"),
+ testAccCheckNetworkingV2RouterRouteExists(t, "openstack_networking_router_route_v2.router_route_2"),
+ ),
+ },
+ resource.TestStep{
+ Config: testAccNetworkingV2RouterRoute_destroy,
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckNetworkingV2RouterRouteEmpty(t, "openstack_networking_router_v2.router_1"),
+ ),
+ },
+ },
+ })
+}
+
+func testAccCheckNetworkingV2RouterRouteEmpty(t *testing.T, n string) resource.TestCheckFunc {
+ return func(s *terraform.State) error {
+ rs, ok := s.RootModule().Resources[n]
+ if !ok {
+ return fmt.Errorf("Not found: %s", n)
+ }
+
+ if rs.Primary.ID == "" {
+ return fmt.Errorf("No ID is set")
+ }
+
+ config := testAccProvider.Meta().(*Config)
+ networkingClient, err := config.networkingV2Client(OS_REGION_NAME)
+ if err != nil {
+ return fmt.Errorf("(testAccCheckNetworkingV2RouterRouteEmpty) Error creating OpenStack networking client: %s", err)
+ }
+
+ router, err := routers.Get(networkingClient, rs.Primary.ID).Extract()
+ if err != nil {
+ return err
+ }
+
+ if router.ID != rs.Primary.ID {
+ return fmt.Errorf("Router not found")
+ }
+
+ if len(router.Routes) != 0 {
+ return fmt.Errorf("Invalid number of route entries: %d", len(router.Routes))
+ }
+
+ return nil
+ }
+}
+
+func testAccCheckNetworkingV2RouterRouteExists(t *testing.T, n string) resource.TestCheckFunc {
+ return func(s *terraform.State) error {
+ rs, ok := s.RootModule().Resources[n]
+ if !ok {
+ return fmt.Errorf("Not found: %s", n)
+ }
+
+ if rs.Primary.ID == "" {
+ return fmt.Errorf("No ID is set")
+ }
+
+ config := testAccProvider.Meta().(*Config)
+ networkingClient, err := config.networkingV2Client(OS_REGION_NAME)
+ if err != nil {
+ return fmt.Errorf("(testAccCheckNetworkingV2RouterRouteExists) Error creating OpenStack networking client: %s", err)
+ }
+
+ router, err := routers.Get(networkingClient, rs.Primary.Attributes["router_id"]).Extract()
+ if err != nil {
+ return err
+ }
+
+ if router.ID != rs.Primary.Attributes["router_id"] {
+ return fmt.Errorf("Router for route not found")
+ }
+
+ found := false
+ for _, r := range router.Routes {
+ if r.DestinationCIDR == rs.Primary.Attributes["destination_cidr"] && r.NextHop == rs.Primary.Attributes["next_hop"] {
+ found = true
+ }
+ }
+ if !found {
+ return fmt.Errorf("Could not find route for destination CIDR: %s, next hop: %s", rs.Primary.Attributes["destination_cidr"], rs.Primary.Attributes["next_hop"])
+ }
+
+ return nil
+ }
+}
+
+var testAccNetworkingV2RouterRoute_create = fmt.Sprintf(`
+ resource "openstack_networking_router_v2" "router_1" {
+ name = "router_1"
+ admin_state_up = "true"
+ }
+
+ resource "openstack_networking_network_v2" "network_1" {
+ name = "network_1"
+ admin_state_up = "true"
+ }
+
+ resource "openstack_networking_subnet_v2" "subnet_1" {
+ network_id = "${openstack_networking_network_v2.network_1.id}"
+ cidr = "192.168.199.0/24"
+ ip_version = 4
+ }
+
+ resource "openstack_networking_port_v2" "port_1" {
+ name = "port_1"
+ network_id = "${openstack_networking_network_v2.network_1.id}"
+ admin_state_up = "true"
+ fixed_ip {
+ subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}"
+ ip_address = "192.168.199.1"
+ }
+ }
+
+ resource "openstack_networking_router_interface_v2" "int_1" {
+ router_id = "${openstack_networking_router_v2.router_1.id}"
+ port_id = "${openstack_networking_port_v2.port_1.id}"
+ }
+
+ resource "openstack_networking_network_v2" "network_2" {
+ name = "network_2"
+ admin_state_up = "true"
+ }
+
+ resource "openstack_networking_subnet_v2" "subnet_2" {
+ network_id = "${openstack_networking_network_v2.network_2.id}"
+ cidr = "192.168.200.0/24"
+ ip_version = 4
+ }
+
+ resource "openstack_networking_port_v2" "port_2" {
+ name = "port_2"
+ network_id = "${openstack_networking_network_v2.network_2.id}"
+ admin_state_up = "true"
+ fixed_ip {
+ subnet_id = "${openstack_networking_subnet_v2.subnet_2.id}"
+ ip_address = "192.168.200.1"
+ }
+ }
+
+ resource "openstack_networking_router_interface_v2" "int_2" {
+ router_id = "${openstack_networking_router_v2.router_1.id}"
+ port_id = "${openstack_networking_port_v2.port_2.id}"
+ }
+
+ resource "openstack_networking_router_route_v2" "router_route_1" {
+ depends_on = ["openstack_networking_router_interface_v2.int_1"]
+ router_id = "${openstack_networking_router_v2.router_1.id}"
+
+ destination_cidr = "10.0.1.0/24"
+ next_hop = "192.168.199.254"
+ }`)
+
+var testAccNetworkingV2RouterRoute_update = fmt.Sprintf(`
+ resource "openstack_networking_router_v2" "router_1" {
+ name = "router_1"
+ admin_state_up = "true"
+ }
+
+ resource "openstack_networking_network_v2" "network_1" {
+ name = "network_1"
+ admin_state_up = "true"
+ }
+
+ resource "openstack_networking_subnet_v2" "subnet_1" {
+ network_id = "${openstack_networking_network_v2.network_1.id}"
+ cidr = "192.168.199.0/24"
+ ip_version = 4
+ }
+
+ resource "openstack_networking_port_v2" "port_1" {
+ name = "port_1"
+ network_id = "${openstack_networking_network_v2.network_1.id}"
+ admin_state_up = "true"
+ fixed_ip {
+ subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}"
+ ip_address = "192.168.199.1"
+ }
+ }
+
+ resource "openstack_networking_router_interface_v2" "int_1" {
+ router_id = "${openstack_networking_router_v2.router_1.id}"
+ port_id = "${openstack_networking_port_v2.port_1.id}"
+ }
+
+ resource "openstack_networking_network_v2" "network_2" {
+ name = "network_2"
+ admin_state_up = "true"
+ }
+
+ resource "openstack_networking_subnet_v2" "subnet_2" {
+ network_id = "${openstack_networking_network_v2.network_2.id}"
+ cidr = "192.168.200.0/24"
+ ip_version = 4
+ }
+
+ resource "openstack_networking_port_v2" "port_2" {
+ name = "port_2"
+ network_id = "${openstack_networking_network_v2.network_2.id}"
+ admin_state_up = "true"
+ fixed_ip {
+ subnet_id = "${openstack_networking_subnet_v2.subnet_2.id}"
+ ip_address = "192.168.200.1"
+ }
+ }
+
+ resource "openstack_networking_router_interface_v2" "int_2" {
+ router_id = "${openstack_networking_router_v2.router_1.id}"
+ port_id = "${openstack_networking_port_v2.port_2.id}"
+ }
+
+ resource "openstack_networking_router_route_v2" "router_route_1" {
+ depends_on = ["openstack_networking_router_interface_v2.int_1"]
+ router_id = "${openstack_networking_router_v2.router_1.id}"
+
+ destination_cidr = "10.0.1.0/24"
+ next_hop = "192.168.199.254"
+ }
+
+ resource "openstack_networking_router_route_v2" "router_route_2" {
+ depends_on = ["openstack_networking_router_interface_v2.int_2"]
+ router_id = "${openstack_networking_router_v2.router_1.id}"
+
+ destination_cidr = "10.0.2.0/24"
+ next_hop = "192.168.200.254"
+ }`)
+
+var testAccNetworkingV2RouterRoute_destroy = fmt.Sprintf(`
+ resource "openstack_networking_router_v2" "router_1" {
+ name = "router_1"
+ admin_state_up = "true"
+ }
+
+ resource "openstack_networking_network_v2" "network_1" {
+ name = "network_1"
+ admin_state_up = "true"
+ }
+
+ resource "openstack_networking_subnet_v2" "subnet_1" {
+ network_id = "${openstack_networking_network_v2.network_1.id}"
+ cidr = "192.168.199.0/24"
+ ip_version = 4
+ }
+
+ resource "openstack_networking_port_v2" "port_1" {
+ name = "port_1"
+ network_id = "${openstack_networking_network_v2.network_1.id}"
+ admin_state_up = "true"
+ fixed_ip {
+ subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}"
+ ip_address = "192.168.199.1"
+ }
+ }
+
+ resource "openstack_networking_router_interface_v2" "int_1" {
+ router_id = "${openstack_networking_router_v2.router_1.id}"
+ port_id = "${openstack_networking_port_v2.port_1.id}"
+ }
+
+ resource "openstack_networking_network_v2" "network_2" {
+ name = "network_2"
+ admin_state_up = "true"
+ }
+
+ resource "openstack_networking_subnet_v2" "subnet_2" {
+ network_id = "${openstack_networking_network_v2.network_2.id}"
+ cidr = "192.168.200.0/24"
+ ip_version = 4
+ }
+
+ resource "openstack_networking_port_v2" "port_2" {
+ name = "port_2"
+ network_id = "${openstack_networking_network_v2.network_2.id}"
+ admin_state_up = "true"
+ fixed_ip {
+ subnet_id = "${openstack_networking_subnet_v2.subnet_2.id}"
+ ip_address = "192.168.200.1"
+ }
+ }
+
+ resource "openstack_networking_router_interface_v2" "int_2" {
+ router_id = "${openstack_networking_router_v2.router_1.id}"
+ port_id = "${openstack_networking_port_v2.port_2.id}"
+ }`)
diff --git a/builtin/providers/openstack/resource_openstack_networking_router_v2.go b/builtin/providers/openstack/resource_openstack_networking_router_v2.go
index 7b5f3b8c25a8..f72000a9f254 100644
--- a/builtin/providers/openstack/resource_openstack_networking_router_v2.go
+++ b/builtin/providers/openstack/resource_openstack_networking_router_v2.go
@@ -54,10 +54,59 @@ func resourceNetworkingRouterV2() *schema.Resource {
ForceNew: true,
Computed: true,
},
+ "value_specs": &schema.Schema{
+ Type: schema.TypeMap,
+ Optional: true,
+ ForceNew: true,
+ },
},
}
}
+// RouterCreateOpts contains all the values needed to create a new router. There are
+// no required values.
+type RouterCreateOpts struct {
+ Name string
+ AdminStateUp *bool
+ Distributed *bool
+ TenantID string
+ GatewayInfo *routers.GatewayInfo
+ ValueSpecs map[string]string
+}
+
+// ToRouterCreateMap casts a RouterCreateOpts struct to a map.
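+// Value specs are merged into the router body verbatim, so (as an illustration with a
+// made-up key) Name "router_1" plus ValueSpecs{"some_vendor_extension": "enabled"}
+// becomes {"router": {"name": "router_1", "some_vendor_extension": "enabled"}}.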
+func (opts RouterCreateOpts) ToRouterCreateMap() (map[string]interface{}, error) {
+ r := make(map[string]interface{})
+
+ if gophercloud.MaybeString(opts.Name) != nil {
+ r["name"] = opts.Name
+ }
+
+ if opts.AdminStateUp != nil {
+ r["admin_state_up"] = opts.AdminStateUp
+ }
+
+ if opts.Distributed != nil {
+ r["distributed"] = opts.Distributed
+ }
+
+ if gophercloud.MaybeString(opts.TenantID) != nil {
+ r["tenant_id"] = opts.TenantID
+ }
+
+ if opts.GatewayInfo != nil {
+ r["external_gateway_info"] = opts.GatewayInfo
+ }
+
+ if opts.ValueSpecs != nil {
+ for k, v := range opts.ValueSpecs {
+ r[k] = v
+ }
+ }
+
+ return map[string]interface{}{"router": r}, nil
+}
+
func resourceNetworkingRouterV2Create(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
networkingClient, err := config.networkingV2Client(d.Get("region").(string))
@@ -65,9 +114,10 @@ func resourceNetworkingRouterV2Create(d *schema.ResourceData, meta interface{})
return fmt.Errorf("Error creating OpenStack networking client: %s", err)
}
- createOpts := routers.CreateOpts{
- Name: d.Get("name").(string),
- TenantID: d.Get("tenant_id").(string),
+ createOpts := RouterCreateOpts{
+ Name: d.Get("name").(string),
+ TenantID: d.Get("tenant_id").(string),
+ ValueSpecs: routerValueSpecs(d),
}
if asuRaw, ok := d.GetOk("admin_state_up"); ok {
@@ -145,6 +195,10 @@ func resourceNetworkingRouterV2Read(d *schema.ResourceData, meta interface{}) er
}
func resourceNetworkingRouterV2Update(d *schema.ResourceData, meta interface{}) error {
+ routerId := d.Id()
+ osMutexKV.Lock(routerId)
+ defer osMutexKV.Unlock(routerId)
+
config := meta.(*Config)
networkingClient, err := config.networkingV2Client(d.Get("region").(string))
if err != nil {
@@ -239,3 +293,11 @@ func waitForRouterDelete(networkingClient *gophercloud.ServiceClient, routerId s
return r, "ACTIVE", nil
}
}
+
+func routerValueSpecs(d *schema.ResourceData) map[string]string {
+ m := make(map[string]string)
+ for key, val := range d.Get("value_specs").(map[string]interface{}) {
+ m[key] = val.(string)
+ }
+ return m
+}
diff --git a/builtin/providers/openstack/resource_openstack_networking_secgroup_rule_v2.go b/builtin/providers/openstack/resource_openstack_networking_secgroup_rule_v2.go
new file mode 100644
index 000000000000..598813a3897a
--- /dev/null
+++ b/builtin/providers/openstack/resource_openstack_networking_secgroup_rule_v2.go
@@ -0,0 +1,209 @@
+package openstack
+
+import (
+ "fmt"
+ "log"
+ "time"
+
+ "github.com/hashicorp/terraform/helper/resource"
+ "github.com/hashicorp/terraform/helper/schema"
+
+ "github.com/rackspace/gophercloud"
+ "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/security/rules"
+)
+
+func resourceNetworkingSecGroupRuleV2() *schema.Resource {
+ return &schema.Resource{
+ Create: resourceNetworkingSecGroupRuleV2Create,
+ Read: resourceNetworkingSecGroupRuleV2Read,
+ Delete: resourceNetworkingSecGroupRuleV2Delete,
+
+ Schema: map[string]*schema.Schema{
+ "region": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ DefaultFunc: schema.EnvDefaultFunc("OS_REGION_NAME", ""),
+ },
+ "direction": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ },
+ "ethertype": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ },
+ "port_range_min": &schema.Schema{
+ Type: schema.TypeInt,
+ Optional: true,
+ ForceNew: true,
+ Computed: true,
+ },
+ "port_range_max": &schema.Schema{
+ Type: schema.TypeInt,
+ Optional: true,
+ ForceNew: true,
+ Computed: true,
+ },
+ "protocol": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Computed: true,
+ },
+ "remote_group_id": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Computed: true,
+ },
+ "remote_ip_prefix": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Computed: true,
+ },
+ "security_group_id": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ },
+ "tenant_id": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Computed: true,
+ },
+ },
+ }
+}
+
+func resourceNetworkingSecGroupRuleV2Create(d *schema.ResourceData, meta interface{}) error {
+
+ config := meta.(*Config)
+ networkingClient, err := config.networkingV2Client(d.Get("region").(string))
+ if err != nil {
+ return fmt.Errorf("Error creating OpenStack networking client: %s", err)
+ }
+
+ portRangeMin := d.Get("port_range_min").(int)
+ portRangeMax := d.Get("port_range_max").(int)
+ protocol := d.Get("protocol").(string)
+
+ if protocol == "" {
+ if portRangeMin != 0 || portRangeMax != 0 {
+ return fmt.Errorf("A protocol must be specified when using port_range_min and port_range_max")
+ }
+ }
+
+ opts := rules.CreateOpts{
+ Direction: d.Get("direction").(string),
+ EtherType: d.Get("ethertype").(string),
+ SecGroupID: d.Get("security_group_id").(string),
+ PortRangeMin: d.Get("port_range_min").(int),
+ PortRangeMax: d.Get("port_range_max").(int),
+ Protocol: d.Get("protocol").(string),
+ RemoteGroupID: d.Get("remote_group_id").(string),
+ RemoteIPPrefix: d.Get("remote_ip_prefix").(string),
+ TenantID: d.Get("tenant_id").(string),
+ }
+
+ log.Printf("[DEBUG] Create OpenStack Neutron security group: %#v", opts)
+
+ security_group_rule, err := rules.Create(networkingClient, opts).Extract()
+ if err != nil {
+ return err
+ }
+
+ log.Printf("[DEBUG] OpenStack Neutron Security Group Rule created: %#v", security_group_rule)
+
+ d.SetId(security_group_rule.ID)
+
+ return resourceNetworkingSecGroupRuleV2Read(d, meta)
+}
+
+func resourceNetworkingSecGroupRuleV2Read(d *schema.ResourceData, meta interface{}) error {
+ log.Printf("[DEBUG] Retrieve information about security group rule: %s", d.Id())
+
+ config := meta.(*Config)
+ networkingClient, err := config.networkingV2Client(d.Get("region").(string))
+ if err != nil {
+ return fmt.Errorf("Error creating OpenStack networking client: %s", err)
+ }
+
+ security_group_rule, err := rules.Get(networkingClient, d.Id()).Extract()
+
+ if err != nil {
+ return CheckDeleted(d, err, "OpenStack Security Group Rule")
+ }
+
+ d.Set("protocol", security_group_rule.Protocol)
+ d.Set("port_range_min", security_group_rule.PortRangeMin)
+ d.Set("port_range_max", security_group_rule.PortRangeMax)
+ d.Set("remote_group_id", security_group_rule.RemoteGroupID)
+ d.Set("remote_ip_prefix", security_group_rule.RemoteIPPrefix)
+ d.Set("tenant_id", security_group_rule.TenantID)
+ return nil
+}
+
+func resourceNetworkingSecGroupRuleV2Delete(d *schema.ResourceData, meta interface{}) error {
+ log.Printf("[DEBUG] Destroy security group rule: %s", d.Id())
+
+ config := meta.(*Config)
+ networkingClient, err := config.networkingV2Client(d.Get("region").(string))
+ if err != nil {
+ return fmt.Errorf("Error creating OpenStack networking client: %s", err)
+ }
+
+ stateConf := &resource.StateChangeConf{
+ Pending: []string{"ACTIVE"},
+ Target: []string{"DELETED"},
+ Refresh: waitForSecGroupRuleDelete(networkingClient, d.Id()),
+ Timeout: 2 * time.Minute,
+ Delay: 5 * time.Second,
+ MinTimeout: 3 * time.Second,
+ }
+
+ _, err = stateConf.WaitForState()
+ if err != nil {
+ return fmt.Errorf("Error deleting OpenStack Neutron Security Group Rule: %s", err)
+ }
+
+ d.SetId("")
+ return err
+}
+
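+// waitForSecGroupRuleDelete returns a StateRefreshFunc that looks up the rule, issues the delete, and treats a 404 from either call as confirmation that the rule is gone.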
+func waitForSecGroupRuleDelete(networkingClient *gophercloud.ServiceClient, secGroupRuleId string) resource.StateRefreshFunc {
+ return func() (interface{}, string, error) {
+ log.Printf("[DEBUG] Attempting to delete OpenStack Security Group Rule %s.\n", secGroupRuleId)
+
+ r, err := rules.Get(networkingClient, secGroupRuleId).Extract()
+ if err != nil {
+ errCode, ok := err.(*gophercloud.UnexpectedResponseCodeError)
+ if !ok {
+ return r, "ACTIVE", err
+ }
+ if errCode.Actual == 404 {
+ log.Printf("[DEBUG] Successfully deleted OpenStack Neutron Security Group Rule %s", secGroupRuleId)
+ return r, "DELETED", nil
+ }
+ }
+
+ err = rules.Delete(networkingClient, secGroupRuleId).ExtractErr()
+ if err != nil {
+ errCode, ok := err.(*gophercloud.UnexpectedResponseCodeError)
+ if !ok {
+ return r, "ACTIVE", err
+ }
+ if errCode.Actual == 404 {
+ log.Printf("[DEBUG] Successfully deleted OpenStack Neutron Security Group Rule %s", secGroupRuleId)
+ return r, "DELETED", nil
+ }
+ }
+
+ log.Printf("[DEBUG] OpenStack Neutron Security Group Rule %s still active.\n", secGroupRuleId)
+ return r, "ACTIVE", nil
+ }
+}
diff --git a/builtin/providers/openstack/resource_openstack_networking_secgroup_rule_v2_test.go b/builtin/providers/openstack/resource_openstack_networking_secgroup_rule_v2_test.go
new file mode 100644
index 000000000000..5ea0cc3cd44b
--- /dev/null
+++ b/builtin/providers/openstack/resource_openstack_networking_secgroup_rule_v2_test.go
@@ -0,0 +1,117 @@
+package openstack
+
+import (
+ "fmt"
+ "testing"
+
+ "github.com/hashicorp/terraform/helper/resource"
+ "github.com/hashicorp/terraform/terraform"
+
+ "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/security/groups"
+ "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/security/rules"
+)
+
+func TestAccNetworkingV2SecGroupRule_basic(t *testing.T) {
+ var security_group_1 groups.SecGroup
+ var security_group_2 groups.SecGroup
+ var security_group_rule_1 rules.SecGroupRule
+ var security_group_rule_2 rules.SecGroupRule
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckNetworkingV2SecGroupRuleDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: testAccNetworkingV2SecGroupRule_basic,
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckNetworkingV2SecGroupExists(t, "openstack_networking_secgroup_v2.sg_foo", &security_group_1),
+ testAccCheckNetworkingV2SecGroupExists(t, "openstack_networking_secgroup_v2.sg_bar", &security_group_2),
+ testAccCheckNetworkingV2SecGroupRuleExists(t, "openstack_networking_secgroup_rule_v2.sr_foo", &security_group_rule_1),
+ testAccCheckNetworkingV2SecGroupRuleExists(t, "openstack_networking_secgroup_rule_v2.sr_bar", &security_group_rule_2),
+ ),
+ },
+ },
+ })
+}
+
+func testAccCheckNetworkingV2SecGroupRuleDestroy(s *terraform.State) error {
+ config := testAccProvider.Meta().(*Config)
+ networkingClient, err := config.networkingV2Client(OS_REGION_NAME)
+ if err != nil {
+ return fmt.Errorf("(testAccCheckNetworkingV2SecGroupRuleDestroy) Error creating OpenStack networking client: %s", err)
+ }
+
+ for _, rs := range s.RootModule().Resources {
+ if rs.Type != "openstack_networking_secgroup_rule_v2" {
+ continue
+ }
+
+ _, err := rules.Get(networkingClient, rs.Primary.ID).Extract()
+ if err == nil {
+ return fmt.Errorf("Security group rule still exists")
+ }
+ }
+
+ return nil
+}
+
+func testAccCheckNetworkingV2SecGroupRuleExists(t *testing.T, n string, security_group_rule *rules.SecGroupRule) resource.TestCheckFunc {
+ return func(s *terraform.State) error {
+ rs, ok := s.RootModule().Resources[n]
+ if !ok {
+ return fmt.Errorf("Not found: %s", n)
+ }
+
+ if rs.Primary.ID == "" {
+ return fmt.Errorf("No ID is set")
+ }
+
+ config := testAccProvider.Meta().(*Config)
+ networkingClient, err := config.networkingV2Client(OS_REGION_NAME)
+ if err != nil {
+ return fmt.Errorf("(testAccCheckNetworkingV2SecGroupRuleExists) Error creating OpenStack networking client: %s", err)
+ }
+
+ found, err := rules.Get(networkingClient, rs.Primary.ID).Extract()
+ if err != nil {
+ return err
+ }
+
+ if found.ID != rs.Primary.ID {
+ return fmt.Errorf("Security group rule not found")
+ }
+
+ *security_group_rule = *found
+
+ return nil
+ }
+}
+
+var testAccNetworkingV2SecGroupRule_basic = fmt.Sprintf(`
+ resource "openstack_networking_secgroup_v2" "sg_foo" {
+ name = "security_group_1"
+ description = "terraform security group rule acceptance test"
+ }
+ resource "openstack_networking_secgroup_v2" "sg_bar" {
+ name = "security_group_2"
+ description = "terraform security group rule acceptance test"
+ }
+ resource "openstack_networking_secgroup_rule_v2" "sr_foo" {
+ direction = "ingress"
+ ethertype = "IPv4"
+ port_range_max = 22
+ port_range_min = 22
+ protocol = "tcp"
+ remote_ip_prefix = "0.0.0.0/0"
+ security_group_id = "${openstack_networking_secgroup_v2.sg_foo.id}"
+ }
+ resource "openstack_networking_secgroup_rule_v2" "sr_bar" {
+ direction = "ingress"
+ ethertype = "IPv4"
+ port_range_max = 80
+ port_range_min = 80
+ protocol = "tcp"
+ remote_group_id = "${openstack_networking_secgroup_v2.sg_foo.id}"
+ security_group_id = "${openstack_networking_secgroup_v2.sg_bar.id}"
+ }`)
diff --git a/builtin/providers/openstack/resource_openstack_networking_secgroup_v2.go b/builtin/providers/openstack/resource_openstack_networking_secgroup_v2.go
new file mode 100644
index 000000000000..f08e9affc431
--- /dev/null
+++ b/builtin/providers/openstack/resource_openstack_networking_secgroup_v2.go
@@ -0,0 +1,155 @@
+package openstack
+
+import (
+ "fmt"
+ "log"
+ "time"
+
+ "github.com/hashicorp/terraform/helper/resource"
+ "github.com/hashicorp/terraform/helper/schema"
+
+ "github.com/rackspace/gophercloud"
+ "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/security/groups"
+)
+
+func resourceNetworkingSecGroupV2() *schema.Resource {
+ return &schema.Resource{
+ Create: resourceNetworkingSecGroupV2Create,
+ Read: resourceNetworkingSecGroupV2Read,
+ Delete: resourceNetworkingSecGroupV2Delete,
+
+ Schema: map[string]*schema.Schema{
+ "region": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ DefaultFunc: schema.EnvDefaultFunc("OS_REGION_NAME", ""),
+ },
+ "name": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ },
+ "description": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Computed: true,
+ },
+ "tenant_id": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Computed: true,
+ },
+ },
+ }
+}
+
+func resourceNetworkingSecGroupV2Create(d *schema.ResourceData, meta interface{}) error {
+
+ config := meta.(*Config)
+ networkingClient, err := config.networkingV2Client(d.Get("region").(string))
+ if err != nil {
+ return fmt.Errorf("Error creating OpenStack networking client: %s", err)
+ }
+
+ opts := groups.CreateOpts{
+ Name: d.Get("name").(string),
+ Description: d.Get("description").(string),
+ TenantID: d.Get("tenant_id").(string),
+ }
+
+ log.Printf("[DEBUG] Create OpenStack Neutron Security Group: %#v", opts)
+
+ security_group, err := groups.Create(networkingClient, opts).Extract()
+ if err != nil {
+ return err
+ }
+
+ log.Printf("[DEBUG] OpenStack Neutron Security Group created: %#v", security_group)
+
+ d.SetId(security_group.ID)
+
+ return resourceNetworkingSecGroupV2Read(d, meta)
+}
+
+func resourceNetworkingSecGroupV2Read(d *schema.ResourceData, meta interface{}) error {
+ log.Printf("[DEBUG] Retrieve information about security group: %s", d.Id())
+
+ config := meta.(*Config)
+ networkingClient, err := config.networkingV2Client(d.Get("region").(string))
+ if err != nil {
+ return fmt.Errorf("Error creating OpenStack networking client: %s", err)
+ }
+
+ security_group, err := groups.Get(networkingClient, d.Id()).Extract()
+
+ if err != nil {
+ return CheckDeleted(d, err, "OpenStack Neutron Security Group")
+ }
+
+ d.Set("description", security_group.Description)
+ d.Set("tenant_id", security_group.TenantID)
+ return nil
+}
+
+func resourceNetworkingSecGroupV2Delete(d *schema.ResourceData, meta interface{}) error {
+ log.Printf("[DEBUG] Destroy security group: %s", d.Id())
+
+ config := meta.(*Config)
+ networkingClient, err := config.networkingV2Client(d.Get("region").(string))
+ if err != nil {
+ return fmt.Errorf("Error creating OpenStack networking client: %s", err)
+ }
+
+ stateConf := &resource.StateChangeConf{
+ Pending: []string{"ACTIVE"},
+ Target: []string{"DELETED"},
+ Refresh: waitForSecGroupDelete(networkingClient, d.Id()),
+ Timeout: 2 * time.Minute,
+ Delay: 5 * time.Second,
+ MinTimeout: 3 * time.Second,
+ }
+
+ _, err = stateConf.WaitForState()
+ if err != nil {
+ return fmt.Errorf("Error deleting OpenStack Neutron Security Group: %s", err)
+ }
+
+ d.SetId("")
+ return err
+}
+
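+// waitForSecGroupDelete mirrors waitForSecGroupRuleDelete for security groups: poll, delete, and treat a 404 as successful removal.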
+func waitForSecGroupDelete(networkingClient *gophercloud.ServiceClient, secGroupId string) resource.StateRefreshFunc {
+ return func() (interface{}, string, error) {
+ log.Printf("[DEBUG] Attempting to delete OpenStack Security Group %s.\n", secGroupId)
+
+ r, err := groups.Get(networkingClient, secGroupId).Extract()
+ if err != nil {
+ errCode, ok := err.(*gophercloud.UnexpectedResponseCodeError)
+ if !ok {
+ return r, "ACTIVE", err
+ }
+ if errCode.Actual == 404 {
+ log.Printf("[DEBUG] Successfully deleted OpenStack Neutron Security Group %s", secGroupId)
+ return r, "DELETED", nil
+ }
+ }
+
+ err = groups.Delete(networkingClient, secGroupId).ExtractErr()
+ if err != nil {
+ errCode, ok := err.(*gophercloud.UnexpectedResponseCodeError)
+ if !ok {
+ return r, "ACTIVE", err
+ }
+ if errCode.Actual == 404 {
+ log.Printf("[DEBUG] Successfully deleted OpenStack Neutron Security Group %s", secGroupId)
+ return r, "DELETED", nil
+ }
+ }
+
+ log.Printf("[DEBUG] OpenStack Neutron Security Group %s still active.\n", secGroupId)
+ return r, "ACTIVE", nil
+ }
+}
diff --git a/builtin/providers/openstack/resource_openstack_networking_secgroup_v2_test.go b/builtin/providers/openstack/resource_openstack_networking_secgroup_v2_test.go
new file mode 100644
index 000000000000..198d74a5c23e
--- /dev/null
+++ b/builtin/providers/openstack/resource_openstack_networking_secgroup_v2_test.go
@@ -0,0 +1,100 @@
+package openstack
+
+import (
+ "fmt"
+ "testing"
+
+ "github.com/hashicorp/terraform/helper/resource"
+ "github.com/hashicorp/terraform/terraform"
+
+ "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/security/groups"
+)
+
+func TestAccNetworkingV2SecGroup_basic(t *testing.T) {
+ var security_group groups.SecGroup
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckNetworkingV2SecGroupDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: testAccNetworkingV2SecGroup_basic,
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckNetworkingV2SecGroupExists(t, "openstack_networking_secgroup_v2.foo", &security_group),
+ ),
+ },
+ resource.TestStep{
+ Config: testAccNetworkingV2SecGroup_update,
+ Check: resource.ComposeTestCheckFunc(
+ resource.TestCheckResourceAttr("openstack_networking_secgroup_v2.foo", "name", "security_group_2"),
+ ),
+ },
+ },
+ })
+}
+
+func testAccCheckNetworkingV2SecGroupDestroy(s *terraform.State) error {
+ config := testAccProvider.Meta().(*Config)
+ networkingClient, err := config.networkingV2Client(OS_REGION_NAME)
+ if err != nil {
+ return fmt.Errorf("(testAccCheckNetworkingV2SecGroupDestroy) Error creating OpenStack networking client: %s", err)
+ }
+
+ for _, rs := range s.RootModule().Resources {
+ if rs.Type != "openstack_networking_secgroup_v2" {
+ continue
+ }
+
+ _, err := groups.Get(networkingClient, rs.Primary.ID).Extract()
+ if err == nil {
+ return fmt.Errorf("Security group still exists")
+ }
+ }
+
+ return nil
+}
+
+func testAccCheckNetworkingV2SecGroupExists(t *testing.T, n string, security_group *groups.SecGroup) resource.TestCheckFunc {
+ return func(s *terraform.State) error {
+ rs, ok := s.RootModule().Resources[n]
+ if !ok {
+ return fmt.Errorf("Not found: %s", n)
+ }
+
+ if rs.Primary.ID == "" {
+ return fmt.Errorf("No ID is set")
+ }
+
+ config := testAccProvider.Meta().(*Config)
+ networkingClient, err := config.networkingV2Client(OS_REGION_NAME)
+ if err != nil {
+ return fmt.Errorf("(testAccCheckNetworkingV2SecGroupExists) Error creating OpenStack networking client: %s", err)
+ }
+
+ found, err := groups.Get(networkingClient, rs.Primary.ID).Extract()
+ if err != nil {
+ return err
+ }
+
+ if found.ID != rs.Primary.ID {
+ return fmt.Errorf("Security group not found")
+ }
+
+ *security_group = *found
+
+ return nil
+ }
+}
+
+var testAccNetworkingV2SecGroup_basic = fmt.Sprintf(`
+ resource "openstack_networking_secgroup_v2" "foo" {
+ name = "security_group"
+ description = "terraform security group acceptance test"
+ }`)
+
+var testAccNetworkingV2SecGroup_update = fmt.Sprintf(`
+ resource "openstack_networking_secgroup_v2" "foo" {
+ name = "security_group_2"
+ description = "terraform security group acceptance test"
+ }`)
diff --git a/builtin/providers/postgresql/config.go b/builtin/providers/postgresql/config.go
index 8bf7b2daa512..3d80ea6a15a6 100644
--- a/builtin/providers/postgresql/config.go
+++ b/builtin/providers/postgresql/config.go
@@ -13,6 +13,7 @@ type Config struct {
Port int
Username string
Password string
+ SslMode string
}
// Client struct holding connection string
@@ -23,7 +24,7 @@ type Client struct {
//NewClient returns new client config
func (c *Config) NewClient() (*Client, error) {
- connStr := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=postgres", c.Host, c.Port, c.Username, c.Password)
+ connStr := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=postgres sslmode=%s", c.Host, c.Port, c.Username, c.Password, c.SslMode)
client := Client{
connStr: connStr,
diff --git a/builtin/providers/postgresql/provider.go b/builtin/providers/postgresql/provider.go
index c048ec3ece76..308c11f61678 100644
--- a/builtin/providers/postgresql/provider.go
+++ b/builtin/providers/postgresql/provider.go
@@ -35,6 +35,12 @@ func Provider() terraform.ResourceProvider {
DefaultFunc: schema.EnvDefaultFunc("POSTGRESQL_PASSWORD", nil),
Description: "Password for postgresql server connection",
},
+ "ssl_mode": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ Default: "prefer",
+ Description: "SSL mode to use for postgresql server connections",
+ },
},
ResourcesMap: map[string]*schema.Resource{
@@ -52,6 +58,7 @@ func providerConfigure(d *schema.ResourceData) (interface{}, error) {
Port: d.Get("port").(int),
Username: d.Get("username").(string),
Password: d.Get("password").(string),
+ SslMode: d.Get("ssl_mode").(string),
}
client, err := config.NewClient()
diff --git a/builtin/providers/softlayer/config.go b/builtin/providers/softlayer/config.go
new file mode 100644
index 000000000000..8fb9d77baa3c
--- /dev/null
+++ b/builtin/providers/softlayer/config.go
@@ -0,0 +1,39 @@
+package softlayer
+
+import (
+ "log"
+
+ slclient "github.com/maximilien/softlayer-go/client"
+ softlayer "github.com/maximilien/softlayer-go/softlayer"
+)
+
+type Config struct {
+ Username string
+ ApiKey string
+}
+
+type Client struct {
+ virtualGuestService softlayer.SoftLayer_Virtual_Guest_Service
+ sshKeyService softlayer.SoftLayer_Security_Ssh_Key_Service
+ productOrderService softlayer.SoftLayer_Product_Order_Service
+}
+
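+// Client initializes the SoftLayer service clients (virtual guest and SSH key services) used by the provider's resources.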
+func (c *Config) Client() (*Client, error) {
+ slc := slclient.NewSoftLayerClient(c.Username, c.ApiKey)
+ virtualGuestService, err := slc.GetSoftLayer_Virtual_Guest_Service()
+
+ if err != nil {
+ return nil, err
+ }
+
+ sshKeyService, err := slc.GetSoftLayer_Security_Ssh_Key_Service()
+
+ if err != nil {
+ return nil, err
+ }
+
+ client := &Client{
+ virtualGuestService: virtualGuestService,
+ sshKeyService: sshKeyService,
+ }
+
+ log.Println("[INFO] Created SoftLayer client")
+
+ return client, nil
+}
diff --git a/builtin/providers/softlayer/provider.go b/builtin/providers/softlayer/provider.go
new file mode 100644
index 000000000000..ceb62425cf9c
--- /dev/null
+++ b/builtin/providers/softlayer/provider.go
@@ -0,0 +1,41 @@
+package softlayer
+
+import (
+ "github.com/hashicorp/terraform/helper/schema"
+ "github.com/hashicorp/terraform/terraform"
+)
+
+func Provider() terraform.ResourceProvider {
+ return &schema.Provider{
+ Schema: map[string]*schema.Schema{
+ "username": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ DefaultFunc: schema.EnvDefaultFunc("SOFTLAYER_USERNAME", nil),
+ Description: "The user name for SoftLayer API operations.",
+ },
+ "api_key": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ DefaultFunc: schema.EnvDefaultFunc("SOFTLAYER_API_KEY", nil),
+ Description: "The API key for SoftLayer API operations.",
+ },
+ },
+
+ ResourcesMap: map[string]*schema.Resource{
+ "softlayer_virtual_guest": resourceSoftLayerVirtualGuest(),
+ "softlayer_ssh_key": resourceSoftLayerSSHKey(),
+ },
+
+ ConfigureFunc: providerConfigure,
+ }
+}
+
+func providerConfigure(d *schema.ResourceData) (interface{}, error) {
+ config := Config{
+ Username: d.Get("username").(string),
+ ApiKey: d.Get("api_key").(string),
+ }
+
+ return config.Client()
+}
diff --git a/builtin/providers/softlayer/provider_test.go b/builtin/providers/softlayer/provider_test.go
new file mode 100644
index 000000000000..5853651934d7
--- /dev/null
+++ b/builtin/providers/softlayer/provider_test.go
@@ -0,0 +1,38 @@
+package softlayer
+
+import (
+ "os"
+ "testing"
+
+ "github.com/hashicorp/terraform/helper/schema"
+ "github.com/hashicorp/terraform/terraform"
+)
+
+var testAccProviders map[string]terraform.ResourceProvider
+var testAccProvider *schema.Provider
+
+func init() {
+ testAccProvider = Provider().(*schema.Provider)
+ testAccProviders = map[string]terraform.ResourceProvider{
+ "softlayer": testAccProvider,
+ }
+}
+
+func TestProvider(t *testing.T) {
+ if err := Provider().(*schema.Provider).InternalValidate(); err != nil {
+ t.Fatalf("err: %s", err)
+ }
+}
+
+func TestProvider_impl(t *testing.T) {
+ var _ terraform.ResourceProvider = Provider()
+}
+
+func testAccPreCheck(t *testing.T) {
+ if v := os.Getenv("SOFTLAYER_USERNAME"); v == "" {
+ t.Fatal("SOFTLAYER_USERNAME must be set for acceptance tests")
+ }
+ if v := os.Getenv("SOFTLAYER_API_KEY"); v == "" {
+ t.Fatal("SOFTLAYER_API_KEY must be set for acceptance tests")
+ }
+}
diff --git a/builtin/providers/softlayer/resource_softlayer_ssh_key.go b/builtin/providers/softlayer/resource_softlayer_ssh_key.go
new file mode 100644
index 000000000000..d03fb7f3bd99
--- /dev/null
+++ b/builtin/providers/softlayer/resource_softlayer_ssh_key.go
@@ -0,0 +1,159 @@
+package softlayer
+
+import (
+ "fmt"
+ "log"
+ "strconv"
+ "strings"
+
+ "github.com/hashicorp/terraform/helper/schema"
+ datatypes "github.com/maximilien/softlayer-go/data_types"
+)
+
+func resourceSoftLayerSSHKey() *schema.Resource {
+ return &schema.Resource{
+ Create: resourceSoftLayerSSHKeyCreate,
+ Read: resourceSoftLayerSSHKeyRead,
+ Update: resourceSoftLayerSSHKeyUpdate,
+ Delete: resourceSoftLayerSSHKeyDelete,
+ Exists: resourceSoftLayerSSHKeyExists,
+
+ Schema: map[string]*schema.Schema{
+ "id": &schema.Schema{
+ Type: schema.TypeInt,
+ Computed: true,
+ },
+
+ "name": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ },
+
+ "public_key": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ },
+
+ "fingerprint": &schema.Schema{
+ Type: schema.TypeString,
+ Computed: true,
+ },
+
+ "notes": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ Default: nil,
+ },
+ },
+ }
+}
+
+func resourceSoftLayerSSHKeyCreate(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*Client).sshKeyService
+
+ // Build up our creation options
+ opts := datatypes.SoftLayer_Security_Ssh_Key{
+ Label: d.Get("name").(string),
+ Key: d.Get("public_key").(string),
+ }
+
+ if notes, ok := d.GetOk("notes"); ok {
+ opts.Notes = notes.(string)
+ }
+
+ res, err := client.CreateObject(opts)
+ if err != nil {
+ return fmt.Errorf("Error creating SSH Key: %s", err)
+ }
+
+ d.SetId(strconv.Itoa(res.Id))
+ log.Printf("[INFO] SSH Key: %d", res.Id)
+
+ return resourceSoftLayerSSHKeyRead(d, meta)
+}
+
+func resourceSoftLayerSSHKeyRead(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*Client).sshKeyService
+
+ keyId, _ := strconv.Atoi(d.Id())
+
+ key, err := client.GetObject(keyId)
+ if err != nil {
+ // If the key is somehow already destroyed, mark as
+ // successfully gone
+ if strings.Contains(err.Error(), "404 Not Found") {
+ d.SetId("")
+ return nil
+ }
+
+ return fmt.Errorf("Error retrieving SSH key: %s", err)
+ }
+
+ d.Set("id", key.Id)
+ d.Set("name", key.Label)
+ d.Set("public_key", strings.TrimSpace(key.Key))
+ d.Set("fingerprint", key.Fingerprint)
+ d.Set("notes", key.Notes)
+
+ return nil
+}
+
+func resourceSoftLayerSSHKeyUpdate(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*Client).sshKeyService
+
+ keyId, _ := strconv.Atoi(d.Id())
+
+ key, err := client.GetObject(keyId)
+ if err != nil {
+ return fmt.Errorf("Error retrieving SSH key: %s", err)
+ }
+
+ if d.HasChange("name") {
+ key.Label = d.Get("name").(string)
+ }
+
+ if d.HasChange("notes") {
+ key.Notes = d.Get("notes").(string)
+ }
+
+ _, err = client.EditObject(keyId, key)
+ if err != nil {
+ return fmt.Errorf("Error editing SSH key: %s", err)
+ }
+ return nil
+}
+
+func resourceSoftLayerSSHKeyDelete(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*Client).sshKeyService
+
+ id, err := strconv.Atoi(d.Id())
+ if err != nil {
+ return fmt.Errorf("Error deleting SSH Key: %s", err)
+ }
+
+ log.Printf("[INFO] Deleting SSH key: %d", id)
+ _, err = client.DeleteObject(id)
+ if err != nil {
+ return fmt.Errorf("Error deleting SSH key: %s", err)
+ }
+
+ d.SetId("")
+ return nil
+}
+
+func resourceSoftLayerSSHKeyExists(d *schema.ResourceData, meta interface{}) (bool, error) {
+ client := meta.(*Client).sshKeyService
+
+ if client == nil {
+ return false, fmt.Errorf("The client was nil.")
+ }
+
+ keyId, err := strconv.Atoi(d.Id())
+ if err != nil {
+ return false, fmt.Errorf("Not a valid ID, must be an integer: %s", err)
+ }
+
+ result, err := client.GetObject(keyId)
+ return result.Id == keyId && err == nil, nil
+}
diff --git a/builtin/providers/softlayer/resource_softlayer_ssh_key_test.go b/builtin/providers/softlayer/resource_softlayer_ssh_key_test.go
new file mode 100644
index 000000000000..70f7344fe61c
--- /dev/null
+++ b/builtin/providers/softlayer/resource_softlayer_ssh_key_test.go
@@ -0,0 +1,131 @@
+package softlayer
+
+import (
+ "fmt"
+ "strconv"
+ "strings"
+ "testing"
+
+ "github.com/hashicorp/terraform/helper/resource"
+ "github.com/hashicorp/terraform/terraform"
+ datatypes "github.com/maximilien/softlayer-go/data_types"
+)
+
+func TestAccSoftLayerSSHKey_Basic(t *testing.T) {
+ var key datatypes.SoftLayer_Security_Ssh_Key
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckSoftLayerSSHKeyDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: testAccCheckSoftLayerSSHKeyConfig_basic,
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckSoftLayerSSHKeyExists("softlayer_ssh_key.testacc_foobar", &key),
+ testAccCheckSoftLayerSSHKeyAttributes(&key),
+ resource.TestCheckResourceAttr(
+ "softlayer_ssh_key.testacc_foobar", "name", "testacc_foobar"),
+ resource.TestCheckResourceAttr(
+ "softlayer_ssh_key.testacc_foobar", "public_key", testAccValidPublicKey),
+ resource.TestCheckResourceAttr(
+ "softlayer_ssh_key.testacc_foobar", "notes", "first_note"),
+ ),
+ },
+
+ resource.TestStep{
+ Config: testAccCheckSoftLayerSSHKeyConfig_updated,
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckSoftLayerSSHKeyExists("softlayer_ssh_key.testacc_foobar", &key),
+ resource.TestCheckResourceAttr(
+ "softlayer_ssh_key.testacc_foobar", "name", "changed_name"),
+ resource.TestCheckResourceAttr(
+ "softlayer_ssh_key.testacc_foobar", "public_key", testAccValidPublicKey),
+ resource.TestCheckResourceAttr(
+ "softlayer_ssh_key.testacc_foobar", "notes", "changed_note"),
+ ),
+ },
+ },
+ })
+}
+
+func testAccCheckSoftLayerSSHKeyDestroy(s *terraform.State) error {
+ client := testAccProvider.Meta().(*Client).sshKeyService
+
+ for _, rs := range s.RootModule().Resources {
+ if rs.Type != "softlayer_ssh_key" {
+ continue
+ }
+
+ keyId, _ := strconv.Atoi(rs.Primary.ID)
+
+ // Try to find the key
+ _, err := client.GetObject(keyId)
+
+ if err == nil {
+ return fmt.Errorf("SSH key still exists")
+ }
+ }
+
+ return nil
+}
+
+func testAccCheckSoftLayerSSHKeyAttributes(key *datatypes.SoftLayer_Security_Ssh_Key) resource.TestCheckFunc {
+ return func(s *terraform.State) error {
+
+ if key.Label != "testacc_foobar" {
+ return fmt.Errorf("Bad name: %s", key.Label)
+ }
+
+ return nil
+ }
+}
+
+func testAccCheckSoftLayerSSHKeyExists(n string, key *datatypes.SoftLayer_Security_Ssh_Key) resource.TestCheckFunc {
+ return func(s *terraform.State) error {
+ rs, ok := s.RootModule().Resources[n]
+
+ if !ok {
+ return fmt.Errorf("Not found: %s", n)
+ }
+
+ if rs.Primary.ID == "" {
+ return fmt.Errorf("No Record ID is set")
+ }
+
+ keyId, _ := strconv.Atoi(rs.Primary.ID)
+
+ client := testAccProvider.Meta().(*Client).sshKeyService
+ foundKey, err := client.GetObject(keyId)
+
+ if err != nil {
+ return err
+ }
+
+ if strconv.Itoa(int(foundKey.Id)) != rs.Primary.ID {
+ return fmt.Errorf("Record not found")
+ }
+
+ *key = foundKey
+
+ return nil
+ }
+}
+
+var testAccCheckSoftLayerSSHKeyConfig_basic = fmt.Sprintf(`
+resource "softlayer_ssh_key" "testacc_foobar" {
+ name = "testacc_foobar"
+ notes = "first_note"
+ public_key = "%s"
+}`, testAccValidPublicKey)
+
+var testAccCheckSoftLayerSSHKeyConfig_updated = fmt.Sprintf(`
+resource "softlayer_ssh_key" "testacc_foobar" {
+ name = "changed_name"
+ notes = "changed_note"
+ public_key = "%s"
+}`, testAccValidPublicKey)
+
+var testAccValidPublicKey = strings.TrimSpace(`
+ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCKVmnMOlHKcZK8tpt3MP1lqOLAcqcJzhsvJcjscgVERRN7/9484SOBJ3HSKxxNG5JN8owAjy5f9yYwcUg+JaUVuytn5Pv3aeYROHGGg+5G346xaq3DAwX6Y5ykr2fvjObgncQBnuU5KHWCECO/4h8uWuwh/kfniXPVjFToc+gnkqA+3RKpAecZhFXwfalQ9mMuYGFxn+fwn8cYEApsJbsEmb0iJwPiZ5hjFC8wREuiTlhPHDgkBLOiycd20op2nXzDbHfCHInquEe/gYxEitALONxm0swBOwJZwlTDOB7C6y2dzlrtxr1L59m7pCkWI4EtTRLvleehBoj3u7jB4usR
+`)
diff --git a/builtin/providers/softlayer/resource_softlayer_virtual_guest.go b/builtin/providers/softlayer/resource_softlayer_virtual_guest.go
new file mode 100644
index 000000000000..54d4f9ba4e23
--- /dev/null
+++ b/builtin/providers/softlayer/resource_softlayer_virtual_guest.go
@@ -0,0 +1,545 @@
+package softlayer
+
+import (
+ "fmt"
+ "log"
+ "strconv"
+ "time"
+
+ "encoding/base64"
+ "github.com/hashicorp/terraform/helper/resource"
+ "github.com/hashicorp/terraform/helper/schema"
+ datatypes "github.com/maximilien/softlayer-go/data_types"
+ "github.com/maximilien/softlayer-go/softlayer"
+ "math"
+ "strings"
+)
+
+func resourceSoftLayerVirtualGuest() *schema.Resource {
+ return &schema.Resource{
+ Create: resourceSoftLayerVirtualGuestCreate,
+ Read: resourceSoftLayerVirtualGuestRead,
+ Update: resourceSoftLayerVirtualGuestUpdate,
+ Delete: resourceSoftLayerVirtualGuestDelete,
+ Exists: resourceSoftLayerVirtualGuestExists,
+ Schema: map[string]*schema.Schema{
+ "name": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ },
+
+ "domain": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ },
+
+ "image": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ },
+
+ "hourly_billing": &schema.Schema{
+ Type: schema.TypeBool,
+ Required: true,
+ ForceNew: true,
+ },
+
+ "private_network_only": &schema.Schema{
+ Type: schema.TypeBool,
+ Optional: true,
+ Default: false,
+ ForceNew: true,
+ },
+
+ "region": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ },
+
+ "cpu": &schema.Schema{
+ Type: schema.TypeInt,
+ Required: true,
+ // TODO: This field currently requires recreation because SoftLayer resets the "dedicated_acct_host_only"
+ // TODO: flag to false while upgrading CPUs. The problem has been reported to the SoftLayer team. "ForceNew" can be set back
+ // TODO: to false as soon as it is fixed on their side. The corresponding virtual guest upgrade test will also be re-enabled then.
+ ForceNew: true,
+ },
+
+ "ram": &schema.Schema{
+ Type: schema.TypeInt,
+ Required: true,
+ ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) {
+ memoryInMB := float64(v.(int))
+
+ // Validate memory to match gigs format
+ remaining := math.Mod(memoryInMB, 1024)
+ if remaining > 0 {
+ suggested := math.Ceil(memoryInMB/1024) * 1024
+ errors = append(errors, fmt.Errorf(
+ "Invalid 'ram' value %d megabytes, must be a multiple of 1024 (e.g. use %d)", int(memoryInMB), int(suggested)))
+ }
+
+ return
+ },
+ },
+
+ "dedicated_acct_host_only": &schema.Schema{
+ Type: schema.TypeBool,
+ Optional: true,
+ ForceNew: true,
+ },
+
+ "frontend_vlan_id": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ },
+
+ "backend_vlan_id": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ },
+
+ "disks": &schema.Schema{
+ Type: schema.TypeList,
+ Optional: true,
+ Elem: &schema.Schema{Type: schema.TypeInt},
+ },
+
+ "public_network_speed": &schema.Schema{
+ Type: schema.TypeInt,
+ Optional: true,
+ Default: 1000,
+ },
+
+ "ipv4_address": &schema.Schema{
+ Type: schema.TypeString,
+ Computed: true,
+ },
+
+ "ipv4_address_private": &schema.Schema{
+ Type: schema.TypeString,
+ Computed: true,
+ },
+
+ "ssh_keys": &schema.Schema{
+ Type: schema.TypeList,
+ Optional: true,
+ Elem: &schema.Schema{Type: schema.TypeInt},
+ },
+
+ "user_data": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ },
+
+ "local_disk": &schema.Schema{
+ Type: schema.TypeBool,
+ Required: true,
+ ForceNew: true,
+ },
+
+ "post_install_script_uri": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ Default: nil,
+ ForceNew: true,
+ },
+
+ "block_device_template_group_gid": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ },
+ },
+ }
+}
+
+func getNameForBlockDevice(i int) string {
+ // skip 1, which is reserved for the swap disk.
+ // so we get 0, 2, 3, 4, 5 ...
+ if i == 0 {
+ return "0"
+ } else {
+ return strconv.Itoa(i + 1)
+ }
+}
+
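+// getBlockDevices maps the flat "disks" list from the config onto SoftLayer block device entries, using getNameForBlockDevice to skip device 1 (reserved for swap).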
+func getBlockDevices(d *schema.ResourceData) []datatypes.BlockDevice {
+ numBlocks := d.Get("disks.#").(int)
+ if numBlocks == 0 {
+ return nil
+ } else {
+ blocks := make([]datatypes.BlockDevice, 0, numBlocks)
+ for i := 0; i < numBlocks; i++ {
+ blockRef := fmt.Sprintf("disks.%d", i)
+ name := getNameForBlockDevice(i)
+ capacity := d.Get(blockRef).(int)
+ block := datatypes.BlockDevice{
+ Device: name,
+ DiskImage: datatypes.DiskImage{
+ Capacity: capacity,
+ },
+ }
+ blocks = append(blocks, block)
+ }
+ return blocks
+ }
+}
+
+func resourceSoftLayerVirtualGuestCreate(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*Client).virtualGuestService
+ if client == nil {
+ return fmt.Errorf("The client was nil.")
+ }
+
+ dc := datatypes.Datacenter{
+ Name: d.Get("region").(string),
+ }
+
+ networkComponent := datatypes.NetworkComponents{
+ MaxSpeed: d.Get("public_network_speed").(int),
+ }
+
+ privateNetworkOnly := d.Get("private_network_only").(bool)
+ opts := datatypes.SoftLayer_Virtual_Guest_Template{
+ Hostname: d.Get("name").(string),
+ Domain: d.Get("domain").(string),
+ HourlyBillingFlag: d.Get("hourly_billing").(bool),
+ PrivateNetworkOnlyFlag: privateNetworkOnly,
+ Datacenter: dc,
+ StartCpus: d.Get("cpu").(int),
+ MaxMemory: d.Get("ram").(int),
+ NetworkComponents: []datatypes.NetworkComponents{networkComponent},
+ BlockDevices: getBlockDevices(d),
+ LocalDiskFlag: d.Get("local_disk").(bool),
+ PostInstallScriptUri: d.Get("post_install_script_uri").(string),
+ }
+
+ if dedicatedAcctHostOnly, ok := d.GetOk("dedicated_acct_host_only"); ok {
+ opts.DedicatedAccountHostOnlyFlag = dedicatedAcctHostOnly.(bool)
+ }
+
+ if globalIdentifier, ok := d.GetOk("block_device_template_group_gid"); ok {
+ opts.BlockDeviceTemplateGroup = &datatypes.BlockDeviceTemplateGroup{
+ GlobalIdentifier: globalIdentifier.(string),
+ }
+ }
+
+ if operatingSystemReferenceCode, ok := d.GetOk("image"); ok {
+ opts.OperatingSystemReferenceCode = operatingSystemReferenceCode.(string)
+ }
+
+ // Apply frontend VLAN if provided
+ if param, ok := d.GetOk("frontend_vlan_id"); ok {
+ frontendVlanId, err := strconv.Atoi(param.(string))
+ if err != nil {
+ return fmt.Errorf("Not a valid frontend ID, must be an integer: %s", err)
+ }
+ opts.PrimaryNetworkComponent = &datatypes.PrimaryNetworkComponent{
+ NetworkVlan: datatypes.NetworkVlan{Id: (frontendVlanId)},
+ }
+ }
+
+ // Apply backend VLAN if provided
+ if param, ok := d.GetOk("backend_vlan_id"); ok {
+ backendVlanId, err := strconv.Atoi(param.(string))
+ if err != nil {
+ return fmt.Errorf("Not a valid backend ID, must be an integer: %s", err)
+ }
+ opts.PrimaryBackendNetworkComponent = &datatypes.PrimaryBackendNetworkComponent{
+ NetworkVlan: datatypes.NetworkVlan{Id: (backendVlanId)},
+ }
+ }
+
+ if userData, ok := d.GetOk("user_data"); ok {
+ opts.UserData = []datatypes.UserData{
+ datatypes.UserData{
+ Value: userData.(string),
+ },
+ }
+ }
+
+ // Get configured ssh_keys
+ ssh_keys := d.Get("ssh_keys.#").(int)
+ if ssh_keys > 0 {
+ opts.SshKeys = make([]datatypes.SshKey, 0, ssh_keys)
+ for i := 0; i < ssh_keys; i++ {
+ key := fmt.Sprintf("ssh_keys.%d", i)
+ id := d.Get(key).(int)
+ sshKey := datatypes.SshKey{
+ Id: id,
+ }
+ opts.SshKeys = append(opts.SshKeys, sshKey)
+ }
+ }
+
+ log.Printf("[INFO] Creating virtual machine")
+
+ guest, err := client.CreateObject(opts)
+
+ if err != nil {
+ return fmt.Errorf("Error creating virtual guest: %s", err)
+ }
+
+ d.SetId(fmt.Sprintf("%d", guest.Id))
+
+ log.Printf("[INFO] Virtual Machine ID: %s", d.Id())
+
+ // wait for machine availability
+ _, err = WaitForNoActiveTransactions(d, meta)
+
+ if err != nil {
+ return fmt.Errorf(
+ "Error waiting for virtual machine (%s) to become ready: %s", d.Id(), err)
+ }
+
+ if !privateNetworkOnly {
+ _, err = WaitForPublicIpAvailable(d, meta)
+ if err != nil {
+ return fmt.Errorf(
+ "Error waiting for virtual machine (%s) public ip to become ready: %s", d.Id(), err)
+ }
+ }
+
+ return resourceSoftLayerVirtualGuestRead(d, meta)
+}
+
+func resourceSoftLayerVirtualGuestRead(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*Client).virtualGuestService
+ id, err := strconv.Atoi(d.Id())
+ if err != nil {
+ return fmt.Errorf("Not a valid ID, must be an integer: %s", err)
+ }
+ result, err := client.GetObject(id)
+ if err != nil {
+ return fmt.Errorf("Error retrieving virtual guest: %s", err)
+ }
+
+ d.Set("name", result.Hostname)
+ d.Set("domain", result.Domain)
+ if result.Datacenter != nil {
+ d.Set("region", result.Datacenter.Name)
+ }
+ d.Set("public_network_speed", result.NetworkComponents[0].MaxSpeed)
+ d.Set("cpu", result.StartCpus)
+ d.Set("ram", result.MaxMemory)
+ d.Set("dedicated_acct_host_only", result.DedicatedAccountHostOnlyFlag)
+ d.Set("has_public_ip", result.PrimaryIpAddress != "")
+ d.Set("ipv4_address", result.PrimaryIpAddress)
+ d.Set("ipv4_address_private", result.PrimaryBackendIpAddress)
+ d.Set("private_network_only", result.PrivateNetworkOnlyFlag)
+ d.Set("hourly_billing", result.HourlyBillingFlag)
+ d.Set("local_disk", result.LocalDiskFlag)
+ d.Set("frontend_vlan_id", result.PrimaryNetworkComponent.NetworkVlan.Id)
+ d.Set("backend_vlan_id", result.PrimaryBackendNetworkComponent.NetworkVlan.Id)
+
+ userData := result.UserData
+ if len(userData) > 0 {
+ data, err := base64.StdEncoding.DecodeString(userData[0].Value)
+ if err != nil {
+ log.Printf("Can't base64 decode user data %s. error: %s", userData[0].Value, err)
+ d.Set("user_data", userData[0].Value)
+ } else {
+ d.Set("user_data", string(data))
+ }
+ }
+
+ return nil
+}
+
+func resourceSoftLayerVirtualGuestUpdate(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*Client).virtualGuestService
+ id, err := strconv.Atoi(d.Id())
+ if err != nil {
+ return fmt.Errorf("Not a valid ID, must be an integer: %s", err)
+ }
+ result, err := client.GetObject(id)
+ if err != nil {
+ return fmt.Errorf("Error retrieving virtual guest: %s", err)
+ }
+
+ // Update "name" and "domain" fields if present and changed
+ // Those are the only fields, which could be updated
+ if d.HasChange("name") || d.HasChange("domain") {
+ result.Hostname = d.Get("name").(string)
+ result.Domain = d.Get("domain").(string)
+
+ _, err = client.EditObject(id, result)
+
+ if err != nil {
+ return fmt.Errorf("Couldn't update virtual guest: %s", err)
+ }
+ }
+
+ // Set user data if provided and not empty
+ if d.HasChange("user_data") {
+ client.SetMetadata(id, d.Get("user_data").(string))
+ }
+
+ // Upgrade "cpu", "ram" and "nic_speed" if provided and changed
+ upgradeOptions := softlayer.UpgradeOptions{}
+ if d.HasChange("cpu") {
+ upgradeOptions.Cpus = d.Get("cpu").(int)
+ }
+ if d.HasChange("ram") {
+ memoryInMB := float64(d.Get("ram").(int))
+
+ // Convert memory to GB, as SoftLayer only allows upgrading RAM in whole gigabytes
+ // The value has already been validated at this point
+ upgradeOptions.MemoryInGB = int(memoryInMB / 1024)
+ }
+ if d.HasChange("public_network_speed") {
+ upgradeOptions.NicSpeed = d.Get("public_network_speed").(int)
+ }
+
+ started, err := client.UpgradeObject(id, &upgradeOptions)
+ if err != nil {
+ return fmt.Errorf("Couldn't upgrade virtual guest: %s", err)
+ }
+
+ if started {
+ // Wait for softlayer to start upgrading...
+ _, err = WaitForUpgradeTransactionsToAppear(d, meta)
+ if err != nil {
+ return err
+ }
+
+ // Wait for upgrade transactions to finish
+ _, err = WaitForNoActiveTransactions(d, meta)
+ }
+
+ return err
+}
+
+func resourceSoftLayerVirtualGuestDelete(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*Client).virtualGuestService
+ id, err := strconv.Atoi(d.Id())
+ if err != nil {
+ return fmt.Errorf("Not a valid ID, must be an integer: %s", err)
+ }
+
+ _, err = WaitForNoActiveTransactions(d, meta)
+
+ if err != nil {
+ return fmt.Errorf("Error deleting virtual guest, couldn't wait for zero active transactions: %s", err)
+ }
+
+ _, err = client.DeleteObject(id)
+
+ if err != nil {
+ return fmt.Errorf("Error deleting virtual guest: %s", err)
+ }
+
+ return nil
+}
+
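+// WaitForUpgradeTransactionsToAppear polls the guest's active transactions until one with an UPGRADE status shows up, so the later wait for zero transactions cannot return before the upgrade has started.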
+func WaitForUpgradeTransactionsToAppear(d *schema.ResourceData, meta interface{}) (interface{}, error) {
+
+ log.Printf("Waiting for server (%s) to have upgrade transactions", d.Id())
+
+ id, err := strconv.Atoi(d.Id())
+ if err != nil {
+ return nil, fmt.Errorf("The instance ID %s must be numeric", d.Id())
+ }
+
+ stateConf := &resource.StateChangeConf{
+ Pending: []string{"pending_upgrade"},
+ Target: []string{"upgrade_started"},
+ Refresh: func() (interface{}, string, error) {
+ client := meta.(*Client).virtualGuestService
+ transactions, err := client.GetActiveTransactions(id)
+ if err != nil {
+ return nil, "", fmt.Errorf("Couldn't fetch active transactions: %s", err)
+ }
+ for _, transaction := range transactions {
+ if strings.Contains(transaction.TransactionStatus.Name, "UPGRADE") {
+ return transactions, "upgrade_started", nil
+ }
+ }
+ return transactions, "pending_upgrade", nil
+ },
+ Timeout: 5 * time.Minute,
+ Delay: 5 * time.Second,
+ MinTimeout: 3 * time.Second,
+ }
+
+ return stateConf.WaitForState()
+}
+
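+// WaitForPublicIpAvailable polls the virtual guest until SoftLayer reports a primary (public) IP address.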
+func WaitForPublicIpAvailable(d *schema.ResourceData, meta interface{}) (interface{}, error) {
+ log.Printf("Waiting for server (%s) to get a public IP", d.Id())
+
+ stateConf := &resource.StateChangeConf{
+ Pending: []string{"", "unavailable"},
+ Target: []string{"available"},
+ Refresh: func() (interface{}, string, error) {
+ fmt.Println("Refreshing server state...")
+ client := meta.(*Client).virtualGuestService
+ id, err := strconv.Atoi(d.Id())
+ if err != nil {
+ return nil, "", fmt.Errorf("Not a valid ID, must be an integer: %s", err)
+ }
+ result, err := client.GetObject(id)
+ if err != nil {
+ return nil, "", fmt.Errorf("Error retrieving virtual guest: %s", err)
+ }
+ if result.PrimaryIpAddress == "" {
+ return result, "unavailable", nil
+ } else {
+ return result, "available", nil
+ }
+ },
+ Timeout: 30 * time.Minute,
+ Delay: 10 * time.Second,
+ MinTimeout: 3 * time.Second,
+ }
+
+ return stateConf.WaitForState()
+}
+
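+// WaitForNoActiveTransactions polls the virtual guest until it has no active transactions, i.e. provisioning or reconfiguration has finished.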
+func WaitForNoActiveTransactions(d *schema.ResourceData, meta interface{}) (interface{}, error) {
+ log.Printf("Waiting for server (%s) to have zero active transactions", d.Id())
+ id, err := strconv.Atoi(d.Id())
+ if err != nil {
+ return nil, fmt.Errorf("The instance ID %s must be numeric", d.Id())
+ }
+
+ stateConf := &resource.StateChangeConf{
+ Pending: []string{"", "active"},
+ Target: []string{"idle"},
+ Refresh: func() (interface{}, string, error) {
+ client := meta.(*Client).virtualGuestService
+ transactions, err := client.GetActiveTransactions(id)
+ if err != nil {
+ return nil, "", fmt.Errorf("Couldn't get active transactions: %s", err)
+ }
+ if len(transactions) == 0 {
+ return transactions, "idle", nil
+ } else {
+ return transactions, "active", nil
+ }
+ },
+ Timeout: 10 * time.Minute,
+ Delay: 10 * time.Second,
+ MinTimeout: 3 * time.Second,
+ }
+
+ return stateConf.WaitForState()
+}
+
+func resourceSoftLayerVirtualGuestExists(d *schema.ResourceData, meta interface{}) (bool, error) {
+ client := meta.(*Client).virtualGuestService
+
+ if client == nil {
+ return false, fmt.Errorf("The client was nil.")
+ }
+
+ guestId, err := strconv.Atoi(d.Id())
+ if err != nil {
+ return false, fmt.Errorf("Not a valid ID, must be an integer: %s", err)
+ }
+
+ result, err := client.GetObject(guestId)
+ return result.Id == guestId && err == nil, nil
+}
diff --git a/builtin/providers/softlayer/resource_softlayer_virtual_guest_test.go b/builtin/providers/softlayer/resource_softlayer_virtual_guest_test.go
new file mode 100644
index 000000000000..43c87e718402
--- /dev/null
+++ b/builtin/providers/softlayer/resource_softlayer_virtual_guest_test.go
@@ -0,0 +1,299 @@
+package softlayer
+
+import (
+ "fmt"
+ "strconv"
+ "strings"
+ "testing"
+
+ "github.com/hashicorp/terraform/helper/resource"
+ "github.com/hashicorp/terraform/terraform"
+ datatypes "github.com/maximilien/softlayer-go/data_types"
+)
+
+func TestAccSoftLayerVirtualGuest_Basic(t *testing.T) {
+ var guest datatypes.SoftLayer_Virtual_Guest
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckSoftLayerVirtualGuestDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: testAccCheckSoftLayerVirtualGuestConfig_basic,
+ Destroy: false,
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckSoftLayerVirtualGuestExists("softlayer_virtual_guest.terraform-acceptance-test-1", &guest),
+ resource.TestCheckResourceAttr(
+ "softlayer_virtual_guest.terraform-acceptance-test-1", "name", "terraform-test"),
+ resource.TestCheckResourceAttr(
+ "softlayer_virtual_guest.terraform-acceptance-test-1", "domain", "bar.example.com"),
+ resource.TestCheckResourceAttr(
+ "softlayer_virtual_guest.terraform-acceptance-test-1", "region", "ams01"),
+ resource.TestCheckResourceAttr(
+ "softlayer_virtual_guest.terraform-acceptance-test-1", "public_network_speed", "10"),
+ resource.TestCheckResourceAttr(
+ "softlayer_virtual_guest.terraform-acceptance-test-1", "hourly_billing", "true"),
+ resource.TestCheckResourceAttr(
+ "softlayer_virtual_guest.terraform-acceptance-test-1", "private_network_only", "false"),
+ resource.TestCheckResourceAttr(
+ "softlayer_virtual_guest.terraform-acceptance-test-1", "cpu", "1"),
+ resource.TestCheckResourceAttr(
+ "softlayer_virtual_guest.terraform-acceptance-test-1", "ram", "1024"),
+ resource.TestCheckResourceAttr(
+ "softlayer_virtual_guest.terraform-acceptance-test-1", "disks.0", "25"),
+ resource.TestCheckResourceAttr(
+ "softlayer_virtual_guest.terraform-acceptance-test-1", "disks.1", "10"),
+ resource.TestCheckResourceAttr(
+ "softlayer_virtual_guest.terraform-acceptance-test-1", "disks.2", "20"),
+ resource.TestCheckResourceAttr(
+ "softlayer_virtual_guest.terraform-acceptance-test-1", "user_data", "{\"value\":\"newvalue\"}"),
+ resource.TestCheckResourceAttr(
+ "softlayer_virtual_guest.terraform-acceptance-test-1", "local_disk", "false"),
+ resource.TestCheckResourceAttr(
+ "softlayer_virtual_guest.terraform-acceptance-test-1", "dedicated_acct_host_only", "true"),
+
+ // TODO: As agreed, will be enabled when VLAN support is implemented: https://github.com/TheWeatherCompany/softlayer-go/issues/3
+ // resource.TestCheckResourceAttr(
+ // "softlayer_virtual_guest.terraform-acceptance-test-1", "frontend_vlan_id", "1085155"),
+ // resource.TestCheckResourceAttr(
+ // "softlayer_virtual_guest.terraform-acceptance-test-1", "backend_vlan_id", "1085157"),
+ ),
+ },
+
+ resource.TestStep{
+ Config: testAccCheckSoftLayerVirtualGuestConfig_userDataUpdate,
+ Destroy: false,
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckSoftLayerVirtualGuestExists("softlayer_virtual_guest.terraform-acceptance-test-1", &guest),
+ resource.TestCheckResourceAttr(
+ "softlayer_virtual_guest.terraform-acceptance-test-1", "user_data", "updatedData"),
+ ),
+ },
+
+ resource.TestStep{
+ Config: testAccCheckSoftLayerVirtualGuestConfig_upgradeMemoryNetworkSpeed,
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckSoftLayerVirtualGuestExists("softlayer_virtual_guest.terraform-acceptance-test-1", &guest),
+ resource.TestCheckResourceAttr(
+ "softlayer_virtual_guest.terraform-acceptance-test-1", "ram", "2048"),
+ resource.TestCheckResourceAttr(
+ "softlayer_virtual_guest.terraform-acceptance-test-1", "public_network_speed", "100"),
+ ),
+ },
+
+ // TODO: currently CPU upgrade test is disabled, due to unexpected behavior of field "dedicated_acct_host_only".
+ // TODO: For some reason it is reset by SoftLayer to "false". Daniel Bright reported corresponding issue to SoftLayer team.
+ // resource.TestStep{
+ // Config: testAccCheckSoftLayerVirtualGuestConfig_vmUpgradeCPUs,
+ // Check: resource.ComposeTestCheckFunc(
+ // testAccCheckSoftLayerVirtualGuestExists("softlayer_virtual_guest.terraform-acceptance-test-1", &guest),
+ // resource.TestCheckResourceAttr(
+ // "softlayer_virtual_guest.terraform-acceptance-test-1", "cpu", "2"),
+ // ),
+ // },
+
+ },
+ })
+}
+
+func TestAccSoftLayerVirtualGuest_BlockDeviceTemplateGroup(t *testing.T) {
+ var guest datatypes.SoftLayer_Virtual_Guest
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckSoftLayerVirtualGuestDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: testAccCheckSoftLayerVirtualGuestConfig_blockDeviceTemplateGroup,
+ Check: resource.ComposeTestCheckFunc(
+ // block_device_template_group_gid value is hardcoded. If it's valid then virtual guest will be created well
+ testAccCheckSoftLayerVirtualGuestExists("softlayer_virtual_guest.terraform-acceptance-test-BDTGroup", &guest),
+ ),
+ },
+ },
+ })
+}
+
+func TestAccSoftLayerVirtualGuest_postInstallScriptUri(t *testing.T) {
+ var guest datatypes.SoftLayer_Virtual_Guest
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckSoftLayerVirtualGuestDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: testAccCheckSoftLayerVirtualGuestConfig_postInstallScriptUri,
+ Check: resource.ComposeTestCheckFunc(
+ // block_device_template_group_gid value is hardcoded. If it's valid then virtual guest will be created well
+ testAccCheckSoftLayerVirtualGuestExists("softlayer_virtual_guest.terraform-acceptance-test-pISU", &guest),
+ ),
+ },
+ },
+ })
+}
+
+func testAccCheckSoftLayerVirtualGuestDestroy(s *terraform.State) error {
+ client := testAccProvider.Meta().(*Client).virtualGuestService
+
+ for _, rs := range s.RootModule().Resources {
+ if rs.Type != "softlayer_virtual_guest" {
+ continue
+ }
+
+ guestId, _ := strconv.Atoi(rs.Primary.ID)
+
+ // Try to find the guest
+ _, err := client.GetObject(guestId)
+
+ // Wait
+
+ if err != nil && !strings.Contains(err.Error(), "404") {
+ return fmt.Errorf(
+ "Error waiting for virtual guest (%s) to be destroyed: %s",
+ rs.Primary.ID, err)
+ }
+ }
+
+ return nil
+}
+
+func testAccCheckSoftLayerVirtualGuestExists(n string, guest *datatypes.SoftLayer_Virtual_Guest) resource.TestCheckFunc {
+ return func(s *terraform.State) error {
+ rs, ok := s.RootModule().Resources[n]
+ if !ok {
+ return fmt.Errorf("Not found: %s", n)
+ }
+
+ if rs.Primary.ID == "" {
+ return fmt.Errorf("No virtual guest ID is set")
+ }
+
+ id, err := strconv.Atoi(rs.Primary.ID)
+
+ if err != nil {
+ return err
+ }
+
+ client := testAccProvider.Meta().(*Client).virtualGuestService
+ retrieveVirtGuest, err := client.GetObject(id)
+
+ if err != nil {
+ return err
+ }
+
+ fmt.Printf("The ID is %d", id)
+
+ if retrieveVirtGuest.Id != id {
+ return fmt.Errorf("Virtual guest not found")
+ }
+
+ *guest = retrieveVirtGuest
+
+ return nil
+ }
+}
+
+const testAccCheckSoftLayerVirtualGuestConfig_basic = `
+resource "softlayer_virtual_guest" "terraform-acceptance-test-1" {
+ name = "terraform-test"
+ domain = "bar.example.com"
+ image = "DEBIAN_7_64"
+ region = "ams01"
+ public_network_speed = 10
+ hourly_billing = true
+ private_network_only = false
+ cpu = 1
+ ram = 1024
+ disks = [25, 10, 20]
+ user_data = "{\"value\":\"newvalue\"}"
+ dedicated_acct_host_only = true
+ local_disk = false
+}
+`
+
+const testAccCheckSoftLayerVirtualGuestConfig_userDataUpdate = `
+resource "softlayer_virtual_guest" "terraform-acceptance-test-1" {
+ name = "terraform-test"
+ domain = "bar.example.com"
+ image = "DEBIAN_7_64"
+ region = "ams01"
+ public_network_speed = 10
+ hourly_billing = true
+ cpu = 1
+ ram = 1024
+ disks = [25, 10, 20]
+ user_data = "updatedData"
+ dedicated_acct_host_only = true
+ local_disk = false
+}
+`
+
+const testAccCheckSoftLayerVirtualGuestConfig_upgradeMemoryNetworkSpeed = `
+resource "softlayer_virtual_guest" "terraform-acceptance-test-1" {
+ name = "terraform-test"
+ domain = "bar.example.com"
+ image = "DEBIAN_7_64"
+ region = "ams01"
+ public_network_speed = 100
+ hourly_billing = true
+ cpu = 1
+ ram = 2048
+ disks = [25, 10, 20]
+ user_data = "updatedData"
+ dedicated_acct_host_only = true
+ local_disk = false
+}
+`
+
+const testAccCheckSoftLayerVirtualGuestConfig_vmUpgradeCPUs = `
+resource "softlayer_virtual_guest" "terraform-acceptance-test-1" {
+ name = "terraform-test"
+ domain = "bar.example.com"
+ image = "DEBIAN_7_64"
+ region = "ams01"
+ public_network_speed = 100
+ hourly_billing = true
+ cpu = 2
+ ram = 2048
+ disks = [25, 10, 20]
+ user_data = "updatedData"
+ dedicated_acct_host_only = true
+ local_disk = false
+}
+`
+
+const testAccCheckSoftLayerVirtualGuestConfig_postInstallScriptUri = `
+resource "softlayer_virtual_guest" "terraform-acceptance-test-pISU" {
+ name = "terraform-test-pISU"
+ domain = "bar.example.com"
+ image = "DEBIAN_7_64"
+ region = "ams01"
+ public_network_speed = 10
+ hourly_billing = true
+ private_network_only = false
+ cpu = 1
+ ram = 1024
+ disks = [25, 10, 20]
+ user_data = "{\"value\":\"newvalue\"}"
+ dedicated_acct_host_only = true
+ local_disk = false
+ post_install_script_uri = "https://www.google.com"
+}
+`
+
+const testAccCheckSoftLayerVirtualGuestConfig_blockDeviceTemplateGroup = `
+resource "softlayer_virtual_guest" "terraform-acceptance-test-BDTGroup" {
+ name = "terraform-test-blockDeviceTemplateGroup"
+ domain = "bar.example.com"
+ region = "ams01"
+ public_network_speed = 10
+ hourly_billing = false
+ cpu = 1
+ ram = 1024
+ local_disk = false
+ block_device_template_group_gid = "ac2b413c-9893-4178-8e62-a24cbe2864db"
+}
+`
diff --git a/builtin/providers/template/resource_template_file.go b/builtin/providers/template/resource_template_file.go
index 78fdf83267d2..c5b3b3b09001 100644
--- a/builtin/providers/template/resource_template_file.go
+++ b/builtin/providers/template/resource_template_file.go
@@ -160,15 +160,15 @@ func execute(s string, vars map[string]interface{}) (string, error) {
},
}
- out, typ, err := hil.Eval(root, &cfg)
+ result, err := hil.Eval(root, &cfg)
if err != nil {
return "", err
}
- if typ != ast.TypeString {
- return "", fmt.Errorf("unexpected output ast.Type: %v", typ)
+ if result.Type != hil.TypeString {
+ return "", fmt.Errorf("unexpected output hil.Type: %v", result.Type)
}
- return out.(string), nil
+ return result.Value.(string), nil
}
func hash(s string) string {
diff --git a/builtin/providers/triton/resource_machine.go b/builtin/providers/triton/resource_machine.go
index a16c9a964bd8..18263c849eea 100644
--- a/builtin/providers/triton/resource_machine.go
+++ b/builtin/providers/triton/resource_machine.go
@@ -6,6 +6,7 @@ import (
"regexp"
"time"
+ "github.com/hashicorp/terraform/helper/hashcode"
"github.com/hashicorp/terraform/helper/schema"
"github.com/joyent/gosdc/cloudapi"
)
@@ -108,17 +109,53 @@ func resourceMachine() *schema.Resource {
Type: schema.TypeString,
Computed: true,
},
- "networks": {
- Description: "desired network IDs",
- Type: schema.TypeList,
- Optional: true,
+ "nic": {
+ Description: "network interface",
+ Type: schema.TypeSet,
Computed: true,
- // TODO: this really should ForceNew but the Network IDs don't seem to
- // be returned by the API, meaning if we track them here TF will replace
- // the resource on every run.
- // ForceNew: true,
- Elem: &schema.Schema{
- Type: schema.TypeString,
+ Optional: true,
+ Set: func(v interface{}) int {
+ m := v.(map[string]interface{})
+ return hashcode.String(m["network"].(string))
+ },
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "ip": {
+ Description: "NIC's IPv4 address",
+ Computed: true,
+ Type: schema.TypeString,
+ },
+ "mac": {
+ Description: "NIC's MAC address",
+ Computed: true,
+ Type: schema.TypeString,
+ },
+ "primary": {
+ Description: "Whether this is the machine's primary NIC",
+ Computed: true,
+ Type: schema.TypeBool,
+ },
+ "netmask": {
+ Description: "IPv4 netmask",
+ Computed: true,
+ Type: schema.TypeString,
+ },
+ "gateway": {
+ Description: "IPv4 gateway",
+ Computed: true,
+ Type: schema.TypeString,
+ },
+ "state": {
+ Description: "describes the state of the NIC (e.g. provisioning, running, or stopped)",
+ Computed: true,
+ Type: schema.TypeString,
+ },
+ "network": {
+ Description: "Network ID this NIC is attached to",
+ Required: true,
+ Type: schema.TypeString,
+ },
+ },
},
},
"firewall_enabled": {
@@ -153,6 +190,18 @@ func resourceMachine() *schema.Resource {
Optional: true,
Computed: true,
},
+
+ // deprecated fields
+ "networks": {
+ Description: "desired network IDs",
+ Type: schema.TypeList,
+ Optional: true,
+ Computed: true,
+ Deprecated: "Networks is deprecated, please use `nic`",
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ },
+ },
},
}
}
@@ -164,6 +213,11 @@ func resourceMachineCreate(d *schema.ResourceData, meta interface{}) error {
for _, network := range d.Get("networks").([]interface{}) {
networks = append(networks, network.(string))
}
+ nics := d.Get("nic").(*schema.Set)
+ for _, nicI := range nics.List() {
+ nic := nicI.(map[string]interface{})
+ networks = append(networks, nic["network"].(string))
+ }
metadata := map[string]string{}
for schemaName, metadataKey := range resourceMachineMetadataKeys {
@@ -221,6 +275,11 @@ func resourceMachineRead(d *schema.ResourceData, meta interface{}) error {
return err
}
+ nics, err := client.ListNICs(d.Id())
+ if err != nil {
+ return err
+ }
+
d.SetId(machine.Id)
d.Set("name", machine.Name)
d.Set("type", machine.Type)
@@ -235,9 +294,31 @@ func resourceMachineRead(d *schema.ResourceData, meta interface{}) error {
d.Set("package", machine.Package)
d.Set("image", machine.Image)
d.Set("primaryip", machine.PrimaryIP)
- d.Set("networks", machine.Networks)
d.Set("firewall_enabled", machine.FirewallEnabled)
+ // create and update NICs
+ var (
+ machineNICs []map[string]interface{}
+ networks []string
+ )
+ for _, nic := range nics {
+ machineNICs = append(
+ machineNICs,
+ map[string]interface{}{
+ "ip": nic.IP,
+ "mac": nic.MAC,
+ "primary": nic.Primary,
+ "netmask": nic.Netmask,
+ "gateway": nic.Gateway,
+ "state": nic.State,
+ "network": nic.Network,
+ },
+ )
+ networks = append(networks, nic.Network)
+ }
+ d.Set("nic", machineNICs)
+ d.Set("networks", networks)
+
// computed attributes from metadata
for schemaName, metadataKey := range resourceMachineMetadataKeys {
d.Set(schemaName, machine.Metadata[metadataKey])
@@ -333,9 +414,57 @@ func resourceMachineUpdate(d *schema.ResourceData, meta interface{}) error {
return err
}
+ err = waitFor(
+ func() (bool, error) {
+ machine, err := client.GetMachine(d.Id())
+ if err != nil {
+  return false, err
+ }
+ return machine.FirewallEnabled == d.Get("firewall_enabled").(bool), nil
+ },
+ machineStateChangeCheckInterval,
+ machineStateChangeTimeout,
+ )
+
+ if err != nil {
+ return err
+ }
+
d.SetPartial("firewall_enabled")
}
+ if d.HasChange("nic") {
+ o, n := d.GetChange("nic")
+ if o == nil {
+ o = new(schema.Set)
+ }
+ if n == nil {
+ n = new(schema.Set)
+ }
+
+ oldNICs := o.(*schema.Set)
+ newNICs := n.(*schema.Set)
+
+ // add new NICs that are not in old NICs
+ for _, nicI := range newNICs.Difference(oldNICs).List() {
+ nic := nicI.(map[string]interface{})
+ fmt.Printf("adding %+v\n", nic)
+ _, err := client.AddNIC(d.Id(), nic["network"].(string))
+ if err != nil {
+ return err
+ }
+ }
+
+ // remove old NICs that are not in new NICs
+ for _, nicI := range oldNICs.Difference(newNICs).List() {
+ nic := nicI.(map[string]interface{})
+ fmt.Printf("removing %+v\n", nic)
+ err := client.RemoveNIC(d.Id(), nic["mac"].(string))
+ if err != nil {
+ return err
+ }
+ }
+
+ d.SetPartial("nic")
+ }
+
// metadata stuff
metadata := map[string]string{}
for schemaName, metadataKey := range resourceMachineMetadataKeys {
@@ -352,7 +481,12 @@ func resourceMachineUpdate(d *schema.ResourceData, meta interface{}) error {
err = waitFor(
func() (bool, error) {
machine, err := client.GetMachine(d.Id())
- return reflect.DeepEqual(machine.Metadata, metadata), err
+ if err != nil {
+  return false, err
+ }
+ for k, v := range metadata {
+  if providerValue, ok := machine.Metadata[k]; !ok || v != providerValue {
+   return false, nil
+  }
+ }
+ return true, nil
},
machineStateChangeCheckInterval,
1*time.Minute,
diff --git a/builtin/providers/triton/resource_machine_test.go b/builtin/providers/triton/resource_machine_test.go
index 2fd13afcadba..2ed6dd04e6e2 100644
--- a/builtin/providers/triton/resource_machine_test.go
+++ b/builtin/providers/triton/resource_machine_test.go
@@ -34,6 +34,62 @@ func TestAccTritonMachine_basic(t *testing.T) {
})
}
+func TestAccTritonMachine_nic(t *testing.T) {
+ machineName := fmt.Sprintf("acctest-%d", acctest.RandInt())
+ config := fmt.Sprintf(testAccTritonMachine_withnic, machineName, machineName)
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testCheckTritonMachineDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: config,
+ Check: resource.ComposeTestCheckFunc(
+ testCheckTritonMachineExists("triton_machine.test"),
+ func(*terraform.State) error {
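+ // brief pause so the machine's NICs are visible via the API before the fabric check (assumed reason for this delay)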
+ time.Sleep(10 * time.Second)
+ return nil
+ },
+ testCheckTritonMachineHasFabric("triton_machine.test", "triton_fabric.test"),
+ ),
+ },
+ },
+ })
+}
+
+func TestAccTritonMachine_addnic(t *testing.T) {
+ machineName := fmt.Sprintf("acctest-%d", acctest.RandInt())
+ without := fmt.Sprintf(testAccTritonMachine_withoutnic, machineName, machineName)
+ with := fmt.Sprintf(testAccTritonMachine_withnic, machineName, machineName)
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testCheckTritonMachineDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: without,
+ Check: resource.ComposeTestCheckFunc(
+ testCheckTritonMachineExists("triton_machine.test"),
+ func(*terraform.State) error {
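+ // brief pause so the machine's NIC list settles in the API before checking (assumed reason for this delay)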
+ time.Sleep(10 * time.Second)
+ return nil
+ },
+ testCheckTritonMachineHasNoFabric("triton_machine.test", "triton_fabric.test"),
+ ),
+ },
+ resource.TestStep{
+ Config: with,
+ Check: resource.ComposeTestCheckFunc(
+ testCheckTritonMachineExists("triton_machine.test"),
+ testCheckTritonMachineHasFabric("triton_machine.test", "triton_fabric.test"),
+ ),
+ },
+ },
+ })
+}
+
func testCheckTritonMachineExists(name string) resource.TestCheckFunc {
return func(s *terraform.State) error {
// Ensure we have enough information in state to look up in API
@@ -56,6 +112,64 @@ func testCheckTritonMachineExists(name string) resource.TestCheckFunc {
}
}
+func testCheckTritonMachineHasFabric(name, fabricName string) resource.TestCheckFunc {
+ return func(s *terraform.State) error {
+ // Ensure we have enough information in state to look up in API
+ machine, ok := s.RootModule().Resources[name]
+ if !ok {
+ return fmt.Errorf("Not found: %s", name)
+ }
+
+ network, ok := s.RootModule().Resources[fabricName]
+ if !ok {
+ return fmt.Errorf("Not found: %s", fabricName)
+ }
+ conn := testAccProvider.Meta().(*cloudapi.Client)
+
+ nics, err := conn.ListNICs(machine.Primary.ID)
+ if err != nil {
+ return fmt.Errorf("Bad: Check NICs Exist: %s", err)
+ }
+
+ for _, nic := range nics {
+ if nic.Network == network.Primary.ID {
+ return nil
+ }
+ }
+
+ return fmt.Errorf("Bad: Machine %q does not have Fabric %q", machine.Primary.ID, network.Primary.ID)
+ }
+}
+
+func testCheckTritonMachineHasNoFabric(name, fabricName string) resource.TestCheckFunc {
+ return func(s *terraform.State) error {
+ // Ensure we have enough information in state to look up in API
+ machine, ok := s.RootModule().Resources[name]
+ if !ok {
+ return fmt.Errorf("Not found: %s", name)
+ }
+
+ network, ok := s.RootModule().Resources[fabricName]
+ if !ok {
+ return fmt.Errorf("Not found: %s", fabricName)
+ }
+ conn := testAccProvider.Meta().(*cloudapi.Client)
+
+ nics, err := conn.ListNICs(machine.Primary.ID)
+ if err != nil {
+ return fmt.Errorf("Bad: Check NICs Exist: %s", err)
+ }
+
+ for _, nic := range nics {
+ if nic.Network == network.Primary.ID {
+ return fmt.Errorf("Bad: Machine %q has Fabric %q", machine.Primary.ID, network.Primary.ID)
+ }
+ }
+
+ return nil
+ }
+}
+
func testCheckTritonMachineDestroy(s *terraform.State) error {
conn := testAccProvider.Meta().(*cloudapi.Client)
@@ -77,14 +191,181 @@ func testCheckTritonMachineDestroy(s *terraform.State) error {
return nil
}
+func TestAccTritonMachine_firewall(t *testing.T) {
+ machineName := fmt.Sprintf("acctest-%d", acctest.RandInt())
+ disabled_config := fmt.Sprintf(testAccTritonMachine_firewall_0, machineName)
+ enabled_config := fmt.Sprintf(testAccTritonMachine_firewall_1, machineName)
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testCheckTritonMachineDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: enabled_config,
+ Check: resource.ComposeTestCheckFunc(
+ testCheckTritonMachineExists("triton_machine.test"),
+ resource.TestCheckResourceAttr(
+ "triton_machine.test", "firewall_enabled", "true"),
+ ),
+ },
+ resource.TestStep{
+ Config: disabled_config,
+ Check: resource.ComposeTestCheckFunc(
+ testCheckTritonMachineExists("triton_machine.test"),
+ resource.TestCheckResourceAttr(
+ "triton_machine.test", "firewall_enabled", "false"),
+ ),
+ },
+ resource.TestStep{
+ Config: enabled_config,
+ Check: resource.ComposeTestCheckFunc(
+ testCheckTritonMachineExists("triton_machine.test"),
+ resource.TestCheckResourceAttr(
+ "triton_machine.test", "firewall_enabled", "true"),
+ ),
+ },
+ },
+ })
+}
+
+func TestAccTritonMachine_metadata(t *testing.T) {
+ machineName := fmt.Sprintf("acctest-%d", acctest.RandInt())
+ basic := fmt.Sprintf(testAccTritonMachine_basic, machineName)
+ add_metadata := fmt.Sprintf(testAccTritonMachine_metadata_1, machineName)
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testCheckTritonMachineDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: basic,
+ Check: resource.ComposeTestCheckFunc(
+ testCheckTritonMachineExists("triton_machine.test"),
+ ),
+ },
+ resource.TestStep{
+ Config: add_metadata,
+ Check: resource.ComposeTestCheckFunc(
+ testCheckTritonMachineExists("triton_machine.test"),
+ resource.TestCheckResourceAttr(
+ "triton_machine.test", "user_data", "hello"),
+ ),
+ },
+ },
+ })
+}
+
var testAccTritonMachine_basic = `
+provider "triton" {
+ url = "https://us-west-1.api.joyentcloud.com"
+}
+
resource "triton_machine" "test" {
name = "%s"
package = "g3-standard-0.25-smartos"
- image = "842e6fa6-6e9b-11e5-8402-1b490459e334"
+ image = "c20b4b7c-e1a6-11e5-9a4d-ef590901732e"
tags = {
test = "hello!"
}
}
`
+
+var testAccTritonMachine_firewall_0 = `
+provider "triton" {
+ url = "https://us-west-1.api.joyentcloud.com"
+}
+
+resource "triton_machine" "test" {
+ name = "%s"
+ package = "g3-standard-0.25-smartos"
+ image = "c20b4b7c-e1a6-11e5-9a4d-ef590901732e"
+
+ firewall_enabled = 0
+}
+`
+var testAccTritonMachine_firewall_1 = `
+provider "triton" {
+ url = "https://us-west-1.api.joyentcloud.com"
+}
+
+resource "triton_machine" "test" {
+ name = "%s"
+ package = "g3-standard-0.25-smartos"
+ image = "c20b4b7c-e1a6-11e5-9a4d-ef590901732e"
+
+ firewall_enabled = 1
+}
+`
+
+var testAccTritonMachine_metadata_1 = `
+provider "triton" {
+ url = "https://us-west-1.api.joyentcloud.com"
+}
+
+resource "triton_machine" "test" {
+ name = "%s"
+ package = "g3-standard-0.25-smartos"
+ image = "c20b4b7c-e1a6-11e5-9a4d-ef590901732e"
+
+ user_data = "hello"
+
+ tags = {
+ test = "hello!"
+ }
+}
+`
+
+var testAccTritonMachine_withnic = `
+resource "triton_fabric" "test" {
+ name = "%s-network"
+ description = "test network"
+ vlan_id = 2 # every DC seems to have a vlan 2 available
+
+ subnet = "10.0.0.0/22"
+ gateway = "10.0.0.1"
+ provision_start_ip = "10.0.0.5"
+ provision_end_ip = "10.0.3.250"
+
+ resolvers = ["8.8.8.8", "8.8.4.4"]
+}
+
+resource "triton_machine" "test" {
+ name = "%s"
+ package = "g3-standard-0.25-smartos"
+ image = "842e6fa6-6e9b-11e5-8402-1b490459e334"
+
+ tags = {
+ test = "hello!"
+ }
+
+ nic { network = "${triton_fabric.test.id}" }
+}
+`
+
+var testAccTritonMachine_withoutnic = `
+resource "triton_fabric" "test" {
+ name = "%s-network"
+ description = "test network"
+ vlan_id = 2 # every DC seems to have a vlan 2 available
+
+ subnet = "10.0.0.0/22"
+ gateway = "10.0.0.1"
+ provision_start_ip = "10.0.0.5"
+ provision_end_ip = "10.0.3.250"
+
+ resolvers = ["8.8.8.8", "8.8.4.4"]
+}
+
+resource "triton_machine" "test" {
+ name = "%s"
+ package = "g3-standard-0.25-smartos"
+ image = "842e6fa6-6e9b-11e5-8402-1b490459e334"
+
+ tags = {
+ test = "hello!"
+ }
+}
+`
diff --git a/builtin/providers/vsphere/README.md b/builtin/providers/vsphere/README.md
new file mode 100644
index 000000000000..0da819f8dd1e
--- /dev/null
+++ b/builtin/providers/vsphere/README.md
@@ -0,0 +1,57 @@
+# Terraform vSphere Provider Dev Docs
+
+This document contains developer documentation for the Terraform vSphere provider. User documentation is located [HERE](https://www.terraform.io/docs/providers/vsphere/) on Terraform's website.
+
+Thank you to [@tkak](https://github.com/tkak) and [Rakuten, Inc.](https://github.com/rakutentech) for their original contribution of the source base used for this provider!
+
+## Introductory Documentation
+
+Both [README.md](../../../README.md) and [BUILDING.md](../../../BUILDING.md) should be read first!
+
+## Base API Dependency ~ [govmomi](https://github.com/vmware/govmomi)
+
+This provider uses the [govmomi](https://github.com/vmware/govmomi) Go library to communicate with the VMware vSphere APIs (ESXi and/or vCenter).
+Because of this dependency, the provider is compatible with the VMware systems that govmomi supports. Many thanks to the dev team that maintains govmomi,
+and even more thanks for their guidance during the development of this provider; they have answered many of our questions!
+
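+The snippet below is a minimal, illustrative sketch (not part of the provider) of how code in this repository reaches vSphere through govmomi:
+build a client, scope a finder to a datacenter, and look up a datastore. The endpoint URL and object names are placeholders.
+
+```go
+package main
+
+import (
+	"fmt"
+	"log"
+	"net/url"
+
+	"github.com/vmware/govmomi"
+	"github.com/vmware/govmomi/find"
+	"golang.org/x/net/context"
+)
+
+func main() {
+	ctx := context.TODO()
+
+	// Placeholder endpoint and credentials; substitute your vCenter/ESXi values.
+	u, err := url.Parse("https://user:password@vcenter.example.com/sdk")
+	if err != nil {
+		log.Fatal(err)
+	}
+
+	// govmomi.Client wraps the vim25 SOAP client that the resources in this provider use.
+	client, err := govmomi.NewClient(ctx, u, true) // true skips TLS certificate verification
+	if err != nil {
+		log.Fatal(err)
+	}
+
+	// find.Finder resolves inventory objects, the same way the resources here do.
+	finder := find.NewFinder(client.Client, true)
+	dc, err := finder.Datacenter(ctx, "dc-01")
+	if err != nil {
+		log.Fatal(err)
+	}
+	finder = finder.SetDatacenter(dc)
+
+	ds, err := finder.DefaultDatastore(ctx)
+	if err != nil {
+		log.Fatal(err)
+	}
+	fmt.Println("found datastore:", ds.Name())
+}
+```
+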
+#### vSphere CLI ~ [govc](https://github.com/vmware/govmomi/blob/master/govc/README.md)
+
+One of the great tools shipped with govmomi is [govc](https://github.com/vmware/govmomi/blob/master/govc/README.md), a command line tool built on the govmomi API. Besides being useful in its own right, its
+[source base](https://github.com/vmware/govmomi/blob/master/govc/) is a great collection of examples of how to exercise the API.
+
+## Required privileges for running Terraform as a non-administrative user
+
+Most organizations are cautious about granting administrative privileges. To use the Terraform provider as a non-privileged user, define a new Role within vCenter and assign it the appropriate privileges:
+
+Navigate to Administration -> Access Control -> Roles.
+Click the "+" icon (Create role action), give the role an appropriate name, and select the following privileges:
+ * Datastore
+ - Allocate space
+ - Browse datastore
+ - Low level file operations
+ - Remove file
+ - Update virtual machine files
+ - Update virtual machine metadata
+
+ * Folder (all)
+ - Create folder
+ - Delete folder
+ - Move folder
+ - Rename folder
+
+ * Network
+ - Assign network
+
+ * Resource
+ - Apply recommendation
+ - Assign virtual machine to resource pool
+
+ * Virtual Machine
+ - Configuration (all) - for now
+ - Guest Operations (all) - for now
+ - Interaction (all)
+ - Inventory (all)
+ - Provisioning (all)
+
+These settings were tested with [vSphere 6.0](https://pubs.vmware.com/vsphere-60/index.jsp?topic=%2Fcom.vmware.vsphere.security.doc%2FGUID-18071E9A-EED1-4968-8D51-E0B4F526FDA3.html) and [vSphere 5.5](https://pubs.vmware.com/vsphere-55/index.jsp?topic=%2Fcom.vmware.vsphere.security.doc%2FGUID-18071E9A-EED1-4968-8D51-E0B4F526FDA3.html). For additional information on roles and permissions, please refer to the official VMware documentation.
+
+This section is a work in progress and additional contributions are more than welcome.
+
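+## Example: `vsphere_file` resource
+
+For quick manual testing during development, a minimal configuration for the `vsphere_file` resource (added alongside this document) looks like the sketch below. The datacenter, datastore, and file paths are placeholders; see the website documentation for authoritative usage.
+
+```
+resource "vsphere_file" "ubuntu_disk_copy" {
+  datacenter       = "dc-01"
+  datastore        = "datastore-01"
+  source_file      = "/tmp/ubuntu.vmdk"
+  destination_file = "ubuntu_copy.vmdk"
+}
+```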
diff --git a/builtin/providers/vsphere/provider.go b/builtin/providers/vsphere/provider.go
index 5c98d31c01ec..cbe9782ff90b 100644
--- a/builtin/providers/vsphere/provider.go
+++ b/builtin/providers/vsphere/provider.go
@@ -46,6 +46,7 @@ func Provider() terraform.ResourceProvider {
},
ResourcesMap: map[string]*schema.Resource{
+ "vsphere_file": resourceVSphereFile(),
"vsphere_folder": resourceVSphereFolder(),
"vsphere_virtual_machine": resourceVSphereVirtualMachine(),
},
diff --git a/builtin/providers/vsphere/resource_vsphere_file.go b/builtin/providers/vsphere/resource_vsphere_file.go
new file mode 100644
index 000000000000..f418d947e23c
--- /dev/null
+++ b/builtin/providers/vsphere/resource_vsphere_file.go
@@ -0,0 +1,309 @@
+package vsphere
+
+import (
+ "fmt"
+ "log"
+
+ "github.com/hashicorp/terraform/helper/schema"
+ "github.com/vmware/govmomi"
+ "github.com/vmware/govmomi/find"
+ "github.com/vmware/govmomi/object"
+ "github.com/vmware/govmomi/vim25/soap"
+ "golang.org/x/net/context"
+)
+
+type file struct {
+ datacenter string
+ datastore string
+ sourceFile string
+ destinationFile string
+}
+
+func resourceVSphereFile() *schema.Resource {
+ return &schema.Resource{
+ Create: resourceVSphereFileCreate,
+ Read: resourceVSphereFileRead,
+ Update: resourceVSphereFileUpdate,
+ Delete: resourceVSphereFileDelete,
+
+ Schema: map[string]*schema.Schema{
+ "datacenter": {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ },
+
+ "datastore": {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ },
+
+ "source_file": {
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ },
+
+ "destination_file": {
+ Type: schema.TypeString,
+ Required: true,
+ },
+ },
+ }
+}
+
+func resourceVSphereFileCreate(d *schema.ResourceData, meta interface{}) error {
+
+ log.Printf("[DEBUG] creating file: %#v", d)
+ client := meta.(*govmomi.Client)
+
+ f := file{}
+
+ if v, ok := d.GetOk("datacenter"); ok {
+ f.datacenter = v.(string)
+ }
+
+ if v, ok := d.GetOk("datastore"); ok {
+ f.datastore = v.(string)
+ } else {
+ return fmt.Errorf("datastore argument is required")
+ }
+
+ if v, ok := d.GetOk("source_file"); ok {
+ f.sourceFile = v.(string)
+ } else {
+ return fmt.Errorf("source_file argument is required")
+ }
+
+ if v, ok := d.GetOk("destination_file"); ok {
+ f.destinationFile = v.(string)
+ } else {
+ return fmt.Errorf("destination_file argument is required")
+ }
+
+ err := createFile(client, &f)
+ if err != nil {
+ return err
+ }
+
+ d.SetId(fmt.Sprintf("[%v] %v/%v", f.datastore, f.datacenter, f.destinationFile))
+ log.Printf("[INFO] Created file: %s", f.destinationFile)
+
+ return resourceVSphereFileRead(d, meta)
+}
+
+func createFile(client *govmomi.Client, f *file) error {
+
+ finder := find.NewFinder(client.Client, true)
+
+ dc, err := finder.Datacenter(context.TODO(), f.datacenter)
+ if err != nil {
+ return fmt.Errorf("error %s", err)
+ }
+ finder = finder.SetDatacenter(dc)
+
+ ds, err := getDatastore(finder, f.datastore)
+ if err != nil {
+ return fmt.Errorf("error %s", err)
+ }
+
+ dsurl, err := ds.URL(context.TODO(), dc, f.destinationFile)
+ if err != nil {
+ return err
+ }
+
+ p := soap.DefaultUpload
+ err = client.Client.UploadFile(f.sourceFile, dsurl, &p)
+ if err != nil {
+ return fmt.Errorf("error %s", err)
+ }
+ return nil
+}
+
+func resourceVSphereFileRead(d *schema.ResourceData, meta interface{}) error {
+
+ log.Printf("[DEBUG] reading file: %#v", d)
+ f := file{}
+
+ if v, ok := d.GetOk("datacenter"); ok {
+ f.datacenter = v.(string)
+ }
+
+ if v, ok := d.GetOk("datastore"); ok {
+ f.datastore = v.(string)
+ } else {
+ return fmt.Errorf("datastore argument is required")
+ }
+
+ if v, ok := d.GetOk("source_file"); ok {
+ f.sourceFile = v.(string)
+ } else {
+ return fmt.Errorf("source_file argument is required")
+ }
+
+ if v, ok := d.GetOk("destination_file"); ok {
+ f.destinationFile = v.(string)
+ } else {
+ return fmt.Errorf("destination_file argument is required")
+ }
+
+ client := meta.(*govmomi.Client)
+ finder := find.NewFinder(client.Client, true)
+
+ dc, err := finder.Datacenter(context.TODO(), f.datacenter)
+ if err != nil {
+ return fmt.Errorf("error %s", err)
+ }
+ finder = finder.SetDatacenter(dc)
+
+ ds, err := getDatastore(finder, f.datastore)
+ if err != nil {
+ return fmt.Errorf("error %s", err)
+ }
+
+ _, err = ds.Stat(context.TODO(), f.destinationFile)
+ if err != nil {
+ d.SetId("")
+ return err
+ }
+
+ return nil
+}
+
+func resourceVSphereFileUpdate(d *schema.ResourceData, meta interface{}) error {
+
+ log.Printf("[DEBUG] updating file: %#v", d)
+ if d.HasChange("destination_file") {
+ oldDestinationFile, newDestinationFile := d.GetChange("destination_file")
+ f := file{}
+
+ if v, ok := d.GetOk("datacenter"); ok {
+ f.datacenter = v.(string)
+ }
+
+ if v, ok := d.GetOk("datastore"); ok {
+ f.datastore = v.(string)
+ } else {
+ return fmt.Errorf("datastore argument is required")
+ }
+
+ if v, ok := d.GetOk("source_file"); ok {
+ f.sourceFile = v.(string)
+ } else {
+ return fmt.Errorf("source_file argument is required")
+ }
+
+ if v, ok := d.GetOk("destination_file"); ok {
+ f.destinationFile = v.(string)
+ } else {
+ return fmt.Errorf("destination_file argument is required")
+ }
+
+ client := meta.(*govmomi.Client)
+ dc, err := getDatacenter(client, f.datacenter)
+ if err != nil {
+ return err
+ }
+
+ finder := find.NewFinder(client.Client, true)
+ finder = finder.SetDatacenter(dc)
+
+ ds, err := getDatastore(finder, f.datastore)
+ if err != nil {
+ return fmt.Errorf("error %s", err)
+ }
+
+ fm := object.NewFileManager(client.Client)
+ task, err := fm.MoveDatastoreFile(context.TODO(), ds.Path(oldDestinationFile.(string)), dc, ds.Path(newDestinationFile.(string)), dc, true)
+ if err != nil {
+ return err
+ }
+
+ _, err = task.WaitForResult(context.TODO(), nil)
+ if err != nil {
+ return err
+ }
+
+ }
+
+ return nil
+}
+
+func resourceVSphereFileDelete(d *schema.ResourceData, meta interface{}) error {
+
+ log.Printf("[DEBUG] deleting file: %#v", d)
+ f := file{}
+
+ if v, ok := d.GetOk("datacenter"); ok {
+ f.datacenter = v.(string)
+ }
+
+ if v, ok := d.GetOk("datastore"); ok {
+ f.datastore = v.(string)
+ } else {
+ return fmt.Errorf("datastore argument is required")
+ }
+
+ if v, ok := d.GetOk("source_file"); ok {
+ f.sourceFile = v.(string)
+ } else {
+ return fmt.Errorf("source_file argument is required")
+ }
+
+ if v, ok := d.GetOk("destination_file"); ok {
+ f.destinationFile = v.(string)
+ } else {
+ return fmt.Errorf("destination_file argument is required")
+ }
+
+ client := meta.(*govmomi.Client)
+
+ err := deleteFile(client, &f)
+ if err != nil {
+ return err
+ }
+
+ d.SetId("")
+ return nil
+}
+
+func deleteFile(client *govmomi.Client, f *file) error {
+
+ dc, err := getDatacenter(client, f.datacenter)
+ if err != nil {
+ return err
+ }
+
+ finder := find.NewFinder(client.Client, true)
+ finder = finder.SetDatacenter(dc)
+
+ ds, err := getDatastore(finder, f.datastore)
+ if err != nil {
+ return fmt.Errorf("error %s", err)
+ }
+
+ fm := object.NewFileManager(client.Client)
+ task, err := fm.DeleteDatastoreFile(context.TODO(), ds.Path(f.destinationFile), dc)
+ if err != nil {
+ return err
+ }
+
+ _, err = task.WaitForResult(context.TODO(), nil)
+ if err != nil {
+ return err
+ }
+ return nil
+}
+
+// getDatastore gets datastore object
+func getDatastore(f *find.Finder, ds string) (*object.Datastore, error) {
+
+ if ds != "" {
+ dso, err := f.Datastore(context.TODO(), ds)
+ return dso, err
+ } else {
+ dso, err := f.DefaultDatastore(context.TODO())
+ return dso, err
+ }
+}
diff --git a/builtin/providers/vsphere/resource_vsphere_file_test.go b/builtin/providers/vsphere/resource_vsphere_file_test.go
new file mode 100644
index 000000000000..81520b0cb4f6
--- /dev/null
+++ b/builtin/providers/vsphere/resource_vsphere_file_test.go
@@ -0,0 +1,203 @@
+package vsphere
+
+import (
+ "fmt"
+ "io/ioutil"
+ "os"
+ "testing"
+
+ "github.com/hashicorp/terraform/helper/resource"
+ "github.com/hashicorp/terraform/terraform"
+ "github.com/vmware/govmomi"
+ "github.com/vmware/govmomi/find"
+ "github.com/vmware/govmomi/object"
+ "golang.org/x/net/context"
+)
+
+// Basic file creation
+func TestAccVSphereFile_basic(t *testing.T) {
+ testVmdkFileData := []byte("# Disk DescriptorFile\n")
+ testVmdkFile := "/tmp/tf_test.vmdk"
+ err := ioutil.WriteFile(testVmdkFile, testVmdkFileData, 0644)
+ if err != nil {
+ t.Errorf("error %s", err)
+ return
+ }
+
+ datacenter := os.Getenv("VSPHERE_DATACENTER")
+ datastore := os.Getenv("VSPHERE_DATASTORE")
+ testMethod := "basic"
+ resourceName := "vsphere_file." + testMethod
+ destinationFile := "tf_file_test.vmdk"
+ sourceFile := testVmdkFile
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckVSphereFileDestroy,
+ Steps: []resource.TestStep{
+ {
+ Config: fmt.Sprintf(
+ testAccCheckVSphereFileConfig,
+ testMethod,
+ datacenter,
+ datastore,
+ sourceFile,
+ destinationFile,
+ ),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckVSphereFileExists(resourceName, destinationFile, true),
+ resource.TestCheckResourceAttr(resourceName, "destination_file", destinationFile),
+ ),
+ },
+ },
+ })
+ os.Remove(testVmdkFile)
+}
+
+// file creation followed by a rename of file (update)
+func TestAccVSphereFile_renamePostCreation(t *testing.T) {
+ testVmdkFileData := []byte("# Disk DescriptorFile\n")
+ testVmdkFile := "/tmp/tf_test.vmdk"
+ err := ioutil.WriteFile(testVmdkFile, testVmdkFileData, 0644)
+ if err != nil {
+ t.Errorf("error %s", err)
+ return
+ }
+
+ datacenter := os.Getenv("VSPHERE_DATACENTER")
+ datastore := os.Getenv("VSPHERE_DATASTORE")
+ testMethod := "basic"
+ resourceName := "vsphere_file." + testMethod
+ destinationFile := "tf_test_file.vmdk"
+ destinationFileMoved := "tf_test_file_moved.vmdk"
+ sourceFile := testVmdkFile
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckVSphereFileDestroy,
+ Steps: []resource.TestStep{
+ {
+ Config: fmt.Sprintf(
+ testAccCheckVSphereFileConfig,
+ testMethod,
+ datacenter,
+ datastore,
+ sourceFile,
+ destinationFile,
+ ),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckVSphereFileExists(resourceName, destinationFile, true),
+ testAccCheckVSphereFileExists(resourceName, destinationFileMoved, false),
+ resource.TestCheckResourceAttr(resourceName, "destination_file", destinationFile),
+ ),
+ },
+ {
+ Config: fmt.Sprintf(
+ testAccCheckVSphereFileConfig,
+ testMethod,
+ datacenter,
+ datastore,
+ sourceFile,
+ destinationFileMoved,
+ ),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckVSphereFileExists(resourceName, destinationFile, false),
+ testAccCheckVSphereFileExists(resourceName, destinationFileMoved, true),
+ resource.TestCheckResourceAttr(resourceName, "destination_file", destinationFileMoved),
+ ),
+ },
+ },
+ })
+ os.Remove(testVmdkFile)
+}
+
+func testAccCheckVSphereFileDestroy(s *terraform.State) error {
+ client := testAccProvider.Meta().(*govmomi.Client)
+ finder := find.NewFinder(client.Client, true)
+
+ for _, rs := range s.RootModule().Resources {
+ if rs.Type != "vsphere_file" {
+ continue
+ }
+
+ dc, err := finder.Datacenter(context.TODO(), rs.Primary.Attributes["datacenter"])
+ if err != nil {
+ return fmt.Errorf("error %s", err)
+ }
+
+ finder = finder.SetDatacenter(dc)
+
+ ds, err := getDatastore(finder, rs.Primary.Attributes["datastore"])
+ if err != nil {
+ return fmt.Errorf("error %s", err)
+ }
+
+ _, err = ds.Stat(context.TODO(), rs.Primary.Attributes["destination_file"])
+ if err != nil {
+ switch e := err.(type) {
+ case object.DatastoreNoSuchFileError:
+ fmt.Printf("Expected error received: %s\n", e.Error())
+ return nil
+ default:
+ return err
+ }
+ } else {
+ return fmt.Errorf("File %s still exists", rs.Primary.Attributes["destination_file"])
+ }
+ }
+
+ return nil
+}
+
+func testAccCheckVSphereFileExists(n string, df string, exists bool) resource.TestCheckFunc {
+ return func(s *terraform.State) error {
+ rs, ok := s.RootModule().Resources[n]
+ if !ok {
+ return fmt.Errorf("Resource not found: %s", n)
+ }
+
+ if rs.Primary.ID == "" {
+ return fmt.Errorf("No ID is set")
+ }
+
+ client := testAccProvider.Meta().(*govmomi.Client)
+ finder := find.NewFinder(client.Client, true)
+
+ dc, err := finder.Datacenter(context.TODO(), rs.Primary.Attributes["datacenter"])
+ if err != nil {
+ return fmt.Errorf("error %s", err)
+ }
+ finder = finder.SetDatacenter(dc)
+
+ ds, err := getDatastore(finder, rs.Primary.Attributes["datastore"])
+ if err != nil {
+ return fmt.Errorf("error %s", err)
+ }
+
+ _, err = ds.Stat(context.TODO(), df)
+ if err != nil {
+ switch e := err.(type) {
+ case object.DatastoreNoSuchFileError:
+ if exists {
+ return fmt.Errorf("File does not exist: %s", e.Error())
+ }
+ fmt.Printf("Expected error received: %s\n", e.Error())
+ return nil
+ default:
+ return err
+ }
+ }
+ return nil
+ }
+}
+
+const testAccCheckVSphereFileConfig = `
+resource "vsphere_file" "%s" {
+ datacenter = "%s"
+ datastore = "%s"
+ source_file = "%s"
+ destination_file = "%s"
+}
+`
diff --git a/builtin/providers/vsphere/resource_vsphere_virtual_machine.go b/builtin/providers/vsphere/resource_vsphere_virtual_machine.go
index be49c99e7652..d5d96816c665 100644
--- a/builtin/providers/vsphere/resource_vsphere_virtual_machine.go
+++ b/builtin/providers/vsphere/resource_vsphere_virtual_machine.go
@@ -4,6 +4,7 @@ import (
"fmt"
"log"
"net"
+ "strconv"
"strings"
"time"
@@ -32,8 +33,10 @@ type networkInterface struct {
label string
ipv4Address string
ipv4PrefixLength int
+ ipv4Gateway string
ipv6Address string
ipv6PrefixLength int
+ ipv6Gateway string
adapterType string // TODO: Make "adapter_type" argument
}
@@ -41,26 +44,50 @@ type hardDisk struct {
size int64
iops int64
initType string
+ vmdkPath string
+}
+
+// windowsOptConfig holds additional options vSphere can use when cloning Windows machines
+type windowsOptConfig struct {
+ productKey string
+ adminPassword string
+ domainUser string
+ domain string
+ domainUserPassword string
+}
+
+type cdrom struct {
+ datastore string
+ path string
+}
+
+type memoryAllocation struct {
+ reservation int64
}
type virtualMachine struct {
- name string
- folder string
- datacenter string
- cluster string
- resourcePool string
- datastore string
- vcpu int
- memoryMb int64
- template string
- networkInterfaces []networkInterface
- hardDisks []hardDisk
- gateway string
- domain string
- timeZone string
- dnsSuffixes []string
- dnsServers []string
- customConfigurations map[string](types.AnyType)
+ name string
+ folder string
+ datacenter string
+ cluster string
+ resourcePool string
+ datastore string
+ vcpu int
+ memoryMb int64
+ memoryAllocation memoryAllocation
+ template string
+ networkInterfaces []networkInterface
+ hardDisks []hardDisk
+ cdroms []cdrom
+ domain string
+ timeZone string
+ dnsSuffixes []string
+ dnsServers []string
+ bootableVmdk bool
+ linkedClone bool
+ skipCustomization bool
+ windowsOptionalConfig windowsOptConfig
+ customConfigurations map[string](types.AnyType)
}
func (v virtualMachine) Path() string {
@@ -79,6 +106,7 @@ func resourceVSphereVirtualMachine() *schema.Resource {
return &schema.Resource{
Create: resourceVSphereVirtualMachineCreate,
Read: resourceVSphereVirtualMachineRead,
+ Update: resourceVSphereVirtualMachineUpdate,
Delete: resourceVSphereVirtualMachineDelete,
Schema: map[string]*schema.Schema{
@@ -97,12 +125,17 @@ func resourceVSphereVirtualMachine() *schema.Resource {
"vcpu": &schema.Schema{
Type: schema.TypeInt,
Required: true,
- ForceNew: true,
},
"memory": &schema.Schema{
Type: schema.TypeInt,
Required: true,
+ },
+
+ "memory_reservation": &schema.Schema{
+ Type: schema.TypeInt,
+ Optional: true,
+ Default: 0,
ForceNew: true,
},
@@ -124,11 +157,18 @@ func resourceVSphereVirtualMachine() *schema.Resource {
ForceNew: true,
},
- "gateway": &schema.Schema{
- Type: schema.TypeString,
+ "linked_clone": &schema.Schema{
+ Type: schema.TypeBool,
Optional: true,
+ Default: false,
ForceNew: true,
},
+ "gateway": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Deprecated: "Please use network_interface.ipv4_gateway",
+ },
"domain": &schema.Schema{
Type: schema.TypeString,
@@ -158,11 +198,56 @@ func resourceVSphereVirtualMachine() *schema.Resource {
ForceNew: true,
},
+ "skip_customization": &schema.Schema{
+ Type: schema.TypeBool,
+ Optional: true,
+ ForceNew: true,
+ Default: false,
+ },
+
"custom_configuration_parameters": &schema.Schema{
Type: schema.TypeMap,
Optional: true,
ForceNew: true,
},
+ "windows_opt_config": &schema.Schema{
+ Type: schema.TypeList,
+ Optional: true,
+ ForceNew: true,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "product_key": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ },
+
+ "admin_password": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ },
+
+ "domain_user": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ },
+
+ "domain": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ },
+
+ "domain_user_password": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ },
+ },
+ },
+ },
"network_interface": &schema.Schema{
Type: schema.TypeList,
@@ -202,16 +287,27 @@ func resourceVSphereVirtualMachine() *schema.Resource {
Computed: true,
},
- // TODO: Imprement ipv6 parameters to be optional
+ "ipv4_gateway": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ },
+
"ipv6_address": &schema.Schema{
Type: schema.TypeString,
- Computed: true,
+ Optional: true,
ForceNew: true,
},
"ipv6_prefix_length": &schema.Schema{
Type: schema.TypeInt,
- Computed: true,
+ Optional: true,
+ ForceNew: true,
+ },
+
+ "ipv6_gateway": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
ForceNew: true,
},
@@ -268,6 +364,42 @@ func resourceVSphereVirtualMachine() *schema.Resource {
Optional: true,
ForceNew: true,
},
+
+ "vmdk": &schema.Schema{
+ // TODO: Add ValidateFunc to confirm path exists
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Default: "",
+ },
+
+ "bootable": &schema.Schema{
+ Type: schema.TypeBool,
+ Optional: true,
+ Default: false,
+ ForceNew: true,
+ },
+ },
+ },
+ },
+
+ "cdrom": &schema.Schema{
+ Type: schema.TypeList,
+ Optional: true,
+ ForceNew: true,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "datastore": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ },
+
+ "path": &schema.Schema{
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ },
},
},
},
@@ -281,6 +413,93 @@ func resourceVSphereVirtualMachine() *schema.Resource {
}
}
+func resourceVSphereVirtualMachineUpdate(d *schema.ResourceData, meta interface{}) error {
+ // flag if changes have to be applied
+ hasChanges := false
+ // flag if changes have to be done when powered off
+ rebootRequired := false
+
+ // make config spec
+ configSpec := types.VirtualMachineConfigSpec{}
+
+ if d.HasChange("vcpu") {
+ configSpec.NumCPUs = d.Get("vcpu").(int)
+ hasChanges = true
+ rebootRequired = true
+ }
+
+ if d.HasChange("memory") {
+ configSpec.MemoryMB = int64(d.Get("memory").(int))
+ hasChanges = true
+ rebootRequired = true
+ }
+
+ // do nothing if there are no changes
+ if !hasChanges {
+ return nil
+ }
+
+ client := meta.(*govmomi.Client)
+ dc, err := getDatacenter(client, d.Get("datacenter").(string))
+ if err != nil {
+ return err
+ }
+ finder := find.NewFinder(client.Client, true)
+ finder = finder.SetDatacenter(dc)
+
+ vm, err := finder.VirtualMachine(context.TODO(), vmPath(d.Get("folder").(string), d.Get("name").(string)))
+ if err != nil {
+ return err
+ }
+ log.Printf("[DEBUG] virtual machine config spec: %v", configSpec)
+
+ if rebootRequired {
+ log.Printf("[INFO] Shutting down virtual machine: %s", d.Id())
+
+ task, err := vm.PowerOff(context.TODO())
+ if err != nil {
+ return err
+ }
+
+ err = task.Wait(context.TODO())
+ if err != nil {
+ return err
+ }
+ }
+
+ log.Printf("[INFO] Reconfiguring virtual machine: %s", d.Id())
+
+ task, err := vm.Reconfigure(context.TODO(), configSpec)
+ if err != nil {
+  return err
+ }
+
+ err = task.Wait(context.TODO())
+ if err != nil {
+  return err
+ }
+
+ if rebootRequired {
+ task, err = vm.PowerOn(context.TODO())
+ if err != nil {
+ return err
+ }
+
+ err = task.Wait(context.TODO())
+ if err != nil {
+ log.Printf("[ERROR] %s", err)
+ }
+ }
+
+ ip, err := vm.WaitForIP(context.TODO())
+ if err != nil {
+ return err
+ }
+ log.Printf("[DEBUG] ip address: %v", ip)
+
+ return resourceVSphereVirtualMachineRead(d, meta)
+}
+
func resourceVSphereVirtualMachineCreate(d *schema.ResourceData, meta interface{}) error {
client := meta.(*govmomi.Client)
@@ -288,6 +507,9 @@ func resourceVSphereVirtualMachineCreate(d *schema.ResourceData, meta interface{
name: d.Get("name").(string),
vcpu: d.Get("vcpu").(int),
memoryMb: int64(d.Get("memory").(int)),
+ memoryAllocation: memoryAllocation{
+ reservation: int64(d.Get("memory_reservation").(int)),
+ },
}
if v, ok := d.GetOk("folder"); ok {
@@ -306,10 +528,6 @@ func resourceVSphereVirtualMachineCreate(d *schema.ResourceData, meta interface{
vm.resourcePool = v.(string)
}
- if v, ok := d.GetOk("gateway"); ok {
- vm.gateway = v.(string)
- }
-
if v, ok := d.GetOk("domain"); ok {
vm.domain = v.(string)
}
@@ -318,6 +536,14 @@ func resourceVSphereVirtualMachineCreate(d *schema.ResourceData, meta interface{
vm.timeZone = v.(string)
}
+ if v, ok := d.GetOk("linked_clone"); ok {
+ vm.linkedClone = v.(bool)
+ }
+
+ if v, ok := d.GetOk("skip_customization"); ok {
+ vm.skipCustomization = v.(bool)
+ }
+
if raw, ok := d.GetOk("dns_suffixes"); ok {
for _, v := range raw.([]interface{}) {
vm.dnsSuffixes = append(vm.dnsSuffixes, v.(string))
@@ -353,6 +579,9 @@ func resourceVSphereVirtualMachineCreate(d *schema.ResourceData, meta interface{
if v, ok := network["ip_address"].(string); ok && v != "" {
networks[i].ipv4Address = v
}
+ if v, ok := d.GetOk("gateway"); ok {
+ networks[i].ipv4Gateway = v.(string)
+ }
if v, ok := network["subnet_mask"].(string); ok && v != "" {
ip := net.ParseIP(v).To4()
if ip != nil {
@@ -369,11 +598,45 @@ func resourceVSphereVirtualMachineCreate(d *schema.ResourceData, meta interface{
if v, ok := network["ipv4_prefix_length"].(int); ok && v != 0 {
networks[i].ipv4PrefixLength = v
}
+ if v, ok := network["ipv4_gateway"].(string); ok && v != "" {
+ networks[i].ipv4Gateway = v
+ }
+ if v, ok := network["ipv6_address"].(string); ok && v != "" {
+ networks[i].ipv6Address = v
+ }
+ if v, ok := network["ipv6_prefix_length"].(int); ok && v != 0 {
+ networks[i].ipv6PrefixLength = v
+ }
+ if v, ok := network["ipv6_gateway"].(string); ok && v != "" {
+ networks[i].ipv6Gateway = v
+ }
}
vm.networkInterfaces = networks
log.Printf("[DEBUG] network_interface init: %v", networks)
}
+ if vL, ok := d.GetOk("windows_opt_config"); ok {
+ var winOpt windowsOptConfig
+ custom_configs := (vL.([]interface{}))[0].(map[string]interface{})
+ if v, ok := custom_configs["admin_password"].(string); ok && v != "" {
+ winOpt.adminPassword = v
+ }
+ if v, ok := custom_configs["domain"].(string); ok && v != "" {
+ winOpt.domain = v
+ }
+ if v, ok := custom_configs["domain_user"].(string); ok && v != "" {
+ winOpt.domainUser = v
+ }
+ if v, ok := custom_configs["product_key"].(string); ok && v != "" {
+ winOpt.productKey = v
+ }
+ if v, ok := custom_configs["domain_user_password"].(string); ok && v != "" {
+ winOpt.domainUserPassword = v
+ }
+ vm.windowsOptionalConfig = winOpt
+ log.Printf("[DEBUG] windows config init: %v", winOpt)
+ }
+
if vL, ok := d.GetOk("disk"); ok {
disks := make([]hardDisk, len(vL.([]interface{})))
for i, v := range vL.([]interface{}) {
@@ -384,8 +647,13 @@ func resourceVSphereVirtualMachineCreate(d *schema.ResourceData, meta interface{
} else {
if v, ok := disk["size"].(int); ok && v != 0 {
disks[i].size = int64(v)
+ } else if v, ok := disk["vmdk"].(string); ok && v != "" {
+ disks[i].vmdkPath = v
+ if v, ok := disk["bootable"].(bool); ok {
+ vm.bootableVmdk = v
+ }
} else {
- return fmt.Errorf("If template argument is not specified, size argument is required.")
+ return fmt.Errorf("template, size, or vmdk argument is required")
}
}
if v, ok := disk["datastore"].(string); ok && v != "" {
@@ -394,8 +662,10 @@ func resourceVSphereVirtualMachineCreate(d *schema.ResourceData, meta interface{
} else {
if v, ok := disk["size"].(int); ok && v != 0 {
disks[i].size = int64(v)
+ } else if v, ok := disk["vmdk"].(string); ok && v != "" {
+ disks[i].vmdkPath = v
} else {
- return fmt.Errorf("Size argument is required.")
+ return fmt.Errorf("size or vmdk argument is required")
}
}
@@ -410,6 +680,25 @@ func resourceVSphereVirtualMachineCreate(d *schema.ResourceData, meta interface{
log.Printf("[DEBUG] disk init: %v", disks)
}
+ if vL, ok := d.GetOk("cdrom"); ok {
+ cdroms := make([]cdrom, len(vL.([]interface{})))
+ for i, v := range vL.([]interface{}) {
+ c := v.(map[string]interface{})
+ if v, ok := c["datastore"].(string); ok && v != "" {
+ cdroms[i].datastore = v
+ } else {
+ return fmt.Errorf("Datastore argument must be specified when attaching a cdrom image.")
+ }
+ if v, ok := c["path"].(string); ok && v != "" {
+ cdroms[i].path = v
+ } else {
+ return fmt.Errorf("Path argument must be specified when attaching a cdrom image.")
+ }
+ }
+ vm.cdroms = cdroms
+ log.Printf("[DEBUG] cdrom init: %v", cdroms)
+ }
+
if vm.template != "" {
err := vm.deployVirtualMachine(client)
if err != nil {
@@ -455,7 +744,6 @@ func resourceVSphereVirtualMachineCreate(d *schema.ResourceData, meta interface{
}
func resourceVSphereVirtualMachineRead(d *schema.ResourceData, meta interface{}) error {
-
log.Printf("[DEBUG] reading virtual machine: %#v", d)
client := meta.(*govmomi.Client)
dc, err := getDatacenter(client, d.Get("datacenter").(string))
@@ -513,6 +801,16 @@ func resourceVSphereVirtualMachineRead(d *schema.ResourceData, meta interface{})
return fmt.Errorf("Invalid network interfaces to set: %#v", networkInterfaces)
}
+ ip, err := vm.WaitForIP(context.TODO())
+ if err != nil {
+ return err
+ }
+ log.Printf("[DEBUG] ip address: %v", ip)
+ d.SetConnInfo(map[string]string{
+ "type": "ssh",
+ "host": ip,
+ })
+
var rootDatastore string
for _, v := range mvm.Datastore {
var md mo.Datastore
@@ -535,6 +833,7 @@ func resourceVSphereVirtualMachineRead(d *schema.ResourceData, meta interface{})
d.Set("datacenter", dc)
d.Set("memory", mvm.Summary.Config.MemorySizeMB)
+ d.Set("memory_reservation", mvm.Summary.Config.MemoryReservation)
d.Set("cpu", mvm.Summary.Config.NumCpu)
d.Set("datastore", rootDatastore)
@@ -556,18 +855,24 @@ func resourceVSphereVirtualMachineDelete(d *schema.ResourceData, meta interface{
}
log.Printf("[INFO] Deleting virtual machine: %s", d.Id())
-
- task, err := vm.PowerOff(context.TODO())
+ state, err := vm.PowerState(context.TODO())
if err != nil {
return err
}
- err = task.Wait(context.TODO())
- if err != nil {
- return err
+ if state == types.VirtualMachinePowerStatePoweredOn {
+ task, err := vm.PowerOff(context.TODO())
+ if err != nil {
+ return err
+ }
+
+ err = task.Wait(context.TODO())
+ if err != nil {
+ return err
+ }
}
- task, err = vm.Destroy(context.TODO())
+ task, err := vm.Destroy(context.TODO())
if err != nil {
return err
}
@@ -615,7 +920,7 @@ func waitForNetworkingActive(client *govmomi.Client, datacenter, name string) re
}
// addHardDisk adds a new Hard Disk to the VirtualMachine.
-func addHardDisk(vm *object.VirtualMachine, size, iops int64, diskType string) error {
+func addHardDisk(vm *object.VirtualMachine, size, iops int64, diskType string, datastore *object.Datastore, diskPath string) error {
devices, err := vm.Device(context.TODO())
if err != nil {
return err
@@ -628,7 +933,15 @@ func addHardDisk(vm *object.VirtualMachine, size, iops int64, diskType string) e
}
log.Printf("[DEBUG] disk controller: %#v\n", controller)
- disk := devices.CreateDisk(controller, "")
+ // If diskPath is not specified, pass empty string to CreateDisk()
+ var newDiskPath string
+ if diskPath == "" {
+ newDiskPath = ""
+ } else {
+ // TODO Check if diskPath & datastore exist
+ newDiskPath = fmt.Sprintf("[%v] %v", datastore.Name(), diskPath)
+ }
+ disk := devices.CreateDisk(controller, newDiskPath)
existing := devices.SelectByBackingInfo(disk.Backing)
log.Printf("[DEBUG] disk: %#v\n", disk)
@@ -661,6 +974,31 @@ func addHardDisk(vm *object.VirtualMachine, size, iops int64, diskType string) e
}
}
+// addCdrom adds a new virtual cdrom drive to the VirtualMachine and attaches an image (ISO) to it from a datastore path.
+func addCdrom(vm *object.VirtualMachine, datastore, path string) error {
+ devices, err := vm.Device(context.TODO())
+ if err != nil {
+ return err
+ }
+ log.Printf("[DEBUG] vm devices: %#v", devices)
+
+ controller, err := devices.FindIDEController("")
+ if err != nil {
+ return err
+ }
+ log.Printf("[DEBUG] ide controller: %#v", controller)
+
+ c, err := devices.CreateCdrom(controller)
+ if err != nil {
+ return err
+ }
+
+ c = devices.InsertIso(c, fmt.Sprintf("[%s] %s", datastore, path))
+ log.Printf("[DEBUG] addCdrom: %#v", c)
+
+ return vm.AddDevice(context.TODO(), c)
+}
+
// buildNetworkDevice builds VirtualDeviceConfigSpec for Network Device.
func buildNetworkDevice(f *find.Finder, label, adapterType string) (*types.VirtualDeviceConfigSpec, error) {
network, err := f.Network(context.TODO(), "*"+label)
@@ -707,8 +1045,15 @@ func buildNetworkDevice(f *find.Finder, label, adapterType string) (*types.Virtu
}
// buildVMRelocateSpec builds VirtualMachineRelocateSpec to set a place for a new VirtualMachine.
-func buildVMRelocateSpec(rp *object.ResourcePool, ds *object.Datastore, vm *object.VirtualMachine, initType string) (types.VirtualMachineRelocateSpec, error) {
+func buildVMRelocateSpec(rp *object.ResourcePool, ds *object.Datastore, vm *object.VirtualMachine, linkedClone bool, initType string) (types.VirtualMachineRelocateSpec, error) {
var key int
+ var moveType string
+ if linkedClone {
+ moveType = "createNewChildDiskBacking"
+ } else {
+ moveType = "moveAllDiskBackingsAndDisallowSharing"
+ }
+ log.Printf("[DEBUG] relocate type: [%s]", moveType)
devices, err := vm.Device(context.TODO())
if err != nil {
@@ -724,8 +1069,9 @@ func buildVMRelocateSpec(rp *object.ResourcePool, ds *object.Datastore, vm *obje
rpr := rp.Reference()
dsr := ds.Reference()
return types.VirtualMachineRelocateSpec{
- Datastore: &dsr,
- Pool: &rpr,
+ Datastore: &dsr,
+ Pool: &rpr,
+ DiskMoveType: moveType,
Disk: []types.VirtualMachineRelocateSpecDiskLocator{
types.VirtualMachineRelocateSpecDiskLocator{
Datastore: dsr,
@@ -844,6 +1190,21 @@ func findDatastore(c *govmomi.Client, sps types.StoragePlacementSpec) (*object.D
return datastore, nil
}
+// createCdroms is a helper function to attach virtual cdrom devices (and their attached disk images) to a virtual IDE controller.
+func createCdroms(vm *object.VirtualMachine, cdroms []cdrom) error {
+ log.Printf("[DEBUG] add cdroms: %v", cdroms)
+ for _, cd := range cdroms {
+ log.Printf("[DEBUG] add cdrom (datastore): %v", cd.datastore)
+ log.Printf("[DEBUG] add cdrom (cd path): %v", cd.path)
+ err := addCdrom(vm, cd.datastore, cd.path)
+ if err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
// createVirtualMachine creates a new VirtualMachine.
func (vm *virtualMachine) createVirtualMachine(c *govmomi.Client) error {
dc, err := getDatacenter(c, vm.datacenter)
@@ -913,7 +1274,10 @@ func (vm *virtualMachine) createVirtualMachine(c *govmomi.Client) error {
NumCPUs: vm.vcpu,
NumCoresPerSocket: 1,
MemoryMB: vm.memoryMb,
- DeviceChange: networkDevices,
+ MemoryAllocation: &types.ResourceAllocationInfo{
+ Reservation: vm.memoryAllocation.reservation,
+ },
+ DeviceChange: networkDevices,
}
log.Printf("[DEBUG] virtual machine config spec: %v", configSpec)
@@ -981,6 +1345,7 @@ func (vm *virtualMachine) createVirtualMachine(c *govmomi.Client) error {
Operation: types.VirtualDeviceConfigSpecOperationAdd,
Device: scsi,
})
+
configSpec.Files = &types.VirtualMachineFileInfo{VmPathName: fmt.Sprintf("[%s]", mds.Name)}
task, err := folder.CreateVM(context.TODO(), configSpec, resourcePool, nil)
@@ -1003,11 +1368,26 @@ func (vm *virtualMachine) createVirtualMachine(c *govmomi.Client) error {
for _, hd := range vm.hardDisks {
log.Printf("[DEBUG] add hard disk: %v", hd.size)
log.Printf("[DEBUG] add hard disk: %v", hd.iops)
- err = addHardDisk(newVM, hd.size, hd.iops, "thin")
+ err = addHardDisk(newVM, hd.size, hd.iops, "thin", datastore, hd.vmdkPath)
+ if err != nil {
+ return err
+ }
+ }
+
+ // Create the cdroms if needed.
+ if err := createCdroms(newVM, vm.cdroms); err != nil {
+ return err
+ }
+
+ if vm.bootableVmdk {
+ newVM.PowerOn(context.TODO())
+ ip, err := newVM.WaitForIP(context.TODO())
if err != nil {
return err
}
+ log.Printf("[DEBUG] ip address: %v", ip)
}
+
return nil
}
@@ -1099,7 +1479,7 @@ func (vm *virtualMachine) deployVirtualMachine(c *govmomi.Client) error {
}
log.Printf("[DEBUG] datastore: %#v", datastore)
- relocateSpec, err := buildVMRelocateSpec(resourcePool, datastore, template, vm.hardDisks[0].initType)
+ relocateSpec, err := buildVMRelocateSpec(resourcePool, datastore, template, vm.linkedClone, vm.hardDisks[0].initType)
if err != nil {
return err
}
@@ -1117,12 +1497,9 @@ func (vm *virtualMachine) deployVirtualMachine(c *govmomi.Client) error {
}
networkDevices = append(networkDevices, nd)
- // TODO: IPv6 support
var ipSetting types.CustomizationIPSettings
if network.ipv4Address == "" {
- ipSetting = types.CustomizationIPSettings{
- Ip: &types.CustomizationDhcpIpGenerator{},
- }
+ ipSetting.Ip = &types.CustomizationDhcpIpGenerator{}
} else {
if network.ipv4PrefixLength == 0 {
return fmt.Errorf("Error: ipv4_prefix_length argument is empty.")
@@ -1130,20 +1507,38 @@ func (vm *virtualMachine) deployVirtualMachine(c *govmomi.Client) error {
m := net.CIDRMask(network.ipv4PrefixLength, 32)
sm := net.IPv4(m[0], m[1], m[2], m[3])
subnetMask := sm.String()
- log.Printf("[DEBUG] gateway: %v", vm.gateway)
- log.Printf("[DEBUG] ipv4 address: %v", network.ipv4Address)
- log.Printf("[DEBUG] ipv4 prefix length: %v", network.ipv4PrefixLength)
- log.Printf("[DEBUG] ipv4 subnet mask: %v", subnetMask)
- ipSetting = types.CustomizationIPSettings{
- Gateway: []string{
- vm.gateway,
- },
- Ip: &types.CustomizationFixedIp{
- IpAddress: network.ipv4Address,
+ log.Printf("[DEBUG] ipv4 gateway: %v\n", network.ipv4Gateway)
+ log.Printf("[DEBUG] ipv4 address: %v\n", network.ipv4Address)
+ log.Printf("[DEBUG] ipv4 prefix length: %v\n", network.ipv4PrefixLength)
+ log.Printf("[DEBUG] ipv4 subnet mask: %v\n", subnetMask)
+ ipSetting.Gateway = []string{
+ network.ipv4Gateway,
+ }
+ ipSetting.Ip = &types.CustomizationFixedIp{
+ IpAddress: network.ipv4Address,
+ }
+ ipSetting.SubnetMask = subnetMask
+ }
+
+ ipv6Spec := &types.CustomizationIPSettingsIpV6AddressSpec{}
+ if network.ipv6Address == "" {
+ ipv6Spec.Ip = []types.BaseCustomizationIpV6Generator{
+ &types.CustomizationDhcpIpV6Generator{},
+ }
+ } else {
+ log.Printf("[DEBUG] ipv6 gateway: %v\n", network.ipv6Gateway)
+ log.Printf("[DEBUG] ipv6 address: %v\n", network.ipv6Address)
+ log.Printf("[DEBUG] ipv6 prefix length: %v\n", network.ipv6PrefixLength)
+
+ ipv6Spec.Ip = []types.BaseCustomizationIpV6Generator{
+ &types.CustomizationFixedIpV6{
+ IpAddress: network.ipv6Address,
+ SubnetMask: network.ipv6PrefixLength,
},
- SubnetMask: subnetMask,
}
+ ipv6Spec.Gateway = []string{network.ipv6Gateway}
}
+ ipSetting.IpV6Spec = ipv6Spec
// network config
config := types.CustomizationAdapterMapping{
@@ -1158,7 +1553,11 @@ func (vm *virtualMachine) deployVirtualMachine(c *govmomi.Client) error {
NumCPUs: vm.vcpu,
NumCoresPerSocket: 1,
MemoryMB: vm.memoryMb,
+ MemoryAllocation: &types.ResourceAllocationInfo{
+ Reservation: vm.memoryAllocation.reservation,
+ },
}
+
log.Printf("[DEBUG] virtual machine config spec: %v", configSpec)
log.Printf("[DEBUG] starting extra custom config spec: %v", vm.customConfigurations)
@@ -1179,16 +1578,72 @@ func (vm *virtualMachine) deployVirtualMachine(c *govmomi.Client) error {
log.Printf("[DEBUG] virtual machine Extra Config spec: %v", configSpec.ExtraConfig)
}
- // create CustomizationSpec
- customSpec := types.CustomizationSpec{
- Identity: &types.CustomizationLinuxPrep{
+ var template_mo mo.VirtualMachine
+ err = template.Properties(context.TODO(), template.Reference(), []string{"parent", "config.template", "config.guestId", "resourcePool", "snapshot", "guest.toolsVersionStatus2", "config.guestFullName"}, &template_mo)
+ if err != nil {
+  return fmt.Errorf("Error reading base VM properties: %s", err)
+ }
+
+ var identity_options types.BaseCustomizationIdentitySettings
+ if strings.HasPrefix(template_mo.Config.GuestId, "win") {
+ var timeZone int
+ if vm.timeZone == "Etc/UTC" {
+ vm.timeZone = "085"
+ }
+ timeZone, err := strconv.Atoi(vm.timeZone)
+ if err != nil {
+ return fmt.Errorf("Error converting TimeZone: %s", err)
+ }
+
+ guiUnattended := types.CustomizationGuiUnattended{
+ AutoLogon: false,
+ AutoLogonCount: 1,
+ TimeZone: timeZone,
+ }
+
+ customIdentification := types.CustomizationIdentification{}
+
+ userData := types.CustomizationUserData{
+ ComputerName: &types.CustomizationFixedName{
+ Name: strings.Split(vm.name, ".")[0],
+ },
+ ProductId: vm.windowsOptionalConfig.productKey,
+ FullName: "terraform",
+ OrgName: "terraform",
+ }
+
+ if vm.windowsOptionalConfig.domainUserPassword != "" && vm.windowsOptionalConfig.domainUser != "" && vm.windowsOptionalConfig.domain != "" {
+ customIdentification.DomainAdminPassword = &types.CustomizationPassword{
+ PlainText: true,
+ Value: vm.windowsOptionalConfig.domainUserPassword,
+ }
+ customIdentification.DomainAdmin = vm.windowsOptionalConfig.domainUser
+ customIdentification.JoinDomain = vm.windowsOptionalConfig.domain
+ }
+
+ if vm.windowsOptionalConfig.adminPassword != "" {
+ guiUnattended.Password = &types.CustomizationPassword{
+ PlainText: true,
+ Value: vm.windowsOptionalConfig.adminPassword,
+ }
+ }
+
+ identity_options = &types.CustomizationSysprep{
+ GuiUnattended: guiUnattended,
+ Identification: customIdentification,
+ UserData: userData,
+ }
+ } else {
+ identity_options = &types.CustomizationLinuxPrep{
HostName: &types.CustomizationFixedName{
Name: strings.Split(vm.name, ".")[0],
},
Domain: vm.domain,
TimeZone: vm.timeZone,
HwClockUTC: types.NewBool(true),
- },
+ }
+ }
+
+ // create CustomizationSpec
+ customSpec := types.CustomizationSpec{
+ Identity: identity_options,
GlobalIPSettings: types.CustomizationGlobalIPSettings{
DnsSuffixList: vm.dnsSuffixes,
DnsServerList: vm.dnsServers,
@@ -1204,6 +1659,15 @@ func (vm *virtualMachine) deployVirtualMachine(c *govmomi.Client) error {
Config: &configSpec,
PowerOn: false,
}
+ if vm.linkedClone {
+ if err != nil {
+ return fmt.Errorf("Error reading base VM properties: %s", err)
+ }
+ if template_mo.Snapshot == nil {
+ return fmt.Errorf("`linkedClone=true`, but image VM has no snapshots")
+ }
+ cloneSpec.Snapshot = template_mo.Snapshot.CurrentSnapshot
+ }
log.Printf("[DEBUG] clone spec: %v", cloneSpec)
task, err := template.Clone(context.TODO(), folder, vm.name, cloneSpec)
@@ -1246,23 +1710,33 @@ func (vm *virtualMachine) deployVirtualMachine(c *govmomi.Client) error {
}
}
- taskb, err := newVM.Customize(context.TODO(), customSpec)
- if err != nil {
+ // Create the cdroms if needed.
+ if err := createCdroms(newVM, vm.cdroms); err != nil {
return err
}
- _, err = taskb.WaitForResult(context.TODO(), nil)
- if err != nil {
- return err
+ if vm.skipCustomization {
+ log.Printf("[DEBUG] VM customization skipped")
+ } else {
+ log.Printf("[DEBUG] VM customization starting")
+ taskb, err := newVM.Customize(context.TODO(), customSpec)
+ if err != nil {
+ return err
+ }
+ _, err = taskb.WaitForResult(context.TODO(), nil)
+ if err != nil {
+ return err
+ }
+ log.Printf("[DEBUG] VM customization finished")
}
- log.Printf("[DEBUG]VM customization finished")
for i := 1; i < len(vm.hardDisks); i++ {
- err = addHardDisk(newVM, vm.hardDisks[i].size, vm.hardDisks[i].iops, vm.hardDisks[i].initType)
+ err = addHardDisk(newVM, vm.hardDisks[i].size, vm.hardDisks[i].iops, vm.hardDisks[i].initType, datastore, vm.hardDisks[i].vmdkPath)
if err != nil {
return err
}
}
+
log.Printf("[DEBUG] virtual machine config spec: %v", configSpec)
newVM.PowerOn(context.TODO())
diff --git a/builtin/providers/vsphere/resource_vsphere_virtual_machine_test.go b/builtin/providers/vsphere/resource_vsphere_virtual_machine_test.go
index 17197f63d5b4..4269939738d0 100644
--- a/builtin/providers/vsphere/resource_vsphere_virtual_machine_test.go
+++ b/builtin/providers/vsphere/resource_vsphere_virtual_machine_test.go
@@ -34,9 +34,9 @@ func TestAccVSphereVirtualMachine_basic(t *testing.T) {
datastoreOpt = fmt.Sprintf(" datastore = \"%s\"\n", v)
}
template := os.Getenv("VSPHERE_TEMPLATE")
- gateway := os.Getenv("VSPHERE_NETWORK_GATEWAY")
+ gateway := os.Getenv("VSPHERE_IPV4_GATEWAY")
label := os.Getenv("VSPHERE_NETWORK_LABEL")
- ip_address := os.Getenv("VSPHERE_NETWORK_IP_ADDRESS")
+ ip_address := os.Getenv("VSPHERE_IPV4_ADDRESS")
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
@@ -61,6 +61,8 @@ func TestAccVSphereVirtualMachine_basic(t *testing.T) {
"vsphere_virtual_machine.foo", "vcpu", "2"),
resource.TestCheckResourceAttr(
"vsphere_virtual_machine.foo", "memory", "4096"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.foo", "memory_reservation", "4096"),
resource.TestCheckResourceAttr(
"vsphere_virtual_machine.foo", "disk.#", "2"),
resource.TestCheckResourceAttr(
@@ -93,9 +95,9 @@ func TestAccVSphereVirtualMachine_diskInitType(t *testing.T) {
datastoreOpt = fmt.Sprintf(" datastore = \"%s\"\n", v)
}
template := os.Getenv("VSPHERE_TEMPLATE")
- gateway := os.Getenv("VSPHERE_NETWORK_GATEWAY")
+ gateway := os.Getenv("VSPHERE_IPV4_GATEWAY")
label := os.Getenv("VSPHERE_NETWORK_LABEL")
- ip_address := os.Getenv("VSPHERE_NETWORK_IP_ADDRESS")
+ ip_address := os.Getenv("VSPHERE_IPV4_ADDRESS")
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
@@ -388,6 +390,366 @@ func TestAccVSphereVirtualMachine_createWithFolder(t *testing.T) {
})
}
+func TestAccVSphereVirtualMachine_createWithCdrom(t *testing.T) {
+ var vm virtualMachine
+ var locationOpt string
+ var datastoreOpt string
+
+ if v := os.Getenv("VSPHERE_DATACENTER"); v != "" {
+ locationOpt += fmt.Sprintf(" datacenter = \"%s\"\n", v)
+ }
+ if v := os.Getenv("VSPHERE_CLUSTER"); v != "" {
+ locationOpt += fmt.Sprintf(" cluster = \"%s\"\n", v)
+ }
+ if v := os.Getenv("VSPHERE_RESOURCE_POOL"); v != "" {
+ locationOpt += fmt.Sprintf(" resource_pool = \"%s\"\n", v)
+ }
+ if v := os.Getenv("VSPHERE_DATASTORE"); v != "" {
+ datastoreOpt = fmt.Sprintf(" datastore = \"%s\"\n", v)
+ }
+ template := os.Getenv("VSPHERE_TEMPLATE")
+ label := os.Getenv("VSPHERE_NETWORK_LABEL_DHCP")
+ cdromDatastore := os.Getenv("VSPHERE_CDROM_DATASTORE")
+ cdromPath := os.Getenv("VSPHERE_CDROM_PATH")
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckVSphereVirtualMachineDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: fmt.Sprintf(
+ testAccCheckVsphereVirtualMachineConfig_cdrom,
+ locationOpt,
+ label,
+ datastoreOpt,
+ template,
+ cdromDatastore,
+ cdromPath,
+ ),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckVSphereVirtualMachineExists("vsphere_virtual_machine.with_cdrom", &vm),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.with_cdrom", "name", "terraform-test-with-cdrom"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.with_cdrom", "vcpu", "2"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.with_cdrom", "memory", "4096"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.with_cdrom", "disk.#", "1"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.with_cdrom", "disk.0.template", template),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.with_cdrom", "cdrom.#", "1"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.with_cdrom", "cdrom.0.datastore", cdromDatastore),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.with_cdrom", "cdrom.0.path", cdromPath),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.with_cdrom", "network_interface.#", "1"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.with_cdrom", "network_interface.0.label", label),
+ ),
+ },
+ },
+ })
+}
+
+func TestAccVSphereVirtualMachine_createWithExistingVmdk(t *testing.T) {
+ vmdk_path := os.Getenv("VSPHERE_VMDK_PATH")
+ gateway := os.Getenv("VSPHERE_IPV4_GATEWAY")
+ label := os.Getenv("VSPHERE_NETWORK_LABEL")
+ ip_address := os.Getenv("VSPHERE_IPV4_ADDRESS")
+
+ var vm virtualMachine
+ var locationOpt string
+ var datastoreOpt string
+
+ if v := os.Getenv("VSPHERE_DATACENTER"); v != "" {
+ locationOpt += fmt.Sprintf(" datacenter = \"%s\"\n", v)
+ }
+ if v := os.Getenv("VSPHERE_CLUSTER"); v != "" {
+ locationOpt += fmt.Sprintf(" cluster = \"%s\"\n", v)
+ }
+ if v := os.Getenv("VSPHERE_RESOURCE_POOL"); v != "" {
+ locationOpt += fmt.Sprintf(" resource_pool = \"%s\"\n", v)
+ }
+ if v := os.Getenv("VSPHERE_DATASTORE"); v != "" {
+ datastoreOpt = fmt.Sprintf(" datastore = \"%s\"\n", v)
+ }
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckVSphereVirtualMachineDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: fmt.Sprintf(
+ testAccCheckVSphereVirtualMachineConfig_withExistingVmdk,
+ locationOpt,
+ gateway,
+ label,
+ ip_address,
+ datastoreOpt,
+ vmdk_path,
+ ),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckVSphereVirtualMachineExists("vsphere_virtual_machine.with_existing_vmdk", &vm),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.with_existing_vmdk", "name", "terraform-test-with-existing-vmdk"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.with_existing_vmdk", "vcpu", "2"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.with_existing_vmdk", "memory", "4096"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.with_existing_vmdk", "disk.#", "1"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.with_existing_vmdk", "disk.0.vmdk", vmdk_path),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.with_existing_vmdk", "disk.0.bootable", "true"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.with_existing_vmdk", "network_interface.#", "1"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.with_existing_vmdk", "network_interface.0.label", label),
+ ),
+ },
+ },
+ })
+}
+
+func TestAccVSphereVirtualMachine_updateMemory(t *testing.T) {
+ var vm virtualMachine
+ var locationOpt string
+ var datastoreOpt string
+
+ if v := os.Getenv("VSPHERE_DATACENTER"); v != "" {
+ locationOpt += fmt.Sprintf(" datacenter = \"%s\"\n", v)
+ }
+ if v := os.Getenv("VSPHERE_CLUSTER"); v != "" {
+ locationOpt += fmt.Sprintf(" cluster = \"%s\"\n", v)
+ }
+ if v := os.Getenv("VSPHERE_RESOURCE_POOL"); v != "" {
+ locationOpt += fmt.Sprintf(" resource_pool = \"%s\"\n", v)
+ }
+ if v := os.Getenv("VSPHERE_DATASTORE"); v != "" {
+ datastoreOpt = fmt.Sprintf(" datastore = \"%s\"\n", v)
+ }
+ template := os.Getenv("VSPHERE_TEMPLATE")
+ label := os.Getenv("VSPHERE_NETWORK_LABEL_DHCP")
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckVSphereVirtualMachineDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: fmt.Sprintf(
+ testAccCheckVSphereVirtualMachineConfig_updateMemoryInitial,
+ locationOpt,
+ label,
+ datastoreOpt,
+ template,
+ ),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckVSphereVirtualMachineExists("vsphere_virtual_machine.bar", &vm),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.bar", "name", "terraform-test"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.bar", "vcpu", "2"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.bar", "memory", "4096"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.bar", "disk.#", "1"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.bar", "disk.0.template", template),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.bar", "network_interface.#", "1"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.bar", "network_interface.0.label", label),
+ ),
+ },
+ resource.TestStep{
+ Config: fmt.Sprintf(
+ testAccCheckVSphereVirtualMachineConfig_updateMemoryUpdate,
+ locationOpt,
+ label,
+ datastoreOpt,
+ template,
+ ),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckVSphereVirtualMachineExists("vsphere_virtual_machine.bar", &vm),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.bar", "name", "terraform-test"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.bar", "vcpu", "2"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.bar", "memory", "2048"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.bar", "disk.#", "1"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.bar", "disk.0.template", template),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.bar", "network_interface.#", "1"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.bar", "network_interface.0.label", label),
+ ),
+ },
+ },
+ })
+}
+
+func TestAccVSphereVirtualMachine_updateVcpu(t *testing.T) {
+ var vm virtualMachine
+ var locationOpt string
+ var datastoreOpt string
+
+ if v := os.Getenv("VSPHERE_DATACENTER"); v != "" {
+ locationOpt += fmt.Sprintf(" datacenter = \"%s\"\n", v)
+ }
+ if v := os.Getenv("VSPHERE_CLUSTER"); v != "" {
+ locationOpt += fmt.Sprintf(" cluster = \"%s\"\n", v)
+ }
+ if v := os.Getenv("VSPHERE_RESOURCE_POOL"); v != "" {
+ locationOpt += fmt.Sprintf(" resource_pool = \"%s\"\n", v)
+ }
+ if v := os.Getenv("VSPHERE_DATASTORE"); v != "" {
+ datastoreOpt = fmt.Sprintf(" datastore = \"%s\"\n", v)
+ }
+ template := os.Getenv("VSPHERE_TEMPLATE")
+ label := os.Getenv("VSPHERE_NETWORK_LABEL_DHCP")
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckVSphereVirtualMachineDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: fmt.Sprintf(
+ testAccCheckVSphereVirtualMachineConfig_updateVcpuInitial,
+ locationOpt,
+ label,
+ datastoreOpt,
+ template,
+ ),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckVSphereVirtualMachineExists("vsphere_virtual_machine.bar", &vm),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.bar", "name", "terraform-test"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.bar", "vcpu", "2"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.bar", "memory", "4096"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.bar", "disk.#", "1"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.bar", "disk.0.template", template),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.bar", "network_interface.#", "1"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.bar", "network_interface.0.label", label),
+ ),
+ },
+ resource.TestStep{
+ Config: fmt.Sprintf(
+ testAccCheckVSphereVirtualMachineConfig_updateVcpuUpdate,
+ locationOpt,
+ label,
+ datastoreOpt,
+ template,
+ ),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckVSphereVirtualMachineExists("vsphere_virtual_machine.bar", &vm),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.bar", "name", "terraform-test"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.bar", "vcpu", "4"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.bar", "memory", "4096"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.bar", "disk.#", "1"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.bar", "disk.0.template", template),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.bar", "network_interface.#", "1"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.bar", "network_interface.0.label", label),
+ ),
+ },
+ },
+ })
+}
+
+func TestAccVSphereVirtualMachine_ipv4Andipv6(t *testing.T) {
+ var vm virtualMachine
+ var locationOpt string
+ var datastoreOpt string
+
+ if v := os.Getenv("VSPHERE_DATACENTER"); v != "" {
+ locationOpt += fmt.Sprintf(" datacenter = \"%s\"\n", v)
+ }
+ if v := os.Getenv("VSPHERE_CLUSTER"); v != "" {
+ locationOpt += fmt.Sprintf(" cluster = \"%s\"\n", v)
+ }
+ if v := os.Getenv("VSPHERE_RESOURCE_POOL"); v != "" {
+ locationOpt += fmt.Sprintf(" resource_pool = \"%s\"\n", v)
+ }
+ if v := os.Getenv("VSPHERE_DATASTORE"); v != "" {
+ datastoreOpt = fmt.Sprintf(" datastore = \"%s\"\n", v)
+ }
+ template := os.Getenv("VSPHERE_TEMPLATE")
+ label := os.Getenv("VSPHERE_NETWORK_LABEL")
+ ipv4Address := os.Getenv("VSPHERE_IPV4_ADDRESS")
+ ipv4Gateway := os.Getenv("VSPHERE_IPV4_GATEWAY")
+ ipv6Address := os.Getenv("VSPHERE_IPV6_ADDRESS")
+ ipv6Gateway := os.Getenv("VSPHERE_IPV6_GATEWAY")
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { testAccPreCheck(t) },
+ Providers: testAccProviders,
+ CheckDestroy: testAccCheckVSphereVirtualMachineDestroy,
+ Steps: []resource.TestStep{
+ resource.TestStep{
+ Config: fmt.Sprintf(
+ testAccCheckVSphereVirtualMachineConfig_ipv4Andipv6,
+ locationOpt,
+ label,
+ ipv4Address,
+ ipv4Gateway,
+ ipv6Address,
+ ipv6Gateway,
+ datastoreOpt,
+ template,
+ ),
+ Check: resource.ComposeTestCheckFunc(
+ testAccCheckVSphereVirtualMachineExists("vsphere_virtual_machine.ipv4ipv6", &vm),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.ipv4ipv6", "name", "terraform-test-ipv4-ipv6"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.ipv4ipv6", "vcpu", "2"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.ipv4ipv6", "memory", "4096"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.ipv4ipv6", "disk.#", "2"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.ipv4ipv6", "disk.0.template", template),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.ipv4ipv6", "network_interface.#", "1"),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.ipv4ipv6", "network_interface.0.label", label),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.ipv4ipv6", "network_interface.0.ipv4_address", ipv4Address),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.ipv4ipv6", "network_interface.0.ipv4_gateway", ipv4Gateway),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.ipv4ipv6", "network_interface.0.ipv6_address", ipv6Address),
+ resource.TestCheckResourceAttr(
+ "vsphere_virtual_machine.ipv4ipv6", "network_interface.0.ipv6_gateway", ipv6Gateway),
+ ),
+ },
+ },
+ })
+}
+
func testAccCheckVSphereVirtualMachineDestroy(s *terraform.State) error {
client := testAccProvider.Meta().(*govmomi.Client)
finder := find.NewFinder(client.Client, true)
@@ -567,6 +929,7 @@ resource "vsphere_virtual_machine" "foo" {
%s
vcpu = 2
memory = 4096
+ memory_reservation = 4096
gateway = "%s"
network_interface {
label = "%s"
@@ -664,7 +1027,7 @@ resource "vsphere_virtual_machine" "folder" {
const testAccCheckVSphereVirtualMachineConfig_createWithFolder = `
resource "vsphere_folder" "with_folder" {
- path = "%s"
+ path = "%s"
%s
}
resource "vsphere_virtual_machine" "with_folder" {
@@ -682,3 +1045,135 @@ resource "vsphere_virtual_machine" "with_folder" {
}
}
`
+
+const testAccCheckVsphereVirtualMachineConfig_cdrom = `
+resource "vsphere_virtual_machine" "with_cdrom" {
+ name = "terraform-test-with-cdrom"
+%s
+ vcpu = 2
+ memory = 4096
+ network_interface {
+ label = "%s"
+ }
+ disk {
+%s
+ template = "%s"
+ }
+
+ cdrom {
+ datastore = "%s"
+ path = "%s"
+ }
+}
+`
+
+const testAccCheckVSphereVirtualMachineConfig_withExistingVmdk = `
+resource "vsphere_virtual_machine" "with_existing_vmdk" {
+ name = "terraform-test-with-existing-vmdk"
+%s
+ vcpu = 2
+ memory = 4096
+ gateway = "%s"
+ network_interface {
+ label = "%s"
+ ipv4_address = "%s"
+ ipv4_prefix_length = 24
+ }
+ disk {
+%s
+ vmdk = "%s"
+ bootable = true
+ }
+}
+`
+
+const testAccCheckVSphereVirtualMachineConfig_updateMemoryInitial = `
+resource "vsphere_virtual_machine" "bar" {
+ name = "terraform-test"
+%s
+ vcpu = 2
+ memory = 4096
+ network_interface {
+ label = "%s"
+ }
+ disk {
+%s
+ template = "%s"
+ }
+}
+`
+
+const testAccCheckVSphereVirtualMachineConfig_updateMemoryUpdate = `
+resource "vsphere_virtual_machine" "bar" {
+ name = "terraform-test"
+%s
+ vcpu = 2
+ memory = 2048
+ network_interface {
+ label = "%s"
+ }
+ disk {
+%s
+ template = "%s"
+ }
+}
+`
+
+const testAccCheckVSphereVirtualMachineConfig_updateVcpuInitial = `
+resource "vsphere_virtual_machine" "bar" {
+ name = "terraform-test"
+%s
+ vcpu = 2
+ memory = 4096
+ network_interface {
+ label = "%s"
+ }
+ disk {
+%s
+ template = "%s"
+ }
+}
+`
+
+const testAccCheckVSphereVirtualMachineConfig_updateVcpuUpdate = `
+resource "vsphere_virtual_machine" "bar" {
+ name = "terraform-test"
+%s
+ vcpu = 4
+ memory = 4096
+ network_interface {
+ label = "%s"
+ }
+ disk {
+%s
+ template = "%s"
+ }
+}
+`
+
+const testAccCheckVSphereVirtualMachineConfig_ipv4Andipv6 = `
+resource "vsphere_virtual_machine" "ipv4ipv6" {
+ name = "terraform-test-ipv4-ipv6"
+%s
+ vcpu = 2
+ memory = 4096
+ network_interface {
+ label = "%s"
+ ipv4_address = "%s"
+ ipv4_prefix_length = 24
+ ipv4_gateway = "%s"
+ ipv6_address = "%s"
+ ipv6_prefix_length = 64
+ ipv6_gateway = "%s"
+ }
+ disk {
+%s
+ template = "%s"
+ iops = 500
+ }
+ disk {
+ size = 1
+ iops = 500
+ }
+}
+`
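The new acceptance tests above depend on several additional environment variables beyond the existing VSPHERE_* set: VSPHERE_IPV4_ADDRESS and VSPHERE_IPV4_GATEWAY (replacing VSPHERE_NETWORK_IP_ADDRESS and VSPHERE_NETWORK_GATEWAY), VSPHERE_IPV6_ADDRESS, VSPHERE_IPV6_GATEWAY, VSPHERE_NETWORK_LABEL_DHCP, VSPHERE_CDROM_DATASTORE, VSPHERE_CDROM_PATH, and VSPHERE_VMDK_PATH. Assuming the repository's usual acceptance-test make target, an illustrative invocation (all values are placeholders, not real infrastructure) would be:

VSPHERE_CDROM_DATASTORE=datastore1 VSPHERE_CDROM_PATH="iso/ubuntu-14.04.iso" VSPHERE_VMDK_PATH="existing/disk.vmdk" make testacc TEST=./builtin/providers/vsphere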
diff --git a/command/apply.go b/command/apply.go
index 62ed3dd9ab0a..f18865bdb193 100644
--- a/command/apply.go
+++ b/command/apply.go
@@ -276,11 +276,16 @@ func (c *ApplyCommand) Synopsis() string {
func (c *ApplyCommand) helpApply() string {
helpText := `
-Usage: terraform apply [options] [DIR]
+Usage: terraform apply [options] [DIR-OR-PLAN]
Builds or changes infrastructure according to Terraform configuration
files in DIR.
+ By default, apply scans the current directory for the configuration
+ and applies the changes appropriately. However, a path to another
+ configuration or an execution plan can be provided. Execution plans can be
+ used to execute only a pre-determined set of actions.
+
DIR can also be a SOURCE as given to the "init" command. In this case,
apply behaves as though "init" was called followed by "apply". This only
works for sources that aren't files, and only if the current working
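The DIR-OR-PLAN wording above corresponds to the saved-plan workflow: a plan can be recorded with "terraform plan -out=my.tfplan" and later applied with "terraform apply my.tfplan", in which case only the actions captured in the plan file are executed (the file name is illustrative).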
diff --git a/command/fmt.go b/command/fmt.go
index d9ccc2643c87..0b06d7b50ad9 100644
--- a/command/fmt.go
+++ b/command/fmt.go
@@ -79,11 +79,11 @@ Usage: terraform fmt [options] [DIR]
Options:
- -list List files whose formatting differs (disabled if using STDIN)
+ -list=true List files whose formatting differs (always false if using STDIN)
- -write Write result to source file instead of STDOUT (disabled if using STDIN)
+ -write=true Write result to source file instead of STDOUT (always false if using STDIN)
- -diff Display diffs instead of rewriting files
+ -diff=false Display diffs of formatting changes
`
return strings.TrimSpace(helpText)
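A usage note on the defaults documented above: a plain "terraform fmt" both lists and rewrites files in place, while "terraform fmt -list=true -write=false -diff=true" previews the formatting changes as diffs without modifying any files (flag combination shown purely for illustration).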
diff --git a/command/hook_ui.go b/command/hook_ui.go
index 6a10ceefbb4a..206dffe4067a 100644
--- a/command/hook_ui.go
+++ b/command/hook_ui.go
@@ -7,6 +7,7 @@ import (
"sort"
"strings"
"sync"
+ "time"
"unicode"
"github.com/hashicorp/terraform/terraform"
@@ -14,6 +15,8 @@ import (
"github.com/mitchellh/colorstring"
)
+const periodicUiTimer = 10 * time.Second
+
type UiHook struct {
terraform.NilHook
@@ -22,10 +25,17 @@ type UiHook struct {
l sync.Mutex
once sync.Once
- resources map[string]uiResourceOp
+ resources map[string]uiResourceState
ui cli.Ui
}
+// uiResourceState tracks the state of a single resource
+type uiResourceState struct {
+ Op uiResourceOp
+ Start time.Time
+}
+
+// uiResourceOp is an enum for operations on a resource
type uiResourceOp byte
const (
@@ -51,7 +61,10 @@ func (h *UiHook) PreApply(
}
h.l.Lock()
- h.resources[id] = op
+ h.resources[id] = uiResourceState{
+ Op: op,
+ Start: time.Now().Round(time.Second),
+ }
h.l.Unlock()
var operation string
@@ -113,9 +126,47 @@ func (h *UiHook) PreApply(
operation,
attrString)))
+ // Set a timer to show an operation is still happening
+ time.AfterFunc(periodicUiTimer, func() { h.stillApplying(id) })
+
return terraform.HookActionContinue, nil
}
+func (h *UiHook) stillApplying(id string) {
+ // Grab the resource state. We hold the lock for the duration of this
+ // function so a "still..." message cannot show up after a completion message.
+ h.l.Lock()
+ defer h.l.Unlock()
+ state, ok := h.resources[id]
+
+ // If the resource is out of the map it means we're done with it
+ if !ok {
+ return
+ }
+
+ var msg string
+ switch state.Op {
+ case uiResourceModify:
+ msg = "Still modifying..."
+ case uiResourceDestroy:
+ msg = "Still destroying..."
+ case uiResourceCreate:
+ msg = "Still creating..."
+ case uiResourceUnknown:
+ return
+ }
+
+ h.ui.Output(h.Colorize.Color(fmt.Sprintf(
+ "[reset][bold]%s: %s (%s elapsed)[reset_bold]",
+ id,
+ msg,
+ time.Now().Round(time.Second).Sub(state.Start),
+ )))
+
+ // Reschedule
+ time.AfterFunc(periodicUiTimer, func() { h.stillApplying(id) })
+}
+
func (h *UiHook) PostApply(
n *terraform.InstanceInfo,
s *terraform.InstanceState,
@@ -123,12 +174,12 @@ func (h *UiHook) PostApply(
id := n.HumanId()
h.l.Lock()
- op := h.resources[id]
+ state := h.resources[id]
delete(h.resources, id)
h.l.Unlock()
var msg string
- switch op {
+ switch state.Op {
case uiResourceModify:
msg = "Modifications complete"
case uiResourceDestroy:
@@ -205,7 +256,7 @@ func (h *UiHook) init() {
panic("colorize not given")
}
- h.resources = make(map[string]uiResourceOp)
+ h.resources = make(map[string]uiResourceState)
// Wrap the ui so that it is safe for concurrency regardless of the
// underlying reader/writer that is in place.
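The periodic "Still creating..." output above is driven by a self-rescheduling time.AfterFunc chain that stops as soon as PostApply deletes the resource from the tracking map. A minimal, self-contained sketch of that pattern follows; the names used here (trackedOps, reportProgress) are illustrative and not part of the patch.

package main

import (
	"fmt"
	"sync"
	"time"
)

var (
	mu         sync.Mutex
	trackedOps = map[string]time.Time{} // resource id -> start time
)

// reportProgress prints an elapsed-time message and reschedules itself
// until the id has been removed from trackedOps.
func reportProgress(id string, every time.Duration) {
	mu.Lock()
	defer mu.Unlock()

	start, ok := trackedOps[id]
	if !ok {
		// The operation finished and was removed; the timer chain ends here.
		return
	}
	fmt.Printf("%s: still working... (%s elapsed)\n",
		id, time.Now().Round(time.Second).Sub(start))

	// Schedule the next tick; AfterFunc runs the callback on its own goroutine.
	time.AfterFunc(every, func() { reportProgress(id, every) })
}

func main() {
	id := "example_resource.foo"

	mu.Lock()
	trackedOps[id] = time.Now().Round(time.Second)
	mu.Unlock()

	time.AfterFunc(time.Second, func() { reportProgress(id, time.Second) })

	time.Sleep(3500 * time.Millisecond) // let a few ticks fire

	mu.Lock()
	delete(trackedOps, id) // equivalent to PostApply removing the resource
	mu.Unlock()

	time.Sleep(2 * time.Second) // no further messages appear after deletion
}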
diff --git a/communicator/ssh/password.go b/communicator/ssh/password.go
index 8db6f82da2c4..8b32c8d4cd98 100644
--- a/communicator/ssh/password.go
+++ b/communicator/ssh/password.go
@@ -1,8 +1,9 @@
package ssh
import (
- "golang.org/x/crypto/ssh"
"log"
+
+ "golang.org/x/crypto/ssh"
)
// An implementation of ssh.KeyboardInteractiveChallenge that simply sends
diff --git a/communicator/ssh/password_test.go b/communicator/ssh/password_test.go
index 6e3e0a257ad1..e513716d0834 100644
--- a/communicator/ssh/password_test.go
+++ b/communicator/ssh/password_test.go
@@ -1,9 +1,10 @@
package ssh
import (
- "golang.org/x/crypto/ssh"
"reflect"
"testing"
+
+ "golang.org/x/crypto/ssh"
)
func TestPasswordKeyboardInteractive_Impl(t *testing.T) {
diff --git a/config/config.go b/config/config.go
index 8d97d9f3c05a..38578ae493f8 100644
--- a/config/config.go
+++ b/config/config.go
@@ -162,6 +162,17 @@ const (
VariableTypeMap
)
+func (v VariableType) Printable() string {
+ switch v {
+ case VariableTypeString:
+ return "string"
+ case VariableTypeMap:
+ return "map"
+ default:
+ return "unknown"
+ }
+}
+
// ProviderConfigName returns the name of the provider configuration in
// the given mapping that maps to the proper provider configuration
// for this resource.
@@ -439,7 +450,7 @@ func (c *Config) Validate() error {
r.RawCount.interpolate(func(root ast.Node) (string, error) {
// Execute the node but transform the AST so that it returns
// a fixed value of "5" for all interpolations.
- out, _, err := hil.Eval(
+ result, err := hil.Eval(
hil.FixedValueTransform(
root, &ast.LiteralNode{Value: "5", Typex: ast.TypeString}),
nil)
@@ -447,7 +458,7 @@ func (c *Config) Validate() error {
return "", err
}
- return out.(string), nil
+ return result.Value.(string), nil
})
_, err := strconv.ParseInt(r.RawCount.Value().(string), 0, 0)
if err != nil {
@@ -669,7 +680,7 @@ func (c *Config) validateVarContextFn(
node = node.Accept(func(n ast.Node) ast.Node {
// If it is a concat or variable access, we allow it.
switch n.(type) {
- case *ast.Concat:
+ case *ast.Output:
return n
case *ast.VariableAccess:
return n
diff --git a/config/interpolate_funcs_test.go b/config/interpolate_funcs_test.go
index 123ee39f36e4..774a7bf4589f 100644
--- a/config/interpolate_funcs_test.go
+++ b/config/interpolate_funcs_test.go
@@ -1004,16 +1004,16 @@ func TestInterpolateFuncUUID(t *testing.T) {
t.Fatalf("err: %s", err)
}
- out, _, err := hil.Eval(ast, langEvalConfig(nil))
+ result, err := hil.Eval(ast, langEvalConfig(nil))
if err != nil {
t.Fatalf("err: %s", err)
}
- if results[out.(string)] {
- t.Fatalf("Got unexpected duplicate uuid: %s", out)
+ if results[result.Value.(string)] {
+ t.Fatalf("Got unexpected duplicate uuid: %s", result.Value)
}
- results[out.(string)] = true
+ results[result.Value.(string)] = true
}
}
@@ -1035,15 +1035,14 @@ func testFunction(t *testing.T, config testFunctionConfig) {
t.Fatalf("Case #%d: input: %#v\nerr: %s", i, tc.Input, err)
}
- out, _, err := hil.Eval(ast, langEvalConfig(config.Vars))
+ result, err := hil.Eval(ast, langEvalConfig(config.Vars))
if err != nil != tc.Error {
t.Fatalf("Case #%d:\ninput: %#v\nerr: %s", i, tc.Input, err)
}
- if !reflect.DeepEqual(out, tc.Result) {
- t.Fatalf(
- "%d: bad output for input: %s\n\nOutput: %#v\nExpected: %#v",
- i, tc.Input, out, tc.Result)
+ if !reflect.DeepEqual(result.Value, tc.Result) {
+ t.Fatalf("%d: bad output for input: %s\n\nOutput: %#v\nExpected: %#v",
+ i, tc.Input, result.Value, tc.Result)
}
}
}
diff --git a/config/raw_config.go b/config/raw_config.go
index c897ed387a43..6fc15ebd5e68 100644
--- a/config/raw_config.go
+++ b/config/raw_config.go
@@ -132,12 +132,12 @@ func (r *RawConfig) Interpolate(vs map[string]ast.Variable) error {
// None of the variables we need are computed, meaning we should
// be able to properly evaluate.
- out, _, err := hil.Eval(root, config)
+ result, err := hil.Eval(root, config)
if err != nil {
return "", err
}
- return out.(string), nil
+ return result.Value.(string), nil
})
}
diff --git a/dag/graph.go b/dag/graph.go
index 5178648d26ab..b271339ba838 100644
--- a/dag/graph.go
+++ b/dag/graph.go
@@ -177,6 +177,47 @@ func (g *Graph) Connect(edge Edge) {
s.Add(source)
}
+// StringWithNodeTypes outputs human-friendly output for the graph structure, annotating each node with its Go type.
+func (g *Graph) StringWithNodeTypes() string {
+ var buf bytes.Buffer
+
+ // Build the list of node names and a mapping so that we can more
+ // easily alphabetize the output to remain deterministic.
+ vertices := g.Vertices()
+ names := make([]string, 0, len(vertices))
+ mapping := make(map[string]Vertex, len(vertices))
+ for _, v := range vertices {
+ name := VertexName(v)
+ names = append(names, name)
+ mapping[name] = v
+ }
+ sort.Strings(names)
+
+ // Write each node in order...
+ for _, name := range names {
+ v := mapping[name]
+ targets := g.downEdges[hashcode(v)]
+
+ buf.WriteString(fmt.Sprintf("%s - %T\n", name, v))
+
+ // Alphabetize dependencies
+ deps := make([]string, 0, targets.Len())
+ targetNodes := make([]Vertex, 0, targets.Len())
+ for _, target := range targets.List() {
+ deps = append(deps, VertexName(target))
+ targetNodes = append(targetNodes, target)
+ }
+ sort.Strings(deps)
+
+ // Write dependencies
+ for i, d := range deps {
+ buf.WriteString(fmt.Sprintf(" %s - %T\n", d, targetNodes[i]))
+ }
+ }
+
+ return buf.String()
+}
+
// String outputs some human-friendly output for the graph structure.
func (g *Graph) String() string {
var buf bytes.Buffer
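With StringWithNodeTypes, each vertex and dependency line in the trace output also carries the node's concrete Go type, producing output roughly of the form (names and types here are hypothetical):

  aws_instance.foo - *terraform.GraphNodeConfigResource
    provider.aws - *terraform.GraphNodeConfigProvider

which makes it easier to tell which transformer produced a given vertex when reading the per-step graph traces enabled in graph_builder.go below.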
diff --git a/helper/resource/testing.go b/helper/resource/testing.go
index 659cf2641252..94e03b53110f 100644
--- a/helper/resource/testing.go
+++ b/helper/resource/testing.go
@@ -7,10 +7,12 @@ import (
"log"
"os"
"path/filepath"
+ "reflect"
"regexp"
"strings"
"testing"
+ "github.com/davecgh/go-spew/spew"
"github.com/hashicorp/go-getter"
"github.com/hashicorp/terraform/config/module"
"github.com/hashicorp/terraform/helper/logging"
@@ -58,6 +60,18 @@ type TestCase struct {
// Steps are the apply sequences done within the context of the
// same state. Each step can have its own check to verify correctness.
Steps []TestStep
+
+ // The settings below control the "ID-only refresh test." This is
+ // an enabled-by-default test that verifies a resource can be refreshed
+ // using only its ID and still result in the same attributes.
+ // This validates completeness of Refresh.
+ //
+ // IDRefreshName is the name of the resource to check. This will
+ // default to the first non-nil primary resource in the state.
+ //
+ // IDRefreshIgnore is a list of configuration keys that will be ignored.
+ IDRefreshName string
+ IDRefreshIgnore []string
}
// TestStep is a single apply sequence of a test, done within the
@@ -145,15 +159,59 @@ func Test(t TestT, c TestCase) {
var state *terraform.State
// Go through each step and run it
+ var idRefreshCheck *terraform.ResourceState
+ idRefresh := c.IDRefreshName != ""
+ errored := false
for i, step := range c.Steps {
var err error
log.Printf("[WARN] Test: Executing step %d", i)
state, err = testStep(opts, state, step)
if err != nil {
+ errored = true
t.Error(fmt.Sprintf(
"Step %d error: %s", i, err))
break
}
+
+ // If we've never checked an id-only refresh and our state isn't
+ // empty, find the first resource and test it.
+ if idRefresh && idRefreshCheck == nil && !state.Empty() {
+ // Find the first non-nil resource in the state
+ for _, m := range state.Modules {
+ if len(m.Resources) > 0 {
+ if v, ok := m.Resources[c.IDRefreshName]; ok {
+ idRefreshCheck = v
+ }
+
+ break
+ }
+ }
+
+ // If we have an instance to check for refreshes, do it
+ // immediately. We do it in the middle of another test
+ // because it shouldn't affect the overall state (refresh
+ // is read-only semantically) and we want to fail early if
+ // this fails. If refresh isn't read-only, then this will have
+ // caught a different bug.
+ if idRefreshCheck != nil {
+ log.Printf(
+ "[WARN] Test: Running ID-only refresh check on %s",
+ idRefreshCheck.Primary.ID)
+ if err := testIDOnlyRefresh(c, opts, step, idRefreshCheck); err != nil {
+ log.Printf("[ERROR] Test: ID-only test failed: %s", err)
+ t.Error(fmt.Sprintf(
+ "ID-Only refresh test failure: %s", err))
+ break
+ }
+ }
+ }
+ }
+
+ // If we never checked an id-only refresh, it is a failure.
+ if idRefresh {
+ if !errored && len(c.Steps) > 0 && idRefreshCheck == nil {
+ t.Error("ID-only refresh check never ran.")
+ }
}
// If we have a state, then run the destroy
@@ -195,49 +253,109 @@ func UnitTest(t TestT, c TestCase) {
Test(t, c)
}
-func testStep(
- opts terraform.ContextOpts,
- state *terraform.State,
- step TestStep) (*terraform.State, error) {
- if step.PreConfig != nil {
- step.PreConfig()
+func testIDOnlyRefresh(c TestCase, opts terraform.ContextOpts, step TestStep, r *terraform.ResourceState) error {
+ // TODO: We guard on this env var right now so master doesn't explode. We
+ // need to remove this eventually to make this part of the normal tests.
+ if os.Getenv("TF_ACC_IDONLY") == "" {
+ return nil
}
- cfgPath, err := ioutil.TempDir("", "tf-test")
- if err != nil {
- return state, fmt.Errorf(
- "Error creating temporary directory for config: %s", err)
+ name := fmt.Sprintf("%s.foo", r.Type)
+
+ // Build the state. The state is just the resource with an ID. There
+ // are no attributes. We only set what is needed to perform a refresh.
+ state := terraform.NewState()
+ state.RootModule().Resources[name] = &terraform.ResourceState{
+ Type: r.Type,
+ Primary: &terraform.InstanceState{
+ ID: r.Primary.ID,
+ },
}
- defer os.RemoveAll(cfgPath)
- // Write the configuration
- cfgF, err := os.Create(filepath.Join(cfgPath, "main.tf"))
+ // Create the config module. We use the full config because Refresh
+ // doesn't have access to it and we may need things like provider
+ // configurations. The initial implementation of id-only checks used
+ // an empty config module, but that caused the aforementioned problems.
+ mod, err := testModule(opts, step)
if err != nil {
- return state, fmt.Errorf(
- "Error creating temporary file for config: %s", err)
+ return err
}
- _, err = io.Copy(cfgF, strings.NewReader(step.Config))
- cfgF.Close()
- if err != nil {
- return state, fmt.Errorf(
- "Error creating temporary file for config: %s", err)
+ // Initialize the context
+ opts.Module = mod
+ opts.State = state
+ ctx := terraform.NewContext(&opts)
+ if ws, es := ctx.Validate(); len(ws) > 0 || len(es) > 0 {
+ if len(es) > 0 {
+ estrs := make([]string, len(es))
+ for i, e := range es {
+ estrs[i] = e.Error()
+ }
+ return fmt.Errorf(
+ "Configuration is invalid.\n\nWarnings: %#v\n\nErrors: %#v",
+ ws, estrs)
+ }
+
+ log.Printf("[WARN] Config warnings: %#v", ws)
}
- // Parse the configuration
- mod, err := module.NewTreeModule("", cfgPath)
+ // Refresh!
+ state, err = ctx.Refresh()
if err != nil {
- return state, fmt.Errorf(
- "Error loading configuration: %s", err)
+ return fmt.Errorf("Error refreshing: %s", err)
}
- // Load the modules
- modStorage := &getter.FolderStorage{
- StorageDir: filepath.Join(cfgPath, ".tfmodules"),
+ // Verify attribute equivalence.
+ actualR := state.RootModule().Resources[name]
+ if actualR == nil {
+ return fmt.Errorf("Resource gone!")
}
- err = mod.Load(modStorage, module.GetModeGet)
+ if actualR.Primary == nil {
+ return fmt.Errorf("Resource has no primary instance")
+ }
+ actual := actualR.Primary.Attributes
+ expected := r.Primary.Attributes
+ // Remove fields we're ignoring
+ for _, v := range c.IDRefreshIgnore {
+ for k := range actual {
+ if strings.HasPrefix(k, v) {
+ delete(actual, k)
+ }
+ }
+ for k := range expected {
+ if strings.HasPrefix(k, v) {
+ delete(expected, k)
+ }
+ }
+ }
+
+ if !reflect.DeepEqual(actual, expected) {
+ // Determine only the different attributes
+ for k, v := range expected {
+ if av, ok := actual[k]; ok && v == av {
+ delete(expected, k)
+ delete(actual, k)
+ }
+ }
+
+ spewConf := spew.NewDefaultConfig()
+ spewConf.SortKeys = true
+ return fmt.Errorf(
+ "Attributes not equivalent. Difference is shown below. Top is actual, bottom is expected."+
+ "\n\n%s\n\n%s",
+ spewConf.Sdump(actual), spewConf.Sdump(expected))
+ }
+
+ return nil
+}
+
+func testStep(
+ opts terraform.ContextOpts,
+ state *terraform.State,
+ step TestStep) (*terraform.State, error) {
+ mod, err := testModule(opts, step)
if err != nil {
- return state, fmt.Errorf("Error downloading modules: %s", err)
+ return state, err
}
// Build the context
@@ -340,6 +458,53 @@ func testStep(
return state, nil
}
+func testModule(
+ opts terraform.ContextOpts,
+ step TestStep) (*module.Tree, error) {
+ if step.PreConfig != nil {
+ step.PreConfig()
+ }
+
+ cfgPath, err := ioutil.TempDir("", "tf-test")
+ if err != nil {
+ return nil, fmt.Errorf(
+ "Error creating temporary directory for config: %s", err)
+ }
+ defer os.RemoveAll(cfgPath)
+
+ // Write the configuration
+ cfgF, err := os.Create(filepath.Join(cfgPath, "main.tf"))
+ if err != nil {
+ return nil, fmt.Errorf(
+ "Error creating temporary file for config: %s", err)
+ }
+
+ _, err = io.Copy(cfgF, strings.NewReader(step.Config))
+ cfgF.Close()
+ if err != nil {
+ return nil, fmt.Errorf(
+ "Error creating temporary file for config: %s", err)
+ }
+
+ // Parse the configuration
+ mod, err := module.NewTreeModule("", cfgPath)
+ if err != nil {
+ return nil, fmt.Errorf(
+ "Error loading configuration: %s", err)
+ }
+
+ // Load the modules
+ modStorage := &getter.FolderStorage{
+ StorageDir: filepath.Join(cfgPath, ".tfmodules"),
+ }
+ err = mod.Load(modStorage, module.GetModeGet)
+ if err != nil {
+ return nil, fmt.Errorf("Error downloading modules: %s", err)
+ }
+
+ return mod, nil
+}
+
// ComposeTestCheckFunc lets you compose multiple TestCheckFuncs into
// a single TestCheckFunc.
//
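For provider test authors, opting into the new check means setting IDRefreshName on the TestCase to the address of the resource to verify (and, while the TODO guard above remains, exporting TF_ACC_IDONLY=1); IDRefreshIgnore can then list attribute key prefixes, for example write-only credential fields, that should be excluded from the before/after attribute comparison.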
diff --git a/helper/resource/testing_test.go b/helper/resource/testing_test.go
index 31e8ab69d775..edb11b7b606f 100644
--- a/helper/resource/testing_test.go
+++ b/helper/resource/testing_test.go
@@ -12,6 +12,11 @@ import (
func init() {
testTesting = true
+ // TODO: Remove when we remove the guard on id checks
+ if err := os.Setenv("TF_ACC_IDONLY", "1"); err != nil {
+ panic(err)
+ }
+
if err := os.Setenv(TestEnvVar, "1"); err != nil {
panic(err)
}
@@ -21,17 +26,23 @@ func TestTest(t *testing.T) {
mp := testProvider()
mp.DiffReturn = nil
- mp.ApplyReturn = &terraform.InstanceState{
- ID: "foo",
+ mp.ApplyFn = func(
+ info *terraform.InstanceInfo,
+ state *terraform.InstanceState,
+ diff *terraform.InstanceDiff) (*terraform.InstanceState, error) {
+ if !diff.Destroy {
+ return &terraform.InstanceState{
+ ID: "foo",
+ }, nil
+ }
+
+ return nil, nil
}
+
var refreshCount int32
mp.RefreshFn = func(*terraform.InstanceInfo, *terraform.InstanceState) (*terraform.InstanceState, error) {
atomic.AddInt32(&refreshCount, 1)
- if atomic.LoadInt32(&refreshCount) == 1 {
- return &terraform.InstanceState{ID: "foo"}, nil
- } else {
- return nil, nil
- }
+ return &terraform.InstanceState{ID: "foo"}, nil
}
checkDestroy := false
@@ -83,6 +94,172 @@ func TestTest(t *testing.T) {
}
}
+func TestTest_idRefresh(t *testing.T) {
+ // Refresh count should be 3:
+ // 1.) initial Ref/Plan/Apply
+ // 2.) post Ref/Plan/Apply for plan-check
+ // 3.) id refresh check
+ var expectedRefresh int32 = 3
+
+ mp := testProvider()
+ mp.DiffReturn = nil
+
+ mp.ApplyFn = func(
+ info *terraform.InstanceInfo,
+ state *terraform.InstanceState,
+ diff *terraform.InstanceDiff) (*terraform.InstanceState, error) {
+ if !diff.Destroy {
+ return &terraform.InstanceState{
+ ID: "foo",
+ }, nil
+ }
+
+ return nil, nil
+ }
+
+ var refreshCount int32
+ mp.RefreshFn = func(*terraform.InstanceInfo, *terraform.InstanceState) (*terraform.InstanceState, error) {
+ atomic.AddInt32(&refreshCount, 1)
+ return &terraform.InstanceState{ID: "foo"}, nil
+ }
+
+ mt := new(mockT)
+ Test(mt, TestCase{
+ IDRefreshName: "test_instance.foo",
+ Providers: map[string]terraform.ResourceProvider{
+ "test": mp,
+ },
+ Steps: []TestStep{
+ TestStep{
+ Config: testConfigStr,
+ },
+ },
+ })
+
+ if mt.failed() {
+ t.Fatalf("test failed: %s", mt.failMessage())
+ }
+
+ // See declaration of expectedRefresh for why that number
+ if refreshCount != expectedRefresh {
+ t.Fatalf("bad refresh count: %d", refreshCount)
+ }
+}
+
+func TestTest_idRefreshCustomName(t *testing.T) {
+ // Refresh count should be 3:
+ // 1.) initial Ref/Plan/Apply
+ // 2.) post Ref/Plan/Apply for plan-check
+ // 3.) id refresh check
+ var expectedRefresh int32 = 3
+
+ mp := testProvider()
+ mp.DiffReturn = nil
+
+ mp.ApplyFn = func(
+ info *terraform.InstanceInfo,
+ state *terraform.InstanceState,
+ diff *terraform.InstanceDiff) (*terraform.InstanceState, error) {
+ if !diff.Destroy {
+ return &terraform.InstanceState{
+ ID: "foo",
+ }, nil
+ }
+
+ return nil, nil
+ }
+
+ var refreshCount int32
+ mp.RefreshFn = func(*terraform.InstanceInfo, *terraform.InstanceState) (*terraform.InstanceState, error) {
+ atomic.AddInt32(&refreshCount, 1)
+ return &terraform.InstanceState{ID: "foo"}, nil
+ }
+
+ mt := new(mockT)
+ Test(mt, TestCase{
+ IDRefreshName: "test_instance.foo",
+ Providers: map[string]terraform.ResourceProvider{
+ "test": mp,
+ },
+ Steps: []TestStep{
+ TestStep{
+ Config: testConfigStr,
+ },
+ },
+ })
+
+ if mt.failed() {
+ t.Fatalf("test failed: %s", mt.failMessage())
+ }
+
+ // See declaration of expectedRefresh for why that number
+ if refreshCount != expectedRefresh {
+ t.Fatalf("bad refresh count: %d", refreshCount)
+ }
+}
+
+func TestTest_idRefreshFail(t *testing.T) {
+ // Refresh count should be 3:
+ // 1.) initial Ref/Plan/Apply
+ // 2.) post Ref/Plan/Apply for plan-check
+ // 3.) id refresh check
+ var expectedRefresh int32 = 3
+
+ mp := testProvider()
+ mp.DiffReturn = nil
+
+ mp.ApplyFn = func(
+ info *terraform.InstanceInfo,
+ state *terraform.InstanceState,
+ diff *terraform.InstanceDiff) (*terraform.InstanceState, error) {
+ if !diff.Destroy {
+ return &terraform.InstanceState{
+ ID: "foo",
+ }, nil
+ }
+
+ return nil, nil
+ }
+
+ var refreshCount int32
+ mp.RefreshFn = func(*terraform.InstanceInfo, *terraform.InstanceState) (*terraform.InstanceState, error) {
+ atomic.AddInt32(&refreshCount, 1)
+ if atomic.LoadInt32(&refreshCount) == expectedRefresh-1 {
+ return &terraform.InstanceState{
+ ID: "foo",
+ Attributes: map[string]string{"foo": "bar"},
+ }, nil
+ } else if atomic.LoadInt32(&refreshCount) < expectedRefresh {
+ return &terraform.InstanceState{ID: "foo"}, nil
+ } else {
+ return nil, nil
+ }
+ }
+
+ mt := new(mockT)
+ Test(mt, TestCase{
+ IDRefreshName: "test_instance.foo",
+ Providers: map[string]terraform.ResourceProvider{
+ "test": mp,
+ },
+ Steps: []TestStep{
+ TestStep{
+ Config: testConfigStr,
+ },
+ },
+ })
+
+ if !mt.failed() {
+ t.Fatal("test didn't fail")
+ }
+ t.Logf("failure reason: %s", mt.failMessage())
+
+ // See declaration of expectedRefresh for why that number
+ if refreshCount != expectedRefresh {
+ t.Fatalf("bad refresh count: %d", refreshCount)
+ }
+}
+
func TestTest_empty(t *testing.T) {
destroyCalled := false
checkDestroyFn := func(*terraform.State) error {
diff --git a/scripts/build.sh b/scripts/build.sh
index 6681565c185d..76ff6dad6117 100755
--- a/scripts/build.sh
+++ b/scripts/build.sh
@@ -35,12 +35,18 @@ if ! which gox > /dev/null; then
go get -u github.com/mitchellh/gox
fi
+LD_FLAGS="-X main.GitCommit=${GIT_COMMIT}${GIT_DIRTY}"
+# In release mode we don't want debug information in the binary
+if [[ -n "${TF_RELEASE}" ]]; then
+ LD_FLAGS="-X main.GitCommit=${GIT_COMMIT}${GIT_DIRTY} -s -w"
+fi
+
# Build!
echo "==> Building..."
gox \
-os="${XC_OS}" \
-arch="${XC_ARCH}" \
- -ldflags "-X main.GitCommit=${GIT_COMMIT}${GIT_DIRTY}" \
+ -ldflags "${LD_FLAGS}" \
-output "pkg/{{.OS}}_{{.Arch}}/terraform-{{.Dir}}" \
$(go list ./... | grep -v /vendor/)
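In release mode the additional -s and -w linker flags strip the symbol table and DWARF debug information from the produced binaries, which noticeably reduces their size. A release build would therefore be triggered with something like "TF_RELEASE=1 ./scripts/build.sh" (illustrative invocation; the script is normally driven via the repository's make targets).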
diff --git a/state/remote/atlas.go b/state/remote/atlas.go
index e3988e02ccc0..6a48c21bc8a2 100644
--- a/state/remote/atlas.go
+++ b/state/remote/atlas.go
@@ -3,6 +3,7 @@ package remote
import (
"bytes"
"crypto/md5"
+ "crypto/tls"
"encoding/base64"
"fmt"
"io"
@@ -13,7 +14,9 @@ import (
"path"
"strings"
+ "github.com/hashicorp/go-cleanhttp"
"github.com/hashicorp/go-retryablehttp"
+ "github.com/hashicorp/go-rootcerts"
"github.com/hashicorp/terraform/terraform"
)
@@ -90,7 +93,10 @@ func (c *AtlasClient) Get() (*Payload, error) {
}
// Request the url
- client := c.http()
+ client, err := c.http()
+ if err != nil {
+ return nil, err
+ }
resp, err := client.Do(req)
if err != nil {
return nil, err
@@ -169,7 +175,10 @@ func (c *AtlasClient) Put(state []byte) error {
req.ContentLength = int64(len(state))
// Make the request
- client := c.http()
+ client, err := c.http()
+ if err != nil {
+ return err
+ }
resp, err := client.Do(req)
if err != nil {
return fmt.Errorf("Failed to upload state: %v", err)
@@ -197,7 +206,10 @@ func (c *AtlasClient) Delete() error {
}
// Make the request
- client := c.http()
+ client, err := c.http()
+ if err != nil {
+ return err
+ }
resp, err := client.Do(req)
if err != nil {
return fmt.Errorf("Failed to delete state: %v", err)
@@ -247,11 +259,23 @@ func (c *AtlasClient) url() *url.URL {
}
}
-func (c *AtlasClient) http() *retryablehttp.Client {
+func (c *AtlasClient) http() (*retryablehttp.Client, error) {
if c.HTTPClient != nil {
- return c.HTTPClient
+ return c.HTTPClient, nil
+ }
+ tlsConfig := &tls.Config{}
+ err := rootcerts.ConfigureTLS(tlsConfig, &rootcerts.Config{
+ CAFile: os.Getenv("ATLAS_CAFILE"),
+ CAPath: os.Getenv("ATLAS_CAPATH"),
+ })
+ if err != nil {
+ return nil, err
}
- return retryablehttp.NewClient()
+ rc := retryablehttp.NewClient()
+ t := cleanhttp.DefaultTransport()
+ t.TLSClientConfig = tlsConfig
+ rc.HTTPClient.Transport = t
+ return rc, nil
}
// Atlas returns an HTTP 409 - Conflict if the pushed state reports the same
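With this change, users of the Atlas remote state client can trust a private certificate authority by exporting ATLAS_CAFILE (path to a PEM bundle of CA certificates) or ATLAS_CAPATH (directory containing CA certificates) before running Terraform, for example "export ATLAS_CAFILE=/etc/ssl/certs/internal-ca.pem" (the path shown is a placeholder). When neither variable is set, rootcerts falls back to the system certificate store.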
diff --git a/terraform/context_apply_test.go b/terraform/context_apply_test.go
index c07e938d9fc9..9e390455b203 100644
--- a/terraform/context_apply_test.go
+++ b/terraform/context_apply_test.go
@@ -1056,6 +1056,47 @@ func TestContext2Apply_moduleOrphanProvider(t *testing.T) {
}
}
+func TestContext2Apply_moduleGrandchildProvider(t *testing.T) {
+ m := testModule(t, "apply-module-grandchild-provider-inherit")
+ p := testProvider("aws")
+ p.ApplyFn = testApplyFn
+ p.DiffFn = testDiffFn
+
+ var callLock sync.Mutex
+ called := false
+ p.ConfigureFn = func(c *ResourceConfig) error {
+ if _, ok := c.Get("value"); !ok {
+ return fmt.Errorf("value is not found")
+ }
+ callLock.Lock()
+ called = true
+ callLock.Unlock()
+
+ return nil
+ }
+
+ ctx := testContext2(t, &ContextOpts{
+ Module: m,
+ Providers: map[string]ResourceProviderFactory{
+ "aws": testProviderFuncFixed(p),
+ },
+ })
+
+ if _, err := ctx.Plan(); err != nil {
+ t.Fatalf("err: %s", err)
+ }
+
+ if _, err := ctx.Apply(); err != nil {
+ t.Fatalf("err: %s", err)
+ }
+
+ callLock.Lock()
+ defer callLock.Unlock()
+ if called != true {
+ t.Fatalf("err: configure never called")
+ }
+}
+
// This tests an issue where all the providers in a module but not
// in the root weren't being added to the root properly. In this test
// case: aws is explicitly added to root, but "test" should be added to.
diff --git a/terraform/context_plan_test.go b/terraform/context_plan_test.go
index ea45c1323665..58ae89124c27 100644
--- a/terraform/context_plan_test.go
+++ b/terraform/context_plan_test.go
@@ -626,6 +626,57 @@ func TestContext2Plan_moduleVar(t *testing.T) {
}
}
+func TestContext2Plan_moduleVarWrongType(t *testing.T) {
+ m := testModule(t, "plan-module-wrong-var-type")
+ p := testProvider("aws")
+ p.DiffFn = testDiffFn
+ ctx := testContext2(t, &ContextOpts{
+ Module: m,
+ Providers: map[string]ResourceProviderFactory{
+ "aws": testProviderFuncFixed(p),
+ },
+ })
+
+ _, err := ctx.Plan()
+ if err == nil {
+ t.Fatalf("should error")
+ }
+}
+
+func TestContext2Plan_moduleVarWrongTypeNested(t *testing.T) {
+ m := testModule(t, "plan-module-wrong-var-type-nested")
+ p := testProvider("aws")
+ p.DiffFn = testDiffFn
+ ctx := testContext2(t, &ContextOpts{
+ Module: m,
+ Providers: map[string]ResourceProviderFactory{
+ "aws": testProviderFuncFixed(p),
+ },
+ })
+
+ _, err := ctx.Plan()
+ if err == nil {
+ t.Fatalf("should error")
+ }
+}
+
+func TestContext2Plan_moduleVarWithDefaultValue(t *testing.T) {
+ m := testModule(t, "plan-module-var-with-default-value")
+ p := testProvider("null")
+ p.DiffFn = testDiffFn
+ ctx := testContext2(t, &ContextOpts{
+ Module: m,
+ Providers: map[string]ResourceProviderFactory{
+ "null": testProviderFuncFixed(p),
+ },
+ })
+
+ _, err := ctx.Plan()
+ if err != nil {
+ t.Fatalf("bad: %s", err)
+ }
+}
+
func TestContext2Plan_moduleVarComputed(t *testing.T) {
m := testModule(t, "plan-module-var-computed")
p := testProvider("aws")
@@ -2035,6 +2086,7 @@ func TestContext2Plan_varListErr(t *testing.T) {
})
_, err := ctx.Plan()
+
if err == nil {
t.Fatal("should error")
}
diff --git a/terraform/eval_variable.go b/terraform/eval_variable.go
index e6a9befbea1b..216efe5b8a2a 100644
--- a/terraform/eval_variable.go
+++ b/terraform/eval_variable.go
@@ -2,12 +2,80 @@ package terraform
import (
"fmt"
+ "strings"
"github.com/hashicorp/errwrap"
"github.com/hashicorp/terraform/config"
+ "github.com/hashicorp/terraform/config/module"
"github.com/mitchellh/mapstructure"
)
+// EvalTypeCheckVariable is an EvalNode which ensures that the variable
+// values which are assigned as inputs to a module (including the root)
+// match the types which are either declared for the variables explicitly
+// or inferred from the default values.
+//
+// In order to achieve this, three things are required:
+// - a map of the proposed variable values
+// - the configuration tree of the module in which the variable is
+// declared
+// - the path to the module (so we know which part of the tree to
+// compare the values against).
+//
+// Since the type system is currently simple, we do not make use of the
+// values, as it is only valid to pass string values. The
+// structure is in place for extension of the type system, however.
+type EvalTypeCheckVariable struct {
+ Variables map[string]string
+ ModulePath []string
+ ModuleTree *module.Tree
+}
+
+func (n *EvalTypeCheckVariable) Eval(ctx EvalContext) (interface{}, error) {
+ currentTree := n.ModuleTree
+ for _, pathComponent := range n.ModulePath[1:] {
+ currentTree = currentTree.Children()[pathComponent]
+ }
+ targetConfig := currentTree.Config()
+
+ prototypes := make(map[string]config.VariableType)
+ for _, variable := range targetConfig.Variables {
+ prototypes[variable.Name] = variable.Type()
+ }
+
+ for name, declaredType := range prototypes {
+ // This is only necessary when we _actually_ check. It is left as a reminder
+ // that at the current time we are dealing with a type system consisting only
+ // of strings and maps - where the only valid inter-module variable type is
+ // string.
+ _, ok := n.Variables[name]
+ if !ok {
+ // This means the default value should be used as no overriding value
+ // has been set. Therefore we should continue as no check is necessary.
+ continue
+ }
+
+ switch declaredType {
+ case config.VariableTypeString:
+ // This will need actual verification once we aren't dealing with
+ // a map[string]string but this is sufficient for now.
+ continue
+ default:
+ // Only display a module if we are not in the root module
+ modulePathDescription := fmt.Sprintf(" in module %s", strings.Join(n.ModulePath[1:], "."))
+ if len(n.ModulePath) == 1 {
+ modulePathDescription = ""
+ }
+ // This will need the actual type substituting when we have more than
+ // just strings and maps.
+ return nil, fmt.Errorf("variable %s%s should be type %s, got type string",
+ name, modulePathDescription, declaredType.Printable())
+ }
+ }
+
+ return nil, nil
+}
+
// EvalSetVariables is an EvalNode implementation that sets the variables
// explicitly for interpolation later.
type EvalSetVariables struct {
diff --git a/terraform/graph_builder.go b/terraform/graph_builder.go
index 17a18011c79a..abc9aca37b68 100644
--- a/terraform/graph_builder.go
+++ b/terraform/graph_builder.go
@@ -32,7 +32,7 @@ func (b *BasicGraphBuilder) Build(path []string) (*Graph, error) {
log.Printf(
"[TRACE] Graph after step %T:\n\n%s",
- step, g.String())
+ step, g.StringWithNodeTypes())
}
// Validate the graph structure
diff --git a/terraform/graph_config_node_module.go b/terraform/graph_config_node_module.go
index 08182a6b61da..ba377e94d61f 100644
--- a/terraform/graph_config_node_module.go
+++ b/terraform/graph_config_node_module.go
@@ -57,12 +57,6 @@ func (n *GraphNodeConfigModule) Expand(b GraphBuilder) (GraphNodeSubgraph, error
return nil, err
}
- // Add the parameters node to the module
- t := &ModuleInputTransformer{Variables: make(map[string]string)}
- if err := t.Transform(graph); err != nil {
- return nil, err
- }
-
{
// Add the destroy marker to the graph
t := &ModuleDestroyTransformer{}
@@ -75,7 +69,7 @@ func (n *GraphNodeConfigModule) Expand(b GraphBuilder) (GraphNodeSubgraph, error
return &graphNodeModuleExpanded{
Original: n,
Graph: graph,
- Variables: t.Variables,
+ Variables: make(map[string]string),
}, nil
}
@@ -169,11 +163,6 @@ func (n *graphNodeModuleExpanded) FlattenGraph() *Graph {
// flattening. We have to skip some nodes (graphNodeModuleSkippable)
// as well as setup the variable values.
for _, v := range graph.Vertices() {
- if sn, ok := v.(graphNodeModuleSkippable); ok && sn.FlattenSkip() {
- graph.Remove(v)
- continue
- }
-
// If this is a variable, then look it up in the raw configuration.
// If it exists in the raw configuration, set the value of it.
if vn, ok := v.(*GraphNodeConfigVariable); ok && input != nil {
@@ -204,12 +193,6 @@ func (n *graphNodeModuleExpanded) Subgraph() *Graph {
return n.Graph
}
-// This interface can be implemented to be skipped/ignored when
-// flattening the module graph.
-type graphNodeModuleSkippable interface {
- FlattenSkip() bool
-}
-
func modulePrefixStr(p []string) string {
parts := make([]string, 0, len(p)*2)
for _, p := range p[1:] {
diff --git a/terraform/graph_config_node_module_test.go b/terraform/graph_config_node_module_test.go
index 6000c20de4f8..1b5430ddfd04 100644
--- a/terraform/graph_config_node_module_test.go
+++ b/terraform/graph_config_node_module_test.go
@@ -71,8 +71,6 @@ const testGraphNodeModuleExpandStr = `
aws_instance.bar
aws_instance.foo
aws_instance.foo
- module inputs
-module inputs
plan-destroy
`
diff --git a/terraform/graph_config_node_variable.go b/terraform/graph_config_node_variable.go
index 9b3d77dbc4b1..e462070d022d 100644
--- a/terraform/graph_config_node_variable.go
+++ b/terraform/graph_config_node_variable.go
@@ -4,6 +4,7 @@ import (
"fmt"
"github.com/hashicorp/terraform/config"
+ "github.com/hashicorp/terraform/config/module"
"github.com/hashicorp/terraform/dag"
)
@@ -18,7 +19,8 @@ type GraphNodeConfigVariable struct {
Module string
Value *config.RawConfig
- depPrefix string
+ ModuleTree *module.Tree
+ ModulePath []string
}
func (n *GraphNodeConfigVariable) Name() string {
@@ -125,6 +127,12 @@ func (n *GraphNodeConfigVariable) EvalTree() EvalNode {
Variables: variables,
},
+ &EvalTypeCheckVariable{
+ Variables: variables,
+ ModulePath: n.ModulePath,
+ ModuleTree: n.ModuleTree,
+ },
+
&EvalSetVariables{
Module: &n.Module,
Variables: variables,
diff --git a/terraform/test-fixtures/apply-module-grandchild-provider-inherit/child/grandchild/main.tf b/terraform/test-fixtures/apply-module-grandchild-provider-inherit/child/grandchild/main.tf
new file mode 100644
index 000000000000..919f140bba6b
--- /dev/null
+++ b/terraform/test-fixtures/apply-module-grandchild-provider-inherit/child/grandchild/main.tf
@@ -0,0 +1 @@
+resource "aws_instance" "foo" {}
diff --git a/terraform/test-fixtures/apply-module-grandchild-provider-inherit/child/main.tf b/terraform/test-fixtures/apply-module-grandchild-provider-inherit/child/main.tf
new file mode 100644
index 000000000000..b422300ec984
--- /dev/null
+++ b/terraform/test-fixtures/apply-module-grandchild-provider-inherit/child/main.tf
@@ -0,0 +1,3 @@
+module "grandchild" {
+ source = "./grandchild"
+}
diff --git a/terraform/test-fixtures/apply-module-grandchild-provider-inherit/main.tf b/terraform/test-fixtures/apply-module-grandchild-provider-inherit/main.tf
new file mode 100644
index 000000000000..25d0993d1e40
--- /dev/null
+++ b/terraform/test-fixtures/apply-module-grandchild-provider-inherit/main.tf
@@ -0,0 +1,7 @@
+provider "aws" {
+ value = "foo"
+}
+
+module "child" {
+ source = "./child"
+}
diff --git a/terraform/test-fixtures/plan-module-var-with-default-value/inner/main.tf b/terraform/test-fixtures/plan-module-var-with-default-value/inner/main.tf
new file mode 100644
index 000000000000..8a089655a8f8
--- /dev/null
+++ b/terraform/test-fixtures/plan-module-var-with-default-value/inner/main.tf
@@ -0,0 +1,12 @@
+variable "im_a_string" {
+ type = "string"
+}
+
+variable "service_region_ami" {
+ type = "map"
+ default = {
+ us-east-1 = "ami-e4c9db8e"
+ }
+}
+
+resource "null_resource" "noop" {}
diff --git a/terraform/test-fixtures/plan-module-var-with-default-value/main.tf b/terraform/test-fixtures/plan-module-var-with-default-value/main.tf
new file mode 100644
index 000000000000..96b27418a03f
--- /dev/null
+++ b/terraform/test-fixtures/plan-module-var-with-default-value/main.tf
@@ -0,0 +1,7 @@
+resource "null_resource" "noop" {}
+
+module "test" {
+ source = "./inner"
+
+ im_a_string = "hello"
+}
diff --git a/terraform/test-fixtures/plan-module-wrong-var-type-nested/inner/main.tf b/terraform/test-fixtures/plan-module-wrong-var-type-nested/inner/main.tf
new file mode 100644
index 000000000000..88995119d7b2
--- /dev/null
+++ b/terraform/test-fixtures/plan-module-wrong-var-type-nested/inner/main.tf
@@ -0,0 +1,13 @@
+variable "inner_in" {
+ type = "map"
+ default = {
+ us-west-1 = "ami-12345"
+ us-west-2 = "ami-67890"
+ }
+}
+
+resource "null_resource" "inner_noop" {}
+
+output "inner_out" {
+ value = "${lookup(var.inner_in, "us-west-1")}"
+}
diff --git a/terraform/test-fixtures/plan-module-wrong-var-type-nested/main.tf b/terraform/test-fixtures/plan-module-wrong-var-type-nested/main.tf
new file mode 100644
index 000000000000..fe63fc5f8129
--- /dev/null
+++ b/terraform/test-fixtures/plan-module-wrong-var-type-nested/main.tf
@@ -0,0 +1,3 @@
+module "middle" {
+ source = "./middle"
+}
diff --git a/terraform/test-fixtures/plan-module-wrong-var-type-nested/middle/main.tf b/terraform/test-fixtures/plan-module-wrong-var-type-nested/middle/main.tf
new file mode 100644
index 000000000000..1e823576196f
--- /dev/null
+++ b/terraform/test-fixtures/plan-module-wrong-var-type-nested/middle/main.tf
@@ -0,0 +1,19 @@
+variable "middle_in" {
+ type = "map"
+ default = {
+ eu-west-1 = "ami-12345"
+ eu-west-2 = "ami-67890"
+ }
+}
+
+module "inner" {
+ source = "../inner"
+
+ inner_in = "hello"
+}
+
+resource "null_resource" "middle_noop" {}
+
+output "middle_out" {
+ value = "${lookup(var.middle_in, "us-west-1")}"
+}
diff --git a/terraform/test-fixtures/plan-module-wrong-var-type/inner/main.tf b/terraform/test-fixtures/plan-module-wrong-var-type/inner/main.tf
new file mode 100644
index 000000000000..8a9f380c7721
--- /dev/null
+++ b/terraform/test-fixtures/plan-module-wrong-var-type/inner/main.tf
@@ -0,0 +1,7 @@
+variable "map_in" {
+ type = "map"
+ default = {
+ us-west-1 = "ami-12345"
+ us-west-2 = "ami-67890"
+ }
+}
diff --git a/terraform/test-fixtures/plan-module-wrong-var-type/main.tf b/terraform/test-fixtures/plan-module-wrong-var-type/main.tf
new file mode 100644
index 000000000000..4fc7f8a7c3fe
--- /dev/null
+++ b/terraform/test-fixtures/plan-module-wrong-var-type/main.tf
@@ -0,0 +1,10 @@
+variable "input" {
+ type = "string"
+ default = "hello world"
+}
+
+module "test" {
+ source = "./inner"
+
+ map_in = "${var.input}"
+}
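Given this fixture, where the string variable "input" is passed to a module variable declared with type "map", the EvalTypeCheckVariable step introduced in eval_variable.go is expected to fail the plan with an error along the lines of: variable map_in in module test should be type map, got type string (wording reconstructed from the format string above; the exact rendering may differ).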
diff --git a/terraform/transform_config.go b/terraform/transform_config.go
index a14e4f4342b9..bcfa1233e3a6 100644
--- a/terraform/transform_config.go
+++ b/terraform/transform_config.go
@@ -45,7 +45,11 @@ func (t *ConfigTransformer) Transform(g *Graph) error {
// Write all the variables out
for _, v := range config.Variables {
- nodes = append(nodes, &GraphNodeConfigVariable{Variable: v})
+ nodes = append(nodes, &GraphNodeConfigVariable{
+ Variable: v,
+ ModuleTree: t.Module,
+ ModulePath: g.Path,
+ })
}
// Write all the provider configs out
diff --git a/terraform/transform_module.go b/terraform/transform_module.go
index ca6586265b5d..609873c449b5 100644
--- a/terraform/transform_module.go
+++ b/terraform/transform_module.go
@@ -2,38 +2,10 @@ package terraform
import (
"fmt"
+
"github.com/hashicorp/terraform/dag"
)
-// ModuleInputTransformer is a GraphTransformer that adds a node to the
-// graph for setting the module input variables for the remainder of the
-// graph.
-type ModuleInputTransformer struct {
- Variables map[string]string
-}
-
-func (t *ModuleInputTransformer) Transform(g *Graph) error {
- // Create the node
- n := &graphNodeModuleInput{Variables: t.Variables}
-
- // Add it to the graph
- g.Add(n)
-
- // Connect the inputs to the bottom of the graph so that it happens
- // first.
- for _, v := range g.Vertices() {
- if v == n {
- continue
- }
-
- if g.DownEdges(v).Len() == 0 {
- g.Connect(dag.BasicEdge(v, n))
- }
- }
-
- return nil
-}
-
// ModuleDestroyTransformer is a GraphTransformer that adds a node
// to the graph that will just mark the full module for destroy in
// the destroy scenario.
@@ -88,21 +60,3 @@ func (n *graphNodeModuleDestroyFlat) Name() string {
func (n *graphNodeModuleDestroyFlat) Path() []string {
return n.PathValue
}
-
-type graphNodeModuleInput struct {
- Variables map[string]string
-}
-
-func (n *graphNodeModuleInput) Name() string {
- return "module inputs"
-}
-
-// GraphNodeEvalable impl.
-func (n *graphNodeModuleInput) EvalTree() EvalNode {
- return &EvalSetVariables{Variables: n.Variables}
-}
-
-// graphNodeModuleSkippable impl.
-func (n *graphNodeModuleInput) FlattenSkip() bool {
- return true
-}
diff --git a/terraform/transform_module_test.go b/terraform/transform_module_test.go
index b857108b2192..cc3ee2f47b6d 100644
--- a/terraform/transform_module_test.go
+++ b/terraform/transform_module_test.go
@@ -1,41 +1 @@
package terraform
-
-import (
- "strings"
- "testing"
-
- "github.com/hashicorp/terraform/dag"
-)
-
-func TestModuleInputTransformer(t *testing.T) {
- var g Graph
- g.Add(1)
- g.Add(2)
- g.Add(3)
- g.Connect(dag.BasicEdge(1, 2))
- g.Connect(dag.BasicEdge(1, 3))
-
- {
- tf := &ModuleInputTransformer{}
- if err := tf.Transform(&g); err != nil {
- t.Fatalf("err: %s", err)
- }
- }
-
- actual := strings.TrimSpace(g.String())
- expected := strings.TrimSpace(testModuleInputTransformStr)
- if actual != expected {
- t.Fatalf("bad:\n\n%s", actual)
- }
-}
-
-const testModuleInputTransformStr = `
-1
- 2
- 3
-2
- module inputs
-3
- module inputs
-module inputs
-`
diff --git a/terraform/transform_provider.go b/terraform/transform_provider.go
index 8e224775d5fa..5ea79200c36d 100644
--- a/terraform/transform_provider.go
+++ b/terraform/transform_provider.go
@@ -2,6 +2,7 @@ package terraform
import (
"fmt"
+ "log"
"strings"
"github.com/hashicorp/go-multierror"
@@ -214,9 +215,11 @@ func (t *PruneProviderTransformer) Transform(g *Graph) error {
if pn, ok := v.(GraphNodeProvider); !ok || pn.ProviderName() == "" {
continue
}
-
// Does anything depend on this? If not, then prune it.
if s := g.UpEdges(v); s.Len() == 0 {
+ if nv, ok := v.(dag.NamedVertex); ok {
+ log.Printf("[DEBUG] Pruning provider with no dependencies: %s", nv.Name())
+ }
g.Remove(v)
}
}
@@ -340,7 +343,9 @@ func (n *graphNodeDisabledProviderFlat) ProviderName() string {
// GraphNodeDependable impl.
func (n *graphNodeDisabledProviderFlat) DependableName() []string {
- return []string{n.Name()}
+ return modulePrefixList(
+ n.graphNodeDisabledProvider.DependableName(),
+ modulePrefixStr(n.PathValue))
}
func (n *graphNodeDisabledProviderFlat) DependentOn() []string {
@@ -349,13 +354,8 @@ func (n *graphNodeDisabledProviderFlat) DependentOn() []string {
// If we're in a module, then depend on our parent's provider
if len(n.PathValue) > 1 {
prefix := modulePrefixStr(n.PathValue[:len(n.PathValue)-1])
- if prefix != "" {
- prefix += "."
- }
-
- result = append(result, fmt.Sprintf(
- "%s%s",
- prefix, n.graphNodeDisabledProvider.Name()))
+ result = modulePrefixList(
+ n.graphNodeDisabledProvider.DependableName(), prefix)
}
return result
@@ -474,13 +474,7 @@ func (n *graphNodeProviderFlat) DependentOn() []string {
// If we're in a module, then depend on our parent's provider
if len(n.PathValue) > 1 {
prefix := modulePrefixStr(n.PathValue[:len(n.PathValue)-1])
- if prefix != "" {
- prefix += "."
- }
-
- result = append(result, fmt.Sprintf(
- "%s%s",
- prefix, n.graphNodeProvider.Name()))
+ result = modulePrefixList(n.graphNodeProvider.DependableName(), prefix)
}
return result
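
Both `DependentOn` implementations above now defer to `modulePrefixList` instead of formatting the prefix inline. A hypothetical illustration of the intended mapping follows; `prefixNames` is an invented stand-in for the real helper, which lives elsewhere in this package:

```go
package main

import "fmt"

// Illustration only: qualify each dependable name with the module path
// prefix, so "provider.aws" under "module.child" becomes
// "module.child.provider.aws".
func prefixNames(names []string, prefix string) []string {
	if prefix == "" {
		return names
	}
	out := make([]string, len(names))
	for i, n := range names {
		out[i] = prefix + "." + n
	}
	return out
}

func main() {
	fmt.Println(prefixNames([]string{"provider.aws"}, "module.child"))
	// Output: [module.child.provider.aws]
}
```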
diff --git a/terraform/version.go b/terraform/version.go
index 274b902b4eb7..3031bd928b57 100644
--- a/terraform/version.go
+++ b/terraform/version.go
@@ -1,7 +1,7 @@
package terraform
// The main version number that is being run at the moment.
-const Version = "0.6.15"
+const Version = "0.6.16"
// A pre-release marker for the version. If this is "" (empty string)
// then it means that it is a final release. Otherwise, this is a pre-release
diff --git a/vendor/github.com/ajg/form/.travis.yml b/vendor/github.com/ajg/form/.travis.yml
index d0d191072ff1..b257361d8b2b 100644
--- a/vendor/github.com/ajg/form/.travis.yml
+++ b/vendor/github.com/ajg/form/.travis.yml
@@ -8,9 +8,9 @@ go:
- tip
- 1.3
# - 1.2
- # Note: 1.2 is disabled because it seems to require that cover
+ # Note: 1.2 is disabled because it seems to require that cover
# be installed from code.google.com/p/go.tools/cmd/cover
-
+
before_install:
- go get -v golang.org/x/tools/cmd/cover
- go get -v golang.org/x/tools/cmd/vet
diff --git a/vendor/github.com/ajg/form/README.md b/vendor/github.com/ajg/form/README.md
index 00c3b2ef12d6..de3ab635c3b9 100644
--- a/vendor/github.com/ajg/form/README.md
+++ b/vendor/github.com/ajg/form/README.md
@@ -34,13 +34,13 @@ Given a type like the following...
```go
type User struct {
- Name string `form:"name"`
- Email string `form:"email"`
- Joined time.Time `form:"joined,omitempty"`
- Posts []int `form:"posts"`
- Preferences map[string]string `form:"prefs"`
- Avatar []byte `form:"avatar"`
- PasswordHash int64 `form:"-"`
+ Name string `form:"name"`
+ Email string `form:"email"`
+ Joined time.Time `form:"joined,omitempty"`
+ Posts []int `form:"posts"`
+ Preferences map[string]string `form:"prefs"`
+ Avatar []byte `form:"avatar"`
+ PasswordHash int64 `form:"-"`
}
```
@@ -49,9 +49,9 @@ type User struct {
```go
func PostUser(url string, u User) error {
- var c http.Client
- _, err := c.PostForm(url, form.EncodeToValues(u))
- return err
+ var c http.Client
+ _, err := c.PostForm(url, form.EncodeToValues(u))
+ return err
}
```
@@ -60,15 +60,15 @@ func PostUser(url string, u User) error {
```go
func Handler(w http.ResponseWriter, r *http.Request) {
- var u User
+ var u User
- d := form.NewDecoder(r.Body)
- if err := d.Decode(&u); err != nil {
- http.Error(w, "Form could not be decoded", http.StatusBadRequest)
- return
- }
+ d := form.NewDecoder(r.Body)
+ if err := d.Decode(&u); err != nil {
+ http.Error(w, "Form could not be decoded", http.StatusBadRequest)
+ return
+ }
- fmt.Fprintf(w, "Decoded: %#v", u)
+ fmt.Fprintf(w, "Decoded: %#v", u)
}
```
@@ -149,20 +149,20 @@ import "encoding"
type Binary []byte
var (
- _ encoding.TextMarshaler = &Binary{}
- _ encoding.TextUnmarshaler = &Binary{}
+ _ encoding.TextMarshaler = &Binary{}
+ _ encoding.TextUnmarshaler = &Binary{}
)
func (b Binary) MarshalText() ([]byte, error) {
- return []byte(base64.URLEncoding.EncodeToString([]byte(b))), nil
+ return []byte(base64.URLEncoding.EncodeToString([]byte(b))), nil
}
func (b *Binary) UnmarshalText(text []byte) error {
- bs, err := base64.URLEncoding.DecodeString(string(text))
- if err == nil {
- *b = Binary(bs)
- }
- return err
+ bs, err := base64.URLEncoding.DecodeString(string(text))
+ if err == nil {
+ *b = Binary(bs)
+ }
+ return err
}
```
diff --git a/vendor/github.com/ajg/form/decode.go b/vendor/github.com/ajg/form/decode.go
index d9b62355cb22..d03b2082c765 100644
--- a/vendor/github.com/ajg/form/decode.go
+++ b/vendor/github.com/ajg/form/decode.go
@@ -5,326 +5,326 @@
package form
import (
- "fmt"
- "io"
- "io/ioutil"
- "net/url"
- "reflect"
- "strconv"
- "time"
+ "fmt"
+ "io"
+ "io/ioutil"
+ "net/url"
+ "reflect"
+ "strconv"
+ "time"
)
// NewDecoder returns a new form decoder.
func NewDecoder(r io.Reader) *decoder {
- return &decoder{r}
+ return &decoder{r}
}
// decoder decodes data from a form (application/x-www-form-urlencoded).
type decoder struct {
- r io.Reader
+ r io.Reader
}
// Decode reads in and decodes form-encoded data into dst.
func (d decoder) Decode(dst interface{}) error {
- bs, err := ioutil.ReadAll(d.r)
- if err != nil {
- return err
- }
- vs, err := url.ParseQuery(string(bs))
- if err != nil {
- return err
- }
- v := reflect.ValueOf(dst)
- return decodeNode(v, parseValues(vs, canIndexOrdinally(v)))
+ bs, err := ioutil.ReadAll(d.r)
+ if err != nil {
+ return err
+ }
+ vs, err := url.ParseQuery(string(bs))
+ if err != nil {
+ return err
+ }
+ v := reflect.ValueOf(dst)
+ return decodeNode(v, parseValues(vs, canIndexOrdinally(v)))
}
// DecodeString decodes src into dst.
func DecodeString(dst interface{}, src string) error {
- vs, err := url.ParseQuery(src)
- if err != nil {
- return err
- }
- v := reflect.ValueOf(dst)
- return decodeNode(v, parseValues(vs, canIndexOrdinally(v)))
+ vs, err := url.ParseQuery(src)
+ if err != nil {
+ return err
+ }
+ v := reflect.ValueOf(dst)
+ return decodeNode(v, parseValues(vs, canIndexOrdinally(v)))
}
// DecodeValues decodes vs into dst.
func DecodeValues(dst interface{}, vs url.Values) error {
- v := reflect.ValueOf(dst)
- return decodeNode(v, parseValues(vs, canIndexOrdinally(v)))
+ v := reflect.ValueOf(dst)
+ return decodeNode(v, parseValues(vs, canIndexOrdinally(v)))
}
func decodeNode(v reflect.Value, n node) (err error) {
- defer func() {
- if e := recover(); e != nil {
- err = fmt.Errorf("%v", e)
- }
- }()
+ defer func() {
+ if e := recover(); e != nil {
+ err = fmt.Errorf("%v", e)
+ }
+ }()
- if v.Kind() == reflect.Slice {
- return fmt.Errorf("could not decode directly into slice; use pointer to slice")
- }
- decodeValue(v, n)
- return nil
+ if v.Kind() == reflect.Slice {
+ return fmt.Errorf("could not decode directly into slice; use pointer to slice")
+ }
+ decodeValue(v, n)
+ return nil
}
func decodeValue(v reflect.Value, x interface{}) {
- t := v.Type()
- k := v.Kind()
+ t := v.Type()
+ k := v.Kind()
- if k == reflect.Ptr && v.IsNil() {
- v.Set(reflect.New(t.Elem()))
- }
+ if k == reflect.Ptr && v.IsNil() {
+ v.Set(reflect.New(t.Elem()))
+ }
- if unmarshalValue(v, x) {
- return
- }
+ if unmarshalValue(v, x) {
+ return
+ }
- empty := isEmpty(x)
+ empty := isEmpty(x)
- switch k {
- case reflect.Ptr:
- decodeValue(v.Elem(), x)
- return
- case reflect.Interface:
- if !v.IsNil() {
- decodeValue(v.Elem(), x)
- return
+ switch k {
+ case reflect.Ptr:
+ decodeValue(v.Elem(), x)
+ return
+ case reflect.Interface:
+ if !v.IsNil() {
+ decodeValue(v.Elem(), x)
+ return
- } else if empty {
- return // Allow nil interfaces only if empty.
- } else {
- panic("form: cannot decode non-empty value into into nil interface")
- }
- }
+ } else if empty {
+ return // Allow nil interfaces only if empty.
+ } else {
+ panic("form: cannot decode non-empty value into into nil interface")
+ }
+ }
- if empty {
- v.Set(reflect.Zero(t)) // Treat the empty string as the zero value.
- return
- }
+ if empty {
+ v.Set(reflect.Zero(t)) // Treat the empty string as the zero value.
+ return
+ }
- switch k {
- case reflect.Struct:
- if t.ConvertibleTo(timeType) {
- decodeTime(v, x)
- } else if t.ConvertibleTo(urlType) {
- decodeURL(v, x)
- } else {
- decodeStruct(v, x)
- }
- case reflect.Slice:
- decodeSlice(v, x)
- case reflect.Array:
- decodeArray(v, x)
- case reflect.Map:
- decodeMap(v, x)
- case reflect.Invalid, reflect.Uintptr, reflect.UnsafePointer, reflect.Chan, reflect.Func:
- panic(t.String() + " has unsupported kind " + k.String())
- default:
- decodeBasic(v, x)
- }
+ switch k {
+ case reflect.Struct:
+ if t.ConvertibleTo(timeType) {
+ decodeTime(v, x)
+ } else if t.ConvertibleTo(urlType) {
+ decodeURL(v, x)
+ } else {
+ decodeStruct(v, x)
+ }
+ case reflect.Slice:
+ decodeSlice(v, x)
+ case reflect.Array:
+ decodeArray(v, x)
+ case reflect.Map:
+ decodeMap(v, x)
+ case reflect.Invalid, reflect.Uintptr, reflect.UnsafePointer, reflect.Chan, reflect.Func:
+ panic(t.String() + " has unsupported kind " + k.String())
+ default:
+ decodeBasic(v, x)
+ }
}
func decodeStruct(v reflect.Value, x interface{}) {
- t := v.Type()
- for k, c := range getNode(x) {
- if f, ok := findField(v, k); !ok && k == "" {
- panic(getString(x) + " cannot be decoded as " + t.String())
- } else if !ok {
- panic(k + " doesn't exist in " + t.String())
- } else if !f.CanSet() {
- panic(k + " cannot be set in " + t.String())
- } else {
- decodeValue(f, c)
- }
- }
+ t := v.Type()
+ for k, c := range getNode(x) {
+ if f, ok := findField(v, k); !ok && k == "" {
+ panic(getString(x) + " cannot be decoded as " + t.String())
+ } else if !ok {
+ panic(k + " doesn't exist in " + t.String())
+ } else if !f.CanSet() {
+ panic(k + " cannot be set in " + t.String())
+ } else {
+ decodeValue(f, c)
+ }
+ }
}
func decodeMap(v reflect.Value, x interface{}) {
- t := v.Type()
- if v.IsNil() {
- v.Set(reflect.MakeMap(t))
- }
- for k, c := range getNode(x) {
- i := reflect.New(t.Key()).Elem()
- decodeValue(i, k)
+ t := v.Type()
+ if v.IsNil() {
+ v.Set(reflect.MakeMap(t))
+ }
+ for k, c := range getNode(x) {
+ i := reflect.New(t.Key()).Elem()
+ decodeValue(i, k)
- w := v.MapIndex(i)
- if w.IsValid() { // We have an actual element value to decode into.
- if w.Kind() == reflect.Interface {
- w = w.Elem()
- }
- w = reflect.New(w.Type()).Elem()
- } else if t.Elem().Kind() != reflect.Interface { // The map's element type is concrete.
- w = reflect.New(t.Elem()).Elem()
- } else {
- // The best we can do here is to decode as either a string (for scalars) or a map[string]interface {} (for the rest).
- // We could try to guess the type based on the string (e.g. true/false => bool) but that'll get ugly fast,
- // especially if we have to guess the kind (slice vs. array vs. map) and index type (e.g. string, int, etc.)
- switch c.(type) {
- case node:
- w = reflect.MakeMap(stringMapType)
- case string:
- w = reflect.New(stringType).Elem()
- default:
- panic("value is neither node nor string")
- }
- }
+ w := v.MapIndex(i)
+ if w.IsValid() { // We have an actual element value to decode into.
+ if w.Kind() == reflect.Interface {
+ w = w.Elem()
+ }
+ w = reflect.New(w.Type()).Elem()
+ } else if t.Elem().Kind() != reflect.Interface { // The map's element type is concrete.
+ w = reflect.New(t.Elem()).Elem()
+ } else {
+ // The best we can do here is to decode as either a string (for scalars) or a map[string]interface {} (for the rest).
+ // We could try to guess the type based on the string (e.g. true/false => bool) but that'll get ugly fast,
+ // especially if we have to guess the kind (slice vs. array vs. map) and index type (e.g. string, int, etc.)
+ switch c.(type) {
+ case node:
+ w = reflect.MakeMap(stringMapType)
+ case string:
+ w = reflect.New(stringType).Elem()
+ default:
+ panic("value is neither node nor string")
+ }
+ }
- decodeValue(w, c)
- v.SetMapIndex(i, w)
- }
+ decodeValue(w, c)
+ v.SetMapIndex(i, w)
+ }
}
func decodeArray(v reflect.Value, x interface{}) {
- t := v.Type()
- for k, c := range getNode(x) {
- i, err := strconv.Atoi(k)
- if err != nil {
- panic(k + " is not a valid index for type " + t.String())
- }
- if l := v.Len(); i >= l {
- panic("index is above array size")
- }
- decodeValue(v.Index(i), c)
- }
+ t := v.Type()
+ for k, c := range getNode(x) {
+ i, err := strconv.Atoi(k)
+ if err != nil {
+ panic(k + " is not a valid index for type " + t.String())
+ }
+ if l := v.Len(); i >= l {
+ panic("index is above array size")
+ }
+ decodeValue(v.Index(i), c)
+ }
}
func decodeSlice(v reflect.Value, x interface{}) {
- t := v.Type()
- if t.Elem().Kind() == reflect.Uint8 {
- // Allow, but don't require, byte slices to be encoded as a single string.
- if s, ok := x.(string); ok {
- v.SetBytes([]byte(s))
- return
- }
- }
+ t := v.Type()
+ if t.Elem().Kind() == reflect.Uint8 {
+ // Allow, but don't require, byte slices to be encoded as a single string.
+ if s, ok := x.(string); ok {
+ v.SetBytes([]byte(s))
+ return
+ }
+ }
- // NOTE: Implicit indexing is currently done at the parseValues level,
- // so if if an implicitKey reaches here it will always replace the last.
- implicit := 0
- for k, c := range getNode(x) {
- var i int
- if k == implicitKey {
- i = implicit
- implicit++
- } else {
- explicit, err := strconv.Atoi(k)
- if err != nil {
- panic(k + " is not a valid index for type " + t.String())
- }
- i = explicit
- implicit = explicit + 1
- }
- // "Extend" the slice if it's too short.
- if l := v.Len(); i >= l {
- delta := i - l + 1
- v.Set(reflect.AppendSlice(v, reflect.MakeSlice(t, delta, delta)))
- }
- decodeValue(v.Index(i), c)
- }
+ // NOTE: Implicit indexing is currently done at the parseValues level,
+ // so if if an implicitKey reaches here it will always replace the last.
+ implicit := 0
+ for k, c := range getNode(x) {
+ var i int
+ if k == implicitKey {
+ i = implicit
+ implicit++
+ } else {
+ explicit, err := strconv.Atoi(k)
+ if err != nil {
+ panic(k + " is not a valid index for type " + t.String())
+ }
+ i = explicit
+ implicit = explicit + 1
+ }
+ // "Extend" the slice if it's too short.
+ if l := v.Len(); i >= l {
+ delta := i - l + 1
+ v.Set(reflect.AppendSlice(v, reflect.MakeSlice(t, delta, delta)))
+ }
+ decodeValue(v.Index(i), c)
+ }
}
func decodeBasic(v reflect.Value, x interface{}) {
- t := v.Type()
- switch k, s := t.Kind(), getString(x); k {
- case reflect.Bool:
- if b, e := strconv.ParseBool(s); e == nil {
- v.SetBool(b)
- } else {
- panic("could not parse bool from " + strconv.Quote(s))
- }
- case reflect.Int,
- reflect.Int8,
- reflect.Int16,
- reflect.Int32,
- reflect.Int64:
- if i, e := strconv.ParseInt(s, 10, 64); e == nil {
- v.SetInt(i)
- } else {
- panic("could not parse int from " + strconv.Quote(s))
- }
- case reflect.Uint,
- reflect.Uint8,
- reflect.Uint16,
- reflect.Uint32,
- reflect.Uint64:
- if u, e := strconv.ParseUint(s, 10, 64); e == nil {
- v.SetUint(u)
- } else {
- panic("could not parse uint from " + strconv.Quote(s))
- }
- case reflect.Float32,
- reflect.Float64:
- if f, e := strconv.ParseFloat(s, 64); e == nil {
- v.SetFloat(f)
- } else {
- panic("could not parse float from " + strconv.Quote(s))
- }
- case reflect.Complex64,
- reflect.Complex128:
- var c complex128
- if n, err := fmt.Sscanf(s, "%g", &c); n == 1 && err == nil {
- v.SetComplex(c)
- } else {
- panic("could not parse complex from " + strconv.Quote(s))
- }
- case reflect.String:
- v.SetString(s)
- default:
- panic(t.String() + " has unsupported kind " + k.String())
- }
+ t := v.Type()
+ switch k, s := t.Kind(), getString(x); k {
+ case reflect.Bool:
+ if b, e := strconv.ParseBool(s); e == nil {
+ v.SetBool(b)
+ } else {
+ panic("could not parse bool from " + strconv.Quote(s))
+ }
+ case reflect.Int,
+ reflect.Int8,
+ reflect.Int16,
+ reflect.Int32,
+ reflect.Int64:
+ if i, e := strconv.ParseInt(s, 10, 64); e == nil {
+ v.SetInt(i)
+ } else {
+ panic("could not parse int from " + strconv.Quote(s))
+ }
+ case reflect.Uint,
+ reflect.Uint8,
+ reflect.Uint16,
+ reflect.Uint32,
+ reflect.Uint64:
+ if u, e := strconv.ParseUint(s, 10, 64); e == nil {
+ v.SetUint(u)
+ } else {
+ panic("could not parse uint from " + strconv.Quote(s))
+ }
+ case reflect.Float32,
+ reflect.Float64:
+ if f, e := strconv.ParseFloat(s, 64); e == nil {
+ v.SetFloat(f)
+ } else {
+ panic("could not parse float from " + strconv.Quote(s))
+ }
+ case reflect.Complex64,
+ reflect.Complex128:
+ var c complex128
+ if n, err := fmt.Sscanf(s, "%g", &c); n == 1 && err == nil {
+ v.SetComplex(c)
+ } else {
+ panic("could not parse complex from " + strconv.Quote(s))
+ }
+ case reflect.String:
+ v.SetString(s)
+ default:
+ panic(t.String() + " has unsupported kind " + k.String())
+ }
}
func decodeTime(v reflect.Value, x interface{}) {
- t := v.Type()
- s := getString(x)
- // TODO: Find a more efficient way to do this.
- for _, f := range allowedTimeFormats {
- if p, err := time.Parse(f, s); err == nil {
- v.Set(reflect.ValueOf(p).Convert(v.Type()))
- return
- }
- }
- panic("cannot decode string `" + s + "` as " + t.String())
+ t := v.Type()
+ s := getString(x)
+ // TODO: Find a more efficient way to do this.
+ for _, f := range allowedTimeFormats {
+ if p, err := time.Parse(f, s); err == nil {
+ v.Set(reflect.ValueOf(p).Convert(v.Type()))
+ return
+ }
+ }
+ panic("cannot decode string `" + s + "` as " + t.String())
}
func decodeURL(v reflect.Value, x interface{}) {
- t := v.Type()
- s := getString(x)
- if u, err := url.Parse(s); err == nil {
- v.Set(reflect.ValueOf(*u).Convert(v.Type()))
- return
- }
- panic("cannot decode string `" + s + "` as " + t.String())
+ t := v.Type()
+ s := getString(x)
+ if u, err := url.Parse(s); err == nil {
+ v.Set(reflect.ValueOf(*u).Convert(v.Type()))
+ return
+ }
+ panic("cannot decode string `" + s + "` as " + t.String())
}
var allowedTimeFormats = []string{
- "2006-01-02T15:04:05.999999999Z07:00",
- "2006-01-02T15:04:05.999999999Z07",
- "2006-01-02T15:04:05.999999999Z",
- "2006-01-02T15:04:05.999999999",
- "2006-01-02T15:04:05Z07:00",
- "2006-01-02T15:04:05Z07",
- "2006-01-02T15:04:05Z",
- "2006-01-02T15:04:05",
- "2006-01-02T15:04Z",
- "2006-01-02T15:04",
- "2006-01-02T15Z",
- "2006-01-02T15",
- "2006-01-02",
- "2006-01",
- "2006",
- "15:04:05.999999999Z07:00",
- "15:04:05.999999999Z07",
- "15:04:05.999999999Z",
- "15:04:05.999999999",
- "15:04:05Z07:00",
- "15:04:05Z07",
- "15:04:05Z",
- "15:04:05",
- "15:04Z",
- "15:04",
- "15Z",
- "15",
+ "2006-01-02T15:04:05.999999999Z07:00",
+ "2006-01-02T15:04:05.999999999Z07",
+ "2006-01-02T15:04:05.999999999Z",
+ "2006-01-02T15:04:05.999999999",
+ "2006-01-02T15:04:05Z07:00",
+ "2006-01-02T15:04:05Z07",
+ "2006-01-02T15:04:05Z",
+ "2006-01-02T15:04:05",
+ "2006-01-02T15:04Z",
+ "2006-01-02T15:04",
+ "2006-01-02T15Z",
+ "2006-01-02T15",
+ "2006-01-02",
+ "2006-01",
+ "2006",
+ "15:04:05.999999999Z07:00",
+ "15:04:05.999999999Z07",
+ "15:04:05.999999999Z",
+ "15:04:05.999999999",
+ "15:04:05Z07:00",
+ "15:04:05Z07",
+ "15:04:05Z",
+ "15:04:05",
+ "15:04Z",
+ "15:04",
+ "15Z",
+ "15",
}
diff --git a/vendor/github.com/ajg/form/encode.go b/vendor/github.com/ajg/form/encode.go
index f0fcf9457ab4..4c6f6c869d4e 100644
--- a/vendor/github.com/ajg/form/encode.go
+++ b/vendor/github.com/ajg/form/encode.go
@@ -5,347 +5,347 @@
package form
import (
- "encoding"
- "errors"
- "fmt"
- "io"
- "net/url"
- "reflect"
- "strconv"
- "strings"
- "time"
+ "encoding"
+ "errors"
+ "fmt"
+ "io"
+ "net/url"
+ "reflect"
+ "strconv"
+ "strings"
+ "time"
)
// NewEncoder returns a new form encoder.
func NewEncoder(w io.Writer) *encoder {
- return &encoder{w}
+ return &encoder{w}
}
// encoder provides a way to encode to a Writer.
type encoder struct {
- w io.Writer
+ w io.Writer
}
// Encode encodes dst as form and writes it out using the encoder's Writer.
func (e encoder) Encode(dst interface{}) error {
- v := reflect.ValueOf(dst)
- n, err := encodeToNode(v)
- if err != nil {
- return err
- }
- s := n.Values().Encode()
- l, err := io.WriteString(e.w, s)
- switch {
- case err != nil:
- return err
- case l != len(s):
- return errors.New("could not write data completely")
- }
- return nil
+ v := reflect.ValueOf(dst)
+ n, err := encodeToNode(v)
+ if err != nil {
+ return err
+ }
+ s := n.Values().Encode()
+ l, err := io.WriteString(e.w, s)
+ switch {
+ case err != nil:
+ return err
+ case l != len(s):
+ return errors.New("could not write data completely")
+ }
+ return nil
}
// EncodeToString encodes dst as a form and returns it as a string.
func EncodeToString(dst interface{}) (string, error) {
- v := reflect.ValueOf(dst)
- n, err := encodeToNode(v)
- if err != nil {
- return "", err
- }
- return n.Values().Encode(), nil
+ v := reflect.ValueOf(dst)
+ n, err := encodeToNode(v)
+ if err != nil {
+ return "", err
+ }
+ return n.Values().Encode(), nil
}
// EncodeToValues encodes dst as a form and returns it as Values.
func EncodeToValues(dst interface{}) (url.Values, error) {
- v := reflect.ValueOf(dst)
- n, err := encodeToNode(v)
- if err != nil {
- return nil, err
- }
- return n.Values(), nil
+ v := reflect.ValueOf(dst)
+ n, err := encodeToNode(v)
+ if err != nil {
+ return nil, err
+ }
+ return n.Values(), nil
}
func encodeToNode(v reflect.Value) (n node, err error) {
- defer func() {
- if e := recover(); e != nil {
- err = fmt.Errorf("%v", e)
- }
- }()
- return getNode(encodeValue(v)), nil
+ defer func() {
+ if e := recover(); e != nil {
+ err = fmt.Errorf("%v", e)
+ }
+ }()
+ return getNode(encodeValue(v)), nil
}
func encodeValue(v reflect.Value) interface{} {
- t := v.Type()
- k := v.Kind()
-
- if s, ok := marshalValue(v); ok {
- return s
- } else if isEmptyValue(v) {
- return "" // Treat the zero value as the empty string.
- }
-
- switch k {
- case reflect.Ptr, reflect.Interface:
- return encodeValue(v.Elem())
- case reflect.Struct:
- if t.ConvertibleTo(timeType) {
- return encodeTime(v)
- } else if t.ConvertibleTo(urlType) {
- return encodeURL(v)
- }
- return encodeStruct(v)
- case reflect.Slice:
- return encodeSlice(v)
- case reflect.Array:
- return encodeArray(v)
- case reflect.Map:
- return encodeMap(v)
- case reflect.Invalid, reflect.Uintptr, reflect.UnsafePointer, reflect.Chan, reflect.Func:
- panic(t.String() + " has unsupported kind " + t.Kind().String())
- default:
- return encodeBasic(v)
- }
+ t := v.Type()
+ k := v.Kind()
+
+ if s, ok := marshalValue(v); ok {
+ return s
+ } else if isEmptyValue(v) {
+ return "" // Treat the zero value as the empty string.
+ }
+
+ switch k {
+ case reflect.Ptr, reflect.Interface:
+ return encodeValue(v.Elem())
+ case reflect.Struct:
+ if t.ConvertibleTo(timeType) {
+ return encodeTime(v)
+ } else if t.ConvertibleTo(urlType) {
+ return encodeURL(v)
+ }
+ return encodeStruct(v)
+ case reflect.Slice:
+ return encodeSlice(v)
+ case reflect.Array:
+ return encodeArray(v)
+ case reflect.Map:
+ return encodeMap(v)
+ case reflect.Invalid, reflect.Uintptr, reflect.UnsafePointer, reflect.Chan, reflect.Func:
+ panic(t.String() + " has unsupported kind " + t.Kind().String())
+ default:
+ return encodeBasic(v)
+ }
}
func encodeStruct(v reflect.Value) interface{} {
- t := v.Type()
- n := node{}
- for i := 0; i < t.NumField(); i++ {
- f := t.Field(i)
- k, oe := fieldInfo(f)
-
- if k == "-" {
- continue
- } else if fv := v.Field(i); oe && isEmptyValue(fv) {
- delete(n, k)
- } else {
- n[k] = encodeValue(fv)
- }
- }
- return n
+ t := v.Type()
+ n := node{}
+ for i := 0; i < t.NumField(); i++ {
+ f := t.Field(i)
+ k, oe := fieldInfo(f)
+
+ if k == "-" {
+ continue
+ } else if fv := v.Field(i); oe && isEmptyValue(fv) {
+ delete(n, k)
+ } else {
+ n[k] = encodeValue(fv)
+ }
+ }
+ return n
}
func encodeMap(v reflect.Value) interface{} {
- n := node{}
- for _, i := range v.MapKeys() {
- k := getString(encodeValue(i))
- n[k] = encodeValue(v.MapIndex(i))
- }
- return n
+ n := node{}
+ for _, i := range v.MapKeys() {
+ k := getString(encodeValue(i))
+ n[k] = encodeValue(v.MapIndex(i))
+ }
+ return n
}
func encodeArray(v reflect.Value) interface{} {
- n := node{}
- for i := 0; i < v.Len(); i++ {
- n[strconv.Itoa(i)] = encodeValue(v.Index(i))
- }
- return n
+ n := node{}
+ for i := 0; i < v.Len(); i++ {
+ n[strconv.Itoa(i)] = encodeValue(v.Index(i))
+ }
+ return n
}
func encodeSlice(v reflect.Value) interface{} {
- t := v.Type()
- if t.Elem().Kind() == reflect.Uint8 {
- return string(v.Bytes()) // Encode byte slices as a single string by default.
- }
- n := node{}
- for i := 0; i < v.Len(); i++ {
- n[strconv.Itoa(i)] = encodeValue(v.Index(i))
- }
- return n
+ t := v.Type()
+ if t.Elem().Kind() == reflect.Uint8 {
+ return string(v.Bytes()) // Encode byte slices as a single string by default.
+ }
+ n := node{}
+ for i := 0; i < v.Len(); i++ {
+ n[strconv.Itoa(i)] = encodeValue(v.Index(i))
+ }
+ return n
}
func encodeTime(v reflect.Value) string {
- t := v.Convert(timeType).Interface().(time.Time)
- if t.Year() == 0 && (t.Month() == 0 || t.Month() == 1) && (t.Day() == 0 || t.Day() == 1) {
- return t.Format("15:04:05.999999999Z07:00")
- } else if t.Hour() == 0 && t.Minute() == 0 && t.Second() == 0 && t.Nanosecond() == 0 {
- return t.Format("2006-01-02")
- }
- return t.Format("2006-01-02T15:04:05.999999999Z07:00")
+ t := v.Convert(timeType).Interface().(time.Time)
+ if t.Year() == 0 && (t.Month() == 0 || t.Month() == 1) && (t.Day() == 0 || t.Day() == 1) {
+ return t.Format("15:04:05.999999999Z07:00")
+ } else if t.Hour() == 0 && t.Minute() == 0 && t.Second() == 0 && t.Nanosecond() == 0 {
+ return t.Format("2006-01-02")
+ }
+ return t.Format("2006-01-02T15:04:05.999999999Z07:00")
}
func encodeURL(v reflect.Value) string {
- u := v.Convert(urlType).Interface().(url.URL)
- return u.String()
+ u := v.Convert(urlType).Interface().(url.URL)
+ return u.String()
}
func encodeBasic(v reflect.Value) string {
- t := v.Type()
- switch k := t.Kind(); k {
- case reflect.Bool:
- return strconv.FormatBool(v.Bool())
- case reflect.Int,
- reflect.Int8,
- reflect.Int16,
- reflect.Int32,
- reflect.Int64:
- return strconv.FormatInt(v.Int(), 10)
- case reflect.Uint,
- reflect.Uint8,
- reflect.Uint16,
- reflect.Uint32,
- reflect.Uint64:
- return strconv.FormatUint(v.Uint(), 10)
- case reflect.Float32:
- return strconv.FormatFloat(v.Float(), 'g', -1, 32)
- case reflect.Float64:
- return strconv.FormatFloat(v.Float(), 'g', -1, 64)
- case reflect.Complex64, reflect.Complex128:
- s := fmt.Sprintf("%g", v.Complex())
- return strings.TrimSuffix(strings.TrimPrefix(s, "("), ")")
- case reflect.String:
- return v.String()
- }
- panic(t.String() + " has unsupported kind " + t.Kind().String())
+ t := v.Type()
+ switch k := t.Kind(); k {
+ case reflect.Bool:
+ return strconv.FormatBool(v.Bool())
+ case reflect.Int,
+ reflect.Int8,
+ reflect.Int16,
+ reflect.Int32,
+ reflect.Int64:
+ return strconv.FormatInt(v.Int(), 10)
+ case reflect.Uint,
+ reflect.Uint8,
+ reflect.Uint16,
+ reflect.Uint32,
+ reflect.Uint64:
+ return strconv.FormatUint(v.Uint(), 10)
+ case reflect.Float32:
+ return strconv.FormatFloat(v.Float(), 'g', -1, 32)
+ case reflect.Float64:
+ return strconv.FormatFloat(v.Float(), 'g', -1, 64)
+ case reflect.Complex64, reflect.Complex128:
+ s := fmt.Sprintf("%g", v.Complex())
+ return strings.TrimSuffix(strings.TrimPrefix(s, "("), ")")
+ case reflect.String:
+ return v.String()
+ }
+ panic(t.String() + " has unsupported kind " + t.Kind().String())
}
func isEmptyValue(v reflect.Value) bool {
- switch t := v.Type(); v.Kind() {
- case reflect.Array, reflect.Map, reflect.Slice, reflect.String:
- return v.Len() == 0
- case reflect.Bool:
- return !v.Bool()
- case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
- return v.Int() == 0
- case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
- return v.Uint() == 0
- case reflect.Float32, reflect.Float64:
- return v.Float() == 0
- case reflect.Complex64, reflect.Complex128:
- return v.Complex() == 0
- case reflect.Interface, reflect.Ptr:
- return v.IsNil()
- case reflect.Struct:
- if t.ConvertibleTo(timeType) {
- return v.Convert(timeType).Interface().(time.Time).IsZero()
- }
- return reflect.DeepEqual(v, reflect.Zero(t))
- }
- return false
+ switch t := v.Type(); v.Kind() {
+ case reflect.Array, reflect.Map, reflect.Slice, reflect.String:
+ return v.Len() == 0
+ case reflect.Bool:
+ return !v.Bool()
+ case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+ return v.Int() == 0
+ case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
+ return v.Uint() == 0
+ case reflect.Float32, reflect.Float64:
+ return v.Float() == 0
+ case reflect.Complex64, reflect.Complex128:
+ return v.Complex() == 0
+ case reflect.Interface, reflect.Ptr:
+ return v.IsNil()
+ case reflect.Struct:
+ if t.ConvertibleTo(timeType) {
+ return v.Convert(timeType).Interface().(time.Time).IsZero()
+ }
+ return reflect.DeepEqual(v, reflect.Zero(t))
+ }
+ return false
}
// canIndexOrdinally returns whether a value contains an ordered sequence of elements.
func canIndexOrdinally(v reflect.Value) bool {
- if !v.IsValid() {
- return false
- }
- switch t := v.Type(); t.Kind() {
- case reflect.Ptr, reflect.Interface:
- return canIndexOrdinally(v.Elem())
- case reflect.Slice, reflect.Array:
- return true
- }
- return false
+ if !v.IsValid() {
+ return false
+ }
+ switch t := v.Type(); t.Kind() {
+ case reflect.Ptr, reflect.Interface:
+ return canIndexOrdinally(v.Elem())
+ case reflect.Slice, reflect.Array:
+ return true
+ }
+ return false
}
func fieldInfo(f reflect.StructField) (k string, oe bool) {
- if f.PkgPath != "" { // Skip private fields.
- return omittedKey, oe
- }
-
- k = f.Name
- tag := f.Tag.Get("form")
- if tag == "" {
- return k, oe
- }
-
- ps := strings.SplitN(tag, ",", 2)
- if ps[0] != "" {
- k = ps[0]
- }
- if len(ps) == 2 {
- oe = ps[1] == "omitempty"
- }
- return k, oe
+ if f.PkgPath != "" { // Skip private fields.
+ return omittedKey, oe
+ }
+
+ k = f.Name
+ tag := f.Tag.Get("form")
+ if tag == "" {
+ return k, oe
+ }
+
+ ps := strings.SplitN(tag, ",", 2)
+ if ps[0] != "" {
+ k = ps[0]
+ }
+ if len(ps) == 2 {
+ oe = ps[1] == "omitempty"
+ }
+ return k, oe
}
func findField(v reflect.Value, n string) (reflect.Value, bool) {
- t := v.Type()
- l := v.NumField()
- // First try named fields.
- for i := 0; i < l; i++ {
- f := t.Field(i)
- k, _ := fieldInfo(f)
- if k == omittedKey {
- continue
- } else if n == k {
- return v.Field(i), true
- }
- }
-
- // Then try anonymous (embedded) fields.
- for i := 0; i < l; i++ {
- f := t.Field(i)
- k, _ := fieldInfo(f)
- if k == omittedKey || !f.Anonymous { // || k != "" ?
- continue
- }
- fv := v.Field(i)
- fk := fv.Kind()
- for fk == reflect.Ptr || fk == reflect.Interface {
- fv = fv.Elem()
- fk = fv.Kind()
- }
-
- if fk != reflect.Struct {
- continue
- }
- if ev, ok := findField(fv, n); ok {
- return ev, true
- }
- }
-
- return reflect.Value{}, false
+ t := v.Type()
+ l := v.NumField()
+ // First try named fields.
+ for i := 0; i < l; i++ {
+ f := t.Field(i)
+ k, _ := fieldInfo(f)
+ if k == omittedKey {
+ continue
+ } else if n == k {
+ return v.Field(i), true
+ }
+ }
+
+ // Then try anonymous (embedded) fields.
+ for i := 0; i < l; i++ {
+ f := t.Field(i)
+ k, _ := fieldInfo(f)
+ if k == omittedKey || !f.Anonymous { // || k != "" ?
+ continue
+ }
+ fv := v.Field(i)
+ fk := fv.Kind()
+ for fk == reflect.Ptr || fk == reflect.Interface {
+ fv = fv.Elem()
+ fk = fv.Kind()
+ }
+
+ if fk != reflect.Struct {
+ continue
+ }
+ if ev, ok := findField(fv, n); ok {
+ return ev, true
+ }
+ }
+
+ return reflect.Value{}, false
}
var (
- stringType = reflect.TypeOf(string(""))
- stringMapType = reflect.TypeOf(map[string]interface{}{})
- timeType = reflect.TypeOf(time.Time{})
- timePtrType = reflect.TypeOf(&time.Time{})
- urlType = reflect.TypeOf(url.URL{})
+ stringType = reflect.TypeOf(string(""))
+ stringMapType = reflect.TypeOf(map[string]interface{}{})
+ timeType = reflect.TypeOf(time.Time{})
+ timePtrType = reflect.TypeOf(&time.Time{})
+ urlType = reflect.TypeOf(url.URL{})
)
func skipTextMarshalling(t reflect.Type) bool {
- /*// Skip time.Time because its text unmarshaling is overly rigid:
- return t == timeType || t == timePtrType*/
- // Skip time.Time & convertibles because its text unmarshaling is overly rigid:
- return t.ConvertibleTo(timeType) || t.ConvertibleTo(timePtrType)
+ /*// Skip time.Time because its text unmarshaling is overly rigid:
+ return t == timeType || t == timePtrType*/
+ // Skip time.Time & convertibles because its text unmarshaling is overly rigid:
+ return t.ConvertibleTo(timeType) || t.ConvertibleTo(timePtrType)
}
func unmarshalValue(v reflect.Value, x interface{}) bool {
- if skipTextMarshalling(v.Type()) {
- return false
- }
-
- tu, ok := v.Interface().(encoding.TextUnmarshaler)
- if !ok && !v.CanAddr() {
- return false
- } else if !ok {
- return unmarshalValue(v.Addr(), x)
- }
-
- s := getString(x)
- if err := tu.UnmarshalText([]byte(s)); err != nil {
- panic(err)
- }
- return true
+ if skipTextMarshalling(v.Type()) {
+ return false
+ }
+
+ tu, ok := v.Interface().(encoding.TextUnmarshaler)
+ if !ok && !v.CanAddr() {
+ return false
+ } else if !ok {
+ return unmarshalValue(v.Addr(), x)
+ }
+
+ s := getString(x)
+ if err := tu.UnmarshalText([]byte(s)); err != nil {
+ panic(err)
+ }
+ return true
}
func marshalValue(v reflect.Value) (string, bool) {
- if skipTextMarshalling(v.Type()) {
- return "", false
- }
-
- tm, ok := v.Interface().(encoding.TextMarshaler)
- if !ok && !v.CanAddr() {
- return "", false
- } else if !ok {
- return marshalValue(v.Addr())
- }
-
- bs, err := tm.MarshalText()
- if err != nil {
- panic(err)
- }
- return string(bs), true
+ if skipTextMarshalling(v.Type()) {
+ return "", false
+ }
+
+ tm, ok := v.Interface().(encoding.TextMarshaler)
+ if !ok && !v.CanAddr() {
+ return "", false
+ } else if !ok {
+ return marshalValue(v.Addr())
+ }
+
+ bs, err := tm.MarshalText()
+ if err != nil {
+ panic(err)
+ }
+ return string(bs), true
}
diff --git a/vendor/github.com/ajg/form/form.go b/vendor/github.com/ajg/form/form.go
index 59463cc83a5d..7c74f3d57735 100644
--- a/vendor/github.com/ajg/form/form.go
+++ b/vendor/github.com/ajg/form/form.go
@@ -6,6 +6,6 @@
package form
const (
- implicitKey = "_"
- omittedKey = "-"
+ implicitKey = "_"
+ omittedKey = "-"
)
diff --git a/vendor/github.com/ajg/form/node.go b/vendor/github.com/ajg/form/node.go
index 9db2540134bf..e4a04e5bdd41 100644
--- a/vendor/github.com/ajg/form/node.go
+++ b/vendor/github.com/ajg/form/node.go
@@ -5,144 +5,144 @@
package form
import (
- "net/url"
- "strconv"
- "strings"
+ "net/url"
+ "strconv"
+ "strings"
)
type node map[string]interface{}
func (n node) Values() url.Values {
- vs := url.Values{}
- n.merge("", &vs)
- return vs
+ vs := url.Values{}
+ n.merge("", &vs)
+ return vs
}
func (n node) merge(p string, vs *url.Values) {
- for k, x := range n {
- switch y := x.(type) {
- case string:
- vs.Add(p+escape(k), y)
- case node:
- y.merge(p+escape(k)+".", vs)
- default:
- panic("value is neither string nor node")
- }
- }
+ for k, x := range n {
+ switch y := x.(type) {
+ case string:
+ vs.Add(p+escape(k), y)
+ case node:
+ y.merge(p+escape(k)+".", vs)
+ default:
+ panic("value is neither string nor node")
+ }
+ }
}
// TODO: Add tests for implicit indexing.
func parseValues(vs url.Values, canIndexFirstLevelOrdinally bool) node {
- // NOTE: Because of the flattening of potentially multiple strings to one key, implicit indexing works:
- // i. At the first level; e.g. Foo.Bar=A&Foo.Bar=B becomes 0.Foo.Bar=A&1.Foo.Bar=B
- // ii. At the last level; e.g. Foo.Bar._=A&Foo.Bar._=B becomes Foo.Bar.0=A&Foo.Bar.1=B
- // TODO: At in-between levels; e.g. Foo._.Bar=A&Foo._.Bar=B becomes Foo.0.Bar=A&Foo.1.Bar=B
- // (This last one requires that there only be one placeholder in order for it to be unambiguous.)
-
- m := map[string]string{}
- for k, ss := range vs {
- indexLastLevelOrdinally := strings.HasSuffix(k, "."+implicitKey)
-
- for i, s := range ss {
- if canIndexFirstLevelOrdinally {
- k = strconv.Itoa(i) + "." + k
- } else if indexLastLevelOrdinally {
- k = strings.TrimSuffix(k, implicitKey) + strconv.Itoa(i)
- }
-
- m[k] = s
- }
- }
-
- n := node{}
- for k, s := range m {
- n = n.split(k, s)
- }
- return n
+ // NOTE: Because of the flattening of potentially multiple strings to one key, implicit indexing works:
+ // i. At the first level; e.g. Foo.Bar=A&Foo.Bar=B becomes 0.Foo.Bar=A&1.Foo.Bar=B
+ // ii. At the last level; e.g. Foo.Bar._=A&Foo.Bar._=B becomes Foo.Bar.0=A&Foo.Bar.1=B
+ // TODO: At in-between levels; e.g. Foo._.Bar=A&Foo._.Bar=B becomes Foo.0.Bar=A&Foo.1.Bar=B
+ // (This last one requires that there only be one placeholder in order for it to be unambiguous.)
+
+ m := map[string]string{}
+ for k, ss := range vs {
+ indexLastLevelOrdinally := strings.HasSuffix(k, "."+implicitKey)
+
+ for i, s := range ss {
+ if canIndexFirstLevelOrdinally {
+ k = strconv.Itoa(i) + "." + k
+ } else if indexLastLevelOrdinally {
+ k = strings.TrimSuffix(k, implicitKey) + strconv.Itoa(i)
+ }
+
+ m[k] = s
+ }
+ }
+
+ n := node{}
+ for k, s := range m {
+ n = n.split(k, s)
+ }
+ return n
}
func splitPath(path string) (k, rest string) {
- esc := false
- for i, r := range path {
- switch {
- case !esc && r == '\\':
- esc = true
- case !esc && r == '.':
- return unescape(path[:i]), path[i+1:]
- default:
- esc = false
- }
- }
- return unescape(path), ""
+ esc := false
+ for i, r := range path {
+ switch {
+ case !esc && r == '\\':
+ esc = true
+ case !esc && r == '.':
+ return unescape(path[:i]), path[i+1:]
+ default:
+ esc = false
+ }
+ }
+ return unescape(path), ""
}
func (n node) split(path, s string) node {
- k, rest := splitPath(path)
- if rest == "" {
- return add(n, k, s)
- }
- if _, ok := n[k]; !ok {
- n[k] = node{}
- }
-
- c := getNode(n[k])
- n[k] = c.split(rest, s)
- return n
+ k, rest := splitPath(path)
+ if rest == "" {
+ return add(n, k, s)
+ }
+ if _, ok := n[k]; !ok {
+ n[k] = node{}
+ }
+
+ c := getNode(n[k])
+ n[k] = c.split(rest, s)
+ return n
}
func add(n node, k, s string) node {
- if n == nil {
- return node{k: s}
- }
+ if n == nil {
+ return node{k: s}
+ }
- if _, ok := n[k]; ok {
- panic("key " + k + " already set")
- }
+ if _, ok := n[k]; ok {
+ panic("key " + k + " already set")
+ }
- n[k] = s
- return n
+ n[k] = s
+ return n
}
func isEmpty(x interface{}) bool {
- switch y := x.(type) {
- case string:
- return y == ""
- case node:
- if s, ok := y[""].(string); ok {
- return s == ""
- }
- return false
- }
- panic("value is neither string nor node")
+ switch y := x.(type) {
+ case string:
+ return y == ""
+ case node:
+ if s, ok := y[""].(string); ok {
+ return s == ""
+ }
+ return false
+ }
+ panic("value is neither string nor node")
}
func getNode(x interface{}) node {
- switch y := x.(type) {
- case string:
- return node{"": y}
- case node:
- return y
- }
- panic("value is neither string nor node")
+ switch y := x.(type) {
+ case string:
+ return node{"": y}
+ case node:
+ return y
+ }
+ panic("value is neither string nor node")
}
func getString(x interface{}) string {
- switch y := x.(type) {
- case string:
- return y
- case node:
- if s, ok := y[""].(string); ok {
- return s
- }
- return ""
- }
- panic("value is neither string nor node")
+ switch y := x.(type) {
+ case string:
+ return y
+ case node:
+ if s, ok := y[""].(string); ok {
+ return s
+ }
+ return ""
+ }
+ panic("value is neither string nor node")
}
func escape(s string) string {
- return strings.Replace(strings.Replace(s, `\`, `\\`, -1), `.`, `\.`, -1)
+ return strings.Replace(strings.Replace(s, `\`, `\\`, -1), `.`, `\.`, -1)
}
func unescape(s string) string {
- return strings.Replace(strings.Replace(s, `\.`, `.`, -1), `\\`, `\`, -1)
+ return strings.Replace(strings.Replace(s, `\.`, `.`, -1), `\\`, `\`, -1)
}
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/client/default_retryer.go b/vendor/github.com/aws/aws-sdk-go/aws/client/default_retryer.go
index 83badef7cc81..f664caf0946e 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/client/default_retryer.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/client/default_retryer.go
@@ -32,19 +32,37 @@ func (d DefaultRetryer) MaxRetries() int {
// RetryRules returns the delay duration before retrying this request again
func (d DefaultRetryer) RetryRules(r *request.Request) time.Duration {
// Set the upper limit of delay in retrying at ~five minutes
+ minTime := 30
+ throttle := d.shouldThrottle(r)
+ if throttle {
+ minTime = 1000
+ }
+
retryCount := r.RetryCount
if retryCount > 13 {
retryCount = 13
+ } else if throttle && retryCount > 8 {
+ retryCount = 8
}
- delay := (1 << uint(retryCount)) * (rand.Intn(30) + 30)
+ delay := (1 << uint(retryCount)) * (rand.Intn(30) + minTime)
return time.Duration(delay) * time.Millisecond
}
-// ShouldRetry returns if the request should be retried.
+// ShouldRetry returns true if the request should be retried.
func (d DefaultRetryer) ShouldRetry(r *request.Request) bool {
if r.HTTPResponse.StatusCode >= 500 {
return true
}
- return r.IsErrorRetryable()
+ return r.IsErrorRetryable() || d.shouldThrottle(r)
+}
+
+// ShouldThrottle returns true if the request should be throttled.
+func (d DefaultRetryer) shouldThrottle(r *request.Request) bool {
+ if r.HTTPResponse.StatusCode == 502 ||
+ r.HTTPResponse.StatusCode == 503 ||
+ r.HTTPResponse.StatusCode == 504 {
+ return true
+ }
+ return r.IsErrorThrottle()
}
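
With the change above, throttle-class responses (HTTP 502/503/504 or a throttling error code) back off from a 1000ms floor with the exponent capped at 8, while other retryable errors keep the 30ms floor. A minimal sketch of installing this retryer with a custom retry count; `NumMaxRetries` and `session.New` are standard SDK usage assumed here, not part of this diff:

```go
package main

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/client"
	"github.com/aws/aws-sdk-go/aws/request"
	"github.com/aws/aws-sdk-go/aws/session"
)

func main() {
	// WithRetryer installs the retryer on a config; RetryRules then uses a
	// 30ms backoff floor for ordinary retryable errors and a 1000ms floor
	// when shouldThrottle reports a throttled response.
	cfg := request.WithRetryer(aws.NewConfig(), client.DefaultRetryer{NumMaxRetries: 5})
	sess := session.New(cfg)
	_ = sess
}
```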
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/ec2metadata/api.go b/vendor/github.com/aws/aws-sdk-go/aws/ec2metadata/api.go
index 8edcfc926bd9..669c813a00d5 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/ec2metadata/api.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/ec2metadata/api.go
@@ -2,6 +2,7 @@ package ec2metadata
import (
"encoding/json"
+ "fmt"
"path"
"strings"
"time"
@@ -49,7 +50,7 @@ func (c *EC2Metadata) GetInstanceIdentityDocument() (EC2InstanceIdentityDocument
resp, err := c.GetDynamicData("instance-identity/document")
if err != nil {
return EC2InstanceIdentityDocument{},
- awserr.New("EC2RoleRequestError",
+ awserr.New("EC2MetadataRequestError",
"failed to get EC2 instance identity document", err)
}
@@ -63,6 +64,31 @@ func (c *EC2Metadata) GetInstanceIdentityDocument() (EC2InstanceIdentityDocument
return doc, nil
}
+// IAMInfo retrieves IAM info from the metadata API
+func (c *EC2Metadata) IAMInfo() (EC2IAMInfo, error) {
+ resp, err := c.GetMetadata("iam/info")
+ if err != nil {
+ return EC2IAMInfo{},
+ awserr.New("EC2MetadataRequestError",
+ "failed to get EC2 IAM info", err)
+ }
+
+ info := EC2IAMInfo{}
+ if err := json.NewDecoder(strings.NewReader(resp)).Decode(&info); err != nil {
+ return EC2IAMInfo{},
+ awserr.New("SerializationError",
+ "failed to decode EC2 IAM info", err)
+ }
+
+ if info.Code != "Success" {
+ errMsg := fmt.Sprintf("failed to get EC2 IAM Info (%s)", info.Code)
+ return EC2IAMInfo{},
+ awserr.New("EC2MetadataError", errMsg, nil)
+ }
+
+ return info, nil
+}
+
// Region returns the region the instance is running in.
func (c *EC2Metadata) Region() (string, error) {
resp, err := c.GetMetadata("placement/availability-zone")
@@ -85,6 +111,15 @@ func (c *EC2Metadata) Available() bool {
return true
}
+// An EC2IAMInfo provides the shape for unmarshalling
+// an IAM info from the metadata API
+type EC2IAMInfo struct {
+ Code string
+ LastUpdated time.Time
+ InstanceProfileArn string
+ InstanceProfileID string
+}
+
// An EC2InstanceIdentityDocument provides the shape for unmarshalling
// an instance identity document
type EC2InstanceIdentityDocument struct {
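
The new `IAMInfo` call surfaces the instance profile attached to the running instance via the metadata service, failing unless the response code is `Success`. A minimal usage sketch, assuming code running on EC2 with an instance profile; the constructor call is the usual SDK pattern and not part of this diff:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws/ec2metadata"
	"github.com/aws/aws-sdk-go/aws/session"
)

func main() {
	svc := ec2metadata.New(session.New())

	// IAMInfo reads iam/info from the metadata endpoint and decodes it
	// into the EC2IAMInfo shape added above.
	info, err := svc.IAMInfo()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(info.InstanceProfileArn, info.InstanceProfileID)
}
```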
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/request/retryer.go b/vendor/github.com/aws/aws-sdk-go/aws/request/retryer.go
index ab6fff5ac842..8cc8b015ae6c 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/request/retryer.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/request/retryer.go
@@ -26,8 +26,11 @@ func WithRetryer(cfg *aws.Config, retryer Retryer) *aws.Config {
// retryableCodes is a collection of service response codes which are retry-able
// without any further action.
var retryableCodes = map[string]struct{}{
- "RequestError": {},
- "RequestTimeout": {},
+ "RequestError": {},
+ "RequestTimeout": {},
+}
+
+var throttleCodes = map[string]struct{}{
"ProvisionedThroughputExceededException": {},
"Throttling": {},
"ThrottlingException": {},
@@ -46,6 +49,11 @@ var credsExpiredCodes = map[string]struct{}{
"RequestExpired": {}, // EC2 Only
}
+func isCodeThrottle(code string) bool {
+ _, ok := throttleCodes[code]
+ return ok
+}
+
func isCodeRetryable(code string) bool {
if _, ok := retryableCodes[code]; ok {
return true
@@ -70,6 +78,17 @@ func (r *Request) IsErrorRetryable() bool {
return false
}
+// IsErrorThrottle returns whether the error is to be throttled based on its code.
+// Returns false if the request has no Error set
+func (r *Request) IsErrorThrottle() bool {
+ if r.Error != nil {
+ if err, ok := r.Error.(awserr.Error); ok {
+ return isCodeThrottle(err.Code())
+ }
+ }
+ return false
+}
+
// IsErrorExpired returns whether the error code is a credential expiry error.
// Returns false if the request has no Error set.
func (r *Request) IsErrorExpired() bool {
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/types.go b/vendor/github.com/aws/aws-sdk-go/aws/types.go
index 0f067c57f4e2..fa014b49e1d7 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/types.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/types.go
@@ -61,23 +61,41 @@ func (r ReaderSeekerCloser) Close() error {
type WriteAtBuffer struct {
buf []byte
m sync.Mutex
+
+ // GrowthCoeff defines the growth rate of the internal buffer. By
+ // default, the growth rate is 1, where expanding the internal
+ // buffer will allocate only enough capacity to fit the new expected
+ // length.
+ GrowthCoeff float64
+}
+
+// NewWriteAtBuffer creates a WriteAtBuffer with an internal buffer
+// provided by buf.
+func NewWriteAtBuffer(buf []byte) *WriteAtBuffer {
+ return &WriteAtBuffer{buf: buf}
}
// WriteAt writes a slice of bytes to a buffer starting at the position provided
// The number of bytes written will be returned, or error. Can overwrite previous
// written slices if the write ats overlap.
func (b *WriteAtBuffer) WriteAt(p []byte, pos int64) (n int, err error) {
+ pLen := len(p)
+ expLen := pos + int64(pLen)
b.m.Lock()
defer b.m.Unlock()
-
- expLen := pos + int64(len(p))
if int64(len(b.buf)) < expLen {
- newBuf := make([]byte, expLen)
- copy(newBuf, b.buf)
- b.buf = newBuf
+ if int64(cap(b.buf)) < expLen {
+ if b.GrowthCoeff < 1 {
+ b.GrowthCoeff = 1
+ }
+ newBuf := make([]byte, expLen, int64(b.GrowthCoeff*float64(expLen)))
+ copy(newBuf, b.buf)
+ b.buf = newBuf
+ }
+ b.buf = b.buf[:expLen]
}
copy(b.buf[pos:], p)
- return len(p), nil
+ return pLen, nil
}
// Bytes returns a slice of bytes written to the buffer.
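
`WriteAtBuffer` can now over-allocate via `GrowthCoeff`, so repeated `WriteAt` calls (for example from a concurrent multi-part download) do not reallocate on every extension. A small sketch of the buffer setup; the values are illustrative:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
)

func main() {
	// Pre-size the buffer and ask for 50% headroom whenever it has to grow,
	// so a long run of WriteAt calls reallocates far less often.
	buf := aws.NewWriteAtBuffer(make([]byte, 0, 64*1024))
	buf.GrowthCoeff = 1.5

	if _, err := buf.WriteAt([]byte("chunk"), 1024); err != nil {
		log.Fatal(err)
	}
	// The buffer is zero-filled up to the highest offset written.
	fmt.Println(len(buf.Bytes())) // 1029
}
```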
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/version.go b/vendor/github.com/aws/aws-sdk-go/aws/version.go
index 458620f88b4e..805dc711dbb0 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/version.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/version.go
@@ -5,4 +5,4 @@ package aws
const SDKName = "aws-sdk-go"
// SDKVersion is the version of this SDK
-const SDKVersion = "1.1.14"
+const SDKVersion = "1.1.15"
diff --git a/vendor/github.com/aws/aws-sdk-go/service/autoscaling/waiters.go b/vendor/github.com/aws/aws-sdk-go/service/autoscaling/waiters.go
index 42ffccb852dc..42595d2178b0 100644
--- a/vendor/github.com/aws/aws-sdk-go/service/autoscaling/waiters.go
+++ b/vendor/github.com/aws/aws-sdk-go/service/autoscaling/waiters.go
@@ -14,15 +14,15 @@ func (c *AutoScaling) WaitUntilGroupExists(input *DescribeAutoScalingGroupsInput
Acceptors: []waiter.WaitAcceptor{
{
State: "success",
- Matcher: "pathAll",
- Argument: "length(AutoScalingGroups)",
- Expected: 1,
+ Matcher: "path",
+ Argument: "length(AutoScalingGroups) > `0`",
+ Expected: true,
},
{
State: "retry",
- Matcher: "pathAll",
- Argument: "length(AutoScalingGroups)",
- Expected: 0,
+ Matcher: "path",
+ Argument: "length(AutoScalingGroups) > `0`",
+ Expected: false,
},
},
}
@@ -43,13 +43,13 @@ func (c *AutoScaling) WaitUntilGroupInService(input *DescribeAutoScalingGroupsIn
Acceptors: []waiter.WaitAcceptor{
{
State: "success",
- Matcher: "pathAll",
+ Matcher: "path",
Argument: "contains(AutoScalingGroups[].[length(Instances[?LifecycleState=='InService']) >= MinSize][], `false`)",
Expected: false,
},
{
State: "retry",
- Matcher: "pathAll",
+ Matcher: "path",
Argument: "contains(AutoScalingGroups[].[length(Instances[?LifecycleState=='InService']) >= MinSize][], `false`)",
Expected: true,
},
@@ -72,15 +72,15 @@ func (c *AutoScaling) WaitUntilGroupNotExists(input *DescribeAutoScalingGroupsIn
Acceptors: []waiter.WaitAcceptor{
{
State: "success",
- Matcher: "pathAll",
- Argument: "length(AutoScalingGroups)",
- Expected: 0,
+ Matcher: "path",
+ Argument: "length(AutoScalingGroups) > `0`",
+ Expected: false,
},
{
State: "retry",
- Matcher: "pathAll",
- Argument: "length(AutoScalingGroups)",
- Expected: 1,
+ Matcher: "path",
+ Argument: "length(AutoScalingGroups) > `0`",
+ Expected: true,
},
},
}
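
The acceptors above move from the `pathAll` matcher with integer expectations to a single `path` JMESPath expression evaluating to a boolean. Calling code is unchanged; a hypothetical wait call (region, group name, and service construction are illustrative):

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/autoscaling"
)

func main() {
	svc := autoscaling.New(session.New(aws.NewConfig().WithRegion("us-east-1")))

	// Polls DescribeAutoScalingGroups until length(AutoScalingGroups) > `0`
	// evaluates to true, or the waiter gives up.
	err := svc.WaitUntilGroupExists(&autoscaling.DescribeAutoScalingGroupsInput{
		AutoScalingGroupNames: []*string{aws.String("example-asg")},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```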
diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudformation/api.go b/vendor/github.com/aws/aws-sdk-go/service/cloudformation/api.go
index ca79c979cfa2..7ace19458c8a 100644
--- a/vendor/github.com/aws/aws-sdk-go/service/cloudformation/api.go
+++ b/vendor/github.com/aws/aws-sdk-go/service/cloudformation/api.go
@@ -83,6 +83,46 @@ func (c *CloudFormation) ContinueUpdateRollback(input *ContinueUpdateRollbackInp
return out, err
}
+const opCreateChangeSet = "CreateChangeSet"
+
+// CreateChangeSetRequest generates a request for the CreateChangeSet operation.
+func (c *CloudFormation) CreateChangeSetRequest(input *CreateChangeSetInput) (req *request.Request, output *CreateChangeSetOutput) {
+ op := &request.Operation{
+ Name: opCreateChangeSet,
+ HTTPMethod: "POST",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &CreateChangeSetInput{}
+ }
+
+ req = c.newRequest(op, input, output)
+ output = &CreateChangeSetOutput{}
+ req.Data = output
+ return
+}
+
+// Creates a list of changes for a stack. AWS CloudFormation generates the change
+// set by comparing the stack's information with the information that you submit.
+// A change set can help you understand which resources AWS CloudFormation will
+// change and how it will change them before you update your stack. Change sets
+// allow you to check before you make a change so that you don't delete or replace
+// critical resources.
+//
+// AWS CloudFormation doesn't make any changes to the stack when you create
+// a change set. To make the specified changes, you must execute the change
+// set by using the ExecuteChangeSet action.
+//
+// After the call successfully completes, AWS CloudFormation starts creating
+// the change set. To check the status of the change set, use the DescribeChangeSet
+// action.
+func (c *CloudFormation) CreateChangeSet(input *CreateChangeSetInput) (*CreateChangeSetOutput, error) {
+ req, out := c.CreateChangeSetRequest(input)
+ err := req.Send()
+ return out, err
+}
+
const opCreateStack = "CreateStack"
// CreateStackRequest generates a request for the CreateStack operation.
@@ -112,6 +152,37 @@ func (c *CloudFormation) CreateStack(input *CreateStackInput) (*CreateStackOutpu
return out, err
}
+const opDeleteChangeSet = "DeleteChangeSet"
+
+// DeleteChangeSetRequest generates a request for the DeleteChangeSet operation.
+func (c *CloudFormation) DeleteChangeSetRequest(input *DeleteChangeSetInput) (req *request.Request, output *DeleteChangeSetOutput) {
+ op := &request.Operation{
+ Name: opDeleteChangeSet,
+ HTTPMethod: "POST",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &DeleteChangeSetInput{}
+ }
+
+ req = c.newRequest(op, input, output)
+ output = &DeleteChangeSetOutput{}
+ req.Data = output
+ return
+}
+
+// Deletes the specified change set. Deleting change sets ensures that no one
+// executes the wrong change set.
+//
+// If the call successfully completes, AWS CloudFormation successfully deleted
+// the change set.
+func (c *CloudFormation) DeleteChangeSet(input *DeleteChangeSetInput) (*DeleteChangeSetOutput, error) {
+ req, out := c.DeleteChangeSetRequest(input)
+ err := req.Send()
+ return out, err
+}
+
const opDeleteStack = "DeleteStack"
// DeleteStackRequest generates a request for the DeleteStack operation.
@@ -171,6 +242,36 @@ func (c *CloudFormation) DescribeAccountLimits(input *DescribeAccountLimitsInput
return out, err
}
+const opDescribeChangeSet = "DescribeChangeSet"
+
+// DescribeChangeSetRequest generates a request for the DescribeChangeSet operation.
+func (c *CloudFormation) DescribeChangeSetRequest(input *DescribeChangeSetInput) (req *request.Request, output *DescribeChangeSetOutput) {
+ op := &request.Operation{
+ Name: opDescribeChangeSet,
+ HTTPMethod: "POST",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &DescribeChangeSetInput{}
+ }
+
+ req = c.newRequest(op, input, output)
+ output = &DescribeChangeSetOutput{}
+ req.Data = output
+ return
+}
+
+// Returns the inputs for the change set and a list of changes that AWS CloudFormation
+// will make if you execute the change set. For more information, see Updating
+// Stacks Using Change Sets (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-changesets.html)
+// in the AWS CloudFormation User Guide.
+func (c *CloudFormation) DescribeChangeSet(input *DescribeChangeSetInput) (*DescribeChangeSetOutput, error) {
+ req, out := c.DescribeChangeSetRequest(input)
+ err := req.Send()
+ return out, err
+}
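+
+// Usage sketch (editorial illustration, not generated SDK code): polling
+// DescribeChangeSet until the change set reaches a terminal state, then
+// printing the status reason on failure. Assumes svc is an initialized
+// *CloudFormation client and the aws helpers are imported; the names are
+// hypothetical.
+//
+//	for {
+//		out, err := svc.DescribeChangeSet(&DescribeChangeSetInput{
+//			ChangeSetName: aws.String("example-change-set"),
+//			StackName:     aws.String("example-stack"),
+//		})
+//		if err != nil {
+//			log.Fatal(err)
+//		}
+//		status := aws.StringValue(out.Status)
+//		if status == ChangeSetStatusCreateComplete || status == ChangeSetStatusFailed {
+//			fmt.Println(status, aws.StringValue(out.StatusReason))
+//			break
+//		}
+//		time.Sleep(5 * time.Second)
+//	}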
+
const opDescribeStackEvents = "DescribeStackEvents"
// DescribeStackEventsRequest generates a request for the DescribeStackEvents operation.
@@ -361,6 +462,44 @@ func (c *CloudFormation) EstimateTemplateCost(input *EstimateTemplateCostInput)
return out, err
}
+const opExecuteChangeSet = "ExecuteChangeSet"
+
+// ExecuteChangeSetRequest generates a request for the ExecuteChangeSet operation.
+func (c *CloudFormation) ExecuteChangeSetRequest(input *ExecuteChangeSetInput) (req *request.Request, output *ExecuteChangeSetOutput) {
+ op := &request.Operation{
+ Name: opExecuteChangeSet,
+ HTTPMethod: "POST",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &ExecuteChangeSetInput{}
+ }
+
+ req = c.newRequest(op, input, output)
+ output = &ExecuteChangeSetOutput{}
+ req.Data = output
+ return
+}
+
+// Updates a stack using the input information that was provided when the specified
+// change set was created. After the call successfully completes, AWS CloudFormation
+// starts updating the stack. Use the DescribeStacks action to view the status
+// of the update.
+//
+// When you execute a change set, AWS CloudFormation deletes all other change
+// sets associated with the stack because they aren't valid for the updated
+// stack.
+//
+// If a stack policy is associated with the stack, AWS CloudFormation enforces
+// the policy during the update. You can't specify a temporary stack policy
+// that overrides the current policy.
+func (c *CloudFormation) ExecuteChangeSet(input *ExecuteChangeSetInput) (*ExecuteChangeSetOutput, error) {
+ req, out := c.ExecuteChangeSetRequest(input)
+ err := req.Send()
+ return out, err
+}
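+
+// Usage sketch (editorial illustration, not generated SDK code): executing a
+// reviewed change set and then waiting for the resulting stack update with
+// the DescribeStacks-based waiter defined in waiters.go. Assumes svc is an
+// initialized *CloudFormation client; the names are hypothetical.
+//
+//	if _, err := svc.ExecuteChangeSet(&ExecuteChangeSetInput{
+//		ChangeSetName: aws.String("example-change-set"),
+//		StackName:     aws.String("example-stack"),
+//	}); err != nil {
+//		log.Fatal(err)
+//	}
+//	err := svc.WaitUntilStackUpdateComplete(&DescribeStacksInput{
+//		StackName: aws.String("example-stack"),
+//	})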
+
const opGetStackPolicy = "GetStackPolicy"
// GetStackPolicyRequest generates a request for the GetStackPolicy operation.
@@ -458,6 +597,35 @@ func (c *CloudFormation) GetTemplateSummary(input *GetTemplateSummaryInput) (*Ge
return out, err
}
+const opListChangeSets = "ListChangeSets"
+
+// ListChangeSetsRequest generates a request for the ListChangeSets operation.
+func (c *CloudFormation) ListChangeSetsRequest(input *ListChangeSetsInput) (req *request.Request, output *ListChangeSetsOutput) {
+ op := &request.Operation{
+ Name: opListChangeSets,
+ HTTPMethod: "POST",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &ListChangeSetsInput{}
+ }
+
+ req = c.newRequest(op, input, output)
+ output = &ListChangeSetsOutput{}
+ req.Data = output
+ return
+}
+
+// Returns the ID and status of each active change set for a stack. For example,
+// AWS CloudFormation lists change sets that are in the CREATE_IN_PROGRESS or
+// CREATE_PENDING state.
+func (c *CloudFormation) ListChangeSets(input *ListChangeSetsInput) (*ListChangeSetsOutput, error) {
+ req, out := c.ListChangeSetsRequest(input)
+ err := req.Send()
+ return out, err
+}
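+
+// Usage sketch (editorial illustration, not generated SDK code): listing every
+// change set for a stack by following NextToken manually. Assumes svc is an
+// initialized *CloudFormation client; the stack name is hypothetical.
+//
+//	input := &ListChangeSetsInput{StackName: aws.String("example-stack")}
+//	for {
+//		out, err := svc.ListChangeSets(input)
+//		if err != nil {
+//			log.Fatal(err)
+//		}
+//		for _, s := range out.Summaries {
+//			fmt.Println(aws.StringValue(s.ChangeSetName), aws.StringValue(s.Status))
+//		}
+//		if out.NextToken == nil {
+//			break
+//		}
+//		input.NextToken = out.NextToken
+//	}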
+
const opListStackResources = "ListStackResources"
// ListStackResourcesRequest generates a request for the ListStackResources operation.
@@ -725,6 +893,72 @@ func (s CancelUpdateStackOutput) GoString() string {
return s.String()
}
+// The Change structure describes the changes AWS CloudFormation will perform
+// if you execute the change set.
+type Change struct {
+ _ struct{} `type:"structure"`
+
+ // A ResourceChange structure that describes the resource and action that AWS
+ // CloudFormation will perform.
+ ResourceChange *ResourceChange `type:"structure"`
+
+ // The type of entity that AWS CloudFormation changes. Currently, the only entity
+ // type is Resource.
+ Type *string `type:"string" enum:"ChangeType"`
+}
+
+// String returns the string representation
+func (s Change) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s Change) GoString() string {
+ return s.String()
+}
+
+// The ChangeSetSummary structure describes a change set, its status, and the
+// stack with which it's associated.
+type ChangeSetSummary struct {
+ _ struct{} `type:"structure"`
+
+ // The ID of the change set.
+ ChangeSetId *string `min:"1" type:"string"`
+
+ // The name of the change set.
+ ChangeSetName *string `min:"1" type:"string"`
+
+ // The start time when the change set was created, in UTC.
+ CreationTime *time.Time `type:"timestamp" timestampFormat:"iso8601"`
+
+ // Descriptive information about the change set.
+ Description *string `min:"1" type:"string"`
+
+ // The ID of the stack with which the change set is associated.
+ StackId *string `type:"string"`
+
+ // The name of the stack with which the change set is associated.
+ StackName *string `type:"string"`
+
+ // The state of the change set, such as CREATE_IN_PROGRESS, CREATE_COMPLETE,
+ // or FAILED.
+ Status *string `type:"string" enum:"ChangeSetStatus"`
+
+ // A description of the change set's status. For example, if your change set
+ // is in the FAILED state, AWS CloudFormation shows the error message.
+ StatusReason *string `type:"string"`
+}
+
+// String returns the string representation
+func (s ChangeSetSummary) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ChangeSetSummary) GoString() string {
+ return s.String()
+}
+
// The input for the ContinueUpdateRollback action.
type ContinueUpdateRollbackInput struct {
_ struct{} `type:"structure"`
@@ -759,13 +993,137 @@ func (s ContinueUpdateRollbackOutput) GoString() string {
return s.String()
}
+// The input for the CreateChangeSet action.
+type CreateChangeSetInput struct {
+ _ struct{} `type:"structure"`
+
+ // A list of capabilities that you must specify before AWS CloudFormation can
+ // update certain stacks. Some stack templates might include resources that
+ // can affect permissions in your AWS account, for example, by creating new
+ // AWS Identity and Access Management (IAM) users. For those stacks, you must
+ // explicitly acknowledge their capabilities by specifying this parameter.
+ //
+ // Currently, the only valid value is CAPABILITY_IAM, which is required for
+ // the following resources: AWS::IAM::AccessKey (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-accesskey.html),
+ // AWS::IAM::Group (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-group.html),
+ // AWS::IAM::InstanceProfile (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-instanceprofile.html),
+ // AWS::IAM::Policy (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-policy.html),
+ // AWS::IAM::Role (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-role.html),
+ // AWS::IAM::User (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-user.html),
+ // and AWS::IAM::UserToGroupAddition (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-addusertogroup.html).
+ // If your stack template contains these resources, we recommend that you review
+ // all permissions associated with them and edit their permissions if necessary.
+ // If your template contains any of the listed resources and you don't specify
+ // this parameter, this action returns an InsufficientCapabilities error.
+ Capabilities []*string `type:"list"`
+
+ // The name of the change set. The name must be unique among all change sets
+ // that are associated with the specified stack.
+ //
+ // A change set name can contain only alphanumeric, case-sensitive characters
+ // and hyphens. It must start with an alphabetic character and cannot exceed
+ // 128 characters.
+ ChangeSetName *string `min:"1" type:"string" required:"true"`
+
+ // A unique identifier for this CreateChangeSet request. Specify this token
+ // if you plan to retry requests so that AWS CloudFormation knows that you're
+ // not attempting to create another change set with the same name. You might
+ // retry CreateChangeSet requests to ensure that AWS CloudFormation successfully
+ // received them.
+ ClientToken *string `min:"1" type:"string"`
+
+ // A description to help you identify this change set.
+ Description *string `min:"1" type:"string"`
+
+ // The Amazon Resource Names (ARNs) of Amazon Simple Notification Service (Amazon
+ // SNS) topics that AWS CloudFormation associates with the stack. To remove
+ // all associated notification topics, specify an empty list.
+ NotificationARNs []*string `type:"list"`
+
+ // A list of Parameter structures that specify input parameters for the change
+ // set. For more information, see the Parameter (http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_Parameter.html)
+ // data type.
+ Parameters []*Parameter `type:"list"`
+
+ // The template resource types that you have permissions to work with if you
+ // execute this change set, such as AWS::EC2::Instance, AWS::EC2::*, or Custom::MyCustomInstance.
+ //
+ // If the list of resource types doesn't include a resource type that you're
+ // updating, the stack update fails. By default, AWS CloudFormation grants permissions
+ // to all resource types. AWS Identity and Access Management (IAM) uses this
+ // parameter for condition keys in IAM policies for AWS CloudFormation. For
+ // more information, see Controlling Access with AWS Identity and Access Management
+ // (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html)
+ // in the AWS CloudFormation User Guide.
+ ResourceTypes []*string `type:"list"`
+
+ // The name or the unique ID of the stack for which you are creating a change
+ // set. AWS CloudFormation generates the change set by comparing this stack's
+ // information with the information that you submit, such as a modified template
+ // or different parameter input values.
+ StackName *string `min:"1" type:"string" required:"true"`
+
+ // Key-value pairs to associate with this stack. AWS CloudFormation also propagates
+ // these tags to resources in the stack. You can specify a maximum of 10 tags.
+ Tags []*Tag `type:"list"`
+
+ // A structure that contains the body of the revised template, with a minimum
+ // length of 1 byte and a maximum length of 51,200 bytes. AWS CloudFormation
+ // generates the change set by comparing this template with the template of
+ // the stack that you specified.
+ //
+ // Conditional: You must specify only TemplateBody or TemplateURL.
+ TemplateBody *string `min:"1" type:"string"`
+
+ // The location of the file that contains the revised template. The URL must
+ // point to a template (max size: 460,800 bytes) that is located in an S3 bucket.
+ // AWS CloudFormation generates the change set by comparing this template with
+ // the stack that you specified.
+ //
+ // Conditional: You must specify only TemplateBody or TemplateURL.
+ TemplateURL *string `min:"1" type:"string"`
+
+ // Whether to reuse the template that is associated with the stack to create
+ // the change set.
+ UsePreviousTemplate *bool `type:"boolean"`
+}
+
+// String returns the string representation
+func (s CreateChangeSetInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CreateChangeSetInput) GoString() string {
+ return s.String()
+}
+
+// The output for the CreateChangeSet action.
+type CreateChangeSetOutput struct {
+ _ struct{} `type:"structure"`
+
+ // The Amazon Resource Name (ARN) of the change set.
+ Id *string `min:"1" type:"string"`
+}
+
+// String returns the string representation
+func (s CreateChangeSetOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CreateChangeSetOutput) GoString() string {
+ return s.String()
+}
+
// The input for CreateStack action.
type CreateStackInput struct {
_ struct{} `type:"structure"`
// A list of capabilities that you must specify before AWS CloudFormation can
- // create or update certain stacks. Some stack templates might include resources
- // that can affect permissions in your AWS account. For those stacks, you must
+ // create certain stacks. Some stack templates might include resources that
+ // can affect permissions in your AWS account, for example, by creating new
+ // AWS Identity and Access Management (IAM) users. For those stacks, you must
// explicitly acknowledge their capabilities by specifying this parameter.
//
// Currently, the only valid value is CAPABILITY_IAM, which is required for
@@ -777,8 +1135,9 @@ type CreateStackInput struct {
// AWS::IAM::User (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-user.html),
// and AWS::IAM::UserToGroupAddition (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-addusertogroup.html).
// If your stack template contains these resources, we recommend that you review
- // any permissions associated with them. If you don't specify this parameter,
- // this action returns an InsufficientCapabilities error.
+ // all permissions associated with them and edit their permissions if necessary.
+ // If your template contains any of the listed resources and you don't specify
+ // this parameter, this action returns an InsufficientCapabilities error.
Capabilities []*string `type:"list"`
// Set to true to disable rollback of the stack if stack creation failed. You
@@ -897,6 +1256,44 @@ func (s CreateStackOutput) GoString() string {
return s.String()
}
+// The input for the DeleteChangeSet action.
+type DeleteChangeSetInput struct {
+ _ struct{} `type:"structure"`
+
+ // The name or Amazon Resource Name (ARN) of the change set that you want to
+ // delete.
+ ChangeSetName *string `min:"1" type:"string" required:"true"`
+
+ // If you specified the name of a change set to delete, specify the stack name
+ // or ID (ARN) that is associated with it.
+ StackName *string `min:"1" type:"string"`
+}
+
+// String returns the string representation
+func (s DeleteChangeSetInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s DeleteChangeSetInput) GoString() string {
+ return s.String()
+}
+
+// The output for the DeleteChangeSet action.
+type DeleteChangeSetOutput struct {
+ _ struct{} `type:"structure"`
+}
+
+// String returns the string representation
+func (s DeleteChangeSetOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s DeleteChangeSetOutput) GoString() string {
+ return s.String()
+}
+
// The input for DeleteStack action.
type DeleteStackInput struct {
_ struct{} `type:"structure"`
@@ -978,6 +1375,100 @@ func (s DescribeAccountLimitsOutput) GoString() string {
return s.String()
}
+// The input for the DescribeChangeSet action.
+type DescribeChangeSetInput struct {
+ _ struct{} `type:"structure"`
+
+ // The name or Amazon Resource Name (ARN) of the change set that you want to
+ // describe.
+ ChangeSetName *string `min:"1" type:"string" required:"true"`
+
+ // A string (provided by the DescribeChangeSet response output) that identifies
+ // the next page of information that you want to retrieve.
+ NextToken *string `min:"1" type:"string"`
+
+ // If you specified the name of a change set, specify the stack name or ID (ARN)
+ // that is associated with the change set you want to describe.
+ StackName *string `min:"1" type:"string"`
+}
+
+// String returns the string representation
+func (s DescribeChangeSetInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s DescribeChangeSetInput) GoString() string {
+ return s.String()
+}
+
+// The output for the DescribeChangeSet action.
+type DescribeChangeSetOutput struct {
+ _ struct{} `type:"structure"`
+
+ // If you execute the change set, the list of capabilities that were explicitly
+ // acknowledged when the change set was created.
+ Capabilities []*string `type:"list"`
+
+ // The ARN of the change set.
+ ChangeSetId *string `min:"1" type:"string"`
+
+ // The name of the change set.
+ ChangeSetName *string `min:"1" type:"string"`
+
+ // A list of Change structures that describes the resources AWS CloudFormation
+ // changes if you execute the change set.
+ Changes []*Change `type:"list"`
+
+ // The start time when the change set was created, in UTC.
+ CreationTime *time.Time `type:"timestamp" timestampFormat:"iso8601"`
+
+ // Information about the change set.
+ Description *string `min:"1" type:"string"`
+
+ // If the output exceeds 1 MB, a string that identifies the next page of changes.
+ // If there is no additional page, this value is null.
+ NextToken *string `min:"1" type:"string"`
+
+ // The ARNs of the Amazon Simple Notification Service (Amazon SNS) topics that
+ // will be associated with the stack if you execute the change set.
+ NotificationARNs []*string `type:"list"`
+
+ // A list of Parameter structures that describes the input parameters and their
+ // values used to create the change set. For more information, see the Parameter
+ // (http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_Parameter.html)
+ // data type.
+ Parameters []*Parameter `type:"list"`
+
+ // The ARN of the stack that is associated with the change set.
+ StackId *string `type:"string"`
+
+ // The name of the stack that is associated with the change set.
+ StackName *string `type:"string"`
+
+ // The current status of the change set, such as CREATE_IN_PROGRESS, CREATE_COMPLETE,
+ // or FAILED.
+ Status *string `type:"string" enum:"ChangeSetStatus"`
+
+ // A description of the change set's status. For example, if your attempt to
+ // create a change set failed, AWS CloudFormation shows the error message.
+ StatusReason *string `type:"string"`
+
+ // If you execute the change set, the tags that will be associated with the
+ // stack.
+ Tags []*Tag `type:"list"`
+}
+
+// String returns the string representation
+func (s DescribeChangeSetOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s DescribeChangeSetOutput) GoString() string {
+ return s.String()
+}
+
// The input for DescribeStackEvents action.
type DescribeStackEventsInput struct {
_ struct{} `type:"structure"`
@@ -1238,6 +1729,44 @@ func (s EstimateTemplateCostOutput) GoString() string {
return s.String()
}
+// The input for the ExecuteChangeSet action.
+type ExecuteChangeSetInput struct {
+ _ struct{} `type:"structure"`
+
+ // The name or ARN of the change set that you want to use to update the specified
+ // stack.
+ ChangeSetName *string `min:"1" type:"string" required:"true"`
+
+ // If you specified the name of a change set, specify the stack name or ID (ARN)
+ // that is associated with the change set you want to execute.
+ StackName *string `min:"1" type:"string"`
+}
+
+// String returns the string representation
+func (s ExecuteChangeSetInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ExecuteChangeSetInput) GoString() string {
+ return s.String()
+}
+
+// The output for the ExecuteChangeSet action.
+type ExecuteChangeSetOutput struct {
+ _ struct{} `type:"structure"`
+}
+
+// String returns the string representation
+func (s ExecuteChangeSetOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ExecuteChangeSetOutput) GoString() string {
+ return s.String()
+}
+
// The input for the GetStackPolicy action.
type GetStackPolicyInput struct {
_ struct{} `type:"structure"`
@@ -1378,7 +1907,7 @@ type GetTemplateSummaryOutput struct {
CapabilitiesReason *string `type:"string"`
// The value that is defined in the Description property of the template.
- Description *string `type:"string"`
+ Description *string `min:"1" type:"string"`
// The value that is defined for the Metadata property of the template.
Metadata *string `type:"string"`
@@ -1406,6 +1935,52 @@ func (s GetTemplateSummaryOutput) GoString() string {
return s.String()
}
+// The input for the ListChangeSets action.
+type ListChangeSetsInput struct {
+ _ struct{} `type:"structure"`
+
+ // A string (provided by the ListChangeSets response output) that identifies
+ // the next page of change sets that you want to retrieve.
+ NextToken *string `min:"1" type:"string"`
+
+ // The name or the Amazon Resource Name (ARN) of the stack for which you want
+ // to list change sets.
+ StackName *string `min:"1" type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s ListChangeSetsInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ListChangeSetsInput) GoString() string {
+ return s.String()
+}
+
+// The output for the ListChangeSets action.
+type ListChangeSetsOutput struct {
+ _ struct{} `type:"structure"`
+
+ // If the output exceeds 1 MB, a string that identifies the next page of change
+ // sets. If there is no additional page, this value is null.
+ NextToken *string `min:"1" type:"string"`
+
+ // A list of ChangeSetSummary structures that provides the ID and status of
+ // each change set for the specified stack.
+ Summaries []*ChangeSetSummary `type:"list"`
+}
+
+// String returns the string representation
+func (s ListChangeSetsOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ListChangeSetsOutput) GoString() string {
+ return s.String()
+}
+
// The input for the ListStackResource action.
type ListStackResourcesInput struct {
_ struct{} `type:"structure"`
@@ -1437,8 +2012,8 @@ func (s ListStackResourcesInput) GoString() string {
type ListStackResourcesOutput struct {
_ struct{} `type:"structure"`
- // If the output exceeds 1 MB in size, a string that identifies the next page
- // of stack resources. If no additional page exists, this value is null.
+ // If the output exceeds 1 MB, a string that identifies the next page of stack
+ // resources. If no additional page exists, this value is null.
NextToken *string `min:"1" type:"string"`
// A list of StackResourceSummary structures.
@@ -1506,7 +2081,7 @@ type Output struct {
_ struct{} `type:"structure"`
// User defined description associated with the output.
- Description *string `type:"string"`
+ Description *string `min:"1" type:"string"`
// The key associated with the output.
OutputKey *string `type:"string"`
@@ -1581,7 +2156,7 @@ type ParameterDeclaration struct {
DefaultValue *string `type:"string"`
// The description that is associate with the parameter.
- Description *string `type:"string"`
+ Description *string `min:"1" type:"string"`
// Flag that indicates whether the parameter value is shown as plain text in
// logs and in the AWS Management Console.
@@ -1607,6 +2182,152 @@ func (s ParameterDeclaration) GoString() string {
return s.String()
}
+// The ResourceChange structure describes the resource and the action that AWS
+// CloudFormation will perform on it if you execute this change set.
+type ResourceChange struct {
+ _ struct{} `type:"structure"`
+
+ // The action that AWS CloudFormation takes on the resource, such as Add (adds
+ // a new resource), Modify (changes a resource), or Remove (deletes a resource).
+ Action *string `type:"string" enum:"ChangeAction"`
+
+ // For the Modify action, a list of ResourceChangeDetail structures that describes
+ // the changes that AWS CloudFormation will make to the resource.
+ Details []*ResourceChangeDetail `type:"list"`
+
+ // The resource's logical ID, which is defined in the stack's template.
+ LogicalResourceId *string `type:"string"`
+
+ // The resource's physical ID (resource name). Resources that you are adding
+ // don't have physical IDs because they haven't been created.
+ PhysicalResourceId *string `type:"string"`
+
+ // For the Modify action, indicates whether AWS CloudFormation will replace
+ // the resource by creating a new one and deleting the old one. This value depends
+ // on the value of the RequiresRecreation property in the ResourceTargetDefinition
+ // structure. For example, if the RequiresRecreation field is Always and the
+ // Evaluation field is Static, Replacement is True. If the RequiresRecreation
+ // field is Always and the Evaluation field is Dynamic, Replacement is Conditional.
+ //
+ // If you have multiple changes with different RequiresRecreation values, the
+ // Replacement value depends on the change with the most impact. A RequiresRecreation
+ // value of Always has the most impact, followed by Conditionally, and then
+ // Never.
+ Replacement *string `type:"string" enum:"Replacement"`
+
+ // The type of AWS CloudFormation resource, such as AWS::S3::Bucket.
+ ResourceType *string `min:"1" type:"string"`
+
+ // For the Modify action, indicates which resource attribute is triggering this
+ // update, such as a change in the resource attribute's Metadata, Properties,
+ // or Tags.
+ Scope []*string `type:"list"`
+}
+
+// String returns the string representation
+func (s ResourceChange) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ResourceChange) GoString() string {
+ return s.String()
+}
+
+// For a resource with Modify as the action, the ResourceChange structure describes
+// the changes AWS CloudFormation will make to that resource.
+type ResourceChangeDetail struct {
+ _ struct{} `type:"structure"`
+
+ // The identity of the entity that triggered this change. This entity is a member
+ // of the group that is specified by the ChangeSource field. For example, if
+ // you modified the value of the KeyPairName parameter, the CausingEntity is
+ // the name of the parameter (KeyPairName).
+ //
+ // If the ChangeSource value is DirectModification, no value is given for CausingEntity.
+ CausingEntity *string `type:"string"`
+
+ // The group to which the CausingEntity value belongs. There are five entity
+ // groups:
+ //
+ //    * ResourceReference entities are Ref intrinsic functions that refer to
+ //    resources in the template, such as { "Ref" : "MyEC2InstanceResource" }.
+ //
+ //    * ParameterReference entities are Ref intrinsic functions that get template
+ //    parameter values, such as { "Ref" : "MyPasswordParameter" }.
+ //
+ //    * ResourceAttribute entities are Fn::GetAtt intrinsic functions that get
+ //    resource attribute values, such as { "Fn::GetAtt" : [ "MyEC2InstanceResource",
+ //    "PublicDnsName" ] }.
+ //
+ //    * DirectModification entities are changes that are made directly to the
+ //    template.
+ //
+ //    * Automatic entities are AWS::CloudFormation::Stack resource types, which
+ //    are also known as nested stacks. If you made no changes to the
+ //    AWS::CloudFormation::Stack resource, AWS CloudFormation sets the ChangeSource
+ //    to Automatic because the nested stack's template might have changed. Changes
+ //    to a nested stack's template aren't visible to AWS CloudFormation until you
+ //    run an update on the parent stack.
+ ChangeSource *string `type:"string" enum:"ChangeSource"`
+
+ // Indicates whether AWS CloudFormation can determine the target value, and
+ // whether the target value will change before you execute a change set.
+ //
+ // For Static evaluations, AWS CloudFormation can determine that the target
+ // value will change, and its value. For example, if you directly modify the
+ // InstanceType property of an EC2 instance, AWS CloudFormation knows that this
+ // property value will change, and its value, so this is a Static evaluation.
+ //
+ // For Dynamic evaluations, AWS CloudFormation cannot determine the target value
+ // because it depends on the result of an intrinsic function, such as a Ref or
+ // Fn::GetAtt intrinsic function, when the stack is updated. For example, if your
+ // template includes a reference to a resource that is conditionally recreated,
+ // the value of the reference (the physical ID of the resource) might change,
+ // depending on whether the resource is recreated. If the resource is recreated,
+ // it will have a new physical ID, so all references to that resource will also
+ // be updated.
+ Evaluation *string `type:"string" enum:"EvaluationType"`
+
+ // A ResourceTargetDefinition structure that describes the field that AWS CloudFormation
+ // will change and whether the resource will be recreated.
+ Target *ResourceTargetDefinition `type:"structure"`
+}
+
+// String returns the string representation
+func (s ResourceChangeDetail) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ResourceChangeDetail) GoString() string {
+ return s.String()
+}
+
+// The field that AWS CloudFormation will change, such as the name of a resource's
+// property, and whether the resource will be recreated.
+type ResourceTargetDefinition struct {
+ _ struct{} `type:"structure"`
+
+ // Indicates which resource attribute is triggering this update, such as a change
+ // in the resource attribute's Metadata, Properties, or Tags.
+ Attribute *string `type:"string" enum:"ResourceAttribute"`
+
+ // If the Attribute value is Properties, the name of the property. For all other
+ // attributes, the value is null.
+ Name *string `type:"string"`
+
+ // If the Attribute value is Properties, indicates whether a change to this
+ // property causes the resource to be recreated. The value can be Never, Always,
+ // or Conditionally. To determine the conditions for a Conditionally recreation,
+ // see the update behavior for that property (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html)
+ // in the AWS CloudFormation User Guide.
+ RequiresRecreation *string `type:"string" enum:"RequiresRecreation"`
+}
+
+// String returns the string representation
+func (s ResourceTargetDefinition) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ResourceTargetDefinition) GoString() string {
+ return s.String()
+}
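+
+// Usage sketch (editorial illustration, not generated SDK code): walking the
+// Changes returned by DescribeChangeSet and flagging resources that would be
+// replaced. Assumes out is a *DescribeChangeSetOutput obtained earlier.
+//
+//	for _, c := range out.Changes {
+//		rc := c.ResourceChange
+//		if rc == nil {
+//			continue
+//		}
+//		fmt.Printf("%s %s (%s)\n", aws.StringValue(rc.Action),
+//			aws.StringValue(rc.LogicalResourceId), aws.StringValue(rc.ResourceType))
+//		if aws.StringValue(rc.Replacement) == ReplacementTrue {
+//			fmt.Println("  this resource would be replaced")
+//		}
+//	}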
+
// The input for the SetStackPolicy action.
type SetStackPolicyInput struct {
_ struct{} `type:"structure"`
@@ -1621,7 +2342,7 @@ type SetStackPolicyInput struct {
StackPolicyBody *string `min:"1" type:"string"`
// Location of a file containing the stack policy. The URL must point to a policy
- // (max size: 16KB) located in an S3 bucket in the same region as the stack.
+ // (maximum size: 16 KB) located in an S3 bucket in the same region as the stack.
// You can specify either the StackPolicyBody or the StackPolicyURL parameter,
// but not both.
StackPolicyURL *string `min:"1" type:"string"`
@@ -1709,7 +2430,7 @@ type Stack struct {
CreationTime *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"`
// A user-defined description associated with the stack.
- Description *string `type:"string"`
+ Description *string `min:"1" type:"string"`
// Boolean to enable or disable rollback on stack creation failures:
//
@@ -1741,7 +2462,7 @@ type Stack struct {
// Success/failure message associated with the stack status.
StackStatusReason *string `type:"string"`
- // A list of Tags that specify cost allocation information for the stack.
+ // A list of Tags that specify information about the stack.
Tags []*Tag `type:"list"`
// The amount of time within which stack creation should complete.
@@ -1784,7 +2505,7 @@ type StackEvent struct {
// Type of resource. (For more information, go to AWS Resource Types Reference
// (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html)
// in the AWS CloudFormation User Guide.)
- ResourceType *string `type:"string"`
+ ResourceType *string `min:"1" type:"string"`
// The unique ID name of the instance of the stack.
StackId *string `type:"string" required:"true"`
@@ -1811,7 +2532,7 @@ type StackResource struct {
_ struct{} `type:"structure"`
// User defined description associated with the resource.
- Description *string `type:"string"`
+ Description *string `min:"1" type:"string"`
// The logical name of the resource specified in the template.
LogicalResourceId *string `type:"string" required:"true"`
@@ -1829,7 +2550,7 @@ type StackResource struct {
// Type of resource. (For more information, go to AWS Resource Types Reference
// (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html)
// in the AWS CloudFormation User Guide.)
- ResourceType *string `type:"string" required:"true"`
+ ResourceType *string `min:"1" type:"string" required:"true"`
// Unique identifier of the stack.
StackId *string `type:"string"`
@@ -1856,7 +2577,7 @@ type StackResourceDetail struct {
_ struct{} `type:"structure"`
// User defined description associated with the resource.
- Description *string `type:"string"`
+ Description *string `min:"1" type:"string"`
// Time the status was updated.
LastUpdatedTimestamp *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"`
@@ -1882,7 +2603,7 @@ type StackResourceDetail struct {
// Type of resource. (For more information, go to AWS Resource Types Reference
// (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html)
// in the AWS CloudFormation User Guide.)
- ResourceType *string `type:"string" required:"true"`
+ ResourceType *string `min:"1" type:"string" required:"true"`
// Unique identifier of the stack.
StackId *string `type:"string"`
@@ -1924,7 +2645,7 @@ type StackResourceSummary struct {
// Type of resource. (For more information, go to AWS Resource Types Reference
// (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html)
// in the AWS CloudFormation User Guide.)
- ResourceType *string `type:"string" required:"true"`
+ ResourceType *string `min:"1" type:"string" required:"true"`
}
// String returns the string representation
@@ -1977,9 +2698,8 @@ func (s StackSummary) GoString() string {
return s.String()
}
-// The Tag type is used by CreateStack in the Tags parameter. It allows you
-// to specify a key-value pair that can be used to store information related
-// to cost allocation for an AWS CloudFormation stack.
+// The Tag type enables you to specify a key-value pair that can be used to
+// store information about an AWS CloudFormation stack.
type Tag struct {
_ struct{} `type:"structure"`
@@ -2011,7 +2731,7 @@ type TemplateParameter struct {
DefaultValue *string `type:"string"`
// User defined description associated with the parameter.
- Description *string `type:"string"`
+ Description *string `min:"1" type:"string"`
// Flag indicating whether the parameter should be displayed as plain text in
// logs and UIs.
@@ -2036,11 +2756,13 @@ type UpdateStackInput struct {
_ struct{} `type:"structure"`
// A list of capabilities that you must specify before AWS CloudFormation can
- // create or update certain stacks. Some stack templates might include resources
- // that can affect permissions in your AWS account. For those stacks, you must
- // explicitly acknowledge their capabilities by specifying this parameter. Currently,
- // the only valid value is CAPABILITY_IAM, which is required for the following
- // resources: AWS::IAM::AccessKey (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-accesskey.html),
+ // update certain stacks. Some stack templates might include resources that
+ // can affect permissions in your AWS account, for example, by creating new
+ // AWS Identity and Access Management (IAM) users. For those stacks, you must
+ // explicitly acknowledge their capabilities by specifying this parameter.
+ //
+ // Currently, the only valid value is CAPABILITY_IAM, which is required for
+ // the following resources: AWS::IAM::AccessKey (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-accesskey.html),
// AWS::IAM::Group (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-group.html),
// AWS::IAM::InstanceProfile (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-instanceprofile.html),
// AWS::IAM::Policy (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-policy.html),
@@ -2048,8 +2770,9 @@ type UpdateStackInput struct {
// AWS::IAM::User (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-user.html),
// and AWS::IAM::UserToGroupAddition (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-addusertogroup.html).
// If your stack template contains these resources, we recommend that you review
- // any permissions associated with them. If you don't specify this parameter,
- // this action returns an InsufficientCapabilities error.
+ // all permissions associated with them and edit their permissions if necessary.
+ // If your template contains any of the listed resources and you don't specify
+ // this parameter, this action returns an InsufficientCapabilities error.
Capabilities []*string `type:"list"`
// Amazon Simple Notification Service topic Amazon Resource Names (ARNs) that
@@ -2222,7 +2945,7 @@ type ValidateTemplateOutput struct {
CapabilitiesReason *string `type:"string"`
// The description found within the template.
- Description *string `type:"string"`
+ Description *string `min:"1" type:"string"`
// A list of TemplateParameter structures.
Parameters []*TemplateParameter `type:"list"`
@@ -2243,6 +2966,53 @@ const (
CapabilityCapabilityIam = "CAPABILITY_IAM"
)
+const (
+ // @enum ChangeAction
+ ChangeActionAdd = "Add"
+ // @enum ChangeAction
+ ChangeActionModify = "Modify"
+ // @enum ChangeAction
+ ChangeActionRemove = "Remove"
+)
+
+const (
+ // @enum ChangeSetStatus
+ ChangeSetStatusCreatePending = "CREATE_PENDING"
+ // @enum ChangeSetStatus
+ ChangeSetStatusCreateInProgress = "CREATE_IN_PROGRESS"
+ // @enum ChangeSetStatus
+ ChangeSetStatusCreateComplete = "CREATE_COMPLETE"
+ // @enum ChangeSetStatus
+ ChangeSetStatusDeleteComplete = "DELETE_COMPLETE"
+ // @enum ChangeSetStatus
+ ChangeSetStatusFailed = "FAILED"
+)
+
+const (
+ // @enum ChangeSource
+ ChangeSourceResourceReference = "ResourceReference"
+ // @enum ChangeSource
+ ChangeSourceParameterReference = "ParameterReference"
+ // @enum ChangeSource
+ ChangeSourceResourceAttribute = "ResourceAttribute"
+ // @enum ChangeSource
+ ChangeSourceDirectModification = "DirectModification"
+ // @enum ChangeSource
+ ChangeSourceAutomatic = "Automatic"
+)
+
+const (
+ // @enum ChangeType
+ ChangeTypeResource = "Resource"
+)
+
+const (
+ // @enum EvaluationType
+ EvaluationTypeStatic = "Static"
+ // @enum EvaluationType
+ EvaluationTypeDynamic = "Dynamic"
+)
+
const (
// @enum OnFailure
OnFailureDoNothing = "DO_NOTHING"
@@ -2252,6 +3022,39 @@ const (
OnFailureDelete = "DELETE"
)
+const (
+ // @enum Replacement
+ ReplacementTrue = "True"
+ // @enum Replacement
+ ReplacementFalse = "False"
+ // @enum Replacement
+ ReplacementConditional = "Conditional"
+)
+
+const (
+ // @enum RequiresRecreation
+ RequiresRecreationNever = "Never"
+ // @enum RequiresRecreation
+ RequiresRecreationConditionally = "Conditionally"
+ // @enum RequiresRecreation
+ RequiresRecreationAlways = "Always"
+)
+
+const (
+ // @enum ResourceAttribute
+ ResourceAttributeProperties = "Properties"
+ // @enum ResourceAttribute
+ ResourceAttributeMetadata = "Metadata"
+ // @enum ResourceAttribute
+ ResourceAttributeCreationPolicy = "CreationPolicy"
+ // @enum ResourceAttribute
+ ResourceAttributeUpdatePolicy = "UpdatePolicy"
+ // @enum ResourceAttribute
+ ResourceAttributeDeletionPolicy = "DeletionPolicy"
+ // @enum ResourceAttribute
+ ResourceAttributeTags = "Tags"
+)
+
const (
// @enum ResourceSignalStatus
ResourceSignalStatusSuccess = "SUCCESS"
diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudformation/waiters.go b/vendor/github.com/aws/aws-sdk-go/service/cloudformation/waiters.go
index cd90e41d7af8..f8ca675144dd 100644
--- a/vendor/github.com/aws/aws-sdk-go/service/cloudformation/waiters.go
+++ b/vendor/github.com/aws/aws-sdk-go/service/cloudformation/waiters.go
@@ -10,7 +10,7 @@ func (c *CloudFormation) WaitUntilStackCreateComplete(input *DescribeStacksInput
waiterCfg := waiter.Config{
Operation: "DescribeStacks",
Delay: 30,
- MaxAttempts: 50,
+ MaxAttempts: 120,
Acceptors: []waiter.WaitAcceptor{
{
State: "success",
@@ -24,6 +24,48 @@ func (c *CloudFormation) WaitUntilStackCreateComplete(input *DescribeStacksInput
Argument: "Stacks[].StackStatus",
Expected: "CREATE_FAILED",
},
+ {
+ State: "failure",
+ Matcher: "pathAny",
+ Argument: "Stacks[].StackStatus",
+ Expected: "DELETE_COMPLETE",
+ },
+ {
+ State: "failure",
+ Matcher: "pathAny",
+ Argument: "Stacks[].StackStatus",
+ Expected: "DELETE_IN_PROGRESS",
+ },
+ {
+ State: "failure",
+ Matcher: "pathAny",
+ Argument: "Stacks[].StackStatus",
+ Expected: "DELETE_FAILED",
+ },
+ {
+ State: "failure",
+ Matcher: "pathAny",
+ Argument: "Stacks[].StackStatus",
+ Expected: "ROLLBACK_COMPLETE",
+ },
+ {
+ State: "failure",
+ Matcher: "pathAny",
+ Argument: "Stacks[].StackStatus",
+ Expected: "ROLLBACK_FAILED",
+ },
+ {
+ State: "failure",
+ Matcher: "pathAny",
+ Argument: "Stacks[].StackStatus",
+ Expected: "ROLLBACK_IN_PROGRESS",
+ },
+ {
+ State: "failure",
+ Matcher: "error",
+ Argument: "",
+ Expected: "ValidationError",
+ },
},
}
@@ -39,7 +81,7 @@ func (c *CloudFormation) WaitUntilStackDeleteComplete(input *DescribeStacksInput
waiterCfg := waiter.Config{
Operation: "DescribeStacks",
Delay: 30,
- MaxAttempts: 25,
+ MaxAttempts: 120,
Acceptors: []waiter.WaitAcceptor{
{
State: "success",
@@ -59,6 +101,113 @@ func (c *CloudFormation) WaitUntilStackDeleteComplete(input *DescribeStacksInput
Argument: "Stacks[].StackStatus",
Expected: "DELETE_FAILED",
},
+ {
+ State: "failure",
+ Matcher: "pathAny",
+ Argument: "Stacks[].StackStatus",
+ Expected: "CREATE_COMPLETE",
+ },
+ {
+ State: "failure",
+ Matcher: "pathAny",
+ Argument: "Stacks[].StackStatus",
+ Expected: "CREATE_FAILED",
+ },
+ {
+ State: "failure",
+ Matcher: "pathAny",
+ Argument: "Stacks[].StackStatus",
+ Expected: "CREATE_IN_PROGRESS",
+ },
+ {
+ State: "failure",
+ Matcher: "pathAny",
+ Argument: "Stacks[].StackStatus",
+ Expected: "ROLLBACK_COMPLETE",
+ },
+ {
+ State: "failure",
+ Matcher: "pathAny",
+ Argument: "Stacks[].StackStatus",
+ Expected: "ROLLBACK_FAILED",
+ },
+ {
+ State: "failure",
+ Matcher: "pathAny",
+ Argument: "Stacks[].StackStatus",
+ Expected: "ROLLBACK_IN_PROGRESS",
+ },
+ {
+ State: "failure",
+ Matcher: "pathAny",
+ Argument: "Stacks[].StackStatus",
+ Expected: "UPDATE_COMPLETE",
+ },
+ {
+ State: "failure",
+ Matcher: "pathAny",
+ Argument: "Stacks[].StackStatus",
+ Expected: "UPDATE_COMPLETE_CLEANUP_IN_PROGRESS",
+ },
+ {
+ State: "failure",
+ Matcher: "pathAny",
+ Argument: "Stacks[].StackStatus",
+ Expected: "UPDATE_IN_PROGRESS",
+ },
+ {
+ State: "failure",
+ Matcher: "pathAny",
+ Argument: "Stacks[].StackStatus",
+ Expected: "UPDATE_ROLLBACK_COMPLETE",
+ },
+ {
+ State: "failure",
+ Matcher: "pathAny",
+ Argument: "Stacks[].StackStatus",
+ Expected: "UPDATE_ROLLBACK_COMPLETE_CLEANUP_IN_PROGRESS",
+ },
+ {
+ State: "failure",
+ Matcher: "pathAny",
+ Argument: "Stacks[].StackStatus",
+ Expected: "UPDATE_ROLLBACK_FAILED",
+ },
+ {
+ State: "failure",
+ Matcher: "pathAny",
+ Argument: "Stacks[].StackStatus",
+ Expected: "UPDATE_ROLLBACK_IN_PROGRESS",
+ },
+ },
+ }
+
+ w := waiter.Waiter{
+ Client: c,
+ Input: input,
+ Config: waiterCfg,
+ }
+ return w.Wait()
+}
+
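+// WaitUntilStackExists polls DescribeStacks every 5 seconds, up to 20 attempts,
+// retrying on ValidationError until the stack becomes visible.
+//
+// Usage sketch (editorial illustration, not generated SDK code), assuming svc
+// is an initialized *CloudFormation client and the stack name is hypothetical:
+//
+//	err := svc.WaitUntilStackExists(&DescribeStacksInput{
+//		StackName: aws.String("example-stack"),
+//	})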
+func (c *CloudFormation) WaitUntilStackExists(input *DescribeStacksInput) error {
+ waiterCfg := waiter.Config{
+ Operation: "DescribeStacks",
+ Delay: 5,
+ MaxAttempts: 20,
+ Acceptors: []waiter.WaitAcceptor{
+ {
+ State: "success",
+ Matcher: "status",
+ Argument: "",
+ Expected: 200,
+ },
+ {
+ State: "retry",
+ Matcher: "error",
+ Argument: "",
+ Expected: "ValidationError",
+ },
},
}
@@ -74,7 +223,7 @@ func (c *CloudFormation) WaitUntilStackUpdateComplete(input *DescribeStacksInput
waiterCfg := waiter.Config{
Operation: "DescribeStacks",
Delay: 30,
- MaxAttempts: 5,
+ MaxAttempts: 120,
Acceptors: []waiter.WaitAcceptor{
{
State: "success",
@@ -88,6 +237,36 @@ func (c *CloudFormation) WaitUntilStackUpdateComplete(input *DescribeStacksInput
Argument: "Stacks[].StackStatus",
Expected: "UPDATE_FAILED",
},
+ {
+ State: "failure",
+ Matcher: "pathAny",
+ Argument: "Stacks[].StackStatus",
+ Expected: "UPDATE_ROLLBACK_COMPLETE",
+ },
+ {
+ State: "failure",
+ Matcher: "pathAny",
+ Argument: "Stacks[].StackStatus",
+ Expected: "UPDATE_ROLLBACK_FAILED",
+ },
+ {
+ State: "failure",
+ Matcher: "pathAny",
+ Argument: "Stacks[].StackStatus",
+ Expected: "UPDATE_ROLLBACK_COMPLETE_CLEANUP_IN_PROGRESS",
+ },
+ {
+ State: "failure",
+ Matcher: "pathAny",
+ Argument: "Stacks[].StackStatus",
+ Expected: "UPDATE_ROLLBACK_IN_PROGRESS",
+ },
+ {
+ State: "failure",
+ Matcher: "error",
+ Argument: "",
+ Expected: "ValidationError",
+ },
},
}
diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudfront/api.go b/vendor/github.com/aws/aws-sdk-go/service/cloudfront/api.go
new file mode 100644
index 000000000000..cfdb35577d71
--- /dev/null
+++ b/vendor/github.com/aws/aws-sdk-go/service/cloudfront/api.go
@@ -0,0 +1,3450 @@
+// THIS FILE IS AUTOMATICALLY GENERATED. DO NOT EDIT.
+
+// Package cloudfront provides a client for Amazon CloudFront.
+package cloudfront
+
+import (
+ "time"
+
+ "github.com/aws/aws-sdk-go/aws/awsutil"
+ "github.com/aws/aws-sdk-go/aws/request"
+ "github.com/aws/aws-sdk-go/private/protocol"
+ "github.com/aws/aws-sdk-go/private/protocol/restxml"
+)
+
+const opCreateCloudFrontOriginAccessIdentity = "CreateCloudFrontOriginAccessIdentity2016_01_28"
+
+// CreateCloudFrontOriginAccessIdentityRequest generates a request for the CreateCloudFrontOriginAccessIdentity operation.
+func (c *CloudFront) CreateCloudFrontOriginAccessIdentityRequest(input *CreateCloudFrontOriginAccessIdentityInput) (req *request.Request, output *CreateCloudFrontOriginAccessIdentityOutput) {
+ op := &request.Operation{
+ Name: opCreateCloudFrontOriginAccessIdentity,
+ HTTPMethod: "POST",
+ HTTPPath: "/2016-01-28/origin-access-identity/cloudfront",
+ }
+
+ if input == nil {
+ input = &CreateCloudFrontOriginAccessIdentityInput{}
+ }
+
+ req = c.newRequest(op, input, output)
+ output = &CreateCloudFrontOriginAccessIdentityOutput{}
+ req.Data = output
+ return
+}
+
+// Create a new origin access identity.
+func (c *CloudFront) CreateCloudFrontOriginAccessIdentity(input *CreateCloudFrontOriginAccessIdentityInput) (*CreateCloudFrontOriginAccessIdentityOutput, error) {
+ req, out := c.CreateCloudFrontOriginAccessIdentityRequest(input)
+ err := req.Send()
+ return out, err
+}
+
+const opCreateDistribution = "CreateDistribution2016_01_28"
+
+// CreateDistributionRequest generates a request for the CreateDistribution operation.
+func (c *CloudFront) CreateDistributionRequest(input *CreateDistributionInput) (req *request.Request, output *CreateDistributionOutput) {
+ op := &request.Operation{
+ Name: opCreateDistribution,
+ HTTPMethod: "POST",
+ HTTPPath: "/2016-01-28/distribution",
+ }
+
+ if input == nil {
+ input = &CreateDistributionInput{}
+ }
+
+ req = c.newRequest(op, input, output)
+ output = &CreateDistributionOutput{}
+ req.Data = output
+ return
+}
+
+// Create a new distribution.
+func (c *CloudFront) CreateDistribution(input *CreateDistributionInput) (*CreateDistributionOutput, error) {
+ req, out := c.CreateDistributionRequest(input)
+ err := req.Send()
+ return out, err
+}
+
+const opCreateInvalidation = "CreateInvalidation2016_01_28"
+
+// CreateInvalidationRequest generates a request for the CreateInvalidation operation.
+func (c *CloudFront) CreateInvalidationRequest(input *CreateInvalidationInput) (req *request.Request, output *CreateInvalidationOutput) {
+ op := &request.Operation{
+ Name: opCreateInvalidation,
+ HTTPMethod: "POST",
+ HTTPPath: "/2016-01-28/distribution/{DistributionId}/invalidation",
+ }
+
+ if input == nil {
+ input = &CreateInvalidationInput{}
+ }
+
+ req = c.newRequest(op, input, output)
+ output = &CreateInvalidationOutput{}
+ req.Data = output
+ return
+}
+
+// Create a new invalidation.
+func (c *CloudFront) CreateInvalidation(input *CreateInvalidationInput) (*CreateInvalidationOutput, error) {
+ req, out := c.CreateInvalidationRequest(input)
+ err := req.Send()
+ return out, err
+}
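+
+// Usage sketch (editorial illustration, not generated SDK code): invalidating
+// a cached path on a distribution. Assumes cf is an initialized *CloudFront
+// client; the distribution ID and caller reference are hypothetical, and the
+// caller reference must be unique for each new invalidation request.
+//
+//	_, err := cf.CreateInvalidation(&CreateInvalidationInput{
+//		DistributionId: aws.String("EXAMPLEID"),
+//		InvalidationBatch: &InvalidationBatch{
+//			CallerReference: aws.String("example-ref-0001"),
+//			Paths: &Paths{
+//				Quantity: aws.Int64(1),
+//				Items:    []*string{aws.String("/index.html")},
+//			},
+//		},
+//	})
+//	if err != nil {
+//		log.Fatal(err)
+//	}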
+
+const opCreateStreamingDistribution = "CreateStreamingDistribution2016_01_28"
+
+// CreateStreamingDistributionRequest generates a request for the CreateStreamingDistribution operation.
+func (c *CloudFront) CreateStreamingDistributionRequest(input *CreateStreamingDistributionInput) (req *request.Request, output *CreateStreamingDistributionOutput) {
+ op := &request.Operation{
+ Name: opCreateStreamingDistribution,
+ HTTPMethod: "POST",
+ HTTPPath: "/2016-01-28/streaming-distribution",
+ }
+
+ if input == nil {
+ input = &CreateStreamingDistributionInput{}
+ }
+
+ req = c.newRequest(op, input, output)
+ output = &CreateStreamingDistributionOutput{}
+ req.Data = output
+ return
+}
+
+// Create a new streaming distribution.
+func (c *CloudFront) CreateStreamingDistribution(input *CreateStreamingDistributionInput) (*CreateStreamingDistributionOutput, error) {
+ req, out := c.CreateStreamingDistributionRequest(input)
+ err := req.Send()
+ return out, err
+}
+
+const opDeleteCloudFrontOriginAccessIdentity = "DeleteCloudFrontOriginAccessIdentity2016_01_28"
+
+// DeleteCloudFrontOriginAccessIdentityRequest generates a request for the DeleteCloudFrontOriginAccessIdentity operation.
+func (c *CloudFront) DeleteCloudFrontOriginAccessIdentityRequest(input *DeleteCloudFrontOriginAccessIdentityInput) (req *request.Request, output *DeleteCloudFrontOriginAccessIdentityOutput) {
+ op := &request.Operation{
+ Name: opDeleteCloudFrontOriginAccessIdentity,
+ HTTPMethod: "DELETE",
+ HTTPPath: "/2016-01-28/origin-access-identity/cloudfront/{Id}",
+ }
+
+ if input == nil {
+ input = &DeleteCloudFrontOriginAccessIdentityInput{}
+ }
+
+ req = c.newRequest(op, input, output)
+ req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler)
+ req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler)
+ output = &DeleteCloudFrontOriginAccessIdentityOutput{}
+ req.Data = output
+ return
+}
+
+// Delete an origin access identity.
+func (c *CloudFront) DeleteCloudFrontOriginAccessIdentity(input *DeleteCloudFrontOriginAccessIdentityInput) (*DeleteCloudFrontOriginAccessIdentityOutput, error) {
+ req, out := c.DeleteCloudFrontOriginAccessIdentityRequest(input)
+ err := req.Send()
+ return out, err
+}
+
+const opDeleteDistribution = "DeleteDistribution2016_01_28"
+
+// DeleteDistributionRequest generates a request for the DeleteDistribution operation.
+func (c *CloudFront) DeleteDistributionRequest(input *DeleteDistributionInput) (req *request.Request, output *DeleteDistributionOutput) {
+ op := &request.Operation{
+ Name: opDeleteDistribution,
+ HTTPMethod: "DELETE",
+ HTTPPath: "/2016-01-28/distribution/{Id}",
+ }
+
+ if input == nil {
+ input = &DeleteDistributionInput{}
+ }
+
+ req = c.newRequest(op, input, output)
+ req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler)
+ req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler)
+ output = &DeleteDistributionOutput{}
+ req.Data = output
+ return
+}
+
+// Delete a distribution.
+func (c *CloudFront) DeleteDistribution(input *DeleteDistributionInput) (*DeleteDistributionOutput, error) {
+ req, out := c.DeleteDistributionRequest(input)
+ err := req.Send()
+ return out, err
+}
+
+const opDeleteStreamingDistribution = "DeleteStreamingDistribution2016_01_28"
+
+// DeleteStreamingDistributionRequest generates a request for the DeleteStreamingDistribution operation.
+func (c *CloudFront) DeleteStreamingDistributionRequest(input *DeleteStreamingDistributionInput) (req *request.Request, output *DeleteStreamingDistributionOutput) {
+ op := &request.Operation{
+ Name: opDeleteStreamingDistribution,
+ HTTPMethod: "DELETE",
+ HTTPPath: "/2016-01-28/streaming-distribution/{Id}",
+ }
+
+ if input == nil {
+ input = &DeleteStreamingDistributionInput{}
+ }
+
+ req = c.newRequest(op, input, output)
+ req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler)
+ req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler)
+ output = &DeleteStreamingDistributionOutput{}
+ req.Data = output
+ return
+}
+
+// Delete a streaming distribution.
+func (c *CloudFront) DeleteStreamingDistribution(input *DeleteStreamingDistributionInput) (*DeleteStreamingDistributionOutput, error) {
+ req, out := c.DeleteStreamingDistributionRequest(input)
+ err := req.Send()
+ return out, err
+}
+
+const opGetCloudFrontOriginAccessIdentity = "GetCloudFrontOriginAccessIdentity2016_01_28"
+
+// GetCloudFrontOriginAccessIdentityRequest generates a request for the GetCloudFrontOriginAccessIdentity operation.
+func (c *CloudFront) GetCloudFrontOriginAccessIdentityRequest(input *GetCloudFrontOriginAccessIdentityInput) (req *request.Request, output *GetCloudFrontOriginAccessIdentityOutput) {
+ op := &request.Operation{
+ Name: opGetCloudFrontOriginAccessIdentity,
+ HTTPMethod: "GET",
+ HTTPPath: "/2016-01-28/origin-access-identity/cloudfront/{Id}",
+ }
+
+ if input == nil {
+ input = &GetCloudFrontOriginAccessIdentityInput{}
+ }
+
+ req = c.newRequest(op, input, output)
+ output = &GetCloudFrontOriginAccessIdentityOutput{}
+ req.Data = output
+ return
+}
+
+// Get the information about an origin access identity.
+func (c *CloudFront) GetCloudFrontOriginAccessIdentity(input *GetCloudFrontOriginAccessIdentityInput) (*GetCloudFrontOriginAccessIdentityOutput, error) {
+ req, out := c.GetCloudFrontOriginAccessIdentityRequest(input)
+ err := req.Send()
+ return out, err
+}
+
+const opGetCloudFrontOriginAccessIdentityConfig = "GetCloudFrontOriginAccessIdentityConfig2016_01_28"
+
+// GetCloudFrontOriginAccessIdentityConfigRequest generates a request for the GetCloudFrontOriginAccessIdentityConfig operation.
+func (c *CloudFront) GetCloudFrontOriginAccessIdentityConfigRequest(input *GetCloudFrontOriginAccessIdentityConfigInput) (req *request.Request, output *GetCloudFrontOriginAccessIdentityConfigOutput) {
+ op := &request.Operation{
+ Name: opGetCloudFrontOriginAccessIdentityConfig,
+ HTTPMethod: "GET",
+ HTTPPath: "/2016-01-28/origin-access-identity/cloudfront/{Id}/config",
+ }
+
+ if input == nil {
+ input = &GetCloudFrontOriginAccessIdentityConfigInput{}
+ }
+
+ req = c.newRequest(op, input, output)
+ output = &GetCloudFrontOriginAccessIdentityConfigOutput{}
+ req.Data = output
+ return
+}
+
+// Get the configuration information about an origin access identity.
+func (c *CloudFront) GetCloudFrontOriginAccessIdentityConfig(input *GetCloudFrontOriginAccessIdentityConfigInput) (*GetCloudFrontOriginAccessIdentityConfigOutput, error) {
+ req, out := c.GetCloudFrontOriginAccessIdentityConfigRequest(input)
+ err := req.Send()
+ return out, err
+}
+
+const opGetDistribution = "GetDistribution2016_01_28"
+
+// GetDistributionRequest generates a request for the GetDistribution operation.
+func (c *CloudFront) GetDistributionRequest(input *GetDistributionInput) (req *request.Request, output *GetDistributionOutput) {
+ op := &request.Operation{
+ Name: opGetDistribution,
+ HTTPMethod: "GET",
+ HTTPPath: "/2016-01-28/distribution/{Id}",
+ }
+
+ if input == nil {
+ input = &GetDistributionInput{}
+ }
+
+ req = c.newRequest(op, input, output)
+ output = &GetDistributionOutput{}
+ req.Data = output
+ return
+}
+
+// Get the information about a distribution.
+func (c *CloudFront) GetDistribution(input *GetDistributionInput) (*GetDistributionOutput, error) {
+ req, out := c.GetDistributionRequest(input)
+ err := req.Send()
+ return out, err
+}
+
+const opGetDistributionConfig = "GetDistributionConfig2016_01_28"
+
+// GetDistributionConfigRequest generates a request for the GetDistributionConfig operation.
+func (c *CloudFront) GetDistributionConfigRequest(input *GetDistributionConfigInput) (req *request.Request, output *GetDistributionConfigOutput) {
+ op := &request.Operation{
+ Name: opGetDistributionConfig,
+ HTTPMethod: "GET",
+ HTTPPath: "/2016-01-28/distribution/{Id}/config",
+ }
+
+ if input == nil {
+ input = &GetDistributionConfigInput{}
+ }
+
+ req = c.newRequest(op, input, output)
+ output = &GetDistributionConfigOutput{}
+ req.Data = output
+ return
+}
+
+// Get the configuration information about a distribution.
+func (c *CloudFront) GetDistributionConfig(input *GetDistributionConfigInput) (*GetDistributionConfigOutput, error) {
+ req, out := c.GetDistributionConfigRequest(input)
+ err := req.Send()
+ return out, err
+}
+
+const opGetInvalidation = "GetInvalidation2016_01_28"
+
+// GetInvalidationRequest generates a request for the GetInvalidation operation.
+func (c *CloudFront) GetInvalidationRequest(input *GetInvalidationInput) (req *request.Request, output *GetInvalidationOutput) {
+ op := &request.Operation{
+ Name: opGetInvalidation,
+ HTTPMethod: "GET",
+ HTTPPath: "/2016-01-28/distribution/{DistributionId}/invalidation/{Id}",
+ }
+
+ if input == nil {
+ input = &GetInvalidationInput{}
+ }
+
+ req = c.newRequest(op, input, output)
+ output = &GetInvalidationOutput{}
+ req.Data = output
+ return
+}
+
+// Get the information about an invalidation.
+func (c *CloudFront) GetInvalidation(input *GetInvalidationInput) (*GetInvalidationOutput, error) {
+ req, out := c.GetInvalidationRequest(input)
+ err := req.Send()
+ return out, err
+}
+
+const opGetStreamingDistribution = "GetStreamingDistribution2016_01_28"
+
+// GetStreamingDistributionRequest generates a request for the GetStreamingDistribution operation.
+func (c *CloudFront) GetStreamingDistributionRequest(input *GetStreamingDistributionInput) (req *request.Request, output *GetStreamingDistributionOutput) {
+ op := &request.Operation{
+ Name: opGetStreamingDistribution,
+ HTTPMethod: "GET",
+ HTTPPath: "/2016-01-28/streaming-distribution/{Id}",
+ }
+
+ if input == nil {
+ input = &GetStreamingDistributionInput{}
+ }
+
+ req = c.newRequest(op, input, output)
+ output = &GetStreamingDistributionOutput{}
+ req.Data = output
+ return
+}
+
+// Get the information about a streaming distribution.
+func (c *CloudFront) GetStreamingDistribution(input *GetStreamingDistributionInput) (*GetStreamingDistributionOutput, error) {
+ req, out := c.GetStreamingDistributionRequest(input)
+ err := req.Send()
+ return out, err
+}
+
+const opGetStreamingDistributionConfig = "GetStreamingDistributionConfig2016_01_28"
+
+// GetStreamingDistributionConfigRequest generates a request for the GetStreamingDistributionConfig operation.
+func (c *CloudFront) GetStreamingDistributionConfigRequest(input *GetStreamingDistributionConfigInput) (req *request.Request, output *GetStreamingDistributionConfigOutput) {
+ op := &request.Operation{
+ Name: opGetStreamingDistributionConfig,
+ HTTPMethod: "GET",
+ HTTPPath: "/2016-01-28/streaming-distribution/{Id}/config",
+ }
+
+ if input == nil {
+ input = &GetStreamingDistributionConfigInput{}
+ }
+
+ req = c.newRequest(op, input, output)
+ output = &GetStreamingDistributionConfigOutput{}
+ req.Data = output
+ return
+}
+
+// Get the configuration information about a streaming distribution.
+func (c *CloudFront) GetStreamingDistributionConfig(input *GetStreamingDistributionConfigInput) (*GetStreamingDistributionConfigOutput, error) {
+ req, out := c.GetStreamingDistributionConfigRequest(input)
+ err := req.Send()
+ return out, err
+}
+
+const opListCloudFrontOriginAccessIdentities = "ListCloudFrontOriginAccessIdentities2016_01_28"
+
+// ListCloudFrontOriginAccessIdentitiesRequest generates a request for the ListCloudFrontOriginAccessIdentities operation.
+func (c *CloudFront) ListCloudFrontOriginAccessIdentitiesRequest(input *ListCloudFrontOriginAccessIdentitiesInput) (req *request.Request, output *ListCloudFrontOriginAccessIdentitiesOutput) {
+ op := &request.Operation{
+ Name: opListCloudFrontOriginAccessIdentities,
+ HTTPMethod: "GET",
+ HTTPPath: "/2016-01-28/origin-access-identity/cloudfront",
+ Paginator: &request.Paginator{
+ InputTokens: []string{"Marker"},
+ OutputTokens: []string{"CloudFrontOriginAccessIdentityList.NextMarker"},
+ LimitToken: "MaxItems",
+ TruncationToken: "CloudFrontOriginAccessIdentityList.IsTruncated",
+ },
+ }
+
+ if input == nil {
+ input = &ListCloudFrontOriginAccessIdentitiesInput{}
+ }
+
+ req = c.newRequest(op, input, output)
+ output = &ListCloudFrontOriginAccessIdentitiesOutput{}
+ req.Data = output
+ return
+}
+
+// List origin access identities.
+func (c *CloudFront) ListCloudFrontOriginAccessIdentities(input *ListCloudFrontOriginAccessIdentitiesInput) (*ListCloudFrontOriginAccessIdentitiesOutput, error) {
+ req, out := c.ListCloudFrontOriginAccessIdentitiesRequest(input)
+ err := req.Send()
+ return out, err
+}
+
+func (c *CloudFront) ListCloudFrontOriginAccessIdentitiesPages(input *ListCloudFrontOriginAccessIdentitiesInput, fn func(p *ListCloudFrontOriginAccessIdentitiesOutput, lastPage bool) (shouldContinue bool)) error {
+ page, _ := c.ListCloudFrontOriginAccessIdentitiesRequest(input)
+ page.Handlers.Build.PushBack(request.MakeAddToUserAgentFreeFormHandler("Paginator"))
+ return page.EachPage(func(p interface{}, lastPage bool) bool {
+ return fn(p.(*ListCloudFrontOriginAccessIdentitiesOutput), lastPage)
+ })
+}
+
+const opListDistributions = "ListDistributions2016_01_28"
+
+// ListDistributionsRequest generates a request for the ListDistributions operation.
+func (c *CloudFront) ListDistributionsRequest(input *ListDistributionsInput) (req *request.Request, output *ListDistributionsOutput) {
+ op := &request.Operation{
+ Name: opListDistributions,
+ HTTPMethod: "GET",
+ HTTPPath: "/2016-01-28/distribution",
+ Paginator: &request.Paginator{
+ InputTokens: []string{"Marker"},
+ OutputTokens: []string{"DistributionList.NextMarker"},
+ LimitToken: "MaxItems",
+ TruncationToken: "DistributionList.IsTruncated",
+ },
+ }
+
+ if input == nil {
+ input = &ListDistributionsInput{}
+ }
+
+ req = c.newRequest(op, input, output)
+ output = &ListDistributionsOutput{}
+ req.Data = output
+ return
+}
+
+// List distributions.
+func (c *CloudFront) ListDistributions(input *ListDistributionsInput) (*ListDistributionsOutput, error) {
+ req, out := c.ListDistributionsRequest(input)
+ err := req.Send()
+ return out, err
+}
+
+func (c *CloudFront) ListDistributionsPages(input *ListDistributionsInput, fn func(p *ListDistributionsOutput, lastPage bool) (shouldContinue bool)) error {
+ page, _ := c.ListDistributionsRequest(input)
+ page.Handlers.Build.PushBack(request.MakeAddToUserAgentFreeFormHandler("Paginator"))
+ return page.EachPage(func(p interface{}, lastPage bool) bool {
+ return fn(p.(*ListDistributionsOutput), lastPage)
+ })
+}
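+
+// Pagination sketch (illustrative only): ListDistributionsPages invokes the
+// callback once per page and stops when the callback returns false or the last
+// page is reached. Assuming an initialized client svc and that the output
+// exposes the DistributionList type defined later in this file, collecting
+// every distribution ID might look like:
+//
+//   var ids []string
+//   err := svc.ListDistributionsPages(&cloudfront.ListDistributionsInput{},
+//       func(page *cloudfront.ListDistributionsOutput, lastPage bool) bool {
+//           for _, d := range page.DistributionList.Items {
+//               ids = append(ids, aws.StringValue(d.Id))
+//           }
+//           return true // keep paging until the final page
+//       })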
+
+const opListDistributionsByWebACLId = "ListDistributionsByWebACLId2016_01_28"
+
+// ListDistributionsByWebACLIdRequest generates a request for the ListDistributionsByWebACLId operation.
+func (c *CloudFront) ListDistributionsByWebACLIdRequest(input *ListDistributionsByWebACLIdInput) (req *request.Request, output *ListDistributionsByWebACLIdOutput) {
+ op := &request.Operation{
+ Name: opListDistributionsByWebACLId,
+ HTTPMethod: "GET",
+ HTTPPath: "/2016-01-28/distributionsByWebACLId/{WebACLId}",
+ }
+
+ if input == nil {
+ input = &ListDistributionsByWebACLIdInput{}
+ }
+
+ req = c.newRequest(op, input, output)
+ output = &ListDistributionsByWebACLIdOutput{}
+ req.Data = output
+ return
+}
+
+// List the distributions that are associated with a specified AWS WAF web ACL.
+func (c *CloudFront) ListDistributionsByWebACLId(input *ListDistributionsByWebACLIdInput) (*ListDistributionsByWebACLIdOutput, error) {
+ req, out := c.ListDistributionsByWebACLIdRequest(input)
+ err := req.Send()
+ return out, err
+}
+
+const opListInvalidations = "ListInvalidations2016_01_28"
+
+// ListInvalidationsRequest generates a request for the ListInvalidations operation.
+func (c *CloudFront) ListInvalidationsRequest(input *ListInvalidationsInput) (req *request.Request, output *ListInvalidationsOutput) {
+ op := &request.Operation{
+ Name: opListInvalidations,
+ HTTPMethod: "GET",
+ HTTPPath: "/2016-01-28/distribution/{DistributionId}/invalidation",
+ Paginator: &request.Paginator{
+ InputTokens: []string{"Marker"},
+ OutputTokens: []string{"InvalidationList.NextMarker"},
+ LimitToken: "MaxItems",
+ TruncationToken: "InvalidationList.IsTruncated",
+ },
+ }
+
+ if input == nil {
+ input = &ListInvalidationsInput{}
+ }
+
+ req = c.newRequest(op, input, output)
+ output = &ListInvalidationsOutput{}
+ req.Data = output
+ return
+}
+
+// List invalidation batches.
+func (c *CloudFront) ListInvalidations(input *ListInvalidationsInput) (*ListInvalidationsOutput, error) {
+ req, out := c.ListInvalidationsRequest(input)
+ err := req.Send()
+ return out, err
+}
+
+func (c *CloudFront) ListInvalidationsPages(input *ListInvalidationsInput, fn func(p *ListInvalidationsOutput, lastPage bool) (shouldContinue bool)) error {
+ page, _ := c.ListInvalidationsRequest(input)
+ page.Handlers.Build.PushBack(request.MakeAddToUserAgentFreeFormHandler("Paginator"))
+ return page.EachPage(func(p interface{}, lastPage bool) bool {
+ return fn(p.(*ListInvalidationsOutput), lastPage)
+ })
+}
+
+const opListStreamingDistributions = "ListStreamingDistributions2016_01_28"
+
+// ListStreamingDistributionsRequest generates a request for the ListStreamingDistributions operation.
+func (c *CloudFront) ListStreamingDistributionsRequest(input *ListStreamingDistributionsInput) (req *request.Request, output *ListStreamingDistributionsOutput) {
+ op := &request.Operation{
+ Name: opListStreamingDistributions,
+ HTTPMethod: "GET",
+ HTTPPath: "/2016-01-28/streaming-distribution",
+ Paginator: &request.Paginator{
+ InputTokens: []string{"Marker"},
+ OutputTokens: []string{"StreamingDistributionList.NextMarker"},
+ LimitToken: "MaxItems",
+ TruncationToken: "StreamingDistributionList.IsTruncated",
+ },
+ }
+
+ if input == nil {
+ input = &ListStreamingDistributionsInput{}
+ }
+
+ req = c.newRequest(op, input, output)
+ output = &ListStreamingDistributionsOutput{}
+ req.Data = output
+ return
+}
+
+// List streaming distributions.
+func (c *CloudFront) ListStreamingDistributions(input *ListStreamingDistributionsInput) (*ListStreamingDistributionsOutput, error) {
+ req, out := c.ListStreamingDistributionsRequest(input)
+ err := req.Send()
+ return out, err
+}
+
+func (c *CloudFront) ListStreamingDistributionsPages(input *ListStreamingDistributionsInput, fn func(p *ListStreamingDistributionsOutput, lastPage bool) (shouldContinue bool)) error {
+ page, _ := c.ListStreamingDistributionsRequest(input)
+ page.Handlers.Build.PushBack(request.MakeAddToUserAgentFreeFormHandler("Paginator"))
+ return page.EachPage(func(p interface{}, lastPage bool) bool {
+ return fn(p.(*ListStreamingDistributionsOutput), lastPage)
+ })
+}
+
+const opUpdateCloudFrontOriginAccessIdentity = "UpdateCloudFrontOriginAccessIdentity2016_01_28"
+
+// UpdateCloudFrontOriginAccessIdentityRequest generates a request for the UpdateCloudFrontOriginAccessIdentity operation.
+func (c *CloudFront) UpdateCloudFrontOriginAccessIdentityRequest(input *UpdateCloudFrontOriginAccessIdentityInput) (req *request.Request, output *UpdateCloudFrontOriginAccessIdentityOutput) {
+ op := &request.Operation{
+ Name: opUpdateCloudFrontOriginAccessIdentity,
+ HTTPMethod: "PUT",
+ HTTPPath: "/2016-01-28/origin-access-identity/cloudfront/{Id}/config",
+ }
+
+ if input == nil {
+ input = &UpdateCloudFrontOriginAccessIdentityInput{}
+ }
+
+ req = c.newRequest(op, input, output)
+ output = &UpdateCloudFrontOriginAccessIdentityOutput{}
+ req.Data = output
+ return
+}
+
+// Update an origin access identity.
+func (c *CloudFront) UpdateCloudFrontOriginAccessIdentity(input *UpdateCloudFrontOriginAccessIdentityInput) (*UpdateCloudFrontOriginAccessIdentityOutput, error) {
+ req, out := c.UpdateCloudFrontOriginAccessIdentityRequest(input)
+ err := req.Send()
+ return out, err
+}
+
+const opUpdateDistribution = "UpdateDistribution2016_01_28"
+
+// UpdateDistributionRequest generates a request for the UpdateDistribution operation.
+func (c *CloudFront) UpdateDistributionRequest(input *UpdateDistributionInput) (req *request.Request, output *UpdateDistributionOutput) {
+ op := &request.Operation{
+ Name: opUpdateDistribution,
+ HTTPMethod: "PUT",
+ HTTPPath: "/2016-01-28/distribution/{Id}/config",
+ }
+
+ if input == nil {
+ input = &UpdateDistributionInput{}
+ }
+
+ req = c.newRequest(op, input, output)
+ output = &UpdateDistributionOutput{}
+ req.Data = output
+ return
+}
+
+// Update a distribution.
+func (c *CloudFront) UpdateDistribution(input *UpdateDistributionInput) (*UpdateDistributionOutput, error) {
+ req, out := c.UpdateDistributionRequest(input)
+ err := req.Send()
+ return out, err
+}
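+
+// Update flow sketch (illustrative only): CloudFront updates are conditional on
+// the current ETag, so a typical sequence fetches the configuration, changes it,
+// and resubmits it with IfMatch. This assumes UpdateDistributionInput and
+// GetDistributionConfigInput carry Id, IfMatch, DistributionConfig, and ETag
+// fields analogous to the inputs and outputs defined later in this file.
+//
+//   cfgOut, err := svc.GetDistributionConfig(&cloudfront.GetDistributionConfigInput{
+//       Id: aws.String("EDFDVBD632BHDS5"),
+//   })
+//   if err != nil {
+//       log.Fatal(err)
+//   }
+//   cfg := cfgOut.DistributionConfig
+//   cfg.Comment = aws.String("updated comment")
+//   _, err = svc.UpdateDistribution(&cloudfront.UpdateDistributionInput{
+//       Id:                 aws.String("EDFDVBD632BHDS5"),
+//       IfMatch:            cfgOut.ETag,
+//       DistributionConfig: cfg,
+//   })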
+
+const opUpdateStreamingDistribution = "UpdateStreamingDistribution2016_01_28"
+
+// UpdateStreamingDistributionRequest generates a request for the UpdateStreamingDistribution operation.
+func (c *CloudFront) UpdateStreamingDistributionRequest(input *UpdateStreamingDistributionInput) (req *request.Request, output *UpdateStreamingDistributionOutput) {
+ op := &request.Operation{
+ Name: opUpdateStreamingDistribution,
+ HTTPMethod: "PUT",
+ HTTPPath: "/2016-01-28/streaming-distribution/{Id}/config",
+ }
+
+ if input == nil {
+ input = &UpdateStreamingDistributionInput{}
+ }
+
+ req = c.newRequest(op, input, output)
+ output = &UpdateStreamingDistributionOutput{}
+ req.Data = output
+ return
+}
+
+// Update a streaming distribution.
+func (c *CloudFront) UpdateStreamingDistribution(input *UpdateStreamingDistributionInput) (*UpdateStreamingDistributionOutput, error) {
+ req, out := c.UpdateStreamingDistributionRequest(input)
+ err := req.Send()
+ return out, err
+}
+
+// A complex type that lists the AWS accounts, if any, that you included in
+// the TrustedSigners complex type for the default cache behavior or for any
+// of the other cache behaviors for this distribution. These are accounts that
+// you want to allow to create signed URLs for private content.
+type ActiveTrustedSigners struct {
+ _ struct{} `type:"structure"`
+
+ // Each active trusted signer.
+ Enabled *bool `type:"boolean" required:"true"`
+
+ // A complex type that contains one Signer complex type for each unique trusted
+ // signer that is specified in the TrustedSigners complex type, including trusted
+ // signers in the default cache behavior and in all of the other cache behaviors.
+ Items []*Signer `locationNameList:"Signer" type:"list"`
+
+ // The number of unique trusted signers included in all cache behaviors. For
+ // example, if three cache behaviors all list the same three AWS accounts, the
+ // value of Quantity for ActiveTrustedSigners will be 3.
+ Quantity *int64 `type:"integer" required:"true"`
+}
+
+// String returns the string representation
+func (s ActiveTrustedSigners) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ActiveTrustedSigners) GoString() string {
+ return s.String()
+}
+
+// A complex type that contains information about CNAMEs (alternate domain names),
+// if any, for this distribution.
+type Aliases struct {
+ _ struct{} `type:"structure"`
+
+ // Optional: A complex type that contains CNAME elements, if any, for this distribution.
+ // If Quantity is 0, you can omit Items.
+ Items []*string `locationNameList:"CNAME" type:"list"`
+
+ // The number of CNAMEs, if any, for this distribution.
+ Quantity *int64 `type:"integer" required:"true"`
+}
+
+// String returns the string representation
+func (s Aliases) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s Aliases) GoString() string {
+ return s.String()
+}
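+
+// Construction sketch (illustrative only): most list-style types in this API
+// pair an Items slice with a Quantity count that must match len(Items). Two
+// alternate domain names could be expressed as:
+//
+//   aliases := &cloudfront.Aliases{
+//       Quantity: aws.Int64(2),
+//       Items:    []*string{aws.String("www.example.com"), aws.String("example.com")},
+//   }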
+
+// A complex type that controls which HTTP methods CloudFront processes and
+// forwards to your Amazon S3 bucket or your custom origin. There are three
+// choices: - CloudFront forwards only GET and HEAD requests. - CloudFront forwards
+// only GET, HEAD and OPTIONS requests. - CloudFront forwards GET, HEAD, OPTIONS,
+// PUT, PATCH, POST, and DELETE requests. If you pick the third choice, you
+// may need to restrict access to your Amazon S3 bucket or to your custom origin
+// so users can't perform operations that you don't want them to. For example,
+// you may not want users to have permission to delete objects from your origin.
+type AllowedMethods struct {
+ _ struct{} `type:"structure"`
+
+ // A complex type that controls whether CloudFront caches the response to requests
+ // using the specified HTTP methods. There are two choices: - CloudFront caches
+ // responses to GET and HEAD requests. - CloudFront caches responses to GET,
+ // HEAD, and OPTIONS requests. If you pick the second choice for your S3 Origin,
+ // you may need to forward Access-Control-Request-Method, Access-Control-Request-Headers
+ // and Origin headers for the responses to be cached correctly.
+ CachedMethods *CachedMethods `type:"structure"`
+
+ // A complex type that contains the HTTP methods that you want CloudFront to
+ // process and forward to your origin.
+ Items []*string `locationNameList:"Method" type:"list" required:"true"`
+
+ // The number of HTTP methods that you want CloudFront to forward to your origin.
+ // Valid values are 2 (for GET and HEAD requests), 3 (for GET, HEAD and OPTIONS
+ // requests) and 7 (for GET, HEAD, OPTIONS, PUT, PATCH, POST, and DELETE requests).
+ Quantity *int64 `type:"integer" required:"true"`
+}
+
+// String returns the string representation
+func (s AllowedMethods) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s AllowedMethods) GoString() string {
+ return s.String()
+}
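+
+// Construction sketch (illustrative only): per the field docs above, Quantity
+// must be 2, 3, or 7 and match the listed methods. Forwarding GET, HEAD, and
+// OPTIONS while caching only GET and HEAD might look like:
+//
+//   methods := &cloudfront.AllowedMethods{
+//       Quantity: aws.Int64(3),
+//       Items:    []*string{aws.String("GET"), aws.String("HEAD"), aws.String("OPTIONS")},
+//       CachedMethods: &cloudfront.CachedMethods{
+//           Quantity: aws.Int64(2),
+//           Items:    []*string{aws.String("GET"), aws.String("HEAD")},
+//       },
+//   }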
+
+// A complex type that describes how CloudFront processes requests. You can
+// create up to 10 cache behaviors. You must create at least as many cache behaviors
+// (including the default cache behavior) as you have origins if you want CloudFront
+// to distribute objects from all of the origins. Each cache behavior specifies
+// the one origin from which you want CloudFront to get objects. If you have
+// two origins and only the default cache behavior, the default cache behavior
+// will cause CloudFront to get objects from one of the origins, but the other
+// origin will never be used. If you don't want to specify any cache behaviors,
+// include only an empty CacheBehaviors element. Don't include an empty CacheBehavior
+// element, or CloudFront returns a MalformedXML error. To delete all cache
+// behaviors in an existing distribution, update the distribution configuration
+// and include only an empty CacheBehaviors element. To add, change, or remove
+// one or more cache behaviors, update the distribution configuration and specify
+// all of the cache behaviors that you want to include in the updated distribution.
+type CacheBehavior struct {
+ _ struct{} `type:"structure"`
+
+ // A complex type that controls which HTTP methods CloudFront processes and
+ // forwards to your Amazon S3 bucket or your custom origin. There are three
+ // choices: - CloudFront forwards only GET and HEAD requests. - CloudFront forwards
+ // only GET, HEAD and OPTIONS requests. - CloudFront forwards GET, HEAD, OPTIONS,
+ // PUT, PATCH, POST, and DELETE requests. If you pick the third choice, you
+ // may need to restrict access to your Amazon S3 bucket or to your custom origin
+ // so users can't perform operations that you don't want them to. For example,
+ // you may not want users to have permission to delete objects from your origin.
+ AllowedMethods *AllowedMethods `type:"structure"`
+
+ // Whether you want CloudFront to automatically compress content for web requests
+ // that include Accept-Encoding: gzip in the request header. If so, specify
+ // true; if not, specify false. CloudFront compresses files larger than 1000
+ // bytes and less than 1 megabyte for both Amazon S3 and custom origins. When
+ // a CloudFront edge location is unusually busy, some files might not be compressed.
+ // The value of the Content-Type header must be on the list of file types that
+ // CloudFront will compress. For the current list, see Serving Compressed Content
+ // (http://docs.aws.amazon.com/console/cloudfront/compressed-content) in the
+ // Amazon CloudFront Developer Guide. If you configure CloudFront to compress
+ // content, CloudFront removes the ETag response header from the objects that
+ // it compresses. The ETag header indicates that the version in a CloudFront
+ // edge cache is identical to the version on the origin server, but after compression
+ // the two versions are no longer identical. As a result, for compressed objects,
+ // CloudFront can't use the ETag header to determine whether an expired object
+ // in the CloudFront edge cache is still the latest version.
+ Compress *bool `type:"boolean"`
+
+ // If you don't configure your origin to add a Cache-Control max-age directive
+ // or an Expires header, DefaultTTL is the default amount of time (in seconds)
+ // that an object is in a CloudFront cache before CloudFront forwards another
+ // request to your origin to determine whether the object has been updated.
+ // The value that you specify applies only when your origin does not add HTTP
+ // headers such as Cache-Control max-age, Cache-Control s-maxage, and Expires
+ // to objects. You can specify a value from 0 to 3,153,600,000 seconds (100
+ // years).
+ DefaultTTL *int64 `type:"long"`
+
+ // A complex type that specifies how CloudFront handles query strings, cookies
+ // and headers.
+ ForwardedValues *ForwardedValues `type:"structure" required:"true"`
+
+ // The maximum amount of time (in seconds) that an object is in a CloudFront
+ // cache before CloudFront forwards another request to your origin to determine
+ // whether the object has been updated. The value that you specify applies only
+ // when your origin adds HTTP headers such as Cache-Control max-age, Cache-Control
+ // s-maxage, and Expires to objects. You can specify a value from 0 to 3,153,600,000
+ // seconds (100 years).
+ MaxTTL *int64 `type:"long"`
+
+ // The minimum amount of time that you want objects to stay in CloudFront caches
+ // before CloudFront queries your origin to see whether the object has been
+ // updated. You can specify a value from 0 to 3,153,600,000 seconds (100 years).
+ MinTTL *int64 `type:"long" required:"true"`
+
+ // The pattern (for example, images/*.jpg) that specifies which requests you
+ // want this cache behavior to apply to. When CloudFront receives an end-user
+ // request, the requested path is compared with path patterns in the order in
+ // which cache behaviors are listed in the distribution. The path pattern for
+ // the default cache behavior is * and cannot be changed. If the request for
+ // an object does not match the path pattern for any cache behaviors, CloudFront
+ // applies the behavior in the default cache behavior.
+ PathPattern *string `type:"string" required:"true"`
+
+ // Indicates whether you want to distribute media files in Microsoft Smooth
+ // Streaming format using the origin that is associated with this cache behavior.
+ // If so, specify true; if not, specify false.
+ SmoothStreaming *bool `type:"boolean"`
+
+ // The value of ID for the origin that you want CloudFront to route requests
+ // to when a request matches the path pattern either for a cache behavior or
+ // for the default cache behavior.
+ TargetOriginId *string `type:"string" required:"true"`
+
+ // A complex type that specifies the AWS accounts, if any, that you want to
+ // allow to create signed URLs for private content. If you want to require signed
+ // URLs in requests for objects in the target origin that match the PathPattern
+ // for this cache behavior, specify true for Enabled, and specify the applicable
+ // values for Quantity and Items. For more information, go to Using a Signed
+ // URL to Serve Private Content in the Amazon CloudFront Developer Guide. If
+ // you don't want to require signed URLs in requests for objects that match
+ // PathPattern, specify false for Enabled and 0 for Quantity. Omit Items. To
+ // add, change, or remove one or more trusted signers, change Enabled to true
+ // (if it's currently false), change Quantity as applicable, and specify all
+ // of the trusted signers that you want to include in the updated distribution.
+ TrustedSigners *TrustedSigners `type:"structure" required:"true"`
+
+ // Use this element to specify the protocol that users can use to access the
+ // files in the origin specified by TargetOriginId when a request matches the
+ // path pattern in PathPattern. If you want CloudFront to allow end users to
+ // use any available protocol, specify allow-all. If you want CloudFront to
+ // require HTTPS, specify https. If you want CloudFront to respond to an HTTP
+ // request with an HTTP status code of 301 (Moved Permanently) and the HTTPS
+ // URL, specify redirect-to-https. The viewer then resubmits the request using
+ // the HTTPS URL.
+ ViewerProtocolPolicy *string `type:"string" required:"true" enum:"ViewerProtocolPolicy"`
+}
+
+// String returns the string representation
+func (s CacheBehavior) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CacheBehavior) GoString() string {
+ return s.String()
+}
+
+// A complex type that contains zero or more CacheBehavior elements.
+type CacheBehaviors struct {
+ _ struct{} `type:"structure"`
+
+ // Optional: A complex type that contains cache behaviors for this distribution.
+ // If Quantity is 0, you can omit Items.
+ Items []*CacheBehavior `locationNameList:"CacheBehavior" type:"list"`
+
+ // The number of cache behaviors for this distribution.
+ Quantity *int64 `type:"integer" required:"true"`
+}
+
+// String returns the string representation
+func (s CacheBehaviors) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CacheBehaviors) GoString() string {
+ return s.String()
+}
+
+// A complex type that controls whether CloudFront caches the response to requests
+// using the specified HTTP methods. There are two choices: - CloudFront caches
+// responses to GET and HEAD requests. - CloudFront caches responses to GET,
+// HEAD, and OPTIONS requests. If you pick the second choice for your S3 Origin,
+// you may need to forward Access-Control-Request-Method, Access-Control-Request-Headers
+// and Origin headers for the responses to be cached correctly.
+type CachedMethods struct {
+ _ struct{} `type:"structure"`
+
+ // A complex type that contains the HTTP methods that you want CloudFront to
+ // cache responses to.
+ Items []*string `locationNameList:"Method" type:"list" required:"true"`
+
+ // The number of HTTP methods for which you want CloudFront to cache responses.
+ // Valid values are 2 (for caching responses to GET and HEAD requests) and 3
+ // (for caching responses to GET, HEAD, and OPTIONS requests).
+ Quantity *int64 `type:"integer" required:"true"`
+}
+
+// String returns the string representation
+func (s CachedMethods) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CachedMethods) GoString() string {
+ return s.String()
+}
+
+// A complex type that specifies the whitelisted cookies, if any, that you want
+// CloudFront to forward to your origin that is associated with this cache behavior.
+type CookieNames struct {
+ _ struct{} `type:"structure"`
+
+ // Optional: A complex type that contains whitelisted cookies for this cache
+ // behavior. If Quantity is 0, you can omit Items.
+ Items []*string `locationNameList:"Name" type:"list"`
+
+ // The number of whitelisted cookies for this cache behavior.
+ Quantity *int64 `type:"integer" required:"true"`
+}
+
+// String returns the string representation
+func (s CookieNames) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CookieNames) GoString() string {
+ return s.String()
+}
+
+// A complex type that specifies the cookie preferences associated with this
+// cache behavior.
+type CookiePreference struct {
+ _ struct{} `type:"structure"`
+
+ // Use this element to specify whether you want CloudFront to forward cookies
+ // to the origin that is associated with this cache behavior. You can specify
+ // all, none or whitelist. If you choose All, CloudFront forwards all cookies
+ // regardless of how many your application uses.
+ Forward *string `type:"string" required:"true" enum:"ItemSelection"`
+
+ // A complex type that specifies the whitelisted cookies, if any, that you want
+ // CloudFront to forward to your origin that is associated with this cache behavior.
+ WhitelistedNames *CookieNames `type:"structure"`
+}
+
+// String returns the string representation
+func (s CookiePreference) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CookiePreference) GoString() string {
+ return s.String()
+}
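+
+// Construction sketch (illustrative only): forwarding only a named session
+// cookie uses the whitelist mode with that cookie listed in WhitelistedNames:
+//
+//   cookies := &cloudfront.CookiePreference{
+//       Forward: aws.String("whitelist"),
+//       WhitelistedNames: &cloudfront.CookieNames{
+//           Quantity: aws.Int64(1),
+//           Items:    []*string{aws.String("session-id")},
+//       },
+//   }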
+
+// The request to create a new origin access identity.
+type CreateCloudFrontOriginAccessIdentityInput struct {
+ _ struct{} `type:"structure" payload:"CloudFrontOriginAccessIdentityConfig"`
+
+ // The origin access identity's configuration information.
+ CloudFrontOriginAccessIdentityConfig *OriginAccessIdentityConfig `locationName:"CloudFrontOriginAccessIdentityConfig" type:"structure" required:"true"`
+}
+
+// String returns the string representation
+func (s CreateCloudFrontOriginAccessIdentityInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CreateCloudFrontOriginAccessIdentityInput) GoString() string {
+ return s.String()
+}
+
+// The returned result of the corresponding request.
+type CreateCloudFrontOriginAccessIdentityOutput struct {
+ _ struct{} `type:"structure" payload:"CloudFrontOriginAccessIdentity"`
+
+ // The origin access identity's information.
+ CloudFrontOriginAccessIdentity *OriginAccessIdentity `type:"structure"`
+
+ // The current version of the origin access identity created.
+ ETag *string `location:"header" locationName:"ETag" type:"string"`
+
+ // The fully qualified URI of the new origin access identity just created. For
+ // example: https://cloudfront.amazonaws.com/2010-11-01/origin-access-identity/cloudfront/E74FTE3AJFJ256A.
+ Location *string `location:"header" locationName:"Location" type:"string"`
+}
+
+// String returns the string representation
+func (s CreateCloudFrontOriginAccessIdentityOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CreateCloudFrontOriginAccessIdentityOutput) GoString() string {
+ return s.String()
+}
+
+// The request to create a new distribution.
+type CreateDistributionInput struct {
+ _ struct{} `type:"structure" payload:"DistributionConfig"`
+
+ // The distribution's configuration information.
+ DistributionConfig *DistributionConfig `locationName:"DistributionConfig" type:"structure" required:"true"`
+}
+
+// String returns the string representation
+func (s CreateDistributionInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CreateDistributionInput) GoString() string {
+ return s.String()
+}
+
+// The returned result of the corresponding request.
+type CreateDistributionOutput struct {
+ _ struct{} `type:"structure" payload:"Distribution"`
+
+ // The distribution's information.
+ Distribution *Distribution `type:"structure"`
+
+ // The current version of the distribution created.
+ ETag *string `location:"header" locationName:"ETag" type:"string"`
+
+ // The fully qualified URI of the new distribution resource just created. For
+ // example: https://cloudfront.amazonaws.com/2010-11-01/distribution/EDFDVBD632BHDS5.
+ Location *string `location:"header" locationName:"Location" type:"string"`
+}
+
+// String returns the string representation
+func (s CreateDistributionOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CreateDistributionOutput) GoString() string {
+ return s.String()
+}
+
+// The request to create an invalidation.
+type CreateInvalidationInput struct {
+ _ struct{} `type:"structure" payload:"InvalidationBatch"`
+
+ // The distribution's id.
+ DistributionId *string `location:"uri" locationName:"DistributionId" type:"string" required:"true"`
+
+ // The batch information for the invalidation.
+ InvalidationBatch *InvalidationBatch `locationName:"InvalidationBatch" type:"structure" required:"true"`
+}
+
+// String returns the string representation
+func (s CreateInvalidationInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CreateInvalidationInput) GoString() string {
+ return s.String()
+}
+
+// The returned result of the corresponding request.
+type CreateInvalidationOutput struct {
+ _ struct{} `type:"structure" payload:"Invalidation"`
+
+ // The invalidation's information.
+ Invalidation *Invalidation `type:"structure"`
+
+ // The fully qualified URI of the distribution and invalidation batch request,
+ // including the Invalidation ID.
+ Location *string `location:"header" locationName:"Location" type:"string"`
+}
+
+// String returns the string representation
+func (s CreateInvalidationOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CreateInvalidationOutput) GoString() string {
+ return s.String()
+}
+
+// The request to create a new streaming distribution.
+type CreateStreamingDistributionInput struct {
+ _ struct{} `type:"structure" payload:"StreamingDistributionConfig"`
+
+ // The streaming distribution's configuration information.
+ StreamingDistributionConfig *StreamingDistributionConfig `locationName:"StreamingDistributionConfig" type:"structure" required:"true"`
+}
+
+// String returns the string representation
+func (s CreateStreamingDistributionInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CreateStreamingDistributionInput) GoString() string {
+ return s.String()
+}
+
+// The returned result of the corresponding request.
+type CreateStreamingDistributionOutput struct {
+ _ struct{} `type:"structure" payload:"StreamingDistribution"`
+
+ // The current version of the streaming distribution created.
+ ETag *string `location:"header" locationName:"ETag" type:"string"`
+
+ // The fully qualified URI of the new streaming distribution resource just created.
+ // For example: https://cloudfront.amazonaws.com/2010-11-01/streaming-distribution/EGTXBD79H29TRA8.
+ Location *string `location:"header" locationName:"Location" type:"string"`
+
+ // The streaming distribution's information.
+ StreamingDistribution *StreamingDistribution `type:"structure"`
+}
+
+// String returns the string representation
+func (s CreateStreamingDistributionOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CreateStreamingDistributionOutput) GoString() string {
+ return s.String()
+}
+
+// A complex type that describes how you'd prefer CloudFront to respond to requests
+// that result in either a 4xx or 5xx response. You can control whether a custom
+// error page should be displayed, what the desired response code should be
+// for this error page, and how long the error response should be cached by CloudFront.
+// If you don't want to specify any custom error responses, include only an
+// empty CustomErrorResponses element. To delete all custom error responses
+// in an existing distribution, update the distribution configuration and include
+// only an empty CustomErrorResponses element. To add, change, or remove one
+// or more custom error responses, update the distribution configuration and
+// specify all of the custom error responses that you want to include in the
+// updated distribution.
+type CustomErrorResponse struct {
+ _ struct{} `type:"structure"`
+
+ // The minimum amount of time you want HTTP error codes to stay in CloudFront
+ // caches before CloudFront queries your origin to see whether the object has
+ // been updated. You can specify a value from 0 to 31,536,000.
+ ErrorCachingMinTTL *int64 `type:"long"`
+
+ // The 4xx or 5xx HTTP status code that you want to customize. For a list of
+ // HTTP status codes that you can customize, see CloudFront documentation.
+ ErrorCode *int64 `type:"integer" required:"true"`
+
+ // The HTTP status code that you want CloudFront to return with the custom error
+ // page to the viewer. For a list of HTTP status codes that you can replace,
+ // see CloudFront Documentation.
+ ResponseCode *string `type:"string"`
+
+ // The path of the custom error page (for example, /custom_404.html). The path
+ // is relative to the distribution and must begin with a slash (/). If the path
+ // includes any non-ASCII characters or unsafe characters as defined in RFC
+ // 1738 (http://www.ietf.org/rfc/rfc1738.txt), URL encode those characters.
+ // Do not URL encode any other characters in the path, or CloudFront will not
+ // return the custom error page to the viewer.
+ ResponsePagePath *string `type:"string"`
+}
+
+// String returns the string representation
+func (s CustomErrorResponse) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CustomErrorResponse) GoString() string {
+ return s.String()
+}
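+
+// Construction sketch (illustrative only): serving a custom page for 404s and
+// caching that error response for five minutes could be written as:
+//
+//   errResp := &cloudfront.CustomErrorResponse{
+//       ErrorCode:          aws.Int64(404),
+//       ResponseCode:       aws.String("404"),
+//       ResponsePagePath:   aws.String("/custom_404.html"),
+//       ErrorCachingMinTTL: aws.Int64(300),
+//   }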
+
+// A complex type that contains zero or more CustomErrorResponse elements.
+type CustomErrorResponses struct {
+ _ struct{} `type:"structure"`
+
+ // Optional: A complex type that contains custom error responses for this distribution.
+ // If Quantity is 0, you can omit Items.
+ Items []*CustomErrorResponse `locationNameList:"CustomErrorResponse" type:"list"`
+
+ // The number of custom error responses for this distribution.
+ Quantity *int64 `type:"integer" required:"true"`
+}
+
+// String returns the string representation
+func (s CustomErrorResponses) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CustomErrorResponses) GoString() string {
+ return s.String()
+}
+
+// A complex type that contains the list of Custom Headers for each origin.
+type CustomHeaders struct {
+ _ struct{} `type:"structure"`
+
+ // A complex type that contains the custom headers for this Origin.
+ Items []*OriginCustomHeader `locationNameList:"OriginCustomHeader" type:"list"`
+
+ // The number of custom headers for this origin.
+ Quantity *int64 `type:"integer" required:"true"`
+}
+
+// String returns the string representation
+func (s CustomHeaders) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CustomHeaders) GoString() string {
+ return s.String()
+}
+
+// A customer origin.
+type CustomOriginConfig struct {
+ _ struct{} `type:"structure"`
+
+ // The HTTP port the custom origin listens on.
+ HTTPPort *int64 `type:"integer" required:"true"`
+
+ // The HTTPS port the custom origin listens on.
+ HTTPSPort *int64 `type:"integer" required:"true"`
+
+ // The origin protocol policy to apply to your origin.
+ OriginProtocolPolicy *string `type:"string" required:"true" enum:"OriginProtocolPolicy"`
+
+ // The SSL/TLS protocols that you want CloudFront to use when communicating
+ // with your origin over HTTPS.
+ OriginSslProtocols *OriginSslProtocols `type:"structure"`
+}
+
+// String returns the string representation
+func (s CustomOriginConfig) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CustomOriginConfig) GoString() string {
+ return s.String()
+}
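+
+// Construction sketch (illustrative only): a custom origin listening on the
+// standard ports might look like the following; "match-viewer" is assumed here
+// to be a valid OriginProtocolPolicy value.
+//
+//   origin := &cloudfront.CustomOriginConfig{
+//       HTTPPort:             aws.Int64(80),
+//       HTTPSPort:            aws.Int64(443),
+//       OriginProtocolPolicy: aws.String("match-viewer"),
+//   }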
+
+// A complex type that describes the default cache behavior if you do not specify
+// a CacheBehavior element or if files don't match any of the values of PathPattern
+// in CacheBehavior elements. You must create exactly one default cache behavior.
+type DefaultCacheBehavior struct {
+ _ struct{} `type:"structure"`
+
+ // A complex type that controls which HTTP methods CloudFront processes and
+ // forwards to your Amazon S3 bucket or your custom origin. There are three
+ // choices: - CloudFront forwards only GET and HEAD requests. - CloudFront forwards
+ // only GET, HEAD and OPTIONS requests. - CloudFront forwards GET, HEAD, OPTIONS,
+ // PUT, PATCH, POST, and DELETE requests. If you pick the third choice, you
+ // may need to restrict access to your Amazon S3 bucket or to your custom origin
+ // so users can't perform operations that you don't want them to. For example,
+ // you may not want users to have permission to delete objects from your origin.
+ AllowedMethods *AllowedMethods `type:"structure"`
+
+ // Whether you want CloudFront to automatically compress content for web requests
+ // that include Accept-Encoding: gzip in the request header. If so, specify
+ // true; if not, specify false. CloudFront compresses files larger than 1000
+ // bytes and less than 1 megabyte for both Amazon S3 and custom origins. When
+ // a CloudFront edge location is unusually busy, some files might not be compressed.
+ // The value of the Content-Type header must be on the list of file types that
+ // CloudFront will compress. For the current list, see Serving Compressed Content
+ // (http://docs.aws.amazon.com/console/cloudfront/compressed-content) in the
+ // Amazon CloudFront Developer Guide. If you configure CloudFront to compress
+ // content, CloudFront removes the ETag response header from the objects that
+ // it compresses. The ETag header indicates that the version in a CloudFront
+ // edge cache is identical to the version on the origin server, but after compression
+ // the two versions are no longer identical. As a result, for compressed objects,
+ // CloudFront can't use the ETag header to determine whether an expired object
+ // in the CloudFront edge cache is still the latest version.
+ Compress *bool `type:"boolean"`
+
+ // If you don't configure your origin to add a Cache-Control max-age directive
+ // or an Expires header, DefaultTTL is the default amount of time (in seconds)
+ // that an object is in a CloudFront cache before CloudFront forwards another
+ // request to your origin to determine whether the object has been updated.
+ // The value that you specify applies only when your origin does not add HTTP
+ // headers such as Cache-Control max-age, Cache-Control s-maxage, and Expires
+ // to objects. You can specify a value from 0 to 3,153,600,000 seconds (100
+ // years).
+ DefaultTTL *int64 `type:"long"`
+
+ // A complex type that specifies how CloudFront handles query strings, cookies
+ // and headers.
+ ForwardedValues *ForwardedValues `type:"structure" required:"true"`
+
+ // The maximum amount of time (in seconds) that an object is in a CloudFront
+ // cache before CloudFront forwards another request to your origin to determine
+ // whether the object has been updated. The value that you specify applies only
+ // when your origin adds HTTP headers such as Cache-Control max-age, Cache-Control
+ // s-maxage, and Expires to objects. You can specify a value from 0 to 3,153,600,000
+ // seconds (100 years).
+ MaxTTL *int64 `type:"long"`
+
+ // The minimum amount of time that you want objects to stay in CloudFront caches
+ // before CloudFront queries your origin to see whether the object has been
+ // updated. You can specify a value from 0 to 3,153,600,000 seconds (100 years).
+ MinTTL *int64 `type:"long" required:"true"`
+
+ // Indicates whether you want to distribute media files in Microsoft Smooth
+ // Streaming format using the origin that is associated with this cache behavior.
+ // If so, specify true; if not, specify false.
+ SmoothStreaming *bool `type:"boolean"`
+
+ // The value of ID for the origin that you want CloudFront to route requests
+ // to when a request matches the path pattern either for a cache behavior or
+ // for the default cache behavior.
+ TargetOriginId *string `type:"string" required:"true"`
+
+ // A complex type that specifies the AWS accounts, if any, that you want to
+ // allow to create signed URLs for private content. If you want to require signed
+ // URLs in requests for objects in the target origin that match the PathPattern
+ // for this cache behavior, specify true for Enabled, and specify the applicable
+ // values for Quantity and Items. For more information, go to Using a Signed
+ // URL to Serve Private Content in the Amazon CloudFront Developer Guide. If
+ // you don't want to require signed URLs in requests for objects that match
+ // PathPattern, specify false for Enabled and 0 for Quantity. Omit Items. To
+ // add, change, or remove one or more trusted signers, change Enabled to true
+ // (if it's currently false), change Quantity as applicable, and specify all
+ // of the trusted signers that you want to include in the updated distribution.
+ TrustedSigners *TrustedSigners `type:"structure" required:"true"`
+
+ // Use this element to specify the protocol that users can use to access the
+ // files in the origin specified by TargetOriginId when a request matches the
+ // path pattern in PathPattern. If you want CloudFront to allow end users to
+ // use any available protocol, specify allow-all. If you want CloudFront to
+ // require HTTPS, specify https. If you want CloudFront to respond to an HTTP
+ // request with an HTTP status code of 301 (Moved Permanently) and the HTTPS
+ // URL, specify redirect-to-https. The viewer then resubmits the request using
+ // the HTTPS URL.
+ ViewerProtocolPolicy *string `type:"string" required:"true" enum:"ViewerProtocolPolicy"`
+}
+
+// String returns the string representation
+func (s DefaultCacheBehavior) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s DefaultCacheBehavior) GoString() string {
+ return s.String()
+}
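+
+// Construction sketch (illustrative only): a minimal default cache behavior
+// populates the required fields documented above. TrustedSigners is assumed to
+// expose Enabled and Quantity fields like the other list types in this file.
+//
+//   behavior := &cloudfront.DefaultCacheBehavior{
+//       TargetOriginId:       aws.String("myS3Origin"),
+//       ViewerProtocolPolicy: aws.String("allow-all"),
+//       MinTTL:               aws.Int64(0),
+//       ForwardedValues: &cloudfront.ForwardedValues{
+//           QueryString: aws.Bool(false),
+//           Cookies:     &cloudfront.CookiePreference{Forward: aws.String("none")},
+//       },
+//       TrustedSigners: &cloudfront.TrustedSigners{
+//           Enabled:  aws.Bool(false),
+//           Quantity: aws.Int64(0),
+//       },
+//   }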
+
+// The request to delete an origin access identity.
+type DeleteCloudFrontOriginAccessIdentityInput struct {
+ _ struct{} `type:"structure"`
+
+ // The origin access identity's id.
+ Id *string `location:"uri" locationName:"Id" type:"string" required:"true"`
+
+ // The value of the ETag header you received from a previous GET or PUT request.
+ // For example: E2QWRUHAPOMQZL.
+ IfMatch *string `location:"header" locationName:"If-Match" type:"string"`
+}
+
+// String returns the string representation
+func (s DeleteCloudFrontOriginAccessIdentityInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s DeleteCloudFrontOriginAccessIdentityInput) GoString() string {
+ return s.String()
+}
+
+type DeleteCloudFrontOriginAccessIdentityOutput struct {
+ _ struct{} `type:"structure"`
+}
+
+// String returns the string representation
+func (s DeleteCloudFrontOriginAccessIdentityOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s DeleteCloudFrontOriginAccessIdentityOutput) GoString() string {
+ return s.String()
+}
+
+// The request to delete a distribution.
+type DeleteDistributionInput struct {
+ _ struct{} `type:"structure"`
+
+ // The distribution id.
+ Id *string `location:"uri" locationName:"Id" type:"string" required:"true"`
+
+ // The value of the ETag header you received when you disabled the distribution.
+ // For example: E2QWRUHAPOMQZL.
+ IfMatch *string `location:"header" locationName:"If-Match" type:"string"`
+}
+
+// String returns the string representation
+func (s DeleteDistributionInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s DeleteDistributionInput) GoString() string {
+ return s.String()
+}
+
+type DeleteDistributionOutput struct {
+ _ struct{} `type:"structure"`
+}
+
+// String returns the string representation
+func (s DeleteDistributionOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s DeleteDistributionOutput) GoString() string {
+ return s.String()
+}
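+
+// Deletion flow sketch (illustrative only): per the IfMatch doc above, the
+// distribution must already be disabled, and the ETag returned when it was
+// disabled is passed back as If-Match:
+//
+//   _, err := svc.DeleteDistribution(&cloudfront.DeleteDistributionInput{
+//       Id:      aws.String("EDFDVBD632BHDS5"),
+//       IfMatch: aws.String("E2QWRUHAPOMQZL"),
+//   })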
+
+// The request to delete a streaming distribution.
+type DeleteStreamingDistributionInput struct {
+ _ struct{} `type:"structure"`
+
+ // The distribution id.
+ Id *string `location:"uri" locationName:"Id" type:"string" required:"true"`
+
+ // The value of the ETag header you received when you disabled the streaming
+ // distribution. For example: E2QWRUHAPOMQZL.
+ IfMatch *string `location:"header" locationName:"If-Match" type:"string"`
+}
+
+// String returns the string representation
+func (s DeleteStreamingDistributionInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s DeleteStreamingDistributionInput) GoString() string {
+ return s.String()
+}
+
+type DeleteStreamingDistributionOutput struct {
+ _ struct{} `type:"structure"`
+}
+
+// String returns the string representation
+func (s DeleteStreamingDistributionOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s DeleteStreamingDistributionOutput) GoString() string {
+ return s.String()
+}
+
+// A distribution.
+type Distribution struct {
+ _ struct{} `type:"structure"`
+
+ // CloudFront automatically adds this element to the response only if you've
+ // set up the distribution to serve private content with signed URLs. The element
+ // lists the key pair IDs that CloudFront is aware of for each trusted signer.
+ // The Signer child element lists the AWS account number of the trusted signer
+ // (or an empty Self element if the signer is you). The Signer element also
+ // includes the IDs of any active key pairs associated with the trusted signer's
+ // AWS account. If no KeyPairId element appears for a Signer, that signer can't
+ // create working signed URLs.
+ ActiveTrustedSigners *ActiveTrustedSigners `type:"structure" required:"true"`
+
+ // The current configuration information for the distribution.
+ DistributionConfig *DistributionConfig `type:"structure" required:"true"`
+
+ // The domain name corresponding to the distribution. For example: d604721fxaaqy9.cloudfront.net.
+ DomainName *string `type:"string" required:"true"`
+
+ // The identifier for the distribution. For example: EDFDVBD632BHDS5.
+ Id *string `type:"string" required:"true"`
+
+ // The number of invalidation batches currently in progress.
+ InProgressInvalidationBatches *int64 `type:"integer" required:"true"`
+
+ // The date and time the distribution was last modified.
+ LastModifiedTime *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"`
+
+ // This response element indicates the current status of the distribution. When
+ // the status is Deployed, the distribution's information is fully propagated
+ // throughout the Amazon CloudFront system.
+ Status *string `type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s Distribution) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s Distribution) GoString() string {
+ return s.String()
+}
+
+// A distribution Configuration.
+type DistributionConfig struct {
+ _ struct{} `type:"structure"`
+
+ // A complex type that contains information about CNAMEs (alternate domain names),
+ // if any, for this distribution.
+ Aliases *Aliases `type:"structure"`
+
+ // A complex type that contains zero or more CacheBehavior elements.
+ CacheBehaviors *CacheBehaviors `type:"structure"`
+
+ // A unique number that ensures the request can't be replayed. If the CallerReference
+ // is new (no matter the content of the DistributionConfig object), a new distribution
+ // is created. If the CallerReference is a value you already sent in a previous
+ // request to create a distribution, and the content of the DistributionConfig
+ // is identical to the original request (ignoring white space), the response
+ // includes the same information returned to the original request. If the CallerReference
+ // is a value you already sent in a previous request to create a distribution
+ // but the content of the DistributionConfig is different from the original
+ // request, CloudFront returns a DistributionAlreadyExists error.
+ CallerReference *string `type:"string" required:"true"`
+
+ // Any comments you want to include about the distribution.
+ Comment *string `type:"string" required:"true"`
+
+ // A complex type that contains zero or more CustomErrorResponse elements.
+ CustomErrorResponses *CustomErrorResponses `type:"structure"`
+
+ // A complex type that describes the default cache behavior if you do not specify
+ // a CacheBehavior element or if files don't match any of the values of PathPattern
+ // in CacheBehavior elements. You must create exactly one default cache behavior.
+ DefaultCacheBehavior *DefaultCacheBehavior `type:"structure" required:"true"`
+
+ // The object that you want CloudFront to return (for example, index.html) when
+ // an end user requests the root URL for your distribution (http://www.example.com)
+ // instead of an object in your distribution (http://www.example.com/index.html).
+ // Specifying a default root object avoids exposing the contents of your distribution.
+ // If you don't want to specify a default root object when you create a distribution,
+ // include an empty DefaultRootObject element. To delete the default root object
+ // from an existing distribution, update the distribution configuration and
+ // include an empty DefaultRootObject element. To replace the default root object,
+ // update the distribution configuration and specify the new object.
+ DefaultRootObject *string `type:"string"`
+
+ // Whether the distribution is enabled to accept end user requests for content.
+ Enabled *bool `type:"boolean" required:"true"`
+
+ // A complex type that controls whether access logs are written for the distribution.
+ Logging *LoggingConfig `type:"structure"`
+
+ // A complex type that contains information about origins for this distribution.
+ Origins *Origins `type:"structure" required:"true"`
+
+ // A complex type that contains information about price class for this distribution.
+ PriceClass *string `type:"string" enum:"PriceClass"`
+
+ // A complex type that identifies ways in which you want to restrict distribution
+ // of your content.
+ Restrictions *Restrictions `type:"structure"`
+
+ // A complex type that contains information about viewer certificates for this
+ // distribution.
+ ViewerCertificate *ViewerCertificate `type:"structure"`
+
+ // (Optional) If you're using AWS WAF to filter CloudFront requests, the Id
+ // of the AWS WAF web ACL that is associated with the distribution.
+ WebACLId *string `type:"string"`
+}
+
+// String returns the string representation
+func (s DistributionConfig) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s DistributionConfig) GoString() string {
+ return s.String()
+}
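+
+// Construction sketch (illustrative only): the required pieces of a
+// DistributionConfig are CallerReference, Comment, Enabled, Origins, and
+// DefaultCacheBehavior. Origins is defined elsewhere in this file and is left
+// as a placeholder here; behavior refers to the DefaultCacheBehavior sketch above.
+//
+//   cfg := &cloudfront.DistributionConfig{
+//       CallerReference:      aws.String("my-unique-reference"),
+//       Comment:              aws.String("example distribution"),
+//       Enabled:              aws.Bool(false),
+//       DefaultCacheBehavior: behavior,
+//       Origins:              origins, // *cloudfront.Origins, not shown in this excerpt
+//   }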
+
+// A distribution list.
+type DistributionList struct {
+ _ struct{} `type:"structure"`
+
+ // A flag that indicates whether more distributions remain to be listed. If
+ // your results were truncated, you can make a follow-up pagination request
+ // using the Marker request parameter to retrieve more distributions in the
+ // list.
+ IsTruncated *bool `type:"boolean" required:"true"`
+
+ // A complex type that contains one DistributionSummary element for each distribution
+ // that was created by the current AWS account.
+ Items []*DistributionSummary `locationNameList:"DistributionSummary" type:"list"`
+
+ // The value you provided for the Marker request parameter.
+ Marker *string `type:"string" required:"true"`
+
+ // The value you provided for the MaxItems request parameter.
+ MaxItems *int64 `type:"integer" required:"true"`
+
+ // If IsTruncated is true, this element is present and contains the value you
+ // can use for the Marker request parameter to continue listing your distributions
+ // where they left off.
+ NextMarker *string `type:"string"`
+
+ // The number of distributions that were created by the current AWS account.
+ Quantity *int64 `type:"integer" required:"true"`
+}
+
+// String returns the string representation
+func (s DistributionList) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s DistributionList) GoString() string {
+ return s.String()
+}
+
+// A summary of the information for an Amazon CloudFront distribution.
+type DistributionSummary struct {
+ _ struct{} `type:"structure"`
+
+ // A complex type that contains information about CNAMEs (alternate domain names),
+ // if any, for this distribution.
+ Aliases *Aliases `type:"structure" required:"true"`
+
+ // A complex type that contains zero or more CacheBehavior elements.
+ CacheBehaviors *CacheBehaviors `type:"structure" required:"true"`
+
+ // The comment originally specified when this distribution was created.
+ Comment *string `type:"string" required:"true"`
+
+ // A complex type that contains zero or more CustomErrorResponses elements.
+ CustomErrorResponses *CustomErrorResponses `type:"structure" required:"true"`
+
+ // A complex type that describes the default cache behavior if you do not specify
+ // a CacheBehavior element or if files don't match any of the values of PathPattern
+ // in CacheBehavior elements. You must create exactly one default cache behavior.
+ DefaultCacheBehavior *DefaultCacheBehavior `type:"structure" required:"true"`
+
+ // The domain name corresponding to the distribution. For example: d604721fxaaqy9.cloudfront.net.
+ DomainName *string `type:"string" required:"true"`
+
+ // Whether the distribution is enabled to accept end user requests for content.
+ Enabled *bool `type:"boolean" required:"true"`
+
+ // The identifier for the distribution. For example: EDFDVBD632BHDS5.
+ Id *string `type:"string" required:"true"`
+
+ // The date and time the distribution was last modified.
+ LastModifiedTime *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"`
+
+ // A complex type that contains information about origins for this distribution.
+ Origins *Origins `type:"structure" required:"true"`
+
+ PriceClass *string `type:"string" required:"true" enum:"PriceClass"`
+
+ // A complex type that identifies ways in which you want to restrict distribution
+ // of your content.
+ Restrictions *Restrictions `type:"structure" required:"true"`
+
+ // This response element indicates the current status of the distribution. When
+ // the status is Deployed, the distribution's information is fully propagated
+ // throughout the Amazon CloudFront system.
+ Status *string `type:"string" required:"true"`
+
+ // A complex type that contains information about viewer certificates for this
+ // distribution.
+ ViewerCertificate *ViewerCertificate `type:"structure" required:"true"`
+
+ // The Web ACL Id (if any) associated with the distribution.
+ WebACLId *string `type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s DistributionSummary) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s DistributionSummary) GoString() string {
+ return s.String()
+}
+
+// A complex type that specifies how CloudFront handles query strings, cookies
+// and headers.
+type ForwardedValues struct {
+ _ struct{} `type:"structure"`
+
+ // A complex type that specifies how CloudFront handles cookies.
+ Cookies *CookiePreference `type:"structure" required:"true"`
+
+ // A complex type that specifies the Headers, if any, that you want CloudFront
+ // to vary upon for this cache behavior.
+ Headers *Headers `type:"structure"`
+
+ // Indicates whether you want CloudFront to forward query strings to the origin
+ // that is associated with this cache behavior. If so, specify true; if not,
+ // specify false.
+ QueryString *bool `type:"boolean" required:"true"`
+}
+
+// String returns the string representation
+func (s ForwardedValues) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ForwardedValues) GoString() string {
+ return s.String()
+}
+
+// A complex type that controls the countries in which your content is distributed.
+// For more information about geo restriction, go to Customizing Error Responses
+// in the Amazon CloudFront Developer Guide. CloudFront determines the location
+// of your users using MaxMind GeoIP databases. For information about the accuracy
+// of these databases, see How accurate are your GeoIP databases? on the MaxMind
+// website.
+type GeoRestriction struct {
+ _ struct{} `type:"structure"`
+
+ // A complex type that contains a Location element for each country in which
+ // you want CloudFront either to distribute your content (whitelist) or not
+ // distribute your content (blacklist). The Location element is a two-letter,
+ // uppercase country code for a country that you want to include in your blacklist
+ // or whitelist. Include one Location element for each country. CloudFront and
+ // MaxMind both use ISO 3166 country codes. For the current list of countries
+ // and the corresponding codes, see ISO 3166-1-alpha-2 code on the International
+ // Organization for Standardization website. You can also refer to the country
+ // list in the CloudFront console, which includes both country names and codes.
+ Items []*string `locationNameList:"Location" type:"list"`
+
+ // When geo restriction is enabled, this is the number of countries in your
+ // whitelist or blacklist. Otherwise, when it is not enabled, Quantity is 0,
+ // and you can omit Items.
+ Quantity *int64 `type:"integer" required:"true"`
+
+ // The method that you want to use to restrict distribution of your content
+ // by country:
+ //   - none: No geo restriction is enabled, meaning access to content is not
+ //     restricted by client geo location.
+ //   - blacklist: The Location elements specify the countries in which you do
+ //     not want CloudFront to distribute your content.
+ //   - whitelist: The Location elements specify the countries in which you want
+ //     CloudFront to distribute your content.
+ RestrictionType *string `type:"string" required:"true" enum:"GeoRestrictionType"`
+}
+
+// String returns the string representation
+func (s GeoRestriction) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s GeoRestriction) GoString() string {
+ return s.String()
+}
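+
+// Illustrative sketch (not part of the generated API): building a whitelist
+// GeoRestriction for two countries using the fields defined above. The
+// aws.String and aws.Int64 helpers are assumed to come from the aws package
+// this file already imports; US and CA are ISO 3166 country codes.
+//
+//     restriction := &GeoRestriction{
+//         RestrictionType: aws.String("whitelist"),
+//         Quantity:        aws.Int64(2),
+//         Items:           []*string{aws.String("US"), aws.String("CA")},
+//     }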
+
+// The request to get an origin access identity's configuration.
+type GetCloudFrontOriginAccessIdentityConfigInput struct {
+ _ struct{} `type:"structure"`
+
+ // The identity's id.
+ Id *string `location:"uri" locationName:"Id" type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s GetCloudFrontOriginAccessIdentityConfigInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s GetCloudFrontOriginAccessIdentityConfigInput) GoString() string {
+ return s.String()
+}
+
+// The returned result of the corresponding request.
+type GetCloudFrontOriginAccessIdentityConfigOutput struct {
+ _ struct{} `type:"structure" payload:"CloudFrontOriginAccessIdentityConfig"`
+
+ // The origin access identity's configuration information.
+ CloudFrontOriginAccessIdentityConfig *OriginAccessIdentityConfig `type:"structure"`
+
+ // The current version of the configuration. For example: E2QWRUHAPOMQZL.
+ ETag *string `location:"header" locationName:"ETag" type:"string"`
+}
+
+// String returns the string representation
+func (s GetCloudFrontOriginAccessIdentityConfigOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s GetCloudFrontOriginAccessIdentityConfigOutput) GoString() string {
+ return s.String()
+}
+
+// The request to get an origin access identity's information.
+type GetCloudFrontOriginAccessIdentityInput struct {
+ _ struct{} `type:"structure"`
+
+ // The identity's id.
+ Id *string `location:"uri" locationName:"Id" type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s GetCloudFrontOriginAccessIdentityInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s GetCloudFrontOriginAccessIdentityInput) GoString() string {
+ return s.String()
+}
+
+// The returned result of the corresponding request.
+type GetCloudFrontOriginAccessIdentityOutput struct {
+ _ struct{} `type:"structure" payload:"CloudFrontOriginAccessIdentity"`
+
+ // The origin access identity's information.
+ CloudFrontOriginAccessIdentity *OriginAccessIdentity `type:"structure"`
+
+ // The current version of the origin access identity's information. For example:
+ // E2QWRUHAPOMQZL.
+ ETag *string `location:"header" locationName:"ETag" type:"string"`
+}
+
+// String returns the string representation
+func (s GetCloudFrontOriginAccessIdentityOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s GetCloudFrontOriginAccessIdentityOutput) GoString() string {
+ return s.String()
+}
+
+// The request to get a distribution configuration.
+type GetDistributionConfigInput struct {
+ _ struct{} `type:"structure"`
+
+ // The distribution's id.
+ Id *string `location:"uri" locationName:"Id" type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s GetDistributionConfigInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s GetDistributionConfigInput) GoString() string {
+ return s.String()
+}
+
+// The returned result of the corresponding request.
+type GetDistributionConfigOutput struct {
+ _ struct{} `type:"structure" payload:"DistributionConfig"`
+
+ // The distribution's configuration information.
+ DistributionConfig *DistributionConfig `type:"structure"`
+
+ // The current version of the configuration. For example: E2QWRUHAPOMQZL.
+ ETag *string `location:"header" locationName:"ETag" type:"string"`
+}
+
+// String returns the string representation
+func (s GetDistributionConfigOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s GetDistributionConfigOutput) GoString() string {
+ return s.String()
+}
+
+// The request to get a distribution's information.
+type GetDistributionInput struct {
+ _ struct{} `type:"structure"`
+
+ // The distribution's id.
+ Id *string `location:"uri" locationName:"Id" type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s GetDistributionInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s GetDistributionInput) GoString() string {
+ return s.String()
+}
+
+// The returned result of the corresponding request.
+type GetDistributionOutput struct {
+ _ struct{} `type:"structure" payload:"Distribution"`
+
+ // The distribution's information.
+ Distribution *Distribution `type:"structure"`
+
+ // The current version of the distribution's information. For example: E2QWRUHAPOMQZL.
+ ETag *string `location:"header" locationName:"ETag" type:"string"`
+}
+
+// String returns the string representation
+func (s GetDistributionOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s GetDistributionOutput) GoString() string {
+ return s.String()
+}
+
+// The request to get an invalidation's information.
+type GetInvalidationInput struct {
+ _ struct{} `type:"structure"`
+
+ // The distribution's id.
+ DistributionId *string `location:"uri" locationName:"DistributionId" type:"string" required:"true"`
+
+ // The invalidation's id.
+ Id *string `location:"uri" locationName:"Id" type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s GetInvalidationInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s GetInvalidationInput) GoString() string {
+ return s.String()
+}
+
+// The returned result of the corresponding request.
+type GetInvalidationOutput struct {
+ _ struct{} `type:"structure" payload:"Invalidation"`
+
+ // The invalidation's information.
+ Invalidation *Invalidation `type:"structure"`
+}
+
+// String returns the string representation
+func (s GetInvalidationOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s GetInvalidationOutput) GoString() string {
+ return s.String()
+}
+
+// The request to get a streaming distribution configuration.
+type GetStreamingDistributionConfigInput struct {
+ _ struct{} `type:"structure"`
+
+ // The streaming distribution's id.
+ Id *string `location:"uri" locationName:"Id" type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s GetStreamingDistributionConfigInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s GetStreamingDistributionConfigInput) GoString() string {
+ return s.String()
+}
+
+// The returned result of the corresponding request.
+type GetStreamingDistributionConfigOutput struct {
+ _ struct{} `type:"structure" payload:"StreamingDistributionConfig"`
+
+ // The current version of the configuration. For example: E2QWRUHAPOMQZL.
+ ETag *string `location:"header" locationName:"ETag" type:"string"`
+
+ // The streaming distribution's configuration information.
+ StreamingDistributionConfig *StreamingDistributionConfig `type:"structure"`
+}
+
+// String returns the string representation
+func (s GetStreamingDistributionConfigOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s GetStreamingDistributionConfigOutput) GoString() string {
+ return s.String()
+}
+
+// The request to get a streaming distribution's information.
+type GetStreamingDistributionInput struct {
+ _ struct{} `type:"structure"`
+
+ // The streaming distribution's id.
+ Id *string `location:"uri" locationName:"Id" type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s GetStreamingDistributionInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s GetStreamingDistributionInput) GoString() string {
+ return s.String()
+}
+
+// The returned result of the corresponding request.
+type GetStreamingDistributionOutput struct {
+ _ struct{} `type:"structure" payload:"StreamingDistribution"`
+
+ // The current version of the streaming distribution's information. For example:
+ // E2QWRUHAPOMQZL.
+ ETag *string `location:"header" locationName:"ETag" type:"string"`
+
+ // The streaming distribution's information.
+ StreamingDistribution *StreamingDistribution `type:"structure"`
+}
+
+// String returns the string representation
+func (s GetStreamingDistributionOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s GetStreamingDistributionOutput) GoString() string {
+ return s.String()
+}
+
+// A complex type that specifies the headers that you want CloudFront to forward
+// to the origin for this cache behavior. For the headers that you specify,
+// CloudFront also caches separate versions of a given object based on the header
+// values in viewer requests; this is known as varying on headers. For example,
+// suppose viewer requests for logo.jpg contain a custom Product header that
+// has a value of either Acme or Apex, and you configure CloudFront to vary
+// on the Product header. CloudFront forwards the Product header to the origin
+// and caches the response from the origin once for each header value.
+type Headers struct {
+ _ struct{} `type:"structure"`
+
+ // Optional: A complex type that contains a Name element for each header that
+ // you want CloudFront to forward to the origin and to vary on for this cache
+ // behavior. If Quantity is 0, omit Items.
+ Items []*string `locationNameList:"Name" type:"list"`
+
+ // The number of different headers that you want CloudFront to forward to the
+ // origin and to vary on for this cache behavior. The maximum number of headers
+ // that you can specify by name is 10. If you want CloudFront to forward all
+ // headers to the origin and vary on all of them, specify 1 for Quantity and
+ // * for Name. If you don't want CloudFront to forward any additional headers
+ // to the origin or to vary on any headers, specify 0 for Quantity and omit
+ // Items.
+ Quantity *int64 `type:"integer" required:"true"`
+}
+
+// String returns the string representation
+func (s Headers) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s Headers) GoString() string {
+ return s.String()
+}
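+
+// Illustrative sketch (not part of the generated API): asking CloudFront to
+// forward, and vary the cache on, a single custom header. The Product header
+// name echoes the example in the comment above; aws.String and aws.Int64 are
+// assumed from the aws helper package imported by this file.
+//
+//     headers := &Headers{
+//         Quantity: aws.Int64(1),
+//         Items:    []*string{aws.String("Product")},
+//     }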
+
+// An invalidation.
+type Invalidation struct {
+ _ struct{} `type:"structure"`
+
+ // The date and time the invalidation request was first made.
+ CreateTime *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"`
+
+ // The identifier for the invalidation request. For example: IDFDVBD632BHDS5.
+ Id *string `type:"string" required:"true"`
+
+ // The current invalidation information for the batch request.
+ InvalidationBatch *InvalidationBatch `type:"structure" required:"true"`
+
+ // The status of the invalidation request. When the invalidation batch is finished,
+ // the status is Completed.
+ Status *string `type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s Invalidation) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s Invalidation) GoString() string {
+ return s.String()
+}
+
+// An invalidation batch.
+type InvalidationBatch struct {
+ _ struct{} `type:"structure"`
+
+ // A unique name that ensures the request can't be replayed. If the CallerReference
+ // is new (no matter the content of the Path object), a new invalidation batch
+ // is created. If the CallerReference is a value you already sent in a previous
+ // request to create an invalidation batch, and the content of each Path element
+ // is identical to the original request, the response includes the same information
+ // returned to the original request. If the CallerReference is a value you already
+ // sent in a previous request to create an invalidation batch but the content of
+ // any Path is different from the original request, CloudFront returns an
+ // InvalidationBatchAlreadyExists error.
+ CallerReference *string `type:"string" required:"true"`
+
+ // The path of the object to invalidate. The path is relative to the distribution
+ // and must begin with a slash (/). You must enclose each invalidation object
+ // with the Path element tags. If the path includes non-ASCII characters or
+ // unsafe characters as defined in RFC 1738 (http://www.ietf.org/rfc/rfc1738.txt),
+ // URL encode those characters. Do not URL encode any other characters in the
+ // path, or CloudFront will not invalidate the old version of the updated object.
+ Paths *Paths `type:"structure" required:"true"`
+}
+
+// String returns the string representation
+func (s InvalidationBatch) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s InvalidationBatch) GoString() string {
+ return s.String()
+}
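+
+// Illustrative sketch (not part of the generated API): an invalidation batch
+// for two objects. Paths must begin with a slash and CallerReference must be
+// unique per request, as described above; the timestamp-based reference shown
+// here is only one way to satisfy that.
+//
+//     batch := &InvalidationBatch{
+//         CallerReference: aws.String(time.Now().Format(time.RFC3339Nano)),
+//         Paths: &Paths{
+//             Quantity: aws.Int64(2),
+//             Items:    []*string{aws.String("/index.html"), aws.String("/logo.jpg")},
+//         },
+//     }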
+
+// An invalidation list.
+type InvalidationList struct {
+ _ struct{} `type:"structure"`
+
+ // A flag that indicates whether more invalidation batch requests remain to
+ // be listed. If your results were truncated, you can make a follow-up pagination
+ // request using the Marker request parameter to retrieve more invalidation
+ // batches in the list.
+ IsTruncated *bool `type:"boolean" required:"true"`
+
+ // A complex type that contains one InvalidationSummary element for each invalidation
+ // batch that was created by the current AWS account.
+ Items []*InvalidationSummary `locationNameList:"InvalidationSummary" type:"list"`
+
+ // The value you provided for the Marker request parameter.
+ Marker *string `type:"string" required:"true"`
+
+ // The value you provided for the MaxItems request parameter.
+ MaxItems *int64 `type:"integer" required:"true"`
+
+ // If IsTruncated is true, this element is present and contains the value you
+ // can use for the Marker request parameter to continue listing your invalidation
+ // batches where they left off.
+ NextMarker *string `type:"string"`
+
+ // The number of invalidation batches that were created by the current AWS account.
+ Quantity *int64 `type:"integer" required:"true"`
+}
+
+// String returns the string representation
+func (s InvalidationList) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s InvalidationList) GoString() string {
+ return s.String()
+}
+
+// Summary of an invalidation request.
+type InvalidationSummary struct {
+ _ struct{} `type:"structure"`
+
+ CreateTime *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"`
+
+ // The unique ID for an invalidation request.
+ Id *string `type:"string" required:"true"`
+
+ // The status of an invalidation request.
+ Status *string `type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s InvalidationSummary) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s InvalidationSummary) GoString() string {
+ return s.String()
+}
+
+// A complex type that lists the active CloudFront key pairs, if any, that are
+// associated with AwsAccountNumber.
+type KeyPairIds struct {
+ _ struct{} `type:"structure"`
+
+ // A complex type that lists the active CloudFront key pairs, if any, that are
+ // associated with AwsAccountNumber.
+ Items []*string `locationNameList:"KeyPairId" type:"list"`
+
+ // The number of active CloudFront key pairs for AwsAccountNumber.
+ Quantity *int64 `type:"integer" required:"true"`
+}
+
+// String returns the string representation
+func (s KeyPairIds) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s KeyPairIds) GoString() string {
+ return s.String()
+}
+
+// The request to list origin access identities.
+type ListCloudFrontOriginAccessIdentitiesInput struct {
+ _ struct{} `type:"structure"`
+
+ // Use this when paginating results to indicate where to begin in your list
+ // of origin access identities. The results include identities in the list that
+ // occur after the marker. To get the next page of results, set the Marker to
+ // the value of the NextMarker from the current page's response (which is also
+ // the ID of the last identity on that page).
+ Marker *string `location:"querystring" locationName:"Marker" type:"string"`
+
+ // The maximum number of origin access identities you want in the response body.
+ MaxItems *int64 `location:"querystring" locationName:"MaxItems" type:"integer"`
+}
+
+// String returns the string representation
+func (s ListCloudFrontOriginAccessIdentitiesInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ListCloudFrontOriginAccessIdentitiesInput) GoString() string {
+ return s.String()
+}
+
+// The returned result of the corresponding request.
+type ListCloudFrontOriginAccessIdentitiesOutput struct {
+ _ struct{} `type:"structure" payload:"CloudFrontOriginAccessIdentityList"`
+
+ // The CloudFrontOriginAccessIdentityList type.
+ CloudFrontOriginAccessIdentityList *OriginAccessIdentityList `type:"structure"`
+}
+
+// String returns the string representation
+func (s ListCloudFrontOriginAccessIdentitiesOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ListCloudFrontOriginAccessIdentitiesOutput) GoString() string {
+ return s.String()
+}
+
+// The request to list distributions that are associated with a specified AWS
+// WAF web ACL.
+type ListDistributionsByWebACLIdInput struct {
+ _ struct{} `type:"structure"`
+
+ // Use Marker and MaxItems to control pagination of results. If you have more
+ // than MaxItems distributions that satisfy the request, the response includes
+ // a NextMarker element. To get the next page of results, submit another request.
+ // For the value of Marker, specify the value of NextMarker from the last response.
+ // (For the first request, omit Marker.)
+ Marker *string `location:"querystring" locationName:"Marker" type:"string"`
+
+ // The maximum number of distributions that you want CloudFront to return in
+ // the response body. The maximum and default values are both 100.
+ MaxItems *int64 `location:"querystring" locationName:"MaxItems" type:"integer"`
+
+ // The Id of the AWS WAF web ACL for which you want to list the associated distributions.
+ // If you specify "null" for the Id, the request returns a list of the distributions
+ // that aren't associated with a web ACL.
+ WebACLId *string `location:"uri" locationName:"WebACLId" type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s ListDistributionsByWebACLIdInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ListDistributionsByWebACLIdInput) GoString() string {
+ return s.String()
+}
+
+// The response to a request to list the distributions that are associated with
+// a specified AWS WAF web ACL.
+type ListDistributionsByWebACLIdOutput struct {
+ _ struct{} `type:"structure" payload:"DistributionList"`
+
+ // The DistributionList type.
+ DistributionList *DistributionList `type:"structure"`
+}
+
+// String returns the string representation
+func (s ListDistributionsByWebACLIdOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ListDistributionsByWebACLIdOutput) GoString() string {
+ return s.String()
+}
+
+// The request to list your distributions.
+type ListDistributionsInput struct {
+ _ struct{} `type:"structure"`
+
+ // Use Marker and MaxItems to control pagination of results. If you have more
+ // than MaxItems distributions that satisfy the request, the response includes
+ // a NextMarker element. To get the next page of results, submit another request.
+ // For the value of Marker, specify the value of NextMarker from the last response.
+ // (For the first request, omit Marker.)
+ Marker *string `location:"querystring" locationName:"Marker" type:"string"`
+
+ // The maximum number of distributions that you want CloudFront to return in
+ // the response body. The maximum and default values are both 100.
+ MaxItems *int64 `location:"querystring" locationName:"MaxItems" type:"integer"`
+}
+
+// String returns the string representation
+func (s ListDistributionsInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ListDistributionsInput) GoString() string {
+ return s.String()
+}
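+
+// Illustrative pagination sketch (not part of the generated API): walking all
+// distributions with the Marker/MaxItems pattern described above. It assumes
+// svc is a *CloudFront client from this package and that ListDistributions is
+// the corresponding operation on that client; aws.BoolValue is assumed from
+// the aws helper package.
+//
+//     input := &ListDistributionsInput{MaxItems: aws.Int64(100)}
+//     for {
+//         out, err := svc.ListDistributions(input)
+//         if err != nil {
+//             break // handle the error appropriately
+//         }
+//         // ... use out.DistributionList.Items ...
+//         if !aws.BoolValue(out.DistributionList.IsTruncated) {
+//             break
+//         }
+//         input.Marker = out.DistributionList.NextMarker
+//     }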
+
+// The returned result of the corresponding request.
+type ListDistributionsOutput struct {
+ _ struct{} `type:"structure" payload:"DistributionList"`
+
+ // The DistributionList type.
+ DistributionList *DistributionList `type:"structure"`
+}
+
+// String returns the string representation
+func (s ListDistributionsOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ListDistributionsOutput) GoString() string {
+ return s.String()
+}
+
+// The request to list invalidations.
+type ListInvalidationsInput struct {
+ _ struct{} `type:"structure"`
+
+ // The distribution's id.
+ DistributionId *string `location:"uri" locationName:"DistributionId" type:"string" required:"true"`
+
+ // Use this parameter when paginating results to indicate where to begin in
+ // your list of invalidation batches. Because the results are returned in decreasing
+ // order from most recent to oldest, the most recent results are on the first
+ // page, the second page will contain earlier results, and so on. To get the
+ // next page of results, set the Marker to the value of the NextMarker from
+ // the current page's response. This value is the same as the ID of the last
+ // invalidation batch on that page.
+ Marker *string `location:"querystring" locationName:"Marker" type:"string"`
+
+ // The maximum number of invalidation batches you want in the response body.
+ MaxItems *int64 `location:"querystring" locationName:"MaxItems" type:"integer"`
+}
+
+// String returns the string representation
+func (s ListInvalidationsInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ListInvalidationsInput) GoString() string {
+ return s.String()
+}
+
+// The returned result of the corresponding request.
+type ListInvalidationsOutput struct {
+ _ struct{} `type:"structure" payload:"InvalidationList"`
+
+ // Information about invalidation batches.
+ InvalidationList *InvalidationList `type:"structure"`
+}
+
+// String returns the string representation
+func (s ListInvalidationsOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ListInvalidationsOutput) GoString() string {
+ return s.String()
+}
+
+// The request to list your streaming distributions.
+type ListStreamingDistributionsInput struct {
+ _ struct{} `type:"structure"`
+
+ // Use this when paginating results to indicate where to begin in your list
+ // of streaming distributions. The results include distributions in the list
+ // that occur after the marker. To get the next page of results, set the Marker
+ // to the value of the NextMarker from the current page's response (which is
+ // also the ID of the last distribution on that page).
+ Marker *string `location:"querystring" locationName:"Marker" type:"string"`
+
+ // The maximum number of streaming distributions you want in the response body.
+ MaxItems *int64 `location:"querystring" locationName:"MaxItems" type:"integer"`
+}
+
+// String returns the string representation
+func (s ListStreamingDistributionsInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ListStreamingDistributionsInput) GoString() string {
+ return s.String()
+}
+
+// The returned result of the corresponding request.
+type ListStreamingDistributionsOutput struct {
+ _ struct{} `type:"structure" payload:"StreamingDistributionList"`
+
+ // The StreamingDistributionList type.
+ StreamingDistributionList *StreamingDistributionList `type:"structure"`
+}
+
+// String returns the string representation
+func (s ListStreamingDistributionsOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ListStreamingDistributionsOutput) GoString() string {
+ return s.String()
+}
+
+// A complex type that controls whether access logs are written for the distribution.
+type LoggingConfig struct {
+ _ struct{} `type:"structure"`
+
+ // The Amazon S3 bucket to store the access logs in, for example, myawslogbucket.s3.amazonaws.com.
+ Bucket *string `type:"string" required:"true"`
+
+ // Specifies whether you want CloudFront to save access logs to an Amazon S3
+ // bucket. If you do not want to enable logging when you create a distribution
+ // or if you want to disable logging for an existing distribution, specify false
+ // for Enabled, and specify empty Bucket and Prefix elements. If you specify
+ // false for Enabled but you specify values for Bucket, Prefix, and IncludeCookies,
+ // the values are automatically deleted.
+ Enabled *bool `type:"boolean" required:"true"`
+
+ // Specifies whether you want CloudFront to include cookies in access logs;
+ // if so, specify true for IncludeCookies. If you choose to include cookies in
+ // logs, CloudFront logs all cookies regardless of how you configure the cache
+ // behaviors for this distribution. If you do not want to include cookies when
+ // you create a distribution, or if you want to stop including cookies for an
+ // existing distribution, specify false for IncludeCookies.
+ IncludeCookies *bool `type:"boolean" required:"true"`
+
+ // An optional string that you want CloudFront to prefix to the access log filenames
+ // for this distribution, for example, myprefix/. If you want to enable logging,
+ // but you do not want to specify a prefix, you still must include an empty
+ // Prefix element in the Logging element.
+ Prefix *string `type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s LoggingConfig) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s LoggingConfig) GoString() string {
+ return s.String()
+}
+
+// A complex type that describes the Amazon S3 bucket or the HTTP server (for
+// example, a web server) from which CloudFront gets your files. You must create
+// at least one origin.
+type Origin struct {
+ _ struct{} `type:"structure"`
+
+ // A complex type that contains information about the custom headers associated
+ // with this Origin.
+ CustomHeaders *CustomHeaders `type:"structure"`
+
+ // A complex type that contains information about a custom origin. If the origin
+ // is an Amazon S3 bucket, use the S3OriginConfig element instead.
+ CustomOriginConfig *CustomOriginConfig `type:"structure"`
+
+ // Amazon S3 origins: The DNS name of the Amazon S3 bucket from which you want
+ // CloudFront to get objects for this origin, for example, myawsbucket.s3.amazonaws.com.
+ // Custom origins: The DNS domain name for the HTTP server from which you want
+ // CloudFront to get objects for this origin, for example, www.example.com.
+ DomainName *string `type:"string" required:"true"`
+
+ // A unique identifier for the origin. The value of Id must be unique within
+ // the distribution. You use the value of Id when you create a cache behavior.
+ // The Id identifies the origin that CloudFront routes a request to when the
+ // request matches the path pattern for that cache behavior.
+ Id *string `type:"string" required:"true"`
+
+ // An optional element that causes CloudFront to request your content from a
+ // directory in your Amazon S3 bucket or your custom origin. When you include
+ // the OriginPath element, specify the directory name, beginning with a /. CloudFront
+ // appends the directory name to the value of DomainName.
+ OriginPath *string `type:"string"`
+
+ // A complex type that contains information about the Amazon S3 origin. If the
+ // origin is a custom origin, use the CustomOriginConfig element instead.
+ S3OriginConfig *S3OriginConfig `type:"structure"`
+}
+
+// String returns the string representation
+func (s Origin) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s Origin) GoString() string {
+ return s.String()
+}
+
+// CloudFront origin access identity.
+type OriginAccessIdentity struct {
+ _ struct{} `type:"structure"`
+
+ // The current configuration information for the identity.
+ CloudFrontOriginAccessIdentityConfig *OriginAccessIdentityConfig `type:"structure"`
+
+ // The ID for the origin access identity. For example: E74FTE3AJFJ256A.
+ Id *string `type:"string" required:"true"`
+
+ // The Amazon S3 canonical user ID for the origin access identity, which you
+ // use when giving the origin access identity read permission to an object in
+ // Amazon S3.
+ S3CanonicalUserId *string `type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s OriginAccessIdentity) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s OriginAccessIdentity) GoString() string {
+ return s.String()
+}
+
+// Origin access identity configuration.
+type OriginAccessIdentityConfig struct {
+ _ struct{} `type:"structure"`
+
+ // A unique number that ensures the request can't be replayed. If the CallerReference
+ // is new (no matter the content of the CloudFrontOriginAccessIdentityConfig
+ // object), a new origin access identity is created. If the CallerReference
+ // is a value you already sent in a previous request to create an identity,
+ // and the content of the CloudFrontOriginAccessIdentityConfig is identical
+ // to the original request (ignoring white space), the response includes the
+ // same information returned to the original request. If the CallerReference
+ // is a value you already sent in a previous request to create an identity but
+ // the content of the CloudFrontOriginAccessIdentityConfig is different from
+ // the original request, CloudFront returns a CloudFrontOriginAccessIdentityAlreadyExists
+ // error.
+ CallerReference *string `type:"string" required:"true"`
+
+ // Any comments you want to include about the origin access identity.
+ Comment *string `type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s OriginAccessIdentityConfig) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s OriginAccessIdentityConfig) GoString() string {
+ return s.String()
+}
+
+// The CloudFrontOriginAccessIdentityList type.
+type OriginAccessIdentityList struct {
+ _ struct{} `type:"structure"`
+
+ // A flag that indicates whether more origin access identities remain to be
+ // listed. If your results were truncated, you can make a follow-up pagination
+ // request using the Marker request parameter to retrieve more items in the
+ // list.
+ IsTruncated *bool `type:"boolean" required:"true"`
+
+ // A complex type that contains one CloudFrontOriginAccessIdentitySummary element
+ // for each origin access identity that was created by the current AWS account.
+ Items []*OriginAccessIdentitySummary `locationNameList:"CloudFrontOriginAccessIdentitySummary" type:"list"`
+
+ // The value you provided for the Marker request parameter.
+ Marker *string `type:"string" required:"true"`
+
+ // The value you provided for the MaxItems request parameter.
+ MaxItems *int64 `type:"integer" required:"true"`
+
+ // If IsTruncated is true, this element is present and contains the value you
+ // can use for the Marker request parameter to continue listing your origin
+ // access identities where they left off.
+ NextMarker *string `type:"string"`
+
+ // The number of CloudFront origin access identities that were created by the
+ // current AWS account.
+ Quantity *int64 `type:"integer" required:"true"`
+}
+
+// String returns the string representation
+func (s OriginAccessIdentityList) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s OriginAccessIdentityList) GoString() string {
+ return s.String()
+}
+
+// Summary of the information about a CloudFront origin access identity.
+type OriginAccessIdentitySummary struct {
+ _ struct{} `type:"structure"`
+
+ // The comment for this origin access identity, as originally specified when
+ // created.
+ Comment *string `type:"string" required:"true"`
+
+ // The ID for the origin access identity. For example: E74FTE3AJFJ256A.
+ Id *string `type:"string" required:"true"`
+
+ // The Amazon S3 canonical user ID for the origin access identity, which you
+ // use when giving the origin access identity read permission to an object in
+ // Amazon S3.
+ S3CanonicalUserId *string `type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s OriginAccessIdentitySummary) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s OriginAccessIdentitySummary) GoString() string {
+ return s.String()
+}
+
+// A complex type that contains information related to a Header.
+type OriginCustomHeader struct {
+ _ struct{} `type:"structure"`
+
+ // The header's name.
+ HeaderName *string `type:"string" required:"true"`
+
+ // The header's value.
+ HeaderValue *string `type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s OriginCustomHeader) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s OriginCustomHeader) GoString() string {
+ return s.String()
+}
+
+// A complex type that contains the list of SSL/TLS protocols that you want
+// CloudFront to use when communicating with your origin over HTTPS.
+type OriginSslProtocols struct {
+ _ struct{} `type:"structure"`
+
+ // A complex type that contains one SslProtocol element for each SSL/TLS protocol
+ // that you want to allow CloudFront to use when establishing an HTTPS connection
+ // with this origin.
+ Items []*string `locationNameList:"SslProtocol" type:"list" required:"true"`
+
+ // The number of SSL/TLS protocols that you want to allow CloudFront to use
+ // when establishing an HTTPS connection with this origin.
+ Quantity *int64 `type:"integer" required:"true"`
+}
+
+// String returns the string representation
+func (s OriginSslProtocols) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s OriginSslProtocols) GoString() string {
+ return s.String()
+}
+
+// A complex type that contains information about origins for this distribution.
+type Origins struct {
+ _ struct{} `type:"structure"`
+
+ // A complex type that contains origins for this distribution.
+ Items []*Origin `locationNameList:"Origin" min:"1" type:"list"`
+
+ // The number of origins for this distribution.
+ Quantity *int64 `type:"integer" required:"true"`
+}
+
+// String returns the string representation
+func (s Origins) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s Origins) GoString() string {
+ return s.String()
+}
+
+// A complex type that contains information about the objects that you want
+// to invalidate.
+type Paths struct {
+ _ struct{} `type:"structure"`
+
+ // A complex type that contains a list of the objects that you want to invalidate.
+ Items []*string `locationNameList:"Path" type:"list"`
+
+ // The number of objects that you want to invalidate.
+ Quantity *int64 `type:"integer" required:"true"`
+}
+
+// String returns the string representation
+func (s Paths) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s Paths) GoString() string {
+ return s.String()
+}
+
+// A complex type that identifies ways in which you want to restrict distribution
+// of your content.
+type Restrictions struct {
+ _ struct{} `type:"structure"`
+
+ // A complex type that controls the countries in which your content is distributed.
+ // For more information about geo restriction, go to Customizing Error Responses
+ // in the Amazon CloudFront Developer Guide. CloudFront determines the location
+ // of your users using MaxMind GeoIP databases. For information about the accuracy
+ // of these databases, see How accurate are your GeoIP databases? on the MaxMind
+ // website.
+ GeoRestriction *GeoRestriction `type:"structure" required:"true"`
+}
+
+// String returns the string representation
+func (s Restrictions) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s Restrictions) GoString() string {
+ return s.String()
+}
+
+// A complex type that contains information about the Amazon S3 bucket from
+// which you want CloudFront to get your media files for distribution.
+type S3Origin struct {
+ _ struct{} `type:"structure"`
+
+ // The DNS name of the S3 origin.
+ DomainName *string `type:"string" required:"true"`
+
+ // Your S3 origin's origin access identity.
+ OriginAccessIdentity *string `type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s S3Origin) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s S3Origin) GoString() string {
+ return s.String()
+}
+
+// A complex type that contains information about the Amazon S3 origin. If the
+// origin is a custom origin, use the CustomOriginConfig element instead.
+type S3OriginConfig struct {
+ _ struct{} `type:"structure"`
+
+ // The CloudFront origin access identity to associate with the origin. Use an
+ // origin access identity to configure the origin so that end users can only
+ // access objects in an Amazon S3 bucket through CloudFront. If you want end
+ // users to be able to access objects using either the CloudFront URL or the
+ // Amazon S3 URL, specify an empty OriginAccessIdentity element. To delete the
+ // origin access identity from an existing distribution, update the distribution
+ // configuration and include an empty OriginAccessIdentity element. To replace
+ // the origin access identity, update the distribution configuration and specify
+ // the new origin access identity. Use the format origin-access-identity/cloudfront/Id
+ // where Id is the value that CloudFront returned in the Id element when you
+ // created the origin access identity.
+ OriginAccessIdentity *string `type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s S3OriginConfig) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s S3OriginConfig) GoString() string {
+ return s.String()
+}
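+
+// Illustrative sketch (not part of the generated API): associating an origin
+// access identity with an S3 origin using the origin-access-identity/cloudfront/Id
+// format described above. E74FTE3AJFJ256A is the example identity ID used
+// elsewhere in these comments, not a real identity.
+//
+//     s3Config := &S3OriginConfig{
+//         OriginAccessIdentity: aws.String("origin-access-identity/cloudfront/E74FTE3AJFJ256A"),
+//     }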
+
+// A complex type that lists the AWS accounts that were included in the TrustedSigners
+// complex type, as well as their active CloudFront key pair IDs, if any.
+type Signer struct {
+ _ struct{} `type:"structure"`
+
+ // Specifies an AWS account that can create signed URLs. Values: self, which
+ // indicates that the AWS account that was used to create the distribution can
+ // create signed URLs, or an AWS account number. Omit the dashes in the account
+ // number.
+ AwsAccountNumber *string `type:"string"`
+
+ // A complex type that lists the active CloudFront key pairs, if any, that are
+ // associated with AwsAccountNumber.
+ KeyPairIds *KeyPairIds `type:"structure"`
+}
+
+// String returns the string representation
+func (s Signer) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s Signer) GoString() string {
+ return s.String()
+}
+
+// A streaming distribution.
+type StreamingDistribution struct {
+ _ struct{} `type:"structure"`
+
+ // CloudFront automatically adds this element to the response only if you've
+ // set up the distribution to serve private content with signed URLs. The element
+ // lists the key pair IDs that CloudFront is aware of for each trusted signer.
+ // The Signer child element lists the AWS account number of the trusted signer
+ // (or an empty Self element if the signer is you). The Signer element also
+ // includes the IDs of any active key pairs associated with the trusted signer's
+ // AWS account. If no KeyPairId element appears for a Signer, that signer can't
+ // create working signed URLs.
+ ActiveTrustedSigners *ActiveTrustedSigners `type:"structure" required:"true"`
+
+ // The domain name corresponding to the streaming distribution. For example:
+ // s5c39gqb8ow64r.cloudfront.net.
+ DomainName *string `type:"string" required:"true"`
+
+ // The identifier for the streaming distribution. For example: EGTXBD79H29TRA8.
+ Id *string `type:"string" required:"true"`
+
+ // The date and time the distribution was last modified.
+ LastModifiedTime *time.Time `type:"timestamp" timestampFormat:"iso8601"`
+
+ // The current status of the streaming distribution. When the status is Deployed,
+ // the distribution's information is fully propagated throughout the Amazon
+ // CloudFront system.
+ Status *string `type:"string" required:"true"`
+
+ // The current configuration information for the streaming distribution.
+ StreamingDistributionConfig *StreamingDistributionConfig `type:"structure" required:"true"`
+}
+
+// String returns the string representation
+func (s StreamingDistribution) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s StreamingDistribution) GoString() string {
+ return s.String()
+}
+
+// The configuration for the streaming distribution.
+type StreamingDistributionConfig struct {
+ _ struct{} `type:"structure"`
+
+ // A complex type that contains information about CNAMEs (alternate domain names),
+ // if any, for this streaming distribution.
+ Aliases *Aliases `type:"structure"`
+
+ // A unique number that ensures the request can't be replayed. If the CallerReference
+ // is new (no matter the content of the StreamingDistributionConfig object),
+ // a new streaming distribution is created. If the CallerReference is a value
+ // you already sent in a previous request to create a streaming distribution,
+ // and the content of the StreamingDistributionConfig is identical to the original
+ // request (ignoring white space), the response includes the same information
+ // returned to the original request. If the CallerReference is a value you already
+ // sent in a previous request to create a streaming distribution but the content
+ // of the StreamingDistributionConfig is different from the original request,
+ // CloudFront returns a DistributionAlreadyExists error.
+ CallerReference *string `type:"string" required:"true"`
+
+ // Any comments you want to include about the streaming distribution.
+ Comment *string `type:"string" required:"true"`
+
+ // Whether the streaming distribution is enabled to accept end user requests
+ // for content.
+ Enabled *bool `type:"boolean" required:"true"`
+
+ // A complex type that controls whether access logs are written for the streaming
+ // distribution.
+ Logging *StreamingLoggingConfig `type:"structure"`
+
+ // A complex type that contains information about price class for this streaming
+ // distribution.
+ PriceClass *string `type:"string" enum:"PriceClass"`
+
+ // A complex type that contains information about the Amazon S3 bucket from
+ // which you want CloudFront to get your media files for distribution.
+ S3Origin *S3Origin `type:"structure" required:"true"`
+
+ // A complex type that specifies the AWS accounts, if any, that you want to
+ // allow to create signed URLs for private content. If you want to require signed
+ // URLs in requests for objects in the target origin that match the PathPattern
+ // for this cache behavior, specify true for Enabled, and specify the applicable
+ // values for Quantity and Items. For more information, go to Using a Signed
+ // URL to Serve Private Content in the Amazon CloudFront Developer Guide. If
+ // you don't want to require signed URLs in requests for objects that match
+ // PathPattern, specify false for Enabled and 0 for Quantity. Omit Items. To
+ // add, change, or remove one or more trusted signers, change Enabled to true
+ // (if it's currently false), change Quantity as applicable, and specify all
+ // of the trusted signers that you want to include in the updated distribution.
+ TrustedSigners *TrustedSigners `type:"structure" required:"true"`
+}
+
+// String returns the string representation
+func (s StreamingDistributionConfig) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s StreamingDistributionConfig) GoString() string {
+ return s.String()
+}
+
+// A streaming distribution list.
+type StreamingDistributionList struct {
+ _ struct{} `type:"structure"`
+
+ // A flag that indicates whether more streaming distributions remain to be listed.
+ // If your results were truncated, you can make a follow-up pagination request
+ // using the Marker request parameter to retrieve more distributions in the
+ // list.
+ IsTruncated *bool `type:"boolean" required:"true"`
+
+ // A complex type that contains one StreamingDistributionSummary element for
+ // each distribution that was created by the current AWS account.
+ Items []*StreamingDistributionSummary `locationNameList:"StreamingDistributionSummary" type:"list"`
+
+ // The value you provided for the Marker request parameter.
+ Marker *string `type:"string" required:"true"`
+
+ // The value you provided for the MaxItems request parameter.
+ MaxItems *int64 `type:"integer" required:"true"`
+
+ // If IsTruncated is true, this element is present and contains the value you
+ // can use for the Marker request parameter to continue listing your streaming
+ // distributions where they left off.
+ NextMarker *string `type:"string"`
+
+ // The number of streaming distributions that were created by the current AWS
+ // account.
+ Quantity *int64 `type:"integer" required:"true"`
+}
+
+// String returns the string representation
+func (s StreamingDistributionList) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s StreamingDistributionList) GoString() string {
+ return s.String()
+}
+
+// A summary of the information for an Amazon CloudFront streaming distribution.
+type StreamingDistributionSummary struct {
+ _ struct{} `type:"structure"`
+
+ // A complex type that contains information about CNAMEs (alternate domain names),
+ // if any, for this streaming distribution.
+ Aliases *Aliases `type:"structure" required:"true"`
+
+ // The comment originally specified when this distribution was created.
+ Comment *string `type:"string" required:"true"`
+
+ // The domain name corresponding to the distribution. For example: d604721fxaaqy9.cloudfront.net.
+ DomainName *string `type:"string" required:"true"`
+
+ // Whether the distribution is enabled to accept end user requests for content.
+ Enabled *bool `type:"boolean" required:"true"`
+
+ // The identifier for the distribution. For example: EDFDVBD632BHDS5.
+ Id *string `type:"string" required:"true"`
+
+ // The date and time the distribution was last modified.
+ LastModifiedTime *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"`
+
+ PriceClass *string `type:"string" required:"true" enum:"PriceClass"`
+
+ // A complex type that contains information about the Amazon S3 bucket from
+ // which you want CloudFront to get your media files for distribution.
+ S3Origin *S3Origin `type:"structure" required:"true"`
+
+ // Indicates the current status of the distribution. When the status is Deployed,
+ // the distribution's information is fully propagated throughout the Amazon
+ // CloudFront system.
+ Status *string `type:"string" required:"true"`
+
+ // A complex type that specifies the AWS accounts, if any, that you want to
+ // allow to create signed URLs for private content. If you want to require signed
+ // URLs in requests for objects in the target origin that match the PathPattern
+ // for this cache behavior, specify true for Enabled, and specify the applicable
+ // values for Quantity and Items. For more information, go to Using a Signed
+ // URL to Serve Private Content in the Amazon CloudFront Developer Guide. If
+ // you don't want to require signed URLs in requests for objects that match
+ // PathPattern, specify false for Enabled and 0 for Quantity. Omit Items. To
+ // add, change, or remove one or more trusted signers, change Enabled to true
+ // (if it's currently false), change Quantity as applicable, and specify all
+ // of the trusted signers that you want to include in the updated distribution.
+ TrustedSigners *TrustedSigners `type:"structure" required:"true"`
+}
+
+// String returns the string representation
+func (s StreamingDistributionSummary) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s StreamingDistributionSummary) GoString() string {
+ return s.String()
+}
+
+// A complex type that controls whether access logs are written for this streaming
+// distribution.
+type StreamingLoggingConfig struct {
+ _ struct{} `type:"structure"`
+
+ // The Amazon S3 bucket to store the access logs in, for example, myawslogbucket.s3.amazonaws.com.
+ Bucket *string `type:"string" required:"true"`
+
+ // Specifies whether you want CloudFront to save access logs to an Amazon S3
+ // bucket. If you do not want to enable logging when you create a streaming
+ // distribution or if you want to disable logging for an existing streaming
+ // distribution, specify false for Enabled, and specify empty Bucket and Prefix
+ // elements. If you specify false for Enabled but you specify values for Bucket
+ // and Prefix, the values are automatically deleted.
+ Enabled *bool `type:"boolean" required:"true"`
+
+ // An optional string that you want CloudFront to prefix to the access log filenames
+ // for this streaming distribution, for example, myprefix/. If you want to enable
+ // logging, but you do not want to specify a prefix, you still must include
+ // an empty Prefix element in the Logging element.
+ Prefix *string `type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s StreamingLoggingConfig) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s StreamingLoggingConfig) GoString() string {
+ return s.String()
+}
+
+// A complex type that specifies the AWS accounts, if any, that you want to
+// allow to create signed URLs for private content. If you want to require signed
+// URLs in requests for objects in the target origin that match the PathPattern
+// for this cache behavior, specify true for Enabled, and specify the applicable
+// values for Quantity and Items. For more information, go to Using a Signed
+// URL to Serve Private Content in the Amazon CloudFront Developer Guide. If
+// you don't want to require signed URLs in requests for objects that match
+// PathPattern, specify false for Enabled and 0 for Quantity. Omit Items. To
+// add, change, or remove one or more trusted signers, change Enabled to true
+// (if it's currently false), change Quantity as applicable, and specify all
+// of the trusted signers that you want to include in the updated distribution.
+type TrustedSigners struct {
+ _ struct{} `type:"structure"`
+
+ // Specifies whether you want to require end users to use signed URLs to access
+ // the files specified by PathPattern and TargetOriginId.
+ Enabled *bool `type:"boolean" required:"true"`
+
+ // Optional: A complex type that contains trusted signers for this cache behavior.
+ // If Quantity is 0, you can omit Items.
+ Items []*string `locationNameList:"AwsAccountNumber" type:"list"`
+
+ // The number of trusted signers for this cache behavior.
+ Quantity *int64 `type:"integer" required:"true"`
+}
+
+// String returns the string representation
+func (s TrustedSigners) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s TrustedSigners) GoString() string {
+ return s.String()
+}
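The TrustedSigners docs above require Quantity to match the number of entries in Items. A minimal sketch of a helper that keeps the two consistent, assuming the aws.Int64 and aws.StringSlice helpers from the SDK's aws package:

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/cloudfront"
)

// requireSignedURLs enables signed URLs for the given AWS account numbers,
// keeping Quantity in sync with len(Items) as the field docs require.
func requireSignedURLs(accounts []string) *cloudfront.TrustedSigners {
	return &cloudfront.TrustedSigners{
		Enabled:  aws.Bool(true),
		Quantity: aws.Int64(int64(len(accounts))),
		Items:    aws.StringSlice(accounts),
	}
}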
+
+// The request to update an origin access identity.
+type UpdateCloudFrontOriginAccessIdentityInput struct {
+ _ struct{} `type:"structure" payload:"CloudFrontOriginAccessIdentityConfig"`
+
+ // The identity's configuration information.
+ CloudFrontOriginAccessIdentityConfig *OriginAccessIdentityConfig `locationName:"CloudFrontOriginAccessIdentityConfig" type:"structure" required:"true"`
+
+ // The identity's id.
+ Id *string `location:"uri" locationName:"Id" type:"string" required:"true"`
+
+ // The value of the ETag header you received when retrieving the identity's
+ // configuration. For example: E2QWRUHAPOMQZL.
+ IfMatch *string `location:"header" locationName:"If-Match" type:"string"`
+}
+
+// String returns the string representation
+func (s UpdateCloudFrontOriginAccessIdentityInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s UpdateCloudFrontOriginAccessIdentityInput) GoString() string {
+ return s.String()
+}
+
+// The returned result of the corresponding request.
+type UpdateCloudFrontOriginAccessIdentityOutput struct {
+ _ struct{} `type:"structure" payload:"CloudFrontOriginAccessIdentity"`
+
+ // The origin access identity's information.
+ CloudFrontOriginAccessIdentity *OriginAccessIdentity `type:"structure"`
+
+ // The current version of the configuration. For example: E2QWRUHAPOMQZL.
+ ETag *string `location:"header" locationName:"ETag" type:"string"`
+}
+
+// String returns the string representation
+func (s UpdateCloudFrontOriginAccessIdentityOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s UpdateCloudFrontOriginAccessIdentityOutput) GoString() string {
+ return s.String()
+}
+
+// The request to update a distribution.
+type UpdateDistributionInput struct {
+ _ struct{} `type:"structure" payload:"DistributionConfig"`
+
+ // The distribution's configuration information.
+ DistributionConfig *DistributionConfig `locationName:"DistributionConfig" type:"structure" required:"true"`
+
+ // The distribution's id.
+ Id *string `location:"uri" locationName:"Id" type:"string" required:"true"`
+
+ // The value of the ETag header you received when retrieving the distribution's
+ // configuration. For example: E2QWRUHAPOMQZL.
+ IfMatch *string `location:"header" locationName:"If-Match" type:"string"`
+}
+
+// String returns the string representation
+func (s UpdateDistributionInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s UpdateDistributionInput) GoString() string {
+ return s.String()
+}
+
+// The returned result of the corresponding request.
+type UpdateDistributionOutput struct {
+ _ struct{} `type:"structure" payload:"Distribution"`
+
+ // The distribution's information.
+ Distribution *Distribution `type:"structure"`
+
+ // The current version of the configuration. For example: E2QWRUHAPOMQZL.
+ ETag *string `location:"header" locationName:"ETag" type:"string"`
+}
+
+// String returns the string representation
+func (s UpdateDistributionOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s UpdateDistributionOutput) GoString() string {
+ return s.String()
+}
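The IfMatch field carries the ETag returned when the current configuration was read, so updates follow a read-modify-write cycle. A rough sketch, assuming the GetDistributionConfig operation defined earlier in this file and reusing the aws and cloudfront imports shown above:

// setDistributionComment fetches the current config and its ETag, changes one
// field, then sends the ETag back as If-Match so CloudFront can reject
// concurrent modifications.
func setDistributionComment(svc *cloudfront.CloudFront, id, comment string) error {
	cfg, err := svc.GetDistributionConfig(&cloudfront.GetDistributionConfigInput{
		Id: aws.String(id),
	})
	if err != nil {
		return err
	}
	cfg.DistributionConfig.Comment = aws.String(comment)
	_, err = svc.UpdateDistribution(&cloudfront.UpdateDistributionInput{
		Id:                 aws.String(id),
		IfMatch:            cfg.ETag,
		DistributionConfig: cfg.DistributionConfig,
	})
	return err
}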
+
+// The request to update a streaming distribution.
+type UpdateStreamingDistributionInput struct {
+ _ struct{} `type:"structure" payload:"StreamingDistributionConfig"`
+
+ // The streaming distribution's id.
+ Id *string `location:"uri" locationName:"Id" type:"string" required:"true"`
+
+ // The value of the ETag header you received when retrieving the streaming distribution's
+ // configuration. For example: E2QWRUHAPOMQZL.
+ IfMatch *string `location:"header" locationName:"If-Match" type:"string"`
+
+ // The streaming distribution's configuration information.
+ StreamingDistributionConfig *StreamingDistributionConfig `locationName:"StreamingDistributionConfig" type:"structure" required:"true"`
+}
+
+// String returns the string representation
+func (s UpdateStreamingDistributionInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s UpdateStreamingDistributionInput) GoString() string {
+ return s.String()
+}
+
+// The returned result of the corresponding request.
+type UpdateStreamingDistributionOutput struct {
+ _ struct{} `type:"structure" payload:"StreamingDistribution"`
+
+ // The current version of the configuration. For example: E2QWRUHAPOMQZL.
+ ETag *string `location:"header" locationName:"ETag" type:"string"`
+
+ // The streaming distribution's information.
+ StreamingDistribution *StreamingDistribution `type:"structure"`
+}
+
+// String returns the string representation
+func (s UpdateStreamingDistributionOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s UpdateStreamingDistributionOutput) GoString() string {
+ return s.String()
+}
+
+// A complex type that contains information about viewer certificates for this
+// distribution.
+type ViewerCertificate struct {
+ _ struct{} `type:"structure"`
+
+ // If you want viewers to use HTTPS to request your objects and you're using
+ // an alternate domain name in your object URLs (for example, https://example.com/logo.jpg),
+ // specify the ACM certificate ARN of the custom viewer certificate for this
+ // distribution. Specify either this value, IAMCertificateId, or CloudFrontDefaultCertificate.
+ ACMCertificateArn *string `type:"string"`
+
+ // Note: this field is deprecated. Please use one of [ACMCertificateArn, IAMCertificateId,
+ // CloudFrontDefaultCertificate].
+ Certificate *string `deprecated:"true" type:"string"`
+
+ // Note: this field is deprecated. Please use one of [ACMCertificateArn, IAMCertificateId,
+ // CloudFrontDefaultCertificate].
+ CertificateSource *string `deprecated:"true" type:"string" enum:"CertificateSource"`
+
+ // If you want viewers to use HTTPS to request your objects and you're using
+ // the CloudFront domain name of your distribution in your object URLs (for
+ // example, https://d111111abcdef8.cloudfront.net/logo.jpg), set to true. Omit
+ // this value if you are setting an ACMCertificateArn or IAMCertificateId.
+ CloudFrontDefaultCertificate *bool `type:"boolean"`
+
+ // If you want viewers to use HTTPS to request your objects and you're using
+ // an alternate domain name in your object URLs (for example, https://example.com/logo.jpg),
+ // specify the IAM certificate identifier of the custom viewer certificate for
+ // this distribution. Specify either this value, ACMCertificateArn, or CloudFrontDefaultCertificate.
+ IAMCertificateId *string `type:"string"`
+
+ // Specify the minimum version of the SSL protocol that you want CloudFront
+ // to use, SSLv3 or TLSv1, for HTTPS connections. CloudFront will serve your
+ // objects only to browsers or devices that support at least the SSL version
+ // that you specify. The TLSv1 protocol is more secure, so we recommend that
+ // you specify SSLv3 only if your users are using browsers or devices that don't
+ // support TLSv1. If you're using a custom certificate (if you specify a value
+ // for IAMCertificateId) and if you're using dedicated IP (if you specify vip
+ // for SSLSupportMethod), you can choose SSLv3 or TLSv1 as the MinimumProtocolVersion.
+ // If you're using a custom certificate (if you specify a value for IAMCertificateId)
+ // and if you're using SNI (if you specify sni-only for SSLSupportMethod), you
+ // must specify TLSv1 for MinimumProtocolVersion.
+ MinimumProtocolVersion *string `type:"string" enum:"MinimumProtocolVersion"`
+
+ // If you specify a value for IAMCertificateId, you must also specify how you
+ // want CloudFront to serve HTTPS requests. Valid values are vip and sni-only.
+ // If you specify vip, CloudFront uses dedicated IP addresses for your content
+ // and can respond to HTTPS requests from any viewer. However, you must request
+ // permission to use this feature, and you incur additional monthly charges.
+ // If you specify sni-only, CloudFront can only respond to HTTPS requests from
+ // viewers that support Server Name Indication (SNI). All modern browsers support
+ // SNI, but some browsers still in use don't support SNI. Do not specify a value
+ // for SSLSupportMethod if you specified true for CloudFrontDefaultCertificate.
+ SSLSupportMethod *string `type:"string" enum:"SSLSupportMethod"`
+}
+
+// String returns the string representation
+func (s ViewerCertificate) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ViewerCertificate) GoString() string {
+ return s.String()
+}
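For the common case described above of an ACM certificate served over SNI, the enum constants defined near the end of this file can be used directly. A minimal sketch, reusing the aws and cloudfront imports from the earlier examples:

// acmViewerCertificate configures HTTPS with an ACM certificate over SNI and
// TLSv1 as the minimum protocol version, as the MinimumProtocolVersion docs
// above require for sni-only.
func acmViewerCertificate(certARN string) *cloudfront.ViewerCertificate {
	return &cloudfront.ViewerCertificate{
		ACMCertificateArn:      aws.String(certARN),
		SSLSupportMethod:       aws.String(cloudfront.SSLSupportMethodSniOnly),
		MinimumProtocolVersion: aws.String(cloudfront.MinimumProtocolVersionTlsv1),
	}
}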
+
+const (
+ // @enum CertificateSource
+ CertificateSourceCloudfront = "cloudfront"
+ // @enum CertificateSource
+ CertificateSourceIam = "iam"
+ // @enum CertificateSource
+ CertificateSourceAcm = "acm"
+)
+
+const (
+ // @enum GeoRestrictionType
+ GeoRestrictionTypeBlacklist = "blacklist"
+ // @enum GeoRestrictionType
+ GeoRestrictionTypeWhitelist = "whitelist"
+ // @enum GeoRestrictionType
+ GeoRestrictionTypeNone = "none"
+)
+
+const (
+ // @enum ItemSelection
+ ItemSelectionNone = "none"
+ // @enum ItemSelection
+ ItemSelectionWhitelist = "whitelist"
+ // @enum ItemSelection
+ ItemSelectionAll = "all"
+)
+
+const (
+ // @enum Method
+ MethodGet = "GET"
+ // @enum Method
+ MethodHead = "HEAD"
+ // @enum Method
+ MethodPost = "POST"
+ // @enum Method
+ MethodPut = "PUT"
+ // @enum Method
+ MethodPatch = "PATCH"
+ // @enum Method
+ MethodOptions = "OPTIONS"
+ // @enum Method
+ MethodDelete = "DELETE"
+)
+
+const (
+ // @enum MinimumProtocolVersion
+ MinimumProtocolVersionSslv3 = "SSLv3"
+ // @enum MinimumProtocolVersion
+ MinimumProtocolVersionTlsv1 = "TLSv1"
+)
+
+const (
+ // @enum OriginProtocolPolicy
+ OriginProtocolPolicyHttpOnly = "http-only"
+ // @enum OriginProtocolPolicy
+ OriginProtocolPolicyMatchViewer = "match-viewer"
+ // @enum OriginProtocolPolicy
+ OriginProtocolPolicyHttpsOnly = "https-only"
+)
+
+const (
+ // @enum PriceClass
+ PriceClassPriceClass100 = "PriceClass_100"
+ // @enum PriceClass
+ PriceClassPriceClass200 = "PriceClass_200"
+ // @enum PriceClass
+ PriceClassPriceClassAll = "PriceClass_All"
+)
+
+const (
+ // @enum SSLSupportMethod
+ SSLSupportMethodSniOnly = "sni-only"
+ // @enum SSLSupportMethod
+ SSLSupportMethodVip = "vip"
+)
+
+const (
+ // @enum SslProtocol
+ SslProtocolSslv3 = "SSLv3"
+ // @enum SslProtocol
+ SslProtocolTlsv1 = "TLSv1"
+ // @enum SslProtocol
+ SslProtocolTlsv11 = "TLSv1.1"
+ // @enum SslProtocol
+ SslProtocolTlsv12 = "TLSv1.2"
+)
+
+const (
+ // @enum ViewerProtocolPolicy
+ ViewerProtocolPolicyAllowAll = "allow-all"
+ // @enum ViewerProtocolPolicy
+ ViewerProtocolPolicyHttpsOnly = "https-only"
+ // @enum ViewerProtocolPolicy
+ ViewerProtocolPolicyRedirectToHttps = "redirect-to-https"
+)
diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudfront/service.go b/vendor/github.com/aws/aws-sdk-go/service/cloudfront/service.go
new file mode 100644
index 000000000000..51b73c6efef8
--- /dev/null
+++ b/vendor/github.com/aws/aws-sdk-go/service/cloudfront/service.go
@@ -0,0 +1,86 @@
+// THIS FILE IS AUTOMATICALLY GENERATED. DO NOT EDIT.
+
+package cloudfront
+
+import (
+ "github.com/aws/aws-sdk-go/aws"
+ "github.com/aws/aws-sdk-go/aws/client"
+ "github.com/aws/aws-sdk-go/aws/client/metadata"
+ "github.com/aws/aws-sdk-go/aws/request"
+ "github.com/aws/aws-sdk-go/private/protocol/restxml"
+ "github.com/aws/aws-sdk-go/private/signer/v4"
+)
+
+// CloudFront is a client for CloudFront.
+// The service client's operations are safe to be used concurrently.
+// It is not safe to mutate any of the client's properties though.
+type CloudFront struct {
+ *client.Client
+}
+
+// Used for custom client initialization logic
+var initClient func(*client.Client)
+
+// Used for custom request initialization logic
+var initRequest func(*request.Request)
+
+// A ServiceName is the name of the service the client will make API calls to.
+const ServiceName = "cloudfront"
+
+// New creates a new instance of the CloudFront client with a session.
+// If additional configuration is needed for the client instance use the optional
+// aws.Config parameter to add your extra config.
+//
+// Example:
+// // Create a CloudFront client from just a session.
+// svc := cloudfront.New(mySession)
+//
+// // Create a CloudFront client with additional configuration
+// svc := cloudfront.New(mySession, aws.NewConfig().WithRegion("us-west-2"))
+func New(p client.ConfigProvider, cfgs ...*aws.Config) *CloudFront {
+ c := p.ClientConfig(ServiceName, cfgs...)
+ return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion)
+}
+
+// newClient creates, initializes and returns a new service client instance.
+func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion string) *CloudFront {
+ svc := &CloudFront{
+ Client: client.New(
+ cfg,
+ metadata.ClientInfo{
+ ServiceName: ServiceName,
+ SigningRegion: signingRegion,
+ Endpoint: endpoint,
+ APIVersion: "2016-01-28",
+ },
+ handlers,
+ ),
+ }
+
+ // Handlers
+ svc.Handlers.Sign.PushBack(v4.Sign)
+ svc.Handlers.Build.PushBackNamed(restxml.BuildHandler)
+ svc.Handlers.Unmarshal.PushBackNamed(restxml.UnmarshalHandler)
+ svc.Handlers.UnmarshalMeta.PushBackNamed(restxml.UnmarshalMetaHandler)
+ svc.Handlers.UnmarshalError.PushBackNamed(restxml.UnmarshalErrorHandler)
+
+ // Run custom client initialization if present
+ if initClient != nil {
+ initClient(svc.Client)
+ }
+
+ return svc
+}
+
+// newRequest creates a new request for a CloudFront operation and runs any
+// custom request initialization.
+func (c *CloudFront) newRequest(op *request.Operation, params, data interface{}) *request.Request {
+ req := c.NewRequest(op, params, data)
+
+ // Run custom request initialization if present
+ if initRequest != nil {
+ initRequest(req)
+ }
+
+ return req
+}
diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudfront/waiters.go b/vendor/github.com/aws/aws-sdk-go/service/cloudfront/waiters.go
new file mode 100644
index 000000000000..7a0525d1770b
--- /dev/null
+++ b/vendor/github.com/aws/aws-sdk-go/service/cloudfront/waiters.go
@@ -0,0 +1,76 @@
+// THIS FILE IS AUTOMATICALLY GENERATED. DO NOT EDIT.
+
+package cloudfront
+
+import (
+ "github.com/aws/aws-sdk-go/private/waiter"
+)
+
+func (c *CloudFront) WaitUntilDistributionDeployed(input *GetDistributionInput) error {
+ waiterCfg := waiter.Config{
+ Operation: "GetDistribution",
+ Delay: 60,
+ MaxAttempts: 25,
+ Acceptors: []waiter.WaitAcceptor{
+ {
+ State: "success",
+ Matcher: "path",
+ Argument: "Distribution.Status",
+ Expected: "Deployed",
+ },
+ },
+ }
+
+ w := waiter.Waiter{
+ Client: c,
+ Input: input,
+ Config: waiterCfg,
+ }
+ return w.Wait()
+}
+
+func (c *CloudFront) WaitUntilInvalidationCompleted(input *GetInvalidationInput) error {
+ waiterCfg := waiter.Config{
+ Operation: "GetInvalidation",
+ Delay: 20,
+ MaxAttempts: 30,
+ Acceptors: []waiter.WaitAcceptor{
+ {
+ State: "success",
+ Matcher: "path",
+ Argument: "Invalidation.Status",
+ Expected: "Completed",
+ },
+ },
+ }
+
+ w := waiter.Waiter{
+ Client: c,
+ Input: input,
+ Config: waiterCfg,
+ }
+ return w.Wait()
+}
+
+func (c *CloudFront) WaitUntilStreamingDistributionDeployed(input *GetStreamingDistributionInput) error {
+ waiterCfg := waiter.Config{
+ Operation: "GetStreamingDistribution",
+ Delay: 60,
+ MaxAttempts: 25,
+ Acceptors: []waiter.WaitAcceptor{
+ {
+ State: "success",
+ Matcher: "path",
+ Argument: "StreamingDistribution.Status",
+ Expected: "Deployed",
+ },
+ },
+ }
+
+ w := waiter.Waiter{
+ Client: c,
+ Input: input,
+ Config: waiterCfg,
+ }
+ return w.Wait()
+}
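Each waiter above polls its Get operation until the Status path matches the expected value. A usage sketch, assuming session.New from the SDK's session package:

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudfront"
)

// waitForDeployed blocks until the distribution reaches the Deployed state,
// polling GetDistribution every 60 seconds for up to 25 attempts as configured
// in WaitUntilDistributionDeployed above.
func waitForDeployed(id string) error {
	svc := cloudfront.New(session.New())
	return svc.WaitUntilDistributionDeployed(&cloudfront.GetDistributionInput{
		Id: aws.String(id),
	})
}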
diff --git a/vendor/github.com/aws/aws-sdk-go/service/codedeploy/api.go b/vendor/github.com/aws/aws-sdk-go/service/codedeploy/api.go
index 7bc6da339abc..c4cf4891961a 100644
--- a/vendor/github.com/aws/aws-sdk-go/service/codedeploy/api.go
+++ b/vendor/github.com/aws/aws-sdk-go/service/codedeploy/api.go
@@ -2803,7 +2803,7 @@ type ListDeploymentsInput struct {
// queued deployments in the resulting list. In Progress: Include in-progress
// deployments in the resulting list. Succeeded: Include successful deployments
// in the resulting list. Failed: Include failed deployments in the resulting
- // list. Aborted: Include aborted deployments in the resulting list.
+ // list. Stopped: Include stopped deployments in the resulting list.
IncludeOnlyStatuses []*string `locationName:"includeOnlyStatuses" type:"list"`
// An identifier returned from the previous list deployments call. It can be
diff --git a/vendor/github.com/aws/aws-sdk-go/service/elasticache/api.go b/vendor/github.com/aws/aws-sdk-go/service/elasticache/api.go
index c381d3381137..907aa394d55f 100644
--- a/vendor/github.com/aws/aws-sdk-go/service/elasticache/api.go
+++ b/vendor/github.com/aws/aws-sdk-go/service/elasticache/api.go
@@ -1884,8 +1884,6 @@ type CopySnapshotInput struct {
// The name of an existing snapshot from which to copy.
SourceSnapshotName *string `type:"string" required:"true"`
- TargetBucket *string `type:"string"`
-
// A name for the copied snapshot.
TargetSnapshotName *string `type:"string" required:"true"`
}
@@ -4737,7 +4735,7 @@ type ResetCacheParameterGroupInput struct {
// An array of parameter names to be reset. If you are not resetting the entire
// cache parameter group, you must specify at least one parameter name.
- ParameterNameValues []*ParameterNameValue `locationNameList:"ParameterNameValue" type:"list"`
+ ParameterNameValues []*ParameterNameValue `locationNameList:"ParameterNameValue" type:"list" required:"true"`
// If true, all parameters in the cache parameter group will be reset to default
// values. If false, no such action occurs.
diff --git a/vendor/github.com/aws/aws-sdk-go/service/elb/waiters.go b/vendor/github.com/aws/aws-sdk-go/service/elb/waiters.go
index 5d5755d049a0..b1c9a526aaec 100644
--- a/vendor/github.com/aws/aws-sdk-go/service/elb/waiters.go
+++ b/vendor/github.com/aws/aws-sdk-go/service/elb/waiters.go
@@ -29,6 +29,35 @@ func (c *ELB) WaitUntilAnyInstanceInService(input *DescribeInstanceHealthInput)
return w.Wait()
}
+func (c *ELB) WaitUntilInstanceDeregistered(input *DescribeInstanceHealthInput) error {
+ waiterCfg := waiter.Config{
+ Operation: "DescribeInstanceHealth",
+ Delay: 15,
+ MaxAttempts: 40,
+ Acceptors: []waiter.WaitAcceptor{
+ {
+ State: "success",
+ Matcher: "pathAll",
+ Argument: "InstanceStates[].State",
+ Expected: "OutOfService",
+ },
+ {
+ State: "success",
+ Matcher: "error",
+ Argument: "",
+ Expected: "InvalidInstance",
+ },
+ },
+ }
+
+ w := waiter.Waiter{
+ Client: c,
+ Input: input,
+ Config: waiterCfg,
+ }
+ return w.Wait()
+}
+
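A short usage sketch for the new deregistration waiter, assuming the DescribeInstanceHealthInput and Instance types defined elsewhere in this package:

// waitForDeregistration waits until the instance reports OutOfService or the
// API returns InvalidInstance, matching the two acceptors configured above.
func waitForDeregistration(svc *elb.ELB, lbName, instanceID string) error {
	return svc.WaitUntilInstanceDeregistered(&elb.DescribeInstanceHealthInput{
		LoadBalancerName: aws.String(lbName),
		Instances: []*elb.Instance{
			{InstanceId: aws.String(instanceID)},
		},
	})
}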
func (c *ELB) WaitUntilInstanceInService(input *DescribeInstanceHealthInput) error {
waiterCfg := waiter.Config{
Operation: "DescribeInstanceHealth",
diff --git a/vendor/github.com/aws/aws-sdk-go/service/redshift/api.go b/vendor/github.com/aws/aws-sdk-go/service/redshift/api.go
index 69315d35e8a0..0a250ba953eb 100644
--- a/vendor/github.com/aws/aws-sdk-go/service/redshift/api.go
+++ b/vendor/github.com/aws/aws-sdk-go/service/redshift/api.go
@@ -1696,9 +1696,9 @@ func (c *Redshift) DescribeTableRestoreStatusRequest(input *DescribeTableRestore
// Lists the status of one or more table restore requests made using the RestoreTableFromClusterSnapshot
// API action. If you don't specify a value for the TableRestoreRequestId parameter,
-// then DescribeTableRestoreStatus returns the status of all in-progress table
-// restore requests. Otherwise DescribeTableRestoreStatus returns the status
-// of the table specified by TableRestoreRequestId.
+// then DescribeTableRestoreStatus returns the status of all table restore requests
+// ordered by the date and time of the request in ascending order. Otherwise
+// DescribeTableRestoreStatus returns the status of the table specified by TableRestoreRequestId.
func (c *Redshift) DescribeTableRestoreStatus(input *DescribeTableRestoreStatusInput) (*DescribeTableRestoreStatusOutput, error) {
req, out := c.DescribeTableRestoreStatusRequest(input)
err := req.Send()
@@ -1903,6 +1903,36 @@ func (c *Redshift) ModifyCluster(input *ModifyClusterInput) (*ModifyClusterOutpu
return out, err
}
+const opModifyClusterIamRoles = "ModifyClusterIamRoles"
+
+// ModifyClusterIamRolesRequest generates a request for the ModifyClusterIamRoles operation.
+func (c *Redshift) ModifyClusterIamRolesRequest(input *ModifyClusterIamRolesInput) (req *request.Request, output *ModifyClusterIamRolesOutput) {
+ op := &request.Operation{
+ Name: opModifyClusterIamRoles,
+ HTTPMethod: "POST",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &ModifyClusterIamRolesInput{}
+ }
+
+ req = c.newRequest(op, input, output)
+ output = &ModifyClusterIamRolesOutput{}
+ req.Data = output
+ return
+}
+
+// Modifies the list of AWS Identity and Access Management (IAM) roles that
+// can be used by the cluster to access other AWS services.
+//
+// A cluster can have up to 10 IAM roles associated at any time.
+func (c *Redshift) ModifyClusterIamRoles(input *ModifyClusterIamRolesInput) (*ModifyClusterIamRolesOutput, error) {
+ req, out := c.ModifyClusterIamRolesRequest(input)
+ err := req.Send()
+ return out, err
+}
+
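A usage sketch for the new operation, assuming the ModifyClusterIamRolesInput and output types defined later in this file:

// attachClusterRole associates one IAM role ARN with a cluster and returns the
// updated cluster description.
func attachClusterRole(svc *redshift.Redshift, clusterID, roleARN string) (*redshift.Cluster, error) {
	out, err := svc.ModifyClusterIamRoles(&redshift.ModifyClusterIamRolesInput{
		ClusterIdentifier: aws.String(clusterID),
		AddIamRoles:       []*string{aws.String(roleARN)},
	})
	if err != nil {
		return nil, err
	}
	return out.Cluster, nil
}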
const opModifyClusterParameterGroup = "ModifyClusterParameterGroup"
// ModifyClusterParameterGroupRequest generates a request for the ModifyClusterParameterGroup operation.
@@ -2492,6 +2522,10 @@ type Cluster struct {
// Values: active, applying
HsmStatus *HsmStatus `type:"structure"`
+ // A list of AWS Identity and Access Management (IAM) roles that can be used
+ // by the cluster to access other AWS services.
+ IamRoles []*ClusterIamRole `locationNameList:"ClusterIamRole" type:"list"`
+
// The AWS Key Management Service (KMS) key ID of the encryption key used to
// encrypt data in the cluster.
KmsKeyId *string `type:"string"`
@@ -2545,6 +2579,34 @@ func (s Cluster) GoString() string {
return s.String()
}
+// An AWS Identity and Access Management (IAM) role that can be used by the
+// associated Amazon Redshift cluster to access other AWS services.
+type ClusterIamRole struct {
+ _ struct{} `type:"structure"`
+
+ // Describes the status of the IAM role's association with an Amazon Redshift
+ // cluster.
+ //
+ // The following are possible statuses and descriptions. in-sync: The role
+ // is available for use by the cluster. adding: The role is in the process of
+ // being associated with the cluster. removing: The role is in the process of
+ // being disassociated from the cluster.
+ ApplyStatus *string `type:"string"`
+
+ // The Amazon Resource Name (ARN) of the IAM role. For example, arn:aws:iam::123456789012:role/RedshiftCopyUnload.
+ IamRoleArn *string `type:"string"`
+}
+
+// String returns the string representation
+func (s ClusterIamRole) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ClusterIamRole) GoString() string {
+ return s.String()
+}
+
// The identifier of a node in a cluster.
type ClusterNode struct {
_ struct{} `type:"structure"`
@@ -3011,6 +3073,14 @@ type CreateClusterInput struct {
// the Amazon Redshift cluster can use to retrieve and store keys in an HSM.
HsmConfigurationIdentifier *string `type:"string"`
+ // A list of AWS Identity and Access Management (IAM) roles that can be used
+ // by the cluster to access other AWS services. You must supply the IAM roles
+ // in their Amazon Resource Name (ARN) format. You can supply up to 10 IAM roles
+ // in a single request.
+ //
+ // A cluster can have up to 10 IAM roles associated at any time.
+ IamRoles []*string `locationNameList:"IamRoleArn" type:"list"`
+
// The AWS Key Management Service (KMS) key ID of the encryption key that you
// want to use to encrypt data in the cluster.
KmsKeyId *string `type:"string"`
@@ -6005,6 +6075,50 @@ func (s LoggingStatus) GoString() string {
return s.String()
}
+type ModifyClusterIamRolesInput struct {
+ _ struct{} `type:"structure"`
+
+ // Zero or more IAM roles (in their ARN format) to associate with the cluster.
+ // You can associate up to 10 IAM roles with a single cluster in a single request.
+ AddIamRoles []*string `locationNameList:"IamRoleArn" type:"list"`
+
+ // The unique identifier of the cluster for which you want to associate or disassociate
+ // IAM roles.
+ ClusterIdentifier *string `type:"string" required:"true"`
+
+ // Zero or more IAM roles (in their ARN format) to disassociate from the cluster.
+ // You can disassociate up to 10 IAM roles from a single cluster in a single
+ // request.
+ RemoveIamRoles []*string `locationNameList:"IamRoleArn" type:"list"`
+}
+
+// String returns the string representation
+func (s ModifyClusterIamRolesInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ModifyClusterIamRolesInput) GoString() string {
+ return s.String()
+}
+
+type ModifyClusterIamRolesOutput struct {
+ _ struct{} `type:"structure"`
+
+ // Describes a cluster.
+ Cluster *Cluster `type:"structure"`
+}
+
+// String returns the string representation
+func (s ModifyClusterIamRolesOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ModifyClusterIamRolesOutput) GoString() string {
+ return s.String()
+}
+
type ModifyClusterInput struct {
_ struct{} `type:"structure"`
@@ -6816,6 +6930,14 @@ type RestoreFromClusterSnapshotInput struct {
// the Amazon Redshift cluster can use to retrieve and store keys in an HSM.
HsmConfigurationIdentifier *string `type:"string"`
+ // A list of AWS Identity and Access Management (IAM) roles that can be used
+ // by the cluster to access other AWS services. You must supply the IAM roles
+ // in their Amazon Resource Name (ARN) format. You can supply up to 10 IAM roles
+ // in a single request.
+ //
+ // A cluster can have up to 10 IAM roles associated at any time.
+ IamRoles []*string `locationNameList:"IamRoleArn" type:"list"`
+
// The AWS Key Management Service (KMS) key ID of the encryption key that you
// want to use to encrypt data in the cluster that you restore from a shared
// snapshot.
@@ -6965,7 +7087,8 @@ type RestoreTableFromClusterSnapshotInput struct {
// The name of the source database that contains the table to restore from.
SourceDatabaseName *string `type:"string" required:"true"`
- // The name of the source schema that contains the table to restore from.
+ // The name of the source schema that contains the table to restore from. If
+ // you do not specify a SourceSchemaName value, the default is public.
SourceSchemaName *string `type:"string"`
// The name of the source table to restore from.
@@ -7316,7 +7439,7 @@ type TableRestoreStatus struct {
ClusterIdentifier *string `type:"string"`
// A description of the status of the table restore request. Status values include
- // SUCCEEDED, FAILED, CANCELLED, PENDING, IN_PROGRESS.
+ // SUCCEEDED, FAILED, CANCELED, PENDING, IN_PROGRESS.
Message *string `type:"string"`
// The name of the table to create as a result of the table restore request.
@@ -7343,7 +7466,7 @@ type TableRestoreStatus struct {
// A value that describes the current state of the table restore request.
//
- // Valid Values: SUCCEEDED, FAILED, CANCELLED, PENDING, IN_PROGRESS
+ // Valid Values: SUCCEEDED, FAILED, CANCELED, PENDING, IN_PROGRESS
Status *string `type:"string" enum:"TableRestoreStatusType"`
// The unique identifier for the table restore request.
diff --git a/vendor/github.com/aws/aws-sdk-go/service/route53/waiters.go b/vendor/github.com/aws/aws-sdk-go/service/route53/waiters.go
new file mode 100644
index 000000000000..04786169e2a6
--- /dev/null
+++ b/vendor/github.com/aws/aws-sdk-go/service/route53/waiters.go
@@ -0,0 +1,30 @@
+// THIS FILE IS AUTOMATICALLY GENERATED. DO NOT EDIT.
+
+package route53
+
+import (
+ "github.com/aws/aws-sdk-go/private/waiter"
+)
+
+func (c *Route53) WaitUntilResourceRecordSetsChanged(input *GetChangeInput) error {
+ waiterCfg := waiter.Config{
+ Operation: "GetChange",
+ Delay: 30,
+ MaxAttempts: 60,
+ Acceptors: []waiter.WaitAcceptor{
+ {
+ State: "success",
+ Matcher: "path",
+ Argument: "ChangeInfo.Status",
+ Expected: "INSYNC",
+ },
+ },
+ }
+
+ w := waiter.Waiter{
+ Client: c,
+ Input: input,
+ Config: waiterCfg,
+ }
+ return w.Wait()
+}
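A usage sketch for the Route 53 change waiter, assuming the GetChangeInput type defined elsewhere in this package:

// waitForChange polls GetChange every 30 seconds, up to 60 attempts as
// configured above, until the change reaches the INSYNC status.
func waitForChange(svc *route53.Route53, changeID string) error {
	return svc.WaitUntilResourceRecordSetsChanged(&route53.GetChangeInput{
		Id: aws.String(changeID),
	})
}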
diff --git a/vendor/github.com/aws/aws-sdk-go/service/sqs/api.go b/vendor/github.com/aws/aws-sdk-go/service/sqs/api.go
index 8671d162728c..103abb3c8016 100644
--- a/vendor/github.com/aws/aws-sdk-go/service/sqs/api.go
+++ b/vendor/github.com/aws/aws-sdk-go/service/sqs/api.go
@@ -47,9 +47,7 @@ func (c *SQS) AddPermissionRequest(input *AddPermissionInput) (req *request.Requ
//
// Some API actions take lists of parameters. These lists are specified using
// the param.n notation. Values of n are integers starting from 1. For example,
-// a parameter list with two elements looks like this: &Attribute.1=this
-//
-// &Attribute.2=that
+// a parameter list with two elements looks like this:
func (c *SQS) AddPermission(input *AddPermissionInput) (*AddPermissionOutput, error) {
req, out := c.AddPermissionRequest(input)
err := req.Send()
@@ -145,9 +143,7 @@ func (c *SQS) ChangeMessageVisibilityBatchRequest(input *ChangeMessageVisibility
// returns an HTTP status code of 200. Some API actions take lists of parameters.
// These lists are specified using the param.n notation. Values of n are integers
// starting from 1. For example, a parameter list with two elements looks like
-// this: &Attribute.1=this
-//
-// &Attribute.2=that
+// this:
func (c *SQS) ChangeMessageVisibilityBatch(input *ChangeMessageVisibilityBatchInput) (*ChangeMessageVisibilityBatchOutput, error) {
req, out := c.ChangeMessageVisibilityBatchRequest(input)
err := req.Send()
@@ -196,9 +192,7 @@ func (c *SQS) CreateQueueRequest(input *CreateQueueInput) (req *request.Request,
//
// Some API actions take lists of parameters. These lists are specified using
// the param.n notation. Values of n are integers starting from 1. For example,
-// a parameter list with two elements looks like this: &Attribute.1=this
-//
-// &Attribute.2=that
+// a parameter list with two elements looks like this:
func (c *SQS) CreateQueue(input *CreateQueueInput) (*CreateQueueOutput, error) {
req, out := c.CreateQueueRequest(input)
err := req.Send()
@@ -283,9 +277,7 @@ func (c *SQS) DeleteMessageBatchRequest(input *DeleteMessageBatchInput) (req *re
//
// Some API actions take lists of parameters. These lists are specified using
// the param.n notation. Values of n are integers starting from 1. For example,
-// a parameter list with two elements looks like this: &Attribute.1=this
-//
-// &Attribute.2=that
+// a parameter list with two elements looks like this:
func (c *SQS) DeleteMessageBatch(input *DeleteMessageBatchInput) (*DeleteMessageBatchOutput, error) {
req, out := c.DeleteMessageBatchRequest(input)
err := req.Send()
@@ -358,27 +350,27 @@ func (c *SQS) GetQueueAttributesRequest(input *GetQueueAttributesInput) (req *re
}
// Gets attributes for the specified queue. The following attributes are supported:
-// All - returns all values. ApproximateNumberOfMessages - returns the approximate
+// All - returns all values. ApproximateNumberOfMessages - returns the approximate
// number of visible messages in a queue. For more information, see Resources
// Required to Process Messages (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/ApproximateNumber.html)
-// in the Amazon SQS Developer Guide. ApproximateNumberOfMessagesNotVisible
+// in the Amazon SQS Developer Guide. ApproximateNumberOfMessagesNotVisible
// - returns the approximate number of messages that are not timed-out and not
// deleted. For more information, see Resources Required to Process Messages
// (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/ApproximateNumber.html)
-// in the Amazon SQS Developer Guide. VisibilityTimeout - returns the visibility
+// in the Amazon SQS Developer Guide. VisibilityTimeout - returns the visibility
// timeout for the queue. For more information about visibility timeout, see
// Visibility Timeout (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/AboutVT.html)
-// in the Amazon SQS Developer Guide. CreatedTimestamp - returns the time when
-// the queue was created (epoch time in seconds). LastModifiedTimestamp - returns
-// the time when the queue was last changed (epoch time in seconds). Policy
-// - returns the queue's policy. MaximumMessageSize - returns the limit of
-// how many bytes a message can contain before Amazon SQS rejects it. MessageRetentionPeriod
-// - returns the number of seconds Amazon SQS retains a message. QueueArn -
-// returns the queue's Amazon resource name (ARN). ApproximateNumberOfMessagesDelayed
+// in the Amazon SQS Developer Guide. CreatedTimestamp - returns the time when
+// the queue was created (epoch time in seconds). LastModifiedTimestamp - returns
+// the time when the queue was last changed (epoch time in seconds). Policy
+// - returns the queue's policy. MaximumMessageSize - returns the limit of how
+// many bytes a message can contain before Amazon SQS rejects it. MessageRetentionPeriod
+// - returns the number of seconds Amazon SQS retains a message. QueueArn -
+// returns the queue's Amazon resource name (ARN). ApproximateNumberOfMessagesDelayed
// - returns the approximate number of messages that are pending to be added
-// to the queue. DelaySeconds - returns the default delay on the queue in seconds.
-// ReceiveMessageWaitTimeSeconds - returns the time for which a ReceiveMessage
-// call will wait for a message to arrive. RedrivePolicy - returns the parameters
+// to the queue. DelaySeconds - returns the default delay on the queue in seconds.
+// ReceiveMessageWaitTimeSeconds - returns the time for which a ReceiveMessage
+// call will wait for a message to arrive. RedrivePolicy - returns the parameters
// for dead letter queue functionality of the source queue. For more information
// about RedrivePolicy and dead letter queues, see Using Amazon SQS Dead Letter
// Queues (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/SQSDeadLetterQueue.html)
@@ -389,9 +381,7 @@ func (c *SQS) GetQueueAttributesRequest(input *GetQueueAttributesInput) (req *re
// handle new attributes gracefully. Some API actions take lists of parameters.
// These lists are specified using the param.n notation. Values of n are integers
// starting from 1. For example, a parameter list with two elements looks like
-// this: &Attribute.1=this
-//
-// &Attribute.2=that
+// this:
func (c *SQS) GetQueueAttributes(input *GetQueueAttributesInput) (*GetQueueAttributesOutput, error) {
req, out := c.GetQueueAttributesRequest(input)
err := req.Send()
@@ -708,9 +698,7 @@ func (c *SQS) SendMessageBatchRequest(input *SendMessageBatchInput) (req *reques
// returns an HTTP status code of 200. Some API actions take lists of parameters.
// These lists are specified using the param.n notation. Values of n are integers
// starting from 1. For example, a parameter list with two elements looks like
-// this: &Attribute.1=this
-//
-// &Attribute.2=that
+// this:
func (c *SQS) SendMessageBatch(input *SendMessageBatchInput) (*SendMessageBatchOutput, error) {
req, out := c.SendMessageBatchRequest(input)
err := req.Send()
@@ -886,11 +874,9 @@ func (s ChangeMessageVisibilityBatchOutput) GoString() string {
// starting with 1. For example, a parameter list for this action might look
// like this:
//
-// &ChangeMessageVisibilityBatchRequestEntry.1.Id=change_visibility_msg_2
//
-// &ChangeMessageVisibilityBatchRequestEntry.1.ReceiptHandle=Your_Receipt_Handle
//
-// &ChangeMessageVisibilityBatchRequestEntry.1.VisibilityTimeout=45
+// Your_Receipt_Handle]]>
type ChangeMessageVisibilityBatchRequestEntry struct {
_ struct{} `type:"structure"`
@@ -981,19 +967,19 @@ type CreateQueueInput struct {
// The following lists the names, descriptions, and values of the special request
// parameters the CreateQueue action uses:
//
- // DelaySeconds - The time in seconds that the delivery of all messages
- // in the queue will be delayed. An integer from 0 to 900 (15 minutes). The
- // default for this attribute is 0 (zero). MaximumMessageSize - The limit of
- // how many bytes a message can contain before Amazon SQS rejects it. An integer
- // from 1024 bytes (1 KiB) up to 262144 bytes (256 KiB). The default for this
- // attribute is 262144 (256 KiB). MessageRetentionPeriod - The number of seconds
- // Amazon SQS retains a message. Integer representing seconds, from 60 (1 minute)
- // to 1209600 (14 days). The default for this attribute is 345600 (4 days).
- // Policy - The queue's policy. A valid AWS policy. For more information about
- // policy structure, see Overview of AWS IAM Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/PoliciesOverview.html)
- // in the Amazon IAM User Guide. ReceiveMessageWaitTimeSeconds - The time for
+ // DelaySeconds - The time in seconds that the delivery of all messages in
+ // the queue will be delayed. An integer from 0 to 900 (15 minutes). The default
+ // for this attribute is 0 (zero). MaximumMessageSize - The limit of how many
+ // bytes a message can contain before Amazon SQS rejects it. An integer from
+ // 1024 bytes (1 KiB) up to 262144 bytes (256 KiB). The default for this attribute
+ // is 262144 (256 KiB). MessageRetentionPeriod - The number of seconds Amazon
+ // SQS retains a message. Integer representing seconds, from 60 (1 minute) to
+ // 1209600 (14 days). The default for this attribute is 345600 (4 days). Policy
+ // - The queue's policy. A valid AWS policy. For more information about policy
+ // structure, see Overview of AWS IAM Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/PoliciesOverview.html)
+ // in the Amazon IAM User Guide. ReceiveMessageWaitTimeSeconds - The time for
// which a ReceiveMessage call will wait for a message to arrive. An integer
- // from 0 to 20 (seconds). The default for this attribute is 0. VisibilityTimeout
+ // from 0 to 20 (seconds). The default for this attribute is 0. VisibilityTimeout
// - The visibility timeout for the queue. An integer from 0 to 43200 (12 hours).
// The default for this attribute is 30. For more information about visibility
// timeout, see Visibility Timeout (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/AboutVT.html)
@@ -1460,13 +1446,13 @@ type ReceiveMessageInput struct {
// The following lists the names and descriptions of the attributes that can
// be returned:
//
- // All - returns all values. ApproximateFirstReceiveTimestamp - returns
- // the time when the message was first received from the queue (epoch time in
- // milliseconds). ApproximateReceiveCount - returns the number of times a message
- // has been received from the queue but not deleted. SenderId - returns the
- // AWS account number (or the IP address, if anonymous access is allowed) of
- // the sender. SentTimestamp - returns the time when the message was sent to
- // the queue (epoch time in milliseconds).
+ // All - returns all values. ApproximateFirstReceiveTimestamp - returns the
+ // time when the message was first received from the queue (epoch time in milliseconds).
+ // ApproximateReceiveCount - returns the number of times a message has been
+ // received from the queue but not deleted. SenderId - returns the AWS account
+ // number (or the IP address, if anonymous access is allowed) of the sender.
+ // SentTimestamp - returns the time when the message was sent to the queue (epoch
+ // time in milliseconds).
AttributeNames []*string `locationNameList:"AttributeName" type:"list" flattened:"true"`
// The maximum number of messages to return. Amazon SQS never returns more messages
@@ -1745,22 +1731,22 @@ type SetQueueAttributesInput struct {
// The following lists the names, descriptions, and values of the special request
// parameters the SetQueueAttributes action uses:
//
- // DelaySeconds - The time in seconds that the delivery of all messages
- // in the queue will be delayed. An integer from 0 to 900 (15 minutes). The
- // default for this attribute is 0 (zero). MaximumMessageSize - The limit of
- // how many bytes a message can contain before Amazon SQS rejects it. An integer
- // from 1024 bytes (1 KiB) up to 262144 bytes (256 KiB). The default for this
- // attribute is 262144 (256 KiB). MessageRetentionPeriod - The number of seconds
- // Amazon SQS retains a message. Integer representing seconds, from 60 (1 minute)
- // to 1209600 (14 days). The default for this attribute is 345600 (4 days).
- // Policy - The queue's policy. A valid AWS policy. For more information about
- // policy structure, see Overview of AWS IAM Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/PoliciesOverview.html)
- // in the Amazon IAM User Guide. ReceiveMessageWaitTimeSeconds - The time for
+ // DelaySeconds - The time in seconds that the delivery of all messages in
+ // the queue will be delayed. An integer from 0 to 900 (15 minutes). The default
+ // for this attribute is 0 (zero). MaximumMessageSize - The limit of how many
+ // bytes a message can contain before Amazon SQS rejects it. An integer from
+ // 1024 bytes (1 KiB) up to 262144 bytes (256 KiB). The default for this attribute
+ // is 262144 (256 KiB). MessageRetentionPeriod - The number of seconds Amazon
+ // SQS retains a message. Integer representing seconds, from 60 (1 minute) to
+ // 1209600 (14 days). The default for this attribute is 345600 (4 days). Policy
+ // - The queue's policy. A valid AWS policy. For more information about policy
+ // structure, see Overview of AWS IAM Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/PoliciesOverview.html)
+ // in the Amazon IAM User Guide. ReceiveMessageWaitTimeSeconds - The time for
// which a ReceiveMessage call will wait for a message to arrive. An integer
- // from 0 to 20 (seconds). The default for this attribute is 0. VisibilityTimeout
+ // from 0 to 20 (seconds). The default for this attribute is 0. VisibilityTimeout
// - The visibility timeout for the queue. An integer from 0 to 43200 (12 hours).
// The default for this attribute is 30. For more information about visibility
- // timeout, see Visibility Timeout in the Amazon SQS Developer Guide. RedrivePolicy
+ // timeout, see Visibility Timeout in the Amazon SQS Developer Guide. RedrivePolicy
// - The parameters for dead letter queue functionality of the source queue.
// For more information about RedrivePolicy and dead letter queues, see Using
// Amazon SQS Dead Letter Queues in the Amazon SQS Developer Guide.
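The attribute names listed above map onto the Attributes field of the corresponding request types, with every value passed as a string. A minimal sketch using SetQueueAttributes, assuming the QueueUrl and Attributes field names defined elsewhere in this file:

// tuneQueue sets a couple of the special request parameters described above.
// Attribute values are strings even for numeric settings.
func tuneQueue(svc *sqs.SQS, queueURL string) error {
	_, err := svc.SetQueueAttributes(&sqs.SetQueueAttributesInput{
		QueueUrl: aws.String(queueURL),
		Attributes: map[string]*string{
			"DelaySeconds":      aws.String("0"),
			"VisibilityTimeout": aws.String("30"),
		},
	})
	return err
}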
diff --git a/vendor/github.com/davecgh/go-spew/LICENSE b/vendor/github.com/davecgh/go-spew/LICENSE
new file mode 100644
index 000000000000..2a7cfd2bf6a7
--- /dev/null
+++ b/vendor/github.com/davecgh/go-spew/LICENSE
@@ -0,0 +1,13 @@
+Copyright (c) 2012-2013 Dave Collins <dave@davec.name>
+
+Permission to use, copy, modify, and distribute this software for any
+purpose with or without fee is hereby granted, provided that the above
+copyright notice and this permission notice appear in all copies.
+
+THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
diff --git a/vendor/github.com/davecgh/go-spew/spew/bypass.go b/vendor/github.com/davecgh/go-spew/spew/bypass.go
new file mode 100644
index 000000000000..565bf5899f27
--- /dev/null
+++ b/vendor/github.com/davecgh/go-spew/spew/bypass.go
@@ -0,0 +1,151 @@
+// Copyright (c) 2015 Dave Collins <dave@davec.name>
+//
+// Permission to use, copy, modify, and distribute this software for any
+// purpose with or without fee is hereby granted, provided that the above
+// copyright notice and this permission notice appear in all copies.
+//
+// THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+// WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+// MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+// ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+// WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+// ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+// OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+// NOTE: Due to the following build constraints, this file will only be compiled
+// when the code is not running on Google App Engine and "-tags disableunsafe"
+// is not added to the go build command line.
+// +build !appengine,!disableunsafe
+
+package spew
+
+import (
+ "reflect"
+ "unsafe"
+)
+
+const (
+ // UnsafeDisabled is a build-time constant which specifies whether or
+ // not access to the unsafe package is available.
+ UnsafeDisabled = false
+
+ // ptrSize is the size of a pointer on the current arch.
+ ptrSize = unsafe.Sizeof((*byte)(nil))
+)
+
+var (
+ // offsetPtr, offsetScalar, and offsetFlag are the offsets for the
+ // internal reflect.Value fields. These values are valid before golang
+ // commit ecccf07e7f9d which changed the format. They are also valid
+ // after commit 82f48826c6c7 which changed the format again to mirror
+ // the original format. Code in the init function updates these offsets
+ // as necessary.
+ offsetPtr = uintptr(ptrSize)
+ offsetScalar = uintptr(0)
+ offsetFlag = uintptr(ptrSize * 2)
+
+ // flagKindWidth and flagKindShift indicate various bits that the
+ // reflect package uses internally to track kind information.
+ //
+ // flagRO indicates whether or not the value field of a reflect.Value is
+ // read-only.
+ //
+ // flagIndir indicates whether the value field of a reflect.Value is
+ // the actual data or a pointer to the data.
+ //
+ // These values are valid before golang commit 90a7c3c86944 which
+ // changed their positions. Code in the init function updates these
+ // flags as necessary.
+ flagKindWidth = uintptr(5)
+ flagKindShift = uintptr(flagKindWidth - 1)
+ flagRO = uintptr(1 << 0)
+ flagIndir = uintptr(1 << 1)
+)
+
+func init() {
+ // Older versions of reflect.Value stored small integers directly in the
+ // ptr field (which is named val in the older versions). Versions
+ // between commits ecccf07e7f9d and 82f48826c6c7 added a new field named
+ // scalar for this purpose which unfortunately came before the flag
+ // field, so the offset of the flag field is different for those
+ // versions.
+ //
+ // This code constructs a new reflect.Value from a known small integer
+ // and checks if the size of the reflect.Value struct indicates it has
+ // the scalar field. When it does, the offsets are updated accordingly.
+ vv := reflect.ValueOf(0xf00)
+ if unsafe.Sizeof(vv) == (ptrSize * 4) {
+ offsetScalar = ptrSize * 2
+ offsetFlag = ptrSize * 3
+ }
+
+ // Commit 90a7c3c86944 changed the flag positions such that the low
+ // order bits are the kind. This code extracts the kind from the flags
+ // field and ensures it's the correct type. When it's not, the flag
+ // order has been changed to the newer format, so the flags are updated
+ // accordingly.
+ upf := unsafe.Pointer(uintptr(unsafe.Pointer(&vv)) + offsetFlag)
+ upfv := *(*uintptr)(upf)
+ flagKindMask := uintptr((1<<flagKindWidth - 1) << flagKindShift)
+ if (upfv&flagKindMask)>>flagKindShift != uintptr(reflect.Int) {
+ flagKindShift = 0
+ flagRO = 1 << 5
+ flagIndir = 1 << 6
+
+ // Commit adf9b30e5594 modified the flags to separate the
+ // flagRO flag into two bits which specifies whether or not the
+ // field is embedded. This causes flagIndir to move over a bit
+ // and means that flagRO is the combination of either of the
+ // original flagRO bit and the new bit.
+ //
+ // This code detects the change by extracting what used to be
+ // the indirect bit to ensure it's set. When it's not, the flag
+ // order has been changed to the newer format, so the flags are
+ // updated accordingly.
+ if upfv&flagIndir == 0 {
+ flagRO = 3 << 5
+ flagIndir = 1 << 7
+ }
+ }
+}
+
+// unsafeReflectValue converts the passed reflect.Value into one that bypasses
+// the typical safety restrictions preventing access to unaddressable and
+// unexported data. It works by digging the raw pointer to the underlying
+// value out of the protected value and generating a new unprotected (unsafe)
+// reflect.Value to it.
+//
+// This allows us to check for implementations of the Stringer and error
+// interfaces to be used for pretty printing ordinarily unaddressable and
+// inaccessible values such as unexported struct fields.
+func unsafeReflectValue(v reflect.Value) (rv reflect.Value) {
+ indirects := 1
+ vt := v.Type()
+ upv := unsafe.Pointer(uintptr(unsafe.Pointer(&v)) + offsetPtr)
+ rvf := *(*uintptr)(unsafe.Pointer(uintptr(unsafe.Pointer(&v)) + offsetFlag))
+ if rvf&flagIndir != 0 {
+ vt = reflect.PtrTo(v.Type())
+ indirects++
+ } else if offsetScalar != 0 {
+ // The value is in the scalar field when it's not one of the
+ // reference types.
+ switch vt.Kind() {
+ case reflect.Uintptr:
+ case reflect.Chan:
+ case reflect.Func:
+ case reflect.Map:
+ case reflect.Ptr:
+ case reflect.UnsafePointer:
+ default:
+ upv = unsafe.Pointer(uintptr(unsafe.Pointer(&v)) +
+ offsetScalar)
+ }
+ }
+
+ pv := reflect.NewAt(vt, upv)
+ rv = pv
+ for i := 0; i < indirects; i++ {
+ rv = rv.Elem()
+ }
+ return rv
+}
diff --git a/vendor/github.com/davecgh/go-spew/spew/bypasssafe.go b/vendor/github.com/davecgh/go-spew/spew/bypasssafe.go
new file mode 100644
index 000000000000..457e41235ed7
--- /dev/null
+++ b/vendor/github.com/davecgh/go-spew/spew/bypasssafe.go
@@ -0,0 +1,37 @@
+// Copyright (c) 2015 Dave Collins <dave@davec.name>
+//
+// Permission to use, copy, modify, and distribute this software for any
+// purpose with or without fee is hereby granted, provided that the above
+// copyright notice and this permission notice appear in all copies.
+//
+// THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+// WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+// MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+// ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+// WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+// ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+// OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+// NOTE: Due to the following build constraints, this file will only be compiled
+// when either the code is running on Google App Engine or "-tags disableunsafe"
+// is added to the go build command line.
+// +build appengine disableunsafe
+
+package spew
+
+import "reflect"
+
+const (
+ // UnsafeDisabled is a build-time constant which specifies whether or
+ // not access to the unsafe package is available.
+ UnsafeDisabled = true
+)
+
+// unsafeReflectValue typically converts the passed reflect.Value into one
+// that bypasses the typical safety restrictions preventing access to
+// unaddressable and unexported data. However, doing this relies on access to
+// the unsafe package. This is a stub version which simply returns the passed
+// reflect.Value when the unsafe package is not available.
+func unsafeReflectValue(v reflect.Value) reflect.Value {
+ return v
+}
diff --git a/vendor/github.com/davecgh/go-spew/spew/common.go b/vendor/github.com/davecgh/go-spew/spew/common.go
new file mode 100644
index 000000000000..14f02dc15b7d
--- /dev/null
+++ b/vendor/github.com/davecgh/go-spew/spew/common.go
@@ -0,0 +1,341 @@
+/*
+ * Copyright (c) 2013 Dave Collins <dave@davec.name>
+ *
+ * Permission to use, copy, modify, and distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+package spew
+
+import (
+ "bytes"
+ "fmt"
+ "io"
+ "reflect"
+ "sort"
+ "strconv"
+)
+
+// Some constants in the form of bytes to avoid string overhead. This mirrors
+// the technique used in the fmt package.
+var (
+ panicBytes = []byte("(PANIC=")
+ plusBytes = []byte("+")
+ iBytes = []byte("i")
+ trueBytes = []byte("true")
+ falseBytes = []byte("false")
+ interfaceBytes = []byte("(interface {})")
+ commaNewlineBytes = []byte(",\n")
+ newlineBytes = []byte("\n")
+ openBraceBytes = []byte("{")
+ openBraceNewlineBytes = []byte("{\n")
+ closeBraceBytes = []byte("}")
+ asteriskBytes = []byte("*")
+ colonBytes = []byte(":")
+ colonSpaceBytes = []byte(": ")
+ openParenBytes = []byte("(")
+ closeParenBytes = []byte(")")
+ spaceBytes = []byte(" ")
+ pointerChainBytes = []byte("->")
+ nilAngleBytes = []byte("<nil>")
+ maxNewlineBytes = []byte("<max depth reached>\n")
+ maxShortBytes = []byte("<max>")
+ circularBytes = []byte("<already shown>")
+ circularShortBytes = []byte("<shown>")
+ invalidAngleBytes = []byte("<invalid>")
+ openBracketBytes = []byte("[")
+ closeBracketBytes = []byte("]")
+ percentBytes = []byte("%")
+ precisionBytes = []byte(".")
+ openAngleBytes = []byte("<")
+ closeAngleBytes = []byte(">")
+ openMapBytes = []byte("map[")
+ closeMapBytes = []byte("]")
+ lenEqualsBytes = []byte("len=")
+ capEqualsBytes = []byte("cap=")
+)
+
+// hexDigits is used to map a decimal value to a hex digit.
+var hexDigits = "0123456789abcdef"
+
+// catchPanic handles any panics that might occur during the handleMethods
+// calls.
+func catchPanic(w io.Writer, v reflect.Value) {
+ if err := recover(); err != nil {
+ w.Write(panicBytes)
+ fmt.Fprintf(w, "%v", err)
+ w.Write(closeParenBytes)
+ }
+}
+
+// handleMethods attempts to call the Error and String methods on the underlying
+// type the passed reflect.Value represents and outputs the result to Writer w.
+//
+// It handles panics in any called methods by catching and displaying the error
+// as the formatted value.
+func handleMethods(cs *ConfigState, w io.Writer, v reflect.Value) (handled bool) {
+ // We need an interface to check if the type implements the error or
+ // Stringer interface. However, the reflect package won't give us an
+ // interface on certain things like unexported struct fields in order
+ // to enforce visibility rules. We use unsafe, when it's available,
+ // to bypass these restrictions since this package does not mutate the
+ // values.
+ if !v.CanInterface() {
+ if UnsafeDisabled {
+ return false
+ }
+
+ v = unsafeReflectValue(v)
+ }
+
+ // Choose whether or not to do error and Stringer interface lookups against
+ // the base type or a pointer to the base type depending on settings.
+ // Technically calling one of these methods with a pointer receiver can
+ // mutate the value; however, types which choose to satisfy an error or
+ // Stringer interface with a pointer receiver should not be mutating their
+ // state inside these interface methods.
+ if !cs.DisablePointerMethods && !UnsafeDisabled && !v.CanAddr() {
+ v = unsafeReflectValue(v)
+ }
+ if v.CanAddr() {
+ v = v.Addr()
+ }
+
+ // Is it an error or Stringer?
+ switch iface := v.Interface().(type) {
+ case error:
+ defer catchPanic(w, v)
+ if cs.ContinueOnMethod {
+ w.Write(openParenBytes)
+ w.Write([]byte(iface.Error()))
+ w.Write(closeParenBytes)
+ w.Write(spaceBytes)
+ return false
+ }
+
+ w.Write([]byte(iface.Error()))
+ return true
+
+ case fmt.Stringer:
+ defer catchPanic(w, v)
+ if cs.ContinueOnMethod {
+ w.Write(openParenBytes)
+ w.Write([]byte(iface.String()))
+ w.Write(closeParenBytes)
+ w.Write(spaceBytes)
+ return false
+ }
+ w.Write([]byte(iface.String()))
+ return true
+ }
+ return false
+}
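handleMethods is what lets spew honor custom Error and String methods. A tiny illustration, assuming the package's public Sdump helper, which is defined outside this file:

import (
	"fmt"

	"github.com/davecgh/go-spew/spew"
)

// temperature implements fmt.Stringer, so the Stringer branch of handleMethods
// renders it via String rather than dumping the raw float.
type temperature float64

func (t temperature) String() string { return fmt.Sprintf("%.1fC", float64(t)) }

func exampleStringerDump() string {
	return spew.Sdump(struct{ Reading temperature }{Reading: 21.5})
}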
+
+// printBool outputs a boolean value as true or false to Writer w.
+func printBool(w io.Writer, val bool) {
+ if val {
+ w.Write(trueBytes)
+ } else {
+ w.Write(falseBytes)
+ }
+}
+
+// printInt outputs a signed integer value to Writer w.
+func printInt(w io.Writer, val int64, base int) {
+ w.Write([]byte(strconv.FormatInt(val, base)))
+}
+
+// printUint outputs an unsigned integer value to Writer w.
+func printUint(w io.Writer, val uint64, base int) {
+ w.Write([]byte(strconv.FormatUint(val, base)))
+}
+
+// printFloat outputs a floating point value using the specified precision,
+// which is expected to be 32 or 64 bit, to Writer w.
+func printFloat(w io.Writer, val float64, precision int) {
+ w.Write([]byte(strconv.FormatFloat(val, 'g', -1, precision)))
+}
+
+// printComplex outputs a complex value using the specified float precision
+// for the real and imaginary parts to Writer w.
+func printComplex(w io.Writer, c complex128, floatPrecision int) {
+ r := real(c)
+ w.Write(openParenBytes)
+ w.Write([]byte(strconv.FormatFloat(r, 'g', -1, floatPrecision)))
+ i := imag(c)
+ if i >= 0 {
+ w.Write(plusBytes)
+ }
+ w.Write([]byte(strconv.FormatFloat(i, 'g', -1, floatPrecision)))
+ w.Write(iBytes)
+ w.Write(closeParenBytes)
+}
+
+// printHexPtr outputs a uintptr formatted as hexadecimal with a leading '0x'
+// prefix to Writer w.
+func printHexPtr(w io.Writer, p uintptr) {
+ // Null pointer.
+ num := uint64(p)
+ if num == 0 {
+ w.Write(nilAngleBytes)
+ return
+ }
+
+ // Max uint64 is 16 bytes in hex + 2 bytes for '0x' prefix
+ buf := make([]byte, 18)
+
+ // It's simpler to construct the hex string right to left.
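+ // For example, p == 0x1f emits 'f' then '1', and the "0x" prefix is
+ // prepended afterwards, yielding "0x1f".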
+ base := uint64(16)
+ i := len(buf) - 1
+ for num >= base {
+ buf[i] = hexDigits[num%base]
+ num /= base
+ i--
+ }
+ buf[i] = hexDigits[num]
+
+ // Add '0x' prefix.
+ i--
+ buf[i] = 'x'
+ i--
+ buf[i] = '0'
+
+ // Strip unused leading bytes.
+ buf = buf[i:]
+ w.Write(buf)
+}
+
+// valuesSorter implements sort.Interface to allow a slice of reflect.Value
+// elements to be sorted.
+type valuesSorter struct {
+ values []reflect.Value
+ strings []string // either nil or the same length as values
+ cs *ConfigState
+}
+
+// newValuesSorter initializes a valuesSorter instance, which holds a set of
+// surrogate keys on which the data should be sorted. It uses flags in
+// ConfigState to decide if and how to populate those surrogate keys.
+func newValuesSorter(values []reflect.Value, cs *ConfigState) sort.Interface {
+ vs := &valuesSorter{values: values, cs: cs}
+ if canSortSimply(vs.values[0].Kind()) {
+ return vs
+ }
+ if !cs.DisableMethods {
+ vs.strings = make([]string, len(values))
+ for i := range vs.values {
+ b := bytes.Buffer{}
+ if !handleMethods(cs, &b, vs.values[i]) {
+ vs.strings = nil
+ break
+ }
+ vs.strings[i] = b.String()
+ }
+ }
+ if vs.strings == nil && cs.SpewKeys {
+ vs.strings = make([]string, len(values))
+ for i := range vs.values {
+ vs.strings[i] = Sprintf("%#v", vs.values[i].Interface())
+ }
+ }
+ return vs
+}
+
+// canSortSimply tests whether a reflect.Kind is a primitive that can be sorted
+// directly, or whether it should be considered for sorting by surrogate keys
+// (if the ConfigState allows it).
+func canSortSimply(kind reflect.Kind) bool {
+ // This switch parallels valueSortLess, except for the default case.
+ switch kind {
+ case reflect.Bool:
+ return true
+ case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
+ return true
+ case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
+ return true
+ case reflect.Float32, reflect.Float64:
+ return true
+ case reflect.String:
+ return true
+ case reflect.Uintptr:
+ return true
+ case reflect.Array:
+ return true
+ }
+ return false
+}
+
+// Len returns the number of values in the slice. It is part of the
+// sort.Interface implementation.
+func (s *valuesSorter) Len() int {
+ return len(s.values)
+}
+
+// Swap swaps the values at the passed indices. It is part of the
+// sort.Interface implementation.
+func (s *valuesSorter) Swap(i, j int) {
+ s.values[i], s.values[j] = s.values[j], s.values[i]
+ if s.strings != nil {
+ s.strings[i], s.strings[j] = s.strings[j], s.strings[i]
+ }
+}
+
+// valueSortLess returns whether the first value should sort before the second
+// value. It is used by valuesSorter.Less as part of the sort.Interface
+// implementation.
+func valueSortLess(a, b reflect.Value) bool {
+ switch a.Kind() {
+ case reflect.Bool:
+ return !a.Bool() && b.Bool()
+ case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
+ return a.Int() < b.Int()
+ case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
+ return a.Uint() < b.Uint()
+ case reflect.Float32, reflect.Float64:
+ return a.Float() < b.Float()
+ case reflect.String:
+ return a.String() < b.String()
+ case reflect.Uintptr:
+ return a.Uint() < b.Uint()
+ case reflect.Array:
+ // Compare the contents of both arrays.
+ l := a.Len()
+ for i := 0; i < l; i++ {
+ av := a.Index(i)
+ bv := b.Index(i)
+ if av.Interface() == bv.Interface() {
+ continue
+ }
+ return valueSortLess(av, bv)
+ }
+ }
+ return a.String() < b.String()
+}
+
+// Less returns whether the value at index i should sort before the
+// value at index j. It is part of the sort.Interface implementation.
+func (s *valuesSorter) Less(i, j int) bool {
+ if s.strings == nil {
+ return valueSortLess(s.values[i], s.values[j])
+ }
+ return s.strings[i] < s.strings[j]
+}
+
+// sortValues is a sort function that handles both native types and any type that
+// can be converted to error or Stringer. Other inputs are sorted according to
+// their Value.String() value to ensure display stability.
+func sortValues(values []reflect.Value, cs *ConfigState) {
+ if len(values) == 0 {
+ return
+ }
+ sort.Sort(newValuesSorter(values, cs))
+}
diff --git a/vendor/github.com/davecgh/go-spew/spew/config.go b/vendor/github.com/davecgh/go-spew/spew/config.go
new file mode 100644
index 000000000000..ee1ab07b3fdb
--- /dev/null
+++ b/vendor/github.com/davecgh/go-spew/spew/config.go
@@ -0,0 +1,297 @@
+/*
+ * Copyright (c) 2013 Dave Collins
+ *
+ * Permission to use, copy, modify, and distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+package spew
+
+import (
+ "bytes"
+ "fmt"
+ "io"
+ "os"
+)
+
+// ConfigState houses the configuration options used by spew to format and
+// display values. There is a global instance, Config, that is used to control
+// all top-level Formatter and Dump functionality. Each ConfigState instance
+// provides methods equivalent to the top-level functions.
+//
+// The zero value for ConfigState provides no indentation. You would typically
+// want to set it to a space or a tab.
+//
+// Alternatively, you can use NewDefaultConfig to get a ConfigState instance
+// with default settings. See the documentation of NewDefaultConfig for default
+// values.
+type ConfigState struct {
+ // Indent specifies the string to use for each indentation level. The
+ // global config instance that all top-level functions use set this to a
+ // single space by default. If you would like more indentation, you might
+ // set this to a tab with "\t" or perhaps two spaces with " ".
+ Indent string
+
+ // MaxDepth controls the maximum number of levels to descend into nested
+ // data structures. The default, 0, means there is no limit.
+ //
+ // NOTE: Circular data structures are properly detected, so it is not
+ // necessary to set this value unless you specifically want to limit deeply
+ // nested data structures.
+ MaxDepth int
+
+ // DisableMethods specifies whether or not error and Stringer interfaces are
+ // invoked for types that implement them.
+ DisableMethods bool
+
+ // DisablePointerMethods specifies whether or not to check for and invoke
+ // error and Stringer interfaces on types which only accept a pointer
+ // receiver when the current type is not a pointer.
+ //
+ // NOTE: This might be an unsafe action since calling one of these methods
+ // with a pointer receiver could technically mutate the value, however,
+ // in practice, types which choose to satisfy an error or Stringer
+ // interface with a pointer receiver should not be mutating their state
+ // inside these interface methods. As a result, this option relies on
+ // access to the unsafe package, so it will not have any effect when
+ // running in environments without access to the unsafe package such as
+ // Google App Engine or with the "disableunsafe" build tag specified.
+ DisablePointerMethods bool
+
+ // ContinueOnMethod specifies whether or not recursion should continue once
+ // a custom error or Stringer interface is invoked. The default, false,
+ // means it will print the results of invoking the custom error or Stringer
+ // interface and return immediately instead of continuing to recurse into
+ // the internals of the data type.
+ //
+ // NOTE: This flag does not have any effect if method invocation is disabled
+ // via the DisableMethods or DisablePointerMethods options.
+ ContinueOnMethod bool
+
+ // SortKeys specifies map keys should be sorted before being printed. Use
+ // this to have a more deterministic, diffable output. Note that only
+ // native types (bool, int, uint, floats, uintptr and string) and types
+ // that support the error or Stringer interfaces (if methods are
+ // enabled) are supported, with other types sorted according to the
+ // reflect.Value.String() output which guarantees display stability.
+ SortKeys bool
+
+ // SpewKeys specifies that, as a last resort attempt, map keys should
+ // be spewed to strings and sorted by those strings. This is only
+ // considered if SortKeys is true.
+ SpewKeys bool
+}
+
+// Config is the active configuration of the top-level functions.
+// The configuration can be changed by modifying the contents of spew.Config.
+var Config = ConfigState{Indent: " "}
+
+// Errorf is a wrapper for fmt.Errorf that treats each argument as if it were
+// passed with a Formatter interface returned by c.NewFormatter. It returns
+// the formatted string as a value that satisfies error. See NewFormatter
+// for formatting details.
+//
+// This function is shorthand for the following syntax:
+//
+// fmt.Errorf(format, c.NewFormatter(a), c.NewFormatter(b))
+func (c *ConfigState) Errorf(format string, a ...interface{}) (err error) {
+ return fmt.Errorf(format, c.convertArgs(a)...)
+}
+
+// Fprint is a wrapper for fmt.Fprint that treats each argument as if it were
+// passed with a Formatter interface returned by c.NewFormatter. It returns
+// the number of bytes written and any write error encountered. See
+// NewFormatter for formatting details.
+//
+// This function is shorthand for the following syntax:
+//
+// fmt.Fprint(w, c.NewFormatter(a), c.NewFormatter(b))
+func (c *ConfigState) Fprint(w io.Writer, a ...interface{}) (n int, err error) {
+ return fmt.Fprint(w, c.convertArgs(a)...)
+}
+
+// Fprintf is a wrapper for fmt.Fprintf that treats each argument as if it were
+// passed with a Formatter interface returned by c.NewFormatter. It returns
+// the number of bytes written and any write error encountered. See
+// NewFormatter for formatting details.
+//
+// This function is shorthand for the following syntax:
+//
+// fmt.Fprintf(w, format, c.NewFormatter(a), c.NewFormatter(b))
+func (c *ConfigState) Fprintf(w io.Writer, format string, a ...interface{}) (n int, err error) {
+ return fmt.Fprintf(w, format, c.convertArgs(a)...)
+}
+
+// Fprintln is a wrapper for fmt.Fprintln that treats each argument as if it
+// were passed with a Formatter interface returned by c.NewFormatter. See
+// NewFormatter for formatting details.
+//
+// This function is shorthand for the following syntax:
+//
+// fmt.Fprintln(w, c.NewFormatter(a), c.NewFormatter(b))
+func (c *ConfigState) Fprintln(w io.Writer, a ...interface{}) (n int, err error) {
+ return fmt.Fprintln(w, c.convertArgs(a)...)
+}
+
+// Print is a wrapper for fmt.Print that treats each argument as if it were
+// passed with a Formatter interface returned by c.NewFormatter. It returns
+// the number of bytes written and any write error encountered. See
+// NewFormatter for formatting details.
+//
+// This function is shorthand for the following syntax:
+//
+// fmt.Print(c.NewFormatter(a), c.NewFormatter(b))
+func (c *ConfigState) Print(a ...interface{}) (n int, err error) {
+ return fmt.Print(c.convertArgs(a)...)
+}
+
+// Printf is a wrapper for fmt.Printf that treats each argument as if it were
+// passed with a Formatter interface returned by c.NewFormatter. It returns
+// the number of bytes written and any write error encountered. See
+// NewFormatter for formatting details.
+//
+// This function is shorthand for the following syntax:
+//
+// fmt.Printf(format, c.NewFormatter(a), c.NewFormatter(b))
+func (c *ConfigState) Printf(format string, a ...interface{}) (n int, err error) {
+ return fmt.Printf(format, c.convertArgs(a)...)
+}
+
+// Println is a wrapper for fmt.Println that treats each argument as if it were
+// passed with a Formatter interface returned by c.NewFormatter. It returns
+// the number of bytes written and any write error encountered. See
+// NewFormatter for formatting details.
+//
+// This function is shorthand for the following syntax:
+//
+// fmt.Println(c.NewFormatter(a), c.NewFormatter(b))
+func (c *ConfigState) Println(a ...interface{}) (n int, err error) {
+ return fmt.Println(c.convertArgs(a)...)
+}
+
+// Sprint is a wrapper for fmt.Sprint that treats each argument as if it were
+// passed with a Formatter interface returned by c.NewFormatter. It returns
+// the resulting string. See NewFormatter for formatting details.
+//
+// This function is shorthand for the following syntax:
+//
+// fmt.Sprint(c.NewFormatter(a), c.NewFormatter(b))
+func (c *ConfigState) Sprint(a ...interface{}) string {
+ return fmt.Sprint(c.convertArgs(a)...)
+}
+
+// Sprintf is a wrapper for fmt.Sprintf that treats each argument as if it were
+// passed with a Formatter interface returned by c.NewFormatter. It returns
+// the resulting string. See NewFormatter for formatting details.
+//
+// This function is shorthand for the following syntax:
+//
+// fmt.Sprintf(format, c.NewFormatter(a), c.NewFormatter(b))
+func (c *ConfigState) Sprintf(format string, a ...interface{}) string {
+ return fmt.Sprintf(format, c.convertArgs(a)...)
+}
+
+// Sprintln is a wrapper for fmt.Sprintln that treats each argument as if it
+// were passed with a Formatter interface returned by c.NewFormatter. It
+// returns the resulting string. See NewFormatter for formatting details.
+//
+// This function is shorthand for the following syntax:
+//
+// fmt.Sprintln(c.NewFormatter(a), c.NewFormatter(b))
+func (c *ConfigState) Sprintln(a ...interface{}) string {
+ return fmt.Sprintln(c.convertArgs(a)...)
+}
+
+/*
+NewFormatter returns a custom formatter that satisfies the fmt.Formatter
+interface. As a result, it integrates cleanly with standard fmt package
+printing functions. The formatter is useful for inline printing of smaller data
+types similar to the standard %v format specifier.
+
+The custom formatter only responds to the %v (most compact), %+v (adds pointer
+addresses), %#v (adds types), and %#+v (adds types and pointer addresses) verb
+combinations. Any other verbs such as %x and %q will be sent to the
+standard fmt package for formatting. In addition, the custom formatter ignores
+the width and precision arguments (however they will still work on the format
+specifiers not handled by the custom formatter).
+
+Typically this function shouldn't be called directly. It is much easier to make
+use of the custom formatter by calling one of the convenience functions such as
+c.Printf, c.Println, or c.Fprintf.
+*/
+func (c *ConfigState) NewFormatter(v interface{}) fmt.Formatter {
+ return newFormatter(c, v)
+}
+
+// Fdump formats and displays the passed arguments to io.Writer w. It formats
+// exactly the same as Dump.
+func (c *ConfigState) Fdump(w io.Writer, a ...interface{}) {
+ fdump(c, w, a...)
+}
+
+/*
+Dump displays the passed parameters to standard out with newlines, customizable
+indentation, and additional debug information such as complete types and all
+pointer addresses used to indirect to the final value. It provides the
+following features over the built-in printing facilities provided by the fmt
+package:
+
+ * Pointers are dereferenced and followed
+ * Circular data structures are detected and handled properly
+ * Custom Stringer/error interfaces are optionally invoked, including
+ on unexported types
+ * Custom types which only implement the Stringer/error interfaces via
+ a pointer receiver are optionally invoked when passing non-pointer
+ variables
+ * Byte arrays and slices are dumped like the hexdump -C command which
+ includes offsets, byte values in hex, and ASCII output
+
+The configuration options are controlled by modifying the public members
+of c. See ConfigState for options documentation.
+
+See Fdump if you would prefer dumping to an arbitrary io.Writer or Sdump to
+get the formatted result as a string.
+*/
+func (c *ConfigState) Dump(a ...interface{}) {
+ fdump(c, os.Stdout, a...)
+}
+
+// Sdump returns a string with the passed arguments formatted exactly the same
+// as Dump.
+func (c *ConfigState) Sdump(a ...interface{}) string {
+ var buf bytes.Buffer
+ fdump(c, &buf, a...)
+ return buf.String()
+}
+
+// convertArgs accepts a slice of arguments and returns a slice of the same
+// length with each argument converted to a spew Formatter interface using
+// the ConfigState associated with s.
+func (c *ConfigState) convertArgs(args []interface{}) (formatters []interface{}) {
+ formatters = make([]interface{}, len(args))
+ for index, arg := range args {
+ formatters[index] = newFormatter(c, arg)
+ }
+ return formatters
+}
+
+// NewDefaultConfig returns a ConfigState with the following default settings.
+//
+// Indent: " "
+// MaxDepth: 0
+// DisableMethods: false
+// DisablePointerMethods: false
+// ContinueOnMethod: false
+// SortKeys: false
+func NewDefaultConfig() *ConfigState {
+ return &ConfigState{Indent: " "}
+}
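+
+// Example (an illustrative sketch; myVar is a placeholder variable): a
+// dedicated ConfigState can be used independently of the global Config,
+// for instance:
+//
+// cs := spew.ConfigState{Indent: "\t", MaxDepth: 2, SortKeys: true}
+// cs.Dump(myVar)         // tab-indented, depth-limited dump to stdout
+// str := cs.Sdump(myVar) // the same formatting returned as a string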
diff --git a/vendor/github.com/davecgh/go-spew/spew/doc.go b/vendor/github.com/davecgh/go-spew/spew/doc.go
new file mode 100644
index 000000000000..5be0c4060908
--- /dev/null
+++ b/vendor/github.com/davecgh/go-spew/spew/doc.go
@@ -0,0 +1,202 @@
+/*
+ * Copyright (c) 2013 Dave Collins
+ *
+ * Permission to use, copy, modify, and distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+/*
+Package spew implements a deep pretty printer for Go data structures to aid in
+debugging.
+
+A quick overview of the additional features spew provides over the built-in
+printing facilities for Go data types is as follows:
+
+ * Pointers are dereferenced and followed
+ * Circular data structures are detected and handled properly
+ * Custom Stringer/error interfaces are optionally invoked, including
+ on unexported types
+ * Custom types which only implement the Stringer/error interfaces via
+ a pointer receiver are optionally invoked when passing non-pointer
+ variables
+ * Byte arrays and slices are dumped like the hexdump -C command which
+ includes offsets, byte values in hex, and ASCII output (only when using
+ Dump style)
+
+There are two different approaches spew allows for dumping Go data structures:
+
+ * Dump style which prints with newlines, customizable indentation,
+ and additional debug information such as types and all pointer addresses
+ used to indirect to the final value
+ * A custom Formatter interface that integrates cleanly with the standard fmt
+ package and replaces %v, %+v, %#v, and %#+v to provide inline printing
+ similar to the default %v while providing the additional functionality
+ outlined above and passing unsupported format verbs such as %x and %q
+ along to fmt
+
+Quick Start
+
+This section demonstrates how to quickly get started with spew. See the
+sections below for further details on formatting and configuration options.
+
+To dump a variable with full newlines, indentation, type, and pointer
+information use Dump, Fdump, or Sdump:
+ spew.Dump(myVar1, myVar2, ...)
+ spew.Fdump(someWriter, myVar1, myVar2, ...)
+ str := spew.Sdump(myVar1, myVar2, ...)
+
+Alternatively, if you would prefer to use format strings with a compacted inline
+printing style, use the convenience wrappers Printf, Fprintf, etc with
+%v (most compact), %+v (adds pointer addresses), %#v (adds types), or
+%#+v (adds types and pointer addresses):
+ spew.Printf("myVar1: %v -- myVar2: %+v", myVar1, myVar2)
+ spew.Printf("myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)
+ spew.Fprintf(someWriter, "myVar1: %v -- myVar2: %+v", myVar1, myVar2)
+ spew.Fprintf(someWriter, "myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)
+
+Configuration Options
+
+Configuration of spew is handled by fields in the ConfigState type. For
+convenience, all of the top-level functions use a global state available
+via the spew.Config global.
+
+It is also possible to create a ConfigState instance that provides methods
+equivalent to the top-level functions. This allows concurrent configuration
+options. See the ConfigState documentation for more details.
+
+The following configuration options are available:
+ * Indent
+ String to use for each indentation level for Dump functions.
+ It is a single space by default. A popular alternative is "\t".
+
+ * MaxDepth
+ Maximum number of levels to descend into nested data structures.
+ There is no limit by default.
+
+ * DisableMethods
+ Disables invocation of error and Stringer interface methods.
+ Method invocation is enabled by default.
+
+ * DisablePointerMethods
+ Disables invocation of error and Stringer interface methods on types
+ which only accept pointer receivers from non-pointer variables.
+ Pointer method invocation is enabled by default.
+
+ * ContinueOnMethod
+ Enables recursion into types after invoking error and Stringer interface
+ methods. Recursion after method invocation is disabled by default.
+
+ * SortKeys
+ Specifies map keys should be sorted before being printed. Use
+ this to have a more deterministic, diffable output. Note that
+ only native types (bool, int, uint, floats, uintptr and string)
+ and types which implement error or Stringer interfaces are
+ supported with other types sorted according to the
+ reflect.Value.String() output which guarantees display
+ stability. Natural map order is used by default.
+
+ * SpewKeys
+ Specifies that, as a last resort attempt, map keys should be
+ spewed to strings and sorted by those strings. This is only
+ considered if SortKeys is true.
+
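+For example, a minimal sketch of adjusting the global configuration using the
+options above (myMap is a placeholder variable):
+
+ spew.Config.Indent = "\t"
+ spew.Config.SortKeys = true
+ spew.Dump(myMap) // map keys now print in a deterministic order
+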
+Dump Usage
+
+Simply call spew.Dump with a list of variables you want to dump:
+
+ spew.Dump(myVar1, myVar2, ...)
+
+You may also call spew.Fdump if you would prefer to output to an arbitrary
+io.Writer. For example, to dump to standard error:
+
+ spew.Fdump(os.Stderr, myVar1, myVar2, ...)
+
+A third option is to call spew.Sdump to get the formatted output as a string:
+
+ str := spew.Sdump(myVar1, myVar2, ...)
+
+Sample Dump Output
+
+See the Dump example for details on the setup of the types and variables being
+shown here.
+
+ (main.Foo) {
+ unexportedField: (*main.Bar)(0xf84002e210)({
+ flag: (main.Flag) flagTwo,
+ data: (uintptr) <nil>
+ }),
+ ExportedField: (map[interface {}]interface {}) (len=1) {
+ (string) (len=3) "one": (bool) true
+ }
+ }
+
+Byte (and uint8) arrays and slices are displayed uniquely like the hexdump -C
+command as shown.
+ ([]uint8) (len=32 cap=32) {
+ 00000000 11 12 13 14 15 16 17 18 19 1a 1b 1c 1d 1e 1f 20 |............... |
+ 00000010 21 22 23 24 25 26 27 28 29 2a 2b 2c 2d 2e 2f 30 |!"#$%&'()*+,-./0|
+ 00000020 31 32 |12|
+ }
+
+Custom Formatter
+
+Spew provides a custom formatter that implements the fmt.Formatter interface
+so that it integrates cleanly with standard fmt package printing functions. The
+formatter is useful for inline printing of smaller data types similar to the
+standard %v format specifier.
+
+The custom formatter only responds to the %v (most compact), %+v (adds pointer
+addresses), %#v (adds types), or %#+v (adds types and pointer addresses) verb
+combinations. Any other verbs such as %x and %q will be sent to the
+standard fmt package for formatting. In addition, the custom formatter ignores
+the width and precision arguments (however they will still work on the format
+specifiers not handled by the custom formatter).
+
+Custom Formatter Usage
+
+The simplest way to make use of the spew custom formatter is to call one of the
+convenience functions such as spew.Printf, spew.Println, or spew.Fprintf. The
+functions have syntax you are most likely already familiar with:
+
+ spew.Printf("myVar1: %v -- myVar2: %+v", myVar1, myVar2)
+ spew.Printf("myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)
+ spew.Println(myVar, myVar2)
+ spew.Fprintf(os.Stderr, "myVar1: %v -- myVar2: %+v", myVar1, myVar2)
+ spew.Fprintf(os.Stderr, "myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)
+
+See the Index for the full list of convenience functions.
+
+Sample Formatter Output
+
+Double pointer to a uint8:
+ %v: <**>5
+ %+v: <**>(0xf8400420d0->0xf8400420c8)5
+ %#v: (**uint8)5
+ %#+v: (**uint8)(0xf8400420d0->0xf8400420c8)5
+
+Pointer to circular struct with a uint8 field and a pointer to itself:
+ %v: <*>{1 <*><shown>}
+ %+v: <*>(0xf84003e260){ui8:1 c:<*>(0xf84003e260)<shown>}
+ %#v: (*main.circular){ui8:(uint8)1 c:(*main.circular)<shown>}
+ %#+v: (*main.circular)(0xf84003e260){ui8:(uint8)1 c:(*main.circular)(0xf84003e260)<shown>}
+
+See the Printf example for details on the setup of variables being shown
+here.
+
+Errors
+
+Since it is possible for custom Stringer/error interfaces to panic, spew
+detects them and handles them internally by printing the panic information
+inline with the output. Since spew is intended to provide deep pretty printing
+capabilities on structures, it intentionally does not return any errors.
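+
+For instance, a panicking Stringer is rendered inline in a form like
+(PANIC=...) rather than aborting the dump.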
+*/
+package spew
diff --git a/vendor/github.com/davecgh/go-spew/spew/dump.go b/vendor/github.com/davecgh/go-spew/spew/dump.go
new file mode 100644
index 000000000000..a0ff95e27e52
--- /dev/null
+++ b/vendor/github.com/davecgh/go-spew/spew/dump.go
@@ -0,0 +1,509 @@
+/*
+ * Copyright (c) 2013 Dave Collins
+ *
+ * Permission to use, copy, modify, and distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+package spew
+
+import (
+ "bytes"
+ "encoding/hex"
+ "fmt"
+ "io"
+ "os"
+ "reflect"
+ "regexp"
+ "strconv"
+ "strings"
+)
+
+var (
+ // uint8Type is a reflect.Type representing a uint8. It is used to
+ // convert cgo types to uint8 slices for hexdumping.
+ uint8Type = reflect.TypeOf(uint8(0))
+
+ // cCharRE is a regular expression that matches a cgo char.
+ // It is used to detect character arrays to hexdump them.
+ cCharRE = regexp.MustCompile("^.*\\._Ctype_char$")
+
+ // cUnsignedCharRE is a regular expression that matches a cgo unsigned
+ // char. It is used to detect unsigned character arrays to hexdump
+ // them.
+ cUnsignedCharRE = regexp.MustCompile("^.*\\._Ctype_unsignedchar$")
+
+ // cUint8tCharRE is a regular expression that matches a cgo uint8_t.
+ // It is used to detect uint8_t arrays to hexdump them.
+ cUint8tCharRE = regexp.MustCompile("^.*\\._Ctype_uint8_t$")
+)
+
+// dumpState contains information about the state of a dump operation.
+type dumpState struct {
+ w io.Writer
+ depth int
+ pointers map[uintptr]int
+ ignoreNextType bool
+ ignoreNextIndent bool
+ cs *ConfigState
+}
+
+// indent performs indentation according to the depth level and cs.Indent
+// option.
+func (d *dumpState) indent() {
+ if d.ignoreNextIndent {
+ d.ignoreNextIndent = false
+ return
+ }
+ d.w.Write(bytes.Repeat([]byte(d.cs.Indent), d.depth))
+}
+
+// unpackValue returns values inside of non-nil interfaces when possible.
+// This is useful for data types like structs, arrays, slices, and maps which
+// can contain varying types packed inside an interface.
+func (d *dumpState) unpackValue(v reflect.Value) reflect.Value {
+ if v.Kind() == reflect.Interface && !v.IsNil() {
+ v = v.Elem()
+ }
+ return v
+}
+
+// dumpPtr handles formatting of pointers by indirecting them as necessary.
+func (d *dumpState) dumpPtr(v reflect.Value) {
+ // Remove pointers at or below the current depth from map used to detect
+ // circular refs.
+ for k, depth := range d.pointers {
+ if depth >= d.depth {
+ delete(d.pointers, k)
+ }
+ }
+
+ // Keep list of all dereferenced pointers to show later.
+ pointerChain := make([]uintptr, 0)
+
+ // Figure out how many levels of indirection there are by dereferencing
+ // pointers and unpacking interfaces down the chain while detecting circular
+ // references.
+ nilFound := false
+ cycleFound := false
+ indirects := 0
+ ve := v
+ for ve.Kind() == reflect.Ptr {
+ if ve.IsNil() {
+ nilFound = true
+ break
+ }
+ indirects++
+ addr := ve.Pointer()
+ pointerChain = append(pointerChain, addr)
+ if pd, ok := d.pointers[addr]; ok && pd < d.depth {
+ cycleFound = true
+ indirects--
+ break
+ }
+ d.pointers[addr] = d.depth
+
+ ve = ve.Elem()
+ if ve.Kind() == reflect.Interface {
+ if ve.IsNil() {
+ nilFound = true
+ break
+ }
+ ve = ve.Elem()
+ }
+ }
+
+ // Display type information.
+ d.w.Write(openParenBytes)
+ d.w.Write(bytes.Repeat(asteriskBytes, indirects))
+ d.w.Write([]byte(ve.Type().String()))
+ d.w.Write(closeParenBytes)
+
+ // Display pointer information.
+ if len(pointerChain) > 0 {
+ d.w.Write(openParenBytes)
+ for i, addr := range pointerChain {
+ if i > 0 {
+ d.w.Write(pointerChainBytes)
+ }
+ printHexPtr(d.w, addr)
+ }
+ d.w.Write(closeParenBytes)
+ }
+
+ // Display dereferenced value.
+ d.w.Write(openParenBytes)
+ switch {
+ case nilFound == true:
+ d.w.Write(nilAngleBytes)
+
+ case cycleFound == true:
+ d.w.Write(circularBytes)
+
+ default:
+ d.ignoreNextType = true
+ d.dump(ve)
+ }
+ d.w.Write(closeParenBytes)
+}
+
+// dumpSlice handles formatting of arrays and slices. Byte (uint8 under
+// reflection) arrays and slices are dumped in hexdump -C fashion.
+func (d *dumpState) dumpSlice(v reflect.Value) {
+ // Determine whether this type should be hex dumped or not. Also,
+ // for types which should be hexdumped, try to use the underlying data
+ // first, then fall back to trying to convert them to a uint8 slice.
+ var buf []uint8
+ doConvert := false
+ doHexDump := false
+ numEntries := v.Len()
+ if numEntries > 0 {
+ vt := v.Index(0).Type()
+ vts := vt.String()
+ switch {
+ // C types that need to be converted.
+ case cCharRE.MatchString(vts):
+ fallthrough
+ case cUnsignedCharRE.MatchString(vts):
+ fallthrough
+ case cUint8tCharRE.MatchString(vts):
+ doConvert = true
+
+ // Try to use existing uint8 slices and fall back to converting
+ // and copying if that fails.
+ case vt.Kind() == reflect.Uint8:
+ // We need an addressable interface to convert the type
+ // to a byte slice. However, the reflect package won't
+ // give us an interface on certain things like
+ // unexported struct fields in order to enforce
+ // visibility rules. We use unsafe, when available, to
+ // bypass these restrictions since this package does not
+ // mutate the values.
+ vs := v
+ if !vs.CanInterface() || !vs.CanAddr() {
+ vs = unsafeReflectValue(vs)
+ }
+ if !UnsafeDisabled {
+ vs = vs.Slice(0, numEntries)
+
+ // Use the existing uint8 slice if it can be
+ // type asserted.
+ iface := vs.Interface()
+ if slice, ok := iface.([]uint8); ok {
+ buf = slice
+ doHexDump = true
+ break
+ }
+ }
+
+ // The underlying data needs to be converted if it can't
+ // be type asserted to a uint8 slice.
+ doConvert = true
+ }
+
+ // Copy and convert the underlying type if needed.
+ if doConvert && vt.ConvertibleTo(uint8Type) {
+ // Convert and copy each element into a uint8 byte
+ // slice.
+ buf = make([]uint8, numEntries)
+ for i := 0; i < numEntries; i++ {
+ vv := v.Index(i)
+ buf[i] = uint8(vv.Convert(uint8Type).Uint())
+ }
+ doHexDump = true
+ }
+ }
+
+ // Hexdump the entire slice as needed.
+ if doHexDump {
+ indent := strings.Repeat(d.cs.Indent, d.depth)
+ str := indent + hex.Dump(buf)
+ str = strings.Replace(str, "\n", "\n"+indent, -1)
+ str = strings.TrimRight(str, d.cs.Indent)
+ d.w.Write([]byte(str))
+ return
+ }
+
+ // Recursively call dump for each item.
+ for i := 0; i < numEntries; i++ {
+ d.dump(d.unpackValue(v.Index(i)))
+ if i < (numEntries - 1) {
+ d.w.Write(commaNewlineBytes)
+ } else {
+ d.w.Write(newlineBytes)
+ }
+ }
+}
+
+// dump is the main workhorse for dumping a value. It uses the passed reflect
+// value to figure out what kind of object we are dealing with and formats it
+// appropriately. It is a recursive function, however circular data structures
+// are detected and handled properly.
+func (d *dumpState) dump(v reflect.Value) {
+ // Handle invalid reflect values immediately.
+ kind := v.Kind()
+ if kind == reflect.Invalid {
+ d.w.Write(invalidAngleBytes)
+ return
+ }
+
+ // Handle pointers specially.
+ if kind == reflect.Ptr {
+ d.indent()
+ d.dumpPtr(v)
+ return
+ }
+
+ // Print type information unless already handled elsewhere.
+ if !d.ignoreNextType {
+ d.indent()
+ d.w.Write(openParenBytes)
+ d.w.Write([]byte(v.Type().String()))
+ d.w.Write(closeParenBytes)
+ d.w.Write(spaceBytes)
+ }
+ d.ignoreNextType = false
+
+ // Display length and capacity if the built-in len and cap functions
+ // work with the value's kind and the len/cap itself is non-zero.
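+ // For example, a slice with len 3 and cap 8 is prefixed with "(len=3 cap=8) ".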
+ valueLen, valueCap := 0, 0
+ switch v.Kind() {
+ case reflect.Array, reflect.Slice, reflect.Chan:
+ valueLen, valueCap = v.Len(), v.Cap()
+ case reflect.Map, reflect.String:
+ valueLen = v.Len()
+ }
+ if valueLen != 0 || valueCap != 0 {
+ d.w.Write(openParenBytes)
+ if valueLen != 0 {
+ d.w.Write(lenEqualsBytes)
+ printInt(d.w, int64(valueLen), 10)
+ }
+ if valueCap != 0 {
+ if valueLen != 0 {
+ d.w.Write(spaceBytes)
+ }
+ d.w.Write(capEqualsBytes)
+ printInt(d.w, int64(valueCap), 10)
+ }
+ d.w.Write(closeParenBytes)
+ d.w.Write(spaceBytes)
+ }
+
+ // Call Stringer/error interfaces if they exist and the handle methods flag
+ // is enabled
+ if !d.cs.DisableMethods {
+ if (kind != reflect.Invalid) && (kind != reflect.Interface) {
+ if handled := handleMethods(d.cs, d.w, v); handled {
+ return
+ }
+ }
+ }
+
+ switch kind {
+ case reflect.Invalid:
+ // Do nothing. We should never get here since invalid has already
+ // been handled above.
+
+ case reflect.Bool:
+ printBool(d.w, v.Bool())
+
+ case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
+ printInt(d.w, v.Int(), 10)
+
+ case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
+ printUint(d.w, v.Uint(), 10)
+
+ case reflect.Float32:
+ printFloat(d.w, v.Float(), 32)
+
+ case reflect.Float64:
+ printFloat(d.w, v.Float(), 64)
+
+ case reflect.Complex64:
+ printComplex(d.w, v.Complex(), 32)
+
+ case reflect.Complex128:
+ printComplex(d.w, v.Complex(), 64)
+
+ case reflect.Slice:
+ if v.IsNil() {
+ d.w.Write(nilAngleBytes)
+ break
+ }
+ fallthrough
+
+ case reflect.Array:
+ d.w.Write(openBraceNewlineBytes)
+ d.depth++
+ if (d.cs.MaxDepth != 0) && (d.depth > d.cs.MaxDepth) {
+ d.indent()
+ d.w.Write(maxNewlineBytes)
+ } else {
+ d.dumpSlice(v)
+ }
+ d.depth--
+ d.indent()
+ d.w.Write(closeBraceBytes)
+
+ case reflect.String:
+ d.w.Write([]byte(strconv.Quote(v.String())))
+
+ case reflect.Interface:
+ // The only time we should get here is for nil interfaces due to
+ // unpackValue calls.
+ if v.IsNil() {
+ d.w.Write(nilAngleBytes)
+ }
+
+ case reflect.Ptr:
+ // Do nothing. We should never get here since pointers have already
+ // been handled above.
+
+ case reflect.Map:
+ // nil maps should be indicated as different than empty maps
+ if v.IsNil() {
+ d.w.Write(nilAngleBytes)
+ break
+ }
+
+ d.w.Write(openBraceNewlineBytes)
+ d.depth++
+ if (d.cs.MaxDepth != 0) && (d.depth > d.cs.MaxDepth) {
+ d.indent()
+ d.w.Write(maxNewlineBytes)
+ } else {
+ numEntries := v.Len()
+ keys := v.MapKeys()
+ if d.cs.SortKeys {
+ sortValues(keys, d.cs)
+ }
+ for i, key := range keys {
+ d.dump(d.unpackValue(key))
+ d.w.Write(colonSpaceBytes)
+ d.ignoreNextIndent = true
+ d.dump(d.unpackValue(v.MapIndex(key)))
+ if i < (numEntries - 1) {
+ d.w.Write(commaNewlineBytes)
+ } else {
+ d.w.Write(newlineBytes)
+ }
+ }
+ }
+ d.depth--
+ d.indent()
+ d.w.Write(closeBraceBytes)
+
+ case reflect.Struct:
+ d.w.Write(openBraceNewlineBytes)
+ d.depth++
+ if (d.cs.MaxDepth != 0) && (d.depth > d.cs.MaxDepth) {
+ d.indent()
+ d.w.Write(maxNewlineBytes)
+ } else {
+ vt := v.Type()
+ numFields := v.NumField()
+ for i := 0; i < numFields; i++ {
+ d.indent()
+ vtf := vt.Field(i)
+ d.w.Write([]byte(vtf.Name))
+ d.w.Write(colonSpaceBytes)
+ d.ignoreNextIndent = true
+ d.dump(d.unpackValue(v.Field(i)))
+ if i < (numFields - 1) {
+ d.w.Write(commaNewlineBytes)
+ } else {
+ d.w.Write(newlineBytes)
+ }
+ }
+ }
+ d.depth--
+ d.indent()
+ d.w.Write(closeBraceBytes)
+
+ case reflect.Uintptr:
+ printHexPtr(d.w, uintptr(v.Uint()))
+
+ case reflect.UnsafePointer, reflect.Chan, reflect.Func:
+ printHexPtr(d.w, v.Pointer())
+
+ // There were not any other types at the time this code was written, but
+ // fall back to letting the default fmt package handle it in case any new
+ // types are added.
+ default:
+ if v.CanInterface() {
+ fmt.Fprintf(d.w, "%v", v.Interface())
+ } else {
+ fmt.Fprintf(d.w, "%v", v.String())
+ }
+ }
+}
+
+// fdump is a helper function to consolidate the logic from the various public
+// methods which take varying writers and config states.
+func fdump(cs *ConfigState, w io.Writer, a ...interface{}) {
+ for _, arg := range a {
+ if arg == nil {
+ w.Write(interfaceBytes)
+ w.Write(spaceBytes)
+ w.Write(nilAngleBytes)
+ w.Write(newlineBytes)
+ continue
+ }
+
+ d := dumpState{w: w, cs: cs}
+ d.pointers = make(map[uintptr]int)
+ d.dump(reflect.ValueOf(arg))
+ d.w.Write(newlineBytes)
+ }
+}
+
+// Fdump formats and displays the passed arguments to io.Writer w. It formats
+// exactly the same as Dump.
+func Fdump(w io.Writer, a ...interface{}) {
+ fdump(&Config, w, a...)
+}
+
+// Sdump returns a string with the passed arguments formatted exactly the same
+// as Dump.
+func Sdump(a ...interface{}) string {
+ var buf bytes.Buffer
+ fdump(&Config, &buf, a...)
+ return buf.String()
+}
+
+/*
+Dump displays the passed parameters to standard out with newlines, customizable
+indentation, and additional debug information such as complete types and all
+pointer addresses used to indirect to the final value. It provides the
+following features over the built-in printing facilities provided by the fmt
+package:
+
+ * Pointers are dereferenced and followed
+ * Circular data structures are detected and handled properly
+ * Custom Stringer/error interfaces are optionally invoked, including
+ on unexported types
+ * Custom types which only implement the Stringer/error interfaces via
+ a pointer receiver are optionally invoked when passing non-pointer
+ variables
+ * Byte arrays and slices are dumped like the hexdump -C command which
+ includes offsets, byte values in hex, and ASCII output
+
+The configuration options are controlled by an exported package global,
+spew.Config. See ConfigState for options documentation.
+
+See Fdump if you would prefer dumping to an arbitrary io.Writer or Sdump to
+get the formatted result as a string.
+*/
+func Dump(a ...interface{}) {
+ fdump(&Config, os.Stdout, a...)
+}
diff --git a/vendor/github.com/davecgh/go-spew/spew/format.go b/vendor/github.com/davecgh/go-spew/spew/format.go
new file mode 100644
index 000000000000..ecf3b80e24bc
--- /dev/null
+++ b/vendor/github.com/davecgh/go-spew/spew/format.go
@@ -0,0 +1,419 @@
+/*
+ * Copyright (c) 2013 Dave Collins
+ *
+ * Permission to use, copy, modify, and distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+package spew
+
+import (
+ "bytes"
+ "fmt"
+ "reflect"
+ "strconv"
+ "strings"
+)
+
+// supportedFlags is a list of all the character flags supported by fmt package.
+const supportedFlags = "0-+# "
+
+// formatState implements the fmt.Formatter interface and contains information
+// about the state of a formatting operation. The NewFormatter function can
+// be used to get a new Formatter which can be used directly as arguments
+// in standard fmt package printing calls.
+type formatState struct {
+ value interface{}
+ fs fmt.State
+ depth int
+ pointers map[uintptr]int
+ ignoreNextType bool
+ cs *ConfigState
+}
+
+// buildDefaultFormat recreates the original format string without precision
+// and width information to pass in to fmt.Sprintf in the case of an
+// unrecognized type. Unless new types are added to the language, this
+// function won't ever be called.
+func (f *formatState) buildDefaultFormat() (format string) {
+ buf := bytes.NewBuffer(percentBytes)
+
+ for _, flag := range supportedFlags {
+ if f.fs.Flag(int(flag)) {
+ buf.WriteRune(flag)
+ }
+ }
+
+ buf.WriteRune('v')
+
+ format = buf.String()
+ return format
+}
+
+// constructOrigFormat recreates the original format string including precision
+// and width information to pass along to the standard fmt package. This allows
+// automatic deferral of all format strings this package doesn't support.
+func (f *formatState) constructOrigFormat(verb rune) (format string) {
+ buf := bytes.NewBuffer(percentBytes)
+
+ for _, flag := range supportedFlags {
+ if f.fs.Flag(int(flag)) {
+ buf.WriteRune(flag)
+ }
+ }
+
+ if width, ok := f.fs.Width(); ok {
+ buf.WriteString(strconv.Itoa(width))
+ }
+
+ if precision, ok := f.fs.Precision(); ok {
+ buf.Write(precisionBytes)
+ buf.WriteString(strconv.Itoa(precision))
+ }
+
+ buf.WriteRune(verb)
+
+ format = buf.String()
+ return format
+}
+
+// unpackValue returns values inside of non-nil interfaces when possible and
+// ensures that types for values which have been unpacked from an interface
+// are displayed when the show types flag is also set.
+// This is useful for data types like structs, arrays, slices, and maps which
+// can contain varying types packed inside an interface.
+func (f *formatState) unpackValue(v reflect.Value) reflect.Value {
+ if v.Kind() == reflect.Interface {
+ f.ignoreNextType = false
+ if !v.IsNil() {
+ v = v.Elem()
+ }
+ }
+ return v
+}
+
+// formatPtr handles formatting of pointers by indirecting them as necessary.
+func (f *formatState) formatPtr(v reflect.Value) {
+ // Display nil if top level pointer is nil.
+ showTypes := f.fs.Flag('#')
+ if v.IsNil() && (!showTypes || f.ignoreNextType) {
+ f.fs.Write(nilAngleBytes)
+ return
+ }
+
+ // Remove pointers at or below the current depth from map used to detect
+ // circular refs.
+ for k, depth := range f.pointers {
+ if depth >= f.depth {
+ delete(f.pointers, k)
+ }
+ }
+
+ // Keep list of all dereferenced pointers to possibly show later.
+ pointerChain := make([]uintptr, 0)
+
+ // Figure out how many levels of indirection there are by dereferencing
+ // pointers and unpacking interfaces down the chain while detecting circular
+ // references.
+ nilFound := false
+ cycleFound := false
+ indirects := 0
+ ve := v
+ for ve.Kind() == reflect.Ptr {
+ if ve.IsNil() {
+ nilFound = true
+ break
+ }
+ indirects++
+ addr := ve.Pointer()
+ pointerChain = append(pointerChain, addr)
+ if pd, ok := f.pointers[addr]; ok && pd < f.depth {
+ cycleFound = true
+ indirects--
+ break
+ }
+ f.pointers[addr] = f.depth
+
+ ve = ve.Elem()
+ if ve.Kind() == reflect.Interface {
+ if ve.IsNil() {
+ nilFound = true
+ break
+ }
+ ve = ve.Elem()
+ }
+ }
+
+ // Display type or indirection level depending on flags.
+ if showTypes && !f.ignoreNextType {
+ f.fs.Write(openParenBytes)
+ f.fs.Write(bytes.Repeat(asteriskBytes, indirects))
+ f.fs.Write([]byte(ve.Type().String()))
+ f.fs.Write(closeParenBytes)
+ } else {
+ if nilFound || cycleFound {
+ indirects += strings.Count(ve.Type().String(), "*")
+ }
+ f.fs.Write(openAngleBytes)
+ f.fs.Write([]byte(strings.Repeat("*", indirects)))
+ f.fs.Write(closeAngleBytes)
+ }
+
+ // Display pointer information depending on flags.
+ if f.fs.Flag('+') && (len(pointerChain) > 0) {
+ f.fs.Write(openParenBytes)
+ for i, addr := range pointerChain {
+ if i > 0 {
+ f.fs.Write(pointerChainBytes)
+ }
+ printHexPtr(f.fs, addr)
+ }
+ f.fs.Write(closeParenBytes)
+ }
+
+ // Display dereferenced value.
+ switch {
+ case nilFound == true:
+ f.fs.Write(nilAngleBytes)
+
+ case cycleFound == true:
+ f.fs.Write(circularShortBytes)
+
+ default:
+ f.ignoreNextType = true
+ f.format(ve)
+ }
+}
+
+// format is the main workhorse for providing the Formatter interface. It
+// uses the passed reflect value to figure out what kind of object we are
+// dealing with and formats it appropriately. It is a recursive function,
+// however circular data structures are detected and handled properly.
+func (f *formatState) format(v reflect.Value) {
+ // Handle invalid reflect values immediately.
+ kind := v.Kind()
+ if kind == reflect.Invalid {
+ f.fs.Write(invalidAngleBytes)
+ return
+ }
+
+ // Handle pointers specially.
+ if kind == reflect.Ptr {
+ f.formatPtr(v)
+ return
+ }
+
+ // Print type information unless already handled elsewhere.
+ if !f.ignoreNextType && f.fs.Flag('#') {
+ f.fs.Write(openParenBytes)
+ f.fs.Write([]byte(v.Type().String()))
+ f.fs.Write(closeParenBytes)
+ }
+ f.ignoreNextType = false
+
+ // Call Stringer/error interfaces if they exist and the handle methods
+ // flag is enabled.
+ if !f.cs.DisableMethods {
+ if (kind != reflect.Invalid) && (kind != reflect.Interface) {
+ if handled := handleMethods(f.cs, f.fs, v); handled {
+ return
+ }
+ }
+ }
+
+ switch kind {
+ case reflect.Invalid:
+ // Do nothing. We should never get here since invalid has already
+ // been handled above.
+
+ case reflect.Bool:
+ printBool(f.fs, v.Bool())
+
+ case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
+ printInt(f.fs, v.Int(), 10)
+
+ case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
+ printUint(f.fs, v.Uint(), 10)
+
+ case reflect.Float32:
+ printFloat(f.fs, v.Float(), 32)
+
+ case reflect.Float64:
+ printFloat(f.fs, v.Float(), 64)
+
+ case reflect.Complex64:
+ printComplex(f.fs, v.Complex(), 32)
+
+ case reflect.Complex128:
+ printComplex(f.fs, v.Complex(), 64)
+
+ case reflect.Slice:
+ if v.IsNil() {
+ f.fs.Write(nilAngleBytes)
+ break
+ }
+ fallthrough
+
+ case reflect.Array:
+ f.fs.Write(openBracketBytes)
+ f.depth++
+ if (f.cs.MaxDepth != 0) && (f.depth > f.cs.MaxDepth) {
+ f.fs.Write(maxShortBytes)
+ } else {
+ numEntries := v.Len()
+ for i := 0; i < numEntries; i++ {
+ if i > 0 {
+ f.fs.Write(spaceBytes)
+ }
+ f.ignoreNextType = true
+ f.format(f.unpackValue(v.Index(i)))
+ }
+ }
+ f.depth--
+ f.fs.Write(closeBracketBytes)
+
+ case reflect.String:
+ f.fs.Write([]byte(v.String()))
+
+ case reflect.Interface:
+ // The only time we should get here is for nil interfaces due to
+ // unpackValue calls.
+ if v.IsNil() {
+ f.fs.Write(nilAngleBytes)
+ }
+
+ case reflect.Ptr:
+ // Do nothing. We should never get here since pointers have already
+ // been handled above.
+
+ case reflect.Map:
+ // nil maps should be indicated as different than empty maps
+ if v.IsNil() {
+ f.fs.Write(nilAngleBytes)
+ break
+ }
+
+ f.fs.Write(openMapBytes)
+ f.depth++
+ if (f.cs.MaxDepth != 0) && (f.depth > f.cs.MaxDepth) {
+ f.fs.Write(maxShortBytes)
+ } else {
+ keys := v.MapKeys()
+ if f.cs.SortKeys {
+ sortValues(keys, f.cs)
+ }
+ for i, key := range keys {
+ if i > 0 {
+ f.fs.Write(spaceBytes)
+ }
+ f.ignoreNextType = true
+ f.format(f.unpackValue(key))
+ f.fs.Write(colonBytes)
+ f.ignoreNextType = true
+ f.format(f.unpackValue(v.MapIndex(key)))
+ }
+ }
+ f.depth--
+ f.fs.Write(closeMapBytes)
+
+ case reflect.Struct:
+ numFields := v.NumField()
+ f.fs.Write(openBraceBytes)
+ f.depth++
+ if (f.cs.MaxDepth != 0) && (f.depth > f.cs.MaxDepth) {
+ f.fs.Write(maxShortBytes)
+ } else {
+ vt := v.Type()
+ for i := 0; i < numFields; i++ {
+ if i > 0 {
+ f.fs.Write(spaceBytes)
+ }
+ vtf := vt.Field(i)
+ if f.fs.Flag('+') || f.fs.Flag('#') {
+ f.fs.Write([]byte(vtf.Name))
+ f.fs.Write(colonBytes)
+ }
+ f.format(f.unpackValue(v.Field(i)))
+ }
+ }
+ f.depth--
+ f.fs.Write(closeBraceBytes)
+
+ case reflect.Uintptr:
+ printHexPtr(f.fs, uintptr(v.Uint()))
+
+ case reflect.UnsafePointer, reflect.Chan, reflect.Func:
+ printHexPtr(f.fs, v.Pointer())
+
+ // There were not any other types at the time this code was written, but
+ // fall back to letting the default fmt package handle it if any get added.
+ default:
+ format := f.buildDefaultFormat()
+ if v.CanInterface() {
+ fmt.Fprintf(f.fs, format, v.Interface())
+ } else {
+ fmt.Fprintf(f.fs, format, v.String())
+ }
+ }
+}
+
+// Format satisfies the fmt.Formatter interface. See NewFormatter for usage
+// details.
+func (f *formatState) Format(fs fmt.State, verb rune) {
+ f.fs = fs
+
+ // Use standard formatting for verbs that are not v.
+ if verb != 'v' {
+ format := f.constructOrigFormat(verb)
+ fmt.Fprintf(fs, format, f.value)
+ return
+ }
+
+ if f.value == nil {
+ if fs.Flag('#') {
+ fs.Write(interfaceBytes)
+ }
+ fs.Write(nilAngleBytes)
+ return
+ }
+
+ f.format(reflect.ValueOf(f.value))
+}
+
+// newFormatter is a helper function to consolidate the logic from the various
+// public methods which take varying config states.
+func newFormatter(cs *ConfigState, v interface{}) fmt.Formatter {
+ fs := &formatState{value: v, cs: cs}
+ fs.pointers = make(map[uintptr]int)
+ return fs
+}
+
+/*
+NewFormatter returns a custom formatter that satisfies the fmt.Formatter
+interface. As a result, it integrates cleanly with standard fmt package
+printing functions. The formatter is useful for inline printing of smaller data
+types similar to the standard %v format specifier.
+
+The custom formatter only responds to the %v (most compact), %+v (adds pointer
+addresses), %#v (adds types), or %#+v (adds types and pointer addresses) verb
+combinations. Any other verbs such as %x and %q will be sent to the
+standard fmt package for formatting. In addition, the custom formatter ignores
+the width and precision arguments (however they will still work on the format
+specifiers not handled by the custom formatter).
+
+Typically this function shouldn't be called directly. It is much easier to make
+use of the custom formatter by calling one of the convenience functions such as
+Printf, Println, or Fprintf.
+*/
+func NewFormatter(v interface{}) fmt.Formatter {
+ return newFormatter(&Config, v)
+}
diff --git a/vendor/github.com/davecgh/go-spew/spew/spew.go b/vendor/github.com/davecgh/go-spew/spew/spew.go
new file mode 100644
index 000000000000..d8233f542e12
--- /dev/null
+++ b/vendor/github.com/davecgh/go-spew/spew/spew.go
@@ -0,0 +1,148 @@
+/*
+ * Copyright (c) 2013 Dave Collins
+ *
+ * Permission to use, copy, modify, and distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+package spew
+
+import (
+ "fmt"
+ "io"
+)
+
+// Errorf is a wrapper for fmt.Errorf that treats each argument as if it were
+// passed with a default Formatter interface returned by NewFormatter. It
+// returns the formatted string as a value that satisfies error. See
+// NewFormatter for formatting details.
+//
+// This function is shorthand for the following syntax:
+//
+// fmt.Errorf(format, spew.NewFormatter(a), spew.NewFormatter(b))
+func Errorf(format string, a ...interface{}) (err error) {
+ return fmt.Errorf(format, convertArgs(a)...)
+}
+
+// Fprint is a wrapper for fmt.Fprint that treats each argument as if it were
+// passed with a default Formatter interface returned by NewFormatter. It
+// returns the number of bytes written and any write error encountered. See
+// NewFormatter for formatting details.
+//
+// This function is shorthand for the following syntax:
+//
+// fmt.Fprint(w, spew.NewFormatter(a), spew.NewFormatter(b))
+func Fprint(w io.Writer, a ...interface{}) (n int, err error) {
+ return fmt.Fprint(w, convertArgs(a)...)
+}
+
+// Fprintf is a wrapper for fmt.Fprintf that treats each argument as if it were
+// passed with a default Formatter interface returned by NewFormatter. It
+// returns the number of bytes written and any write error encountered. See
+// NewFormatter for formatting details.
+//
+// This function is shorthand for the following syntax:
+//
+// fmt.Fprintf(w, format, spew.NewFormatter(a), spew.NewFormatter(b))
+func Fprintf(w io.Writer, format string, a ...interface{}) (n int, err error) {
+ return fmt.Fprintf(w, format, convertArgs(a)...)
+}
+
+// Fprintln is a wrapper for fmt.Fprintln that treats each argument as if it
+// were passed with a default Formatter interface returned by NewFormatter. See
+// NewFormatter for formatting details.
+//
+// This function is shorthand for the following syntax:
+//
+// fmt.Fprintln(w, spew.NewFormatter(a), spew.NewFormatter(b))
+func Fprintln(w io.Writer, a ...interface{}) (n int, err error) {
+ return fmt.Fprintln(w, convertArgs(a)...)
+}
+
+// Print is a wrapper for fmt.Print that treats each argument as if it were
+// passed with a default Formatter interface returned by NewFormatter. It
+// returns the number of bytes written and any write error encountered. See
+// NewFormatter for formatting details.
+//
+// This function is shorthand for the following syntax:
+//
+// fmt.Print(spew.NewFormatter(a), spew.NewFormatter(b))
+func Print(a ...interface{}) (n int, err error) {
+ return fmt.Print(convertArgs(a)...)
+}
+
+// Printf is a wrapper for fmt.Printf that treats each argument as if it were
+// passed with a default Formatter interface returned by NewFormatter. It
+// returns the number of bytes written and any write error encountered. See
+// NewFormatter for formatting details.
+//
+// This function is shorthand for the following syntax:
+//
+// fmt.Printf(format, spew.NewFormatter(a), spew.NewFormatter(b))
+func Printf(format string, a ...interface{}) (n int, err error) {
+ return fmt.Printf(format, convertArgs(a)...)
+}
+
+// Println is a wrapper for fmt.Println that treats each argument as if it were
+// passed with a default Formatter interface returned by NewFormatter. It
+// returns the number of bytes written and any write error encountered. See
+// NewFormatter for formatting details.
+//
+// This function is shorthand for the following syntax:
+//
+// fmt.Println(spew.NewFormatter(a), spew.NewFormatter(b))
+func Println(a ...interface{}) (n int, err error) {
+ return fmt.Println(convertArgs(a)...)
+}
+
+// Sprint is a wrapper for fmt.Sprint that treats each argument as if it were
+// passed with a default Formatter interface returned by NewFormatter. It
+// returns the resulting string. See NewFormatter for formatting details.
+//
+// This function is shorthand for the following syntax:
+//
+// fmt.Sprint(spew.NewFormatter(a), spew.NewFormatter(b))
+func Sprint(a ...interface{}) string {
+ return fmt.Sprint(convertArgs(a)...)
+}
+
+// Sprintf is a wrapper for fmt.Sprintf that treats each argument as if it were
+// passed with a default Formatter interface returned by NewFormatter. It
+// returns the resulting string. See NewFormatter for formatting details.
+//
+// This function is shorthand for the following syntax:
+//
+// fmt.Sprintf(format, spew.NewFormatter(a), spew.NewFormatter(b))
+func Sprintf(format string, a ...interface{}) string {
+ return fmt.Sprintf(format, convertArgs(a)...)
+}
+
+// Sprintln is a wrapper for fmt.Sprintln that treats each argument as if it
+// were passed with a default Formatter interface returned by NewFormatter. It
+// returns the resulting string. See NewFormatter for formatting details.
+//
+// This function is shorthand for the following syntax:
+//
+// fmt.Sprintln(spew.NewFormatter(a), spew.NewFormatter(b))
+func Sprintln(a ...interface{}) string {
+ return fmt.Sprintln(convertArgs(a)...)
+}
+
+// convertArgs accepts a slice of arguments and returns a slice of the same
+// length with each argument converted to a default spew Formatter interface.
+func convertArgs(args []interface{}) (formatters []interface{}) {
+ formatters = make([]interface{}, len(args))
+ for index, arg := range args {
+ formatters[index] = NewFormatter(arg)
+ }
+ return formatters
+}
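For orientation, these wrappers are drop-in replacements for their fmt counterparts: each argument is routed through NewFormatter before being handed to fmt. A minimal usage sketch, assuming the usual upstream import path for spew; the server type and values are illustrative:

```go
package main

import (
	"fmt"

	"github.com/davecgh/go-spew/spew"
)

type server struct {
	Name  string
	Ports []int
}

func main() {
	s := &server{Name: "web", Ports: []int{80, 443}}

	// Same call shape as fmt.Printf; spew wraps each argument in its
	// Formatter (see NewFormatter) for deeper inspection of pointers
	// and nested values.
	spew.Printf("server: %v\n", s)

	// The S* variants return strings instead of writing to stdout.
	msg := spew.Sprintf("state = %+v", s)
	fmt.Println(msg)
}
```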
diff --git a/vendor/github.com/pearkes/cloudflare/.gitignore b/vendor/github.com/fatih/structs/.gitignore
similarity index 100%
rename from vendor/github.com/pearkes/cloudflare/.gitignore
rename to vendor/github.com/fatih/structs/.gitignore
diff --git a/vendor/github.com/fatih/structs/.travis.yml b/vendor/github.com/fatih/structs/.travis.yml
new file mode 100644
index 000000000000..845012b7ab0e
--- /dev/null
+++ b/vendor/github.com/fatih/structs/.travis.yml
@@ -0,0 +1,11 @@
+language: go
+go:
+ - 1.6
+ - tip
+sudo: false
+before_install:
+- go get github.com/axw/gocov/gocov
+- go get github.com/mattn/goveralls
+- if ! go get github.com/golang/tools/cmd/cover; then go get golang.org/x/tools/cmd/cover; fi
+script:
+- $HOME/gopath/bin/goveralls -service=travis-ci
diff --git a/vendor/github.com/fatih/structs/LICENSE b/vendor/github.com/fatih/structs/LICENSE
new file mode 100644
index 000000000000..34504e4b3efb
--- /dev/null
+++ b/vendor/github.com/fatih/structs/LICENSE
@@ -0,0 +1,21 @@
+The MIT License (MIT)
+
+Copyright (c) 2014 Fatih Arslan
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
\ No newline at end of file
diff --git a/vendor/github.com/fatih/structs/README.md b/vendor/github.com/fatih/structs/README.md
new file mode 100644
index 000000000000..44e01006e135
--- /dev/null
+++ b/vendor/github.com/fatih/structs/README.md
@@ -0,0 +1,163 @@
+# Structs [![GoDoc](http://img.shields.io/badge/go-documentation-blue.svg?style=flat-square)](http://godoc.org/github.com/fatih/structs) [![Build Status](http://img.shields.io/travis/fatih/structs.svg?style=flat-square)](https://travis-ci.org/fatih/structs) [![Coverage Status](http://img.shields.io/coveralls/fatih/structs.svg?style=flat-square)](https://coveralls.io/r/fatih/structs)
+
+Structs contains various utilities to work with Go (Golang) structs. It was
+initially used by me to convert a struct into a `map[string]interface{}`. With
+time I've added other utilities for structs. It's basically a high level
+package based on primitives from the reflect package. Feel free to add new
+functions or improve the existing code.
+
+## Install
+
+```bash
+go get github.com/fatih/structs
+```
+
+## Usage and Examples
+
+Just like the standard lib `strings`, `bytes` and co packages, `structs` has
+many global functions to manipulate or organize your struct data. Let's define
+and declare a struct:
+
+```go
+type Server struct {
+ Name string `json:"name,omitempty"`
+ ID int
+ Enabled bool
+ users []string // not exported
+ http.Server // embedded
+}
+
+server := &Server{
+ Name: "gopher",
+ ID: 123456,
+ Enabled: true,
+}
+```
+
+```go
+// Convert a struct to a map[string]interface{}
+// => {"Name":"gopher", "ID":123456, "Enabled":true}
+m := structs.Map(server)
+
+// Convert the values of a struct to a []interface{}
+// => ["gopher", 123456, true]
+v := structs.Values(server)
+
+// Convert the names of a struct to a []string
+// (see "Names methods" for more info about fields)
+n := structs.Names(server)
+
+// Convert the values of a struct to a []*Field
+// (see "Field methods" for more info about fields)
+f := structs.Fields(server)
+
+// Return the struct name => "Server"
+n := structs.Name(server)
+
+// Check if any field of the struct is uninitialized (holds a zero value).
+h := structs.HasZero(server)
+
+// Check if all fields of the struct are uninitialized (hold zero values).
+z := structs.IsZero(server)
+
+// Check if server is a struct or a pointer to struct
+i := structs.IsStruct(server)
+```
+
+### Struct methods
+
+The structs functions can also be used as independent methods by creating a new
+`*structs.Struct`. This is handy if you want to have more control over the
+structs (such as retrieving a single Field).
+
+```go
+// Create a new struct type:
+s := structs.New(server)
+
+m := s.Map() // Get a map[string]interface{}
+v := s.Values() // Get a []interface{}
+f := s.Fields() // Get a []*Field
+n := s.Names() // Get a []string
+f := s.Field(name) // Get a *Field based on the given field name
+f, ok := s.FieldOk(name) // Get a *Field based on the given field name
+n := s.Name() // Get the struct name
+h := s.HasZero() // Check if any field is uninitialized (has a zero value)
+z := s.IsZero() // Check if all fields are uninitialized (zero values)
+```
+
+### Field methods
+
+We can easily examine a single Field for more detail. Below you can see how we
+get and interact with various field methods:
+
+
+```go
+s := structs.New(server)
+
+// Get the Field struct for the "Name" field
+name := s.Field("Name")
+
+// Get the underlying value, value => "gopher"
+value := name.Value().(string)
+
+// Set the field's value
+name.Set("another gopher")
+
+// Get the field's kind, kind => "string"
+name.Kind()
+
+// Check if the field is exported or not
+if name.IsExported() {
+ fmt.Println("Name field is exported")
+}
+
+// Check if the value is a zero value, such as "" for string, 0 for int
+if !name.IsZero() {
+ fmt.Println("Name is initialized")
+}
+
+// Check if the field is an anonymous (embedded) field
+if !name.IsEmbedded() {
+ fmt.Println("Name is not an embedded field")
+}
+
+// Get the Field's tag value for tag name "json", tag value => "name,omitempty"
+tagValue := name.Tag("json")
+```
+
+Nested structs are supported too:
+
+```go
+addrField := s.Field("Server").Field("Addr")
+
+// Get the value for addr
+a := addrField.Value().(string)
+
+// Or get all fields
+httpServer := s.Field("Server").Fields()
+```
+
+We can also get a slice of Fields from the Struct type to iterate over all
+fields. This is handy if you wish to examine all fields:
+
+```go
+s := structs.New(server)
+
+for _, f := range s.Fields() {
+ fmt.Printf("field name: %+v\n", f.Name())
+
+ if f.IsExported() {
+ fmt.Printf("value : %+v\n", f.Value())
+ fmt.Printf("is zero : %+v\n", f.IsZero())
+ }
+}
+```
+
+## Credits
+
+ * [Fatih Arslan](https://github.com/fatih)
+ * [Cihangir Savas](https://github.com/cihangir)
+
+## License
+
+The MIT License (MIT) - see LICENSE.md for more details
diff --git a/vendor/github.com/fatih/structs/field.go b/vendor/github.com/fatih/structs/field.go
new file mode 100644
index 000000000000..566f5497ecbb
--- /dev/null
+++ b/vendor/github.com/fatih/structs/field.go
@@ -0,0 +1,133 @@
+package structs
+
+import (
+ "errors"
+ "fmt"
+ "reflect"
+)
+
+var (
+ errNotExported = errors.New("field is not exported")
+ errNotSettable = errors.New("field is not settable")
+)
+
+// Field represents a single struct field that encapsulates high level
+// functions around the field.
+type Field struct {
+ value reflect.Value
+ field reflect.StructField
+ defaultTag string
+}
+
+// Tag returns the value associated with key in the tag string. If there is no
+// such key in the tag, Tag returns the empty string.
+func (f *Field) Tag(key string) string {
+ return f.field.Tag.Get(key)
+}
+
+// Value returns the underlying value of the field. It panics if the field
+// is not exported.
+func (f *Field) Value() interface{} {
+ return f.value.Interface()
+}
+
+// IsEmbedded returns true if the given field is an anonymous field (embedded)
+func (f *Field) IsEmbedded() bool {
+ return f.field.Anonymous
+}
+
+// IsExported returns true if the given field is exported.
+func (f *Field) IsExported() bool {
+ return f.field.PkgPath == ""
+}
+
+// IsZero returns true if the given field is not initialized (has a zero value).
+// It panics if the field is not exported.
+func (f *Field) IsZero() bool {
+ zero := reflect.Zero(f.value.Type()).Interface()
+ current := f.Value()
+
+ return reflect.DeepEqual(current, zero)
+}
+
+// Name returns the name of the given field
+func (f *Field) Name() string {
+ return f.field.Name
+}
+
+// Kind returns the field's kind, such as "string", "map", "bool", etc.
+func (f *Field) Kind() reflect.Kind {
+ return f.value.Kind()
+}
+
+// Set sets the field to given value v. It returns an error if the field is not
+// settable (not addressable or not exported) or if the given value's type
+// doesn't match the fields type.
+func (f *Field) Set(val interface{}) error {
+ // we can't set unexported fields, so be sure this field is exported
+ if !f.IsExported() {
+ return errNotExported
+ }
+
+	// an exported field can still be unsettable if the struct was passed
+	// by value and is therefore not addressable
+ if !f.value.CanSet() {
+ return errNotSettable
+ }
+
+ given := reflect.ValueOf(val)
+
+ if f.value.Kind() != given.Kind() {
+ return fmt.Errorf("wrong kind. got: %s want: %s", given.Kind(), f.value.Kind())
+ }
+
+ f.value.Set(given)
+ return nil
+}
+
+// Zero sets the field to its zero value. It returns an error if the field is not
+// settable (not addressable or not exported).
+func (f *Field) Zero() error {
+ zero := reflect.Zero(f.value.Type()).Interface()
+ return f.Set(zero)
+}
+
+// Fields returns a slice of Fields. This is particularly handy for getting the
+// fields of a nested struct. A struct tag with the content of "-" ignores the
+// checking of that particular field. Example:
+//
+// // Field is ignored by this package.
+// Field *http.Request `structs:"-"`
+//
+// It panics if field is not exported or if field's kind is not struct
+func (f *Field) Fields() []*Field {
+ return getFields(f.value, f.defaultTag)
+}
+
+// Field returns the field from a nested struct. It panics if the nested struct
+// is not exported or if the field was not found.
+func (f *Field) Field(name string) *Field {
+ field, ok := f.FieldOk(name)
+ if !ok {
+ panic("field not found")
+ }
+
+ return field
+}
+
+// FieldOk returns the field from a nested struct. The boolean is true if the
+// field was found and false otherwise. It panics if the nested struct is not
+// exported.
+func (f *Field) FieldOk(name string) (*Field, bool) {
+ v := strctVal(f.value.Interface())
+ t := v.Type()
+
+ field, ok := t.FieldByName(name)
+ if !ok {
+ return nil, false
+ }
+
+ return &Field{
+ field: field,
+ value: v.FieldByName(name),
+ }, true
+}
diff --git a/vendor/github.com/fatih/structs/structs.go b/vendor/github.com/fatih/structs/structs.go
new file mode 100644
index 000000000000..408d50f2839b
--- /dev/null
+++ b/vendor/github.com/fatih/structs/structs.go
@@ -0,0 +1,494 @@
+// Package structs contains various utility functions to work with structs.
+package structs
+
+import (
+ "fmt"
+
+ "reflect"
+)
+
+var (
+	// DefaultTagName is the default tag name for struct fields, which provides
+	// a more granular way to tweak how certain structs are handled. Look up the
+	// relevant functions for more info.
+ DefaultTagName = "structs" // struct's field default tag name
+)
+
+// Struct encapsulates a struct type to provide several high level functions
+// around the struct.
+type Struct struct {
+ raw interface{}
+ value reflect.Value
+ TagName string
+}
+
+// New returns a new *Struct with the struct s. It panics if s's kind is
+// not struct.
+func New(s interface{}) *Struct {
+ return &Struct{
+ raw: s,
+ value: strctVal(s),
+ TagName: DefaultTagName,
+ }
+}
+
+// Map converts the given struct to a map[string]interface{}, where the keys
+// of the map are the field names and the values of the map are the associated
+// values of the fields. The default key string is the struct field name but
+// can be changed in the struct field's tag value. The "structs" key in the
+// struct's field tag value is the key name. Example:
+//
+// // Field appears in map as key "myName".
+// Name string `structs:"myName"`
+//
+// A tag value with the content of "-" ignores that particular field. Example:
+//
+// // Field is ignored by this package.
+// Field bool `structs:"-"`
+//
+// A tag value with the content of "string" uses the stringer to get the value. Example:
+//
+//	// The value will be the output of Animal's String() func.
+// // Map will panic if Animal does not implement String().
+// Field *Animal `structs:"field,string"`
+//
+// A tag value with the option of "omitnested" stops iterating further if the type
+// is a struct. Example:
+//
+// // Field is not processed further by this package.
+// Field time.Time `structs:"myName,omitnested"`
+// Field *http.Request `structs:",omitnested"`
+//
+// A tag value with the option of "omitempty" ignores that particular field if
+// the field value is empty. Example:
+//
+// // Field appears in map as key "myName", but the field is
+// // skipped if empty.
+// Field string `structs:"myName,omitempty"`
+//
+// // Field appears in map as key "Field" (the default), but
+// // the field is skipped if empty.
+// Field string `structs:",omitempty"`
+//
+// Note that only exported fields of a struct can be accessed; unexported
+// fields will be ignored.
+func (s *Struct) Map() map[string]interface{} {
+ out := make(map[string]interface{})
+ s.FillMap(out)
+ return out
+}
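The tag options described in the comment above compose per field. A small sketch, assuming a hypothetical Server type; the field names and tag values are illustrative:

```go
package main

import (
	"fmt"

	"github.com/fatih/structs"
)

type Server struct {
	Name    string `structs:"name"`
	ID      int    `structs:"id,omitempty"`
	Enabled bool   `structs:"-"` // always skipped
}

func main() {
	s := Server{Name: "gopher"} // ID left at its zero value

	m := structs.Map(s)

	// "id" is omitted because it is zero and tagged omitempty, and
	// "Enabled" is tagged "-", so only "name" remains:
	fmt.Println(m) // map[name:gopher]
}
```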
+
+// FillMap is the same as Map. Instead of returning the output, it fills the
+// given map.
+func (s *Struct) FillMap(out map[string]interface{}) {
+ if out == nil {
+ return
+ }
+
+ fields := s.structFields()
+
+ for _, field := range fields {
+ name := field.Name
+ val := s.value.FieldByName(name)
+
+ var finalVal interface{}
+
+ tagName, tagOpts := parseTag(field.Tag.Get(s.TagName))
+ if tagName != "" {
+ name = tagName
+ }
+
+ // if the value is a zero value and the field is marked as omitempty do
+ // not include
+ if tagOpts.Has("omitempty") {
+ zero := reflect.Zero(val.Type()).Interface()
+ current := val.Interface()
+
+ if reflect.DeepEqual(current, zero) {
+ continue
+ }
+ }
+
+ if IsStruct(val.Interface()) && !tagOpts.Has("omitnested") {
+ // look out for embedded structs, and convert them to a
+ // map[string]interface{} too
+ n := New(val.Interface())
+ n.TagName = s.TagName
+ m := n.Map()
+ if len(m) == 0 {
+ finalVal = val.Interface()
+ } else {
+ finalVal = m
+ }
+ } else {
+ finalVal = val.Interface()
+ }
+
+ if tagOpts.Has("string") {
+ s, ok := val.Interface().(fmt.Stringer)
+ if ok {
+ out[name] = s.String()
+ }
+ continue
+ }
+
+ out[name] = finalVal
+ }
+}
+
+// Values converts the given s struct's field values to a []interface{}. A
+// struct tag with the content of "-" ignores that particular field.
+// Example:
+//
+// // Field is ignored by this package.
+// Field int `structs:"-"`
+//
+// A value with the option of "omitnested" stops iterating further if the type
+// is a struct. Example:
+//
+//	// Field is not processed further by this package.
+// Field time.Time `structs:",omitnested"`
+// Field *http.Request `structs:",omitnested"`
+//
+// A tag value with the option of "omitempty" ignores that particular field and
+// is not added to the values if the field value is empty. Example:
+//
+// // Field is skipped if empty
+// Field string `structs:",omitempty"`
+//
+// Note that only exported fields of a struct can be accessed; unexported
+// fields will be ignored.
+func (s *Struct) Values() []interface{} {
+ fields := s.structFields()
+
+ var t []interface{}
+
+ for _, field := range fields {
+ val := s.value.FieldByName(field.Name)
+
+ _, tagOpts := parseTag(field.Tag.Get(s.TagName))
+
+ // if the value is a zero value and the field is marked as omitempty do
+ // not include
+ if tagOpts.Has("omitempty") {
+ zero := reflect.Zero(val.Type()).Interface()
+ current := val.Interface()
+
+ if reflect.DeepEqual(current, zero) {
+ continue
+ }
+ }
+
+ if tagOpts.Has("string") {
+ s, ok := val.Interface().(fmt.Stringer)
+ if ok {
+ t = append(t, s.String())
+ }
+ continue
+ }
+
+ if IsStruct(val.Interface()) && !tagOpts.Has("omitnested") {
+ // look out for embedded structs, and convert them to a
+ // []interface{} to be added to the final values slice
+ for _, embeddedVal := range Values(val.Interface()) {
+ t = append(t, embeddedVal)
+ }
+ } else {
+ t = append(t, val.Interface())
+ }
+ }
+
+ return t
+}
+
+// Fields returns a slice of Fields. A struct tag with the content of "-"
+// ignores the checking of that particular field. Example:
+//
+// // Field is ignored by this package.
+// Field bool `structs:"-"`
+//
+// It panics if s's kind is not struct.
+func (s *Struct) Fields() []*Field {
+ return getFields(s.value, s.TagName)
+}
+
+// Names returns a slice of field names. A struct tag with the content of "-"
+// ignores the checking of that particular field. Example:
+//
+// // Field is ignored by this package.
+// Field bool `structs:"-"`
+//
+// It panics if s's kind is not struct.
+func (s *Struct) Names() []string {
+ fields := getFields(s.value, s.TagName)
+
+ names := make([]string, len(fields))
+
+ for i, field := range fields {
+ names[i] = field.Name()
+ }
+
+ return names
+}
+
+func getFields(v reflect.Value, tagName string) []*Field {
+ if v.Kind() == reflect.Ptr {
+ v = v.Elem()
+ }
+
+ t := v.Type()
+
+ var fields []*Field
+
+ for i := 0; i < t.NumField(); i++ {
+ field := t.Field(i)
+
+ if tag := field.Tag.Get(tagName); tag == "-" {
+ continue
+ }
+
+ f := &Field{
+ field: field,
+ value: v.FieldByName(field.Name),
+ }
+
+ fields = append(fields, f)
+
+ }
+
+ return fields
+}
+
+// Field returns a new Field struct that provides several high level functions
+// around a single struct field entity. It panics if the field is not found.
+func (s *Struct) Field(name string) *Field {
+ f, ok := s.FieldOk(name)
+ if !ok {
+ panic("field not found")
+ }
+
+ return f
+}
+
+// Field returns a new Field struct that provides several high level functions
+// around a single struct field entity. The boolean returns true if the field
+// was found.
+func (s *Struct) FieldOk(name string) (*Field, bool) {
+ t := s.value.Type()
+
+ field, ok := t.FieldByName(name)
+ if !ok {
+ return nil, false
+ }
+
+ return &Field{
+ field: field,
+ value: s.value.FieldByName(name),
+ defaultTag: s.TagName,
+ }, true
+}
+
+// IsZero returns true if all fields in a struct hold a zero value (not
+// initialized). A struct tag with the content of "-" ignores the checking of
+// that particular field. Example:
+//
+// // Field is ignored by this package.
+// Field bool `structs:"-"`
+//
+// A value with the option of "omitnested" stops iterating further if the type
+// is a struct. Example:
+//
+// // Field is not processed further by this package.
+// Field time.Time `structs:"myName,omitnested"`
+// Field *http.Request `structs:",omitnested"`
+//
+// Note that only exported fields of a struct can be accessed; unexported
+// fields will be ignored. It panics if s's kind is not struct.
+func (s *Struct) IsZero() bool {
+ fields := s.structFields()
+
+ for _, field := range fields {
+ val := s.value.FieldByName(field.Name)
+
+ _, tagOpts := parseTag(field.Tag.Get(s.TagName))
+
+ if IsStruct(val.Interface()) && !tagOpts.Has("omitnested") {
+ ok := IsZero(val.Interface())
+ if !ok {
+ return false
+ }
+
+ continue
+ }
+
+ // zero value of the given field, such as "" for string, 0 for int
+ zero := reflect.Zero(val.Type()).Interface()
+
+ // current value of the given field
+ current := val.Interface()
+
+ if !reflect.DeepEqual(current, zero) {
+ return false
+ }
+ }
+
+ return true
+}
+
+// HasZero returns true if any field in the struct is not initialized (holds a zero value).
+// A struct tag with the content of "-" ignores the checking of that particular
+// field. Example:
+//
+// // Field is ignored by this package.
+// Field bool `structs:"-"`
+//
+// A value with the option of "omitnested" stops iterating further if the type
+// is a struct. Example:
+//
+// // Field is not processed further by this package.
+// Field time.Time `structs:"myName,omitnested"`
+// Field *http.Request `structs:",omitnested"`
+//
+// Note that only exported fields of a struct can be accessed; unexported
+// fields will be ignored. It panics if s's kind is not struct.
+func (s *Struct) HasZero() bool {
+ fields := s.structFields()
+
+ for _, field := range fields {
+ val := s.value.FieldByName(field.Name)
+
+ _, tagOpts := parseTag(field.Tag.Get(s.TagName))
+
+ if IsStruct(val.Interface()) && !tagOpts.Has("omitnested") {
+ ok := HasZero(val.Interface())
+ if ok {
+ return true
+ }
+
+ continue
+ }
+
+ // zero value of the given field, such as "" for string, 0 for int
+ zero := reflect.Zero(val.Type()).Interface()
+
+ // current value of the given field
+ current := val.Interface()
+
+ if reflect.DeepEqual(current, zero) {
+ return true
+ }
+ }
+
+ return false
+}
+
+// Name returns the struct's type name within its package. For more info refer
+// to the Name() function.
+func (s *Struct) Name() string {
+ return s.value.Type().Name()
+}
+
+// structFields returns the exported struct fields for a given s struct. This
+// is a convenient helper method to avoid duplicate code in some of the
+// functions.
+func (s *Struct) structFields() []reflect.StructField {
+ t := s.value.Type()
+
+ var f []reflect.StructField
+
+ for i := 0; i < t.NumField(); i++ {
+ field := t.Field(i)
+ // we can't access the value of unexported fields
+ if field.PkgPath != "" {
+ continue
+ }
+
+ // don't check if it's omitted
+ if tag := field.Tag.Get(s.TagName); tag == "-" {
+ continue
+ }
+
+ f = append(f, field)
+ }
+
+ return f
+}
+
+func strctVal(s interface{}) reflect.Value {
+ v := reflect.ValueOf(s)
+
+	// if it's a pointer, get the underlying element
+ if v.Kind() == reflect.Ptr {
+ v = v.Elem()
+ }
+
+ if v.Kind() != reflect.Struct {
+ panic("not struct")
+ }
+
+ return v
+}
+
+// Map converts the given struct to a map[string]interface{}. For more info
+// refer to the Struct type's Map() method. It panics if s's kind is not struct.
+func Map(s interface{}) map[string]interface{} {
+ return New(s).Map()
+}
+
+// FillMap is the same as Map. Instead of returning the output, it fills the
+// given map.
+func FillMap(s interface{}, out map[string]interface{}) {
+ New(s).FillMap(out)
+}
+
+// Values converts the given struct to a []interface{}. For more info refer to
+// the Struct type's Values() method. It panics if s's kind is not struct.
+func Values(s interface{}) []interface{} {
+ return New(s).Values()
+}
+
+// Fields returns a slice of *Field. For more info refer to the Struct type's
+// Fields() method. It panics if s's kind is not struct.
+func Fields(s interface{}) []*Field {
+ return New(s).Fields()
+}
+
+// Names returns a slice of field names. For more info refer to the Struct
+// type's Names() method. It panics if s's kind is not struct.
+func Names(s interface{}) []string {
+ return New(s).Names()
+}
+
+// IsZero returns true if all fields are equal to a zero value. For more info
+// refer to the Struct type's IsZero() method. It panics if s's kind is not struct.
+func IsZero(s interface{}) bool {
+ return New(s).IsZero()
+}
+
+// HasZero returns true if any field is equal to a zero value. For more info
+// refer to the Struct type's HasZero() method. It panics if s's kind is not struct.
+func HasZero(s interface{}) bool {
+ return New(s).HasZero()
+}
+
+// IsStruct returns true if the given variable is a struct or a pointer to
+// struct.
+func IsStruct(s interface{}) bool {
+ v := reflect.ValueOf(s)
+ if v.Kind() == reflect.Ptr {
+ v = v.Elem()
+ }
+
+ // uninitialized zero value of a struct
+ if v.Kind() == reflect.Invalid {
+ return false
+ }
+
+ return v.Kind() == reflect.Struct
+}
+
+// Name returns the struct's type name within its package. It returns an
+// empty string for unnamed types. It panics if s's kind is not struct.
+func Name(s interface{}) string {
+ return New(s).Name()
+}
diff --git a/vendor/github.com/fatih/structs/tags.go b/vendor/github.com/fatih/structs/tags.go
new file mode 100644
index 000000000000..8859341c1f9b
--- /dev/null
+++ b/vendor/github.com/fatih/structs/tags.go
@@ -0,0 +1,32 @@
+package structs
+
+import "strings"
+
+// tagOptions contains a slice of tag options
+type tagOptions []string
+
+// Has returns true if the given option is available in tagOptions
+func (t tagOptions) Has(opt string) bool {
+ for _, tagOpt := range t {
+ if tagOpt == opt {
+ return true
+ }
+ }
+
+ return false
+}
+
+// parseTag splits a struct field's tag into its name and a list of options
+// which come after the name. A tag is in the form of: "name,option1,option2".
+// The name can be omitted.
+func parseTag(tag string) (string, tagOptions) {
+ // tag is one of followings:
+ // ""
+ // "name"
+ // "name,opt"
+ // "name,opt,opt2"
+ // ",opt"
+
+ res := strings.Split(tag, ",")
+ return res[0], res[1:]
+}
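For reference, a standalone sketch of the "name,option1,option2" convention. parseTag itself is unexported, so the helper below is a hypothetical copy of its split logic, shown only to illustrate what callers inside the package get back:

```go
package main

import (
	"fmt"
	"strings"
)

// parseTagSketch mirrors the package's unexported parseTag.
func parseTagSketch(tag string) (string, []string) {
	res := strings.Split(tag, ",")
	return res[0], res[1:]
}

func main() {
	name, opts := parseTagSketch("myName,omitempty,string")
	fmt.Println(name, opts) // myName [omitempty string]

	name, opts = parseTagSketch(",omitnested")
	fmt.Println(name == "", opts) // true [omitnested]
}
```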
diff --git a/vendor/github.com/hashicorp/atlas-go/v1/client.go b/vendor/github.com/hashicorp/atlas-go/v1/client.go
index abae4fe56a87..2e61e064b277 100644
--- a/vendor/github.com/hashicorp/atlas-go/v1/client.go
+++ b/vendor/github.com/hashicorp/atlas-go/v1/client.go
@@ -2,6 +2,7 @@ package atlas
import (
"bytes"
+ "crypto/tls"
"encoding/json"
"fmt"
"io"
@@ -14,6 +15,7 @@ import (
"strings"
"github.com/hashicorp/go-cleanhttp"
+ "github.com/hashicorp/go-rootcerts"
)
const (
@@ -24,6 +26,14 @@ const (
// default Atlas address.
atlasEndpointEnvVar = "ATLAS_ADDRESS"
+ // atlasCAFileEnvVar is the environment variable that causes the client to
+ // load trusted certs from a file
+ atlasCAFileEnvVar = "ATLAS_CAFILE"
+
+ // atlasCAPathEnvVar is the environment variable that causes the client to
+ // load trusted certs from a directory
+ atlasCAPathEnvVar = "ATLAS_CAPATH"
+
// atlasTokenHeader is the header key used for authenticating with Atlas
atlasTokenHeader = "X-Atlas-Token"
)
@@ -112,6 +122,17 @@ func NewClient(urlString string) (*Client, error) {
// init() sets defaults on the client.
func (c *Client) init() error {
c.HTTPClient = cleanhttp.DefaultClient()
+ tlsConfig := &tls.Config{}
+ err := rootcerts.ConfigureTLS(tlsConfig, &rootcerts.Config{
+ CAFile: os.Getenv(atlasCAFileEnvVar),
+ CAPath: os.Getenv(atlasCAPathEnvVar),
+ })
+ if err != nil {
+ return err
+ }
+ t := cleanhttp.DefaultTransport()
+ t.TLSClientConfig = tlsConfig
+ c.HTTPClient.Transport = t
return nil
}
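With this hunk, the Atlas client picks up the two new environment variables while building its transport in init(). A hedged sketch of a caller exercising them; the endpoint URL and CA path are placeholders, and the only API assumed is NewClient as shown above:

```go
package main

import (
	"log"
	"os"

	atlas "github.com/hashicorp/atlas-go/v1"
)

func main() {
	// init() reads ATLAS_CAFILE / ATLAS_CAPATH when it configures TLS,
	// so set them before constructing the client.
	os.Setenv("ATLAS_CAFILE", "/etc/ssl/custom/atlas-ca.pem")

	client, err := atlas.NewClient("https://atlas.example.com")
	if err != nil {
		log.Fatal(err) // e.g. the CA file could not be read or parsed
	}
	_ = client
}
```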
diff --git a/vendor/github.com/hashicorp/go-rootcerts/.travis.yml b/vendor/github.com/hashicorp/go-rootcerts/.travis.yml
new file mode 100644
index 000000000000..80e1de44e96d
--- /dev/null
+++ b/vendor/github.com/hashicorp/go-rootcerts/.travis.yml
@@ -0,0 +1,12 @@
+sudo: false
+
+language: go
+
+go:
+ - 1.6
+
+branches:
+ only:
+ - master
+
+script: make test
diff --git a/vendor/github.com/pearkes/cloudflare/LICENSE b/vendor/github.com/hashicorp/go-rootcerts/LICENSE
similarity index 99%
rename from vendor/github.com/pearkes/cloudflare/LICENSE
rename to vendor/github.com/hashicorp/go-rootcerts/LICENSE
index be2cc4dfb609..e87a115e462e 100644
--- a/vendor/github.com/pearkes/cloudflare/LICENSE
+++ b/vendor/github.com/hashicorp/go-rootcerts/LICENSE
@@ -360,3 +360,4 @@ Exhibit B - "Incompatible With Secondary Licenses" Notice
This Source Code Form is "Incompatible
With Secondary Licenses", as defined by
the Mozilla Public License, v. 2.0.
+
diff --git a/vendor/github.com/hashicorp/go-rootcerts/Makefile b/vendor/github.com/hashicorp/go-rootcerts/Makefile
new file mode 100644
index 000000000000..c3989e789f69
--- /dev/null
+++ b/vendor/github.com/hashicorp/go-rootcerts/Makefile
@@ -0,0 +1,8 @@
+TEST?=./...
+
+test:
+ go test $(TEST) $(TESTARGS) -timeout=3s -parallel=4
+ go vet $(TEST)
+ go test $(TEST) -race
+
+.PHONY: test
diff --git a/vendor/github.com/hashicorp/go-rootcerts/README.md b/vendor/github.com/hashicorp/go-rootcerts/README.md
new file mode 100644
index 000000000000..f5abffc29343
--- /dev/null
+++ b/vendor/github.com/hashicorp/go-rootcerts/README.md
@@ -0,0 +1,43 @@
+# rootcerts
+
+Functions for loading root certificates for TLS connections.
+
+-----
+
+Go's standard library `crypto/tls` provides a common mechanism for configuring
+TLS connections in `tls.Config`. The `RootCAs` field on this struct is a pool
+of certificates for the client to use as a trust store when verifying server
+certificates.
+
+This library contains utility functions for loading certificates destined for
+that field, as well as one other important thing:
+
+When the `RootCAs` field is `nil`, the standard library attempts to load the
+host's root CA set. This behavior is OS-specific, and the Darwin
+implementation contains [a bug that prevents trusted certificates from the
+System and Login keychains from being loaded][1]. This library contains
+Darwin-specific behavior that works around that bug.
+
+[1]: https://github.com/golang/go/issues/14514
+
+## Example Usage
+
+Here's a snippet demonstrating how this library is meant to be used:
+
+```go
+func httpClient() (*http.Client, error) {
+ tlsConfig := &tls.Config{}
+ err := rootcerts.ConfigureTLS(tlsConfig, &rootcerts.Config{
+ CAFile: os.Getenv("MYAPP_CAFILE"),
+ CAPath: os.Getenv("MYAPP_CAPATH"),
+ })
+ if err != nil {
+ return nil, err
+ }
+ c := cleanhttp.DefaultClient()
+ t := cleanhttp.DefaultTransport()
+ t.TLSClientConfig = tlsConfig
+ c.Transport = t
+ return c, nil
+}
+```
diff --git a/vendor/github.com/hashicorp/go-rootcerts/doc.go b/vendor/github.com/hashicorp/go-rootcerts/doc.go
new file mode 100644
index 000000000000..b55cc6284850
--- /dev/null
+++ b/vendor/github.com/hashicorp/go-rootcerts/doc.go
@@ -0,0 +1,9 @@
+// Package rootcerts contains functions to aid in loading CA certificates for
+// TLS connections.
+//
+// In addition, its default behavior on Darwin works around an open issue [1]
+// in Go's crypto/x509 that prevents certificates from being loaded from the
+// System or Login keychains.
+//
+// [1] https://github.com/golang/go/issues/14514
+package rootcerts
diff --git a/vendor/github.com/hashicorp/go-rootcerts/rootcerts.go b/vendor/github.com/hashicorp/go-rootcerts/rootcerts.go
new file mode 100644
index 000000000000..aeb30ece3240
--- /dev/null
+++ b/vendor/github.com/hashicorp/go-rootcerts/rootcerts.go
@@ -0,0 +1,103 @@
+package rootcerts
+
+import (
+ "crypto/tls"
+ "crypto/x509"
+ "fmt"
+ "io/ioutil"
+ "os"
+ "path/filepath"
+)
+
+// Config determines where LoadCACerts will load certificates from. When both
+// CAFile and CAPath are blank, this library's functions will either load
+// system roots explicitly and return them, or set the CertPool to nil to allow
+// Go's standard library to load system certs.
+type Config struct {
+ // CAFile is a path to a PEM-encoded certificate file or bundle. Takes
+ // precedence over CAPath.
+ CAFile string
+
+ // CAPath is a path to a directory populated with PEM-encoded certificates.
+ CAPath string
+}
+
+// ConfigureTLS sets up the RootCAs on the provided tls.Config based on the
+// Config specified.
+func ConfigureTLS(t *tls.Config, c *Config) error {
+ if t == nil {
+ return nil
+ }
+ pool, err := LoadCACerts(c)
+ if err != nil {
+ return err
+ }
+ t.RootCAs = pool
+ return nil
+}
+
+// LoadCACerts loads a CertPool based on the Config specified.
+func LoadCACerts(c *Config) (*x509.CertPool, error) {
+ if c == nil {
+ c = &Config{}
+ }
+ if c.CAFile != "" {
+ return LoadCAFile(c.CAFile)
+ }
+ if c.CAPath != "" {
+ return LoadCAPath(c.CAPath)
+ }
+
+ return LoadSystemCAs()
+}
+
+// LoadCAFile loads a single PEM-encoded file from the path specified.
+func LoadCAFile(caFile string) (*x509.CertPool, error) {
+ pool := x509.NewCertPool()
+
+ pem, err := ioutil.ReadFile(caFile)
+ if err != nil {
+ return nil, fmt.Errorf("Error loading CA File: %s", err)
+ }
+
+ ok := pool.AppendCertsFromPEM(pem)
+ if !ok {
+ return nil, fmt.Errorf("Error loading CA File: Couldn't parse PEM in: %s", caFile)
+ }
+
+ return pool, nil
+}
+
+// LoadCAPath walks the provided path and loads all certificates encountered into
+// a pool.
+func LoadCAPath(caPath string) (*x509.CertPool, error) {
+ pool := x509.NewCertPool()
+ walkFn := func(path string, info os.FileInfo, err error) error {
+ if err != nil {
+ return err
+ }
+
+ if info.IsDir() {
+ return nil
+ }
+
+ pem, err := ioutil.ReadFile(path)
+ if err != nil {
+ return fmt.Errorf("Error loading file from CAPath: %s", err)
+ }
+
+ ok := pool.AppendCertsFromPEM(pem)
+ if !ok {
+ return fmt.Errorf("Error loading CA Path: Couldn't parse PEM in: %s", path)
+ }
+
+ return nil
+ }
+
+ err := filepath.Walk(caPath, walkFn)
+ if err != nil {
+ return nil, err
+ }
+
+ return pool, nil
+}
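Taken together: ConfigureTLS delegates to LoadCACerts, which prefers CAFile over CAPath and falls back to LoadSystemCAs when both are blank. A minimal sketch wiring the resulting pool into a plain net/http client; the environment variable names and paths are placeholders:

```go
package main

import (
	"crypto/tls"
	"log"
	"net/http"
	"os"

	"github.com/hashicorp/go-rootcerts"
)

func main() {
	tlsConfig := &tls.Config{}
	err := rootcerts.ConfigureTLS(tlsConfig, &rootcerts.Config{
		CAFile: os.Getenv("MYAPP_CAFILE"), // takes precedence when set
		CAPath: os.Getenv("MYAPP_CAPATH"),
	})
	if err != nil {
		log.Fatal(err)
	}

	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: tlsConfig},
	}
	_ = client
}
```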
diff --git a/vendor/github.com/hashicorp/go-rootcerts/rootcerts_base.go b/vendor/github.com/hashicorp/go-rootcerts/rootcerts_base.go
new file mode 100644
index 000000000000..66b1472c4a04
--- /dev/null
+++ b/vendor/github.com/hashicorp/go-rootcerts/rootcerts_base.go
@@ -0,0 +1,12 @@
+// +build !darwin
+
+package rootcerts
+
+import "crypto/x509"
+
+// LoadSystemCAs does nothing on non-Darwin systems. We return nil so that
+// default behavior of standard TLS config libraries is triggered, which is to
+// load system certs.
+func LoadSystemCAs() (*x509.CertPool, error) {
+ return nil, nil
+}
diff --git a/vendor/github.com/hashicorp/go-rootcerts/rootcerts_darwin.go b/vendor/github.com/hashicorp/go-rootcerts/rootcerts_darwin.go
new file mode 100644
index 000000000000..a9a040657fe3
--- /dev/null
+++ b/vendor/github.com/hashicorp/go-rootcerts/rootcerts_darwin.go
@@ -0,0 +1,48 @@
+package rootcerts
+
+import (
+ "crypto/x509"
+ "os/exec"
+ "path"
+
+ "github.com/mitchellh/go-homedir"
+)
+
+// LoadSystemCAs has special behavior on Darwin to work around Go issue 14514.
+func LoadSystemCAs() (*x509.CertPool, error) {
+ pool := x509.NewCertPool()
+
+ for _, keychain := range certKeychains() {
+ err := addCertsFromKeychain(pool, keychain)
+ if err != nil {
+ return nil, err
+ }
+ }
+
+ return pool, nil
+}
+
+func addCertsFromKeychain(pool *x509.CertPool, keychain string) error {
+ cmd := exec.Command("/usr/bin/security", "find-certificate", "-a", "-p", keychain)
+ data, err := cmd.Output()
+ if err != nil {
+ return err
+ }
+
+ pool.AppendCertsFromPEM(data)
+
+ return nil
+}
+
+func certKeychains() []string {
+ keychains := []string{
+ "/System/Library/Keychains/SystemRootCertificates.keychain",
+ "/Library/Keychains/System.keychain",
+ }
+ home, err := homedir.Dir()
+ if err == nil {
+ loginKeychain := path.Join(home, "Library", "Keychains", "login.keychain")
+ keychains = append(keychains, loginKeychain)
+ }
+ return keychains
+}
diff --git a/vendor/github.com/hashicorp/hcl/hcl/fmtcmd/fmtcmd.go b/vendor/github.com/hashicorp/hcl/hcl/fmtcmd/fmtcmd.go
index afc1e4eb12a2..15a5f66d7cdd 100644
--- a/vendor/github.com/hashicorp/hcl/hcl/fmtcmd/fmtcmd.go
+++ b/vendor/github.com/hashicorp/hcl/hcl/fmtcmd/fmtcmd.go
@@ -60,8 +60,6 @@ func processFile(filename string, in io.Reader, out io.Writer, stdin bool, opts
if err != nil {
return err
}
- // Files should end with newlines
- res = append(res, []byte("\n")...)
if !bytes.Equal(src, res) {
// formatting has changed
diff --git a/vendor/github.com/hashicorp/hcl/hcl/printer/nodes.go b/vendor/github.com/hashicorp/hcl/hcl/printer/nodes.go
index a98495c7629d..218b56a81851 100644
--- a/vendor/github.com/hashicorp/hcl/hcl/printer/nodes.go
+++ b/vendor/github.com/hashicorp/hcl/hcl/printer/nodes.go
@@ -221,12 +221,12 @@ func (p *printer) objectType(o *ast.ObjectType) []byte {
defer un(trace(p, "ObjectType"))
var buf bytes.Buffer
buf.WriteString("{")
- buf.WriteByte(newline)
var index int
var nextItem token.Pos
- var commented bool
+ var commented, newlinePrinted bool
for {
+
// Print stand alone comments
for _, c := range p.standaloneComments {
for _, comment := range c.List {
@@ -238,6 +238,13 @@ func (p *printer) objectType(o *ast.ObjectType) []byte {
}
if comment.Pos().After(p.prev) && comment.Pos().Before(nextItem) {
+ // If there are standalone comments and the initial newline has not
+ // been printed yet, do it now.
+ if !newlinePrinted {
+ newlinePrinted = true
+ buf.WriteByte(newline)
+ }
+
// add newline if it's between other printed nodes
if index > 0 {
commented = true
@@ -258,6 +265,14 @@ func (p *printer) objectType(o *ast.ObjectType) []byte {
break
}
+ // At this point we are sure that it's not a totally empty block: print
+ // the initial newline if it hasn't been printed yet by the previous
+ // block about standalone comments.
+ if !newlinePrinted {
+ buf.WriteByte(newline)
+ newlinePrinted = true
+ }
+
// check if we have adjacent one liner items. If yes we'll going to align
// the comments.
var aligned []*ast.ObjectItem
diff --git a/vendor/github.com/hashicorp/hcl/hcl/printer/printer.go b/vendor/github.com/hashicorp/hcl/hcl/printer/printer.go
index fb9df58d4bfe..a296fc851a8e 100644
--- a/vendor/github.com/hashicorp/hcl/hcl/printer/printer.go
+++ b/vendor/github.com/hashicorp/hcl/hcl/printer/printer.go
@@ -60,5 +60,8 @@ func Format(src []byte) ([]byte, error) {
return nil, err
}
+ // Add trailing newline to result
+ buf.WriteString("\n")
+
return buf.Bytes(), nil
}
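Since the trailing newline now lives in Format itself, formatted output ends with "\n" for every caller, not just fmtcmd. A quick hedged check; the exact formatted string is an assumption about how the printer renders this one-line input:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/hcl/printer"
)

func main() {
	src := []byte(`foo = "bar"`)

	out, err := printer.Format(src)
	if err != nil {
		panic(err)
	}

	// Expect the buffer to end with a newline, e.g. "foo = \"bar\"\n".
	fmt.Printf("%q\n", out)
}
```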
diff --git a/vendor/github.com/hashicorp/hil/LICENSE b/vendor/github.com/hashicorp/hil/LICENSE
new file mode 100644
index 000000000000..82b4de97c7e3
--- /dev/null
+++ b/vendor/github.com/hashicorp/hil/LICENSE
@@ -0,0 +1,353 @@
+Mozilla Public License, version 2.0
+
+1. Definitions
+
+1.1. “Contributor”
+
+ means each individual or legal entity that creates, contributes to the
+ creation of, or owns Covered Software.
+
+1.2. “Contributor Version”
+
+ means the combination of the Contributions of others (if any) used by a
+ Contributor and that particular Contributor’s Contribution.
+
+1.3. “Contribution”
+
+ means Covered Software of a particular Contributor.
+
+1.4. “Covered Software”
+
+ means Source Code Form to which the initial Contributor has attached the
+ notice in Exhibit A, the Executable Form of such Source Code Form, and
+ Modifications of such Source Code Form, in each case including portions
+ thereof.
+
+1.5. “Incompatible With Secondary Licenses”
+ means
+
+ a. that the initial Contributor has attached the notice described in
+ Exhibit B to the Covered Software; or
+
+ b. that the Covered Software was made available under the terms of version
+ 1.1 or earlier of the License, but not also under the terms of a
+ Secondary License.
+
+1.6. “Executable Form”
+
+ means any form of the work other than Source Code Form.
+
+1.7. “Larger Work”
+
+ means a work that combines Covered Software with other material, in a separate
+ file or files, that is not Covered Software.
+
+1.8. “License”
+
+ means this document.
+
+1.9. “Licensable”
+
+ means having the right to grant, to the maximum extent possible, whether at the
+ time of the initial grant or subsequently, any and all of the rights conveyed by
+ this License.
+
+1.10. “Modifications”
+
+ means any of the following:
+
+ a. any file in Source Code Form that results from an addition to, deletion
+ from, or modification of the contents of Covered Software; or
+
+ b. any new file in Source Code Form that contains any Covered Software.
+
+1.11. “Patent Claims” of a Contributor
+
+ means any patent claim(s), including without limitation, method, process,
+ and apparatus claims, in any patent Licensable by such Contributor that
+ would be infringed, but for the grant of the License, by the making,
+ using, selling, offering for sale, having made, import, or transfer of
+ either its Contributions or its Contributor Version.
+
+1.12. “Secondary License”
+
+ means either the GNU General Public License, Version 2.0, the GNU Lesser
+ General Public License, Version 2.1, the GNU Affero General Public
+ License, Version 3.0, or any later versions of those licenses.
+
+1.13. “Source Code Form”
+
+ means the form of the work preferred for making modifications.
+
+1.14. “You” (or “Your”)
+
+ means an individual or a legal entity exercising rights under this
+ License. For legal entities, “You” includes any entity that controls, is
+ controlled by, or is under common control with You. For purposes of this
+ definition, “control” means (a) the power, direct or indirect, to cause
+ the direction or management of such entity, whether by contract or
+ otherwise, or (b) ownership of more than fifty percent (50%) of the
+ outstanding shares or beneficial ownership of such entity.
+
+
+2. License Grants and Conditions
+
+2.1. Grants
+
+ Each Contributor hereby grants You a world-wide, royalty-free,
+ non-exclusive license:
+
+ a. under intellectual property rights (other than patent or trademark)
+ Licensable by such Contributor to use, reproduce, make available,
+ modify, display, perform, distribute, and otherwise exploit its
+ Contributions, either on an unmodified basis, with Modifications, or as
+ part of a Larger Work; and
+
+ b. under Patent Claims of such Contributor to make, use, sell, offer for
+ sale, have made, import, and otherwise transfer either its Contributions
+ or its Contributor Version.
+
+2.2. Effective Date
+
+ The licenses granted in Section 2.1 with respect to any Contribution become
+ effective for each Contribution on the date the Contributor first distributes
+ such Contribution.
+
+2.3. Limitations on Grant Scope
+
+ The licenses granted in this Section 2 are the only rights granted under this
+ License. No additional rights or licenses will be implied from the distribution
+ or licensing of Covered Software under this License. Notwithstanding Section
+ 2.1(b) above, no patent license is granted by a Contributor:
+
+ a. for any code that a Contributor has removed from Covered Software; or
+
+ b. for infringements caused by: (i) Your and any other third party’s
+ modifications of Covered Software, or (ii) the combination of its
+ Contributions with other software (except as part of its Contributor
+ Version); or
+
+ c. under Patent Claims infringed by Covered Software in the absence of its
+ Contributions.
+
+ This License does not grant any rights in the trademarks, service marks, or
+ logos of any Contributor (except as may be necessary to comply with the
+ notice requirements in Section 3.4).
+
+2.4. Subsequent Licenses
+
+ No Contributor makes additional grants as a result of Your choice to
+ distribute the Covered Software under a subsequent version of this License
+ (see Section 10.2) or under the terms of a Secondary License (if permitted
+ under the terms of Section 3.3).
+
+2.5. Representation
+
+ Each Contributor represents that the Contributor believes its Contributions
+ are its original creation(s) or it has sufficient rights to grant the
+ rights to its Contributions conveyed by this License.
+
+2.6. Fair Use
+
+ This License is not intended to limit any rights You have under applicable
+ copyright doctrines of fair use, fair dealing, or other equivalents.
+
+2.7. Conditions
+
+ Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in
+ Section 2.1.
+
+
+3. Responsibilities
+
+3.1. Distribution of Source Form
+
+ All distribution of Covered Software in Source Code Form, including any
+ Modifications that You create or to which You contribute, must be under the
+ terms of this License. You must inform recipients that the Source Code Form
+ of the Covered Software is governed by the terms of this License, and how
+ they can obtain a copy of this License. You may not attempt to alter or
+ restrict the recipients’ rights in the Source Code Form.
+
+3.2. Distribution of Executable Form
+
+ If You distribute Covered Software in Executable Form then:
+
+ a. such Covered Software must also be made available in Source Code Form,
+ as described in Section 3.1, and You must inform recipients of the
+ Executable Form how they can obtain a copy of such Source Code Form by
+ reasonable means in a timely manner, at a charge no more than the cost
+ of distribution to the recipient; and
+
+ b. You may distribute such Executable Form under the terms of this License,
+ or sublicense it under different terms, provided that the license for
+ the Executable Form does not attempt to limit or alter the recipients’
+ rights in the Source Code Form under this License.
+
+3.3. Distribution of a Larger Work
+
+ You may create and distribute a Larger Work under terms of Your choice,
+ provided that You also comply with the requirements of this License for the
+ Covered Software. If the Larger Work is a combination of Covered Software
+ with a work governed by one or more Secondary Licenses, and the Covered
+ Software is not Incompatible With Secondary Licenses, this License permits
+ You to additionally distribute such Covered Software under the terms of
+ such Secondary License(s), so that the recipient of the Larger Work may, at
+ their option, further distribute the Covered Software under the terms of
+ either this License or such Secondary License(s).
+
+3.4. Notices
+
+ You may not remove or alter the substance of any license notices (including
+ copyright notices, patent notices, disclaimers of warranty, or limitations
+ of liability) contained within the Source Code Form of the Covered
+ Software, except that You may alter any license notices to the extent
+ required to remedy known factual inaccuracies.
+
+3.5. Application of Additional Terms
+
+ You may choose to offer, and to charge a fee for, warranty, support,
+ indemnity or liability obligations to one or more recipients of Covered
+ Software. However, You may do so only on Your own behalf, and not on behalf
+ of any Contributor. You must make it absolutely clear that any such
+ warranty, support, indemnity, or liability obligation is offered by You
+ alone, and You hereby agree to indemnify every Contributor for any
+ liability incurred by such Contributor as a result of warranty, support,
+ indemnity or liability terms You offer. You may include additional
+ disclaimers of warranty and limitations of liability specific to any
+ jurisdiction.
+
+4. Inability to Comply Due to Statute or Regulation
+
+ If it is impossible for You to comply with any of the terms of this License
+ with respect to some or all of the Covered Software due to statute, judicial
+ order, or regulation then You must: (a) comply with the terms of this License
+ to the maximum extent possible; and (b) describe the limitations and the code
+ they affect. Such description must be placed in a text file included with all
+ distributions of the Covered Software under this License. Except to the
+ extent prohibited by statute or regulation, such description must be
+ sufficiently detailed for a recipient of ordinary skill to be able to
+ understand it.
+
+5. Termination
+
+5.1. The rights granted under this License will terminate automatically if You
+ fail to comply with any of its terms. However, if You become compliant,
+ then the rights granted under this License from a particular Contributor
+ are reinstated (a) provisionally, unless and until such Contributor
+ explicitly and finally terminates Your grants, and (b) on an ongoing basis,
+ if such Contributor fails to notify You of the non-compliance by some
+ reasonable means prior to 60 days after You have come back into compliance.
+ Moreover, Your grants from a particular Contributor are reinstated on an
+ ongoing basis if such Contributor notifies You of the non-compliance by
+ some reasonable means, this is the first time You have received notice of
+ non-compliance with this License from such Contributor, and You become
+ compliant prior to 30 days after Your receipt of the notice.
+
+5.2. If You initiate litigation against any entity by asserting a patent
+ infringement claim (excluding declaratory judgment actions, counter-claims,
+ and cross-claims) alleging that a Contributor Version directly or
+ indirectly infringes any patent, then the rights granted to You by any and
+ all Contributors for the Covered Software under Section 2.1 of this License
+ shall terminate.
+
+5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user
+ license agreements (excluding distributors and resellers) which have been
+ validly granted by You or Your distributors under this License prior to
+ termination shall survive termination.
+
+6. Disclaimer of Warranty
+
+ Covered Software is provided under this License on an “as is” basis, without
+ warranty of any kind, either expressed, implied, or statutory, including,
+ without limitation, warranties that the Covered Software is free of defects,
+ merchantable, fit for a particular purpose or non-infringing. The entire
+ risk as to the quality and performance of the Covered Software is with You.
+ Should any Covered Software prove defective in any respect, You (not any
+ Contributor) assume the cost of any necessary servicing, repair, or
+ correction. This disclaimer of warranty constitutes an essential part of this
+ License. No use of any Covered Software is authorized under this License
+ except under this disclaimer.
+
+7. Limitation of Liability
+
+ Under no circumstances and under no legal theory, whether tort (including
+ negligence), contract, or otherwise, shall any Contributor, or anyone who
+ distributes Covered Software as permitted above, be liable to You for any
+ direct, indirect, special, incidental, or consequential damages of any
+ character including, without limitation, damages for lost profits, loss of
+ goodwill, work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses, even if such party shall have been
+ informed of the possibility of such damages. This limitation of liability
+ shall not apply to liability for death or personal injury resulting from such
+ party’s negligence to the extent applicable law prohibits such limitation.
+ Some jurisdictions do not allow the exclusion or limitation of incidental or
+ consequential damages, so this exclusion and limitation may not apply to You.
+
+8. Litigation
+
+ Any litigation relating to this License may be brought only in the courts of
+ a jurisdiction where the defendant maintains its principal place of business
+ and such litigation shall be governed by laws of that jurisdiction, without
+ reference to its conflict-of-law provisions. Nothing in this Section shall
+ prevent a party’s ability to bring cross-claims or counter-claims.
+
+9. Miscellaneous
+
+ This License represents the complete agreement concerning the subject matter
+ hereof. If any provision of this License is held to be unenforceable, such
+ provision shall be reformed only to the extent necessary to make it
+ enforceable. Any law or regulation which provides that the language of a
+ contract shall be construed against the drafter shall not be used to construe
+ this License against a Contributor.
+
+
+10. Versions of the License
+
+10.1. New Versions
+
+ Mozilla Foundation is the license steward. Except as provided in Section
+ 10.3, no one other than the license steward has the right to modify or
+ publish new versions of this License. Each version will be given a
+ distinguishing version number.
+
+10.2. Effect of New Versions
+
+ You may distribute the Covered Software under the terms of the version of
+ the License under which You originally received the Covered Software, or
+ under the terms of any subsequent version published by the license
+ steward.
+
+10.3. Modified Versions
+
+ If you create software not governed by this License, and you want to
+ create a new license for such software, you may create and use a modified
+ version of this License if you rename the license and remove any
+ references to the name of the license steward (except to note that such
+ modified license differs from this License).
+
+10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses
+ If You choose to distribute Source Code Form that is Incompatible With
+ Secondary Licenses under the terms of this version of the License, the
+ notice described in Exhibit B of this License must be attached.
+
+Exhibit A - Source Code Form License Notice
+
+ This Source Code Form is subject to the
+ terms of the Mozilla Public License, v.
+ 2.0. If a copy of the MPL was not
+ distributed with this file, You can
+ obtain one at
+ http://mozilla.org/MPL/2.0/.
+
+If it is not possible or desirable to put the notice in a particular file, then
+You may include the notice in a location (such as a LICENSE file in a relevant
+directory) where a recipient would be likely to look for such a notice.
+
+You may add additional accurate notices of copyright ownership.
+
+Exhibit B - “Incompatible With Secondary Licenses” Notice
+
+ This Source Code Form is “Incompatible
+ With Secondary Licenses”, as defined by
+ the Mozilla Public License, v. 2.0.
diff --git a/vendor/github.com/hashicorp/hil/README.md b/vendor/github.com/hashicorp/hil/README.md
index 2b405ecfe42e..186ed2518c8f 100644
--- a/vendor/github.com/hashicorp/hil/README.md
+++ b/vendor/github.com/hashicorp/hil/README.md
@@ -72,7 +72,7 @@ docs, we'll assume you're within `${}`.
`add(1, var.foo)` or even nested function calls:
`add(1, get("some value"))`.
- * Witin strings, further interpolations can be opened with `${}`.
+ * Within strings, further interpolations can be opened with `${}`.
Example: `"Hello ${nested}"`. A full example including the
original `${}` (remember this list assumes were inside of one
already) could be: `foo ${func("hello ${var.foo}")}`.
diff --git a/vendor/github.com/hashicorp/hil/ast/concat.go b/vendor/github.com/hashicorp/hil/ast/concat.go
deleted file mode 100644
index 0246a3bc11e4..000000000000
--- a/vendor/github.com/hashicorp/hil/ast/concat.go
+++ /dev/null
@@ -1,42 +0,0 @@
-package ast
-
-import (
- "bytes"
- "fmt"
-)
-
-// Concat represents a node where the result of two or more expressions are
-// concatenated. The result of all expressions must be a string.
-type Concat struct {
- Exprs []Node
- Posx Pos
-}
-
-func (n *Concat) Accept(v Visitor) Node {
- for i, expr := range n.Exprs {
- n.Exprs[i] = expr.Accept(v)
- }
-
- return v(n)
-}
-
-func (n *Concat) Pos() Pos {
- return n.Posx
-}
-
-func (n *Concat) GoString() string {
- return fmt.Sprintf("*%#v", *n)
-}
-
-func (n *Concat) String() string {
- var b bytes.Buffer
- for _, expr := range n.Exprs {
- b.WriteString(fmt.Sprintf("%s", expr))
- }
-
- return b.String()
-}
-
-func (n *Concat) Type(Scope) (Type, error) {
- return TypeString, nil
-}
diff --git a/vendor/github.com/hashicorp/hil/ast/output.go b/vendor/github.com/hashicorp/hil/ast/output.go
new file mode 100644
index 000000000000..1e27f970b33b
--- /dev/null
+++ b/vendor/github.com/hashicorp/hil/ast/output.go
@@ -0,0 +1,78 @@
+package ast
+
+import (
+ "bytes"
+ "fmt"
+)
+
+// Output represents the root node of all interpolation evaluations. If the
+// output only has one expression which is either a TypeList or TypeMap, the
+// Output can be type-asserted to []interface{} or map[string]interface{}
+// respectively. Otherwise the Output evaluates as a string, and concatenates
+// the evaluation of each expression.
+type Output struct {
+ Exprs []Node
+ Posx Pos
+}
+
+func (n *Output) Accept(v Visitor) Node {
+ for i, expr := range n.Exprs {
+ n.Exprs[i] = expr.Accept(v)
+ }
+
+ return v(n)
+}
+
+func (n *Output) Pos() Pos {
+ return n.Posx
+}
+
+func (n *Output) GoString() string {
+ return fmt.Sprintf("*%#v", *n)
+}
+
+func (n *Output) String() string {
+ var b bytes.Buffer
+ for _, expr := range n.Exprs {
+ b.WriteString(fmt.Sprintf("%s", expr))
+ }
+
+ return b.String()
+}
+
+func (n *Output) Type(s Scope) (Type, error) {
+ // Special case no expressions for backward compatibility
+ if len(n.Exprs) == 0 {
+ return TypeString, nil
+ }
+
+ // Special case a single expression of types list or map
+ if len(n.Exprs) == 1 {
+ exprType, err := n.Exprs[0].Type(s)
+ if err != nil {
+ return TypeInvalid, err
+ }
+ switch exprType {
+ case TypeList:
+ return TypeList, nil
+ case TypeMap:
+ return TypeMap, nil
+ }
+ }
+
+ // Otherwise ensure all our expressions are strings
+ for index, expr := range n.Exprs {
+ exprType, err := expr.Type(s)
+ if err != nil {
+ return TypeInvalid, err
+ }
+ // We only look for things we know we can't coerce with an implicit conversion func
+ if exprType == TypeList || exprType == TypeMap {
+ return TypeInvalid, fmt.Errorf(
+ "multi-expression HIL outputs may only have string inputs: %d is type %s",
+ index, exprType)
+ }
+ }
+
+ return TypeString, nil
+}
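
For reference (not part of the patch), a minimal sketch of how the new `Output` node type-checks, assuming the `ast.LiteralNode` and `ast.BasicScope` types from the same package: a single list or map expression keeps its type, while multiple expressions must all be strings.

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hil/ast"
)

func main() {
	scope := &ast.BasicScope{}

	// Two string expressions: the Output type-checks to TypeString.
	strOut := &ast.Output{Exprs: []ast.Node{
		&ast.LiteralNode{Value: "foo-", Typex: ast.TypeString},
		&ast.LiteralNode{Value: "bar", Typex: ast.TypeString},
	}}
	ty, err := strOut.Type(scope)
	fmt.Println(ty, err) // TypeString <nil>

	// A single list expression: the Output type-checks to TypeList.
	listOut := &ast.Output{Exprs: []ast.Node{
		&ast.LiteralNode{Value: []ast.Variable{}, Typex: ast.TypeList},
	}}
	ty, err = listOut.Type(scope)
	fmt.Println(ty, err) // TypeList <nil>
}
```
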
diff --git a/vendor/github.com/hashicorp/hil/ast/type_string.go b/vendor/github.com/hashicorp/hil/ast/type_string.go
index 32bfde84038c..11793ea59137 100644
--- a/vendor/github.com/hashicorp/hil/ast/type_string.go
+++ b/vendor/github.com/hashicorp/hil/ast/type_string.go
@@ -11,6 +11,7 @@ const (
_Type_name_3 = "TypeInt"
_Type_name_4 = "TypeFloat"
_Type_name_5 = "TypeList"
+ _Type_name_6 = "TypeMap"
)
var (
@@ -20,6 +21,7 @@ var (
_Type_index_3 = [...]uint8{0, 7}
_Type_index_4 = [...]uint8{0, 9}
_Type_index_5 = [...]uint8{0, 8}
+ _Type_index_6 = [...]uint8{0, 7}
)
func (i Type) String() string {
@@ -36,6 +38,8 @@ func (i Type) String() string {
return _Type_name_4
case i == 32:
return _Type_name_5
+ case i == 64:
+ return _Type_name_6
default:
return fmt.Sprintf("Type(%d)", i)
}
diff --git a/vendor/github.com/hashicorp/hil/check_identifier.go b/vendor/github.com/hashicorp/hil/check_identifier.go
index d36ee97bf8ae..474f50588e17 100644
--- a/vendor/github.com/hashicorp/hil/check_identifier.go
+++ b/vendor/github.com/hashicorp/hil/check_identifier.go
@@ -35,7 +35,7 @@ func (c *IdentifierCheck) visit(raw ast.Node) ast.Node {
c.visitCall(n)
case *ast.VariableAccess:
c.visitVariableAccess(n)
- case *ast.Concat:
+ case *ast.Output:
// Ignore
case *ast.LiteralNode:
// Ignore
diff --git a/vendor/github.com/hashicorp/hil/check_types.go b/vendor/github.com/hashicorp/hil/check_types.go
index b5a88eefebba..554676a41869 100644
--- a/vendor/github.com/hashicorp/hil/check_types.go
+++ b/vendor/github.com/hashicorp/hil/check_types.go
@@ -64,8 +64,8 @@ func (v *TypeCheck) visit(raw ast.Node) ast.Node {
case *ast.Index:
tc := &typeCheckIndex{n}
result, err = tc.TypeCheck(v)
- case *ast.Concat:
- tc := &typeCheckConcat{n}
+ case *ast.Output:
+ tc := &typeCheckOutput{n}
result, err = tc.TypeCheck(v)
case *ast.LiteralNode:
tc := &typeCheckLiteral{n}
@@ -230,11 +230,11 @@ func (tc *typeCheckCall) TypeCheck(v *TypeCheck) (ast.Node, error) {
return tc.n, nil
}
-type typeCheckConcat struct {
- n *ast.Concat
+type typeCheckOutput struct {
+ n *ast.Output
}
-func (tc *typeCheckConcat) TypeCheck(v *TypeCheck) (ast.Node, error) {
+func (tc *typeCheckOutput) TypeCheck(v *TypeCheck) (ast.Node, error) {
n := tc.n
types := make([]ast.Type, len(n.Exprs))
for i, _ := range n.Exprs {
@@ -247,6 +247,12 @@ func (tc *typeCheckConcat) TypeCheck(v *TypeCheck) (ast.Node, error) {
return n, nil
}
+ // If there is only one argument and it is a map, we evaluate to a map
+ if len(types) == 1 && types[0] == ast.TypeMap {
+ v.StackPush(ast.TypeMap)
+ return n, nil
+ }
+
// Otherwise, all concat args must be strings, so validate that
for i, t := range types {
if t != ast.TypeString {
diff --git a/vendor/github.com/hashicorp/hil/convert.go b/vendor/github.com/hashicorp/hil/convert.go
new file mode 100644
index 000000000000..c52e2f3054e1
--- /dev/null
+++ b/vendor/github.com/hashicorp/hil/convert.go
@@ -0,0 +1,54 @@
+package hil
+
+import (
+ "fmt"
+
+ "github.com/hashicorp/hil/ast"
+ "github.com/mitchellh/mapstructure"
+)
+
+func InterfaceToVariable(input interface{}) (ast.Variable, error) {
+ var stringVal string
+ if err := mapstructure.WeakDecode(input, &stringVal); err == nil {
+ return ast.Variable{
+ Type: ast.TypeString,
+ Value: stringVal,
+ }, nil
+ }
+
+ var sliceVal []interface{}
+ if err := mapstructure.WeakDecode(input, &sliceVal); err == nil {
+ elements := make([]ast.Variable, len(sliceVal))
+ for i, element := range sliceVal {
+ varElement, err := InterfaceToVariable(element)
+ if err != nil {
+ return ast.Variable{}, err
+ }
+ elements[i] = varElement
+ }
+
+ return ast.Variable{
+ Type: ast.TypeList,
+ Value: elements,
+ }, nil
+ }
+
+ var mapVal map[string]interface{}
+ if err := mapstructure.WeakDecode(input, &mapVal); err == nil {
+ elements := make(map[string]ast.Variable)
+ for i, element := range mapVal {
+ varElement, err := InterfaceToVariable(element)
+ if err != nil {
+ return ast.Variable{}, err
+ }
+ elements[i] = varElement
+ }
+
+ return ast.Variable{
+ Type: ast.TypeMap,
+ Value: elements,
+ }, nil
+ }
+
+	return ast.Variable{}, fmt.Errorf("value for conversion must be a string, []interface{} or map[string]interface{}: got %T", input)
+}
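
As an illustration of the new `InterfaceToVariable` helper (a sketch, not part of the patch): plain Go values are weakly decoded into HIL variables, so maps become `TypeMap`, slices become `TypeList`, and scalars are coerced to `TypeString`.

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hil"
	"github.com/hashicorp/hil/ast"
)

func main() {
	// A plain Go map becomes an ast.Variable of TypeMap, with each value
	// recursively converted (here both end up as TypeString variables).
	v, err := hil.InterfaceToVariable(map[string]interface{}{
		"region": "us-east-1",
		"count":  3, // weakly decoded to the string "3"
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(v.Type == ast.TypeMap) // true

	// A slice becomes TypeList.
	l, _ := hil.InterfaceToVariable([]interface{}{"a", "b"})
	fmt.Println(l.Type == ast.TypeList) // true
}
```
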
diff --git a/vendor/github.com/hashicorp/hil/eval.go b/vendor/github.com/hashicorp/hil/eval.go
index 51c8aa71231d..f5537312e953 100644
--- a/vendor/github.com/hashicorp/hil/eval.go
+++ b/vendor/github.com/hashicorp/hil/eval.go
@@ -23,9 +23,68 @@ type EvalConfig struct {
// semantic check on an AST tree. This will be called with the root node.
type SemanticChecker func(ast.Node) error
+// EvalType represents the type of the output returned from a HIL
+// evaluation.
+type EvalType uint32
+
+const (
+ TypeInvalid EvalType = 0
+ TypeString EvalType = 1 << iota
+ TypeList
+ TypeMap
+)
+
+//go:generate stringer -type=EvalType
+
+// EvaluationResult is a struct returned from the hil.Eval function,
+// representing the result of an interpolation. Results are returned in their
+// "natural" Go structure rather than in terms of the HIL AST. For the types
+// currently implemented, this means that the Value field can be interpreted as
+// the following Go types:
+// TypeInvalid: undefined
+// TypeString: string
+// TypeList: []interface{}
+// TypeMap: map[string]interface{}
+type EvaluationResult struct {
+ Type EvalType
+ Value interface{}
+}
+
+// InvalidResult is a structure representing the result of a HIL interpolation
+// which has invalid syntax, missing variables, or some other type of error.
+// The error is described out of band in the accompanying error return value.
+var InvalidResult = EvaluationResult{Type: TypeInvalid, Value: nil}
+
+func Eval(root ast.Node, config *EvalConfig) (EvaluationResult, error) {
+ output, outputType, err := internalEval(root, config)
+ if err != nil {
+ return InvalidResult, err
+ }
+
+ switch outputType {
+ case ast.TypeList:
+ return EvaluationResult{
+ Type: TypeList,
+ Value: hilListToGoSlice(output.([]ast.Variable)),
+ }, nil
+ case ast.TypeMap:
+ return EvaluationResult{
+ Type: TypeMap,
+ Value: hilMapToGoMap(output.(map[string]ast.Variable)),
+ }, nil
+ case ast.TypeString:
+ return EvaluationResult{
+ Type: TypeString,
+ Value: output,
+ }, nil
+ default:
+ return InvalidResult, fmt.Errorf("unknown type %s as interpolation output", outputType)
+ }
+}
+
// Eval evaluates the given AST tree and returns its output value, the type
// of the output, and any error that occurred.
-func Eval(root ast.Node, config *EvalConfig) (interface{}, ast.Type, error) {
+func internalEval(root ast.Node, config *EvalConfig) (interface{}, ast.Type, error) {
// Copy the scope so we can add our builtins
if config == nil {
config = new(EvalConfig)
@@ -145,8 +204,8 @@ func evalNode(raw ast.Node) (EvalNode, error) {
return &evalIndex{n}, nil
case *ast.Call:
return &evalCall{n}, nil
- case *ast.Concat:
- return &evalConcat{n}, nil
+ case *ast.Output:
+ return &evalOutput{n}, nil
case *ast.LiteralNode:
return &evalLiteralNode{n}, nil
case *ast.VariableAccess:
@@ -278,9 +337,35 @@ func (v *evalIndex) evalMapIndex(variableName string, target interface{}, key in
return value.Value, value.Type, nil
}
-type evalConcat struct{ *ast.Concat }
+// hilListToGoSlice converts a []ast.Variable into a []interface{}. We assume that
+// the type checking is already done since this is internal and only used in output
+// evaluation.
+func hilListToGoSlice(variable []ast.Variable) []interface{} {
+ output := make([]interface{}, len(variable))
+
+ for index, element := range variable {
+ output[index] = element.Value
+ }
+
+ return output
+}
+
+// hilMapToGoMap converts a map[string]ast.Variable into a map[string]interface{}. We assume
+// that the type checking is already done since this is internal and only used in
+// output evaluation.
+func hilMapToGoMap(variable map[string]ast.Variable) map[string]interface{} {
+ output := make(map[string]interface{})
-func (v *evalConcat) Eval(s ast.Scope, stack *ast.Stack) (interface{}, ast.Type, error) {
+ for key, element := range variable {
+ output[key] = element.Value
+ }
+
+ return output
+}
+
+type evalOutput struct{ *ast.Output }
+
+func (v *evalOutput) Eval(s ast.Scope, stack *ast.Stack) (interface{}, ast.Type, error) {
// The expressions should all be on the stack in reverse
// order. So pop them off, reverse their order, and concatenate.
nodes := make([]*ast.LiteralNode, 0, len(v.Exprs))
@@ -288,10 +373,13 @@ func (v *evalConcat) Eval(s ast.Scope, stack *ast.Stack) (interface{}, ast.Type,
nodes = append(nodes, stack.Pop().(*ast.LiteralNode))
}
- // Special case the single list
+ // Special case the single list and map
if len(nodes) == 1 && nodes[0].Typex == ast.TypeList {
return nodes[0].Value, ast.TypeList, nil
}
+ if len(nodes) == 1 && nodes[0].Typex == ast.TypeMap {
+ return nodes[0].Value, ast.TypeMap, nil
+ }
// Otherwise concatenate the strings
var buf bytes.Buffer
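
A hedged usage sketch of the new public `Eval` entry point (the variable name and scope setup below are illustrative, not from the patch): callers parse first, then inspect `result.Type` instead of an `ast.Type`.

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hil"
	"github.com/hashicorp/hil/ast"
)

func main() {
	tree, err := hil.Parse("hello ${var.name}")
	if err != nil {
		panic(err)
	}

	config := &hil.EvalConfig{
		GlobalScope: &ast.BasicScope{
			VarMap: map[string]ast.Variable{
				"var.name": {Type: ast.TypeString, Value: "world"},
			},
		},
	}

	result, err := hil.Eval(tree, config)
	if err != nil {
		panic(err)
	}

	// A pure string interpolation yields TypeString and a Go string value.
	fmt.Println(result.Type == hil.TypeString, result.Value) // true hello world
}
```
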
diff --git a/vendor/github.com/hashicorp/hil/evaltype_string.go b/vendor/github.com/hashicorp/hil/evaltype_string.go
new file mode 100644
index 000000000000..911ff30e138b
--- /dev/null
+++ b/vendor/github.com/hashicorp/hil/evaltype_string.go
@@ -0,0 +1,34 @@
+// Code generated by "stringer -type=EvalType"; DO NOT EDIT
+
+package hil
+
+import "fmt"
+
+const (
+ _EvalType_name_0 = "TypeInvalid"
+ _EvalType_name_1 = "TypeString"
+ _EvalType_name_2 = "TypeList"
+ _EvalType_name_3 = "TypeMap"
+)
+
+var (
+ _EvalType_index_0 = [...]uint8{0, 11}
+ _EvalType_index_1 = [...]uint8{0, 10}
+ _EvalType_index_2 = [...]uint8{0, 8}
+ _EvalType_index_3 = [...]uint8{0, 7}
+)
+
+func (i EvalType) String() string {
+ switch {
+ case i == 0:
+ return _EvalType_name_0
+ case i == 2:
+ return _EvalType_name_1
+ case i == 4:
+ return _EvalType_name_2
+ case i == 8:
+ return _EvalType_name_3
+ default:
+ return fmt.Sprintf("EvalType(%d)", i)
+ }
+}
diff --git a/vendor/github.com/hashicorp/hil/lang.y b/vendor/github.com/hashicorp/hil/lang.y
index 6dc15f0d8bb1..67a7dc2aaaa8 100644
--- a/vendor/github.com/hashicorp/hil/lang.y
+++ b/vendor/github.com/hashicorp/hil/lang.y
@@ -44,17 +44,17 @@ top:
{
parserResult = $1
- // We want to make sure that the top value is always a Concat
- // so that the return value is always a string type from an
+ // We want to make sure that the top value is always an Output
+        // so that the return value is always a string, list or map from an
// interpolation.
//
// The logic for checking for a LiteralNode is a little annoying
// because functionally the AST is the same, but we do that because
// it makes for an easy literal check later (to check if a string
// has any interpolations).
- if _, ok := $1.(*ast.Concat); !ok {
+ if _, ok := $1.(*ast.Output); !ok {
if n, ok := $1.(*ast.LiteralNode); !ok || n.Typex != ast.TypeString {
- parserResult = &ast.Concat{
+ parserResult = &ast.Output{
Exprs: []ast.Node{$1},
Posx: $1.Pos(),
}
@@ -70,13 +70,13 @@ literalModeTop:
| literalModeTop literalModeValue
{
var result []ast.Node
- if c, ok := $1.(*ast.Concat); ok {
+ if c, ok := $1.(*ast.Output); ok {
result = append(c.Exprs, $2)
} else {
result = []ast.Node{$1, $2}
}
- $$ = &ast.Concat{
+ $$ = &ast.Output{
Exprs: result,
Posx: result[0].Pos(),
}
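
To make the wrapping behaviour described in the grammar comment concrete, a small sketch (not part of the patch): interpolated strings parse to an *ast.Output root, while a bare string literal stays an *ast.LiteralNode so the cheap "no interpolations" check still works.

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hil"
	"github.com/hashicorp/hil/ast"
)

func main() {
	// A string with an interpolation parses to an *ast.Output root node.
	root, err := hil.Parse("foo ${bar()}")
	if err != nil {
		panic(err)
	}
	if out, ok := root.(*ast.Output); ok {
		fmt.Println("output with", len(out.Exprs), "expressions") // 2
	}

	// A plain string literal is left as a bare *ast.LiteralNode, which is
	// what makes the literal check cheap for callers.
	lit, _ := hil.Parse("just a literal")
	_, isLiteral := lit.(*ast.LiteralNode)
	fmt.Println(isLiteral) // true
}
```
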
diff --git a/vendor/github.com/hashicorp/hil/transform_fixed.go b/vendor/github.com/hashicorp/hil/transform_fixed.go
index 81c10377a5e7..e69df294325b 100644
--- a/vendor/github.com/hashicorp/hil/transform_fixed.go
+++ b/vendor/github.com/hashicorp/hil/transform_fixed.go
@@ -14,7 +14,7 @@ func FixedValueTransform(root ast.Node, Value *ast.LiteralNode) ast.Node {
// We visit the nodes in top-down order
result := root
switch n := result.(type) {
- case *ast.Concat:
+ case *ast.Output:
for i, v := range n.Exprs {
n.Exprs[i] = FixedValueTransform(v, Value)
}
diff --git a/vendor/github.com/hashicorp/hil/y.go b/vendor/github.com/hashicorp/hil/y.go
index cf9887cf3c2a..30eb86aa7e4a 100644
--- a/vendor/github.com/hashicorp/hil/y.go
+++ b/vendor/github.com/hashicorp/hil/y.go
@@ -484,17 +484,17 @@ parserdefault:
{
parserResult = parserDollar[1].node
- // We want to make sure that the top value is always a Concat
- // so that the return value is always a string type from an
+ // We want to make sure that the top value is always an Output
+        // so that the return value is always a string, list or map from an
// interpolation.
//
// The logic for checking for a LiteralNode is a little annoying
// because functionally the AST is the same, but we do that because
// it makes for an easy literal check later (to check if a string
// has any interpolations).
- if _, ok := parserDollar[1].node.(*ast.Concat); !ok {
+ if _, ok := parserDollar[1].node.(*ast.Output); !ok {
if n, ok := parserDollar[1].node.(*ast.LiteralNode); !ok || n.Typex != ast.TypeString {
- parserResult = &ast.Concat{
+ parserResult = &ast.Output{
Exprs: []ast.Node{parserDollar[1].node},
Posx: parserDollar[1].node.Pos(),
}
@@ -512,13 +512,13 @@ parserdefault:
//line lang.y:71
{
var result []ast.Node
- if c, ok := parserDollar[1].node.(*ast.Concat); ok {
+ if c, ok := parserDollar[1].node.(*ast.Output); ok {
result = append(c.Exprs, parserDollar[2].node)
} else {
result = []ast.Node{parserDollar[1].node, parserDollar[2].node}
}
- parserVAL.node = &ast.Concat{
+ parserVAL.node = &ast.Output{
Exprs: result,
Posx: result[0].Pos(),
}
diff --git a/vendor/github.com/henrikhodne/go-librato/LICENSE b/vendor/github.com/henrikhodne/go-librato/LICENSE
new file mode 100644
index 000000000000..1255f582f863
--- /dev/null
+++ b/vendor/github.com/henrikhodne/go-librato/LICENSE
@@ -0,0 +1,21 @@
+The MIT License (MIT)
+
+Copyright 2015 Henrik Hodne
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/vendor/github.com/henrikhodne/go-librato/librato/client.go b/vendor/github.com/henrikhodne/go-librato/librato/client.go
new file mode 100644
index 000000000000..181e2c7f587e
--- /dev/null
+++ b/vendor/github.com/henrikhodne/go-librato/librato/client.go
@@ -0,0 +1,282 @@
+package librato
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "io"
+ "io/ioutil"
+ "net/http"
+ "net/url"
+ "reflect"
+
+ "github.com/google/go-querystring/query"
+)
+
+const (
+ libraryVersion = "0.1"
+ defaultBaseURL = "https://metrics-api.librato.com/v1/"
+ userAgent = "go-librato/" + libraryVersion
+
+ defaultMediaType = "application/json"
+)
+
+// A Client manages communication with the Librato API.
+type Client struct {
+ // HTTP client used to communicate with the API
+ client *http.Client
+
+ // Headers to attach to every request made with the client. Headers will be
+ // used to provide Librato API authentication details and other necessary
+ // headers.
+ Headers map[string]string
+
+ // Email and Token contains the authentication details needed to authenticate
+ // against the Librato API.
+ Email, Token string
+
+ // Base URL for API requests. Defaults to the public Librato API, but can be
+ // set to an alternate endpoint if necessary. BaseURL should always be
+ // terminated by a slash.
+ BaseURL *url.URL
+
+ // User agent used when communicating with the Librato API.
+ UserAgent string
+
+ // Services used to manipulate API entities.
+ Spaces *SpacesService
+}
+
+// NewClient returns a new Librato API client bound to the public Librato API.
+func NewClient(email, token string) *Client {
+ bu, err := url.Parse(defaultBaseURL)
+ if err != nil {
+ panic("Default Librato API base URL couldn't be parsed")
+ }
+
+ return NewClientWithBaseURL(bu, email, token)
+}
+
+// NewClientWithBaseURL returns a new Librato API client with a custom base URL.
+func NewClientWithBaseURL(baseURL *url.URL, email, token string) *Client {
+ headers := map[string]string{
+ "Content-Type": defaultMediaType,
+ "Accept": defaultMediaType,
+ }
+
+ c := &Client{
+ client: http.DefaultClient,
+ Headers: headers,
+ Email: email,
+ Token: token,
+ BaseURL: baseURL,
+ UserAgent: userAgent,
+ }
+
+ c.Spaces = &SpacesService{client: c}
+
+ return c
+}
+
+// NewRequest creates an API request. A relative URL can be provided in urlStr,
+// in which case it is resolved relative to the BaseURL of the Client.
+// Relative URLs should always be specified without a preceding slash. If
+// specified, the value pointed to by body is JSON encoded and included as the
+// request body. If specified, the map provided by headers will be used to
+// update request headers.
+func (c *Client) NewRequest(method, urlStr string, body interface{}) (*http.Request, error) {
+ rel, err := url.Parse(urlStr)
+ if err != nil {
+ return nil, err
+ }
+
+ u := c.BaseURL.ResolveReference(rel)
+
+ var buf io.ReadWriter
+ if body != nil {
+ buf = new(bytes.Buffer)
+ err := json.NewEncoder(buf).Encode(body)
+ if err != nil {
+ return nil, err
+ }
+ }
+
+ req, err := http.NewRequest(method, u.String(), buf)
+ if err != nil {
+ return nil, err
+ }
+
+ req.SetBasicAuth(c.Email, c.Token)
+ if c.UserAgent != "" {
+ req.Header.Set("User-Agent", c.UserAgent)
+ }
+
+ for k, v := range c.Headers {
+ req.Header.Set(k, v)
+ }
+
+ return req, nil
+}
+
+// Do sends an API request and returns the API response. The API response is
+// JSON decoded and stored in the value pointed to by v, or returned as an
+// error if an API error has occurred. If v implements the io.Writer
+// interface, the raw response body will be written to v, without attempting to
+// first decode it.
+func (c *Client) Do(req *http.Request, v interface{}) (*http.Response, error) {
+ resp, err := c.client.Do(req)
+ if err != nil {
+ return nil, err
+ }
+ defer resp.Body.Close()
+
+ err = CheckResponse(resp)
+ if err != nil {
+ return resp, err
+ }
+
+ if v != nil {
+ if w, ok := v.(io.Writer); ok {
+ _, err = io.Copy(w, resp.Body)
+ } else {
+ err = json.NewDecoder(resp.Body).Decode(v)
+ }
+ }
+
+ return resp, err
+}
+
+// ErrorResponse reports an error caused by an API request.
+// ErrorResponse implements the Error interface.
+type ErrorResponse struct {
+ // HTTP response that caused this error
+ Response *http.Response
+
+	// Error messages produced by the Librato API.
+ Errors ErrorResponseMessages `json:"errors"`
+}
+
+func (er *ErrorResponse) Error() string {
+ buf := new(bytes.Buffer)
+
+ if er.Errors.Params != nil && len(er.Errors.Params) > 0 {
+ buf.WriteString(" Parameter errors:")
+ for param, errs := range er.Errors.Params {
+ fmt.Fprintf(buf, " %s:", param)
+ for _, err := range errs {
+ fmt.Fprintf(buf, " %s,", err)
+ }
+ }
+ buf.WriteString(".")
+ }
+
+ if er.Errors.Request != nil && len(er.Errors.Request) > 0 {
+ buf.WriteString(" Request errors:")
+ for _, err := range er.Errors.Request {
+ fmt.Fprintf(buf, " %s,", err)
+ }
+ buf.WriteString(".")
+ }
+
+ if er.Errors.System != nil && len(er.Errors.System) > 0 {
+ buf.WriteString(" System errors:")
+ for _, err := range er.Errors.System {
+ fmt.Fprintf(buf, " %s,", err)
+ }
+ buf.WriteString(".")
+ }
+
+ return fmt.Sprintf(
+ "%v %v: %d %v",
+ er.Response.Request.Method,
+ er.Response.Request.URL,
+ er.Response.StatusCode,
+ buf.String(),
+ )
+}
+
+// ErrorResponseMessages contains error messages returned from the Librato API.
+type ErrorResponseMessages struct {
+ Params map[string][]string `json:"params,omitempty"`
+ Request []string `json:"request,omitempty"`
+ System []string `json:"system,omitempty"`
+}
+
+// CheckResponse checks the API response for errors and returns them if
+// present. A Response is considered an error if it has a status code outside
+// the 2XX range.
+func CheckResponse(r *http.Response) error {
+ if c := r.StatusCode; 200 <= c && c <= 299 {
+ return nil
+ }
+
+ errorResponse := &ErrorResponse{Response: r}
+
+ data, err := ioutil.ReadAll(r.Body)
+ if err == nil && data != nil {
+ json.Unmarshal(data, errorResponse)
+ }
+
+ return errorResponse
+}
+
+func urlWithOptions(s string, opt interface{}) (string, error) {
+ rv := reflect.ValueOf(opt)
+ if rv.Kind() == reflect.Ptr && rv.IsNil() {
+ return s, nil
+ }
+
+ u, err := url.Parse(s)
+ if err != nil {
+ return s, err
+ }
+
+ qs, err := query.Values(opt)
+ if err != nil {
+ return "", err
+ }
+ u.RawQuery = qs.Encode()
+
+ return u.String(), nil
+}
+
+// Bool is a helper routine that allocates a new bool value
+// to store v and returns a pointer to it.
+func Bool(v bool) *bool {
+ p := new(bool)
+ *p = v
+ return p
+}
+
+// Int is a helper routine that allocates a new int32 value
+// to store v and returns a pointer to it, but unlike Int32
+// its argument value is an int.
+func Int(v int) *int {
+ p := new(int)
+ *p = v
+ return p
+}
+
+// Uint is a helper routine that allocates a new uint value
+// to store v and returns a pointer to it.
+func Uint(v uint) *uint {
+ p := new(uint)
+ *p = v
+ return p
+}
+
+// String is a helper routine that allocates a new string value
+// to store v and returns a pointer to it.
+func String(v string) *string {
+ p := new(string)
+ *p = v
+ return p
+}
+
+// Float is a helper routine that allocates a new float64 value
+// to store v and returns a pointer to it.
+func Float(v float64) *float64 {
+ p := new(float64)
+ *p = v
+ return p
+}
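
A brief usage sketch of the client (the credentials and the localhost endpoint below are placeholders): NewClient targets the public API, NewClientWithBaseURL accepts an alternate endpoint, and NewRequest resolves relative paths against BaseURL and attaches the default JSON headers plus basic-auth credentials.

```go
package main

import (
	"fmt"
	"net/url"

	"github.com/henrikhodne/go-librato/librato"
)

func main() {
	// The zero-config constructor targets the public Librato API.
	c := librato.NewClient("user@example.com", "api-token")
	fmt.Println(c.BaseURL) // https://metrics-api.librato.com/v1/

	// For testing or alternative endpoints, the base URL can be overridden.
	base, _ := url.Parse("http://localhost:8080/v1/")
	tc := librato.NewClientWithBaseURL(base, "user@example.com", "api-token")

	// Relative paths are resolved against BaseURL.
	req, err := tc.NewRequest("GET", "spaces", nil)
	if err != nil {
		panic(err)
	}
	fmt.Println(req.URL) // http://localhost:8080/v1/spaces
}
```
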
diff --git a/vendor/github.com/henrikhodne/go-librato/librato/spaces.go b/vendor/github.com/henrikhodne/go-librato/librato/spaces.go
new file mode 100644
index 000000000000..6a003dfbe80d
--- /dev/null
+++ b/vendor/github.com/henrikhodne/go-librato/librato/spaces.go
@@ -0,0 +1,123 @@
+package librato
+
+import (
+ "fmt"
+ "net/http"
+)
+
+// SpacesService handles communication with the Librato API methods related to
+// spaces.
+type SpacesService struct {
+ client *Client
+}
+
+// Space represents a Librato Space.
+type Space struct {
+ Name *string `json:"name"`
+ ID *uint `json:"id,omitempty"`
+}
+
+func (s Space) String() string {
+ return Stringify(s)
+}
+
+// SpaceListOptions specifies the optional parameters to the SpaceService.Find
+// method.
+type SpaceListOptions struct {
+ // filter by name
+ Name string `url:"name,omitempty"`
+}
+
+type listSpacesResponse struct {
+ Spaces []Space `json:"spaces"`
+}
+
+// List spaces using the provided options.
+//
+// Librato API docs: http://dev.librato.com/v1/get/spaces
+func (s *SpacesService) List(opt *SpaceListOptions) ([]Space, *http.Response, error) {
+ u, err := urlWithOptions("spaces", opt)
+ if err != nil {
+ return nil, nil, err
+ }
+
+ req, err := s.client.NewRequest("GET", u, nil)
+ if err != nil {
+ return nil, nil, err
+ }
+
+ var spacesResp listSpacesResponse
+ resp, err := s.client.Do(req, &spacesResp)
+ if err != nil {
+ return nil, resp, err
+ }
+
+ return spacesResp.Spaces, resp, nil
+}
+
+// Get fetches a space based on the provided ID.
+//
+// Librato API docs: http://dev.librato.com/v1/get/spaces/:id
+func (s *SpacesService) Get(id uint) (*Space, *http.Response, error) {
+ u, err := urlWithOptions(fmt.Sprintf("spaces/%d", id), nil)
+ if err != nil {
+ return nil, nil, err
+ }
+
+ req, err := s.client.NewRequest("GET", u, nil)
+ if err != nil {
+ return nil, nil, err
+ }
+
+ sp := new(Space)
+ resp, err := s.client.Do(req, sp)
+ if err != nil {
+ return nil, resp, err
+ }
+
+ return sp, resp, err
+}
+
+// Create a space with a given name.
+//
+// Librato API docs: http://dev.librato.com/v1/post/spaces
+func (s *SpacesService) Create(space *Space) (*Space, *http.Response, error) {
+ req, err := s.client.NewRequest("POST", "spaces", space)
+ if err != nil {
+ return nil, nil, err
+ }
+
+ sp := new(Space)
+ resp, err := s.client.Do(req, sp)
+ if err != nil {
+ return nil, resp, err
+ }
+
+ return sp, resp, err
+}
+
+// Edit a space.
+//
+// Librato API docs: http://dev.librato.com/v1/put/spaces/:id
+func (s *SpacesService) Edit(spaceID uint, space *Space) (*http.Response, error) {
+ u := fmt.Sprintf("spaces/%d", spaceID)
+ req, err := s.client.NewRequest("PUT", u, space)
+ if err != nil {
+ return nil, err
+ }
+
+ return s.client.Do(req, nil)
+}
+
+// Delete a space.
+//
+// Librato API docs: http://dev.librato.com/v1/delete/spaces/:id
+func (s *SpacesService) Delete(id uint) (*http.Response, error) {
+ u := fmt.Sprintf("spaces/%d", id)
+ req, err := s.client.NewRequest("DELETE", u, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ return s.client.Do(req, nil)
+}
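
A sketch of the SpacesService calls (the credentials and space name are placeholders, and error handling is abbreviated): create a space, then look it up by name.

```go
package main

import (
	"fmt"

	"github.com/henrikhodne/go-librato/librato"
)

func main() {
	c := librato.NewClient("user@example.com", "api-token")

	// Create a space; the API fills in the ID on success.
	space, _, err := c.Spaces.Create(&librato.Space{Name: librato.String("production")})
	if err != nil {
		panic(err)
	}
	fmt.Println("created space", *space.ID)

	// List spaces filtered by name.
	spaces, _, err := c.Spaces.List(&librato.SpaceListOptions{Name: "production"})
	if err != nil {
		panic(err)
	}
	for _, s := range spaces {
		fmt.Println(s) // Space.String uses Stringify
	}
}
```
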
diff --git a/vendor/github.com/henrikhodne/go-librato/librato/spaces_charts.go b/vendor/github.com/henrikhodne/go-librato/librato/spaces_charts.go
new file mode 100644
index 000000000000..fe8f68973595
--- /dev/null
+++ b/vendor/github.com/henrikhodne/go-librato/librato/spaces_charts.go
@@ -0,0 +1,118 @@
+package librato
+
+import (
+ "fmt"
+ "net/http"
+)
+
+// SpaceChart represents a chart in a Librato Space.
+type SpaceChart struct {
+ ID *uint `json:"id,omitempty"`
+ Name *string `json:"name,omitempty"`
+ Type *string `json:"type,omitempty"`
+ Min *float64 `json:"min,omitempty"`
+ Max *float64 `json:"max,omitempty"`
+ Label *string `json:"label,omitempty"`
+ RelatedSpace *uint `json:"related_space,omitempty"`
+ Streams []SpaceChartStream `json:"streams,omitempty"`
+}
+
+// SpaceChartStream represents a single stream in a chart in a Librato Space.
+type SpaceChartStream struct {
+ Metric *string `json:"metric,omitempty"`
+ Source *string `json:"source,omitempty"`
+ Composite *string `json:"composite,omitempty"`
+ GroupFunction *string `json:"group_function,omitempty"`
+ SummaryFunction *string `json:"summary_function,omitempty"`
+ Color *string `json:"color,omitempty"`
+ Name *string `json:"name,omitempty"`
+ UnitsShort *string `json:"units_short,omitempty"`
+ UnitsLong *string `json:"units_long,omitempty"`
+ Min *float64 `json:"min,omitempty"`
+ Max *float64 `json:"max,omitempty"`
+ TransformFunction *string `json:"transform_function,omitempty"`
+ Period *int64 `json:"period,omitempty"`
+}
+
+// CreateChart creates a chart in a given Librato Space.
+//
+// Librato API docs: http://dev.librato.com/v1/post/spaces/:id/charts
+func (s *SpacesService) CreateChart(spaceID uint, chart *SpaceChart) (*SpaceChart, *http.Response, error) {
+ u := fmt.Sprintf("spaces/%d/charts", spaceID)
+ req, err := s.client.NewRequest("POST", u, chart)
+ if err != nil {
+ return nil, nil, err
+ }
+
+ c := new(SpaceChart)
+ resp, err := s.client.Do(req, c)
+ if err != nil {
+ return nil, resp, err
+ }
+
+ return c, resp, err
+}
+
+// ListCharts lists all charts in a given Librato Space.
+//
+// Librato API docs: http://dev.librato.com/v1/get/spaces/:id/charts
+func (s *SpacesService) ListCharts(spaceID uint) ([]SpaceChart, *http.Response, error) {
+ u := fmt.Sprintf("spaces/%d/charts", spaceID)
+ req, err := s.client.NewRequest("GET", u, nil)
+ if err != nil {
+ return nil, nil, err
+ }
+
+ charts := new([]SpaceChart)
+ resp, err := s.client.Do(req, charts)
+ if err != nil {
+ return nil, resp, err
+ }
+
+ return *charts, resp, err
+}
+
+// GetChart gets a chart with a given ID in a space with a given ID.
+//
+// Librato API docs: http://dev.librato.com/v1/get/spaces/:id/charts
+func (s *SpacesService) GetChart(spaceID, chartID uint) (*SpaceChart, *http.Response, error) {
+ u := fmt.Sprintf("spaces/%d/charts/%d", spaceID, chartID)
+ req, err := s.client.NewRequest("GET", u, nil)
+ if err != nil {
+ return nil, nil, err
+ }
+
+ c := new(SpaceChart)
+ resp, err := s.client.Do(req, c)
+ if err != nil {
+ return nil, resp, err
+ }
+
+ return c, resp, err
+}
+
+// EditChart edits a chart.
+//
+// Librato API docs: http://dev.librato.com/v1/put/spaces/:id/charts/:id
+func (s *SpacesService) EditChart(spaceID, chartID uint, chart *SpaceChart) (*http.Response, error) {
+ u := fmt.Sprintf("spaces/%d/charts/%d", spaceID, chartID)
+ req, err := s.client.NewRequest("PUT", u, chart)
+ if err != nil {
+ return nil, err
+ }
+
+ return s.client.Do(req, nil)
+}
+
+// DeleteChart deletes a chart.
+//
+// Librato API docs: http://dev.librato.com/v1/delete/spaces/:id/charts/:id
+func (s *SpacesService) DeleteChart(spaceID, chartID uint) (*http.Response, error) {
+ u := fmt.Sprintf("spaces/%d/charts/%d", spaceID, chartID)
+ req, err := s.client.NewRequest("DELETE", u, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ return s.client.Do(req, nil)
+}
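
A sketch of creating a chart in a space (the credentials, space ID, and metric name below are placeholders):

```go
package main

import (
	"github.com/henrikhodne/go-librato/librato"
)

func main() {
	c := librato.NewClient("user@example.com", "api-token")

	chart := &librato.SpaceChart{
		Name: librato.String("CPU"),
		Type: librato.String("line"),
		Streams: []librato.SpaceChartStream{
			{
				Metric:          librato.String("cpu.percent.used"),
				Source:          librato.String("*"),
				SummaryFunction: librato.String("average"),
			},
		},
	}

	// 123 is a placeholder space ID.
	if _, _, err := c.Spaces.CreateChart(123, chart); err != nil {
		panic(err)
	}
}
```
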
diff --git a/vendor/github.com/henrikhodne/go-librato/librato/strings.go b/vendor/github.com/henrikhodne/go-librato/librato/strings.go
new file mode 100644
index 000000000000..953bf86e8c04
--- /dev/null
+++ b/vendor/github.com/henrikhodne/go-librato/librato/strings.go
@@ -0,0 +1,79 @@
+package librato
+
+import (
+ "bytes"
+ "fmt"
+ "io"
+ "reflect"
+)
+
+// Stringify attempts to create a reasonable string representation of types in
+// the Librato library. It does things like resolve pointers to their values
+// and omit struct fields with nil values.
+func Stringify(message interface{}) string {
+ var buf bytes.Buffer
+ v := reflect.ValueOf(message)
+ stringifyValue(&buf, v)
+ return buf.String()
+}
+
+// stringifyValue was heavily inspired by the goprotobuf library.
+
+func stringifyValue(w io.Writer, val reflect.Value) {
+ if val.Kind() == reflect.Ptr && val.IsNil() {
+ w.Write([]byte(""))
+ return
+ }
+
+ v := reflect.Indirect(val)
+
+ switch v.Kind() {
+ case reflect.String:
+ fmt.Fprintf(w, `"%s"`, v)
+ case reflect.Slice:
+ w.Write([]byte{'['})
+ for i := 0; i < v.Len(); i++ {
+ if i > 0 {
+ w.Write([]byte{' '})
+ }
+
+ stringifyValue(w, v.Index(i))
+ }
+
+ w.Write([]byte{']'})
+ return
+ case reflect.Struct:
+ if v.Type().Name() != "" {
+ w.Write([]byte(v.Type().String()))
+ }
+
+ w.Write([]byte{'{'})
+
+ var sep bool
+ for i := 0; i < v.NumField(); i++ {
+ fv := v.Field(i)
+ if fv.Kind() == reflect.Ptr && fv.IsNil() {
+ continue
+ }
+ if fv.Kind() == reflect.Slice && fv.IsNil() {
+ continue
+ }
+
+ if sep {
+ w.Write([]byte(", "))
+ } else {
+ sep = true
+ }
+
+ w.Write([]byte(v.Type().Field(i).Name))
+ w.Write([]byte{':'})
+ stringifyValue(w, fv)
+ }
+
+ w.Write([]byte{'}'})
+ default:
+ if v.CanInterface() {
+ fmt.Fprint(w, v.Interface())
+ }
+ }
+}
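
A small example of what Stringify produces (assuming the Space type and String helper from this package): nil pointer fields are omitted and non-nil pointers are dereferenced.

```go
package main

import (
	"fmt"

	"github.com/henrikhodne/go-librato/librato"
)

func main() {
	// ID is nil and therefore omitted; Name is dereferenced and quoted.
	s := librato.Space{Name: librato.String("production")}
	fmt.Println(librato.Stringify(s)) // librato.Space{Name:"production"}
}
```
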
diff --git a/vendor/github.com/henrikhodne/go-librato/librato/testing_helpers.go b/vendor/github.com/henrikhodne/go-librato/librato/testing_helpers.go
new file mode 100644
index 000000000000..feaca2564108
--- /dev/null
+++ b/vendor/github.com/henrikhodne/go-librato/librato/testing_helpers.go
@@ -0,0 +1,17 @@
+package librato
+
+import (
+ "fmt"
+ "path/filepath"
+ "runtime"
+ "testing"
+)
+
+// ok fails the test if err is not nil.
+func ok(tb testing.TB, err error) {
+ if err != nil {
+ _, file, line, _ := runtime.Caller(1)
+ fmt.Printf("\033[31m%s:%d: unexpected error: %s\033[39m\n\n", filepath.Base(file), line, err.Error())
+ tb.FailNow()
+ }
+}
diff --git a/vendor/github.com/joyent/gosdc/cloudapi/cloudapi.go b/vendor/github.com/joyent/gosdc/cloudapi/cloudapi.go
index eb88e5fd6ad7..2f7c406ac5bf 100644
--- a/vendor/github.com/joyent/gosdc/cloudapi/cloudapi.go
+++ b/vendor/github.com/joyent/gosdc/cloudapi/cloudapi.go
@@ -45,6 +45,7 @@ const (
apiFabricVLANs = "fabrics/default/vlans"
apiFabricNetworks = "networks"
apiNICs = "nics"
+ apiServices = "services"
// CloudAPI actions
actionExport = "export"
diff --git a/vendor/github.com/joyent/gosdc/cloudapi/images.go b/vendor/github.com/joyent/gosdc/cloudapi/images.go
index e3299d5223e8..c7f9a2fe37bd 100644
--- a/vendor/github.com/joyent/gosdc/cloudapi/images.go
+++ b/vendor/github.com/joyent/gosdc/cloudapi/images.go
@@ -19,7 +19,7 @@ type Image struct {
Requirements map[string]interface{} // Minimum requirements for provisioning a machine with this image, e.g. 'password' indicates that a password must be provided
Homepage string // URL for a web page including detailed information for this image (new in API version 7.0)
PublishedAt string `json:"published_at"` // Time this image has been made publicly available (new in API version 7.0)
- Public string // Indicates if the image is publicly available (new in API version 7.1)
+ Public bool // Indicates if the image is publicly available (new in API version 7.1)
State string // Current image state. One of 'active', 'unactivated', 'disabled', 'creating', 'failed' (new in API version 7.1)
Tags map[string]string // A map of key/value pairs that allows clients to categorize images by any given criteria (new in API version 7.1)
EULA string // URL of the End User License Agreement (EULA) for the image (new in API version 7.1)
@@ -44,14 +44,14 @@ type MantaLocation struct {
// CreateImageFromMachineOpts represent the option that can be specified
// when creating a new image from an existing machine.
type CreateImageFromMachineOpts struct {
- Machine string `json:"machine"` // The machine UUID from which the image is to be created
- Name string `json:"name"` // Image name
- Version string `json:"version"` // Image version
- Description string `json:"description"` // Image description
- Homepage string `json:"homepage"` // URL for a web page including detailed information for this image
- EULA string `json:"eula"` // URL of the End User License Agreement (EULA) for the image
- ACL []string `json:"acl"` // An array of account UUIDs given access to a private image. The field is only relevant to private images
- Tags map[string]string `json:"tags"` // A map of key/value pairs that allows clients to categorize images by any given criteria
+ Machine string `json:"machine"` // The machine UUID from which the image is to be created
+ Name string `json:"name"` // Image name
+ Version string `json:"version"` // Image version
+ Description string `json:"description,omitempty"` // Image description
+ Homepage string `json:"homepage,omitempty"` // URL for a web page including detailed information for this image
+ EULA string `json:"eula,omitempty"` // URL of the End User License Agreement (EULA) for the image
+ ACL []string `json:"acl,omitempty"` // An array of account UUIDs given access to a private image. The field is only relevant to private images
+ Tags map[string]string `json:"tags,omitempty"` // A map of key/value pairs that allows clients to categorize images by any given criteria
}
// ListImages provides a list of images available in the datacenter.
diff --git a/vendor/github.com/joyent/gosdc/cloudapi/machines.go b/vendor/github.com/joyent/gosdc/cloudapi/machines.go
index 073afb061b74..e89980ee439e 100644
--- a/vendor/github.com/joyent/gosdc/cloudapi/machines.go
+++ b/vendor/github.com/joyent/gosdc/cloudapi/machines.go
@@ -29,7 +29,7 @@ type Machine struct {
Image string // The image id the machine was provisioned with
PrimaryIP string // The primary (public) IP address for the machine
Networks []string // The network IDs for the machine
- FirewallEnabled bool // whether or not the firewall is enabled
+ FirewallEnabled bool `json:"firewall_enabled"` // whether or not the firewall is enabled
}
// Equals compares two machines. Ignores state and timestamps.
diff --git a/vendor/github.com/joyent/gosdc/cloudapi/services.go b/vendor/github.com/joyent/gosdc/cloudapi/services.go
new file mode 100644
index 000000000000..634b69ff3f14
--- /dev/null
+++ b/vendor/github.com/joyent/gosdc/cloudapi/services.go
@@ -0,0 +1,20 @@
+package cloudapi
+
+import (
+ "github.com/joyent/gocommon/client"
+ "github.com/joyent/gocommon/errors"
+)
+
+// ListServices returns the list of services available in the datacenter.
+func (c *Client) ListServices() (map[string]string, error) {
+ var resp map[string]string
+ req := request{
+ method: client.GET,
+ url: apiServices,
+ resp: &resp,
+ }
+ if _, err := c.sendRequest(req); err != nil {
+ return nil, errors.Newf(err, "failed to get list of services")
+ }
+ return resp, nil
+}
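
A hedged sketch of the new ListServices call; constructing the *cloudapi.Client (endpoint, credentials, request signer) is out of scope here, so the helper below simply takes an already-configured client.

```go
package main

import (
	"fmt"

	"github.com/joyent/gosdc/cloudapi"
)

// printServices prints the datacenter services exposed by the new
// ListServices call, keyed by service name.
func printServices(c *cloudapi.Client) error {
	services, err := c.ListServices()
	if err != nil {
		return err
	}
	for name, endpoint := range services {
		fmt.Printf("%s -> %s\n", name, endpoint)
	}
	return nil
}

func main() {}
```
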
diff --git a/vendor/github.com/joyent/gosign/COPYING b/vendor/github.com/joyent/gosign/COPYING
deleted file mode 100644
index 94a9ed024d38..000000000000
--- a/vendor/github.com/joyent/gosign/COPYING
+++ /dev/null
@@ -1,674 +0,0 @@
- GNU GENERAL PUBLIC LICENSE
- Version 3, 29 June 2007
-
- Copyright (C) 2007 Free Software Foundation, Inc.
- Everyone is permitted to copy and distribute verbatim copies
- of this license document, but changing it is not allowed.
-
- Preamble
-
- The GNU General Public License is a free, copyleft license for
-software and other kinds of works.
-
- The licenses for most software and other practical works are designed
-to take away your freedom to share and change the works. By contrast,
-the GNU General Public License is intended to guarantee your freedom to
-share and change all versions of a program--to make sure it remains free
-software for all its users. We, the Free Software Foundation, use the
-GNU General Public License for most of our software; it applies also to
-any other work released this way by its authors. You can apply it to
-your programs, too.
-
- When we speak of free software, we are referring to freedom, not
-price. Our General Public Licenses are designed to make sure that you
-have the freedom to distribute copies of free software (and charge for
-them if you wish), that you receive source code or can get it if you
-want it, that you can change the software or use pieces of it in new
-free programs, and that you know you can do these things.
-
- To protect your rights, we need to prevent others from denying you
-these rights or asking you to surrender the rights. Therefore, you have
-certain responsibilities if you distribute copies of the software, or if
-you modify it: responsibilities to respect the freedom of others.
-
- For example, if you distribute copies of such a program, whether
-gratis or for a fee, you must pass on to the recipients the same
-freedoms that you received. You must make sure that they, too, receive
-or can get the source code. And you must show them these terms so they
-know their rights.
-
- Developers that use the GNU GPL protect your rights with two steps:
-(1) assert copyright on the software, and (2) offer you this License
-giving you legal permission to copy, distribute and/or modify it.
-
- For the developers' and authors' protection, the GPL clearly explains
-that there is no warranty for this free software. For both users' and
-authors' sake, the GPL requires that modified versions be marked as
-changed, so that their problems will not be attributed erroneously to
-authors of previous versions.
-
- Some devices are designed to deny users access to install or run
-modified versions of the software inside them, although the manufacturer
-can do so. This is fundamentally incompatible with the aim of
-protecting users' freedom to change the software. The systematic
-pattern of such abuse occurs in the area of products for individuals to
-use, which is precisely where it is most unacceptable. Therefore, we
-have designed this version of the GPL to prohibit the practice for those
-products. If such problems arise substantially in other domains, we
-stand ready to extend this provision to those domains in future versions
-of the GPL, as needed to protect the freedom of users.
-
- Finally, every program is threatened constantly by software patents.
-States should not allow patents to restrict development and use of
-software on general-purpose computers, but in those that do, we wish to
-avoid the special danger that patents applied to a free program could
-make it effectively proprietary. To prevent this, the GPL assures that
-patents cannot be used to render the program non-free.
-
- The precise terms and conditions for copying, distribution and
-modification follow.
-
- TERMS AND CONDITIONS
-
- 0. Definitions.
-
- "This License" refers to version 3 of the GNU General Public License.
-
- "Copyright" also means copyright-like laws that apply to other kinds of
-works, such as semiconductor masks.
-
- "The Program" refers to any copyrightable work licensed under this
-License. Each licensee is addressed as "you". "Licensees" and
-"recipients" may be individuals or organizations.
-
- To "modify" a work means to copy from or adapt all or part of the work
-in a fashion requiring copyright permission, other than the making of an
-exact copy. The resulting work is called a "modified version" of the
-earlier work or a work "based on" the earlier work.
-
- A "covered work" means either the unmodified Program or a work based
-on the Program.
-
- To "propagate" a work means to do anything with it that, without
-permission, would make you directly or secondarily liable for
-infringement under applicable copyright law, except executing it on a
-computer or modifying a private copy. Propagation includes copying,
-distribution (with or without modification), making available to the
-public, and in some countries other activities as well.
-
- To "convey" a work means any kind of propagation that enables other
-parties to make or receive copies. Mere interaction with a user through
-a computer network, with no transfer of a copy, is not conveying.
-
- An interactive user interface displays "Appropriate Legal Notices"
-to the extent that it includes a convenient and prominently visible
-feature that (1) displays an appropriate copyright notice, and (2)
-tells the user that there is no warranty for the work (except to the
-extent that warranties are provided), that licensees may convey the
-work under this License, and how to view a copy of this License. If
-the interface presents a list of user commands or options, such as a
-menu, a prominent item in the list meets this criterion.
-
- 1. Source Code.
-
- The "source code" for a work means the preferred form of the work
-for making modifications to it. "Object code" means any non-source
-form of a work.
-
- A "Standard Interface" means an interface that either is an official
-standard defined by a recognized standards body, or, in the case of
-interfaces specified for a particular programming language, one that
-is widely used among developers working in that language.
-
- The "System Libraries" of an executable work include anything, other
-than the work as a whole, that (a) is included in the normal form of
-packaging a Major Component, but which is not part of that Major
-Component, and (b) serves only to enable use of the work with that
-Major Component, or to implement a Standard Interface for which an
-implementation is available to the public in source code form. A
-"Major Component", in this context, means a major essential component
-(kernel, window system, and so on) of the specific operating system
-(if any) on which the executable work runs, or a compiler used to
-produce the work, or an object code interpreter used to run it.
-
- The "Corresponding Source" for a work in object code form means all
-the source code needed to generate, install, and (for an executable
-work) run the object code and to modify the work, including scripts to
-control those activities. However, it does not include the work's
-System Libraries, or general-purpose tools or generally available free
-programs which are used unmodified in performing those activities but
-which are not part of the work. For example, Corresponding Source
-includes interface definition files associated with source files for
-the work, and the source code for shared libraries and dynamically
-linked subprograms that the work is specifically designed to require,
-such as by intimate data communication or control flow between those
-subprograms and other parts of the work.
-
- The Corresponding Source need not include anything that users
-can regenerate automatically from other parts of the Corresponding
-Source.
-
- The Corresponding Source for a work in source code form is that
-same work.
-
- 2. Basic Permissions.
-
- All rights granted under this License are granted for the term of
-copyright on the Program, and are irrevocable provided the stated
-conditions are met. This License explicitly affirms your unlimited
-permission to run the unmodified Program. The output from running a
-covered work is covered by this License only if the output, given its
-content, constitutes a covered work. This License acknowledges your
-rights of fair use or other equivalent, as provided by copyright law.
-
- You may make, run and propagate covered works that you do not
-convey, without conditions so long as your license otherwise remains
-in force. You may convey covered works to others for the sole purpose
-of having them make modifications exclusively for you, or provide you
-with facilities for running those works, provided that you comply with
-the terms of this License in conveying all material for which you do
-not control copyright. Those thus making or running the covered works
-for you must do so exclusively on your behalf, under your direction
-and control, on terms that prohibit them from making any copies of
-your copyrighted material outside their relationship with you.
-
- Conveying under any other circumstances is permitted solely under
-the conditions stated below. Sublicensing is not allowed; section 10
-makes it unnecessary.
-
- 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
-
- No covered work shall be deemed part of an effective technological
-measure under any applicable law fulfilling obligations under article
-11 of the WIPO copyright treaty adopted on 20 December 1996, or
-similar laws prohibiting or restricting circumvention of such
-measures.
-
- When you convey a covered work, you waive any legal power to forbid
-circumvention of technological measures to the extent such circumvention
-is effected by exercising rights under this License with respect to
-the covered work, and you disclaim any intention to limit operation or
-modification of the work as a means of enforcing, against the work's
-users, your or third parties' legal rights to forbid circumvention of
-technological measures.
-
- 4. Conveying Verbatim Copies.
-
- You may convey verbatim copies of the Program's source code as you
-receive it, in any medium, provided that you conspicuously and
-appropriately publish on each copy an appropriate copyright notice;
-keep intact all notices stating that this License and any
-non-permissive terms added in accord with section 7 apply to the code;
-keep intact all notices of the absence of any warranty; and give all
-recipients a copy of this License along with the Program.
-
- You may charge any price or no price for each copy that you convey,
-and you may offer support or warranty protection for a fee.
-
- 5. Conveying Modified Source Versions.
-
- You may convey a work based on the Program, or the modifications to
-produce it from the Program, in the form of source code under the
-terms of section 4, provided that you also meet all of these conditions:
-
- a) The work must carry prominent notices stating that you modified
- it, and giving a relevant date.
-
- b) The work must carry prominent notices stating that it is
- released under this License and any conditions added under section
- 7. This requirement modifies the requirement in section 4 to
- "keep intact all notices".
-
- c) You must license the entire work, as a whole, under this
- License to anyone who comes into possession of a copy. This
- License will therefore apply, along with any applicable section 7
- additional terms, to the whole of the work, and all its parts,
- regardless of how they are packaged. This License gives no
- permission to license the work in any other way, but it does not
- invalidate such permission if you have separately received it.
-
- d) If the work has interactive user interfaces, each must display
- Appropriate Legal Notices; however, if the Program has interactive
- interfaces that do not display Appropriate Legal Notices, your
- work need not make them do so.
-
- A compilation of a covered work with other separate and independent
-works, which are not by their nature extensions of the covered work,
-and which are not combined with it such as to form a larger program,
-in or on a volume of a storage or distribution medium, is called an
-"aggregate" if the compilation and its resulting copyright are not
-used to limit the access or legal rights of the compilation's users
-beyond what the individual works permit. Inclusion of a covered work
-in an aggregate does not cause this License to apply to the other
-parts of the aggregate.
-
- 6. Conveying Non-Source Forms.
-
- You may convey a covered work in object code form under the terms
-of sections 4 and 5, provided that you also convey the
-machine-readable Corresponding Source under the terms of this License,
-in one of these ways:
-
- a) Convey the object code in, or embodied in, a physical product
- (including a physical distribution medium), accompanied by the
- Corresponding Source fixed on a durable physical medium
- customarily used for software interchange.
-
- b) Convey the object code in, or embodied in, a physical product
- (including a physical distribution medium), accompanied by a
- written offer, valid for at least three years and valid for as
- long as you offer spare parts or customer support for that product
- model, to give anyone who possesses the object code either (1) a
- copy of the Corresponding Source for all the software in the
- product that is covered by this License, on a durable physical
- medium customarily used for software interchange, for a price no
- more than your reasonable cost of physically performing this
- conveying of source, or (2) access to copy the
- Corresponding Source from a network server at no charge.
-
- c) Convey individual copies of the object code with a copy of the
- written offer to provide the Corresponding Source. This
- alternative is allowed only occasionally and noncommercially, and
- only if you received the object code with such an offer, in accord
- with subsection 6b.
-
- d) Convey the object code by offering access from a designated
- place (gratis or for a charge), and offer equivalent access to the
- Corresponding Source in the same way through the same place at no
- further charge. You need not require recipients to copy the
- Corresponding Source along with the object code. If the place to
- copy the object code is a network server, the Corresponding Source
- may be on a different server (operated by you or a third party)
- that supports equivalent copying facilities, provided you maintain
- clear directions next to the object code saying where to find the
- Corresponding Source. Regardless of what server hosts the
- Corresponding Source, you remain obligated to ensure that it is
- available for as long as needed to satisfy these requirements.
-
- e) Convey the object code using peer-to-peer transmission, provided
- you inform other peers where the object code and Corresponding
- Source of the work are being offered to the general public at no
- charge under subsection 6d.
-
- A separable portion of the object code, whose source code is excluded
-from the Corresponding Source as a System Library, need not be
-included in conveying the object code work.
-
- A "User Product" is either (1) a "consumer product", which means any
-tangible personal property which is normally used for personal, family,
-or household purposes, or (2) anything designed or sold for incorporation
-into a dwelling. In determining whether a product is a consumer product,
-doubtful cases shall be resolved in favor of coverage. For a particular
-product received by a particular user, "normally used" refers to a
-typical or common use of that class of product, regardless of the status
-of the particular user or of the way in which the particular user
-actually uses, or expects or is expected to use, the product. A product
-is a consumer product regardless of whether the product has substantial
-commercial, industrial or non-consumer uses, unless such uses represent
-the only significant mode of use of the product.
-
- "Installation Information" for a User Product means any methods,
-procedures, authorization keys, or other information required to install
-and execute modified versions of a covered work in that User Product from
-a modified version of its Corresponding Source. The information must
-suffice to ensure that the continued functioning of the modified object
-code is in no case prevented or interfered with solely because
-modification has been made.
-
- If you convey an object code work under this section in, or with, or
-specifically for use in, a User Product, and the conveying occurs as
-part of a transaction in which the right of possession and use of the
-User Product is transferred to the recipient in perpetuity or for a
-fixed term (regardless of how the transaction is characterized), the
-Corresponding Source conveyed under this section must be accompanied
-by the Installation Information. But this requirement does not apply
-if neither you nor any third party retains the ability to install
-modified object code on the User Product (for example, the work has
-been installed in ROM).
-
- The requirement to provide Installation Information does not include a
-requirement to continue to provide support service, warranty, or updates
-for a work that has been modified or installed by the recipient, or for
-the User Product in which it has been modified or installed. Access to a
-network may be denied when the modification itself materially and
-adversely affects the operation of the network or violates the rules and
-protocols for communication across the network.
-
- Corresponding Source conveyed, and Installation Information provided,
-in accord with this section must be in a format that is publicly
-documented (and with an implementation available to the public in
-source code form), and must require no special password or key for
-unpacking, reading or copying.
-
- 7. Additional Terms.
-
- "Additional permissions" are terms that supplement the terms of this
-License by making exceptions from one or more of its conditions.
-Additional permissions that are applicable to the entire Program shall
-be treated as though they were included in this License, to the extent
-that they are valid under applicable law. If additional permissions
-apply only to part of the Program, that part may be used separately
-under those permissions, but the entire Program remains governed by
-this License without regard to the additional permissions.
-
- When you convey a copy of a covered work, you may at your option
-remove any additional permissions from that copy, or from any part of
-it. (Additional permissions may be written to require their own
-removal in certain cases when you modify the work.) You may place
-additional permissions on material, added by you to a covered work,
-for which you have or can give appropriate copyright permission.
-
- Notwithstanding any other provision of this License, for material you
-add to a covered work, you may (if authorized by the copyright holders of
-that material) supplement the terms of this License with terms:
-
- a) Disclaiming warranty or limiting liability differently from the
- terms of sections 15 and 16 of this License; or
-
- b) Requiring preservation of specified reasonable legal notices or
- author attributions in that material or in the Appropriate Legal
- Notices displayed by works containing it; or
-
- c) Prohibiting misrepresentation of the origin of that material, or
- requiring that modified versions of such material be marked in
- reasonable ways as different from the original version; or
-
- d) Limiting the use for publicity purposes of names of licensors or
- authors of the material; or
-
- e) Declining to grant rights under trademark law for use of some
- trade names, trademarks, or service marks; or
-
- f) Requiring indemnification of licensors and authors of that
- material by anyone who conveys the material (or modified versions of
- it) with contractual assumptions of liability to the recipient, for
- any liability that these contractual assumptions directly impose on
- those licensors and authors.
-
- All other non-permissive additional terms are considered "further
-restrictions" within the meaning of section 10. If the Program as you
-received it, or any part of it, contains a notice stating that it is
-governed by this License along with a term that is a further
-restriction, you may remove that term. If a license document contains
-a further restriction but permits relicensing or conveying under this
-License, you may add to a covered work material governed by the terms
-of that license document, provided that the further restriction does
-not survive such relicensing or conveying.
-
- If you add terms to a covered work in accord with this section, you
-must place, in the relevant source files, a statement of the
-additional terms that apply to those files, or a notice indicating
-where to find the applicable terms.
-
- Additional terms, permissive or non-permissive, may be stated in the
-form of a separately written license, or stated as exceptions;
-the above requirements apply either way.
-
- 8. Termination.
-
- You may not propagate or modify a covered work except as expressly
-provided under this License. Any attempt otherwise to propagate or
-modify it is void, and will automatically terminate your rights under
-this License (including any patent licenses granted under the third
-paragraph of section 11).
-
- However, if you cease all violation of this License, then your
-license from a particular copyright holder is reinstated (a)
-provisionally, unless and until the copyright holder explicitly and
-finally terminates your license, and (b) permanently, if the copyright
-holder fails to notify you of the violation by some reasonable means
-prior to 60 days after the cessation.
-
- Moreover, your license from a particular copyright holder is
-reinstated permanently if the copyright holder notifies you of the
-violation by some reasonable means, this is the first time you have
-received notice of violation of this License (for any work) from that
-copyright holder, and you cure the violation prior to 30 days after
-your receipt of the notice.
-
- Termination of your rights under this section does not terminate the
-licenses of parties who have received copies or rights from you under
-this License. If your rights have been terminated and not permanently
-reinstated, you do not qualify to receive new licenses for the same
-material under section 10.
-
- 9. Acceptance Not Required for Having Copies.
-
- You are not required to accept this License in order to receive or
-run a copy of the Program. Ancillary propagation of a covered work
-occurring solely as a consequence of using peer-to-peer transmission
-to receive a copy likewise does not require acceptance. However,
-nothing other than this License grants you permission to propagate or
-modify any covered work. These actions infringe copyright if you do
-not accept this License. Therefore, by modifying or propagating a
-covered work, you indicate your acceptance of this License to do so.
-
- 10. Automatic Licensing of Downstream Recipients.
-
- Each time you convey a covered work, the recipient automatically
-receives a license from the original licensors, to run, modify and
-propagate that work, subject to this License. You are not responsible
-for enforcing compliance by third parties with this License.
-
- An "entity transaction" is a transaction transferring control of an
-organization, or substantially all assets of one, or subdividing an
-organization, or merging organizations. If propagation of a covered
-work results from an entity transaction, each party to that
-transaction who receives a copy of the work also receives whatever
-licenses to the work the party's predecessor in interest had or could
-give under the previous paragraph, plus a right to possession of the
-Corresponding Source of the work from the predecessor in interest, if
-the predecessor has it or can get it with reasonable efforts.
-
- You may not impose any further restrictions on the exercise of the
-rights granted or affirmed under this License. For example, you may
-not impose a license fee, royalty, or other charge for exercise of
-rights granted under this License, and you may not initiate litigation
-(including a cross-claim or counterclaim in a lawsuit) alleging that
-any patent claim is infringed by making, using, selling, offering for
-sale, or importing the Program or any portion of it.
-
- 11. Patents.
-
- A "contributor" is a copyright holder who authorizes use under this
-License of the Program or a work on which the Program is based. The
-work thus licensed is called the contributor's "contributor version".
-
- A contributor's "essential patent claims" are all patent claims
-owned or controlled by the contributor, whether already acquired or
-hereafter acquired, that would be infringed by some manner, permitted
-by this License, of making, using, or selling its contributor version,
-but do not include claims that would be infringed only as a
-consequence of further modification of the contributor version. For
-purposes of this definition, "control" includes the right to grant
-patent sublicenses in a manner consistent with the requirements of
-this License.
-
- Each contributor grants you a non-exclusive, worldwide, royalty-free
-patent license under the contributor's essential patent claims, to
-make, use, sell, offer for sale, import and otherwise run, modify and
-propagate the contents of its contributor version.
-
- In the following three paragraphs, a "patent license" is any express
-agreement or commitment, however denominated, not to enforce a patent
-(such as an express permission to practice a patent or covenant not to
-sue for patent infringement). To "grant" such a patent license to a
-party means to make such an agreement or commitment not to enforce a
-patent against the party.
-
- If you convey a covered work, knowingly relying on a patent license,
-and the Corresponding Source of the work is not available for anyone
-to copy, free of charge and under the terms of this License, through a
-publicly available network server or other readily accessible means,
-then you must either (1) cause the Corresponding Source to be so
-available, or (2) arrange to deprive yourself of the benefit of the
-patent license for this particular work, or (3) arrange, in a manner
-consistent with the requirements of this License, to extend the patent
-license to downstream recipients. "Knowingly relying" means you have
-actual knowledge that, but for the patent license, your conveying the
-covered work in a country, or your recipient's use of the covered work
-in a country, would infringe one or more identifiable patents in that
-country that you have reason to believe are valid.
-
- If, pursuant to or in connection with a single transaction or
-arrangement, you convey, or propagate by procuring conveyance of, a
-covered work, and grant a patent license to some of the parties
-receiving the covered work authorizing them to use, propagate, modify
-or convey a specific copy of the covered work, then the patent license
-you grant is automatically extended to all recipients of the covered
-work and works based on it.
-
- A patent license is "discriminatory" if it does not include within
-the scope of its coverage, prohibits the exercise of, or is
-conditioned on the non-exercise of one or more of the rights that are
-specifically granted under this License. You may not convey a covered
-work if you are a party to an arrangement with a third party that is
-in the business of distributing software, under which you make payment
-to the third party based on the extent of your activity of conveying
-the work, and under which the third party grants, to any of the
-parties who would receive the covered work from you, a discriminatory
-patent license (a) in connection with copies of the covered work
-conveyed by you (or copies made from those copies), or (b) primarily
-for and in connection with specific products or compilations that
-contain the covered work, unless you entered into that arrangement,
-or that patent license was granted, prior to 28 March 2007.
-
- Nothing in this License shall be construed as excluding or limiting
-any implied license or other defenses to infringement that may
-otherwise be available to you under applicable patent law.
-
- 12. No Surrender of Others' Freedom.
-
- If conditions are imposed on you (whether by court order, agreement or
-otherwise) that contradict the conditions of this License, they do not
-excuse you from the conditions of this License. If you cannot convey a
-covered work so as to satisfy simultaneously your obligations under this
-License and any other pertinent obligations, then as a consequence you may
-not convey it at all. For example, if you agree to terms that obligate you
-to collect a royalty for further conveying from those to whom you convey
-the Program, the only way you could satisfy both those terms and this
-License would be to refrain entirely from conveying the Program.
-
- 13. Use with the GNU Affero General Public License.
-
- Notwithstanding any other provision of this License, you have
-permission to link or combine any covered work with a work licensed
-under version 3 of the GNU Affero General Public License into a single
-combined work, and to convey the resulting work. The terms of this
-License will continue to apply to the part which is the covered work,
-but the special requirements of the GNU Affero General Public License,
-section 13, concerning interaction through a network will apply to the
-combination as such.
-
- 14. Revised Versions of this License.
-
- The Free Software Foundation may publish revised and/or new versions of
-the GNU General Public License from time to time. Such new versions will
-be similar in spirit to the present version, but may differ in detail to
-address new problems or concerns.
-
- Each version is given a distinguishing version number. If the
-Program specifies that a certain numbered version of the GNU General
-Public License "or any later version" applies to it, you have the
-option of following the terms and conditions either of that numbered
-version or of any later version published by the Free Software
-Foundation. If the Program does not specify a version number of the
-GNU General Public License, you may choose any version ever published
-by the Free Software Foundation.
-
- If the Program specifies that a proxy can decide which future
-versions of the GNU General Public License can be used, that proxy's
-public statement of acceptance of a version permanently authorizes you
-to choose that version for the Program.
-
- Later license versions may give you additional or different
-permissions. However, no additional obligations are imposed on any
-author or copyright holder as a result of your choosing to follow a
-later version.
-
- 15. Disclaimer of Warranty.
-
- THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
-APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
-HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
-OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
-THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
-PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
-IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
-ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
-
- 16. Limitation of Liability.
-
- IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
-WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
-THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
-GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
-USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
-DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
-PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
-EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
-SUCH DAMAGES.
-
- 17. Interpretation of Sections 15 and 16.
-
- If the disclaimer of warranty and limitation of liability provided
-above cannot be given local legal effect according to their terms,
-reviewing courts shall apply local law that most closely approximates
-an absolute waiver of all civil liability in connection with the
-Program, unless a warranty or assumption of liability accompanies a
-copy of the Program in return for a fee.
-
- END OF TERMS AND CONDITIONS
-
- How to Apply These Terms to Your New Programs
-
- If you develop a new program, and you want it to be of the greatest
-possible use to the public, the best way to achieve this is to make it
-free software which everyone can redistribute and change under these terms.
-
- To do so, attach the following notices to the program. It is safest
-to attach them to the start of each source file to most effectively
-state the exclusion of warranty; and each file should have at least
-the "copyright" line and a pointer to where the full notice is found.
-
- <one line to give the program's name and a brief idea of what it does.>
- Copyright (C) <year>  <name of author>
-
- This program is free software: you can redistribute it and/or modify
- it under the terms of the GNU General Public License as published by
- the Free Software Foundation, either version 3 of the License, or
- (at your option) any later version.
-
- This program is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program. If not, see <http://www.gnu.org/licenses/>.
-
-Also add information on how to contact you by electronic and paper mail.
-
- If the program does terminal interaction, make it output a short
-notice like this when it starts in an interactive mode:
-
- <program>  Copyright (C) <year>  <name of author>
- This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
- This is free software, and you are welcome to redistribute it
- under certain conditions; type `show c' for details.
-
-The hypothetical commands `show w' and `show c' should show the appropriate
-parts of the General Public License. Of course, your program's commands
-might be different; for a GUI interface, you would use an "about box".
-
- You should also get your employer (if you work as a programmer) or school,
-if any, to sign a "copyright disclaimer" for the program, if necessary.
-For more information on this, and how to apply and follow the GNU GPL, see
-<http://www.gnu.org/licenses/>.
-
- The GNU General Public License does not permit incorporating your program
-into proprietary programs. If your program is a subroutine library, you
-may consider it more useful to permit linking proprietary applications with
-the library. If this is what you want to do, use the GNU Lesser General
-Public License instead of this License. But first, please read
-<http://www.gnu.org/licenses/why-not-lgpl.html>.
diff --git a/vendor/github.com/joyent/gosign/COPYING.LESSER b/vendor/github.com/joyent/gosign/COPYING.LESSER
deleted file mode 100644
index 65c5ca88a67c..000000000000
--- a/vendor/github.com/joyent/gosign/COPYING.LESSER
+++ /dev/null
@@ -1,165 +0,0 @@
- GNU LESSER GENERAL PUBLIC LICENSE
- Version 3, 29 June 2007
-
- Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
- Everyone is permitted to copy and distribute verbatim copies
- of this license document, but changing it is not allowed.
-
-
- This version of the GNU Lesser General Public License incorporates
-the terms and conditions of version 3 of the GNU General Public
-License, supplemented by the additional permissions listed below.
-
- 0. Additional Definitions.
-
- As used herein, "this License" refers to version 3 of the GNU Lesser
-General Public License, and the "GNU GPL" refers to version 3 of the GNU
-General Public License.
-
- "The Library" refers to a covered work governed by this License,
-other than an Application or a Combined Work as defined below.
-
- An "Application" is any work that makes use of an interface provided
-by the Library, but which is not otherwise based on the Library.
-Defining a subclass of a class defined by the Library is deemed a mode
-of using an interface provided by the Library.
-
- A "Combined Work" is a work produced by combining or linking an
-Application with the Library. The particular version of the Library
-with which the Combined Work was made is also called the "Linked
-Version".
-
- The "Minimal Corresponding Source" for a Combined Work means the
-Corresponding Source for the Combined Work, excluding any source code
-for portions of the Combined Work that, considered in isolation, are
-based on the Application, and not on the Linked Version.
-
- The "Corresponding Application Code" for a Combined Work means the
-object code and/or source code for the Application, including any data
-and utility programs needed for reproducing the Combined Work from the
-Application, but excluding the System Libraries of the Combined Work.
-
- 1. Exception to Section 3 of the GNU GPL.
-
- You may convey a covered work under sections 3 and 4 of this License
-without being bound by section 3 of the GNU GPL.
-
- 2. Conveying Modified Versions.
-
- If you modify a copy of the Library, and, in your modifications, a
-facility refers to a function or data to be supplied by an Application
-that uses the facility (other than as an argument passed when the
-facility is invoked), then you may convey a copy of the modified
-version:
-
- a) under this License, provided that you make a good faith effort to
- ensure that, in the event an Application does not supply the
- function or data, the facility still operates, and performs
- whatever part of its purpose remains meaningful, or
-
- b) under the GNU GPL, with none of the additional permissions of
- this License applicable to that copy.
-
- 3. Object Code Incorporating Material from Library Header Files.
-
- The object code form of an Application may incorporate material from
-a header file that is part of the Library. You may convey such object
-code under terms of your choice, provided that, if the incorporated
-material is not limited to numerical parameters, data structure
-layouts and accessors, or small macros, inline functions and templates
-(ten or fewer lines in length), you do both of the following:
-
- a) Give prominent notice with each copy of the object code that the
- Library is used in it and that the Library and its use are
- covered by this License.
-
- b) Accompany the object code with a copy of the GNU GPL and this license
- document.
-
- 4. Combined Works.
-
- You may convey a Combined Work under terms of your choice that,
-taken together, effectively do not restrict modification of the
-portions of the Library contained in the Combined Work and reverse
-engineering for debugging such modifications, if you also do each of
-the following:
-
- a) Give prominent notice with each copy of the Combined Work that
- the Library is used in it and that the Library and its use are
- covered by this License.
-
- b) Accompany the Combined Work with a copy of the GNU GPL and this license
- document.
-
- c) For a Combined Work that displays copyright notices during
- execution, include the copyright notice for the Library among
- these notices, as well as a reference directing the user to the
- copies of the GNU GPL and this license document.
-
- d) Do one of the following:
-
- 0) Convey the Minimal Corresponding Source under the terms of this
- License, and the Corresponding Application Code in a form
- suitable for, and under terms that permit, the user to
- recombine or relink the Application with a modified version of
- the Linked Version to produce a modified Combined Work, in the
- manner specified by section 6 of the GNU GPL for conveying
- Corresponding Source.
-
- 1) Use a suitable shared library mechanism for linking with the
- Library. A suitable mechanism is one that (a) uses at run time
- a copy of the Library already present on the user's computer
- system, and (b) will operate properly with a modified version
- of the Library that is interface-compatible with the Linked
- Version.
-
- e) Provide Installation Information, but only if you would otherwise
- be required to provide such information under section 6 of the
- GNU GPL, and only to the extent that such information is
- necessary to install and execute a modified version of the
- Combined Work produced by recombining or relinking the
- Application with a modified version of the Linked Version. (If
- you use option 4d0, the Installation Information must accompany
- the Minimal Corresponding Source and Corresponding Application
- Code. If you use option 4d1, you must provide the Installation
- Information in the manner specified by section 6 of the GNU GPL
- for conveying Corresponding Source.)
-
- 5. Combined Libraries.
-
- You may place library facilities that are a work based on the
-Library side by side in a single library together with other library
-facilities that are not Applications and are not covered by this
-License, and convey such a combined library under terms of your
-choice, if you do both of the following:
-
- a) Accompany the combined library with a copy of the same work based
- on the Library, uncombined with any other library facilities,
- conveyed under the terms of this License.
-
- b) Give prominent notice with the combined library that part of it
- is a work based on the Library, and explaining where to find the
- accompanying uncombined form of the same work.
-
- 6. Revised Versions of the GNU Lesser General Public License.
-
- The Free Software Foundation may publish revised and/or new versions
-of the GNU Lesser General Public License from time to time. Such new
-versions will be similar in spirit to the present version, but may
-differ in detail to address new problems or concerns.
-
- Each version is given a distinguishing version number. If the
-Library as you received it specifies that a certain numbered version
-of the GNU Lesser General Public License "or any later version"
-applies to it, you have the option of following the terms and
-conditions either of that published version or of any later version
-published by the Free Software Foundation. If the Library as you
-received it does not specify a version number of the GNU Lesser
-General Public License, you may choose any version of the GNU Lesser
-General Public License ever published by the Free Software Foundation.
-
- If the Library as you received it specifies that a proxy can decide
-whether future versions of the GNU Lesser General Public License shall
-apply, that proxy's public statement of acceptance of any version is
-permanent authorization for you to choose that version for the
-Library.
diff --git a/vendor/github.com/jtopjian/cobblerclient/.gitignore b/vendor/github.com/jtopjian/cobblerclient/.gitignore
new file mode 100644
index 000000000000..ead84456eb0e
--- /dev/null
+++ b/vendor/github.com/jtopjian/cobblerclient/.gitignore
@@ -0,0 +1 @@
+**/*.swp
diff --git a/vendor/github.com/jtopjian/cobblerclient/LICENSE b/vendor/github.com/jtopjian/cobblerclient/LICENSE
new file mode 100644
index 000000000000..8f71f43fee3f
--- /dev/null
+++ b/vendor/github.com/jtopjian/cobblerclient/LICENSE
@@ -0,0 +1,202 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "{}"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright {yyyy} {name of copyright owner}
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
diff --git a/vendor/github.com/jtopjian/cobblerclient/Makefile b/vendor/github.com/jtopjian/cobblerclient/Makefile
new file mode 100644
index 000000000000..a69df5699a5e
--- /dev/null
+++ b/vendor/github.com/jtopjian/cobblerclient/Makefile
@@ -0,0 +1,20 @@
+# Copyright 2015 Container Solutions
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+build:
+ @go build .
+
+test:
+ @go test -v .
diff --git a/vendor/github.com/jtopjian/cobblerclient/README.md b/vendor/github.com/jtopjian/cobblerclient/README.md
new file mode 100644
index 000000000000..283252d480cf
--- /dev/null
+++ b/vendor/github.com/jtopjian/cobblerclient/README.md
@@ -0,0 +1,2 @@
+# cobblerclient
+Cobbler Client written in Go
diff --git a/vendor/github.com/jtopjian/cobblerclient/cobblerclient.go b/vendor/github.com/jtopjian/cobblerclient/cobblerclient.go
new file mode 100644
index 000000000000..7383eee821ea
--- /dev/null
+++ b/vendor/github.com/jtopjian/cobblerclient/cobblerclient.go
@@ -0,0 +1,198 @@
+/*
+Copyright 2015 Container Solutions
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package cobblerclient
+
+import (
+ "bytes"
+ "fmt"
+ "io"
+ "io/ioutil"
+ "net/http"
+ "reflect"
+ "strings"
+
+ "github.com/kolo/xmlrpc"
+ "github.com/mitchellh/mapstructure"
+)
+
+const bodyTypeXML = "text/xml"
+
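+// HTTPClient is the minimal HTTP interface the client needs for posting
+// XML-RPC requests; *http.Client satisfies it.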
+type HTTPClient interface {
+ Post(string, string, io.Reader) (*http.Response, error)
+}
+
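+// Client is a Cobbler XML-RPC API client. Token is populated by a
+// successful Login and is passed along with subsequent calls.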
+type Client struct {
+ httpClient HTTPClient
+ config ClientConfig
+ Token string
+}
+
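+// ClientConfig holds the Cobbler endpoint URL and the credentials used to
+// authenticate against it.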
+type ClientConfig struct {
+ Url string
+ Username string
+ Password string
+}
+
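+// NewClient returns a Client that uses the given HTTP client and configuration.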
+func NewClient(httpClient HTTPClient, c ClientConfig) Client {
+ return Client{
+ httpClient: httpClient,
+ config: c,
+ }
+}
+
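+// Call encodes an XML-RPC method call, POSTs it to the configured Cobbler
+// endpoint, and returns the decoded result.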
+func (c *Client) Call(method string, args ...interface{}) (interface{}, error) {
+ var result interface{}
+
+ reqBody, err := xmlrpc.EncodeMethodCall(method, args...)
+ if err != nil {
+ return nil, err
+ }
+
+ r := fmt.Sprintf("%s\n", string(reqBody))
+ res, err := c.httpClient.Post(c.config.Url, bodyTypeXML, bytes.NewReader([]byte(r)))
+ if err != nil {
+ return nil, err
+ }
+
+ defer res.Body.Close()
+ body, err := ioutil.ReadAll(res.Body)
+ if err != nil {
+ return nil, err
+ }
+
+ resp := xmlrpc.NewResponse(body)
+ if err := resp.Unmarshal(&result); err != nil {
+ return nil, err
+ }
+
+ if resp.Failed() {
+ return nil, resp.Err()
+ }
+
+ return result, nil
+}
+
+// Login performs a login request to Cobbler using the credentials provided
+// in the ClientConfig and stores the returned token on the client.
+func (c *Client) Login() (bool, error) {
+ result, err := c.Call("login", c.config.Username, c.config.Password)
+ if err != nil {
+ return false, err
+ }
+
+ c.Token = result.(string)
+ return true, nil
+}
+
+// Sync triggers a Cobbler sync.
+// It returns an error if anything went wrong.
+func (c *Client) Sync() error {
+ _, err := c.Call("sync", c.Token)
+ return err
+}
+
+// GetItemHandle gets the internal ID of a Cobbler item.
+func (c *Client) GetItemHandle(what, name string) (string, error) {
+ result, err := c.Call("get_item_handle", what, name, c.Token)
+ if err != nil {
+ return "", err
+ } else {
+ return result.(string), err
+ }
+}
+
+// cobblerDataHacks is a hook for the mapstructure decoder. It's only used by
+// decodeCobblerItem and should never be invoked directly.
+// It's used to smooth out issues with converting fields and types from Cobbler.
+func cobblerDataHacks(f, t reflect.Kind, data interface{}) (interface{}, error) {
+ dataVal := reflect.ValueOf(data)
+
+ // Cobbler uses ~ internally to mean None/nil
+ if dataVal.String() == "~" {
+ return map[string]interface{}{}, nil
+ }
+
+ if f == reflect.Int64 && t == reflect.Bool {
+ if dataVal.Int() > 0 {
+ return true, nil
+ } else {
+ return false, nil
+ }
+ }
+ return data, nil
+}
+
+// decodeCobblerItem is a custom mapstructure decoder to handle Cobbler's quirks.
+func decodeCobblerItem(raw interface{}, result interface{}) (interface{}, error) {
+ var metadata mapstructure.Metadata
+ decoder, err := mapstructure.NewDecoder(&mapstructure.DecoderConfig{
+ Metadata: &metadata,
+ Result: result,
+ WeaklyTypedInput: true,
+ DecodeHook: cobblerDataHacks,
+ })
+
+ if err != nil {
+ return nil, err
+ }
+
+ if err := decoder.Decode(raw); err != nil {
+ return nil, err
+ }
+
+ return result, nil
+}
+
+// updateCobblerFields updates all fields in a Cobbler Item structure.
+func (c *Client) updateCobblerFields(what string, item reflect.Value, id string) error {
+ method := fmt.Sprintf("modify_%s", what)
+
+ typeOfT := item.Type()
+ for i := 0; i < item.NumField(); i++ {
+ v := item.Field(i)
+ tag := typeOfT.Field(i).Tag
+ field := tag.Get("mapstructure")
+ cobblerTag := tag.Get("cobbler")
+
+ if cobblerTag == "noupdate" {
+ continue
+ }
+
+ if field == "" {
+ continue
+ }
+
+ var value interface{}
+ switch v.Type().String() {
+ case "string", "bool", "int64", "int":
+ value = v.Interface()
+ case "[]string":
+ value = strings.Join(v.Interface().([]string), " ")
+ }
+
+ //fmt.Printf("%s, %s, %s\n", id, field, value)
+ if result, err := c.Call(method, id, field, value, c.Token); err != nil {
+ return err
+ } else {
+ if result.(bool) == false && value != false {
+ return fmt.Errorf("Error updating %s to %s.", field, value)
+ }
+ }
+ }
+
+ return nil
+}
diff --git a/vendor/github.com/jtopjian/cobblerclient/distro.go b/vendor/github.com/jtopjian/cobblerclient/distro.go
new file mode 100644
index 000000000000..12ad5698999f
--- /dev/null
+++ b/vendor/github.com/jtopjian/cobblerclient/distro.go
@@ -0,0 +1,145 @@
+/*
+Copyright 2015 Container Solutions
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package cobblerclient
+
+import (
+ "fmt"
+ "reflect"
+)
+
+// Distro is a created distro.
+type Distro struct {
+ // These are internal fields and cannot be modified.
+ Ctime float64 `mapstructure:"ctime" cobbler:"noupdate"` // TODO: convert to time
+ Depth int `mapstructure:"depth" cobbler:"noupdate"`
+ ID string `mapstructure:"uid" cobbler:"noupdate"`
+ Mtime float64 `mapstructure:"mtime" cobbler:"noupdate"` // TODO: convert to time
+ TreeBuildTime string `mapstructure:"tree_build_time" cobbler:"noupdate"`
+
+ Arch string `mapstructure:"arch"`
+ Breed string `mapstructure:"breed"`
+ BootFiles string `mapstructure:"boot_files"`
+ Comment string `mapstructure:"comment"`
+ FetchableFiles string `mapstructure:"fetchable_files"`
+ Kernel string `mapstructure:"kernel"`
+ KernelOptions string `mapstructure:"kernel_options"`
+ KernelOptionsPost string `mapstructure:"kernel_options_post"`
+ Initrd string `mapstructure:"initrd"`
+ MGMTClasses []string `mapstructure:"mgmt_classes"`
+ Name string `mapstructure:"name"`
+ OSVersion string `mapstructure:"os_version"`
+ Owners []string `mapstructure:"owners"`
+ RedHatManagementKey string `mapstructure:"redhat_management_key"`
+ RedHatManagementServer string `mapstructure:"redhat_management_server"`
+ TemplateFiles string `mapstructure:"template_files"`
+
+ //KSMeta string `mapstructure:"ks_meta"`
+ //SourceRepos []string `mapstructure:"source_repos"`
+}
+
+// GetDistros returns all distros in Cobbler.
+func (c *Client) GetDistros() ([]*Distro, error) {
+ var distros []*Distro
+
+ result, err := c.Call("get_distros", "-1", c.Token)
+ if err != nil {
+ return nil, err
+ }
+
+ for _, d := range result.([]interface{}) {
+ var distro Distro
+ decodedResult, err := decodeCobblerItem(d, &distro)
+ if err != nil {
+ return nil, err
+ }
+
+ distros = append(distros, decodedResult.(*Distro))
+ }
+
+ return distros, nil
+}
+
+// GetDistro returns a single distro obtained by its name.
+func (c *Client) GetDistro(name string) (*Distro, error) {
+ var distro Distro
+
+ result, err := c.Call("get_distro", name, c.Token)
+ if result == "~" {
+ return nil, fmt.Errorf("Distro %s not found.", name)
+ }
+
+ if err != nil {
+ return nil, err
+ }
+
+ decodeResult, err := decodeCobblerItem(result, &distro)
+ if err != nil {
+ return nil, err
+ }
+
+ return decodeResult.(*Distro), nil
+}
+
+// CreateDistro creates a distro.
+func (c *Client) CreateDistro(distro Distro) (*Distro, error) {
+ // Make sure a distro with the same name does not already exist
+ if _, err := c.GetDistro(distro.Name); err == nil {
+ return nil, fmt.Errorf("A Distro with the name %s already exists.", distro.Name)
+ }
+
+ result, err := c.Call("new_distro", c.Token)
+ if err != nil {
+ return nil, err
+ }
+ newId := result.(string)
+
+ item := reflect.ValueOf(&distro).Elem()
+ if err := c.updateCobblerFields("distro", item, newId); err != nil {
+ return nil, err
+ }
+
+ if _, err := c.Call("save_distro", newId, c.Token); err != nil {
+ return nil, err
+ }
+
+ return c.GetDistro(distro.Name)
+}
+
+// UpdateDistro updates a single distro.
+func (c *Client) UpdateDistro(distro *Distro) error {
+ item := reflect.ValueOf(distro).Elem()
+ id, err := c.GetItemHandle("distro", distro.Name)
+ if err != nil {
+ return err
+ }
+
+ if err := c.updateCobblerFields("distro", item, id); err != nil {
+ return err
+ }
+
+ if _, err := c.Call("save_distro", id, c.Token); err != nil {
+ return err
+ }
+
+ return nil
+}
+
+// DeleteDistro deletes a single distro by its name.
+func (c *Client) DeleteDistro(name string) error {
+ _, err := c.Call("remove_distro", name, c.Token)
+ return err
+}
diff --git a/vendor/github.com/jtopjian/cobblerclient/kickstart_file.go b/vendor/github.com/jtopjian/cobblerclient/kickstart_file.go
new file mode 100644
index 000000000000..368777b82a9f
--- /dev/null
+++ b/vendor/github.com/jtopjian/cobblerclient/kickstart_file.go
@@ -0,0 +1,56 @@
+/*
+Copyright 2015 Container Solutions
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package cobblerclient
+
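+// KickstartFile represents a kickstart template stored in Cobbler.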
+type KickstartFile struct {
+ Name string // The name the kickstart file will be saved in Cobbler
+ Body string // The contents of the kickstart file
+}
+
+// CreateKickstartFile creates a kickstart file in Cobbler.
+// It takes a KickstartFile struct as input and returns an error if the
+// creation failed.
+func (c *Client) CreateKickstartFile(f KickstartFile) error {
+ _, err := c.Call("read_or_write_kickstart_template", f.Name, false, f.Body, c.Token)
+ return err
+}
+
+// Gets a kickstart file in Cobbler.
+// Takes a kickstart file name as input.
+// Returns *KickstartFile and error if read failed.
+func (c *Client) GetKickstartFile(ksName string) (*KickstartFile, error) {
+ result, err := c.Call("read_or_write_kickstart_template", ksName, true, "", c.Token)
+
+ if err != nil {
+ return nil, err
+ }
+
+ ks := KickstartFile{
+ Name: ksName,
+ Body: result.(string),
+ }
+
+ return &ks, nil
+}
+
+// Deletes a kickstart file in Cobbler.
+// Takes a kickstart file name as input.
+// Returns error if delete failed.
+func (c *Client) DeleteKickstartFile(name string) error {
+ _, err := c.Call("read_or_write_kickstart_template", name, false, -1, c.Token)
+ return err
+}
diff --git a/vendor/github.com/jtopjian/cobblerclient/methods.txt b/vendor/github.com/jtopjian/cobblerclient/methods.txt
new file mode 100644
index 000000000000..8d3dcac27d11
--- /dev/null
+++ b/vendor/github.com/jtopjian/cobblerclient/methods.txt
@@ -0,0 +1,1990 @@
+This is a short document that lists the xmlrpc calls needed.
+
+-- login:
+curl -XPOST -d '
+
+ login
+
+
+
+ cobbler
+
+
+
+
+ cobbler
+
+
+
+
+' http://localhost:25151/
+
+
+
+
+
+
+ ZyWe2dxicTWGsDpbo+WT3z1WZ2trEgfoaw==
+
+
+
+
+
+-- create new system:
+curl -XPOST -d '
+
+ new_system
+
+
+
+ ZyWe2dxicTWGsDpbo+WT3z1WZ2trEgfoaw==
+
+
+
+
+' http://localhost:25151/
+
+
+
+
+
+
+ ___NEW___system::qxK4MZaxtZzTaxZW98nNZWbgkmyTXtU14Q==
+
+
+
+
+
+-- set system name:
+curl -XPOST -d '
+
+ modify_system
+
+
+
+ ___NEW___system::qxK4MZaxtZzTaxZW98nNZWbgkmyTXtU14Q==
+
+
+
+
+ name
+
+
+
+
+ systemname01
+
+
+
+
+ ZyWe2dxicTWGsDpbo+WT3z1WZ2trEgfoaw==
+
+
+
+
+' http://localhost:25151/
+
+
+
+
+
+ 1
+
+
+
+
+-- set system profile:
+curl -XPOST -d '
+
+ modify_system
+
+
+
+ ___NEW___system::qxK4MZaxtZzTaxZW98nNZWbgkmyTXtU14Q==
+
+
+
+
+ profile
+
+
+
+
+ centos7-x86_64
+
+
+
+
+ ZyWe2dxicTWGsDpbo+WT3z1WZ2trEgfoaw==
+
+
+
+
+' http://localhost:25151/
+
+-- configure network interface (still need to figure out name of gateway property):
+curl -XPOST -d '
+
+ modify_system
+
+
+
+ ___NEW___system::ridhgThzSpL5wwjdWSonGsM8nv/HtSfNQQ==
+
+
+
+
+ modify_interface
+
+
+
+
+
+
+ macaddress-eth0
+
+ 01:02:03:04:05:06
+
+
+
+ ipaddress-eth0
+
+ 10.20.30.40
+
+
+
+ dnsname-eth0
+
+ systemname01.domain.tld
+
+
+
+ subnetmask-eth0
+
+ 255.255.255.0
+
+
+
+ if-gateway-eth0
+
+ 10.20.30.1
+
+
+
+
+
+
+
+ ZyWe2dxicTWGsDpbo+WT3z1WZ2trEgfoaw==
+
+
+
+
+' http://localhost:25151/
+
+
+
+
+
+
+ 1
+
+
+
+
+
+-- save the system:
+curl -XPOST -d '
+
+ save_system
+
+
+
+ ___NEW___system::qxK4MZaxtZzTaxZW98nNZWbgkmyTXtU14Q==
+
+
+
+
+ ZyWe2dxicTWGsDpbo+WT3z1WZ2trEgfoaw==
+
+
+
+
+' http://localhost:25151/
+
+
+
+
+
+ 1
+
+
+
+
+result will be 0 if save failed.
+
+
+--- sync
+curl -XPOST -d '
+
+ sync
+
+
+ zYli1fFyS3Hi6qlSPMorEWfiUhBfAuOsrA==
+
+
+
+' http://localhost:25151/
+
+
+
+
+
+ 1
+
+
+
+
+-- create a kickstart file:
+curl -XPOST -d '
+
+ read_or_write_kickstart_template
+
+
+
+ /var/lib/cobbler/kickstarts/foo.ks
+
+
+
+
+ 0
+
+
+
+
+ # test content for the kickstart file
+
+
+
+
+ zYli1fFyS3Hi6qlSPMorEWfiUhBfAuOsrA==
+
+
+
+' http://localhost:25151/
+
+-- get a kickstart file:
+curl -XPOST -d '
+
+ read_or_write_kickstart_template
+
+
+
+ /var/lib/cobbler/kickstarts/foo.ks
+
+
+
+
+ 1
+
+
+
+
+
+
+
+
+
+ securetoken99
+
+
+
+' http://localhost:25151/
+
+-- create a snippet:
+curl -XPOST -d '
+
+ read_or_write_snippet
+
+
+
+ /var/lib/cobbler/snippets/foo
+
+
+
+
+ 0
+
+
+
+
+ # test content for the snippet file
+
+
+
+
+ zYli1fFyS3Hi6qlSPMorEWfiUhBfAuOsrA==
+
+
+
+' http://localhost:25151/
+
+-- get a snippet:
+curl -XPOST -d '
+
+ read_or_write_snippet
+
+
+
+ /var/lib/cobbler/snippets/some-snippet
+
+
+
+
+ 1
+
+
+
+
+
+
+
+
+
+ securetoken99
+
+
+
+' http://localhost:25151/
+
+-- get distros:
+curl -XPOST -d '
+ get_distros
+
+
+
+ -1
+
+
+
+
+ 4f8464lmE6s+6YmQcOr+ACJvdyd5kIzV0w==
+
+
+
+
+' http://localhost:25151/
+
+
+
+
+
+
+
+
+
+
+
+ comment
+
+
+
+
+
+ kernel
+
+ /var/www/cobbler/ks_mirror/Ubuntu-14.04/install/netboot/ubuntu-installer/amd64/linux
+
+
+
+ uid
+
+ MTQ1MTg1NjMzNC4yMTk0MTg3My43Mzg0NTM
+
+
+
+ kernel_options_post
+
+
+
+
+
+
+ redhat_management_key
+
+ <<inherit>>
+
+
+
+ kernel_options
+
+
+
+
+
+
+ redhat_management_server
+
+ <<inherit>>
+
+
+
+ initrd
+
+ /var/www/cobbler/ks_mirror/Ubuntu-14.04/install/netboot/ubuntu-installer/amd64/initrd.gz
+
+
+
+ mtime
+
+ 1451856336.460383
+
+
+
+ template_files
+
+
+
+
+
+
+ ks_meta
+
+
+
+ tree
+
+ http://@@http_server@@/cblr/links/Ubuntu-14.04-x86_64
+
+
+
+
+
+
+ boot_files
+
+
+
+
+
+
+ breed
+
+ ubuntu
+
+
+
+ os_version
+
+ trusty
+
+
+
+ mgmt_classes
+
+
+
+
+
+
+
+
+ fetchable_files
+
+
+
+
+
+
+ tree_build_time
+
+ 0
+
+
+
+ arch
+
+ x86_64
+
+
+
+ name
+
+ Ubuntu-14.04-x86_64
+
+
+
+ owners
+
+
+
+
+ admin
+
+
+
+
+
+
+ ctime
+
+ 1451856334.214615
+
+
+
+ source_repos
+
+
+
+
+
+
+
+
+ depth
+
+ 0
+
+
+
+
+
+
+
+
+
+
+
+-- get distro:
+curl -XPOST -d '
+ get_distro
+
+
+
+ Ubuntu-14.04-x86_64
+
+
+
+
+ 4f8464lmE6s+6YmQcOr+ACJvdyd5kIzV0w==
+
+
+
+
+' http://localhost:25151/
+
+
+
+
+
+
+
+
+ comment
+
+
+
+
+
+ kernel
+
+ /var/www/cobbler/ks_mirror/Ubuntu-14.04/install/netboot/ubuntu-installer/amd64/linux
+
+
+
+ uid
+
+ MTQ1MTg1NjMzNC4yMTk0MTg3My43Mzg0NTM
+
+
+
+ kernel_options_post
+
+
+
+
+
+ redhat_management_key
+
+ <<inherit>>
+
+
+
+ kernel_options
+
+
+
+
+
+ redhat_management_server
+
+ <<inherit>>
+
+
+
+ initrd
+
+ /var/www/cobbler/ks_mirror/Ubuntu-14.04/install/netboot/ubuntu-installer/amd64/initrd.gz
+
+
+
+ mtime
+
+ 1451856336.460383
+
+
+
+ template_files
+
+
+
+
+
+ ks_meta
+
+ tree=http://@@http_server@@/cblr/links/Ubuntu-14.04-x86_64
+
+
+
+ boot_files
+
+
+
+
+
+ breed
+
+ ubuntu
+
+
+
+ os_version
+
+ trusty
+
+
+
+ mgmt_classes
+
+
+
+
+
+
+
+ fetchable_files
+
+
+
+
+
+ tree_build_time
+
+ 0
+
+
+
+ arch
+
+ x86_64
+
+
+
+ name
+
+ Ubuntu-14.04-x86_64
+
+
+
+ owners
+
+
+
+
+ admin
+
+
+
+
+
+
+ ctime
+
+ 1451856334.214615
+
+
+
+ source_repos
+
+
+
+
+
+
+
+ depth
+
+ 0
+
+
+
+
+
+
+
+
+-- get profiles:
+curl -XPOST -d '
+ get_profiles
+
+
+
+ -1
+
+
+
+
+ 4f8464lmE6s+6YmQcOr+ACJvdyd5kIzV0w==
+
+
+
+
+' http://localhost:25151/
+
+
+
+
+
+
+
+
+
+
+
+
+ comment
+
+
+
+
+
+ kickstart
+
+ /var/lib/cobbler/kickstarts/sample.seed
+
+
+
+ name_servers_search
+
+
+
+
+
+
+
+
+ ks_meta
+
+
+
+
+
+
+ kernel_options_post
+
+
+
+
+
+
+ repos
+
+
+
+
+
+
+
+
+ redhat_management_key
+
+ <<inherit>>
+
+
+
+ virt_path
+
+
+
+
+
+ kernel_options
+
+
+
+
+
+
+ virt_file_size
+
+ 5
+
+
+
+ mtime
+
+ 1451856335.087784
+
+
+
+ enable_gpxe
+
+ 0
+
+
+
+ template_files
+
+
+
+
+
+
+ uid
+
+ MTQ1MTg1NjMzNS4wOTk4MTczMTYuMTI1ODc
+
+
+
+ virt_auto_boot
+
+ 1
+
+
+
+ virt_cpus
+
+ 1
+
+
+
+ mgmt_parameters
+
+ <<inherit>>
+
+
+
+ boot_files
+
+
+
+
+
+
+ mgmt_classes
+
+
+
+
+
+
+
+
+ distro
+
+ Ubuntu-14.04-x86_64
+
+
+
+ virt_disk_driver
+
+ raw
+
+
+
+ virt_bridge
+
+ virbr0
+
+
+
+ parent
+
+
+
+
+
+ virt_type
+
+ kvm
+
+
+
+ proxy
+
+
+
+
+
+ enable_menu
+
+ 1
+
+
+
+ fetchable_files
+
+
+
+
+
+
+ name_servers
+
+
+
+
+
+
+
+
+ name
+
+ Ubuntu-14.04-x86_64
+
+
+
+ dhcp_tag
+
+ default
+
+
+
+ owners
+
+
+
+
+ admin
+
+
+
+
+
+
+ ctime
+
+ 1451856335.087784
+
+
+
+ virt_ram
+
+ 512
+
+
+
+ server
+
+ <<inherit>>
+
+
+
+ redhat_management_server
+
+ <<inherit>>
+
+
+
+ depth
+
+ 1
+
+
+
+ template_remote_kickstarts
+
+ 0
+
+
+
+
+
+
+
+
+
+
+
+-- get profile:
+curl -XPOST -d '
+ get_profile
+
+
+
+ Ubuntu-14.04-x86_64
+
+
+
+
+ 4f8464lmE6s+6YmQcOr+ACJvdyd5kIzV0w==
+
+
+
+
+' http://localhost:25151/
+
+
+
+
+
+
+
+
+
+ comment
+
+
+
+
+
+ kickstart
+
+ /var/lib/cobbler/kickstarts/sample.seed
+
+
+
+ name_servers_search
+
+
+
+
+
+
+
+
+ ks_meta
+
+
+
+
+
+ kernel_options_post
+
+
+
+
+
+ repos
+
+
+
+
+
+ redhat_management_key
+
+ <<inherit>>
+
+
+
+ virt_path
+
+
+
+
+
+ kernel_options
+
+
+
+
+
+ virt_file_size
+
+ 5
+
+
+
+ mtime
+
+ 1451856335.087784
+
+
+
+ enable_gpxe
+
+ 0
+
+
+
+ template_files
+
+
+
+
+
+ uid
+
+ MTQ1MTg1NjMzNS4wOTk4MTczMTYuMTI1ODc
+
+
+
+ virt_auto_boot
+
+ 1
+
+
+
+ virt_cpus
+
+ 1
+
+
+
+ mgmt_parameters
+
+ <<inherit>>
+
+
+
+ boot_files
+
+
+
+
+
+ mgmt_classes
+
+
+
+
+
+
+
+
+ distro
+
+ Ubuntu-14.04-x86_64
+
+
+
+ virt_disk_driver
+
+ raw
+
+
+
+ virt_bridge
+
+ virbr0
+
+
+
+ parent
+
+
+
+
+
+ virt_type
+
+ kvm
+
+
+
+ proxy
+
+
+
+
+
+ enable_menu
+
+ 1
+
+
+
+ fetchable_files
+
+
+
+
+
+ name_servers
+
+
+
+
+
+
+
+
+ name
+
+ Ubuntu-14.04-x86_64
+
+
+
+ dhcp_tag
+
+ default
+
+
+
+ owners
+
+
+
+
+ admin
+
+
+
+
+
+
+ ctime
+
+ 1451856335.087784
+
+
+
+ virt_ram
+
+ 512
+
+
+
+ server
+
+ <<inherit>>
+
+
+
+ redhat_management_server
+
+ <<inherit>>
+
+
+
+ depth
+
+ 1
+
+
+
+ template_remote_kickstarts
+
+ 0
+
+
+
+
+
+
+
+
+-- get systems:
+curl -XPOST -d '<?xml version="1.0"?>
+<methodCall>
+  <methodName>get_systems</methodName>
+  <params>
+    <param><value><string>-1</string></value></param>
+    <param><value><string>4f8464lmE6s+6YmQcOr+ACJvdyd5kIzV0w==</string></value></param>
+  </params>
+</methodCall>' http://localhost:25151/
+
+response (one system):
+  comment:
+  profile: Ubuntu-14.04-x86_64
+  kickstart: <<inherit>>
+  name_servers_search:
+  ks_meta:
+  kernel_options_post:
+  image:
+  redhat_management_key: <<inherit>>
+  power_type: ether_wake
+  power_user:
+  kernel_options:
+  virt_file_size: <<inherit>>
+  mtime: 1451856819.487791
+  enable_gpxe: 0
+  template_files:
+  gateway:
+  uid: MTQ1MTg1NjgxOS40OTE4ODYyODQuNzAxMTY
+  virt_auto_boot: <<inherit>>
+  monit_enabled: 0
+  virt_cpus: <<inherit>>
+  mgmt_parameters: <<inherit>>
+  boot_files:
+  hostname:
+  repos_enabled: 0
+  name: test
+  virt_type: <<inherit>>
+  mgmt_classes:
+  power_pass:
+  netboot_enabled: 1
+  ipv6_autoconfiguration: 0
+  status: production
+  virt_path: <<inherit>>
+  interfaces:
+  power_address:
+  proxy: <<inherit>>
+  fetchable_files:
+  name_servers:
+  ldap_enabled: 0
+  ipv6_default_device:
+  virt_pxe_boot: 0
+  virt_disk_driver: <<inherit>>
+  owners: admin
+  ctime: 1451856819.487791
+  virt_ram: <<inherit>>
+  power_id:
+  server: <<inherit>>
+  redhat_management_server: <<inherit>>
+  depth: 2
+  ldap_type: authconfig
+  template_remote_kickstarts: 0
+
+-- get system:
+curl -XPOST -d '<?xml version="1.0"?>
+<methodCall>
+  <methodName>get_system</methodName>
+  <params>
+    <param><value><string>test</string></value></param>
+    <param><value><string>4f8464lmE6s+6YmQcOr+ACJvdyd5kIzV0w==</string></value></param>
+  </params>
+</methodCall>' http://localhost:25151/
+
+response:
+  comment:
+  profile: Ubuntu-14.04-x86_64
+  kickstart: <<inherit>>
+  name_servers_search:
+  ks_meta:
+  kernel_options_post:
+  image:
+  redhat_management_key: <<inherit>>
+  power_type: ether_wake
+  power_user:
+  kernel_options:
+  virt_file_size: <<inherit>>
+  mtime: 1451856819.487791
+  enable_gpxe: 0
+  template_files:
+  gateway:
+  uid: MTQ1MTg1NjgxOS40OTE4ODYyODQuNzAxMTY
+  virt_auto_boot: <<inherit>>
+  monit_enabled: 0
+  virt_cpus: <<inherit>>
+  mgmt_parameters: <<inherit>>
+  boot_files:
+  hostname:
+  repos_enabled: 0
+  name: test
+  virt_type: <<inherit>>
+  mgmt_classes:
+  power_pass:
+  netboot_enabled: 1
+  ipv6_autoconfiguration: 0
+  status: production
+  virt_path: <<inherit>>
+  interfaces:
+  power_address:
+  proxy: <<inherit>>
+  fetchable_files:
+  name_servers:
+  ldap_enabled: 0
+  ipv6_default_device:
+  virt_pxe_boot: 0
+  virt_disk_driver: <<inherit>>
+  owners: admin
+  ctime: 1451856819.487791
+  virt_ram: <<inherit>>
+  power_id:
+  server: <<inherit>>
+  redhat_management_server: <<inherit>>
+  depth: 2
+  ldap_type: authconfig
+  template_remote_kickstarts: 0
+
diff --git a/vendor/github.com/jtopjian/cobblerclient/profile.go b/vendor/github.com/jtopjian/cobblerclient/profile.go
new file mode 100644
index 000000000000..5aa971e5207d
--- /dev/null
+++ b/vendor/github.com/jtopjian/cobblerclient/profile.go
@@ -0,0 +1,230 @@
+/*
+Copyright 2015 Container Solutions
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package cobblerclient
+
+import (
+ "fmt"
+ "reflect"
+)
+
+// Profile is a created profile.
+type Profile struct {
+ // These are internal fields and cannot be modified.
+ Ctime float64 `mapstructure:"ctime" cobbler:"noupdate"` // TODO: convert to time
+ Depth int `mapstructure:"depth" cobbler:"noupdate"`
+ ID string `mapstructure:"uid" cobbler:"noupdate"`
+ Mtime float64 `mapstructure:"mtime" cobbler:"noupdate"` // TODO: convert to time
+ ReposEnabled bool `mapstructure:"repos_enabled" cobbler:"noupdate"`
+
+ BootFiles string `mapstructure:"boot_files"`
+ Comment string `mapstructure:"comment"`
+ Distro string `mapstructure:"distro"`
+ EnableGPXE bool `mapstructure:"enable_gpxe"`
+ EnableMenu bool `mapstructure:"enable_menu"`
+ FetchableFiles string `mapstructure:"fetchable_files"`
+ KernelOptions string `mapstructure:"kernel_options"`
+ KernelOptionsPost string `mapstructure:"kernel_options_post"`
+ Kickstart string `mapstructure:"kickstart"`
+ KSMeta string `mapstructure:"ks_meta"`
+ MGMTClasses []string `mapstructure:"mgmt_classes"`
+ MGMTParameters string `mapstructure:"mgmt_parameters"`
+ Name string `mapstructure:"name"`
+ NameServersSearch []string `mapstructure:"name_servers_search"`
+ NameServers []string `mapstructure:"name_servers"`
+ Owners []string `mapstructure:"owners"`
+ Parent string `mapstructure:"parent"`
+ Proxy string `mapstructure:"proxy"`
+ RedHatManagementKey string `mapstructure:"redhat_management_key"`
+ RedHatManagementServer string `mapstructure:"redhat_management_server"`
+ Repos []string `mapstructure:"repos"`
+ Server string `mapstructure:"server"`
+ TemplateFiles string `mapstructure:"template_files"`
+ TemplateRemoteKickstarts int `mapstructure:"template_remote_kickstarts"`
+ VirtAutoBoot string `mapstructure:"virt_auto_boot"`
+ VirtBridge string `mapstructure:"virt_bridge"`
+ VirtCPUs string `mapstructure:"virt_cpus"`
+ VirtDiskDriver string `mapstructure:"virt_disk_driver"`
+ VirtFileSize string `mapstructure:"virt_file_size"`
+ VirtPath string `mapstructure:"virt_path"`
+ VirtRam string `mapstructure:"virt_ram"`
+ VirtType string `mapstructure:"virt_type"`
+
+ Client
+}
+
+// GetProfiles returns all profiles in Cobbler.
+func (c *Client) GetProfiles() ([]*Profile, error) {
+ var profiles []*Profile
+
+ result, err := c.Call("get_profiles", "-1", c.Token)
+ if err != nil {
+ return nil, err
+ }
+
+ for _, p := range result.([]interface{}) {
+ var profile Profile
+ decodedResult, err := decodeCobblerItem(p, &profile)
+ if err != nil {
+ return nil, err
+ }
+ decodedProfile := decodedResult.(*Profile)
+ decodedProfile.Client = *c
+ profiles = append(profiles, decodedProfile)
+ }
+
+ return profiles, nil
+}
+
+// GetProfile returns a single profile obtained by its name.
+func (c *Client) GetProfile(name string) (*Profile, error) {
+ var profile Profile
+
+ result, err := c.Call("get_profile", name, c.Token)
+ if err != nil {
+ return &profile, err
+ }
+
+ if result == "~" {
+ return nil, fmt.Errorf("Profile %s not found.", name)
+ }
+
+ decodeResult, err := decodeCobblerItem(result, &profile)
+ if err != nil {
+ return nil, err
+ }
+
+ s := decodeResult.(*Profile)
+ s.Client = *c
+
+ return s, nil
+}
+
+// CreateProfile creates a profile.
+// It ensures that a Distro is set and then sets other default values.
+func (c *Client) CreateProfile(profile Profile) (*Profile, error) {
+ // Check if a profile with the same name already exists
+ if _, err := c.GetProfile(profile.Name); err == nil {
+ return nil, fmt.Errorf("A profile with the name %s already exists.", profile.Name)
+ }
+
+ if profile.Distro == "" {
+ return nil, fmt.Errorf("A profile must have a distro set.")
+ }
+
+ /*
+ // Set default values. I guess these aren't taken care of by Cobbler?
+ if system.BootFiles == "" {
+ system.BootFiles = "<>"
+ }
+
+ if system.FetchableFiles == "" {
+ system.FetchableFiles = "<>"
+ }
+
+ */
+
+ if profile.MGMTParameters == "" {
+ profile.MGMTParameters = "<>"
+ }
+
+ if profile.VirtAutoBoot == "" {
+ profile.VirtAutoBoot = "0"
+ }
+
+ if profile.VirtRam == "" {
+ profile.VirtRam = "<>"
+ }
+
+ if profile.VirtType == "" {
+ profile.VirtType = "<>"
+ }
+
+ /*
+
+ if system.PowerType == "" {
+ system.PowerType = "ipmilan"
+ }
+
+ if system.Status == "" {
+ system.Status = "production"
+ }
+
+ if system.VirtCPUs == "" {
+ system.VirtCPUs = "<>"
+ }
+
+ if system.VirtDiskDriver == "" {
+ system.VirtDiskDriver = "<>"
+ }
+
+ if system.VirtFileSize == "" {
+ system.VirtFileSize = "<>"
+ }
+
+ if system.VirtPath == "" {
+ system.VirtPath = "<>"
+ }
+
+ */
+
+ // To create a profile via the Cobbler API, first call new_profile to obtain an ID
+ result, err := c.Call("new_profile", c.Token)
+ if err != nil {
+ return nil, err
+ }
+ newId := result.(string)
+
+ // Set the value of all fields
+ item := reflect.ValueOf(&profile).Elem()
+ if err := c.updateCobblerFields("profile", item, newId); err != nil {
+ return nil, err
+ }
+
+ // Save the final profile
+ if _, err := c.Call("save_profile", newId, c.Token); err != nil {
+ return nil, err
+ }
+
+ // Return a clean copy of the profile
+ return c.GetProfile(profile.Name)
+}
+
+// UpdateProfile updates a single profile.
+func (c *Client) UpdateProfile(profile *Profile) error {
+ item := reflect.ValueOf(profile).Elem()
+ id, err := c.GetItemHandle("profile", profile.Name)
+ if err != nil {
+ return err
+ }
+
+ if err := c.updateCobblerFields("profile", item, id); err != nil {
+ return err
+ }
+
+ // Save the final profile
+ if _, err := c.Call("save_profile", id, c.Token); err != nil {
+ return err
+ }
+
+ return nil
+}
+
+// DeleteProfile deletes a single profile by its name.
+func (c *Client) DeleteProfile(name string) error {
+ _, err := c.Call("remove_profile", name, c.Token)
+ return err
+}
diff --git a/vendor/github.com/jtopjian/cobblerclient/snippet.go b/vendor/github.com/jtopjian/cobblerclient/snippet.go
new file mode 100644
index 000000000000..69cf8afc3455
--- /dev/null
+++ b/vendor/github.com/jtopjian/cobblerclient/snippet.go
@@ -0,0 +1,56 @@
+/*
+Copyright 2015 Container Solutions
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package cobblerclient
+
+type Snippet struct {
+    Name string // The name under which the snippet will be saved in Cobbler
+    Body string // The contents of the snippet file
+}
+
+// CreateSnippet creates a snippet in Cobbler.
+// It takes a Snippet struct as input and returns an error if the creation fails.
+func (c *Client) CreateSnippet(s Snippet) error {
+ _, err := c.Call("read_or_write_snippet", s.Name, false, s.Body, c.Token)
+ return err
+}
+
+// GetSnippet gets a snippet file from Cobbler.
+// It takes a snippet file name as input and returns a *Snippet, or an error if the read fails.
+func (c *Client) GetSnippet(name string) (*Snippet, error) {
+ result, err := c.Call("read_or_write_snippet", name, true, "", c.Token)
+
+ if err != nil {
+ return nil, err
+ }
+
+ snippet := Snippet{
+ Name: name,
+ Body: result.(string),
+ }
+
+ return &snippet, nil
+}
+
+// DeleteSnippet deletes a snippet file in Cobbler.
+// It takes a snippet file name as input and returns an error if the delete fails.
+func (c *Client) DeleteSnippet(name string) error {
+ _, err := c.Call("read_or_write_snippet", name, false, -1, c.Token)
+ return err
+}
diff --git a/vendor/github.com/jtopjian/cobblerclient/system.go b/vendor/github.com/jtopjian/cobblerclient/system.go
new file mode 100644
index 000000000000..1f93d8428f19
--- /dev/null
+++ b/vendor/github.com/jtopjian/cobblerclient/system.go
@@ -0,0 +1,336 @@
+/*
+Copyright 2015 Container Solutions
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package cobblerclient
+
+import (
+ "fmt"
+ "reflect"
+
+ "github.com/fatih/structs"
+ "github.com/mitchellh/mapstructure"
+)
+
+// System is a created system.
+type System struct {
+ // These are internal fields and cannot be modified.
+ Ctime float64 `mapstructure:"ctime" cobbler:"noupdate"` // TODO: convert to time
+ Depth int `mapstructure:"depth" cobbler:"noupdate"`
+ ID string `mapstructure:"uid" cobbler:"noupdate"`
+ IPv6Autoconfiguration bool `mapstructure:"ipv6_autoconfiguration" cobbler:"noupdate"`
+ Mtime float64 `mapstructure:"mtime" cobbler:"noupdate"` // TODO: convert to time
+ ReposEnabled bool `mapstructure:"repos_enabled" cobbler:"noupdate"`
+
+ BootFiles string `mapstructure:"boot_files"`
+ Comment string `mapstructure:"comment"`
+ EnableGPXE bool `mapstructure:"enable_gpxe"`
+ FetchableFiles string `mapstructure:"fetchable_files"`
+ Gateway string `mapstructure:"gateway"`
+ Hostname string `mapstructure:"hostname"`
+ Image string `mapstructure:"image"`
+ Interfaces map[string]interface{} `mapstructure:"interfaces" cobbler:"noupdate"`
+ IPv6DefaultDevice string `mapstructure:"ipv6_default_device"`
+ KernelOptions string `mapstructure:"kernel_options"`
+ KernelOptionsPost string `mapstructure:"kernel_options_post"`
+ Kickstart string `mapstructure:"kickstart"`
+ KSMeta string `mapstructure:"ks_meta"`
+ LDAPEnabled bool `mapstructure:"ldap_enabled"`
+ LDAPType string `mapstructure:"ldap_type"`
+ MGMTClasses []string `mapstructure:"mgmt_classes"`
+ MGMTParameters string `mapstructure:"mgmt_parameters"`
+ MonitEnabled bool `mapstructure:"monit_enabled"`
+ Name string `mapstructure:"name"`
+ NameServersSearch []string `mapstructure:"name_servers_search"`
+ NameServers []string `mapstructure:"name_servers"`
+ NetbootEnabled bool `mapstructure:"netboot_enabled"`
+ Owners []string `mapstructure:"owners"`
+ PowerAddress string `mapstructure:"power_address"`
+ PowerID string `mapstructure:"power_id"`
+ PowerPass string `mapstructure:"power_pass"`
+ PowerType string `mapstructure:"power_type"`
+ PowerUser string `mapstructure:"power_user"`
+ Profile string `mapstructure:"profile"`
+ Proxy string `mapstructure:"proxy"`
+ RedHatManagementKey string `mapstructure:"redhat_management_key"`
+ RedHatManagementServer string `mapstructure:"redhat_management_server"`
+ Status string `mapstructure:"status"`
+ TemplateFiles string `mapstructure:"template_files"`
+ TemplateRemoteKickstarts int `mapstructure:"template_remote_kickstarts"`
+ VirtAutoBoot string `mapstructure:"virt_auto_boot"`
+ VirtCPUs string `mapstructure:"virt_cpus"`
+ VirtDiskDriver string `mapstructure:"virt_disk_driver"`
+ VirtFileSize string `mapstructure:"virt_file_size"`
+ VirtPath string `mapstructure:"virt_path"`
+ VirtPXEBoot int `mapstructure:"virt_pxe_boot"`
+ VirtRam string `mapstructure:"virt_ram"`
+ VirtType string `mapstructure:"virt_type"`
+
+ Client
+}
+
+// Interface is an interface in a system.
+type Interface struct {
+ CNAMEs []string `mapstructure:"cnames" structs:"cnames"`
+ DHCPTag string `mapstructure:"dhcp_tag" structs:"dhcp_tag"`
+ DNSName string `mapstructure:"dns_name" structs:"dns_name"`
+ BondingOpts string `mapstructure:"bonding_opts" structs:"bonding_opts"`
+ BridgeOpts string `mapstructure:"bridge_opts" structs:"bridge_opts"`
+ Gateway string `mapstructure:"if_gateway" structs:"if_gateway"`
+ InterfaceType string `mapstructure:"interface_type" structs:"interface_type"`
+ InterfaceMaster string `mapstructure:"interface_master" structs:"interface_master"`
+ IPAddress string `mapstructure:"ip_address" structs:"ip_address"`
+ IPv6Address string `mapstructure:"ipv6_address" structs:"ipv6_address"`
+ IPv6Secondaries []string `mapstructure:"ipv6_secondaries" structs:"ipv6_secondaries"`
+ IPv6MTU string `mapstructure:"ipv6_mtu" structs:"ipv6_mtu"`
+ IPv6StaticRoutes []string `mapstructure:"ipv6_static_routes" structs:"ipv6_static_routes"`
+    IPv6DefaultGateway string   `mapstructure:"ipv6_default_gateway" structs:"ipv6_default_gateway"`
+ MACAddress string `mapstructure:"mac_address" structs:"mac_address"`
+    Management         bool     `mapstructure:"management" structs:"management"`
+ Netmask string `mapstructure:"netmask" structs:"netmask"`
+ Static bool `mapstructure:"static" structs:"static"`
+ StaticRoutes []string `mapstructure:"static_routes" structs:"static_routes"`
+ VirtBridge string `mapstructure:"virt_bridge" structs:"virt_bridge"`
+}
+
+// Interfaces is a collection of interfaces in a system.
+type Interfaces map[string]Interface
+
+// GetSystems returns all systems in Cobbler.
+func (c *Client) GetSystems() ([]*System, error) {
+ var systems []*System
+
+ result, err := c.Call("get_systems", "", c.Token)
+ if err != nil {
+ return nil, err
+ }
+
+ for _, s := range result.([]interface{}) {
+ var system System
+ decodedResult, err := decodeCobblerItem(s, &system)
+ if err != nil {
+ return nil, err
+ }
+ decodedSystem := decodedResult.(*System)
+ decodedSystem.Client = *c
+ systems = append(systems, decodedSystem)
+ }
+
+ return systems, nil
+}
+
+// GetSystem returns a single system obtained by its name.
+func (c *Client) GetSystem(name string) (*System, error) {
+ var system System
+
+ result, err := c.Call("get_system", name, c.Token)
+ if err != nil {
+ return &system, err
+ }
+
+ if result == "~" {
+ return nil, fmt.Errorf("System %s not found.", name)
+ }
+
+ decodeResult, err := decodeCobblerItem(result, &system)
+ if err != nil {
+ return nil, err
+ }
+
+ s := decodeResult.(*System)
+ s.Client = *c
+
+ return s, nil
+}
+
+// CreateSystem creates a system.
+// It ensures that either a Profile or an Image is set and then sets other default values.
+func (c *Client) CreateSystem(system System) (*System, error) {
+ // Check if a system with the same name already exists
+ if _, err := c.GetSystem(system.Name); err == nil {
+ return nil, fmt.Errorf("A system with the name %s already exists.", system.Name)
+ }
+
+ if system.Profile == "" && system.Image == "" {
+ return nil, fmt.Errorf("A system must have a profile or image set.")
+ }
+
+ // Set default values. I guess these aren't taken care of by Cobbler?
+ if system.BootFiles == "" {
+ system.BootFiles = "<>"
+ }
+
+ if system.FetchableFiles == "" {
+ system.FetchableFiles = "<>"
+ }
+
+ if system.MGMTParameters == "" {
+ system.MGMTParameters = "<>"
+ }
+
+ if system.PowerType == "" {
+ system.PowerType = "ipmilan"
+ }
+
+ if system.Status == "" {
+ system.Status = "production"
+ }
+
+ if system.VirtAutoBoot == "" {
+ system.VirtAutoBoot = "0"
+ }
+
+ if system.VirtCPUs == "" {
+ system.VirtCPUs = "<>"
+ }
+
+ if system.VirtDiskDriver == "" {
+ system.VirtDiskDriver = "<>"
+ }
+
+ if system.VirtFileSize == "" {
+ system.VirtFileSize = "<>"
+ }
+
+ if system.VirtPath == "" {
+ system.VirtPath = "<>"
+ }
+
+ if system.VirtRam == "" {
+ system.VirtRam = "<>"
+ }
+
+ if system.VirtType == "" {
+ system.VirtType = "<>"
+ }
+
+ // To create a system via the Cobbler API, first call new_system to obtain an ID
+ result, err := c.Call("new_system", c.Token)
+ if err != nil {
+ return nil, err
+ }
+ newId := result.(string)
+
+ // Set the value of all fields
+ item := reflect.ValueOf(&system).Elem()
+ if err := c.updateCobblerFields("system", item, newId); err != nil {
+ return nil, err
+ }
+
+ // Save the final system
+ if _, err := c.Call("save_system", newId, c.Token); err != nil {
+ return nil, err
+ }
+
+ // Return a clean copy of the system
+ return c.GetSystem(system.Name)
+}
+
+// UpdateSystem updates a single system.
+func (c *Client) UpdateSystem(system *System) error {
+ item := reflect.ValueOf(system).Elem()
+ id, err := c.GetItemHandle("system", system.Name)
+ if err != nil {
+ return err
+ }
+ return c.updateCobblerFields("system", item, id)
+}
+
+// DeleteSystem deletes a single system by its name.
+func (c *Client) DeleteSystem(name string) error {
+ _, err := c.Call("remove_system", name, c.Token)
+ return err
+}
+
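+// CreateInterface creates a single interface in a System.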
+func (s *System) CreateInterface(name string, iface Interface) error {
+ i := structs.Map(iface)
+ nic := make(map[string]interface{})
+ for key, value := range i {
+ attrName := fmt.Sprintf("%s-%s", key, name)
+ nic[attrName] = value
+ }
+
+ systemId, err := s.Client.GetItemHandle("system", s.Name)
+ if err != nil {
+ return err
+ }
+
+ if _, err := s.Client.Call("modify_system", systemId, "modify_interface", nic, s.Client.Token); err != nil {
+ return err
+ }
+
+ // Save the final system
+ if _, err := s.Client.Call("save_system", systemId, s.Client.Token); err != nil {
+ return err
+ }
+
+ return nil
+}
+
+// GetInterfaces returns all interfaces in a System.
+func (s *System) GetInterfaces() (Interfaces, error) {
+ nics := make(Interfaces)
+ for nicName, nicData := range s.Interfaces {
+ var nic Interface
+ if err := mapstructure.Decode(nicData, &nic); err != nil {
+ return nil, err
+ }
+ nics[nicName] = nic
+ }
+
+ return nics, nil
+}
+
+// GetInterface returns a single interface in a System.
+func (s *System) GetInterface(name string) (Interface, error) {
+ nics := make(Interfaces)
+ var iface Interface
+ for nicName, nicData := range s.Interfaces {
+ var nic Interface
+ if err := mapstructure.Decode(nicData, &nic); err != nil {
+ return iface, err
+ }
+ nics[nicName] = nic
+ }
+
+ if iface, ok := nics[name]; ok {
+ return iface, nil
+ } else {
+ return iface, fmt.Errorf("Interface %s not found.", name)
+ }
+}
+
+// DeleteInterface deletes a single interface in a System.
+func (s *System) DeleteInterface(name string) error {
+ if _, err := s.GetInterface(name); err != nil {
+ return err
+ }
+
+ systemId, err := s.Client.GetItemHandle("system", s.Name)
+ if err != nil {
+ return err
+ }
+
+ if _, err := s.Client.Call("modify_system", systemId, "delete_interface", name, s.Client.Token); err != nil {
+ return err
+ }
+
+ // Save the final system
+ if _, err := s.Client.Call("save_system", systemId, s.Client.Token); err != nil {
+ return err
+ }
+
+ return nil
+}
diff --git a/vendor/github.com/kolo/xmlrpc/LICENSE b/vendor/github.com/kolo/xmlrpc/LICENSE
new file mode 100644
index 000000000000..8103dd139136
--- /dev/null
+++ b/vendor/github.com/kolo/xmlrpc/LICENSE
@@ -0,0 +1,19 @@
+Copyright (C) 2012 Dmitry Maksimov
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
diff --git a/vendor/github.com/kolo/xmlrpc/README.md b/vendor/github.com/kolo/xmlrpc/README.md
new file mode 100644
index 000000000000..12b7692e9077
--- /dev/null
+++ b/vendor/github.com/kolo/xmlrpc/README.md
@@ -0,0 +1,79 @@
+## Overview
+
+xmlrpc is a Go implementation of the client side of the XML-RPC protocol.
+
+## Installation
+
+To install the xmlrpc package, run `go get github.com/kolo/xmlrpc`. To use
+it in an application, add `"github.com/kolo/xmlrpc"` to the `import`
+statement.
+
+## Usage
+
+ client, _ := xmlrpc.NewClient("https://bugzilla.mozilla.org/xmlrpc.cgi", nil)
+ result := struct{
+ Version string `xmlrpc:"version"`
+ }{}
+ client.Call("Bugzilla.version", nil, &result)
+ fmt.Printf("Version: %s\n", result.Version) // Version: 4.2.7+
+
+The second argument of the NewClient function is an object that implements the
+[http.RoundTripper](http://golang.org/pkg/net/http/#RoundTripper)
+interface; it can be used to gain more control over connection options.
+By default it is initialized with http.DefaultTransport; a short sketch follows.
+
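+For example, a custom transport can be supplied to control timeouts (an
+illustrative sketch; the timeout value is arbitrary):
+
+    transport := &http.Transport{ResponseHeaderTimeout: 30 * time.Second}
+    client, _ := xmlrpc.NewClient("https://bugzilla.mozilla.org/xmlrpc.cgi", transport)
+    defer client.Close()
+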
+### Arguments encoding
+
+The xmlrpc package supports encoding native Go data types as method
+arguments.
+
+Data type encoding rules:
+* int, int8, int16, int32, int64 encoded to int;
+* float32, float64 encoded to double;
+* bool encoded to boolean;
+* string encoded to string;
+* time.Time encoded to datetime.iso8601;
+* xmlrpc.Base64 encoded to base64;
+* slice encoded to array.
+
+Structs are encoded to struct following these rules:
+* every public field becomes a struct member;
+* the field name becomes the member name;
+* if the field has an xmlrpc tag, the tag's value becomes the member name.
+
+A server method can accept several arguments. To handle this case, a slice of
+empty interfaces (`[]interface{}`) is treated specially: each value of such a
+slice is encoded as a separate argument, as in the sketch below.
+
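+As an illustrative sketch (the service method, struct, and values are made up
+for the example), a struct with `xmlrpc` tags passed together with a second
+argument might look like:
+
+    type SearchParams struct {
+        Product string `xmlrpc:"product"`
+        Limit   int    `xmlrpc:"limit"`
+    }
+
+    // The struct is encoded as an XML-RPC <struct>; each element of the
+    // []interface{} slice becomes a separate <param> of the call.
+    args := []interface{}{SearchParams{Product: "Firefox", Limit: 10}, true}
+    var result interface{}
+    client.Call("Product.search", args, &result)
+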
+### Result decoding
+
+The result of a remote function is decoded to a native Go data type; a short sketch follows the rules below.
+
+Data types decoding rules:
+* int, i4 decoded to int, int8, int16, int32, int64;
+* double decoded to float32, float64;
+* boolean decoded to bool;
+* string decoded to string;
+* array decoded to slice;
+* structs decoded following the rules described in previous section;
+* datetime.iso8601 decoded as time.Time data type;
+* base64 decoded to string.
+
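+A decoding sketch (the method name and struct fields are illustrative only):
+
+    var bug struct {
+        ID      int    `xmlrpc:"id"`
+        Summary string `xmlrpc:"summary"`
+    }
+    // Struct members of the response are matched to the tagged fields.
+    client.Call("Bug.get", 12345, &bug)
+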
+## Implementation details
+
+The xmlrpc package contains the clientCodec type, which implements the
+[rpc.ClientCodec](http://golang.org/pkg/net/rpc/#ClientCodec)
+interface of the [net/rpc](http://golang.org/pkg/net/rpc) package.
+
+The xmlrpc package works over HTTP, but some internal functions and data
+types were made public to make it easier to create another implementation
+of xmlrpc that works over a different protocol. The EncodeMethodCall
+function encodes a request body, and the Response data type decodes a
+server response; a short sketch follows.
+
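+A minimal sketch of using these exported pieces directly (transport and
+error handling omitted; `respData` stands for the raw reply bytes):
+
+    body, _ := xmlrpc.EncodeMethodCall("service.time")
+    // ... send body over your own transport and read the reply into respData ...
+    resp := xmlrpc.NewResponse(respData)
+    var t time.Time
+    resp.Unmarshal(&t)
+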
+## Contribution
+
+Feel free to fork the project, submit pull requests, ask questions.
+
+## Authors
+
+Dmitry Maksimov (dmtmax@gmail.com)
diff --git a/vendor/github.com/kolo/xmlrpc/client.go b/vendor/github.com/kolo/xmlrpc/client.go
new file mode 100644
index 000000000000..fb66b65fbc8c
--- /dev/null
+++ b/vendor/github.com/kolo/xmlrpc/client.go
@@ -0,0 +1,144 @@
+package xmlrpc
+
+import (
+ "fmt"
+ "io/ioutil"
+ "net/http"
+ "net/http/cookiejar"
+ "net/rpc"
+ "net/url"
+)
+
+type Client struct {
+ *rpc.Client
+}
+
+// clientCodec is an implementation of the rpc.ClientCodec interface.
+type clientCodec struct {
+    // url is the URL of the xmlrpc service
+ url *url.URL
+
+ // httpClient works with HTTP protocol
+ httpClient *http.Client
+
+ // cookies stores cookies received on last request
+ cookies http.CookieJar
+
+    // responses maps the sequence IDs of in-flight requests to their HTTP
+    // responses, so that rpc.Client can match replies to requests and mark them as done.
+ responses map[uint64]*http.Response
+
+ response *Response
+
+    // ready is a channel used to link a request with its response.
+ ready chan uint64
+}
+
+func (codec *clientCodec) WriteRequest(request *rpc.Request, args interface{}) (err error) {
+    httpRequest, err := NewRequest(codec.url.String(), request.ServiceMethod, args)
+    if err != nil {
+        return err
+    }
+
+    // Only add cookies once the request has been created successfully.
+    if codec.cookies != nil {
+        for _, cookie := range codec.cookies.Cookies(codec.url) {
+            httpRequest.AddCookie(cookie)
+        }
+    }
+
+ var httpResponse *http.Response
+ httpResponse, err = codec.httpClient.Do(httpRequest)
+
+ if err != nil {
+ return err
+ }
+
+ if codec.cookies != nil {
+ codec.cookies.SetCookies(codec.url, httpResponse.Cookies())
+ }
+
+ codec.responses[request.Seq] = httpResponse
+ codec.ready <- request.Seq
+
+ return nil
+}
+
+func (codec *clientCodec) ReadResponseHeader(response *rpc.Response) (err error) {
+ seq := <-codec.ready
+ httpResponse := codec.responses[seq]
+
+ if httpResponse.StatusCode < 200 || httpResponse.StatusCode >= 300 {
+ return fmt.Errorf("request error: bad status code - %d", httpResponse.StatusCode)
+ }
+
+ respData, err := ioutil.ReadAll(httpResponse.Body)
+
+ if err != nil {
+ return err
+ }
+
+ httpResponse.Body.Close()
+
+ resp := NewResponse(respData)
+
+ if resp.Failed() {
+ response.Error = fmt.Sprintf("%v", resp.Err())
+ }
+
+ codec.response = resp
+
+ response.Seq = seq
+ delete(codec.responses, seq)
+
+ return nil
+}
+
+func (codec *clientCodec) ReadResponseBody(v interface{}) (err error) {
+ if v == nil {
+ return nil
+ }
+
+ if err = codec.response.Unmarshal(v); err != nil {
+ return err
+ }
+
+ return nil
+}
+
+func (codec *clientCodec) Close() error {
+ transport := codec.httpClient.Transport.(*http.Transport)
+ transport.CloseIdleConnections()
+ return nil
+}
+
+// NewClient returns a Client that is used to send requests to the xmlrpc service.
+func NewClient(requrl string, transport http.RoundTripper) (*Client, error) {
+ if transport == nil {
+ transport = http.DefaultTransport
+ }
+
+ httpClient := &http.Client{Transport: transport}
+
+ jar, err := cookiejar.New(nil)
+
+ if err != nil {
+ return nil, err
+ }
+
+ u, err := url.Parse(requrl)
+
+ if err != nil {
+ return nil, err
+ }
+
+ codec := clientCodec{
+ url: u,
+ httpClient: httpClient,
+ ready: make(chan uint64),
+ responses: make(map[uint64]*http.Response),
+ cookies: jar,
+ }
+
+ return &Client{rpc.NewClientWithCodec(&codec)}, nil
+}
diff --git a/vendor/github.com/kolo/xmlrpc/decoder.go b/vendor/github.com/kolo/xmlrpc/decoder.go
new file mode 100644
index 000000000000..b73955978ece
--- /dev/null
+++ b/vendor/github.com/kolo/xmlrpc/decoder.go
@@ -0,0 +1,449 @@
+package xmlrpc
+
+import (
+ "bytes"
+ "encoding/xml"
+ "errors"
+ "fmt"
+ "io"
+ "reflect"
+ "strconv"
+ "strings"
+ "time"
+)
+
+const iso8601 = "20060102T15:04:05"
+
+var (
+ // CharsetReader is a function to generate reader which converts a non UTF-8
+ // charset into UTF-8.
+ CharsetReader func(string, io.Reader) (io.Reader, error)
+
+ invalidXmlError = errors.New("invalid xml")
+)
+
+type TypeMismatchError string
+
+func (e TypeMismatchError) Error() string { return string(e) }
+
+type decoder struct {
+ *xml.Decoder
+}
+
+func unmarshal(data []byte, v interface{}) (err error) {
+ dec := &decoder{xml.NewDecoder(bytes.NewBuffer(data))}
+
+ if CharsetReader != nil {
+ dec.CharsetReader = CharsetReader
+ }
+
+ var tok xml.Token
+ for {
+ if tok, err = dec.Token(); err != nil {
+ return err
+ }
+
+ if t, ok := tok.(xml.StartElement); ok {
+ if t.Name.Local == "value" {
+ val := reflect.ValueOf(v)
+ if val.Kind() != reflect.Ptr {
+ return errors.New("non-pointer value passed to unmarshal")
+ }
+ if err = dec.decodeValue(val.Elem()); err != nil {
+ return err
+ }
+
+ break
+ }
+ }
+ }
+
+ // read until end of document
+ err = dec.Skip()
+ if err != nil && err != io.EOF {
+ return err
+ }
+
+ return nil
+}
+
+func (dec *decoder) decodeValue(val reflect.Value) error {
+ var tok xml.Token
+ var err error
+
+ if val.Kind() == reflect.Ptr {
+ if val.IsNil() {
+ val.Set(reflect.New(val.Type().Elem()))
+ }
+ val = val.Elem()
+ }
+
+ var typeName string
+ for {
+ if tok, err = dec.Token(); err != nil {
+ return err
+ }
+
+ if t, ok := tok.(xml.EndElement); ok {
+ if t.Name.Local == "value" {
+ return nil
+ } else {
+ return invalidXmlError
+ }
+ }
+
+ if t, ok := tok.(xml.StartElement); ok {
+ typeName = t.Name.Local
+ break
+ }
+
+ // Treat value data without type identifier as string
+ if t, ok := tok.(xml.CharData); ok {
+ if value := strings.TrimSpace(string(t)); value != "" {
+ if err = checkType(val, reflect.String); err != nil {
+ return err
+ }
+
+ val.SetString(value)
+ return nil
+ }
+ }
+ }
+
+ switch typeName {
+ case "struct":
+ ismap := false
+ pmap := val
+ valType := val.Type()
+
+ if err = checkType(val, reflect.Struct); err != nil {
+ if checkType(val, reflect.Map) == nil {
+ if valType.Key().Kind() != reflect.String {
+ return fmt.Errorf("only maps with string key type can be unmarshalled")
+ }
+ ismap = true
+ } else if checkType(val, reflect.Interface) == nil && val.IsNil() {
+ var dummy map[string]interface{}
+ pmap = reflect.New(reflect.TypeOf(dummy)).Elem()
+ valType = pmap.Type()
+ ismap = true
+ } else {
+ return err
+ }
+ }
+
+ var fields map[string]reflect.Value
+
+ if !ismap {
+ fields = make(map[string]reflect.Value)
+
+ for i := 0; i < valType.NumField(); i++ {
+ field := valType.Field(i)
+ fieldVal := val.FieldByName(field.Name)
+
+ if fieldVal.CanSet() {
+ if fn := field.Tag.Get("xmlrpc"); fn != "" {
+ fields[fn] = fieldVal
+ } else {
+ fields[field.Name] = fieldVal
+ }
+ }
+ }
+ } else {
+ // Create initial empty map
+ pmap.Set(reflect.MakeMap(valType))
+ }
+
+ // Process struct members.
+ StructLoop:
+ for {
+ if tok, err = dec.Token(); err != nil {
+ return err
+ }
+ switch t := tok.(type) {
+ case xml.StartElement:
+ if t.Name.Local != "member" {
+ return invalidXmlError
+ }
+
+ tagName, fieldName, err := dec.readTag()
+ if err != nil {
+ return err
+ }
+ if tagName != "name" {
+ return invalidXmlError
+ }
+
+ var fv reflect.Value
+ ok := true
+
+ if !ismap {
+ fv, ok = fields[string(fieldName)]
+ } else {
+ fv = reflect.New(valType.Elem())
+ }
+
+ if ok {
+ for {
+ if tok, err = dec.Token(); err != nil {
+ return err
+ }
+ if t, ok := tok.(xml.StartElement); ok && t.Name.Local == "value" {
+ if err = dec.decodeValue(fv); err != nil {
+ return err
+ }
+
+                        // </value>
+ if err = dec.Skip(); err != nil {
+ return err
+ }
+
+ break
+ }
+ }
+ }
+
+                // </member>
+ if err = dec.Skip(); err != nil {
+ return err
+ }
+
+ if ismap {
+ pmap.SetMapIndex(reflect.ValueOf(string(fieldName)), reflect.Indirect(fv))
+ val.Set(pmap)
+ }
+ case xml.EndElement:
+ break StructLoop
+ }
+ }
+ case "array":
+ pslice := val
+ if checkType(val, reflect.Interface) == nil && val.IsNil() {
+ var dummy []interface{}
+ pslice = reflect.New(reflect.TypeOf(dummy)).Elem()
+ } else if err = checkType(val, reflect.Slice); err != nil {
+ return err
+ }
+
+ ArrayLoop:
+ for {
+ if tok, err = dec.Token(); err != nil {
+ return err
+ }
+
+ switch t := tok.(type) {
+ case xml.StartElement:
+ if t.Name.Local != "data" {
+ return invalidXmlError
+ }
+
+ slice := reflect.MakeSlice(pslice.Type(), 0, 0)
+
+ DataLoop:
+ for {
+ if tok, err = dec.Token(); err != nil {
+ return err
+ }
+
+ switch tt := tok.(type) {
+ case xml.StartElement:
+ if tt.Name.Local != "value" {
+ return invalidXmlError
+ }
+
+ v := reflect.New(pslice.Type().Elem())
+ if err = dec.decodeValue(v); err != nil {
+ return err
+ }
+
+ slice = reflect.Append(slice, v.Elem())
+
+                        // </value>
+ if err = dec.Skip(); err != nil {
+ return err
+ }
+ case xml.EndElement:
+ pslice.Set(slice)
+ val.Set(pslice)
+ break DataLoop
+ }
+ }
+ case xml.EndElement:
+ break ArrayLoop
+ }
+ }
+ default:
+ if tok, err = dec.Token(); err != nil {
+ return err
+ }
+
+ var data []byte
+
+ switch t := tok.(type) {
+ case xml.EndElement:
+ return nil
+ case xml.CharData:
+ data = []byte(t.Copy())
+ default:
+ return invalidXmlError
+ }
+
+ switch typeName {
+ case "int", "i4", "i8":
+ if checkType(val, reflect.Interface) == nil && val.IsNil() {
+ i, err := strconv.ParseInt(string(data), 10, 64)
+ if err != nil {
+ return err
+ }
+
+ pi := reflect.New(reflect.TypeOf(i)).Elem()
+ pi.SetInt(i)
+ val.Set(pi)
+ } else if err = checkType(val, reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64); err != nil {
+ return err
+ } else {
+ i, err := strconv.ParseInt(string(data), 10, val.Type().Bits())
+ if err != nil {
+ return err
+ }
+
+ val.SetInt(i)
+ }
+ case "string", "base64":
+ str := string(data)
+ if checkType(val, reflect.Interface) == nil && val.IsNil() {
+ pstr := reflect.New(reflect.TypeOf(str)).Elem()
+ pstr.SetString(str)
+ val.Set(pstr)
+ } else if err = checkType(val, reflect.String); err != nil {
+ return err
+ } else {
+ val.SetString(str)
+ }
+ case "dateTime.iso8601":
+ t, err := time.Parse(iso8601, string(data))
+ if err != nil {
+ return err
+ }
+
+ if checkType(val, reflect.Interface) == nil && val.IsNil() {
+ ptime := reflect.New(reflect.TypeOf(t)).Elem()
+ ptime.Set(reflect.ValueOf(t))
+ val.Set(ptime)
+ } else if _, ok := val.Interface().(time.Time); !ok {
+ return TypeMismatchError(fmt.Sprintf("error: type mismatch error - can't decode %v to time", val.Kind()))
+ } else {
+ val.Set(reflect.ValueOf(t))
+ }
+ case "boolean":
+ v, err := strconv.ParseBool(string(data))
+ if err != nil {
+ return err
+ }
+
+ if checkType(val, reflect.Interface) == nil && val.IsNil() {
+ pv := reflect.New(reflect.TypeOf(v)).Elem()
+ pv.SetBool(v)
+ val.Set(pv)
+ } else if err = checkType(val, reflect.Bool); err != nil {
+ return err
+ } else {
+ val.SetBool(v)
+ }
+ case "double":
+ if checkType(val, reflect.Interface) == nil && val.IsNil() {
+ i, err := strconv.ParseFloat(string(data), 64)
+ if err != nil {
+ return err
+ }
+
+ pdouble := reflect.New(reflect.TypeOf(i)).Elem()
+ pdouble.SetFloat(i)
+ val.Set(pdouble)
+ } else if err = checkType(val, reflect.Float32, reflect.Float64); err != nil {
+ return err
+ } else {
+ i, err := strconv.ParseFloat(string(data), val.Type().Bits())
+ if err != nil {
+ return err
+ }
+
+ val.SetFloat(i)
+ }
+ default:
+ return errors.New("unsupported type")
+ }
+
+        // consume the closing tag of the scalar type element (e.g. </int>)
+ if err = dec.Skip(); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+func (dec *decoder) readTag() (string, []byte, error) {
+ var tok xml.Token
+ var err error
+
+ var name string
+ for {
+ if tok, err = dec.Token(); err != nil {
+ return "", nil, err
+ }
+
+ if t, ok := tok.(xml.StartElement); ok {
+ name = t.Name.Local
+ break
+ }
+ }
+
+ value, err := dec.readCharData()
+ if err != nil {
+ return "", nil, err
+ }
+
+ return name, value, dec.Skip()
+}
+
+func (dec *decoder) readCharData() ([]byte, error) {
+ var tok xml.Token
+ var err error
+
+ if tok, err = dec.Token(); err != nil {
+ return nil, err
+ }
+
+ if t, ok := tok.(xml.CharData); ok {
+ return []byte(t.Copy()), nil
+ } else {
+ return nil, invalidXmlError
+ }
+}
+
+func checkType(val reflect.Value, kinds ...reflect.Kind) error {
+ if len(kinds) == 0 {
+ return nil
+ }
+
+ if val.Kind() == reflect.Ptr {
+ val = val.Elem()
+ }
+
+ match := false
+
+ for _, kind := range kinds {
+ if val.Kind() == kind {
+ match = true
+ break
+ }
+ }
+
+ if !match {
+ return TypeMismatchError(fmt.Sprintf("error: type mismatch - can't unmarshal %v to %v",
+ val.Kind(), kinds[0]))
+ }
+
+ return nil
+}
diff --git a/vendor/github.com/kolo/xmlrpc/encoder.go b/vendor/github.com/kolo/xmlrpc/encoder.go
new file mode 100644
index 000000000000..bb1285ff7adc
--- /dev/null
+++ b/vendor/github.com/kolo/xmlrpc/encoder.go
@@ -0,0 +1,164 @@
+package xmlrpc
+
+import (
+ "bytes"
+ "encoding/xml"
+ "fmt"
+ "reflect"
+ "strconv"
+ "time"
+)
+
+type encodeFunc func(reflect.Value) ([]byte, error)
+
+func marshal(v interface{}) ([]byte, error) {
+ if v == nil {
+ return []byte{}, nil
+ }
+
+ val := reflect.ValueOf(v)
+ return encodeValue(val)
+}
+
+func encodeValue(val reflect.Value) ([]byte, error) {
+    var b []byte
+    var err error
+
+    if val.Kind() == reflect.Ptr || val.Kind() == reflect.Interface {
+        if val.IsNil() {
+            return []byte("<value/>"), nil
+        }
+
+        val = val.Elem()
+    }
+
+    switch val.Kind() {
+    case reflect.Struct:
+        switch val.Interface().(type) {
+        case time.Time:
+            t := val.Interface().(time.Time)
+            b = []byte(fmt.Sprintf("<dateTime.iso8601>%s</dateTime.iso8601>", t.Format(iso8601)))
+        default:
+            b, err = encodeStruct(val)
+        }
+    case reflect.Map:
+        b, err = encodeMap(val)
+    case reflect.Slice:
+        b, err = encodeSlice(val)
+    case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+        b = []byte(fmt.Sprintf("<int>%s</int>", strconv.FormatInt(val.Int(), 10)))
+    case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
+        b = []byte(fmt.Sprintf("<i4>%s</i4>", strconv.FormatUint(val.Uint(), 10)))
+    case reflect.Float32, reflect.Float64:
+        b = []byte(fmt.Sprintf("<double>%s</double>",
+            strconv.FormatFloat(val.Float(), 'g', -1, val.Type().Bits())))
+    case reflect.Bool:
+        if val.Bool() {
+            b = []byte("<boolean>1</boolean>")
+        } else {
+            b = []byte("<boolean>0</boolean>")
+        }
+    case reflect.String:
+        var buf bytes.Buffer
+
+        xml.Escape(&buf, []byte(val.String()))
+
+        if _, ok := val.Interface().(Base64); ok {
+            b = []byte(fmt.Sprintf("<base64>%s</base64>", buf.String()))
+        } else {
+            b = []byte(fmt.Sprintf("<string>%s</string>", buf.String()))
+        }
+    default:
+        return nil, fmt.Errorf("xmlrpc encode error: unsupported type")
+    }
+
+    if err != nil {
+        return nil, err
+    }
+
+    return []byte(fmt.Sprintf("<value>%s</value>", string(b))), nil
+}
+
+func encodeStruct(val reflect.Value) ([]byte, error) {
+    var b bytes.Buffer
+
+    b.WriteString("<struct>")
+
+    t := val.Type()
+    for i := 0; i < t.NumField(); i++ {
+        b.WriteString("<member>")
+        f := t.Field(i)
+
+        name := f.Tag.Get("xmlrpc")
+        if name == "" {
+            name = f.Name
+        }
+        b.WriteString(fmt.Sprintf("<name>%s</name>", name))
+
+        p, err := encodeValue(val.FieldByName(f.Name))
+        if err != nil {
+            return nil, err
+        }
+        b.Write(p)
+
+        b.WriteString("</member>")
+    }
+
+    b.WriteString("</struct>")
+
+    return b.Bytes(), nil
+}
+
+func encodeMap(val reflect.Value) ([]byte, error) {
+    var t = val.Type()
+
+    if t.Key().Kind() != reflect.String {
+        return nil, fmt.Errorf("xmlrpc encode error: only maps with string keys are supported")
+    }
+
+    var b bytes.Buffer
+
+    b.WriteString("<struct>")
+
+    keys := val.MapKeys()
+
+    for i := 0; i < val.Len(); i++ {
+        key := keys[i]
+        kval := val.MapIndex(key)
+
+        b.WriteString("<member>")
+        b.WriteString(fmt.Sprintf("<name>%s</name>", key.String()))
+
+        p, err := encodeValue(kval)
+
+        if err != nil {
+            return nil, err
+        }
+
+        b.Write(p)
+        b.WriteString("</member>")
+    }
+
+    b.WriteString("</struct>")
+
+    return b.Bytes(), nil
+}
+
+func encodeSlice(val reflect.Value) ([]byte, error) {
+    var b bytes.Buffer
+
+    b.WriteString("<array><data>")
+
+    for i := 0; i < val.Len(); i++ {
+        p, err := encodeValue(val.Index(i))
+        if err != nil {
+            return nil, err
+        }
+
+        b.Write(p)
+    }
+
+    b.WriteString("</data></array>")
+
+    return b.Bytes(), nil
+}
diff --git a/vendor/github.com/kolo/xmlrpc/request.go b/vendor/github.com/kolo/xmlrpc/request.go
new file mode 100644
index 000000000000..acb8251b2b3c
--- /dev/null
+++ b/vendor/github.com/kolo/xmlrpc/request.go
@@ -0,0 +1,57 @@
+package xmlrpc
+
+import (
+ "bytes"
+ "fmt"
+ "net/http"
+)
+
+func NewRequest(url string, method string, args interface{}) (*http.Request, error) {
+ var t []interface{}
+ var ok bool
+ if t, ok = args.([]interface{}); !ok {
+ if args != nil {
+ t = []interface{}{args}
+ }
+ }
+
+ body, err := EncodeMethodCall(method, t...)
+ if err != nil {
+ return nil, err
+ }
+
+ request, err := http.NewRequest("POST", url, bytes.NewReader(body))
+ if err != nil {
+ return nil, err
+ }
+
+ request.Header.Set("Content-Type", "text/xml")
+ request.Header.Set("Content-Length", fmt.Sprintf("%d", len(body)))
+
+ return request, nil
+}
+
+func EncodeMethodCall(method string, args ...interface{}) ([]byte, error) {
+    var b bytes.Buffer
+    b.WriteString(`<?xml version="1.0" encoding="UTF-8"?><methodCall>`)
+    b.WriteString(fmt.Sprintf("<methodName>%s</methodName>", method))
+
+    if args != nil {
+        b.WriteString("<params>")
+
+        for _, arg := range args {
+            p, err := marshal(arg)
+            if err != nil {
+                return nil, err
+            }
+
+            b.WriteString(fmt.Sprintf("<param>%s</param>", string(p)))
+        }
+
+        b.WriteString("</params>")
+    }
+
+    b.WriteString("</methodCall>")
+
+    return b.Bytes(), nil
+}
diff --git a/vendor/github.com/kolo/xmlrpc/response.go b/vendor/github.com/kolo/xmlrpc/response.go
new file mode 100644
index 000000000000..6742a1c74860
--- /dev/null
+++ b/vendor/github.com/kolo/xmlrpc/response.go
@@ -0,0 +1,52 @@
+package xmlrpc
+
+import (
+ "regexp"
+)
+
+var (
+    faultRx = regexp.MustCompile(`<fault>(\s|\S)+</fault>`)
+)
+
+type failedResponse struct {
+ Code int `xmlrpc:"faultCode"`
+ Error string `xmlrpc:"faultString"`
+}
+
+func (r *failedResponse) err() error {
+ return &xmlrpcError{
+ code: r.Code,
+ err: r.Error,
+ }
+}
+
+type Response struct {
+ data []byte
+}
+
+func NewResponse(data []byte) *Response {
+ return &Response{
+ data: data,
+ }
+}
+
+func (r *Response) Failed() bool {
+ return faultRx.Match(r.data)
+}
+
+func (r *Response) Err() error {
+ failedResp := new(failedResponse)
+ if err := unmarshal(r.data, failedResp); err != nil {
+ return err
+ }
+
+ return failedResp.err()
+}
+
+func (r *Response) Unmarshal(v interface{}) error {
+ if err := unmarshal(r.data, v); err != nil {
+ return err
+ }
+
+ return nil
+}
diff --git a/vendor/github.com/kolo/xmlrpc/test_server.rb b/vendor/github.com/kolo/xmlrpc/test_server.rb
new file mode 100644
index 000000000000..1b1ff8760f79
--- /dev/null
+++ b/vendor/github.com/kolo/xmlrpc/test_server.rb
@@ -0,0 +1,25 @@
+# encoding: utf-8
+
+require "xmlrpc/server"
+
+class Service
+ def time
+ Time.now
+ end
+
+ def upcase(s)
+ s.upcase
+ end
+
+ def sum(x, y)
+ x + y
+ end
+
+ def error
+ raise XMLRPC::FaultException.new(500, "Server error")
+ end
+end
+
+server = XMLRPC::Server.new 5001, 'localhost'
+server.add_handler "service", Service.new
+server.serve
diff --git a/vendor/github.com/kolo/xmlrpc/xmlrpc.go b/vendor/github.com/kolo/xmlrpc/xmlrpc.go
new file mode 100644
index 000000000000..8766403afeff
--- /dev/null
+++ b/vendor/github.com/kolo/xmlrpc/xmlrpc.go
@@ -0,0 +1,19 @@
+package xmlrpc
+
+import (
+ "fmt"
+)
+
+// xmlrpcError represents errors returned on xmlrpc request.
+type xmlrpcError struct {
+ code int
+ err string
+}
+
+// Error() method implements Error interface
+func (e *xmlrpcError) Error() string {
+ return fmt.Sprintf("error: \"%s\" code: %d", e.err, e.code)
+}
+
+// Base64 represents value in base64 encoding
+type Base64 string
diff --git a/vendor/github.com/maximilien/softlayer-go/LICENSE b/vendor/github.com/maximilien/softlayer-go/LICENSE
new file mode 100644
index 000000000000..5c304d1a4a7b
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/LICENSE
@@ -0,0 +1,201 @@
+Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "{}"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright {yyyy} {name of copyright owner}
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/vendor/github.com/maximilien/softlayer-go/client/http_client.go b/vendor/github.com/maximilien/softlayer-go/client/http_client.go
new file mode 100644
index 000000000000..16223f279526
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/client/http_client.go
@@ -0,0 +1,214 @@
+package client
+
+import (
+ "bytes"
+ "encoding/json"
+ "errors"
+ "fmt"
+ "io/ioutil"
+ "net/http"
+ "net/http/httputil"
+ "os"
+ "path/filepath"
+ "regexp"
+ "text/template"
+)
+
+const NON_VERBOSE = "NON_VERBOSE"
+
+type HttpClient struct {
+ HTTPClient *http.Client
+
+ username string
+ password string
+
+ useHttps bool
+
+ apiUrl string
+
+ nonVerbose bool
+
+ templatePath string
+}
+
+func NewHttpsClient(username, password, apiUrl, templatePath string) *HttpClient {
+ return NewHttpClient(username, password, apiUrl, templatePath, true)
+}
+
+func NewHttpClient(username, password, apiUrl, templatePath string, useHttps bool) *HttpClient {
+ pwd, err := os.Getwd()
+ if err != nil {
+ panic(err)
+ }
+
+ hClient := &HttpClient{
+ username: username,
+ password: password,
+
+ useHttps: useHttps,
+
+ apiUrl: apiUrl,
+
+ templatePath: filepath.Join(pwd, templatePath),
+
+ HTTPClient: http.DefaultClient,
+
+ nonVerbose: checkNonVerbose(),
+ }
+
+ return hClient
+}
+
+// Public methods
+
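+// DoRawHttpRequestWithObjectMask issues a request with the given masks joined into an objectMask query parameter.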
+func (slc *HttpClient) DoRawHttpRequestWithObjectMask(path string, masks []string, requestType string, requestBody *bytes.Buffer) ([]byte, int, error) {
+ url := fmt.Sprintf("%s://%s:%s@%s/%s", slc.scheme(), slc.username, slc.password, slc.apiUrl, path)
+
+ url += "?objectMask="
+ for i := 0; i < len(masks); i++ {
+ url += masks[i]
+ if i != len(masks)-1 {
+ url += ";"
+ }
+ }
+
+ return slc.makeHttpRequest(url, requestType, requestBody)
+}
+
+func (slc *HttpClient) DoRawHttpRequestWithObjectFilter(path string, filters string, requestType string, requestBody *bytes.Buffer) ([]byte, int, error) {
+ url := fmt.Sprintf("%s://%s:%s@%s/%s", slc.scheme(), slc.username, slc.password, slc.apiUrl, path)
+ url += "?objectFilter=" + filters
+
+ return slc.makeHttpRequest(url, requestType, requestBody)
+}
+
+func (slc *HttpClient) DoRawHttpRequestWithObjectFilterAndObjectMask(path string, masks []string, filters string, requestType string, requestBody *bytes.Buffer) ([]byte, int, error) {
+ url := fmt.Sprintf("%s://%s:%s@%s/%s", slc.scheme(), slc.username, slc.password, slc.apiUrl, path)
+
+ url += "?objectFilter=" + filters
+
+ url += "&objectMask=filteredMask["
+ for i := 0; i < len(masks); i++ {
+ url += masks[i]
+ if i != len(masks)-1 {
+ url += ";"
+ }
+ }
+ url += "]"
+
+ return slc.makeHttpRequest(url, requestType, requestBody)
+}
+
+func (slc *HttpClient) DoRawHttpRequest(path string, requestType string, requestBody *bytes.Buffer) ([]byte, int, error) {
+ url := fmt.Sprintf("%s://%s:%s@%s/%s", slc.scheme(), slc.username, slc.password, slc.apiUrl, path)
+ return slc.makeHttpRequest(url, requestType, requestBody)
+}
+
+func (slc *HttpClient) GenerateRequestBody(templateData interface{}) (*bytes.Buffer, error) {
+	cwd, err := os.Getwd()
+	if err != nil {
+		return nil, err
+	}
+
+	bodyTemplate := template.Must(template.ParseFiles(filepath.Join(cwd, slc.templatePath)))
+	body := new(bytes.Buffer)
+	if err := bodyTemplate.Execute(body, templateData); err != nil {
+		return nil, err
+	}
+
+	return body, nil
+}
+
+func (slc *HttpClient) HasErrors(body map[string]interface{}) error {
+	errString, ok := body["error"]
+	if !ok {
+		return nil
+	}
+
+	return errors.New(errString.(string))
+}
+
+func (slc *HttpClient) CheckForHttpResponseErrors(data []byte) error {
+ var decodedResponse map[string]interface{}
+ err := json.Unmarshal(data, &decodedResponse)
+ if err != nil {
+ return err
+ }
+
+ if err := slc.HasErrors(decodedResponse); err != nil {
+ return err
+ }
+
+ return nil
+}
+
+// Private methods
+
+func (slc *HttpClient) scheme() string {
+ if !slc.useHttps {
+ return "http"
+ }
+
+ return "https"
+}
+
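+// makeHttpRequest sends the request and returns the response body and status code; unless NON_VERBOSE is set, the request and response are dumped to stderr with passwords masked.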
+func (slc *HttpClient) makeHttpRequest(url string, requestType string, requestBody *bytes.Buffer) ([]byte, int, error) {
+ req, err := http.NewRequest(requestType, url, requestBody)
+ if err != nil {
+ return nil, 0, err
+ }
+
+ bs, err := httputil.DumpRequest(req, true)
+ if err != nil {
+ return nil, 0, err
+ }
+
+ if !slc.nonVerbose {
+ fmt.Fprintf(os.Stderr, "\n---\n[softlayer-go] Request:\n%s\n", hideCredentials(string(bs)))
+ }
+
+ resp, err := slc.HTTPClient.Do(req)
+ if err != nil {
+ return nil, 520, err
+ }
+
+ defer resp.Body.Close()
+
+ bs, err = httputil.DumpResponse(resp, true)
+ if err != nil {
+ return nil, resp.StatusCode, err
+ }
+
+ if !slc.nonVerbose {
+ fmt.Fprintf(os.Stderr, "[softlayer-go] Response:\n%s\n", hideCredentials(string(bs)))
+ }
+
+ responseBody, err := ioutil.ReadAll(resp.Body)
+ if err != nil {
+ return nil, resp.StatusCode, err
+ }
+
+ return responseBody, resp.StatusCode, nil
+}
+
+// Private functions
+
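+// hideCredentials masks the value of any "password" JSON field before the request or response is logged.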
+func hideCredentials(s string) string {
+ hiddenStr := "\"password\":\"******\""
+ r := regexp.MustCompile(`"password":"[^"]*"`)
+
+ return r.ReplaceAllString(s, hiddenStr)
+}
+
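+// checkNonVerbose reports whether the NON_VERBOSE environment variable is set to yes/YES/true/TRUE.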
+func checkNonVerbose() bool {
+	switch os.Getenv(NON_VERBOSE) {
+	case "yes", "YES", "true", "TRUE":
+		return true
+	}
+
+	return false
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/client/softlayer_client.go b/vendor/github.com/maximilien/softlayer-go/client/softlayer_client.go
new file mode 100644
index 000000000000..36a4f5ddf728
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/client/softlayer_client.go
@@ -0,0 +1,192 @@
+package client
+
+import (
+ "errors"
+ "fmt"
+
+ services "github.com/maximilien/softlayer-go/services"
+ softlayer "github.com/maximilien/softlayer-go/softlayer"
+)
+
+const (
+ SOFTLAYER_API_URL = "api.softlayer.com/rest/v3"
+ TEMPLATE_ROOT_PATH = "templates"
+)
+
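+// SoftLayerClient is the top-level API client; it owns the HTTP client and a registry of service implementations keyed by SoftLayer service name.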
+type SoftLayerClient struct {
+ HttpClient softlayer.HttpClient
+
+ softLayerServices map[string]softlayer.Service
+}
+
+func NewSoftLayerClient(username, apiKey string) *SoftLayerClient {
+ slc := &SoftLayerClient{
+ HttpClient: NewHttpsClient(username, apiKey, SOFTLAYER_API_URL, TEMPLATE_ROOT_PATH),
+
+ softLayerServices: map[string]softlayer.Service{},
+ }
+
+ slc.initSoftLayerServices()
+
+ return slc
+}
+
+//softlayer.Client interface methods
+
+func (slc *SoftLayerClient) GetHttpClient() softlayer.HttpClient {
+ return slc.HttpClient
+}
+
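+// GetService returns the registered service with the given SoftLayer service name, or an error if it is not supported.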
+func (slc *SoftLayerClient) GetService(serviceName string) (softlayer.Service, error) {
+ slService, ok := slc.softLayerServices[serviceName]
+ if !ok {
+ return nil, errors.New(fmt.Sprintf("softlayer-go does not support service '%s'", serviceName))
+ }
+
+ return slService, nil
+}
+
+func (slc *SoftLayerClient) GetSoftLayer_Account_Service() (softlayer.SoftLayer_Account_Service, error) {
+ slService, err := slc.GetService("SoftLayer_Account")
+ if err != nil {
+ return nil, err
+ }
+
+ return slService.(softlayer.SoftLayer_Account_Service), nil
+}
+
+func (slc *SoftLayerClient) GetSoftLayer_Virtual_Guest_Service() (softlayer.SoftLayer_Virtual_Guest_Service, error) {
+ slService, err := slc.GetService("SoftLayer_Virtual_Guest")
+ if err != nil {
+ return nil, err
+ }
+
+ return slService.(softlayer.SoftLayer_Virtual_Guest_Service), nil
+}
+
+func (slc *SoftLayerClient) GetSoftLayer_Dns_Domain_Service() (softlayer.SoftLayer_Dns_Domain_Service, error) {
+ slService, err := slc.GetService("SoftLayer_Dns_Domain")
+ if err != nil {
+ return nil, err
+ }
+
+ return slService.(softlayer.SoftLayer_Dns_Domain_Service), nil
+}
+
+func (slc *SoftLayerClient) GetSoftLayer_Virtual_Disk_Image_Service() (softlayer.SoftLayer_Virtual_Disk_Image_Service, error) {
+ slService, err := slc.GetService("SoftLayer_Virtual_Disk_Image")
+ if err != nil {
+ return nil, err
+ }
+
+ return slService.(softlayer.SoftLayer_Virtual_Disk_Image_Service), nil
+}
+
+func (slc *SoftLayerClient) GetSoftLayer_Security_Ssh_Key_Service() (softlayer.SoftLayer_Security_Ssh_Key_Service, error) {
+ slService, err := slc.GetService("SoftLayer_Security_Ssh_Key")
+ if err != nil {
+ return nil, err
+ }
+
+ return slService.(softlayer.SoftLayer_Security_Ssh_Key_Service), nil
+}
+
+func (slc *SoftLayerClient) GetSoftLayer_Product_Package_Service() (softlayer.SoftLayer_Product_Package_Service, error) {
+ slService, err := slc.GetService("SoftLayer_Product_Package")
+ if err != nil {
+ return nil, err
+ }
+
+ return slService.(softlayer.SoftLayer_Product_Package_Service), nil
+}
+
+func (slc *SoftLayerClient) GetSoftLayer_Virtual_Guest_Block_Device_Template_Group_Service() (softlayer.SoftLayer_Virtual_Guest_Block_Device_Template_Group_Service, error) {
+ slService, err := slc.GetService("SoftLayer_Virtual_Guest_Block_Device_Template_Group")
+ if err != nil {
+ return nil, err
+ }
+
+ return slService.(softlayer.SoftLayer_Virtual_Guest_Block_Device_Template_Group_Service), nil
+}
+
+func (slc *SoftLayerClient) GetSoftLayer_Network_Storage_Service() (softlayer.SoftLayer_Network_Storage_Service, error) {
+ slService, err := slc.GetService("SoftLayer_Network_Storage")
+ if err != nil {
+ return nil, err
+ }
+
+ return slService.(softlayer.SoftLayer_Network_Storage_Service), nil
+}
+
+func (slc *SoftLayerClient) GetSoftLayer_Network_Storage_Allowed_Host_Service() (softlayer.SoftLayer_Network_Storage_Allowed_Host_Service, error) {
+ slService, err := slc.GetService("SoftLayer_Network_Storage_Allowed_Host")
+ if err != nil {
+ return nil, err
+ }
+
+ return slService.(softlayer.SoftLayer_Network_Storage_Allowed_Host_Service), nil
+}
+
+func (slc *SoftLayerClient) GetSoftLayer_Product_Order_Service() (softlayer.SoftLayer_Product_Order_Service, error) {
+ slService, err := slc.GetService("SoftLayer_Product_Order")
+ if err != nil {
+ return nil, err
+ }
+
+ return slService.(softlayer.SoftLayer_Product_Order_Service), nil
+}
+
+func (slc *SoftLayerClient) GetSoftLayer_Billing_Item_Cancellation_Request_Service() (softlayer.SoftLayer_Billing_Item_Cancellation_Request_Service, error) {
+ slService, err := slc.GetService("SoftLayer_Billing_Item_Cancellation_Request")
+ if err != nil {
+ return nil, err
+ }
+
+ return slService.(softlayer.SoftLayer_Billing_Item_Cancellation_Request_Service), nil
+}
+
+func (slc *SoftLayerClient) GetSoftLayer_Billing_Item_Service() (softlayer.SoftLayer_Billing_Item_Service, error) {
+ slService, err := slc.GetService("SoftLayer_Billing_Item")
+ if err != nil {
+ return nil, err
+ }
+
+ return slService.(softlayer.SoftLayer_Billing_Item_Service), nil
+}
+
+func (slc *SoftLayerClient) GetSoftLayer_Hardware_Service() (softlayer.SoftLayer_Hardware_Service, error) {
+ slService, err := slc.GetService("SoftLayer_Hardware")
+ if err != nil {
+ return nil, err
+ }
+
+ return slService.(softlayer.SoftLayer_Hardware_Service), nil
+}
+
+func (slc *SoftLayerClient) GetSoftLayer_Dns_Domain_ResourceRecord_Service() (softlayer.SoftLayer_Dns_Domain_ResourceRecord_Service, error) {
+ slService, err := slc.GetService("SoftLayer_Dns_Domain_ResourceRecord")
+ if err != nil {
+ return nil, err
+ }
+
+ return slService.(softlayer.SoftLayer_Dns_Domain_ResourceRecord_Service), nil
+}
+
+//Private methods
+
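+// initSoftLayerServices registers every service implementation supported by this client.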
+func (slc *SoftLayerClient) initSoftLayerServices() {
+ slc.softLayerServices["SoftLayer_Account"] = services.NewSoftLayer_Account_Service(slc)
+ slc.softLayerServices["SoftLayer_Virtual_Guest"] = services.NewSoftLayer_Virtual_Guest_Service(slc)
+ slc.softLayerServices["SoftLayer_Virtual_Disk_Image"] = services.NewSoftLayer_Virtual_Disk_Image_Service(slc)
+ slc.softLayerServices["SoftLayer_Security_Ssh_Key"] = services.NewSoftLayer_Security_Ssh_Key_Service(slc)
+ slc.softLayerServices["SoftLayer_Product_Package"] = services.NewSoftLayer_Product_Package_Service(slc)
+ slc.softLayerServices["SoftLayer_Network_Storage"] = services.NewSoftLayer_Network_Storage_Service(slc)
+ slc.softLayerServices["SoftLayer_Network_Storage_Allowed_Host"] = services.NewSoftLayer_Network_Storage_Allowed_Host_Service(slc)
+ slc.softLayerServices["SoftLayer_Product_Order"] = services.NewSoftLayer_Product_Order_Service(slc)
+ slc.softLayerServices["SoftLayer_Billing_Item_Cancellation_Request"] = services.NewSoftLayer_Billing_Item_Cancellation_Request_Service(slc)
+ slc.softLayerServices["SoftLayer_Billing_Item"] = services.NewSoftLayer_Billing_Item_Service(slc)
+ slc.softLayerServices["SoftLayer_Virtual_Guest_Block_Device_Template_Group"] = services.NewSoftLayer_Virtual_Guest_Block_Device_Template_Group_Service(slc)
+ slc.softLayerServices["SoftLayer_Hardware"] = services.NewSoftLayer_Hardware_Service(slc)
+ slc.softLayerServices["SoftLayer_Dns_Domain"] = services.NewSoftLayer_Dns_Domain_Service(slc)
+ slc.softLayerServices["SoftLayer_Dns_Domain_ResourceRecord"] = services.NewSoftLayer_Dns_Domain_ResourceRecord_Service(slc)
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/common/utility.go b/vendor/github.com/maximilien/softlayer-go/common/utility.go
new file mode 100644
index 000000000000..1e260c8d874f
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/common/utility.go
@@ -0,0 +1,24 @@
+package common
+
+import (
+ "encoding/json"
+)
+
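+// ValidateJson reports whether s parses as a JSON object.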
+func ValidateJson(s string) (bool, error) {
+ var js map[string]interface{}
+
+ err := json.Unmarshal([]byte(s), &js)
+ if err != nil {
+ return false, err
+ }
+
+ return true, nil
+}
+
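+// IsHttpErrorCode treats any HTTP status code of 400 or above as an error.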
+func IsHttpErrorCode(errorCode int) bool {
+ if errorCode >= 400 {
+ return true
+ }
+
+ return false
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/data_types/softLayer_network_storage_credential.go b/vendor/github.com/maximilien/softlayer-go/data_types/softLayer_network_storage_credential.go
new file mode 100644
index 000000000000..98a4e17181e4
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/data_types/softLayer_network_storage_credential.go
@@ -0,0 +1,14 @@
+package data_types
+
+import (
+ "time"
+)
+
+type SoftLayer_Network_Storage_Credential struct {
+ AccountId string `json:"accountId"`
+ CreateDate time.Time `json:"createDate"`
+	Id                  int       `json:"id"`
+ NasCredentialTypeId int `json:"nasCredentialTypeId"`
+ Password string `json:"password"`
+ Username string `json:"username"`
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/data_types/softLayer_tag_reference.go b/vendor/github.com/maximilien/softlayer-go/data_types/softLayer_tag_reference.go
new file mode 100644
index 000000000000..4297d9c6ee5d
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/data_types/softLayer_tag_reference.go
@@ -0,0 +1,24 @@
+package data_types
+
+type SoftLayer_Tag_Reference struct {
+ EmpRecordId *int `json:"empRecordId"`
+ Id int `json:"id"`
+ ResourceTableId int `json:"resourceTableId"`
+ Tag TagReference `json:"tag"`
+ TagId int `json:"tagId"`
+ TagType TagType `json:"tagType"`
+ TagTypeId int `json:"tagTypeId"`
+ UsrRecordId int `json:"usrRecordId"`
+}
+
+type TagReference struct {
+ AccountId int `json:"accountId"`
+ Id int `json:"id"`
+ Internal int `json:"internal"`
+ Name string `json:"name"`
+}
+
+type TagType struct {
+ Description string `json:"description"`
+ KeyName string `json:"keyName"`
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/data_types/softLayer_virtual_guest_attribute.go b/vendor/github.com/maximilien/softlayer-go/data_types/softLayer_virtual_guest_attribute.go
new file mode 100644
index 000000000000..3f9494fd803b
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/data_types/softLayer_virtual_guest_attribute.go
@@ -0,0 +1,12 @@
+package data_types
+
+type SoftLayer_Virtual_Guest_Attribute_Type struct {
+ Keyname string `json:"keyname"`
+ Name string `json:"name"`
+}
+
+type SoftLayer_Virtual_Guest_Attribute struct {
+ Value string `json:"value"`
+
+ Type SoftLayer_Virtual_Guest_Attribute_Type `json:"type"`
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/data_types/softLayer_virtual_guest_block_device_template_group_init_parameters.go b/vendor/github.com/maximilien/softlayer-go/data_types/softLayer_virtual_guest_block_device_template_group_init_parameters.go
new file mode 100644
index 000000000000..649e5e1164b1
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/data_types/softLayer_virtual_guest_block_device_template_group_init_parameters.go
@@ -0,0 +1,21 @@
+package data_types
+
+type SoftLayer_Virtual_Guest_Block_Device_Template_GroupInitParameters struct {
+ Parameters SoftLayer_Virtual_Guest_Block_Device_Template_GroupInitParameter `json:"parameters"`
+}
+
+type SoftLayer_Virtual_Guest_Block_Device_Template_GroupInitParameter struct {
+ AccountId int `json:"accountId"`
+}
+
+type SoftLayer_Virtual_Guest_Block_Device_Template_Group_LocationsInitParameters struct {
+ Parameters SoftLayer_Virtual_Guest_Block_Device_Template_Group_LocationsInitParameter `json:"parameters"`
+}
+
+type SoftLayer_Virtual_Guest_Block_Device_Template_Group_LocationsInitParameter struct {
+ Locations []SoftLayer_Location `json:"locations"`
+}
+
+type SoftLayer_Virtual_Guest_Block_Device_Template_GroupInitParameters2 struct {
+ Parameters []interface{} `json:"parameters"`
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/data_types/softLayer_virtual_guest_block_device_template_group_status.go b/vendor/github.com/maximilien/softlayer-go/data_types/softLayer_virtual_guest_block_device_template_group_status.go
new file mode 100644
index 000000000000..91eaabe7dd79
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/data_types/softLayer_virtual_guest_block_device_template_group_status.go
@@ -0,0 +1,7 @@
+package data_types
+
+type SoftLayer_Virtual_Guest_Block_Device_Template_Group_Status struct {
+ Description string `json:"description"`
+ KeyName string `json:"keyName"`
+ Name string `json:"name"`
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_account_status.go b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_account_status.go
new file mode 100644
index 000000000000..a653d9740f0c
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_account_status.go
@@ -0,0 +1,6 @@
+package data_types
+
+type SoftLayer_Account_Status struct {
+ Id int `json:"id"`
+ Name string `json:"name"`
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_billing_item.go b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_billing_item.go
new file mode 100644
index 000000000000..e5b50d4da14f
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_billing_item.go
@@ -0,0 +1,30 @@
+package data_types
+
+import (
+ "time"
+)
+
+type SoftLayer_Billing_Item struct {
+ Id int `json:"id"`
+ AllowCancellationFlag int `json:"allowCancellationFlag,omitempty"`
+ CancellationDate *time.Time `json:"cancellationDate,omitempty"`
+ CategoryCode string `json:"categoryCode,omitempty"`
+ CycleStartDate *time.Time `json:"cycleStartDate,omitempty"`
+ CreateDate *time.Time `json:"createDate,omitempty"`
+ Description string `json:"description,omitempty"`
+ LaborFee string `json:"laborFee,omitempty"`
+ LaborFeeTaxRate string `json:"laborFeeTaxRate,omitempty"`
+ LastBillDate *time.Time `json:"lastBillDate,omitempty"`
+ ModifyDate *time.Time `json:"modifyDate,omitempty"`
+ NextBillDate *time.Time `json:"nextBillDate,omitempty"`
+ OneTimeFee string `json:"oneTimeFee,omitempty"`
+ OneTimeFeeTaxRate string `json:"oneTimeFeeTaxRate,omitempty"`
+ OrderItemId int `json:"orderItemId,omitempty"`
+ ParentId int `json:"parentId,omitempty"`
+ RecurringFee string `json:"recurringFee,omitempty"`
+ RecurringFeeTaxRate string `json:"recurringFeeTaxRate,omitempty"`
+ RecurringMonths int `json:"recurringMonths,omitempty"`
+ ServiceProviderId int `json:"serviceProviderId,omitempty"`
+ SetupFee string `json:"setupFee,omitempty"`
+ SetupFeeTaxRate string `json:"setupFeeTaxRate,omitempty"`
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_billing_item_cancellation_request.go b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_billing_item_cancellation_request.go
new file mode 100644
index 000000000000..9124fb012979
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_billing_item_cancellation_request.go
@@ -0,0 +1,18 @@
+package data_types
+
+type SoftLayer_Billing_Item_Cancellation_Request_Parameters struct {
+ Parameters []SoftLayer_Billing_Item_Cancellation_Request `json:"parameters"`
+}
+
+type SoftLayer_Billing_Item_Cancellation_Request struct {
+ ComplexType string `json:"complexType"`
+ AccountId int `json:"accountId"`
+ Id int `json:"id"`
+ TicketId int `json:"ticketId"`
+ Items []SoftLayer_Billing_Item_Cancellation_Request_Item `json:"items"`
+}
+
+type SoftLayer_Billing_Item_Cancellation_Request_Item struct {
+ BillingItemId int `json:"billingItemId"`
+ ImmediateCancellationFlag bool `json:"immediateCancellationFlag"`
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_container_disk_image_capture_template.go b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_container_disk_image_capture_template.go
new file mode 100644
index 000000000000..6cb86ab317c1
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_container_disk_image_capture_template.go
@@ -0,0 +1,17 @@
+package data_types
+
+type SoftLayer_Container_Disk_Image_Capture_Template struct {
+ Description string `json:"description"`
+ Name string `json:"name"`
+ Summary string `json:"summary"`
+ Volumes []SoftLayer_Container_Disk_Image_Capture_Template_Volume `json:"volumes"`
+}
+
+type SoftLayer_Container_Disk_Image_Capture_Template_Volume struct {
+ Name string `json:"name"`
+ Partitions []SoftLayer_Container_Disk_Image_Capture_Template_Volume_Partition
+}
+
+type SoftLayer_Container_Disk_Image_Capture_Template_Volume_Partition struct {
+ Name string `json:"name"`
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_container_product_order.go b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_container_product_order.go
new file mode 100644
index 000000000000..e9bca3bcaf1a
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_container_product_order.go
@@ -0,0 +1,60 @@
+package data_types
+
+type SoftLayer_Container_Product_Order_Receipt struct {
+ OrderId int `json:"orderId"`
+}
+
+type SoftLayer_Container_Product_Order_Parameters struct {
+ Parameters []SoftLayer_Container_Product_Order `json:"parameters"`
+}
+
+type SoftLayer_Container_Product_Order_Network_PerformanceStorage_Iscsi_Parameters struct {
+ Parameters []SoftLayer_Container_Product_Order_Network_PerformanceStorage_Iscsi `json:"parameters"`
+}
+
+type SoftLayer_Container_Product_Order_Virtual_Guest_Upgrade_Parameters struct {
+ Parameters []SoftLayer_Container_Product_Order_Virtual_Guest_Upgrade `json:"parameters"`
+}
+
+//http://sldn.softlayer.com/reference/datatypes/SoftLayer_Container_Product_Order
+type SoftLayer_Container_Product_Order struct {
+ ComplexType string `json:"complexType"`
+ Location string `json:"location,omitempty"`
+ PackageId int `json:"packageId"`
+ Prices []SoftLayer_Product_Item_Price `json:"prices,omitempty"`
+ VirtualGuests []VirtualGuest `json:"virtualGuests,omitempty"`
+ Properties []Property `json:"properties,omitempty"`
+ Quantity int `json:"quantity,omitempty"`
+}
+
+//http://sldn.softlayer.com/reference/datatypes/SoftLayer_Container_Product_Order_Network_PerformanceStorage_Iscsi
+type SoftLayer_Container_Product_Order_Network_PerformanceStorage_Iscsi struct {
+ ComplexType string `json:"complexType"`
+ Location string `json:"location,omitempty"`
+ PackageId int `json:"packageId"`
+ Prices []SoftLayer_Product_Item_Price `json:"prices,omitempty"`
+ VirtualGuests []VirtualGuest `json:"virtualGuests,omitempty"`
+ Properties []Property `json:"properties,omitempty"`
+ Quantity int `json:"quantity,omitempty"`
+ OsFormatType SoftLayer_Network_Storage_Iscsi_OS_Type `json:"osFormatType,omitempty"`
+}
+
+//http://sldn.softlayer.com/reference/datatypes/SoftLayer_Container_Product_Order_Virtual_Guest_Upgrade
+type SoftLayer_Container_Product_Order_Virtual_Guest_Upgrade struct {
+ ComplexType string `json:"complexType"`
+ Location string `json:"location,omitempty"`
+ PackageId int `json:"packageId"`
+ Prices []SoftLayer_Product_Item_Price `json:"prices,omitempty"`
+ VirtualGuests []VirtualGuest `json:"virtualGuests,omitempty"`
+ Properties []Property `json:"properties,omitempty"`
+ Quantity int `json:"quantity,omitempty"`
+}
+
+type Property struct {
+ Name string `json:"name"`
+ Value string `json:"value"`
+}
+
+type VirtualGuest struct {
+ Id int `json:"id"`
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_container_virtual_guest_block_device_template_configuration.go b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_container_virtual_guest_block_device_template_configuration.go
new file mode 100644
index 000000000000..003a396d24f4
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_container_virtual_guest_block_device_template_configuration.go
@@ -0,0 +1,12 @@
+package data_types
+
+type SoftLayer_Container_Virtual_Guest_Block_Device_Template_Configuration_Parameters struct {
+ Parameters []SoftLayer_Container_Virtual_Guest_Block_Device_Template_Configuration `json:"parameters"`
+}
+
+type SoftLayer_Container_Virtual_Guest_Block_Device_Template_Configuration struct {
+ Name string `json:"name"`
+ Note string `json:"note"`
+ OperatingSystemReferenceCode string `json:"operatingSystemReferenceCode"`
+ Uri string `json:"uri"`
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_dns_domain.go b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_dns_domain.go
new file mode 100644
index 000000000000..a84e5939bbcb
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_dns_domain.go
@@ -0,0 +1,20 @@
+package data_types
+
+type SoftLayer_Dns_Domain_Template struct {
+ Name string `json:"name"`
+ ResourceRecords []SoftLayer_Dns_Domain_ResourceRecord `json:"resourceRecords"`
+}
+
+type SoftLayer_Dns_Domain_Template_Parameters struct {
+ Parameters []SoftLayer_Dns_Domain_Template `json:"parameters"`
+}
+
+type SoftLayer_Dns_Domain struct {
+ Id int `json:"id"`
+ Name string `json:"name"`
+ Serial int `json:"serial"`
+ UpdateDate string `json:"updateDate"`
+ ManagedResourceFlag bool `json:"managedResourceFlag"`
+ ResourceRecordCount int `json:"resourceRecordCount"`
+ ResourceRecords []SoftLayer_Dns_Domain_ResourceRecord `json:"resourceRecords"`
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_dns_domain_record.go b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_dns_domain_record.go
new file mode 100644
index 000000000000..c80956127ab1
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_dns_domain_record.go
@@ -0,0 +1,49 @@
+package data_types
+
+type SoftLayer_Dns_Domain_ResourceRecord_Template_Parameters struct {
+ Parameters []SoftLayer_Dns_Domain_ResourceRecord_Template `json:"parameters"`
+}
+
+type SoftLayer_Dns_Domain_ResourceRecord_Template struct {
+ Data string `json:"data"`
+ DomainId int `json:"domainId"`
+ Expire int `json:"expire"`
+ Host string `json:"host"`
+ Id int `json:"id"`
+ Minimum int `json:"minimum"`
+ MxPriority int `json:"mxPriority"`
+ Refresh int `json:"refresh"`
+ ResponsiblePerson string `json:"responsiblePerson"`
+ Retry int `json:"retry"`
+ Ttl int `json:"ttl"`
+ Type string `json:"type"`
+ Service string `json:"service,omitempty"`
+ Protocol string `json:"protocol,omitempty"`
+ Priority int `json:"priority,omitempty"`
+ Port int `json:"port,omitempty"`
+ Weight int `json:"weight,omitempty"`
+}
+
+type SoftLayer_Dns_Domain_ResourceRecord_Parameters struct {
+ Parameters []SoftLayer_Dns_Domain_ResourceRecord `json:"parameters"`
+}
+
+type SoftLayer_Dns_Domain_ResourceRecord struct {
+ Data string `json:"data"`
+ DomainId int `json:"domainId"`
+ Expire int `json:"expire"`
+ Host string `json:"host"`
+ Id int `json:"id"`
+ Minimum int `json:"minimum"`
+ MxPriority int `json:"mxPriority"`
+ Refresh int `json:"refresh"`
+ ResponsiblePerson string `json:"responsiblePerson"`
+ Retry int `json:"retry"`
+ Ttl int `json:"ttl"`
+ Type string `json:"type"`
+ Service string `json:"service,omitempty"`
+ Protocol string `json:"protocol,omitempty"`
+ Priority int `json:"priority,omitempty"`
+ Port int `json:"port,omitempty"`
+ Weight int `json:"weight,omitempty"`
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_hardware.go b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_hardware.go
new file mode 100644
index 000000000000..0cb54dbc0a6c
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_hardware.go
@@ -0,0 +1,33 @@
+package data_types
+
+import (
+ "time"
+)
+
+type SoftLayer_Hardware_Template_Parameters struct {
+ Parameters []SoftLayer_Hardware_Template `json:"parameters"`
+}
+
+type SoftLayer_Hardware_Template struct {
+ Hostname string `json:"hostname"`
+ Domain string `json:"domain"`
+ ProcessorCoreAmount int `json:"processorCoreAmount"`
+ MemoryCapacity int `json:"memoryCapacity"`
+ HourlyBillingFlag bool `json:"hourlyBillingFlag"`
+ OperatingSystemReferenceCode string `json:"operatingSystemReferenceCode"`
+
+ Datacenter *Datacenter `json:"datacenter"`
+}
+
+type SoftLayer_Hardware struct {
+ BareMetalInstanceFlag int `json:"bareMetalInstanceFlag"`
+ Domain string `json:"domain"`
+ Hostname string `json:"hostname"`
+ Id int `json:"id"`
+ HardwareStatusId int `json:"hardwareStatusId"`
+ ProvisionDate *time.Time `json:"provisionDate"`
+ GlobalIdentifier string `json:"globalIdentifier"`
+ PrimaryIpAddress string `json:"primaryIpAddress"`
+
+ OperatingSystem *SoftLayer_Operating_System `json:"operatingSystem"`
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_image_type.go b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_image_type.go
new file mode 100644
index 000000000000..dd99e80311a1
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_image_type.go
@@ -0,0 +1,7 @@
+package data_types
+
+type SoftLayer_Image_Type struct {
+ Description string `json:"description"`
+ KeyName string `json:"keyName"`
+ Name string `json:"name"`
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_location.go b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_location.go
new file mode 100644
index 000000000000..dff22b5a4b31
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_location.go
@@ -0,0 +1,7 @@
+package data_types
+
+type SoftLayer_Location struct {
+ Id int `json:"id"`
+ LongName string `json:"longName"`
+ Name string `json:"name"`
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_network_storage.go b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_network_storage.go
new file mode 100644
index 000000000000..bb5a2300ca7a
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_network_storage.go
@@ -0,0 +1,37 @@
+package data_types
+
+import (
+ "time"
+)
+
+type SoftLayer_Network_Storage struct {
+ AccountId int `json:"accountId,omitempty"`
+ CapacityGb int `json:"capacityGb,omitempty"`
+ CreateDate time.Time `json:"createDate,omitempty"`
+ GuestId int `json:"guestId,omitempty"`
+ HardwareId int `json:"hardwareId,omitempty"`
+ HostId int `json:"hostId,omitempty"`
+ Id int `json:"id,omitempty"`
+ NasType string `json:"nasType,omitempty"`
+ Notes string `json:"notes,omitempty"`
+ Password string `json:"password,omitempty"`
+ ServiceProviderId int `json:"serviceProviderId,omitempty"`
+ UpgradableFlag bool `json:"upgradableFlag,omitempty"`
+ Username string `json:"username,omitempty"`
+ BillingItem *Billing_Item `json:"billingItem,omitempty"`
+ LunId string `json:"lunId,omitempty"`
+ ServiceResourceBackendIpAddress string `json:"serviceResourceBackendIpAddress,omitempty"`
+}
+
+type Billing_Item struct {
+ Id int `json:"id,omitempty"`
+ OrderItem *Order_Item `json:"orderItem,omitempty"`
+}
+
+type Order_Item struct {
+ Order *Order `json:"order,omitempty"`
+}
+
+type Order struct {
+ Id int `json:"id,omitempty"`
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_network_storage_allowed_host.go b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_network_storage_allowed_host.go
new file mode 100644
index 000000000000..8a4ee171b2c5
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_network_storage_allowed_host.go
@@ -0,0 +1,9 @@
+package data_types
+
+type SoftLayer_Network_Storage_Allowed_Host struct {
+ CredentialId int `json:"credentialId"`
+ Id int `json:"id"`
+ Name string `json:"name"`
+	ResourceTableId   int    `json:"resourceTableId"`
+	ResourceTableName string `json:"resourceTableName"`
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_network_storage_iscsi_os_type.go b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_network_storage_iscsi_os_type.go
new file mode 100644
index 000000000000..68356e58fb61
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_network_storage_iscsi_os_type.go
@@ -0,0 +1,12 @@
+package data_types
+
+import (
+ "time"
+)
+
+type SoftLayer_Network_Storage_Iscsi_OS_Type struct {
+ CreateDate time.Time `json:"createDate"`
+ Id int `json:"id"`
+ Name string `json:"name"`
+ KeyName string `json:"keyName"`
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_network_vlan.go b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_network_vlan.go
new file mode 100644
index 000000000000..16739c6c1296
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_network_vlan.go
@@ -0,0 +1,16 @@
+package data_types
+
+import (
+ "time"
+)
+
+type SoftLayer_Network_Vlan struct {
+ AccountId int `json:"accountId"`
+	Id              int        `json:"id"`
+ ModifyDate *time.Time `json:"modifyDate,omitempty"`
+ Name string `json:"name"`
+ NetworkVrfId int `json:"networkVrfId"`
+ Note string `json:"note"`
+ PrimarySubnetId int `json:"primarySubnetId"`
+ VlanNumber int `json:"vlanNumber"`
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_product_item_price.go b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_product_item_price.go
new file mode 100644
index 000000000000..ab912b787189
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_product_item_price.go
@@ -0,0 +1,19 @@
+package data_types
+
+type SoftLayer_Product_Item_Price struct {
+ Id int `json:"id"`
+ LocationGroupId int `json:"locationGroupId"`
+ Categories []Category `json:"categories,omitempty"`
+ Item *Item `json:"item,omitempty"`
+}
+
+type Item struct {
+ Id int `json:"id"`
+ Description string `json:"description"`
+ Capacity string `json:"capacity"`
+}
+
+type Category struct {
+ Id int `json:"id"`
+ CategoryCode string `json:"categoryCode"`
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_product_package.go b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_product_package.go
new file mode 100644
index 000000000000..15c86f28bf87
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_product_package.go
@@ -0,0 +1,20 @@
+package data_types
+
+type Softlayer_Product_Package struct {
+ Id int `json:"id"`
+ Name string `json:"name"`
+ IsActive int `json:"isActive"`
+ Description string `json:"description"`
+ PackageType *Package_Type `json:"type"`
+}
+
+type Package_Type struct {
+ KeyName string `json:"keyName"`
+}
+
+type SoftLayer_Product_Item struct {
+ Id int `json:"id"`
+ Description string `json:"description"`
+ Capacity string `json:"capacity"`
+ Prices []SoftLayer_Product_Item_Price `json:"prices,omitempty"`
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_provisioning_version1_transaction.go b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_provisioning_version1_transaction.go
new file mode 100644
index 000000000000..3fb042575058
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_provisioning_version1_transaction.go
@@ -0,0 +1,29 @@
+package data_types
+
+import (
+ "time"
+)
+
+type TransactionGroup struct {
+ AverageTimeToComplete string `json:"averageTimeToComplete"`
+ Name string `json:"name"`
+}
+
+type TransactionStatus struct {
+ AverageDuration string `json:"averageDuration"`
+ FriendlyName string `json:"friendlyName"`
+ Name string `json:"name"`
+}
+
+type SoftLayer_Provisioning_Version1_Transaction struct {
+ CreateDate *time.Time `json:"createDate"`
+ ElapsedSeconds int `json:"elapsedSeconds"`
+ GuestId int `json:"guestId"`
+ HardwareId int `json:"hardwareId"`
+ Id int `json:"id"`
+ ModifyDate *time.Time `json:"modifyDate"`
+ StatusChangeDate *time.Time `json:"statusChangeDate"`
+
+ TransactionGroup TransactionGroup `json:"transactionGroup,omitempty"`
+ TransactionStatus TransactionStatus `json:"transactionStatus,omitempty"`
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_security_ssh_key.go b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_security_ssh_key.go
new file mode 100644
index 000000000000..8695cdc95beb
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_security_ssh_key.go
@@ -0,0 +1,19 @@
+package data_types
+
+import (
+ "time"
+)
+
+type SoftLayer_Shh_Key_Parameters struct {
+ Parameters []SoftLayer_Security_Ssh_Key `json:"parameters"`
+}
+
+type SoftLayer_Security_Ssh_Key struct {
+ CreateDate *time.Time `json:"createDate"`
+ Fingerprint string `json:"fingerprint"`
+ Id int `json:"id"`
+ Key string `json:"key"`
+ Label string `json:"label"`
+ ModifyDate *time.Time `json:"modifyDate"`
+ Notes string `json:"notes"`
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_set_user_metadata.go b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_set_user_metadata.go
new file mode 100644
index 000000000000..036a9ac64037
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_set_user_metadata.go
@@ -0,0 +1,8 @@
+package data_types
+
+type UserMetadata string
+type UserMetadataArray []UserMetadata
+
+type SoftLayer_SetUserMetadata_Parameters struct {
+ Parameters []UserMetadataArray `json:"parameters"`
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_software_component_password.go b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_software_component_password.go
new file mode 100644
index 000000000000..3f1fe1b3dae2
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_software_component_password.go
@@ -0,0 +1,24 @@
+package data_types
+
+import (
+ "time"
+)
+
+type Software struct {
+ HardwareId int `json:"hardwareId,omitempty"`
+ Id int `json:"id"`
+ ManufacturerLicenseInstance string `json:"manufacturerLicenseInstance"`
+}
+
+type SoftLayer_Software_Component_Password struct {
+ CreateDate *time.Time `json:"createDate"`
+ Id int `json:"id"`
+ ModifyDate *time.Time `json:"modifyDate"`
+ Notes string `json:"notes"`
+ Password string `json:"password"`
+ Port int `json:"port"`
+ SoftwareId int `json:"softwareId"`
+ Username string `json:"username"`
+
+ Software Software `json:"software"`
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_virtual_disk_image.go b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_virtual_disk_image.go
new file mode 100644
index 000000000000..d29de90945df
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_virtual_disk_image.go
@@ -0,0 +1,20 @@
+package data_types
+
+import (
+ "time"
+)
+
+type SoftLayer_Virtual_Disk_Image struct {
+ Capacity int `json:"capacity"`
+ Checksum string `json:"checksum"`
+ CreateDate *time.Time `json:"createDate"`
+ Description string `json:"description"`
+ Id int `json:"id"`
+ ModifyDate *time.Time `json:"modifyDate"`
+ Name string `json:"name"`
+ ParentId int `json:"parentId"`
+ StorageRepositoryId int `json:"storageRepositoryId"`
+ TypeId int `json:"typeId"`
+ Units string `json:"units"`
+ Uuid string `json:"uuid"`
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_virtual_guest.go b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_virtual_guest.go
new file mode 100644
index 000000000000..8a4d7dc844d3
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_virtual_guest.go
@@ -0,0 +1,150 @@
+package data_types
+
+import (
+ "time"
+)
+
+type SoftLayer_Virtual_Guest_Parameters struct {
+ Parameters []SoftLayer_Virtual_Guest `json:"parameters"`
+}
+
+type SoftLayer_Virtual_Guest struct {
+ AccountId int `json:"accountId,omitempty"`
+ CreateDate *time.Time `json:"createDate,omitempty"`
+ DedicatedAccountHostOnlyFlag bool `json:"dedicatedAccountHostOnlyFlag,omitempty"`
+ Domain string `json:"domain,omitempty"`
+ FullyQualifiedDomainName string `json:"fullyQualifiedDomainName,omitempty"`
+ Hostname string `json:"hostname,omitempty"`
+ Id int `json:"id,omitempty"`
+ LastPowerStateId int `json:"lastPowerStateId,omitempty"`
+ LastVerifiedDate *time.Time `json:"lastVerifiedDate,omitempty"`
+ MaxCpu int `json:"maxCpu,omitempty"`
+ MaxCpuUnits string `json:"maxCpuUnits,omitempty"`
+ MaxMemory int `json:"maxMemory,omitempty"`
+ MetricPollDate *time.Time `json:"metricPollDate,omitempty"`
+ ModifyDate *time.Time `json:"modifyDate,omitempty"`
+ Notes string `json:"notes,omitempty"`
+ PostInstallScriptUri string `json:"postInstallScriptUri,omitempty"`
+ PrivateNetworkOnlyFlag bool `json:"privateNetworkOnlyFlag,omitempty"`
+ StartCpus int `json:"startCpus,omitempty"`
+ StatusId int `json:"statusId,omitempty"`
+ Uuid string `json:"uuid,omitempty"`
+ LocalDiskFlag bool `json:"localDiskFlag,omitempty"`
+ HourlyBillingFlag bool `json:"hourlyBillingFlag,omitempty"`
+
+ GlobalIdentifier string `json:"globalIdentifier,omitempty"`
+ ManagedResourceFlag bool `json:"managedResourceFlag,omitempty"`
+ PrimaryBackendIpAddress string `json:"primaryBackendIpAddress,omitempty"`
+ PrimaryIpAddress string `json:"primaryIpAddress,omitempty"`
+
+ PrimaryNetworkComponent *PrimaryNetworkComponent `json:"primaryNetworkComponent,omitempty"`
+ PrimaryBackendNetworkComponent *PrimaryBackendNetworkComponent `json:"primaryBackendNetworkComponent,omitempty"`
+
+ Location *SoftLayer_Location `json:"location"`
+ Datacenter *SoftLayer_Location `json:"datacenter"`
+ NetworkComponents []NetworkComponents `json:"networkComponents,omitempty"`
+ UserData []UserData `json:"userData,omitempty"`
+
+ OperatingSystem *SoftLayer_Operating_System `json:"operatingSystem"`
+
+ BlockDeviceTemplateGroup *BlockDeviceTemplateGroup `json:"blockDeviceTemplateGroup,omitempty"`
+}
+
+type SoftLayer_Operating_System struct {
+ Passwords []SoftLayer_Password `json:"passwords"`
+}
+
+type SoftLayer_Password struct {
+ Username string `json:"username"`
+ Password string `json:"password"`
+}
+
+type SoftLayer_Virtual_Guest_Template_Parameters struct {
+ Parameters []SoftLayer_Virtual_Guest_Template `json:"parameters"`
+}
+
+type SoftLayer_Virtual_Guest_Template struct {
+ //Required
+ Hostname string `json:"hostname"`
+ Domain string `json:"domain"`
+ StartCpus int `json:"startCpus"`
+ MaxMemory int `json:"maxMemory"`
+ Datacenter Datacenter `json:"datacenter"`
+ HourlyBillingFlag bool `json:"hourlyBillingFlag"`
+ LocalDiskFlag bool `json:"localDiskFlag"`
+
+ //Conditionally required
+ OperatingSystemReferenceCode string `json:"operatingSystemReferenceCode,omitempty"`
+ BlockDeviceTemplateGroup *BlockDeviceTemplateGroup `json:"blockDeviceTemplateGroup,omitempty"`
+
+ //Optional
+ DedicatedAccountHostOnlyFlag bool `json:"dedicatedAccountHostOnlyFlag,omitempty"`
+ NetworkComponents []NetworkComponents `json:"networkComponents,omitempty"`
+ PrivateNetworkOnlyFlag bool `json:"privateNetworkOnlyFlag,omitempty"`
+ PrimaryNetworkComponent *PrimaryNetworkComponent `json:"primaryNetworkComponent,omitempty"`
+ PrimaryBackendNetworkComponent *PrimaryBackendNetworkComponent `json:"primaryBackendNetworkComponent,omitempty"`
+ PostInstallScriptUri string `json:"postInstallScriptUri,omitempty"`
+
+ BlockDevices []BlockDevice `json:"blockDevices,omitempty"`
+ UserData []UserData `json:"userData,omitempty"`
+ SshKeys []SshKey `json:"sshKeys,omitempty"`
+}
+
+type Datacenter struct {
+ //Required
+ Name string `json:"name"`
+}
+
+type BlockDeviceTemplateGroup struct {
+ //Required
+ GlobalIdentifier string `json:"globalIdentifier,omitempty"`
+}
+
+type NetworkComponents struct {
+ //Required, defaults to 10
+ MaxSpeed int `json:"maxSpeed,omitempty"`
+}
+
+type NetworkVlan struct {
+ //Required
+ Id int `json:"id,omitempty"`
+}
+
+type PrimaryNetworkComponent struct {
+ //Required
+ NetworkVlan NetworkVlan `json:"networkVlan,omitempty"`
+}
+
+type PrimaryBackendNetworkComponent struct {
+ //Required
+ NetworkVlan NetworkVlan `json:"networkVlan,omitempty"`
+}
+
+type DiskImage struct {
+ //Required
+ Capacity int `json:"capacity,omitempty"`
+}
+
+type BlockDevice struct {
+ //Required
+ Device string `json:"device,omitempty"`
+ DiskImage DiskImage `json:"diskImage,omitempty"`
+}
+
+type UserData struct {
+ //Required
+ Value string `json:"value,omitempty"`
+}
+
+type SshKey struct {
+ //Required
+ Id int `json:"id,omitempty"`
+}
+
+type SoftLayer_Virtual_Guest_SetTags_Parameters struct {
+ Parameters []string `json:"parameters"`
+}
+
+type Image_Template_Config struct {
+ ImageTemplateId string `json:"imageTemplateId"`
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_virtual_guest_block_device.go b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_virtual_guest_block_device.go
new file mode 100644
index 000000000000..4ba787f0a93e
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_virtual_guest_block_device.go
@@ -0,0 +1,18 @@
+package data_types
+
+import "time"
+
+type SoftLayer_Virtual_Guest_Block_Device struct {
+ BootableFlag int `json:"bootableFlag"`
+ CreateDate *time.Time `json:"createDate"`
+ Device string `json:"device"`
+ DiskImageId int `json:"diskImageId"`
+ GuestId int `json:"guestId"`
+ HotPlugFlag int `json:"hotPlugFlag"`
+ Id int `json:"id"`
+ ModifyDate *time.Time `json:"modifyDate"`
+ MountMode string `json:"mountMode"`
+ MountType string `json:"mountType"`
+ StatusId int `json:"statusId"`
+ Uuid string `json:"uuid"`
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_virtual_guest_block_device_template_group.go b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_virtual_guest_block_device_template_group.go
new file mode 100644
index 000000000000..bf0a7355bbfe
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_virtual_guest_block_device_template_group.go
@@ -0,0 +1,18 @@
+package data_types
+
+import "time"
+
+type SoftLayer_Virtual_Guest_Block_Device_Template_Group struct {
+ AccountId int `json:"accountId"`
+ CreateDate *time.Time `json:"createDate"`
+ Id int `json:"id"`
+ Name string `json:"name"`
+ Note string `json:"note"`
+ ParentId *int `json:"parentId"`
+ PublicFlag int `json:"publicFlag"`
+ StatusId int `json:"statusId"`
+ Summary string `json:"summary"`
+ TransactionId *int `json:"transactionId"`
+ UserRecordId int `json:"userRecordId"`
+ GlobalIdentifier string `json:"globalIdentifier"`
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_virtual_guest_init_parameters.go b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_virtual_guest_init_parameters.go
new file mode 100644
index 000000000000..fd11c95f7658
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_virtual_guest_init_parameters.go
@@ -0,0 +1,13 @@
+package data_types
+
+type SoftLayer_Virtual_GuestInitParameters struct {
+ Parameters []interface{} `json:"parameters"`
+}
+
+type SoftLayer_Virtual_GuestInit_ImageId_Parameters struct {
+ Parameters ImageId_Parameter `json:"parameters"`
+}
+
+type ImageId_Parameter struct {
+ ImageId int `json:"imageId"`
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_virtual_guest_power_state.go b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_virtual_guest_power_state.go
new file mode 100644
index 000000000000..6dd271db059c
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/data_types/softlayer_virtual_guest_power_state.go
@@ -0,0 +1,7 @@
+package data_types
+
+type SoftLayer_Virtual_Guest_Power_State struct {
+ Description string `json:"description"`
+ KeyName string `json:"keyName"`
+ Name string `json:"name"`
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/services/softLayer_account.go b/vendor/github.com/maximilien/softlayer-go/services/softLayer_account.go
new file mode 100644
index 000000000000..df114f0b0adf
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/services/softLayer_account.go
@@ -0,0 +1,422 @@
+package services
+
+import (
+ "bytes"
+ "encoding/json"
+ "errors"
+ "fmt"
+
+ "github.com/maximilien/softlayer-go/common"
+ datatypes "github.com/maximilien/softlayer-go/data_types"
+ softlayer "github.com/maximilien/softlayer-go/softlayer"
+)
+
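+// softLayer_Account_Service implements the SoftLayer_Account service, exposing account-level queries such as virtual guests, network storage, disk images, and SSH keys.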
+type softLayer_Account_Service struct {
+ client softlayer.Client
+}
+
+func NewSoftLayer_Account_Service(client softlayer.Client) *softLayer_Account_Service {
+ return &softLayer_Account_Service{
+ client: client,
+ }
+}
+
+func (slas *softLayer_Account_Service) GetName() string {
+ return "SoftLayer_Account"
+}
+
+func (slas *softLayer_Account_Service) GetAccountStatus() (datatypes.SoftLayer_Account_Status, error) {
+ path := fmt.Sprintf("%s/%s", slas.GetName(), "getAccountStatus.json")
+ responseBytes, errorCode, err := slas.client.GetHttpClient().DoRawHttpRequest(path, "GET", &bytes.Buffer{})
+ if err != nil {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Account#getAccountStatus, error message '%s'", err.Error())
+ return datatypes.SoftLayer_Account_Status{}, errors.New(errorMessage)
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Account#getAccountStatus, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Account_Status{}, errors.New(errorMessage)
+ }
+
+ accountStatus := datatypes.SoftLayer_Account_Status{}
+ err = json.Unmarshal(responseBytes, &accountStatus)
+ if err != nil {
+ errorMessage := fmt.Sprintf("softlayer-go: failed to decode JSON response, err message '%s'", err.Error())
+ err := errors.New(errorMessage)
+ return datatypes.SoftLayer_Account_Status{}, err
+ }
+
+ return accountStatus, nil
+}
+
+func (slas *softLayer_Account_Service) GetVirtualGuests() ([]datatypes.SoftLayer_Virtual_Guest, error) {
+ path := fmt.Sprintf("%s/%s", slas.GetName(), "getVirtualGuests.json")
+ responseBytes, errorCode, err := slas.client.GetHttpClient().DoRawHttpRequest(path, "GET", &bytes.Buffer{})
+ if err != nil {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Account#getVirtualGuests, error message '%s'", err.Error())
+ return []datatypes.SoftLayer_Virtual_Guest{}, errors.New(errorMessage)
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Account#getVirtualGuests, HTTP error code: '%d'", errorCode)
+ return []datatypes.SoftLayer_Virtual_Guest{}, errors.New(errorMessage)
+ }
+
+ virtualGuests := []datatypes.SoftLayer_Virtual_Guest{}
+ err = json.Unmarshal(responseBytes, &virtualGuests)
+ if err != nil {
+ errorMessage := fmt.Sprintf("softlayer-go: failed to decode JSON response, err message '%s'", err.Error())
+ err := errors.New(errorMessage)
+ return []datatypes.SoftLayer_Virtual_Guest{}, err
+ }
+
+ return virtualGuests, nil
+}
+
+func (slas *softLayer_Account_Service) GetVirtualGuestsByFilter(filters string) ([]datatypes.SoftLayer_Virtual_Guest, error) {
+ path := fmt.Sprintf("%s/%s", slas.GetName(), "getVirtualGuests.json")
+
+ objectMasks := []string{
+ "accountId",
+ "createDate",
+ "dedicatedAccountHostOnlyFlag",
+ "domain",
+ "fullyQualifiedDomainName",
+ "hostname",
+ "hourlyBillingFlag",
+ "id",
+ "lastPowerStateId",
+ "lastVerifiedDate",
+ "maxCpu",
+ "maxCpuUnits",
+ "maxMemory",
+ "metricPollDate",
+ "modifyDate",
+ "notes",
+ "postInstallScriptUri",
+ "privateNetworkOnlyFlag",
+ "startCpus",
+ "statusId",
+ "uuid",
+ "userData.value",
+ "localDiskFlag",
+
+ "globalIdentifier",
+ "managedResourceFlag",
+ "primaryBackendIpAddress",
+ "primaryIpAddress",
+
+ "location.name",
+ "location.longName",
+ "location.id",
+ "datacenter.name",
+ "datacenter.longName",
+ "datacenter.id",
+ "networkComponents.maxSpeed",
+ "operatingSystem.passwords.password",
+ "operatingSystem.passwords.username",
+
+ "blockDeviceTemplateGroup.globalIdentifier",
+ "primaryNetworkComponent.networkVlan.id",
+ "primaryBackendNetworkComponent.networkVlan.id",
+ }
+
+ responseBytes, errorCode, err := slas.client.GetHttpClient().DoRawHttpRequestWithObjectFilterAndObjectMask(path, objectMasks, filters, "GET", &bytes.Buffer{})
+ if err != nil {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Account#getVirtualGuests, error message '%s'", err.Error())
+ return []datatypes.SoftLayer_Virtual_Guest{}, errors.New(errorMessage)
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Account#getVirtualGuests, HTTP error code: '%d'", errorCode)
+ return []datatypes.SoftLayer_Virtual_Guest{}, errors.New(errorMessage)
+ }
+
+ virtualGuests := []datatypes.SoftLayer_Virtual_Guest{}
+ err = json.Unmarshal(responseBytes, &virtualGuests)
+ if err != nil {
+ errorMessage := fmt.Sprintf("softlayer-go: failed to decode JSON response, err message '%s'", err.Error())
+ err := errors.New(errorMessage)
+ return []datatypes.SoftLayer_Virtual_Guest{}, err
+ }
+
+ return virtualGuests, nil
+}
+
+func (slas *softLayer_Account_Service) GetNetworkStorage() ([]datatypes.SoftLayer_Network_Storage, error) {
+ path := fmt.Sprintf("%s/%s", slas.GetName(), "getNetworkStorage.json")
+ responseBytes, errorCode, err := slas.client.GetHttpClient().DoRawHttpRequest(path, "GET", &bytes.Buffer{})
+ if err != nil {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Account#getNetworkStorage, error message '%s'", err.Error())
+ return []datatypes.SoftLayer_Network_Storage{}, errors.New(errorMessage)
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Account#getNetworkStorage, HTTP error code: '%d'", errorCode)
+ return []datatypes.SoftLayer_Network_Storage{}, errors.New(errorMessage)
+ }
+
+ networkStorage := []datatypes.SoftLayer_Network_Storage{}
+ err = json.Unmarshal(responseBytes, &networkStorage)
+ if err != nil {
+ errorMessage := fmt.Sprintf("softlayer-go: failed to decode JSON response, err message '%s'", err.Error())
+ err := errors.New(errorMessage)
+ return []datatypes.SoftLayer_Network_Storage{}, err
+ }
+
+ return networkStorage, nil
+}
+
+func (slas *softLayer_Account_Service) GetIscsiNetworkStorage() ([]datatypes.SoftLayer_Network_Storage, error) {
+ path := fmt.Sprintf("%s/%s", slas.GetName(), "getIscsiNetworkStorage.json")
+
+ objectMasks := []string{
+ "username",
+ "accountId",
+ "capacityGb",
+ "id",
+ "billingItem.id",
+ "billingItem.orderItem.order.id",
+ }
+
+ responseBytes, errorCode, err := slas.client.GetHttpClient().DoRawHttpRequestWithObjectMask(path, objectMasks, "GET", &bytes.Buffer{})
+ if err != nil {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Account#getIscsiNetworkStorage, error message '%s'", err.Error())
+ return []datatypes.SoftLayer_Network_Storage{}, errors.New(errorMessage)
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Account#getIscsiNetworkStorage, HTTP error code: '%d'", errorCode)
+ return []datatypes.SoftLayer_Network_Storage{}, errors.New(errorMessage)
+ }
+
+ networkStorage := []datatypes.SoftLayer_Network_Storage{}
+ err = json.Unmarshal(responseBytes, &networkStorage)
+ if err != nil {
+ errorMessage := fmt.Sprintf("softlayer-go: failed to decode JSON response, err message '%s'", err.Error())
+ err := errors.New(errorMessage)
+ return []datatypes.SoftLayer_Network_Storage{}, err
+ }
+
+ return networkStorage, nil
+}
+
+func (slas *softLayer_Account_Service) GetIscsiNetworkStorageWithFilter(filter string) ([]datatypes.SoftLayer_Network_Storage, error) {
+ path := fmt.Sprintf("%s/%s", slas.GetName(), "getIscsiNetworkStorage.json")
+
+ objectMasks := []string{
+ "username",
+ "accountId",
+ "capacityGb",
+ "id",
+ "billingItem.id",
+ "billingItem.orderItem.order.id",
+ }
+
+ responseBytes, errorCode, err := slas.client.GetHttpClient().DoRawHttpRequestWithObjectFilterAndObjectMask(path, objectMasks, filter, "GET", &bytes.Buffer{})
+ if err != nil {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Account#getIscsiNetworkStorage, error message '%s'", err.Error())
+ return []datatypes.SoftLayer_Network_Storage{}, errors.New(errorMessage)
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Account#getIscsiNetworkStorage, HTTP error code: '%d'", errorCode)
+ return []datatypes.SoftLayer_Network_Storage{}, errors.New(errorMessage)
+ }
+
+ networkStorage := []datatypes.SoftLayer_Network_Storage{}
+ err = json.Unmarshal(responseBytes, &networkStorage)
+ if err != nil {
+ errorMessage := fmt.Sprintf("softlayer-go: failed to decode JSON response, err message '%s'", err.Error())
+ err := errors.New(errorMessage)
+ return []datatypes.SoftLayer_Network_Storage{}, err
+ }
+
+ return networkStorage, nil
+}
+
+func (slas *softLayer_Account_Service) GetVirtualDiskImages() ([]datatypes.SoftLayer_Virtual_Disk_Image, error) {
+ path := fmt.Sprintf("%s/%s", slas.GetName(), "getVirtualDiskImages.json")
+ responseBytes, errorCode, err := slas.client.GetHttpClient().DoRawHttpRequest(path, "GET", &bytes.Buffer{})
+ if err != nil {
+ errorMessage := fmt.Sprintf("softlayer-go: could not get SoftLayer_Account#getVirtualDiskImages, error message '%s'", err.Error())
+ return []datatypes.SoftLayer_Virtual_Disk_Image{}, errors.New(errorMessage)
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Account#getVirtualDiskImages, HTTP error code: '%d'", errorCode)
+ return []datatypes.SoftLayer_Virtual_Disk_Image{}, errors.New(errorMessage)
+ }
+
+ virtualDiskImages := []datatypes.SoftLayer_Virtual_Disk_Image{}
+ err = json.Unmarshal(responseBytes, &virtualDiskImages)
+ if err != nil {
+ errorMessage := fmt.Sprintf("softlayer-go: failed to decode JSON response, err message '%s'", err.Error())
+ err := errors.New(errorMessage)
+ return []datatypes.SoftLayer_Virtual_Disk_Image{}, err
+ }
+
+ return virtualDiskImages, nil
+}
+
+func (slas *softLayer_Account_Service) GetVirtualDiskImagesWithFilter(filters string) ([]datatypes.SoftLayer_Virtual_Disk_Image, error) {
+ isJson, err := common.ValidateJson(filters)
+ if !isJson || err != nil {
+ errorMessage := fmt.Sprintf("softlayer-go: filters string %s is not a valid JSON-formatted string, error message '%s'", filters, err.Error())
+ return []datatypes.SoftLayer_Virtual_Disk_Image{}, errors.New(errorMessage)
+ }
+
+ path := fmt.Sprintf("%s/%s", slas.GetName(), "getVirtualDiskImages.json")
+ responseBytes, errorCode, err := slas.client.GetHttpClient().DoRawHttpRequestWithObjectFilter(path, filters, "GET", &bytes.Buffer{})
+ if err != nil {
+ errorMessage := fmt.Sprintf("softlayer-go: could not get SoftLayer_Account#getVirtualDiskImages, error message '%s'", err.Error())
+ return []datatypes.SoftLayer_Virtual_Disk_Image{}, errors.New(errorMessage)
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Account#getVirtualDiskImages, HTTP error code: '%d'", errorCode)
+ return []datatypes.SoftLayer_Virtual_Disk_Image{}, errors.New(errorMessage)
+ }
+
+ virtualDiskImages := []datatypes.SoftLayer_Virtual_Disk_Image{}
+ err = json.Unmarshal(responseBytes, &virtualDiskImages)
+ if err != nil {
+ errorMessage := fmt.Sprintf("softlayer-go: failed to decode JSON response, err message '%s'", err.Error())
+ err := errors.New(errorMessage)
+ return []datatypes.SoftLayer_Virtual_Disk_Image{}, err
+ }
+
+ return virtualDiskImages, nil
+}
+
+func (slas *softLayer_Account_Service) GetSshKeys() ([]datatypes.SoftLayer_Security_Ssh_Key, error) {
+ path := fmt.Sprintf("%s/%s", slas.GetName(), "getSshKeys.json")
+ responseBytes, errorCode, err := slas.client.GetHttpClient().DoRawHttpRequest(path, "GET", &bytes.Buffer{})
+ if err != nil {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Account#getSshKeys, error message '%s'", err.Error())
+ return []datatypes.SoftLayer_Security_Ssh_Key{}, errors.New(errorMessage)
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Account#getSshKeys, HTTP error code: '%d'", errorCode)
+ return []datatypes.SoftLayer_Security_Ssh_Key{}, errors.New(errorMessage)
+ }
+
+ sshKeys := []datatypes.SoftLayer_Security_Ssh_Key{}
+ err = json.Unmarshal(responseBytes, &sshKeys)
+ if err != nil {
+ errorMessage := fmt.Sprintf("softlayer-go: failed to decode JSON response, err message '%s'", err.Error())
+ err := errors.New(errorMessage)
+ return []datatypes.SoftLayer_Security_Ssh_Key{}, err
+ }
+
+ return sshKeys, nil
+}
+
+func (slas *softLayer_Account_Service) GetBlockDeviceTemplateGroups() ([]datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group, error) {
+ path := fmt.Sprintf("%s/%s", slas.GetName(), "getBlockDeviceTemplateGroups.json")
+ responseBytes, errorCode, err := slas.client.GetHttpClient().DoRawHttpRequest(path, "GET", &bytes.Buffer{})
+ if err != nil {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Account#getBlockDeviceTemplateGroups, error message '%s'", err.Error())
+ return []datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group{}, errors.New(errorMessage)
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Account#getBlockDeviceTemplateGroups, HTTP error code: '%d'", errorCode)
+ return []datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group{}, errors.New(errorMessage)
+ }
+
+ vgbdtGroups := []datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group{}
+ err = json.Unmarshal(responseBytes, &vgbdtGroups)
+ if err != nil {
+ errorMessage := fmt.Sprintf("softlayer-go: failed to decode JSON response, err message '%s'", err.Error())
+ err := errors.New(errorMessage)
+ return []datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group{}, err
+ }
+
+ return vgbdtGroups, nil
+}
+
+func (slas *softLayer_Account_Service) GetBlockDeviceTemplateGroupsWithFilter(filters string) ([]datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group, error) {
+ isJson, err := common.ValidateJson(filters)
+ if !isJson || err != nil {
+ errorMessage := fmt.Sprintf("softlayer-go: filters string %s is not a valid JSON-formatted string, error message '%s'", filters, err.Error())
+ return []datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group{}, errors.New(errorMessage)
+ }
+
+ path := fmt.Sprintf("%s/%s", slas.GetName(), "getBlockDeviceTemplateGroups.json")
+ responseBytes, errorCode, err := slas.client.GetHttpClient().DoRawHttpRequestWithObjectFilter(path, filters, "GET", &bytes.Buffer{})
+ if err != nil {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Account#getBlockDeviceTemplateGroups, error message '%s'", err.Error())
+ return []datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group{}, errors.New(errorMessage)
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Account#getBlockDeviceTemplateGroups, HTTP error code: '%d'", errorCode)
+ return []datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group{}, errors.New(errorMessage)
+ }
+
+ vgbdtGroups := []datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group{}
+ err = json.Unmarshal(responseBytes, &vgbdtGroups)
+ if err != nil {
+ errorMessage := fmt.Sprintf("softlayer-go: failed to decode JSON response, err message '%s'", err.Error())
+ err := errors.New(errorMessage)
+ return []datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group{}, err
+ }
+
+ return vgbdtGroups, nil
+}
+
+//TODO: why is this method empty? Remove?
+func (slas *softLayer_Account_Service) GetDatacentersWithSubnetAllocations() ([]datatypes.SoftLayer_Location, error) {
+ return []datatypes.SoftLayer_Location{}, nil
+}
+
+func (slas *softLayer_Account_Service) GetHardware() ([]datatypes.SoftLayer_Hardware, error) {
+ path := fmt.Sprintf("%s/%s", slas.GetName(), "getHardware.json")
+ responseBytes, errorCode, err := slas.client.GetHttpClient().DoRawHttpRequest(path, "GET", &bytes.Buffer{})
+ if err != nil {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Account#getHardware, error message '%s'", err.Error())
+ return []datatypes.SoftLayer_Hardware{}, errors.New(errorMessage)
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Account#getHardware, HTTP error code: '%d'", errorCode)
+ return []datatypes.SoftLayer_Hardware{}, errors.New(errorMessage)
+ }
+
+ hardwares := []datatypes.SoftLayer_Hardware{}
+ err = json.Unmarshal(responseBytes, &hardwares)
+
+ if err != nil {
+ errorMessage := fmt.Sprintf("softlayer-go: failed to decode JSON response, err message '%s'", err.Error())
+ err := errors.New(errorMessage)
+ return []datatypes.SoftLayer_Hardware{}, err
+ }
+
+ return hardwares, nil
+}
+
+func (slas *softLayer_Account_Service) GetDnsDomains() ([]datatypes.SoftLayer_Dns_Domain, error) {
+ path := fmt.Sprintf("%s/%s", slas.GetName(), "getDomains.json")
+ responseBytes, errorCode, err := slas.client.GetHttpClient().DoRawHttpRequest(path, "GET", &bytes.Buffer{})
+ if err != nil {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Account#getDomains, error message '%s'", err.Error())
+ return []datatypes.SoftLayer_Dns_Domain{}, errors.New(errorMessage)
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Account#getDomains, HTTP error code: '%d'", errorCode)
+ return []datatypes.SoftLayer_Dns_Domain{}, errors.New(errorMessage)
+ }
+
+ domains := []datatypes.SoftLayer_Dns_Domain{}
+ err = json.Unmarshal(responseBytes, &domains)
+ if err != nil {
+ errorMessage := fmt.Sprintf("softlayer-go: failed to decode JSON response, err message '%s'", err.Error())
+ err := errors.New(errorMessage)
+ return []datatypes.SoftLayer_Dns_Domain{}, err
+ }
+
+ return domains, nil
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/services/softLayer_virtual_guest_block_device_template_group.go b/vendor/github.com/maximilien/softlayer-go/services/softLayer_virtual_guest_block_device_template_group.go
new file mode 100644
index 000000000000..60e7511fc485
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/services/softLayer_virtual_guest_block_device_template_group.go
@@ -0,0 +1,437 @@
+package services
+
+import (
+ "bytes"
+ "encoding/json"
+ "errors"
+ "fmt"
+ "net/url"
+ "strconv"
+
+ common "github.com/maximilien/softlayer-go/common"
+ datatypes "github.com/maximilien/softlayer-go/data_types"
+ softlayer "github.com/maximilien/softlayer-go/softlayer"
+)
+
+type softLayer_Virtual_Guest_Block_Device_Template_Group_Service struct {
+ client softlayer.Client
+}
+
+func NewSoftLayer_Virtual_Guest_Block_Device_Template_Group_Service(client softlayer.Client) *softLayer_Virtual_Guest_Block_Device_Template_Group_Service {
+ return &softLayer_Virtual_Guest_Block_Device_Template_Group_Service{
+ client: client,
+ }
+}
+
+func (slvgbdtg *softLayer_Virtual_Guest_Block_Device_Template_Group_Service) GetName() string {
+ return "SoftLayer_Virtual_Guest_Block_Device_Template_Group"
+}
+
+func (slvgbdtg *softLayer_Virtual_Guest_Block_Device_Template_Group_Service) GetObject(id int) (datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group, error) {
+ response, errorCode, err := slvgbdtg.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/getObject.json", slvgbdtg.GetName(), id), "GET", new(bytes.Buffer))
+ if err != nil {
+ return datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest_Block_Device_Template_Group#getObject, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group{}, errors.New(errorMessage)
+ }
+
+ vgbdtGroup := datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group{}
+ err = json.Unmarshal(response, &vgbdtGroup)
+ if err != nil {
+ return datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group{}, err
+ }
+
+ return vgbdtGroup, nil
+}
+
+func (slvgbdtg *softLayer_Virtual_Guest_Block_Device_Template_Group_Service) DeleteObject(id int) (datatypes.SoftLayer_Provisioning_Version1_Transaction, error) {
+ response, errorCode, err := slvgbdtg.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d.json", slvgbdtg.GetName(), id), "DELETE", new(bytes.Buffer))
+ if err != nil {
+ return datatypes.SoftLayer_Provisioning_Version1_Transaction{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest_Block_Device_Template_Group#deleteObject, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Provisioning_Version1_Transaction{}, errors.New(errorMessage)
+ }
+
+ transaction := datatypes.SoftLayer_Provisioning_Version1_Transaction{}
+ err = json.Unmarshal(response, &transaction)
+ if err != nil {
+ return datatypes.SoftLayer_Provisioning_Version1_Transaction{}, err
+ }
+
+ return transaction, nil
+}
+
+func (slvgbdtg *softLayer_Virtual_Guest_Block_Device_Template_Group_Service) GetDatacenters(id int) ([]datatypes.SoftLayer_Location, error) {
+ response, errorCode, err := slvgbdtg.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/getDatacenters.json", slvgbdtg.GetName(), id), "GET", new(bytes.Buffer))
+ if err != nil {
+ return []datatypes.SoftLayer_Location{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest_Block_Device_Template_Group#getDatacenters, HTTP error code: '%d'", errorCode)
+ return []datatypes.SoftLayer_Location{}, errors.New(errorMessage)
+ }
+
+ locations := []datatypes.SoftLayer_Location{}
+ err = json.Unmarshal(response, &locations)
+ if err != nil {
+ return []datatypes.SoftLayer_Location{}, err
+ }
+
+ return locations, nil
+}
+
+func (slvgbdtg *softLayer_Virtual_Guest_Block_Device_Template_Group_Service) GetSshKeys(id int) ([]datatypes.SoftLayer_Security_Ssh_Key, error) {
+ response, errorCode, err := slvgbdtg.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/getSshKeys.json", slvgbdtg.GetName(), id), "GET", new(bytes.Buffer))
+ if err != nil {
+ return []datatypes.SoftLayer_Security_Ssh_Key{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest_Block_Device_Template_Group#getSshKeys, HTTP error code: '%d'", errorCode)
+ return []datatypes.SoftLayer_Security_Ssh_Key{}, errors.New(errorMessage)
+ }
+
+ sshKeys := []datatypes.SoftLayer_Security_Ssh_Key{}
+ err = json.Unmarshal(response, &sshKeys)
+ if err != nil {
+ return []datatypes.SoftLayer_Security_Ssh_Key{}, err
+ }
+
+ return sshKeys, nil
+}
+
+func (slvgbdtg *softLayer_Virtual_Guest_Block_Device_Template_Group_Service) GetStatus(id int) (datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group_Status, error) {
+ response, errorCode, err := slvgbdtg.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/getStatus.json", slvgbdtg.GetName(), id), "GET", new(bytes.Buffer))
+ if err != nil {
+ return datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group_Status{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest_Block_Device_Template_Group#getStatus, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group_Status{}, errors.New(errorMessage)
+ }
+
+ status := datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group_Status{}
+ err = json.Unmarshal(response, &status)
+ if err != nil {
+ return datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group_Status{}, err
+ }
+
+ return status, nil
+}
+
+func (slvgbdtg *softLayer_Virtual_Guest_Block_Device_Template_Group_Service) GetImageType(id int) (datatypes.SoftLayer_Image_Type, error) {
+ response, errorCode, err := slvgbdtg.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/getImageType.json", slvgbdtg.GetName(), id), "GET", new(bytes.Buffer))
+ if err != nil {
+ return datatypes.SoftLayer_Image_Type{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest_Block_Device_Template_Group#getImageType, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Image_Type{}, errors.New(errorMessage)
+ }
+
+ imageType := datatypes.SoftLayer_Image_Type{}
+ err = json.Unmarshal(response, &imageType)
+ if err != nil {
+ return datatypes.SoftLayer_Image_Type{}, err
+ }
+
+ return imageType, nil
+}
+
+func (slvgbdtg *softLayer_Virtual_Guest_Block_Device_Template_Group_Service) GetStorageLocations(id int) ([]datatypes.SoftLayer_Location, error) {
+ response, errorCode, err := slvgbdtg.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/getStorageLocations.json", slvgbdtg.GetName(), id), "GET", new(bytes.Buffer))
+ if err != nil {
+ return []datatypes.SoftLayer_Location{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest_Block_Device_Template_Group#getStorageLocations, HTTP error code: '%d'", errorCode)
+ return []datatypes.SoftLayer_Location{}, errors.New(errorMessage)
+ }
+
+ locations := []datatypes.SoftLayer_Location{}
+ err = json.Unmarshal(response, &locations)
+ if err != nil {
+ return []datatypes.SoftLayer_Location{}, err
+ }
+
+ return locations, nil
+}
+
+func (slvgbdtg *softLayer_Virtual_Guest_Block_Device_Template_Group_Service) CreateFromExternalSource(configuration datatypes.SoftLayer_Container_Virtual_Guest_Block_Device_Template_Configuration) (datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group, error) {
+ parameters := datatypes.SoftLayer_Container_Virtual_Guest_Block_Device_Template_Configuration_Parameters{
+ Parameters: []datatypes.SoftLayer_Container_Virtual_Guest_Block_Device_Template_Configuration{configuration},
+ }
+
+ requestBody, err := json.Marshal(parameters)
+ if err != nil {
+ return datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group{}, err
+ }
+
+ response, errorCode, err := slvgbdtg.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/createFromExternalSource.json", slvgbdtg.GetName()), "POST", bytes.NewBuffer(requestBody))
+ if err != nil {
+ return datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest_Block_Device_Template_Group#createFromExternalSource, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group{}, errors.New(errorMessage)
+ }
+
+ vgbdtGroup := datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group{}
+ err = json.Unmarshal(response, &vgbdtGroup)
+ if err != nil {
+ return datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group{}, err
+ }
+
+ return vgbdtGroup, err
+}
+
+func (slvgbdtg *softLayer_Virtual_Guest_Block_Device_Template_Group_Service) CopyToExternalSource(configuration datatypes.SoftLayer_Container_Virtual_Guest_Block_Device_Template_Configuration) (bool, error) {
+ parameters := datatypes.SoftLayer_Container_Virtual_Guest_Block_Device_Template_Configuration_Parameters{
+ Parameters: []datatypes.SoftLayer_Container_Virtual_Guest_Block_Device_Template_Configuration{configuration},
+ }
+
+ requestBody, err := json.Marshal(parameters)
+ if err != nil {
+ return false, err
+ }
+
+ response, errorCode, err := slvgbdtg.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/copyToExternalSource.json", slvgbdtg.GetName()), "POST", bytes.NewBuffer(requestBody))
+ if err != nil {
+ return false, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest_Block_Device_Template_Group#copyToExternalSource, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ if res := string(response[:]); res != "true" {
+ return false, errors.New(fmt.Sprintf("Failed to copy virtual guest block device template group to external source, got '%s' as response from the API.", res))
+ }
+
+ return true, nil
+}
+
+func (slvgbdtg *softLayer_Virtual_Guest_Block_Device_Template_Group_Service) GetImageTypeKeyName(id int) (string, error) {
+ response, errorCode, err := slvgbdtg.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/getImageTypeKeyName.json", slvgbdtg.GetName(), id), "GET", new(bytes.Buffer))
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest_Block_Device_Template_Group#getImageTypeKeyName, HTTP error code: '%d'", errorCode)
+ return "", errors.New(errorMessage)
+ }
+
+ return string(response), err
+}
+
+func (slvgbdtg *softLayer_Virtual_Guest_Block_Device_Template_Group_Service) GetTransaction(id int) (datatypes.SoftLayer_Provisioning_Version1_Transaction, error) {
+ response, errorCode, err := slvgbdtg.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/getTransaction.json", slvgbdtg.GetName(), id), "GET", new(bytes.Buffer))
+ if err != nil {
+ return datatypes.SoftLayer_Provisioning_Version1_Transaction{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest_Block_Device_Template_Group#getTransaction, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Provisioning_Version1_Transaction{}, errors.New(errorMessage)
+ }
+
+ transaction := datatypes.SoftLayer_Provisioning_Version1_Transaction{}
+ err = json.Unmarshal(response, &transaction)
+ if err != nil {
+ return datatypes.SoftLayer_Provisioning_Version1_Transaction{}, err
+ }
+
+ return transaction, nil
+}
+
+func (slvgbdtg *softLayer_Virtual_Guest_Block_Device_Template_Group_Service) DenySharingAccess(id int, accountId int) (bool, error) {
+ parameters := datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_GroupInitParameters{
+ Parameters: datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_GroupInitParameter{
+ AccountId: accountId,
+ },
+ }
+
+ requestBody, err := json.Marshal(parameters)
+ if err != nil {
+ return false, err
+ }
+
+ response, errorCode, err := slvgbdtg.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/denySharingAccess.json", slvgbdtg.GetName(), id), "POST", bytes.NewBuffer(requestBody))
+ if err != nil {
+ return false, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest_Block_Device_Template_Group#denySharingAccess, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ if res := string(response[:]); res != "true" {
+ return false, errors.New(fmt.Sprintf("Failed to deny sharing access to VGBDTG with ID: %d for account ID: %d", id, accountId))
+ }
+
+ return true, nil
+}
+
+func (slvgbdtg *softLayer_Virtual_Guest_Block_Device_Template_Group_Service) PermitSharingAccess(id int, accountId int) (bool, error) {
+ parameters := datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_GroupInitParameters{
+ Parameters: datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_GroupInitParameter{
+ AccountId: accountId,
+ },
+ }
+
+ requestBody, err := json.Marshal(parameters)
+ if err != nil {
+ return false, err
+ }
+
+ response, errorCode, err := slvgbdtg.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/permitSharingAccess.json", slvgbdtg.GetName(), id), "POST", bytes.NewBuffer(requestBody))
+ if err != nil {
+ return false, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest_Block_Device_Template_Group#permitSharingAccess, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ if res := string(response[:]); res != "true" {
+ return false, errors.New(fmt.Sprintf("Failed to permit sharing access to VGBDTG with ID: %d for account ID: %d", id, accountId))
+ }
+
+ return true, nil
+}
+
+func (slvgbdtg *softLayer_Virtual_Guest_Block_Device_Template_Group_Service) AddLocations(id int, locations []datatypes.SoftLayer_Location) (bool, error) {
+ parameters := datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group_LocationsInitParameters{
+ Parameters: datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group_LocationsInitParameter{
+ Locations: locations,
+ },
+ }
+
+ requestBody, err := json.Marshal(parameters)
+ if err != nil {
+ return false, err
+ }
+
+ response, errorCode, err := slvgbdtg.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/addLocations.json", slvgbdtg.GetName(), id), "POST", bytes.NewBuffer(requestBody))
+ if err != nil {
+ return false, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest_Block_Device_Template_Group#addLocations, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ if res := string(response[:]); res != "true" {
+ return false, errors.New(fmt.Sprintf("Failed to add locations to VGBDTG with ID: %d", id))
+ }
+
+ return true, nil
+}
+
+func (slvgbdtg *softLayer_Virtual_Guest_Block_Device_Template_Group_Service) RemoveLocations(id int, locations []datatypes.SoftLayer_Location) (bool, error) {
+ parameters := datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group_LocationsInitParameters{
+ Parameters: datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group_LocationsInitParameter{
+ Locations: locations,
+ },
+ }
+
+ requestBody, err := json.Marshal(parameters)
+ if err != nil {
+ return false, err
+ }
+
+ response, errorCode, err := slvgbdtg.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/removeLocations.json", slvgbdtg.GetName(), id), "POST", bytes.NewBuffer(requestBody))
+ if err != nil {
+ return false, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest_Block_Device_Template_Group#removeLocations, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ if res := string(response[:]); res != "true" {
+ return false, errors.New(fmt.Sprintf("Failed to remove locations from VGBDTG with ID: %d", id))
+ }
+
+ return true, nil
+}
+
+func (slvgbdtg *softLayer_Virtual_Guest_Block_Device_Template_Group_Service) SetAvailableLocations(id int, locations []datatypes.SoftLayer_Location) (bool, error) {
+ parameters := datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group_LocationsInitParameters{
+ Parameters: datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group_LocationsInitParameter{
+ Locations: locations,
+ },
+ }
+
+ requestBody, err := json.Marshal(parameters)
+ if err != nil {
+ return false, err
+ }
+
+ response, errorCode, err := slvgbdtg.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/setAvailableLocations.json", slvgbdtg.GetName(), id), "POST", bytes.NewBuffer(requestBody))
+ if err != nil {
+ return false, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest_Block_Device_Template_Group#setAvailableLocations, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ if res := string(response[:]); res != "true" {
+ return false, errors.New(fmt.Sprintf("Failed to set available locations for VGBDTG with ID: %d", id))
+ }
+
+ return true, nil
+}
+
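+// CreatePublicArchiveTransaction starts the transaction that publishes an image template to the
+// given locations and returns the transaction id parsed from the response. The parameters slice
+// is positional, so the order used here (group name, summary, note, location ids) has to match
+// what the remote createPublicArchiveTransaction method expects.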
+func (slvgbdtg *softLayer_Virtual_Guest_Block_Device_Template_Group_Service) CreatePublicArchiveTransaction(id int, groupName string, summary string, note string, locations []datatypes.SoftLayer_Location) (int, error) {
+ locationIdsArray := []int{}
+ for _, location := range locations {
+ locationIdsArray = append(locationIdsArray, location.Id)
+ }
+
+ groupName = url.QueryEscape(groupName)
+ summary = url.QueryEscape(summary)
+ note = url.QueryEscape(note)
+
+ parameters := datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_GroupInitParameters2{
+ Parameters: []interface{}{groupName, summary, note, locationIdsArray},
+ }
+
+ requestBody, err := json.Marshal(parameters)
+ if err != nil {
+ return 0, err
+ }
+
+ response, errorCode, err := slvgbdtg.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/createPublicArchiveTransaction.json", slvgbdtg.GetName(), id), "POST", bytes.NewBuffer(requestBody))
+ if err != nil {
+ return 0, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest_Block_Device_Template_Group#createPublicArchiveTransaction, HTTP error code: '%d'", errorCode)
+ return 0, errors.New(errorMessage)
+ }
+
+ transactionId, err := strconv.Atoi(string(response[:]))
+ if err != nil {
+ return 0, errors.New(fmt.Sprintf("Failed to createPublicArchiveTransaction for ID: %d, error: %s", id, string(response[:])))
+ }
+
+ return transactionId, nil
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/services/softlayer_billing_item.go b/vendor/github.com/maximilien/softlayer-go/services/softlayer_billing_item.go
new file mode 100644
index 000000000000..1359e2e8da00
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/services/softlayer_billing_item.go
@@ -0,0 +1,41 @@
+package services
+
+import (
+ "bytes"
+ "errors"
+ "fmt"
+ common "github.com/maximilien/softlayer-go/common"
+ softlayer "github.com/maximilien/softlayer-go/softlayer"
+)
+
+type softLayer_Billing_Item_Service struct {
+ client softlayer.Client
+}
+
+func NewSoftLayer_Billing_Item_Service(client softlayer.Client) *softLayer_Billing_Item_Service {
+ return &softLayer_Billing_Item_Service{
+ client: client,
+ }
+}
+
+func (slbi *softLayer_Billing_Item_Service) GetName() string {
+ return "SoftLayer_Billing_Item"
+}
+
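+// CancelService asks SoftLayer to cancel the billing item with the given id. The API answers
+// with the literal string "true" on success, so any other body is treated as "not cancelled".
+// softLayer_Network_Storage_Service.DeleteIscsiVolume relies on this call to remove volumes.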
+func (slbi *softLayer_Billing_Item_Service) CancelService(billingId int) (bool, error) {
+ response, errorCode, err := slbi.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/cancelService.json", slbi.GetName(), billingId), "GET", new(bytes.Buffer))
+ if err != nil {
+ return false, err
+ }
+
+ if res := string(response[:]); res != "true" {
+ return false, nil
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Billing_Item#CancelService, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ return true, err
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/services/softlayer_billing_item_cancellation_request.go b/vendor/github.com/maximilien/softlayer-go/services/softlayer_billing_item_cancellation_request.go
new file mode 100644
index 000000000000..f60cf5c61c7d
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/services/softlayer_billing_item_cancellation_request.go
@@ -0,0 +1,57 @@
+package services
+
+import (
+ "bytes"
+ "encoding/json"
+ "errors"
+ "fmt"
+
+ common "github.com/maximilien/softlayer-go/common"
+ datatypes "github.com/maximilien/softlayer-go/data_types"
+ softlayer "github.com/maximilien/softlayer-go/softlayer"
+)
+
+type softLayer_Billing_Item_Cancellation_Request_Service struct {
+ client softlayer.Client
+}
+
+func NewSoftLayer_Billing_Item_Cancellation_Request_Service(client softlayer.Client) *softLayer_Billing_Item_Cancellation_Request_Service {
+ return &softLayer_Billing_Item_Cancellation_Request_Service{
+ client: client,
+ }
+}
+
+func (slbicr *softLayer_Billing_Item_Cancellation_Request_Service) GetName() string {
+ return "SoftLayer_Billing_Item_Cancellation_Request"
+}
+
+func (slbicr *softLayer_Billing_Item_Cancellation_Request_Service) CreateObject(request datatypes.SoftLayer_Billing_Item_Cancellation_Request) (datatypes.SoftLayer_Billing_Item_Cancellation_Request, error) {
+ parameters := datatypes.SoftLayer_Billing_Item_Cancellation_Request_Parameters{
+ Parameters: []datatypes.SoftLayer_Billing_Item_Cancellation_Request{
+ request,
+ },
+ }
+
+ requestBody, err := json.Marshal(parameters)
+ if err != nil {
+ return datatypes.SoftLayer_Billing_Item_Cancellation_Request{}, err
+ }
+
+ responseBytes, errorCode, err := slbicr.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/createObject.json", slbicr.GetName()), "POST", bytes.NewBuffer(requestBody))
+ if err != nil {
+ return datatypes.SoftLayer_Billing_Item_Cancellation_Request{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Billing_Item_Cancellation_Request#createObject, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Billing_Item_Cancellation_Request{}, errors.New(errorMessage)
+ }
+
+ result := datatypes.SoftLayer_Billing_Item_Cancellation_Request{}
+ err = json.Unmarshal(responseBytes, &result)
+ if err != nil {
+ return datatypes.SoftLayer_Billing_Item_Cancellation_Request{}, err
+ }
+
+ return result, nil
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/services/softlayer_dns_domain.go b/vendor/github.com/maximilien/softlayer-go/services/softlayer_dns_domain.go
new file mode 100644
index 000000000000..40f288b2651d
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/services/softlayer_dns_domain.go
@@ -0,0 +1,113 @@
+package services
+
+import (
+ "bytes"
+ "encoding/json"
+ "errors"
+ "fmt"
+
+ common "github.com/maximilien/softlayer-go/common"
+ datatypes "github.com/maximilien/softlayer-go/data_types"
+ softlayer "github.com/maximilien/softlayer-go/softlayer"
+)
+
+type softLayer_Dns_Domain_Service struct {
+ client softlayer.Client
+}
+
+func NewSoftLayer_Dns_Domain_Service(client softlayer.Client) *softLayer_Dns_Domain_Service {
+ return &softLayer_Dns_Domain_Service{
+ client: client,
+ }
+}
+
+func (sldds *softLayer_Dns_Domain_Service) GetName() string {
+ return "SoftLayer_Dns_Domain"
+}
+
+func (sldds *softLayer_Dns_Domain_Service) CreateObject(template datatypes.SoftLayer_Dns_Domain_Template) (datatypes.SoftLayer_Dns_Domain, error) {
+ if template.ResourceRecords == nil {
+ template.ResourceRecords = []datatypes.SoftLayer_Dns_Domain_ResourceRecord{}
+ }
+
+ parameters := datatypes.SoftLayer_Dns_Domain_Template_Parameters{
+ Parameters: []datatypes.SoftLayer_Dns_Domain_Template{
+ template,
+ },
+ }
+
+ requestBody, err := json.Marshal(parameters)
+ if err != nil {
+ return datatypes.SoftLayer_Dns_Domain{}, err
+ }
+
+ response, errorCode, err := sldds.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s.json", sldds.GetName()), "POST", bytes.NewBuffer(requestBody))
+ if err != nil {
+ return datatypes.SoftLayer_Dns_Domain{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Dns_Domain#createObject, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Dns_Domain{}, errors.New(errorMessage)
+ }
+
+ err = sldds.client.GetHttpClient().CheckForHttpResponseErrors(response)
+ if err != nil {
+ return datatypes.SoftLayer_Dns_Domain{}, err
+ }
+
+ softLayer_Dns_Domain := datatypes.SoftLayer_Dns_Domain{}
+ err = json.Unmarshal(response, &softLayer_Dns_Domain)
+ if err != nil {
+ return datatypes.SoftLayer_Dns_Domain{}, err
+ }
+
+ return softLayer_Dns_Domain, nil
+}
+
+func (sldds *softLayer_Dns_Domain_Service) GetObject(dnsId int) (datatypes.SoftLayer_Dns_Domain, error) {
+ objectMask := []string{
+ "id",
+ "name",
+ "serial",
+ "updateDate",
+ "account",
+ "managedResourceFlag",
+ "resourceRecordCount",
+ "resourceRecords",
+ "secondary",
+ }
+
+ response, errorCode, err := sldds.client.GetHttpClient().DoRawHttpRequestWithObjectMask(fmt.Sprintf("%s/%d/getObject.json", sldds.GetName(), dnsId), objectMask, "GET", new(bytes.Buffer))
+ if err != nil {
+ return datatypes.SoftLayer_Dns_Domain{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Dns_Domain#getObject, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Dns_Domain{}, errors.New(errorMessage)
+ }
+
+ dns_domain := datatypes.SoftLayer_Dns_Domain{}
+ err = json.Unmarshal(response, &dns_domain)
+ if err != nil {
+ return datatypes.SoftLayer_Dns_Domain{}, err
+ }
+
+ return dns_domain, nil
+}
+
+func (sldds *softLayer_Dns_Domain_Service) DeleteObject(dnsId int) (bool, error) {
+ response, errorCode, err := sldds.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d.json", sldds.GetName(), dnsId), "DELETE", new(bytes.Buffer))
+
+ if response_value := string(response[:]); response_value != "true" {
+ return false, errors.New(fmt.Sprintf("Failed to delete dns domain with id '%d', got '%s' as response from the API", dnsId, response_value))
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Dns_Domain#deleteObject, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ return true, err
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/services/softlayer_dns_domain_resource_record.go b/vendor/github.com/maximilien/softlayer-go/services/softlayer_dns_domain_resource_record.go
new file mode 100644
index 000000000000..7ca21611d127
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/services/softlayer_dns_domain_resource_record.go
@@ -0,0 +1,162 @@
+package services
+
+import (
+ "bytes"
+ "encoding/json"
+ "errors"
+ "fmt"
+
+ common "github.com/maximilien/softlayer-go/common"
+ datatypes "github.com/maximilien/softlayer-go/data_types"
+ softlayer "github.com/maximilien/softlayer-go/softlayer"
+)
+
+type SoftLayer_Dns_Domain_ResourceRecord_Service struct {
+ client softlayer.Client
+}
+
+func NewSoftLayer_Dns_Domain_ResourceRecord_Service(client softlayer.Client) *SoftLayer_Dns_Domain_ResourceRecord_Service {
+ return &SoftLayer_Dns_Domain_ResourceRecord_Service{
+ client: client,
+ }
+}
+
+func (sldr *SoftLayer_Dns_Domain_ResourceRecord_Service) GetName() string {
+ return "SoftLayer_Dns_Domain_ResourceRecord"
+}
+
+func (sldr *SoftLayer_Dns_Domain_ResourceRecord_Service) CreateObject(template datatypes.SoftLayer_Dns_Domain_ResourceRecord_Template) (datatypes.SoftLayer_Dns_Domain_ResourceRecord, error) {
+ parameters := datatypes.SoftLayer_Dns_Domain_ResourceRecord_Template_Parameters{
+ Parameters: []datatypes.SoftLayer_Dns_Domain_ResourceRecord_Template{
+ template,
+ },
+ }
+
+ requestBody, err := json.Marshal(parameters)
+ if err != nil {
+ return datatypes.SoftLayer_Dns_Domain_ResourceRecord{}, err
+ }
+
+ response, errorCode, err := sldr.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/createObject", sldr.getNameByType(template.Type)), "POST", bytes.NewBuffer(requestBody))
+ if err != nil {
+ return datatypes.SoftLayer_Dns_Domain_ResourceRecord{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Dns_Domain_ResourceRecord#createObject, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Dns_Domain_ResourceRecord{}, errors.New(errorMessage)
+ }
+
+ err = sldr.client.GetHttpClient().CheckForHttpResponseErrors(response)
+ if err != nil {
+ return datatypes.SoftLayer_Dns_Domain_ResourceRecord{}, err
+ }
+
+ dns_record := datatypes.SoftLayer_Dns_Domain_ResourceRecord{}
+ err = json.Unmarshal(response, &dns_record)
+ if err != nil {
+ return datatypes.SoftLayer_Dns_Domain_ResourceRecord{}, err
+ }
+
+ return dns_record, nil
+}
+
+func (sldr *SoftLayer_Dns_Domain_ResourceRecord_Service) GetObject(id int) (datatypes.SoftLayer_Dns_Domain_ResourceRecord, error) {
+ objectMask := []string{
+ "data",
+ "domainId",
+ "expire",
+ "host",
+ "id",
+ "minimum",
+ "mxPriority",
+ "refresh",
+ "responsiblePerson",
+ "retry",
+ "ttl",
+ "type",
+ "service",
+ "priority",
+ "protocol",
+ "port",
+ "weight",
+ }
+
+ response, errorCode, err := sldr.client.GetHttpClient().DoRawHttpRequestWithObjectMask(fmt.Sprintf("%s/%d/getObject.json", sldr.GetName(), id), objectMask, "GET", new(bytes.Buffer))
+ if err != nil {
+ return datatypes.SoftLayer_Dns_Domain_ResourceRecord{}, err
+ }
+
+ err = sldr.client.GetHttpClient().CheckForHttpResponseErrors(response)
+ if err != nil {
+ return datatypes.SoftLayer_Dns_Domain_ResourceRecord{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Dns_Domain_ResourceRecord#getObject, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Dns_Domain_ResourceRecord{}, errors.New(errorMessage)
+ }
+
+ dns_record := datatypes.SoftLayer_Dns_Domain_ResourceRecord{}
+ err = json.Unmarshal(response, &dns_record)
+ if err != nil {
+ return datatypes.SoftLayer_Dns_Domain_ResourceRecord{}, err
+ }
+
+ return dns_record, nil
+}
+
+func (sldr *SoftLayer_Dns_Domain_ResourceRecord_Service) DeleteObject(recordId int) (bool, error) {
+ response, errorCode, err := sldr.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d.json", sldr.GetName(), recordId), "DELETE", new(bytes.Buffer))
+
+ if res := string(response[:]); res != "true" {
+ return false, errors.New(fmt.Sprintf("Failed to delete DNS Domain Record with id '%d', got '%s' as response from the API.", recordId, res))
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Dns_Domain_ResourceRecord#deleteObject, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ return true, err
+}
+
+func (sldr *SoftLayer_Dns_Domain_ResourceRecord_Service) EditObject(recordId int, template datatypes.SoftLayer_Dns_Domain_ResourceRecord) (bool, error) {
+ parameters := datatypes.SoftLayer_Dns_Domain_ResourceRecord_Parameters{
+ Parameters: []datatypes.SoftLayer_Dns_Domain_ResourceRecord{
+ template,
+ },
+ }
+
+ requestBody, err := json.Marshal(parameters)
+ if err != nil {
+ return false, err
+ }
+
+ response, errorCode, err := sldr.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/editObject.json", sldr.getNameByType(template.Type), recordId), "POST", bytes.NewBuffer(requestBody))
+
+ if res := string(response[:]); res != "true" {
+ return false, errors.New(fmt.Sprintf("Failed to edit DNS Domain Record with id: %d, got '%s' as response from the API.", recordId, res))
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Dns_Domain_ResourceRecord#editObject, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ return true, err
+}
+
+//Private methods
+
+func (sldr *SoftLayer_Dns_Domain_ResourceRecord_Service) getNameByType(dnsType string) string {
+ switch dnsType {
+ case "srv":
+ // Currently only the SRV record type requires additional fields for Create and Update; all other
+ // record types use the basic default resource type, so there is no need for now to implement each record type as a separate service.
+ // https://sldn.softlayer.com/reference/datatypes/SoftLayer_Dns_Domain_ResourceRecord_SrvType
+ return "SoftLayer_Dns_Domain_ResourceRecord_SrvType"
+ default:
+ return "SoftLayer_Dns_Domain_ResourceRecord"
+ }
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/services/softlayer_hardware.go b/vendor/github.com/maximilien/softlayer-go/services/softlayer_hardware.go
new file mode 100644
index 000000000000..3b6822b78325
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/services/softlayer_hardware.go
@@ -0,0 +1,101 @@
+package services
+
+import (
+ "bytes"
+ "encoding/json"
+ "errors"
+ "fmt"
+
+ common "github.com/maximilien/softlayer-go/common"
+ datatypes "github.com/maximilien/softlayer-go/data_types"
+ softlayer "github.com/maximilien/softlayer-go/softlayer"
+)
+
+type softLayer_Hardware_Service struct {
+ client softlayer.Client
+}
+
+func NewSoftLayer_Hardware_Service(client softlayer.Client) *softLayer_Hardware_Service {
+ return &softLayer_Hardware_Service{
+ client: client,
+ }
+}
+
+func (slhs *softLayer_Hardware_Service) GetName() string {
+ return "SoftLayer_Hardware"
+}
+
+func (slhs *softLayer_Hardware_Service) CreateObject(template datatypes.SoftLayer_Hardware_Template) (datatypes.SoftLayer_Hardware, error) {
+ parameters := datatypes.SoftLayer_Hardware_Template_Parameters{
+ Parameters: []datatypes.SoftLayer_Hardware_Template{
+ template,
+ },
+ }
+
+ requestBody, err := json.Marshal(parameters)
+ if err != nil {
+ return datatypes.SoftLayer_Hardware{}, err
+ }
+
+ response, errorCode, err := slhs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s.json", slhs.GetName()), "POST", bytes.NewBuffer(requestBody))
+ if err != nil {
+ return datatypes.SoftLayer_Hardware{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Hardware#createObject, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Hardware{}, errors.New(errorMessage)
+ }
+
+ err = slhs.client.GetHttpClient().CheckForHttpResponseErrors(response)
+ if err != nil {
+ return datatypes.SoftLayer_Hardware{}, err
+ }
+
+ bare_metal_server := datatypes.SoftLayer_Hardware{}
+ err = json.Unmarshal(response, &bare_metal_server)
+ if err != nil {
+ return datatypes.SoftLayer_Hardware{}, err
+ }
+
+ return bare_metal_server, nil
+}
+
+func (slhs *softLayer_Hardware_Service) GetObject(id string) (datatypes.SoftLayer_Hardware, error) {
+
+ objectMask := []string{
+ "bareMetalInstanceFlag",
+ "domain",
+ "hostname",
+ "id",
+ "hardwareStatusId",
+ "provisionDate",
+ "globalIdentifier",
+ "primaryIpAddress",
+ "operatingSystem.passwords.password",
+ "operatingSystem.passwords.username",
+ }
+
+ response, errorCode, err := slhs.client.GetHttpClient().DoRawHttpRequestWithObjectMask(fmt.Sprintf("%s/%s.json", slhs.GetName(), id), objectMask, "GET", new(bytes.Buffer))
+ if err != nil {
+ return datatypes.SoftLayer_Hardware{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Hardware#getObject, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Hardware{}, errors.New(errorMessage)
+ }
+
+ err = slhs.client.GetHttpClient().CheckForHttpResponseErrors(response)
+ if err != nil {
+ return datatypes.SoftLayer_Hardware{}, err
+ }
+
+ bare_metal_server := datatypes.SoftLayer_Hardware{}
+ err = json.Unmarshal(response, &bare_metal_server)
+ if err != nil {
+ return datatypes.SoftLayer_Hardware{}, err
+ }
+
+ return bare_metal_server, nil
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/services/softlayer_network_storage.go b/vendor/github.com/maximilien/softlayer-go/services/softlayer_network_storage.go
new file mode 100644
index 000000000000..0319c3eec810
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/services/softlayer_network_storage.go
@@ -0,0 +1,347 @@
+package services
+
+import (
+ "bytes"
+ "encoding/json"
+ "errors"
+ "fmt"
+ "strconv"
+ "time"
+
+ common "github.com/maximilien/softlayer-go/common"
+ datatypes "github.com/maximilien/softlayer-go/data_types"
+ softlayer "github.com/maximilien/softlayer-go/softlayer"
+)
+
+const (
+ NETWORK_PERFORMANCE_STORAGE_PACKAGE_ID = 222
+ BLOCK_ITEM_PRICE_ID = 40678 // file or block item price id
+ CREATE_ISCSI_VOLUME_MAX_RETRY_TIME = 60
+ CREATE_ISCSI_VOLUME_CHECK_INTERVAL = 5 // seconds
+)
+
+type softLayer_Network_Storage_Service struct {
+ client softlayer.Client
+}
+
+func NewSoftLayer_Network_Storage_Service(client softlayer.Client) *softLayer_Network_Storage_Service {
+ return &softLayer_Network_Storage_Service{
+ client: client,
+ }
+}
+
+func (slns *softLayer_Network_Storage_Service) GetName() string {
+ return "SoftLayer_Network_Storage"
+}
+
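+// CreateIscsiVolume orders a performance iSCSI volume via SoftLayer_Product_Order and then
+// polls the account's iSCSI network storage (up to CREATE_ISCSI_VOLUME_MAX_RETRY_TIME attempts,
+// CREATE_ISCSI_VOLUME_CHECK_INTERVAL seconds apart) until the volume for that order shows up.
+// A rough usage sketch, with a hypothetical datacenter name and error handling elided:
+//
+//   slns := NewSoftLayer_Network_Storage_Service(client) // client is a configured softlayer.Client
+//   volume, _ := slns.CreateIscsiVolume(20, "ams01")      // 20 GB volume in datacenter "ams01"
+//   fmt.Println(volume.Id)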
+func (slns *softLayer_Network_Storage_Service) CreateIscsiVolume(size int, location string) (datatypes.SoftLayer_Network_Storage, error) {
+ if size < 0 {
+ return datatypes.SoftLayer_Network_Storage{}, errors.New("Cannot create negative sized volumes")
+ }
+
+ sizeItemPriceId, err := slns.getIscsiVolumeItemIdBasedOnSize(size)
+ if err != nil {
+ return datatypes.SoftLayer_Network_Storage{}, err
+ }
+
+ iopsItemPriceId := slns.getPerformanceStorageItemPriceIdByIops(size)
+
+ order := datatypes.SoftLayer_Container_Product_Order_Network_PerformanceStorage_Iscsi{
+ Location: location,
+ ComplexType: "SoftLayer_Container_Product_Order_Network_PerformanceStorage_Iscsi",
+ OsFormatType: datatypes.SoftLayer_Network_Storage_Iscsi_OS_Type{
+ Id: 12,
+ KeyName: "LINUX",
+ },
+ Prices: []datatypes.SoftLayer_Product_Item_Price{
+ datatypes.SoftLayer_Product_Item_Price{
+ Id: sizeItemPriceId,
+ },
+ datatypes.SoftLayer_Product_Item_Price{
+ Id: iopsItemPriceId,
+ },
+ datatypes.SoftLayer_Product_Item_Price{
+ Id: BLOCK_ITEM_PRICE_ID,
+ },
+ },
+ PackageId: NETWORK_PERFORMANCE_STORAGE_PACKAGE_ID,
+ Quantity: 1,
+ }
+
+ productOrderService, err := slns.client.GetSoftLayer_Product_Order_Service()
+ if err != nil {
+ return datatypes.SoftLayer_Network_Storage{}, err
+ }
+
+ receipt, err := productOrderService.PlaceContainerOrderNetworkPerformanceStorageIscsi(order)
+ if err != nil {
+ return datatypes.SoftLayer_Network_Storage{}, err
+ }
+
+ var iscsiStorage datatypes.SoftLayer_Network_Storage
+
+ for i := 0; i < CREATE_ISCSI_VOLUME_MAX_RETRY_TIME; i++ {
+ iscsiStorage, err = slns.findIscsiVolumeId(receipt.OrderId)
+ if err == nil {
+ break
+ } else if i == CREATE_ISCSI_VOLUME_MAX_RETRY_TIME-1 {
+ return datatypes.SoftLayer_Network_Storage{}, err
+ }
+
+ time.Sleep(CREATE_ISCSI_VOLUME_CHECK_INTERVAL * time.Second)
+ }
+
+ return iscsiStorage, nil
+}
+
+func (slns *softLayer_Network_Storage_Service) DeleteObject(volumeId int) (bool, error) {
+ response, errorCode, err := slns.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d.json", slns.GetName(), volumeId), "DELETE", new(bytes.Buffer))
+
+ if err != nil {
+ return false, err
+ }
+
+ if res := string(response[:]); res != "true" {
+ return false, errors.New(fmt.Sprintf("Failed to delete volume with id '%d', got '%s' as response from the API.", volumeId, res))
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Network_Storage#deleteObject, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ return true, err
+}
+
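+// DeleteIscsiVolume cancels a volume indirectly: it looks up the volume's billing item and asks
+// SoftLayer_Billing_Item#cancelService to cancel it. Note that the immediateCancellationFlag
+// argument is currently not used by this implementation.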
+func (slns *softLayer_Network_Storage_Service) DeleteIscsiVolume(volumeId int, immediateCancellationFlag bool) error {
+
+ billingItem, err := slns.GetBillingItem(volumeId)
+ if err != nil {
+ return err
+ }
+
+ if billingItem.Id > 0 {
+ billingItemService, err := slns.client.GetSoftLayer_Billing_Item_Service()
+ if err != nil {
+ return err
+ }
+
+ deleted, err := billingItemService.CancelService(billingItem.Id)
+ if err != nil {
+ return err
+ }
+
+ if deleted {
+ return nil
+ }
+ }
+
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Network_Storage_Service#deleteIscsiVolume with id: '%d'", volumeId)
+
+ return errors.New(errorMessage)
+}
+
+func (slns *softLayer_Network_Storage_Service) GetIscsiVolume(volumeId int) (datatypes.SoftLayer_Network_Storage, error) {
+ objectMask := []string{
+ "accountId",
+ "capacityGb",
+ "createDate",
+ "guestId",
+ "hardwareId",
+ "hostId",
+ "id",
+ "nasType",
+ "notes",
+ "Password",
+ "serviceProviderId",
+ "upgradableFlag",
+ "username",
+ "billingItem.id",
+ "billingItem.orderItem.order.id",
+ "lunId",
+ "serviceResourceBackendIpAddress",
+ }
+
+ response, errorCode, err := slns.client.GetHttpClient().DoRawHttpRequestWithObjectMask(fmt.Sprintf("%s/%d/getObject.json", slns.GetName(), volumeId), objectMask, "GET", new(bytes.Buffer))
+
+ if err != nil {
+ return datatypes.SoftLayer_Network_Storage{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Network_Storage#getObject, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Network_Storage{}, errors.New(errorMessage)
+ }
+
+ volume := datatypes.SoftLayer_Network_Storage{}
+ err = json.Unmarshal(response, &volume)
+ if err != nil {
+ return datatypes.SoftLayer_Network_Storage{}, err
+ }
+
+ return volume, nil
+}
+
+func (slns *softLayer_Network_Storage_Service) GetBillingItem(volumeId int) (datatypes.SoftLayer_Billing_Item, error) {
+
+ response, errorCode, err := slns.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/getBillingItem.json", slns.GetName(), volumeId), "GET", new(bytes.Buffer))
+ if err != nil {
+ return datatypes.SoftLayer_Billing_Item{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Network_Storage#getBillingItem, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Billing_Item{}, errors.New(errorMessage)
+ }
+
+ billingItem := datatypes.SoftLayer_Billing_Item{}
+ err = json.Unmarshal(response, &billingItem)
+ if err != nil {
+ return datatypes.SoftLayer_Billing_Item{}, err
+ }
+
+ return billingItem, nil
+}
+
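+// HasAllowedVirtualGuest reports whether access to the volume has been granted to the given
+// virtual guest, by filtering getAllowedVirtualGuests on the guest id and checking whether the
+// filtered result is non-empty.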
+func (slns *softLayer_Network_Storage_Service) HasAllowedVirtualGuest(volumeId int, vmId int) (bool, error) {
+ filter := `{"allowedVirtualGuests":{"id":{"operation":"` + strconv.Itoa(vmId) + `"}}}`
+ response, errorCode, err := slns.client.GetHttpClient().DoRawHttpRequestWithObjectFilterAndObjectMask(fmt.Sprintf("%s/%d/getAllowedVirtualGuests.json", slns.GetName(), volumeId), []string{"id"}, filter, "GET", new(bytes.Buffer))
+
+ if err != nil {
+ return false, errors.New(fmt.Sprintf("Cannot check authentication for volume %d in vm %d", volumeId, vmId))
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Network_Storage#hasAllowedVirtualGuest, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ virtualGuest := []datatypes.SoftLayer_Virtual_Guest{}
+ err = json.Unmarshal(response, &virtualGuest)
+ if err != nil {
+ return false, errors.New(fmt.Sprintf("Failed to unmarshal response of checking authentication for volume %d in vm %d", volumeId, vmId))
+ }
+
+ if len(virtualGuest) > 0 {
+ return true, nil
+ }
+
+ return false, nil
+}
+
+func (slns *softLayer_Network_Storage_Service) AttachIscsiVolume(virtualGuest datatypes.SoftLayer_Virtual_Guest, volumeId int) (bool, error) {
+ parameters := datatypes.SoftLayer_Virtual_Guest_Parameters{
+ Parameters: []datatypes.SoftLayer_Virtual_Guest{
+ virtualGuest,
+ },
+ }
+ requestBody, err := json.Marshal(parameters)
+ if err != nil {
+ return false, err
+ }
+
+ resp, errorCode, err := slns.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/allowAccessFromVirtualGuest.json", slns.GetName(), volumeId), "PUT", bytes.NewBuffer(requestBody))
+
+ if err != nil {
+ return false, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Network_Storage#attachIscsiVolume, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ allowable, err := strconv.ParseBool(string(resp[:]))
+ if err != nil {
+ return false, nil
+ }
+
+ return allowable, nil
+}
+
+func (slns *softLayer_Network_Storage_Service) DetachIscsiVolume(virtualGuest datatypes.SoftLayer_Virtual_Guest, volumeId int) error {
+ parameters := datatypes.SoftLayer_Virtual_Guest_Parameters{
+ Parameters: []datatypes.SoftLayer_Virtual_Guest{
+ virtualGuest,
+ },
+ }
+ requestBody, err := json.Marshal(parameters)
+ if err != nil {
+ return err
+ }
+
+ _, errorCode, err := slns.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/removeAccessFromVirtualGuest.json", slns.GetName(), volumeId), "PUT", bytes.NewBuffer(requestBody))
+ if err != nil {
+ return err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Network_Storage#removeAccessFromVirtualGuest, HTTP error code: '%d'", errorCode)
+ return errors.New(errorMessage)
+ }
+
+ return nil
+}
+
+// Private methods
+
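+// findIscsiVolumeId resolves the volume created by a placed order: it filters the account's
+// iSCSI network storage on billingItem.orderItem.order.id and expects exactly one match.
+// CreateIscsiVolume calls it in a retry loop because the volume may not be visible
+// immediately after the order has been placed.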
+func (slns *softLayer_Network_Storage_Service) findIscsiVolumeId(orderId int) (datatypes.SoftLayer_Network_Storage, error) {
+ objectFilter := `{"iscsiNetworkStorage":{"billingItem":{"orderItem":{"order":{"id":{"operation":` + strconv.Itoa(orderId) + `}}}}}}`
+
+ accountService, err := slns.client.GetSoftLayer_Account_Service()
+ if err != nil {
+ return datatypes.SoftLayer_Network_Storage{}, err
+ }
+
+ iscsiStorages, err := accountService.GetIscsiNetworkStorageWithFilter(objectFilter)
+ if err != nil {
+ return datatypes.SoftLayer_Network_Storage{}, err
+ }
+
+ if len(iscsiStorages) == 1 {
+ return iscsiStorages[0], nil
+ }
+
+ return datatypes.SoftLayer_Network_Storage{}, errors.New(fmt.Sprintf("Cannot find a performance storage (iSCSI volume) with order id %d", orderId))
+}
+
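+// getIscsiVolumeItemIdBasedOnSize picks the price item for the requested capacity from the
+// performance storage package, preferring entries whose LocationGroupId is 0 (the standard,
+// location-independent prices).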
+func (slns *softLayer_Network_Storage_Service) getIscsiVolumeItemIdBasedOnSize(size int) (int, error) {
+ productPackageService, err := slns.client.GetSoftLayer_Product_Package_Service()
+ if err != nil {
+ return 0, err
+ }
+
+ itemPrices, err := productPackageService.GetItemPricesBySize(NETWORK_PERFORMANCE_STORAGE_PACKAGE_ID, size)
+ if err != nil {
+ return 0, err
+ }
+
+ var currentItemId int
+
+ if len(itemPrices) > 0 {
+ for _, itemPrice := range itemPrices {
+ if itemPrice.LocationGroupId == 0 {
+ currentItemId = itemPrice.Id
+ }
+ }
+ }
+
+ if currentItemId == 0 {
+ return 0, errors.New(fmt.Sprintf("No suitable performance storage (iSCSI volume) price found for size %d", size))
+ }
+
+ return currentItemId, nil
+}
+
+func (slns *softLayer_Network_Storage_Service) getPerformanceStorageItemPriceIdByIops(size int) int {
+ switch size {
+ case 20:
+ return 40838 // 500 IOPS
+ case 40:
+ return 40988 // 1000 IOPS
+ case 80:
+ return 41288 // 2000 IOPS
+ default:
+ return 41788 // 3000 IOPS
+ }
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/services/softlayer_network_storage_allowed_host.go b/vendor/github.com/maximilien/softlayer-go/services/softlayer_network_storage_allowed_host.go
new file mode 100644
index 000000000000..ceff4113d9d0
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/services/softlayer_network_storage_allowed_host.go
@@ -0,0 +1,46 @@
+package services
+
+import (
+ "bytes"
+ "encoding/json"
+ "errors"
+ "fmt"
+
+ common "github.com/maximilien/softlayer-go/common"
+ datatypes "github.com/maximilien/softlayer-go/data_types"
+ softlayer "github.com/maximilien/softlayer-go/softlayer"
+)
+
+type softLayer_Network_Storage_Allowed_Host_Service struct {
+ client softlayer.Client
+}
+
+func NewSoftLayer_Network_Storage_Allowed_Host_Service(client softlayer.Client) *softLayer_Network_Storage_Allowed_Host_Service {
+ return &softLayer_Network_Storage_Allowed_Host_Service{
+ client: client,
+ }
+}
+
+func (slns *softLayer_Network_Storage_Allowed_Host_Service) GetName() string {
+ return "SoftLayer_Network_Storage_Allowed_Host"
+}
+
+func (slns *softLayer_Network_Storage_Allowed_Host_Service) GetCredential(allowedHostId int) (datatypes.SoftLayer_Network_Storage_Credential, error) {
+ response, errorCode, err := slns.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/getCredential.json", slns.GetName(), allowedHostId), "GET", new(bytes.Buffer))
+ if err != nil {
+ return datatypes.SoftLayer_Network_Storage_Credential{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Network_Storage_Allowed_Host#getCredential, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Network_Storage_Credential{}, errors.New(errorMessage)
+ }
+
+ credential := datatypes.SoftLayer_Network_Storage_Credential{}
+ err = json.Unmarshal(response, &credential)
+ if err != nil {
+ return datatypes.SoftLayer_Network_Storage_Credential{}, err
+ }
+
+ return credential, nil
+}
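The allowed-host service exposes only GetCredential, which is typically paired with SoftLayer_Virtual_Guest#getAllowedHost (added further down in this diff) to recover the CHAP credentials for an attached iSCSI volume. A hedged sketch, assuming the client accessors and the allowed host's `Id` field exist under these names:

```go
package example

import (
	"fmt"

	softlayer "github.com/maximilien/softlayer-go/softlayer"
)

// printIscsiCredential looks up the allowed-host record tied to a virtual
// guest and prints the credential object returned by the API.
func printIscsiCredential(client softlayer.Client, instanceId int) error {
	guestService, err := client.GetSoftLayer_Virtual_Guest_Service()
	if err != nil {
		return err
	}

	allowedHost, err := guestService.GetAllowedHost(instanceId)
	if err != nil {
		return err
	}

	allowedHostService, err := client.GetSoftLayer_Network_Storage_Allowed_Host_Service()
	if err != nil {
		return err
	}

	credential, err := allowedHostService.GetCredential(allowedHost.Id)
	if err != nil {
		return err
	}

	fmt.Printf("iSCSI credential: %+v\n", credential)
	return nil
}
```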
diff --git a/vendor/github.com/maximilien/softlayer-go/services/softlayer_product_order.go b/vendor/github.com/maximilien/softlayer-go/services/softlayer_product_order.go
new file mode 100644
index 000000000000..6fbbe61d4421
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/services/softlayer_product_order.go
@@ -0,0 +1,119 @@
+package services
+
+import (
+ "bytes"
+ "encoding/json"
+ "errors"
+ "fmt"
+
+ common "github.com/maximilien/softlayer-go/common"
+ datatypes "github.com/maximilien/softlayer-go/data_types"
+ softlayer "github.com/maximilien/softlayer-go/softlayer"
+)
+
+type softLayer_Product_Order_Service struct {
+ client softlayer.Client
+}
+
+func NewSoftLayer_Product_Order_Service(client softlayer.Client) *softLayer_Product_Order_Service {
+ return &softLayer_Product_Order_Service{
+ client: client,
+ }
+}
+
+func (slpo *softLayer_Product_Order_Service) GetName() string {
+ return "SoftLayer_Product_Order"
+}
+
+func (slpo *softLayer_Product_Order_Service) PlaceOrder(order datatypes.SoftLayer_Container_Product_Order) (datatypes.SoftLayer_Container_Product_Order_Receipt, error) {
+ parameters := datatypes.SoftLayer_Container_Product_Order_Parameters{
+ Parameters: []datatypes.SoftLayer_Container_Product_Order{
+ order,
+ },
+ }
+
+ requestBody, err := json.Marshal(parameters)
+ if err != nil {
+ return datatypes.SoftLayer_Container_Product_Order_Receipt{}, err
+ }
+
+ responseBytes, errorCode, err := slpo.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/placeOrder.json", slpo.GetName()), "POST", bytes.NewBuffer(requestBody))
+ if err != nil {
+ return datatypes.SoftLayer_Container_Product_Order_Receipt{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Account#getAccountStatus, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Container_Product_Order_Receipt{}, errors.New(errorMessage)
+ }
+
+ receipt := datatypes.SoftLayer_Container_Product_Order_Receipt{}
+ err = json.Unmarshal(responseBytes, &receipt)
+ if err != nil {
+ return datatypes.SoftLayer_Container_Product_Order_Receipt{}, err
+ }
+
+ return receipt, nil
+}
+
+func (slpo *softLayer_Product_Order_Service) PlaceContainerOrderNetworkPerformanceStorageIscsi(order datatypes.SoftLayer_Container_Product_Order_Network_PerformanceStorage_Iscsi) (datatypes.SoftLayer_Container_Product_Order_Receipt, error) {
+ parameters := datatypes.SoftLayer_Container_Product_Order_Network_PerformanceStorage_Iscsi_Parameters{
+ Parameters: []datatypes.SoftLayer_Container_Product_Order_Network_PerformanceStorage_Iscsi{
+ order,
+ },
+ }
+
+ requestBody, err := json.Marshal(parameters)
+ if err != nil {
+ return datatypes.SoftLayer_Container_Product_Order_Receipt{}, err
+ }
+
+ responseBytes, errorCode, err := slpo.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/placeOrder.json", slpo.GetName()), "POST", bytes.NewBuffer(requestBody))
+ if err != nil {
+ return datatypes.SoftLayer_Container_Product_Order_Receipt{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Product_Order#placeContainerOrderNetworkPerformanceStorageIscsi, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Container_Product_Order_Receipt{}, errors.New(errorMessage)
+ }
+
+ receipt := datatypes.SoftLayer_Container_Product_Order_Receipt{}
+ err = json.Unmarshal(responseBytes, &receipt)
+ if err != nil {
+ return datatypes.SoftLayer_Container_Product_Order_Receipt{}, err
+ }
+
+ return receipt, nil
+}
+
+func (slpo *softLayer_Product_Order_Service) PlaceContainerOrderVirtualGuestUpgrade(order datatypes.SoftLayer_Container_Product_Order_Virtual_Guest_Upgrade) (datatypes.SoftLayer_Container_Product_Order_Receipt, error) {
+ parameters := datatypes.SoftLayer_Container_Product_Order_Virtual_Guest_Upgrade_Parameters{
+ Parameters: []datatypes.SoftLayer_Container_Product_Order_Virtual_Guest_Upgrade{
+ order,
+ },
+ }
+
+ requestBody, err := json.Marshal(parameters)
+ if err != nil {
+ return datatypes.SoftLayer_Container_Product_Order_Receipt{}, err
+ }
+
+ responseBytes, errorCode, err := slpo.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/placeOrder.json", slpo.GetName()), "POST", bytes.NewBuffer(requestBody))
+ if err != nil {
+ return datatypes.SoftLayer_Container_Product_Order_Receipt{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Product_Order#placeOrder, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Container_Product_Order_Receipt{}, errors.New(errorMessage)
+ }
+
+ receipt := datatypes.SoftLayer_Container_Product_Order_Receipt{}
+ err = json.Unmarshal(responseBytes, &receipt)
+ if err != nil {
+ return datatypes.SoftLayer_Container_Product_Order_Receipt{}, err
+ }
+
+ return receipt, nil
+}
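All three order methods follow the same shape: wrap the container in a Parameters envelope, POST it to placeOrder.json, and unmarshal the receipt. A caller-side sketch, assuming the order container is assembled elsewhere (for example by the virtual-guest upgrade path later in this diff):

```go
package example

import (
	datatypes "github.com/maximilien/softlayer-go/data_types"
	softlayer "github.com/maximilien/softlayer-go/softlayer"
)

// submitOrder sends an already-assembled order container and returns the
// receipt produced by SoftLayer_Product_Order#placeOrder.
func submitOrder(client softlayer.Client, order datatypes.SoftLayer_Container_Product_Order) (datatypes.SoftLayer_Container_Product_Order_Receipt, error) {
	orderService, err := client.GetSoftLayer_Product_Order_Service()
	if err != nil {
		return datatypes.SoftLayer_Container_Product_Order_Receipt{}, err
	}

	return orderService.PlaceOrder(order)
}
```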
diff --git a/vendor/github.com/maximilien/softlayer-go/services/softlayer_product_package.go b/vendor/github.com/maximilien/softlayer-go/services/softlayer_product_package.go
new file mode 100644
index 000000000000..4d9af04def18
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/services/softlayer_product_package.go
@@ -0,0 +1,174 @@
+package services
+
+import (
+ "bytes"
+ "encoding/json"
+ "errors"
+ "fmt"
+ "strconv"
+ "strings"
+
+ common "github.com/maximilien/softlayer-go/common"
+ datatypes "github.com/maximilien/softlayer-go/data_types"
+ softlayer "github.com/maximilien/softlayer-go/softlayer"
+)
+
+const (
+ OUTLET_PACKAGE = "OUTLET"
+)
+
+type softLayer_Product_Package_Service struct {
+ client softlayer.Client
+}
+
+func NewSoftLayer_Product_Package_Service(client softlayer.Client) *softLayer_Product_Package_Service {
+ return &softLayer_Product_Package_Service{
+ client: client,
+ }
+}
+
+func (slpp *softLayer_Product_Package_Service) GetName() string {
+ return "SoftLayer_Product_Package"
+}
+
+func (slpp *softLayer_Product_Package_Service) GetItemPrices(packageId int) ([]datatypes.SoftLayer_Product_Item_Price, error) {
+ response, errorCode, err := slpp.client.GetHttpClient().DoRawHttpRequestWithObjectMask(fmt.Sprintf("%s/%d/getItemPrices.json", slpp.GetName(), packageId), []string{"id", "item.id", "item.description", "item.capacity"}, "GET", new(bytes.Buffer))
+ if err != nil {
+ return []datatypes.SoftLayer_Product_Item_Price{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Product_Package#getItemPrices, HTTP error code: '%d'", errorCode)
+ return []datatypes.SoftLayer_Product_Item_Price{}, errors.New(errorMessage)
+ }
+
+ itemPrices := []datatypes.SoftLayer_Product_Item_Price{}
+ err = json.Unmarshal(response, &itemPrices)
+ if err != nil {
+ return []datatypes.SoftLayer_Product_Item_Price{}, err
+ }
+
+ return itemPrices, nil
+}
+
+func (slpp *softLayer_Product_Package_Service) GetItemPricesBySize(packageId int, size int) ([]datatypes.SoftLayer_Product_Item_Price, error) {
+ keyName := strconv.Itoa(size) + "_GB_PERFORMANCE_STORAGE_SPACE"
+ filter := string(`{"itemPrices":{"item":{"keyName":{"operation":"` + keyName + `"}}}}`)
+
+ response, errorCode, err := slpp.client.GetHttpClient().DoRawHttpRequestWithObjectFilterAndObjectMask(fmt.Sprintf("%s/%d/getItemPrices.json", slpp.GetName(), packageId), []string{"id", "locationGroupId", "item.id", "item.keyName", "item.units", "item.description", "item.capacity"}, filter, "GET", new(bytes.Buffer))
+ if err != nil {
+ return []datatypes.SoftLayer_Product_Item_Price{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Product_Package#getItemsPricesBySize, HTTP error code: '%d'", errorCode)
+ return []datatypes.SoftLayer_Product_Item_Price{}, errors.New(errorMessage)
+ }
+
+ itemPrices := []datatypes.SoftLayer_Product_Item_Price{}
+ err = json.Unmarshal(response, &itemPrices)
+ if err != nil {
+ return []datatypes.SoftLayer_Product_Item_Price{}, err
+ }
+
+ return itemPrices, nil
+}
+
+func (slpp *softLayer_Product_Package_Service) GetItemsByType(packageType string) ([]datatypes.SoftLayer_Product_Item, error) {
+ productPackage, err := slpp.GetOnePackageByType(packageType)
+ if err != nil {
+ return []datatypes.SoftLayer_Product_Item{}, err
+ }
+
+ return slpp.GetItems(productPackage.Id)
+}
+
+func (slpp *softLayer_Product_Package_Service) GetItems(packageId int) ([]datatypes.SoftLayer_Product_Item, error) {
+ objectMasks := []string{
+ "id",
+ "capacity",
+ "description",
+ "prices.id",
+ "prices.categories.id",
+ "prices.categories.name",
+ }
+
+ response, errorCode, err := slpp.client.GetHttpClient().DoRawHttpRequestWithObjectMask(fmt.Sprintf("%s/%d/getItems.json", slpp.GetName(), packageId), objectMasks, "GET", new(bytes.Buffer))
+ if err != nil {
+ return []datatypes.SoftLayer_Product_Item{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Product_Package#getItems, HTTP error code: '%d'", errorCode)
+ return []datatypes.SoftLayer_Product_Item{}, errors.New(errorMessage)
+ }
+
+ productItems := []datatypes.SoftLayer_Product_Item{}
+ err = json.Unmarshal(response, &productItems)
+ if err != nil {
+ return []datatypes.SoftLayer_Product_Item{}, err
+ }
+
+ return productItems, nil
+}
+
+func (slpp *softLayer_Product_Package_Service) GetOnePackageByType(packageType string) (datatypes.Softlayer_Product_Package, error) {
+ productPackages, err := slpp.GetPackagesByType(packageType)
+ if err != nil {
+ return datatypes.Softlayer_Product_Package{}, err
+ }
+
+ if len(productPackages) == 0 {
+ return datatypes.Softlayer_Product_Package{}, errors.New(fmt.Sprintf("No packages available for type '%s'.", packageType))
+ }
+
+ return productPackages[0], nil
+}
+
+func (slpp *softLayer_Product_Package_Service) GetPackagesByType(packageType string) ([]datatypes.Softlayer_Product_Package, error) {
+ objectMasks := []string{
+ "id",
+ "name",
+ "description",
+ "isActive",
+ "type.keyName",
+ }
+
+ filterObject := string(`{"type":{"keyName":{"operation":"` + packageType + `"}}}`)
+
+ response, errorCode, err := slpp.client.GetHttpClient().DoRawHttpRequestWithObjectFilterAndObjectMask(fmt.Sprintf("%s/getAllObjects.json", slpp.GetName()), objectMasks, filterObject, "GET", new(bytes.Buffer))
+ if err != nil {
+ return []datatypes.Softlayer_Product_Package{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Product_Package#getPackagesByType, HTTP error code: '%d'", errorCode)
+ return []datatypes.Softlayer_Product_Package{}, errors.New(errorMessage)
+ }
+
+ productPackages := []*datatypes.Softlayer_Product_Package{}
+ err = json.Unmarshal(response, &productPackages)
+ if err != nil {
+ return []datatypes.Softlayer_Product_Package{}, err
+ }
+
+ // Remove packages designated as OUTLET
+ // See method "#get_packages_of_type" in SoftLayer Python client for details: https://github.com/softlayer/softlayer-python/blob/master/SoftLayer/managers/ordering.py
+ nonOutletPackages := slpp.filterProducts(productPackages, func(productPackage *datatypes.Softlayer_Product_Package) bool {
+ return !strings.Contains(productPackage.Description, OUTLET_PACKAGE) && !strings.Contains(productPackage.Name, OUTLET_PACKAGE)
+ })
+
+ return nonOutletPackages, nil
+}
+
+// Private methods
+
+func (slpp *softLayer_Product_Package_Service) filterProducts(array []*datatypes.Softlayer_Product_Package, predicate func(*datatypes.Softlayer_Product_Package) bool) []datatypes.Softlayer_Product_Package {
+ filtered := make([]datatypes.Softlayer_Product_Package, 0)
+ for _, element := range array {
+ if predicate(element) {
+ filtered = append(filtered, *element)
+ }
+ }
+ return filtered
+}
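GetItemPricesBySize returns every matching price, including datacenter-specific ones; callers that want the standard price pick the entry whose LocationGroupId is 0, as the network-storage service does above. A sketch under that assumption, where packageId would normally be the NETWORK_PERFORMANCE_STORAGE_PACKAGE_ID constant referenced earlier in this vendor tree:

```go
package example

import (
	"fmt"

	softlayer "github.com/maximilien/softlayer-go/softlayer"
)

// findStandardPriceId resolves the non-datacenter-specific item price for a
// performance-storage volume of sizeGB gigabytes.
func findStandardPriceId(client softlayer.Client, packageId, sizeGB int) (int, error) {
	packageService, err := client.GetSoftLayer_Product_Package_Service()
	if err != nil {
		return 0, err
	}

	prices, err := packageService.GetItemPricesBySize(packageId, sizeGB)
	if err != nil {
		return 0, err
	}

	// Prices tied to a location group are datacenter-specific; keep the
	// standard one, mirroring getIscsiVolumeItemIdBasedOnSize above.
	for _, price := range prices {
		if price.LocationGroupId == 0 {
			return price.Id, nil
		}
	}

	return 0, fmt.Errorf("no standard item price found for %d GB", sizeGB)
}
```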
diff --git a/vendor/github.com/maximilien/softlayer-go/services/softlayer_security_ssh_key.go b/vendor/github.com/maximilien/softlayer-go/services/softlayer_security_ssh_key.go
new file mode 100644
index 000000000000..30cb5f79db2b
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/services/softlayer_security_ssh_key.go
@@ -0,0 +1,159 @@
+package services
+
+import (
+ "bytes"
+ "encoding/json"
+ "errors"
+ "fmt"
+
+ common "github.com/maximilien/softlayer-go/common"
+ datatypes "github.com/maximilien/softlayer-go/data_types"
+ softlayer "github.com/maximilien/softlayer-go/softlayer"
+)
+
+type softLayer_Security_Ssh_Key_Service struct {
+ client softlayer.Client
+}
+
+func NewSoftLayer_Security_Ssh_Key_Service(client softlayer.Client) *softLayer_Security_Ssh_Key_Service {
+ return &softLayer_Security_Ssh_Key_Service{
+ client: client,
+ }
+}
+
+func (slssks *softLayer_Security_Ssh_Key_Service) GetName() string {
+ return "SoftLayer_Security_Ssh_Key"
+}
+
+func (slssks *softLayer_Security_Ssh_Key_Service) CreateObject(template datatypes.SoftLayer_Security_Ssh_Key) (datatypes.SoftLayer_Security_Ssh_Key, error) {
+ parameters := datatypes.SoftLayer_Shh_Key_Parameters{
+ Parameters: []datatypes.SoftLayer_Security_Ssh_Key{
+ template,
+ },
+ }
+
+ requestBody, err := json.Marshal(parameters)
+ if err != nil {
+ return datatypes.SoftLayer_Security_Ssh_Key{}, err
+ }
+
+ data, errorCode, err := slssks.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/createObject", slssks.GetName()), "POST", bytes.NewBuffer(requestBody))
+ if err != nil {
+ return datatypes.SoftLayer_Security_Ssh_Key{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Security_Ssh_Key#createObject, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Security_Ssh_Key{}, errors.New(errorMessage)
+ }
+
+ err = slssks.client.GetHttpClient().CheckForHttpResponseErrors(data)
+ if err != nil {
+ return datatypes.SoftLayer_Security_Ssh_Key{}, err
+ }
+
+ softLayer_Ssh_Key := datatypes.SoftLayer_Security_Ssh_Key{}
+ err = json.Unmarshal(data, &softLayer_Ssh_Key)
+ if err != nil {
+ return datatypes.SoftLayer_Security_Ssh_Key{}, err
+ }
+
+ return softLayer_Ssh_Key, nil
+}
+
+func (slssks *softLayer_Security_Ssh_Key_Service) GetObject(sshKeyId int) (datatypes.SoftLayer_Security_Ssh_Key, error) {
+ objectMask := []string{
+ "createDate",
+ "fingerprint",
+ "id",
+ "key",
+ "label",
+ "modifyDate",
+ "notes",
+ }
+
+ response, errorCode, err := slssks.client.GetHttpClient().DoRawHttpRequestWithObjectMask(fmt.Sprintf("%s/%d/getObject.json", slssks.GetName(), sshKeyId), objectMask, "GET", new(bytes.Buffer))
+ if err != nil {
+ return datatypes.SoftLayer_Security_Ssh_Key{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Security_Ssh_Key#getObject, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Security_Ssh_Key{}, errors.New(errorMessage)
+ }
+
+ sshKey := datatypes.SoftLayer_Security_Ssh_Key{}
+ err = json.Unmarshal(response, &sshKey)
+ if err != nil {
+ return datatypes.SoftLayer_Security_Ssh_Key{}, err
+ }
+
+ return sshKey, nil
+}
+
+func (slssks *softLayer_Security_Ssh_Key_Service) EditObject(sshKeyId int, template datatypes.SoftLayer_Security_Ssh_Key) (bool, error) {
+ parameters := datatypes.SoftLayer_Shh_Key_Parameters{
+ Parameters: []datatypes.SoftLayer_Security_Ssh_Key{
+ template,
+ },
+ }
+
+ requestBody, err := json.Marshal(parameters)
+ if err != nil {
+ return false, err
+ }
+
+ response, errorCode, err := slssks.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/editObject.json", slssks.GetName(), sshKeyId), "POST", bytes.NewBuffer(requestBody))
+ if err != nil {
+ return false, err
+ }
+
+ if res := string(response[:]); res != "true" {
+ return false, errors.New(fmt.Sprintf("Failed to edit SSH key with id: %d, got '%s' as response from the API.", sshKeyId, res))
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Security_Ssh_Key#editObject, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ return true, err
+}
+
+func (slssks *softLayer_Security_Ssh_Key_Service) DeleteObject(sshKeyId int) (bool, error) {
+ response, errorCode, err := slssks.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d.json", slssks.GetName(), sshKeyId), "DELETE", new(bytes.Buffer))
+ if err != nil {
+ return false, err
+ }
+
+ if res := string(response[:]); res != "true" {
+ return false, errors.New(fmt.Sprintf("Failed to destroy ssh key with id '%d', got '%s' as response from the API.", sshKeyId, res))
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Security_Ssh_Key#deleteObject, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ return true, err
+}
+
+func (slssks *softLayer_Security_Ssh_Key_Service) GetSoftwarePasswords(sshKeyId int) ([]datatypes.SoftLayer_Software_Component_Password, error) {
+ response, errorCode, err := slssks.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/getSoftwarePasswords.json", slssks.GetName(), sshKeyId), "GET", new(bytes.Buffer))
+ if err != nil {
+ return []datatypes.SoftLayer_Software_Component_Password{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Security_Ssh_Key#getSoftwarePasswords, HTTP error code: '%d'", errorCode)
+ return []datatypes.SoftLayer_Software_Component_Password{}, errors.New(errorMessage)
+ }
+
+ passwords := []datatypes.SoftLayer_Software_Component_Password{}
+ err = json.Unmarshal(response, &passwords)
+ if err != nil {
+ return []datatypes.SoftLayer_Software_Component_Password{}, err
+ }
+
+ return passwords, nil
+}
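The SSH key service mirrors the other object services: CreateObject POSTs a Parameters envelope and DeleteObject expects a literal "true" back. A caller-side sketch, assuming the client accessor and the Key/Label/Id struct fields carry these names:

```go
package example

import (
	datatypes "github.com/maximilien/softlayer-go/data_types"
	softlayer "github.com/maximilien/softlayer-go/softlayer"
)

// createTemporaryKey registers a public key, then removes it again; the label
// is a placeholder.
func createTemporaryKey(client softlayer.Client, publicKey string) error {
	keyService, err := client.GetSoftLayer_Security_Ssh_Key_Service()
	if err != nil {
		return err
	}

	created, err := keyService.CreateObject(datatypes.SoftLayer_Security_Ssh_Key{
		Key:   publicKey,
		Label: "example-temporary-key",
	})
	if err != nil {
		return err
	}

	_, err = keyService.DeleteObject(created.Id)
	return err
}
```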
diff --git a/vendor/github.com/maximilien/softlayer-go/services/softlayer_virtual_disk_image.go b/vendor/github.com/maximilien/softlayer-go/services/softlayer_virtual_disk_image.go
new file mode 100644
index 000000000000..895c9eb0f5fa
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/services/softlayer_virtual_disk_image.go
@@ -0,0 +1,46 @@
+package services
+
+import (
+ "bytes"
+ "encoding/json"
+ "errors"
+ "fmt"
+
+ common "github.com/maximilien/softlayer-go/common"
+ datatypes "github.com/maximilien/softlayer-go/data_types"
+ softlayer "github.com/maximilien/softlayer-go/softlayer"
+)
+
+type softLayer_Virtual_Disk_Image_Service struct {
+ client softlayer.Client
+}
+
+func NewSoftLayer_Virtual_Disk_Image_Service(client softlayer.Client) *softLayer_Virtual_Disk_Image_Service {
+ return &softLayer_Virtual_Disk_Image_Service{
+ client: client,
+ }
+}
+
+func (slvdi *softLayer_Virtual_Disk_Image_Service) GetName() string {
+ return "SoftLayer_Virtual_Disk_Image"
+}
+
+func (slvdi *softLayer_Virtual_Disk_Image_Service) GetObject(vdImageId int) (datatypes.SoftLayer_Virtual_Disk_Image, error) {
+ response, errorCode, err := slvdi.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/getObject.json", slvdi.GetName(), vdImageId), "GET", new(bytes.Buffer))
+ if err != nil {
+ return datatypes.SoftLayer_Virtual_Disk_Image{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Disk_Image#getObject, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Virtual_Disk_Image{}, errors.New(errorMessage)
+ }
+
+ vdImage := datatypes.SoftLayer_Virtual_Disk_Image{}
+ err = json.Unmarshal(response, &vdImage)
+ if err != nil {
+ return datatypes.SoftLayer_Virtual_Disk_Image{}, err
+ }
+
+ return vdImage, nil
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/services/softlayer_virtual_guest.go b/vendor/github.com/maximilien/softlayer-go/services/softlayer_virtual_guest.go
new file mode 100644
index 000000000000..1fe5d6ee7717
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/services/softlayer_virtual_guest.go
@@ -0,0 +1,1233 @@
+package services
+
+import (
+ "bytes"
+ "encoding/base64"
+ "encoding/json"
+ "errors"
+ "fmt"
+ "net/url"
+ "strconv"
+ "strings"
+ "time"
+
+ common "github.com/maximilien/softlayer-go/common"
+ datatypes "github.com/maximilien/softlayer-go/data_types"
+ softlayer "github.com/maximilien/softlayer-go/softlayer"
+)
+
+const (
+ EPHEMERAL_DISK_CATEGORY_CODE = "guest_disk1"
+ // Package type for virtual servers: http://sldn.softlayer.com/reference/services/SoftLayer_Product_Order/placeOrder
+ VIRTUAL_SERVER_PACKAGE_TYPE = "VIRTUAL_SERVER_INSTANCE"
+ MAINTENANCE_WINDOW_PROPERTY = "MAINTENANCE_WINDOW"
+ // Described in the following link: http://sldn.softlayer.com/reference/datatypes/SoftLayer_Container_Product_Order_Virtual_Guest_Upgrade
+ UPGRADE_VIRTUAL_SERVER_ORDER_TYPE = "SoftLayer_Container_Product_Order_Virtual_Guest_Upgrade"
+)
+
+type softLayer_Virtual_Guest_Service struct {
+ client softlayer.Client
+}
+
+func NewSoftLayer_Virtual_Guest_Service(client softlayer.Client) *softLayer_Virtual_Guest_Service {
+ return &softLayer_Virtual_Guest_Service{
+ client: client,
+ }
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) GetName() string {
+ return "SoftLayer_Virtual_Guest"
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) CreateObject(template datatypes.SoftLayer_Virtual_Guest_Template) (datatypes.SoftLayer_Virtual_Guest, error) {
+ err := slvgs.checkCreateObjectRequiredValues(template)
+ if err != nil {
+ return datatypes.SoftLayer_Virtual_Guest{}, err
+ }
+
+ parameters := datatypes.SoftLayer_Virtual_Guest_Template_Parameters{
+ Parameters: []datatypes.SoftLayer_Virtual_Guest_Template{
+ template,
+ },
+ }
+
+ requestBody, err := json.Marshal(parameters)
+ if err != nil {
+ return datatypes.SoftLayer_Virtual_Guest{}, err
+ }
+
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s.json", slvgs.GetName()), "POST", bytes.NewBuffer(requestBody))
+ if err != nil {
+ return datatypes.SoftLayer_Virtual_Guest{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#createObject, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Virtual_Guest{}, errors.New(errorMessage)
+ }
+
+ err = slvgs.client.GetHttpClient().CheckForHttpResponseErrors(response)
+ if err != nil {
+ return datatypes.SoftLayer_Virtual_Guest{}, err
+ }
+
+ softLayer_Virtual_Guest := datatypes.SoftLayer_Virtual_Guest{}
+ err = json.Unmarshal(response, &softLayer_Virtual_Guest)
+ if err != nil {
+ return datatypes.SoftLayer_Virtual_Guest{}, err
+ }
+
+ return softLayer_Virtual_Guest, nil
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) ReloadOperatingSystem(instanceId int, template datatypes.Image_Template_Config) error {
+ parameter := [2]interface{}{"FORCE", template}
+ parameters := map[string]interface{}{
+ "parameters": parameter,
+ }
+
+ requestBody, err := json.Marshal(parameters)
+ if err != nil {
+ return err
+ }
+
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/reloadOperatingSystem.json", slvgs.GetName(), instanceId), "POST", bytes.NewBuffer(requestBody))
+ if err != nil {
+ return err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#reloadOperatingSystem, HTTP error code: '%d'", errorCode)
+ return errors.New(errorMessage)
+ }
+
+ if res := string(response[:]); res != `"1"` {
+ return errors.New(fmt.Sprintf("Failed to reload OS on instance with id '%d', got '%s' as response from the API.", instanceId, res))
+ }
+
+ return nil
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) GetObject(instanceId int) (datatypes.SoftLayer_Virtual_Guest, error) {
+
+ objectMask := []string{
+ "accountId",
+ "createDate",
+ "dedicatedAccountHostOnlyFlag",
+ "domain",
+ "fullyQualifiedDomainName",
+ "hostname",
+ "hourlyBillingFlag",
+ "id",
+ "lastPowerStateId",
+ "lastVerifiedDate",
+ "maxCpu",
+ "maxCpuUnits",
+ "maxMemory",
+ "metricPollDate",
+ "modifyDate",
+ "notes",
+ "postInstallScriptUri",
+ "privateNetworkOnlyFlag",
+ "startCpus",
+ "statusId",
+ "uuid",
+ "userData.value",
+ "localDiskFlag",
+
+ "globalIdentifier",
+ "managedResourceFlag",
+ "primaryBackendIpAddress",
+ "primaryIpAddress",
+
+ "location.name",
+ "location.longName",
+ "location.id",
+ "datacenter.name",
+ "datacenter.longName",
+ "datacenter.id",
+ "networkComponents.maxSpeed",
+ "operatingSystem.passwords.password",
+ "operatingSystem.passwords.username",
+
+ "blockDeviceTemplateGroup.globalIdentifier",
+ "primaryNetworkComponent.networkVlan.id",
+ "primaryBackendNetworkComponent.networkVlan.id",
+ }
+
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequestWithObjectMask(fmt.Sprintf("%s/%d/getObject.json", slvgs.GetName(), instanceId), objectMask, "GET", new(bytes.Buffer))
+ if err != nil {
+ return datatypes.SoftLayer_Virtual_Guest{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#getObject, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Virtual_Guest{}, errors.New(errorMessage)
+ }
+
+ virtualGuest := datatypes.SoftLayer_Virtual_Guest{}
+ err = json.Unmarshal(response, &virtualGuest)
+ if err != nil {
+ return datatypes.SoftLayer_Virtual_Guest{}, err
+ }
+
+ return virtualGuest, nil
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) GetObjectByPrimaryIpAddress(ipAddress string) (datatypes.SoftLayer_Virtual_Guest, error) {
+
+ ObjectFilter := string(`{"virtualGuests":{"primaryIpAddress":{"operation":"` + ipAddress + `"}}}`)
+
+ accountService, err := slvgs.client.GetSoftLayer_Account_Service()
+ if err != nil {
+ return datatypes.SoftLayer_Virtual_Guest{}, err
+ }
+
+ virtualGuests, err := accountService.GetVirtualGuestsByFilter(ObjectFilter)
+ if err != nil {
+ return datatypes.SoftLayer_Virtual_Guest{}, err
+ }
+
+ if len(virtualGuests) == 1 {
+ return virtualGuests[0], nil
+ }
+
+ return datatypes.SoftLayer_Virtual_Guest{}, errors.New(fmt.Sprintf("Cannot find virtual guest with primary ip: %s", ipAddress))
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) GetObjectByPrimaryBackendIpAddress(ipAddress string) (datatypes.SoftLayer_Virtual_Guest, error) {
+
+ ObjectFilter := string(`{"virtualGuests":{"primaryBackendIpAddress":{"operation":` + ipAddress + `}}}`)
+
+ accountService, err := slvgs.client.GetSoftLayer_Account_Service()
+ if err != nil {
+ return datatypes.SoftLayer_Virtual_Guest{}, err
+ }
+
+ virtualGuests, err := accountService.GetVirtualGuestsByFilter(ObjectFilter)
+ if err != nil {
+ return datatypes.SoftLayer_Virtual_Guest{}, err
+ }
+
+ if len(virtualGuests) == 1 {
+ return virtualGuests[0], nil
+ }
+
+ return datatypes.SoftLayer_Virtual_Guest{}, errors.New(fmt.Sprintf("Cannot find virtual guest with primary backend ip: %s", ipAddress))
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) EditObject(instanceId int, template datatypes.SoftLayer_Virtual_Guest) (bool, error) {
+ parameters := datatypes.SoftLayer_Virtual_Guest_Parameters{
+ Parameters: []datatypes.SoftLayer_Virtual_Guest{template},
+ }
+
+ requestBody, err := json.Marshal(parameters)
+ if err != nil {
+ return false, err
+ }
+
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/editObject.json", slvgs.GetName(), instanceId), "POST", bytes.NewBuffer(requestBody))
+ if err != nil {
+ return false, err
+ }
+
+ if res := string(response[:]); res != "true" {
+ return false, errors.New(fmt.Sprintf("Failed to edit virtual guest with id: %d, got '%s' as response from the API.", instanceId, res))
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#editObject, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ return true, err
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) DeleteObject(instanceId int) (bool, error) {
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d.json", slvgs.GetName(), instanceId), "DELETE", new(bytes.Buffer))
+ if err != nil {
+ return false, err
+ }
+
+ if res := string(response[:]); res != "true" {
+ return false, errors.New(fmt.Sprintf("Failed to delete instance with id '%d', got '%s' as response from the API.", instanceId, res))
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#deleteObject, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ return true, err
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) GetPowerState(instanceId int) (datatypes.SoftLayer_Virtual_Guest_Power_State, error) {
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/getPowerState.json", slvgs.GetName(), instanceId), "GET", new(bytes.Buffer))
+ if err != nil {
+ return datatypes.SoftLayer_Virtual_Guest_Power_State{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#getPowerState, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Virtual_Guest_Power_State{}, errors.New(errorMessage)
+ }
+
+ vgPowerState := datatypes.SoftLayer_Virtual_Guest_Power_State{}
+ err = json.Unmarshal(response, &vgPowerState)
+ if err != nil {
+ return datatypes.SoftLayer_Virtual_Guest_Power_State{}, err
+ }
+
+ return vgPowerState, nil
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) GetPrimaryIpAddress(instanceId int) (string, error) {
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/getPrimaryIpAddress.json", slvgs.GetName(), instanceId), "GET", new(bytes.Buffer))
+ if err != nil {
+ return "", err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#getPrimaryIpAddress, HTTP error code: '%d'", errorCode)
+ return "", errors.New(errorMessage)
+ }
+
+ vgPrimaryIpAddress := strings.TrimSpace(string(response))
+ if vgPrimaryIpAddress == "" {
+ return "", errors.New(fmt.Sprintf("Failed to get primary IP address for instance with id '%d', got '%s' as response from the API.", instanceId, response))
+ }
+
+ return vgPrimaryIpAddress, nil
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) GetActiveTransaction(instanceId int) (datatypes.SoftLayer_Provisioning_Version1_Transaction, error) {
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/getActiveTransaction.json", slvgs.GetName(), instanceId), "GET", new(bytes.Buffer))
+ if err != nil {
+ return datatypes.SoftLayer_Provisioning_Version1_Transaction{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#getActiveTransaction, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Provisioning_Version1_Transaction{}, errors.New(errorMessage)
+ }
+
+ activeTransaction := datatypes.SoftLayer_Provisioning_Version1_Transaction{}
+ err = json.Unmarshal(response, &activeTransaction)
+ if err != nil {
+ return datatypes.SoftLayer_Provisioning_Version1_Transaction{}, err
+ }
+
+ return activeTransaction, nil
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) GetLastTransaction(instanceId int) (datatypes.SoftLayer_Provisioning_Version1_Transaction, error) {
+ objectMask := []string{
+ "transactionGroup",
+ }
+
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequestWithObjectMask(fmt.Sprintf("%s/%d/getLastTransaction.json", slvgs.GetName(), instanceId), objectMask, "GET", new(bytes.Buffer))
+ if err != nil {
+ return datatypes.SoftLayer_Provisioning_Version1_Transaction{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#getLastTransaction, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Provisioning_Version1_Transaction{}, errors.New(errorMessage)
+ }
+
+ lastTransaction := datatypes.SoftLayer_Provisioning_Version1_Transaction{}
+ err = json.Unmarshal(response, &lastTransaction)
+ if err != nil {
+ return datatypes.SoftLayer_Provisioning_Version1_Transaction{}, err
+ }
+
+ return lastTransaction, nil
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) GetActiveTransactions(instanceId int) ([]datatypes.SoftLayer_Provisioning_Version1_Transaction, error) {
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/getActiveTransactions.json", slvgs.GetName(), instanceId), "GET", new(bytes.Buffer))
+ if err != nil {
+ return []datatypes.SoftLayer_Provisioning_Version1_Transaction{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#getActiveTransactions, HTTP error code: '%d'", errorCode)
+ return []datatypes.SoftLayer_Provisioning_Version1_Transaction{}, errors.New(errorMessage)
+ }
+
+ activeTransactions := []datatypes.SoftLayer_Provisioning_Version1_Transaction{}
+ err = json.Unmarshal(response, &activeTransactions)
+ if err != nil {
+ return []datatypes.SoftLayer_Provisioning_Version1_Transaction{}, err
+ }
+
+ return activeTransactions, nil
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) GetSshKeys(instanceId int) ([]datatypes.SoftLayer_Security_Ssh_Key, error) {
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/getSshKeys.json", slvgs.GetName(), instanceId), "GET", new(bytes.Buffer))
+ if err != nil {
+ return []datatypes.SoftLayer_Security_Ssh_Key{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#getSshKeys, HTTP error code: '%d'", errorCode)
+ return []datatypes.SoftLayer_Security_Ssh_Key{}, errors.New(errorMessage)
+ }
+
+ sshKeys := []datatypes.SoftLayer_Security_Ssh_Key{}
+ err = json.Unmarshal(response, &sshKeys)
+ if err != nil {
+ return []datatypes.SoftLayer_Security_Ssh_Key{}, err
+ }
+
+ return sshKeys, nil
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) PowerCycle(instanceId int) (bool, error) {
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/powerCycle.json", slvgs.GetName(), instanceId), "GET", new(bytes.Buffer))
+ if err != nil {
+ return false, err
+ }
+
+ if res := string(response[:]); res != "true" {
+ return false, errors.New(fmt.Sprintf("Failed to power cycle instance with id '%d', got '%s' as response from the API.", instanceId, res))
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#powerCycle, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ return true, err
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) PowerOff(instanceId int) (bool, error) {
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/powerOff.json", slvgs.GetName(), instanceId), "GET", new(bytes.Buffer))
+ if err != nil {
+ return false, err
+ }
+
+ if res := string(response[:]); res != "true" {
+ return false, errors.New(fmt.Sprintf("Failed to power off instance with id '%d', got '%s' as response from the API.", instanceId, res))
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#powerOff, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ return true, err
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) PowerOffSoft(instanceId int) (bool, error) {
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/powerOffSoft.json", slvgs.GetName(), instanceId), "GET", new(bytes.Buffer))
+ if err != nil {
+ return false, err
+ }
+
+ if res := string(response[:]); res != "true" {
+ return false, errors.New(fmt.Sprintf("Failed to power off soft instance with id '%d', got '%s' as response from the API.", instanceId, res))
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#powerOffSoft, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ return true, err
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) PowerOn(instanceId int) (bool, error) {
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/powerOn.json", slvgs.GetName(), instanceId), "GET", new(bytes.Buffer))
+ if err != nil {
+ return false, err
+ }
+
+ if res := string(response[:]); res != "true" {
+ return false, errors.New(fmt.Sprintf("Failed to power on instance with id '%d', got '%s' as response from the API.", instanceId, res))
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#powerOn, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ return true, err
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) RebootDefault(instanceId int) (bool, error) {
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/rebootDefault.json", slvgs.GetName(), instanceId), "GET", new(bytes.Buffer))
+ if err != nil {
+ return false, err
+ }
+
+ if res := string(response[:]); res != "true" {
+ return false, errors.New(fmt.Sprintf("Failed to default reboot instance with id '%d', got '%s' as response from the API.", instanceId, res))
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#rebootDefault, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ return true, err
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) RebootSoft(instanceId int) (bool, error) {
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/rebootSoft.json", slvgs.GetName(), instanceId), "GET", new(bytes.Buffer))
+ if err != nil {
+ return false, err
+ }
+
+ if res := string(response[:]); res != "true" {
+ return false, errors.New(fmt.Sprintf("Failed to soft reboot instance with id '%d', got '%s' as response from the API.", instanceId, res))
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#rebootSoft, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ return true, err
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) RebootHard(instanceId int) (bool, error) {
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/rebootHard.json", slvgs.GetName(), instanceId), "GET", new(bytes.Buffer))
+ if err != nil {
+ return false, err
+ }
+
+ if res := string(response[:]); res != "true" {
+ return false, errors.New(fmt.Sprintf("Failed to hard reboot instance with id '%d', got '%s' as response from the API.", instanceId, res))
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Security_Ssh_Key#rebootHard, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ return true, err
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) SetMetadata(instanceId int, metadata string) (bool, error) {
+ dataBytes := []byte(metadata)
+ base64EncodedMetadata := base64.StdEncoding.EncodeToString(dataBytes)
+
+ parameters := datatypes.SoftLayer_SetUserMetadata_Parameters{
+ Parameters: []datatypes.UserMetadataArray{
+ []datatypes.UserMetadata{datatypes.UserMetadata(base64EncodedMetadata)},
+ },
+ }
+
+ requestBody, err := json.Marshal(parameters)
+ if err != nil {
+ return false, err
+ }
+
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/setUserMetadata.json", slvgs.GetName(), instanceId), "POST", bytes.NewBuffer(requestBody))
+ if err != nil {
+ return false, err
+ }
+
+ if res := string(response[:]); res != "true" {
+ return false, errors.New(fmt.Sprintf("Failed to setUserMetadata for instance with id '%d', got '%s' as response from the API.", instanceId, res))
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#setUserMetadata, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ return true, err
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) ConfigureMetadataDisk(instanceId int) (datatypes.SoftLayer_Provisioning_Version1_Transaction, error) {
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/configureMetadataDisk.json", slvgs.GetName(), instanceId), "GET", new(bytes.Buffer))
+ if err != nil {
+ return datatypes.SoftLayer_Provisioning_Version1_Transaction{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#setUserMetadata, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Provisioning_Version1_Transaction{}, errors.New(errorMessage)
+ }
+
+ transaction := datatypes.SoftLayer_Provisioning_Version1_Transaction{}
+ err = json.Unmarshal(response, &transaction)
+ if err != nil {
+ return datatypes.SoftLayer_Provisioning_Version1_Transaction{}, err
+ }
+
+ return transaction, nil
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) GetUserData(instanceId int) ([]datatypes.SoftLayer_Virtual_Guest_Attribute, error) {
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/getUserData.json", slvgs.GetName(), instanceId), "GET", new(bytes.Buffer))
+ if err != nil {
+ return []datatypes.SoftLayer_Virtual_Guest_Attribute{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#getUserData, HTTP error code: '%d'", errorCode)
+ return []datatypes.SoftLayer_Virtual_Guest_Attribute{}, errors.New(errorMessage)
+ }
+
+ attributes := []datatypes.SoftLayer_Virtual_Guest_Attribute{}
+ err = json.Unmarshal(response, &attributes)
+ if err != nil {
+ return []datatypes.SoftLayer_Virtual_Guest_Attribute{}, err
+ }
+
+ return attributes, nil
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) IsPingable(instanceId int) (bool, error) {
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/isPingable.json", slvgs.GetName(), instanceId), "GET", new(bytes.Buffer))
+ if err != nil {
+ return false, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#isPingable, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ res := string(response)
+
+ if res == "true" {
+ return true, nil
+ }
+
+ if res == "false" {
+ return false, nil
+ }
+
+ return false, errors.New(fmt.Sprintf("Failed to checking that virtual guest is pingable for instance with id '%d', got '%s' as response from the API.", instanceId, res))
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) IsBackendPingable(instanceId int) (bool, error) {
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/isBackendPingable.json", slvgs.GetName(), instanceId), "GET", new(bytes.Buffer))
+ if err != nil {
+ return false, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#isBackendPingable, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ res := string(response)
+
+ if res == "true" {
+ return true, nil
+ }
+
+ if res == "false" {
+ return false, nil
+ }
+
+ return false, errors.New(fmt.Sprintf("Failed to checking that virtual guest backend is pingable for instance with id '%d', got '%s' as response from the API.", instanceId, res))
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) AttachEphemeralDisk(instanceId int, diskSize int) (datatypes.SoftLayer_Container_Product_Order_Receipt, error) {
+ diskItemPrice, err := slvgs.findUpgradeItemPriceForEphemeralDisk(instanceId, diskSize)
+ if err != nil {
+ return datatypes.SoftLayer_Container_Product_Order_Receipt{}, err
+ }
+
+ orderService, err := slvgs.client.GetSoftLayer_Product_Order_Service()
+ if err != nil {
+ return datatypes.SoftLayer_Container_Product_Order_Receipt{}, err
+ }
+
+ order := datatypes.SoftLayer_Container_Product_Order_Virtual_Guest_Upgrade{
+ VirtualGuests: []datatypes.VirtualGuest{
+ datatypes.VirtualGuest{
+ Id: instanceId,
+ },
+ },
+ Prices: []datatypes.SoftLayer_Product_Item_Price{
+ datatypes.SoftLayer_Product_Item_Price{
+ Id: diskItemPrice.Id,
+ Categories: []datatypes.Category{
+ datatypes.Category{
+ CategoryCode: EPHEMERAL_DISK_CATEGORY_CODE,
+ },
+ },
+ },
+ },
+ ComplexType: UPGRADE_VIRTUAL_SERVER_ORDER_TYPE,
+ Properties: []datatypes.Property{
+ datatypes.Property{
+ Name: MAINTENANCE_WINDOW_PROPERTY,
+ Value: time.Now().UTC().Format(time.RFC3339),
+ },
+ datatypes.Property{
+ Name: "NOTE_GENERAL",
+ Value: "addingdisks",
+ },
+ },
+ }
+
+ receipt, err := orderService.PlaceContainerOrderVirtualGuestUpgrade(order)
+ if err != nil {
+ return datatypes.SoftLayer_Container_Product_Order_Receipt{}, err
+ }
+ return receipt, nil
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) UpgradeObject(instanceId int, options *softlayer.UpgradeOptions) (bool, error) {
+ prices, err := slvgs.GetAvailableUpgradeItemPrices(options)
+ if err != nil {
+ return false, err
+ }
+
+ if len(prices) == 0 {
+ // Nothing to order, as all the values are up to date
+ return false, nil
+ }
+
+ orderService, err := slvgs.client.GetSoftLayer_Product_Order_Service()
+ if err != nil {
+ return false, err
+ }
+
+ order := datatypes.SoftLayer_Container_Product_Order_Virtual_Guest_Upgrade{
+ VirtualGuests: []datatypes.VirtualGuest{
+ datatypes.VirtualGuest{
+ Id: instanceId,
+ },
+ },
+ Prices: prices,
+ ComplexType: UPGRADE_VIRTUAL_SERVER_ORDER_TYPE,
+ Properties: []datatypes.Property{
+ datatypes.Property{
+ Name: MAINTENANCE_WINDOW_PROPERTY,
+ Value: time.Now().UTC().Format(time.RFC3339),
+ },
+ },
+ }
+
+ _, err = orderService.PlaceContainerOrderVirtualGuestUpgrade(order)
+ if err != nil {
+ return false, err
+ }
+
+ return true, nil
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) GetAvailableUpgradeItemPrices(upgradeOptions *softlayer.UpgradeOptions) ([]datatypes.SoftLayer_Product_Item_Price, error) {
+ itemsCapacity := make(map[string]int)
+ if upgradeOptions.Cpus > 0 {
+ itemsCapacity["cpus"] = upgradeOptions.Cpus
+ }
+ if upgradeOptions.MemoryInGB > 0 {
+ itemsCapacity["memory"] = upgradeOptions.MemoryInGB
+ }
+ if upgradeOptions.NicSpeed > 0 {
+ itemsCapacity["nic_speed"] = upgradeOptions.NicSpeed
+ }
+
+ virtualServerPackageItems, err := slvgs.getVirtualServerItems()
+ if err != nil {
+ return []datatypes.SoftLayer_Product_Item_Price{}, err
+ }
+
+ prices := make([]datatypes.SoftLayer_Product_Item_Price, 0)
+
+ for item, amount := range itemsCapacity {
+ price, err := slvgs.filterProductItemPrice(virtualServerPackageItems, item, amount)
+ if err != nil {
+ return []datatypes.SoftLayer_Product_Item_Price{}, err
+ }
+
+ prices = append(prices, price)
+ }
+
+ return prices, nil
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) GetUpgradeItemPrices(instanceId int) ([]datatypes.SoftLayer_Product_Item_Price, error) {
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/getUpgradeItemPrices.json", slvgs.GetName(), instanceId), "GET", new(bytes.Buffer))
+ if err != nil {
+ return []datatypes.SoftLayer_Product_Item_Price{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#getUpgradeItemPrices, HTTP error code: '%d'", errorCode)
+ return []datatypes.SoftLayer_Product_Item_Price{}, errors.New(errorMessage)
+ }
+
+ itemPrices := []datatypes.SoftLayer_Product_Item_Price{}
+ err = json.Unmarshal(response, &itemPrices)
+ if err != nil {
+ return []datatypes.SoftLayer_Product_Item_Price{}, err
+ }
+
+ return itemPrices, nil
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) SetTags(instanceId int, tags []string) (bool, error) {
+ var tagStringBuffer bytes.Buffer
+ for i, tag := range tags {
+ tagStringBuffer.WriteString(tag)
+ if i != len(tags)-1 {
+ tagStringBuffer.WriteString(", ")
+ }
+ }
+
+ setTagsParameters := datatypes.SoftLayer_Virtual_Guest_SetTags_Parameters{
+ Parameters: []string{tagStringBuffer.String()},
+ }
+
+ requestBody, err := json.Marshal(setTagsParameters)
+ if err != nil {
+ return false, err
+ }
+
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/setTags.json", slvgs.GetName(), instanceId), "POST", bytes.NewBuffer(requestBody))
+ if err != nil {
+ return false, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#setTags, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ if res := string(response[:]); res != "true" {
+ return false, errors.New(fmt.Sprintf("Failed to setTags for instance with id '%d', got '%s' as response from the API.", instanceId, res))
+ }
+
+ return true, nil
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) GetTagReferences(instanceId int) ([]datatypes.SoftLayer_Tag_Reference, error) {
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/getTagReferences.json", slvgs.GetName(), instanceId), "GET", new(bytes.Buffer))
+ if err != nil {
+ return []datatypes.SoftLayer_Tag_Reference{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#getTagReferences, HTTP error code: '%d'", errorCode)
+ return []datatypes.SoftLayer_Tag_Reference{}, errors.New(errorMessage)
+ }
+
+ tagReferences := []datatypes.SoftLayer_Tag_Reference{}
+ err = json.Unmarshal(response, &tagReferences)
+ if err != nil {
+ return []datatypes.SoftLayer_Tag_Reference{}, err
+ }
+
+ return tagReferences, nil
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) AttachDiskImage(instanceId int, imageId int) (datatypes.SoftLayer_Provisioning_Version1_Transaction, error) {
+ parameters := datatypes.SoftLayer_Virtual_GuestInit_ImageId_Parameters{
+ Parameters: datatypes.ImageId_Parameter{
+ ImageId: imageId,
+ },
+ }
+
+ requestBody, err := json.Marshal(parameters)
+ if err != nil {
+ return datatypes.SoftLayer_Provisioning_Version1_Transaction{}, err
+ }
+
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/attachDiskImage.json", slvgs.GetName(), instanceId), "POST", bytes.NewBuffer(requestBody))
+ if err != nil {
+ return datatypes.SoftLayer_Provisioning_Version1_Transaction{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#attachDiskImage, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Provisioning_Version1_Transaction{}, errors.New(errorMessage)
+ }
+
+ transaction := datatypes.SoftLayer_Provisioning_Version1_Transaction{}
+ err = json.Unmarshal(response, &transaction)
+ if err != nil {
+ return datatypes.SoftLayer_Provisioning_Version1_Transaction{}, err
+ }
+
+ return transaction, nil
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) DetachDiskImage(instanceId int, imageId int) (datatypes.SoftLayer_Provisioning_Version1_Transaction, error) {
+ parameters := datatypes.SoftLayer_Virtual_GuestInit_ImageId_Parameters{
+ Parameters: datatypes.ImageId_Parameter{
+ ImageId: imageId,
+ },
+ }
+
+ requestBody, err := json.Marshal(parameters)
+ if err != nil {
+ return datatypes.SoftLayer_Provisioning_Version1_Transaction{}, err
+ }
+
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/detachDiskImage.json", slvgs.GetName(), instanceId), "POST", bytes.NewBuffer(requestBody))
+ if err != nil {
+ return datatypes.SoftLayer_Provisioning_Version1_Transaction{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#detachDiskImage, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Provisioning_Version1_Transaction{}, errors.New(errorMessage)
+ }
+
+ transaction := datatypes.SoftLayer_Provisioning_Version1_Transaction{}
+ err = json.Unmarshal(response, &transaction)
+ if err != nil {
+ return datatypes.SoftLayer_Provisioning_Version1_Transaction{}, err
+ }
+
+ return transaction, nil
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) ActivatePrivatePort(instanceId int) (bool, error) {
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/activatePrivatePort.json", slvgs.GetName(), instanceId), "GET", new(bytes.Buffer))
+ if err != nil {
+ return false, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#activatePrivatePort, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ res := string(response)
+
+ if res == "true" {
+ return true, nil
+ }
+
+ if res == "false" {
+ return false, nil
+ }
+
+ return false, errors.New(fmt.Sprintf("Failed to activate private port for virtual guest is pingable for instance with id '%d', got '%s' as response from the API.", instanceId, res))
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) ActivatePublicPort(instanceId int) (bool, error) {
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/activatePublicPort.json", slvgs.GetName(), instanceId), "GET", new(bytes.Buffer))
+ if err != nil {
+ return false, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#activatePublicPort, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ res := string(response)
+
+ if res == "true" {
+ return true, nil
+ }
+
+ if res == "false" {
+ return false, nil
+ }
+
+ return false, errors.New(fmt.Sprintf("Failed to activate public port for virtual guest is pingable for instance with id '%d', got '%s' as response from the API.", instanceId, res))
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) ShutdownPrivatePort(instanceId int) (bool, error) {
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/shutdownPrivatePort.json", slvgs.GetName(), instanceId), "GET", new(bytes.Buffer))
+ if err != nil {
+ return false, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#shutdownPrivatePort, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ res := string(response)
+
+ if res == "true" {
+ return true, nil
+ }
+
+ if res == "false" {
+ return false, nil
+ }
+
+ return false, errors.New(fmt.Sprintf("Failed to shutdown private port for virtual guest is pingable for instance with id '%d', got '%s' as response from the API.", instanceId, res))
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) ShutdownPublicPort(instanceId int) (bool, error) {
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/shutdownPublicPort.json", slvgs.GetName(), instanceId), "GET", new(bytes.Buffer))
+ if err != nil {
+ return false, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#shutdownPublicPort, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ res := string(response)
+
+ if res == "true" {
+ return true, nil
+ }
+
+ if res == "false" {
+ return false, nil
+ }
+
+ return false, errors.New(fmt.Sprintf("Failed to shut down public port for virtual guest with id '%d', got '%s' as response from the API.", instanceId, res))
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) GetAllowedHost(instanceId int) (datatypes.SoftLayer_Network_Storage_Allowed_Host, error) {
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/getAllowedHost.json", slvgs.GetName(), instanceId), "GET", new(bytes.Buffer))
+ if err != nil {
+ return datatypes.SoftLayer_Network_Storage_Allowed_Host{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#getAllowedHost, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Network_Storage_Allowed_Host{}, errors.New(errorMessage)
+ }
+
+ allowedHost := datatypes.SoftLayer_Network_Storage_Allowed_Host{}
+ err = json.Unmarshal(response, &allowedHost)
+ if err != nil {
+ return datatypes.SoftLayer_Network_Storage_Allowed_Host{}, err
+ }
+
+ return allowedHost, nil
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) GetNetworkVlans(instanceId int) ([]datatypes.SoftLayer_Network_Vlan, error) {
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/getNetworkVlans.json", slvgs.GetName(), instanceId), "GET", new(bytes.Buffer))
+ if err != nil {
+ return []datatypes.SoftLayer_Network_Vlan{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#getNetworkVlans, HTTP error code: '%d'", errorCode)
+ return []datatypes.SoftLayer_Network_Vlan{}, errors.New(errorMessage)
+ }
+
+ networkVlans := []datatypes.SoftLayer_Network_Vlan{}
+ err = json.Unmarshal(response, &networkVlans)
+ if err != nil {
+ return []datatypes.SoftLayer_Network_Vlan{}, err
+ }
+
+ return networkVlans, nil
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) CheckHostDiskAvailability(instanceId int, diskCapacity int) (bool, error) {
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/checkHostDiskAvailability/%d", slvgs.GetName(), instanceId, diskCapacity), "GET", new(bytes.Buffer))
+ if err != nil {
+ return false, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#checkHostDiskAvailability, HTTP error code: '%d'", errorCode)
+ return false, errors.New(errorMessage)
+ }
+
+ res := string(response)
+
+ if res == "true" {
+ return true, nil
+ }
+
+ if res == "false" {
+ return false, nil
+ }
+
+ return false, errors.New(fmt.Sprintf("Failed to check host disk availability for instance '%d', got '%s' as response from the API.", instanceId, res))
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) CaptureImage(instanceId int) (datatypes.SoftLayer_Container_Disk_Image_Capture_Template, error) {
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/captureImage.json", slvgs.GetName(), instanceId), "GET", new(bytes.Buffer))
+ if err != nil {
+ return datatypes.SoftLayer_Container_Disk_Image_Capture_Template{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#captureImage, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Container_Disk_Image_Capture_Template{}, errors.New(errorMessage)
+ }
+
+ diskImageTemplate := datatypes.SoftLayer_Container_Disk_Image_Capture_Template{}
+ err = json.Unmarshal(response, &diskImageTemplate)
+ if err != nil {
+ return datatypes.SoftLayer_Container_Disk_Image_Capture_Template{}, err
+ }
+
+ return diskImageTemplate, nil
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) CreateArchiveTransaction(instanceId int, groupName string, blockDevices []datatypes.SoftLayer_Virtual_Guest_Block_Device, note string) (datatypes.SoftLayer_Provisioning_Version1_Transaction, error) {
+ groupName = url.QueryEscape(groupName)
+ note = url.QueryEscape(note)
+
+ parameters := datatypes.SoftLayer_Virtual_GuestInitParameters{
+ Parameters: []interface{}{groupName, blockDevices, note},
+ }
+
+ requestBody, err := json.Marshal(parameters)
+ if err != nil {
+ return datatypes.SoftLayer_Provisioning_Version1_Transaction{}, err
+ }
+
+ response, errorCode, err := slvgs.client.GetHttpClient().DoRawHttpRequest(fmt.Sprintf("%s/%d/createArchiveTransaction.json", slvgs.GetName(), instanceId), "POST", bytes.NewBuffer(requestBody))
+ if err != nil {
+ return datatypes.SoftLayer_Provisioning_Version1_Transaction{}, err
+ }
+
+ if common.IsHttpErrorCode(errorCode) {
+ errorMessage := fmt.Sprintf("softlayer-go: could not SoftLayer_Virtual_Guest#createArchiveTransaction, HTTP error code: '%d'", errorCode)
+ return datatypes.SoftLayer_Provisioning_Version1_Transaction{}, errors.New(errorMessage)
+ }
+
+ transaction := datatypes.SoftLayer_Provisioning_Version1_Transaction{}
+ err = json.Unmarshal(response, &transaction)
+ if err != nil {
+ return datatypes.SoftLayer_Provisioning_Version1_Transaction{}, err
+ }
+
+ return transaction, nil
+}
+
+// Private methods
+
+func (slvgs *softLayer_Virtual_Guest_Service) getVirtualServerItems() ([]datatypes.SoftLayer_Product_Item, error) {
+ service, err := slvgs.client.GetSoftLayer_Product_Package_Service()
+ if err != nil {
+ return []datatypes.SoftLayer_Product_Item{}, err
+ }
+
+ return service.GetItemsByType(VIRTUAL_SERVER_PACKAGE_TYPE)
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) filterProductItemPrice(packageItems []datatypes.SoftLayer_Product_Item, option string, amount int) (datatypes.SoftLayer_Product_Item_Price, error) {
+ // For now, use hardcoded category IDs in the same "style" as the Python client does.
+ // Refer to the corresponding Python method #_get_item_id_for_upgrade: https://github.com/softlayer/softlayer-python/blob/master/SoftLayer/managers/vs.py
+ vsId := map[string]int{
+ "memory": 3,
+ "cpus": 80,
+ "nic_speed": 26,
+ }
+
+ for _, packageItem := range packageItems {
+ categories := packageItem.Prices[0].Categories
+ for _, category := range categories {
+
+ if packageItem.Capacity == "" {
+ continue
+ }
+
+ capacity, err := strconv.Atoi(packageItem.Capacity)
+ if err != nil {
+ return datatypes.SoftLayer_Product_Item_Price{}, err
+ }
+
+ if category.Id != vsId[option] || capacity != amount {
+ continue
+ }
+
+ switch option {
+ case "cpus":
+ if !strings.Contains(packageItem.Description, "Private") {
+ return packageItem.Prices[0], nil
+ }
+ case "nic_speed":
+ if strings.Contains(packageItem.Description, "Public") {
+ return packageItem.Prices[0], nil
+ }
+ default:
+ return packageItem.Prices[0], nil
+ }
+ }
+ }
+
+ return datatypes.SoftLayer_Product_Item_Price{}, errors.New(fmt.Sprintf("Failed to find price for '%s' (of size %d)", option, amount))
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) checkCreateObjectRequiredValues(template datatypes.SoftLayer_Virtual_Guest_Template) error {
+ var err error
+ errorMessage, errorTemplate := "", "* %s is required and cannot be empty\n"
+
+ if template.Hostname == "" {
+ errorMessage += fmt.Sprintf(errorTemplate, "Hostname for the computing instance")
+ }
+
+ if template.Domain == "" {
+ errorMessage += fmt.Sprintf(errorTemplate, "Domain for the computing instance")
+ }
+
+ if template.StartCpus <= 0 {
+ errorMessage += fmt.Sprintf(errorTemplate, "StartCpus: the number of CPU cores to allocate")
+ }
+
+ if template.MaxMemory <= 0 {
+ errorMessage += fmt.Sprintf(errorTemplate, "MaxMemory: the amount of memory to allocate in megabytes")
+ }
+
+ for _, device := range template.BlockDevices {
+ if device.DiskImage.Capacity <= 0 {
+ errorMessage += fmt.Sprintf("Disk size must be a positive number; the size of block device %s is set to %dGB.\n", device.Device, device.DiskImage.Capacity)
+ }
+ }
+
+ if template.Datacenter.Name == "" {
+ errorMessage += fmt.Sprintf(errorTemplate, "Datacenter.Name: specifies which datacenter the instance is to be provisioned in")
+ }
+
+ if errorMessage != "" {
+ err = errors.New(errorMessage)
+ }
+
+ return err
+}
+
+func (slvgs *softLayer_Virtual_Guest_Service) findUpgradeItemPriceForEphemeralDisk(instanceId int, ephemeralDiskSize int) (datatypes.SoftLayer_Product_Item_Price, error) {
+ if ephemeralDiskSize <= 0 {
+ return datatypes.SoftLayer_Product_Item_Price{}, errors.New(fmt.Sprintf("Ephemeral disk size must be greater than zero, got: %d", ephemeralDiskSize))
+ }
+
+ itemPrices, err := slvgs.GetUpgradeItemPrices(instanceId)
+ if err != nil {
+ return datatypes.SoftLayer_Product_Item_Price{}, err
+ }
+
+ var currentDiskCapacity int
+ var currentItemPrice datatypes.SoftLayer_Product_Item_Price
+
+ for _, itemPrice := range itemPrices {
+
+ flag := false
+ for _, category := range itemPrice.Categories {
+ if category.CategoryCode == EPHEMERAL_DISK_CATEGORY_CODE {
+ flag = true
+ break
+ }
+ }
+
+ if flag && strings.Contains(itemPrice.Item.Description, "(LOCAL)") {
+
+ capacity, _ := strconv.Atoi(itemPrice.Item.Capacity)
+
+ if capacity >= ephemeralDiskSize {
+ if currentItemPrice.Id == 0 || currentDiskCapacity >= capacity {
+ currentItemPrice = itemPrice
+ currentDiskCapacity = capacity
+ }
+ }
+ }
+ }
+
+ if currentItemPrice.Id == 0 {
+ return datatypes.SoftLayer_Product_Item_Price{}, errors.New(fmt.Sprintf("No suitable local disk found for size %d", ephemeralDiskSize))
+ }
+
+ return currentItemPrice, nil
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/softlayer/client.go b/vendor/github.com/maximilien/softlayer-go/softlayer/client.go
new file mode 100644
index 000000000000..74eb430f90e3
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/softlayer/client.go
@@ -0,0 +1,37 @@
+package softlayer
+
+import (
+ "bytes"
+)
+
+type Client interface {
+ GetService(name string) (Service, error)
+
+ GetSoftLayer_Account_Service() (SoftLayer_Account_Service, error)
+ GetSoftLayer_Virtual_Guest_Service() (SoftLayer_Virtual_Guest_Service, error)
+ GetSoftLayer_Virtual_Disk_Image_Service() (SoftLayer_Virtual_Disk_Image_Service, error)
+ GetSoftLayer_Security_Ssh_Key_Service() (SoftLayer_Security_Ssh_Key_Service, error)
+ GetSoftLayer_Product_Order_Service() (SoftLayer_Product_Order_Service, error)
+ GetSoftLayer_Product_Package_Service() (SoftLayer_Product_Package_Service, error)
+ GetSoftLayer_Network_Storage_Service() (SoftLayer_Network_Storage_Service, error)
+ GetSoftLayer_Network_Storage_Allowed_Host_Service() (SoftLayer_Network_Storage_Allowed_Host_Service, error)
+ GetSoftLayer_Billing_Item_Cancellation_Request_Service() (SoftLayer_Billing_Item_Cancellation_Request_Service, error)
+ GetSoftLayer_Billing_Item_Service() (SoftLayer_Billing_Item_Service, error)
+ GetSoftLayer_Virtual_Guest_Block_Device_Template_Group_Service() (SoftLayer_Virtual_Guest_Block_Device_Template_Group_Service, error)
+ GetSoftLayer_Hardware_Service() (SoftLayer_Hardware_Service, error)
+ GetSoftLayer_Dns_Domain_Service() (SoftLayer_Dns_Domain_Service, error)
+ GetSoftLayer_Dns_Domain_ResourceRecord_Service() (SoftLayer_Dns_Domain_ResourceRecord_Service, error)
+
+ GetHttpClient() HttpClient
+}
+
+type HttpClient interface {
+ DoRawHttpRequest(path string, requestType string, requestBody *bytes.Buffer) ([]byte, int, error)
+ DoRawHttpRequestWithObjectMask(path string, masks []string, requestType string, requestBody *bytes.Buffer) ([]byte, int, error)
+ DoRawHttpRequestWithObjectFilter(path string, filters string, requestType string, requestBody *bytes.Buffer) ([]byte, int, error)
+ DoRawHttpRequestWithObjectFilterAndObjectMask(path string, masks []string, filters string, requestType string, requestBody *bytes.Buffer) ([]byte, int, error)
+ GenerateRequestBody(templateData interface{}) (*bytes.Buffer, error)
+ HasErrors(body map[string]interface{}) error
+
+ CheckForHttpResponseErrors(data []byte) error
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/softlayer/service.go b/vendor/github.com/maximilien/softlayer-go/softlayer/service.go
new file mode 100644
index 000000000000..fcae7572ed77
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/softlayer/service.go
@@ -0,0 +1,5 @@
+package softlayer
+
+type Service interface {
+ GetName() string
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_account_service.go b/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_account_service.go
new file mode 100644
index 000000000000..2333802211cf
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_account_service.go
@@ -0,0 +1,23 @@
+package softlayer
+
+import (
+ datatypes "github.com/maximilien/softlayer-go/data_types"
+)
+
+type SoftLayer_Account_Service interface {
+ Service
+
+ GetAccountStatus() (datatypes.SoftLayer_Account_Status, error)
+ GetVirtualGuests() ([]datatypes.SoftLayer_Virtual_Guest, error)
+ GetVirtualGuestsByFilter(filters string) ([]datatypes.SoftLayer_Virtual_Guest, error)
+ GetNetworkStorage() ([]datatypes.SoftLayer_Network_Storage, error)
+ GetIscsiNetworkStorage() ([]datatypes.SoftLayer_Network_Storage, error)
+ GetIscsiNetworkStorageWithFilter(filter string) ([]datatypes.SoftLayer_Network_Storage, error)
+ GetVirtualDiskImages() ([]datatypes.SoftLayer_Virtual_Disk_Image, error)
+ GetVirtualDiskImagesWithFilter(filters string) ([]datatypes.SoftLayer_Virtual_Disk_Image, error)
+ GetSshKeys() ([]datatypes.SoftLayer_Security_Ssh_Key, error)
+ GetBlockDeviceTemplateGroups() ([]datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group, error)
+ GetBlockDeviceTemplateGroupsWithFilter(filters string) ([]datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group, error)
+ GetDatacentersWithSubnetAllocations() ([]datatypes.SoftLayer_Location, error)
+ GetHardware() ([]datatypes.SoftLayer_Hardware, error)
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_billing_item_cancellation_request_service.go b/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_billing_item_cancellation_request_service.go
new file mode 100644
index 000000000000..f33fe9ff6b70
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_billing_item_cancellation_request_service.go
@@ -0,0 +1,11 @@
+package softlayer
+
+import (
+ datatypes "github.com/maximilien/softlayer-go/data_types"
+)
+
+type SoftLayer_Billing_Item_Cancellation_Request_Service interface {
+ Service
+
+ CreateObject(request datatypes.SoftLayer_Billing_Item_Cancellation_Request) (datatypes.SoftLayer_Billing_Item_Cancellation_Request, error)
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_billing_item_service.go b/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_billing_item_service.go
new file mode 100644
index 000000000000..dec956ddb65f
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_billing_item_service.go
@@ -0,0 +1,7 @@
+package softlayer
+
+type SoftLayer_Billing_Item_Service interface {
+ Service
+
+ CancelService(billingId int) (bool, error)
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_dns_domain_resource_record_service.go b/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_dns_domain_resource_record_service.go
new file mode 100644
index 000000000000..ae0a1592a7ce
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_dns_domain_resource_record_service.go
@@ -0,0 +1,14 @@
+package softlayer
+
+import (
+ datatypes "github.com/maximilien/softlayer-go/data_types"
+)
+
+type SoftLayer_Dns_Domain_ResourceRecord_Service interface {
+ Service
+
+ CreateObject(template datatypes.SoftLayer_Dns_Domain_ResourceRecord_Template) (datatypes.SoftLayer_Dns_Domain_ResourceRecord, error)
+ GetObject(recordId int) (datatypes.SoftLayer_Dns_Domain_ResourceRecord, error)
+ DeleteObject(recordId int) (bool, error)
+ EditObject(recordId int, template datatypes.SoftLayer_Dns_Domain_ResourceRecord) (bool, error)
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_dns_domain_service.go b/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_dns_domain_service.go
new file mode 100644
index 000000000000..e3a31cb80ed2
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_dns_domain_service.go
@@ -0,0 +1,15 @@
+package softlayer
+
+import (
+ datatypes "github.com/maximilien/softlayer-go/data_types"
+)
+
+// Modifying existing SoftLayer_Dns_Domain entries is not possible; changing a zone name requires creating a new zone instead.
+// https://sldn.softlayer.com/blog/phil/Getting-started-DNS
+type SoftLayer_Dns_Domain_Service interface {
+ Service
+
+ CreateObject(template datatypes.SoftLayer_Dns_Domain_Template) (datatypes.SoftLayer_Dns_Domain, error)
+ DeleteObject(dnsId int) (bool, error)
+ GetObject(dnsId int) (datatypes.SoftLayer_Dns_Domain, error)
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_hardware_service.go b/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_hardware_service.go
new file mode 100644
index 000000000000..a6e1ab1d42ab
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_hardware_service.go
@@ -0,0 +1,12 @@
+package softlayer
+
+import (
+ datatypes "github.com/maximilien/softlayer-go/data_types"
+)
+
+type SoftLayer_Hardware_Service interface {
+ Service
+
+ CreateObject(template datatypes.SoftLayer_Hardware_Template) (datatypes.SoftLayer_Hardware, error)
+ GetObject(id string) (datatypes.SoftLayer_Hardware, error)
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_network_storage_allowed_host_service.go b/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_network_storage_allowed_host_service.go
new file mode 100644
index 000000000000..b89a069b7324
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_network_storage_allowed_host_service.go
@@ -0,0 +1,11 @@
+package softlayer
+
+import (
+ datatypes "github.com/maximilien/softlayer-go/data_types"
+)
+
+type SoftLayer_Network_Storage_Allowed_Host_Service interface {
+ Service
+
+ GetCredential(allowedHostId int) (datatypes.SoftLayer_Network_Storage_Credential, error)
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_network_storage_service.go b/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_network_storage_service.go
new file mode 100644
index 000000000000..577bdd3123cd
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_network_storage_service.go
@@ -0,0 +1,19 @@
+package softlayer
+
+import (
+ datatypes "github.com/maximilien/softlayer-go/data_types"
+)
+
+type SoftLayer_Network_Storage_Service interface {
+ Service
+
+ DeleteObject(volumeId int) (bool, error)
+
+ CreateIscsiVolume(size int, location string) (datatypes.SoftLayer_Network_Storage, error)
+ DeleteIscsiVolume(volumeId int, immediateCancellationFlag bool) error
+ GetIscsiVolume(volumeId int) (datatypes.SoftLayer_Network_Storage, error)
+ GetBillingItem(volumeId int) (datatypes.SoftLayer_Billing_Item, error)
+ HasAllowedVirtualGuest(volumeId int, vmId int) (bool, error)
+ AttachIscsiVolume(virtualGuest datatypes.SoftLayer_Virtual_Guest, volumeId int) (bool, error)
+ DetachIscsiVolume(virtualGuest datatypes.SoftLayer_Virtual_Guest, volumeId int) error
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_product_order_service.go b/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_product_order_service.go
new file mode 100644
index 000000000000..993fcc6f3376
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_product_order_service.go
@@ -0,0 +1,13 @@
+package softlayer
+
+import (
+ datatypes "github.com/maximilien/softlayer-go/data_types"
+)
+
+type SoftLayer_Product_Order_Service interface {
+ Service
+
+ PlaceOrder(order datatypes.SoftLayer_Container_Product_Order) (datatypes.SoftLayer_Container_Product_Order_Receipt, error)
+ PlaceContainerOrderNetworkPerformanceStorageIscsi(order datatypes.SoftLayer_Container_Product_Order_Network_PerformanceStorage_Iscsi) (datatypes.SoftLayer_Container_Product_Order_Receipt, error)
+ PlaceContainerOrderVirtualGuestUpgrade(order datatypes.SoftLayer_Container_Product_Order_Virtual_Guest_Upgrade) (datatypes.SoftLayer_Container_Product_Order_Receipt, error)
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_product_package_service.go b/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_product_package_service.go
new file mode 100644
index 000000000000..7b1e6a7ae80a
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_product_package_service.go
@@ -0,0 +1,17 @@
+package softlayer
+
+import (
+ datatypes "github.com/maximilien/softlayer-go/data_types"
+)
+
+type SoftLayer_Product_Package_Service interface {
+ Service
+
+ GetItemPrices(packageId int) ([]datatypes.SoftLayer_Product_Item_Price, error)
+ GetItemPricesBySize(packageId int, size int) ([]datatypes.SoftLayer_Product_Item_Price, error)
+ GetItems(packageId int) ([]datatypes.SoftLayer_Product_Item, error)
+ GetItemsByType(packageType string) ([]datatypes.SoftLayer_Product_Item, error)
+
+ GetPackagesByType(packageType string) ([]datatypes.Softlayer_Product_Package, error)
+ GetOnePackageByType(packageType string) (datatypes.Softlayer_Product_Package, error)
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_security_ssh_key_service.go b/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_security_ssh_key_service.go
new file mode 100644
index 000000000000..e42cc4c4cc56
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_security_ssh_key_service.go
@@ -0,0 +1,16 @@
+package softlayer
+
+import (
+ datatypes "github.com/maximilien/softlayer-go/data_types"
+)
+
+type SoftLayer_Security_Ssh_Key_Service interface {
+ Service
+
+ CreateObject(template datatypes.SoftLayer_Security_Ssh_Key) (datatypes.SoftLayer_Security_Ssh_Key, error)
+ GetObject(sshkeyId int) (datatypes.SoftLayer_Security_Ssh_Key, error)
+ EditObject(sshkeyId int, template datatypes.SoftLayer_Security_Ssh_Key) (bool, error)
+ DeleteObject(sshKeyId int) (bool, error)
+
+ GetSoftwarePasswords(sshKeyId int) ([]datatypes.SoftLayer_Software_Component_Password, error)
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_virtual_disk_image_service.go b/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_virtual_disk_image_service.go
new file mode 100644
index 000000000000..72f4d03e7a37
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_virtual_disk_image_service.go
@@ -0,0 +1,11 @@
+package softlayer
+
+import (
+ datatypes "github.com/maximilien/softlayer-go/data_types"
+)
+
+type SoftLayer_Virtual_Disk_Image_Service interface {
+ Service
+
+ GetObject(id int) (datatypes.SoftLayer_Virtual_Disk_Image, error)
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_virtual_guest_block_device_template_group.go b/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_virtual_guest_block_device_template_group.go
new file mode 100644
index 000000000000..f78e08cf6782
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_virtual_guest_block_device_template_group.go
@@ -0,0 +1,36 @@
+package softlayer
+
+import (
+ datatypes "github.com/maximilien/softlayer-go/data_types"
+)
+
+type SoftLayer_Virtual_Guest_Block_Device_Template_Group_Service interface {
+ Service
+
+ AddLocations(id int, locations []datatypes.SoftLayer_Location) (bool, error)
+
+ CreateFromExternalSource(configuration datatypes.SoftLayer_Container_Virtual_Guest_Block_Device_Template_Configuration) (datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group, error)
+ CreatePublicArchiveTransaction(id int, groupName string, summary string, note string, locations []datatypes.SoftLayer_Location) (int, error)
+ CopyToExternalSource(configuration datatypes.SoftLayer_Container_Virtual_Guest_Block_Device_Template_Configuration) (bool, error)
+
+ DeleteObject(id int) (datatypes.SoftLayer_Provisioning_Version1_Transaction, error)
+ DenySharingAccess(id int, accountId int) (bool, error)
+
+ GetObject(id int) (datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group, error)
+ GetDatacenters(id int) ([]datatypes.SoftLayer_Location, error)
+ GetSshKeys(id int) ([]datatypes.SoftLayer_Security_Ssh_Key, error)
+ GetStatus(id int) (datatypes.SoftLayer_Virtual_Guest_Block_Device_Template_Group_Status, error)
+
+ GetStorageLocations(id int) ([]datatypes.SoftLayer_Location, error)
+
+ GetImageType(id int) (datatypes.SoftLayer_Image_Type, error)
+ GetImageTypeKeyName(id int) (string, error)
+
+ GetTransaction(id int) (datatypes.SoftLayer_Provisioning_Version1_Transaction, error)
+
+ PermitSharingAccess(id int, accountId int) (bool, error)
+
+ RemoveLocations(id int, locations []datatypes.SoftLayer_Location) (bool, error)
+
+ SetAvailableLocations(id int, locations []datatypes.SoftLayer_Location) (bool, error)
+}
diff --git a/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_virtual_guest_service.go b/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_virtual_guest_service.go
new file mode 100644
index 000000000000..ea7daf4c6bec
--- /dev/null
+++ b/vendor/github.com/maximilien/softlayer-go/softlayer/softlayer_virtual_guest_service.go
@@ -0,0 +1,67 @@
+package softlayer
+
+import (
+ datatypes "github.com/maximilien/softlayer-go/data_types"
+)
+
+type UpgradeOptions struct {
+ Cpus int
+ MemoryInGB int // SoftLayer only allows memory to be upgraded in whole GB increments
+ NicSpeed int
+}
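+
+// A minimal illustrative sketch (not part of the upstream library) of how these
+// options are typically passed to SoftLayer_Virtual_Guest_Service.UpgradeObject;
+// the variable names used here are assumptions:
+//
+//	opts := &UpgradeOptions{Cpus: 2, MemoryInGB: 4, NicSpeed: 1000}
+//	upgraded, err := virtualGuestService.UpgradeObject(instanceId, opts)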
+
+type SoftLayer_Virtual_Guest_Service interface {
+ Service
+
+ ActivatePrivatePort(instanceId int) (bool, error)
+ ActivatePublicPort(instanceId int) (bool, error)
+ AttachDiskImage(instanceId int, imageId int) (datatypes.SoftLayer_Provisioning_Version1_Transaction, error)
+ AttachEphemeralDisk(instanceId int, diskSize int) (datatypes.SoftLayer_Container_Product_Order_Receipt, error)
+
+ CaptureImage(instanceId int) (datatypes.SoftLayer_Container_Disk_Image_Capture_Template, error)
+ CheckHostDiskAvailability(instanceId int, diskCapacity int) (bool, error)
+ ConfigureMetadataDisk(instanceId int) (datatypes.SoftLayer_Provisioning_Version1_Transaction, error)
+ CreateArchiveTransaction(instanceId int, groupName string, blockDevices []datatypes.SoftLayer_Virtual_Guest_Block_Device, note string) (datatypes.SoftLayer_Provisioning_Version1_Transaction, error)
+ CreateObject(template datatypes.SoftLayer_Virtual_Guest_Template) (datatypes.SoftLayer_Virtual_Guest, error)
+
+ DeleteObject(instanceId int) (bool, error)
+ DetachDiskImage(instanceId int, imageId int) (datatypes.SoftLayer_Provisioning_Version1_Transaction, error)
+
+ EditObject(instanceId int, template datatypes.SoftLayer_Virtual_Guest) (bool, error)
+
+ IsPingable(instanceId int) (bool, error)
+ IsBackendPingable(instanceId int) (bool, error)
+
+ GetActiveTransaction(instanceId int) (datatypes.SoftLayer_Provisioning_Version1_Transaction, error)
+ GetLastTransaction(instanceId int) (datatypes.SoftLayer_Provisioning_Version1_Transaction, error)
+ GetActiveTransactions(instanceId int) ([]datatypes.SoftLayer_Provisioning_Version1_Transaction, error)
+ GetAllowedHost(instanceId int) (datatypes.SoftLayer_Network_Storage_Allowed_Host, error)
+ GetNetworkVlans(instanceId int) ([]datatypes.SoftLayer_Network_Vlan, error)
+ GetObject(instanceId int) (datatypes.SoftLayer_Virtual_Guest, error)
+ GetObjectByPrimaryIpAddress(ipAddress string) (datatypes.SoftLayer_Virtual_Guest, error)
+ GetObjectByPrimaryBackendIpAddress(ipAddress string) (datatypes.SoftLayer_Virtual_Guest, error)
+ GetPrimaryIpAddress(instanceId int) (string, error)
+ GetPowerState(instanceId int) (datatypes.SoftLayer_Virtual_Guest_Power_State, error)
+ GetSshKeys(instanceId int) ([]datatypes.SoftLayer_Security_Ssh_Key, error)
+ GetTagReferences(instanceId int) ([]datatypes.SoftLayer_Tag_Reference, error)
+ GetUpgradeItemPrices(instanceId int) ([]datatypes.SoftLayer_Product_Item_Price, error)
+ GetUserData(instanceId int) ([]datatypes.SoftLayer_Virtual_Guest_Attribute, error)
+
+ PowerCycle(instanceId int) (bool, error)
+ PowerOff(instanceId int) (bool, error)
+ PowerOffSoft(instanceId int) (bool, error)
+ PowerOn(instanceId int) (bool, error)
+
+ RebootDefault(instanceId int) (bool, error)
+ RebootSoft(instanceId int) (bool, error)
+ RebootHard(instanceId int) (bool, error)
+
+ SetMetadata(instanceId int, metadata string) (bool, error)
+ SetTags(instanceId int, tags []string) (bool, error)
+ ShutdownPrivatePort(instanceId int) (bool, error)
+ ShutdownPublicPort(instanceId int) (bool, error)
+ ReloadOperatingSystem(instanceId int, template datatypes.Image_Template_Config) error
+
+ UpgradeObject(instanceId int, upgradeOptions *UpgradeOptions) (bool, error)
+ GetAvailableUpgradeItemPrices(upgradeOptions *UpgradeOptions) ([]datatypes.SoftLayer_Product_Item_Price, error)
+}
diff --git a/vendor/github.com/mitchellh/cloudflare-go/.travis.yml b/vendor/github.com/mitchellh/cloudflare-go/.travis.yml
new file mode 100644
index 000000000000..cb3b857f840c
--- /dev/null
+++ b/vendor/github.com/mitchellh/cloudflare-go/.travis.yml
@@ -0,0 +1,23 @@
+language: go
+sudo: false
+
+matrix:
+ include:
+ - go: 1.4
+ - go: 1.5
+ - go: 1.6
+ - go: tip
+ allow_failures:
+ - go: tip
+
+script:
+ - go get -t -v $(go list ./... | grep -v '/vendor/')
+ - diff -u <(echo -n) <(gofmt -d .)
+ - go vet $(go list ./... | grep -v '/vendor/')
+ - go test -v -race ./...
+
+notifications:
+ email:
+ recipients:
+ - jamesog@cloudflare.com
+ - msilverlock@cloudflare.com
diff --git a/vendor/github.com/mitchellh/cloudflare-go/LICENSE b/vendor/github.com/mitchellh/cloudflare-go/LICENSE
new file mode 100644
index 000000000000..a53798b1c688
--- /dev/null
+++ b/vendor/github.com/mitchellh/cloudflare-go/LICENSE
@@ -0,0 +1,26 @@
+Copyright (c) 2015-2016, CloudFlare. All rights reserved.
+
+Redistribution and use in source and binary forms, with or without modification,
+are permitted provided that the following conditions are met:
+
+1. Redistributions of source code must retain the above copyright notice, this
+list of conditions and the following disclaimer.
+
+2. Redistributions in binary form must reproduce the above copyright notice,
+this list of conditions and the following disclaimer in the documentation and/or
+other materials provided with the distribution.
+
+3. Neither the name of the copyright holder nor the names of its contributors
+may be used to endorse or promote products derived from this software without
+specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
+ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
+ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/vendor/github.com/mitchellh/cloudflare-go/README.md b/vendor/github.com/mitchellh/cloudflare-go/README.md
new file mode 100644
index 000000000000..e8551fa41758
--- /dev/null
+++ b/vendor/github.com/mitchellh/cloudflare-go/README.md
@@ -0,0 +1,44 @@
+[![GoDoc](https://godoc.org/github.com/cloudflare/cloudflare-go?status.svg)](https://godoc.org/github.com/cloudflare/cloudflare-go)
+
+# cloudflare
+
+A Go library for interacting with [CloudFlare's API v4](https://api.cloudflare.com/).
+
+# Installation
+
+You need a working Go environment.
+
+```
+go get github.com/cloudflare/cloudflare-go
+```
+
+# Getting Started
+
+```
+package main
+
+import (
+ "fmt"
+ "os"
+
+ "github.com/cloudflare/cloudflare-go"
+)
+
+var api *cloudflare.API
+
+func main() {
+ // Construct a new API object
+ api = cloudflare.New(os.Getenv("CF_API_KEY"), os.Getenv("CF_API_EMAIL"))
+
+ // Fetch the list of zones on the account
+ zones, err := api.ListZones()
+ if err != nil {
+ fmt.Println(err)
+ }
+ // Print the zone names
+ for _, z := range zones {
+ fmt.Println(z.Name)
+ }
+}
+```
+
+An example application, [flarectl](cmd/flarectl), is in this repository.
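+
+As a further sketch (the zone name below is a placeholder), DNS records for a zone
+can be listed by resolving the zone ID first and then querying with an empty filter:
+
+```
+zoneID, err := api.ZoneIDByName("example.com")
+if err != nil {
+ fmt.Println(err)
+ return
+}
+// An empty DNSRecord acts as a filter that matches all records
+records, err := api.DNSRecords(zoneID, cloudflare.DNSRecord{})
+if err != nil {
+ fmt.Println(err)
+ return
+}
+for _, r := range records {
+ fmt.Println(r.Name, r.Type, r.Content)
+}
+```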
diff --git a/vendor/github.com/mitchellh/cloudflare-go/cloudflare.go b/vendor/github.com/mitchellh/cloudflare-go/cloudflare.go
new file mode 100644
index 000000000000..24fa03ed9b5e
--- /dev/null
+++ b/vendor/github.com/mitchellh/cloudflare-go/cloudflare.go
@@ -0,0 +1,451 @@
+/*
+Package cloudflare implements the CloudFlare v4 API.
+
+A new API client is created like:
+
+ api := cloudflare.New(apikey, apiemail)
+
+*/
+package cloudflare
+
+import (
+ "bytes"
+ "encoding/json"
+ "io"
+ "io/ioutil"
+ "net/http"
+
+ "github.com/pkg/errors"
+)
+
+const apiURL = "https://api.cloudflare.com/client/v4"
+
+// Error messages
+const errMakeRequestError = "Error from makeRequest"
+const errUnmarshalError = "Error unmarshalling JSON"
+
+type API struct {
+ APIKey string
+ APIEmail string
+}
+
+// New creates a new API client with the given API key and email.
+func New(key, email string) *API {
+ return &API{key, email}
+}
+
+// Initializes a new zone.
+func NewZone() *Zone {
+ return &Zone{}
+}
+
+// ZoneIDByName retrieves a zone's ID from the name.
+func (api *API) ZoneIDByName(zoneName string) (string, error) {
+ res, err := api.ListZones(zoneName)
+ if err != nil {
+ return "", errors.Wrap(err, "ListZones command failed")
+ }
+ for _, zone := range res {
+ if zone.Name == zoneName {
+ return zone.ID, nil
+ }
+ }
+ return "", errors.New("Zone could not be found")
+}
+
+// Params can be turned into a URL query string or a body
+// TODO: Give this func a better name
+func (api *API) makeRequest(method, uri string, params interface{}) ([]byte, error) {
+ // Replace nil with a JSON object if needed
+ var reqBody io.Reader
+ if params != nil {
+ json, err := json.Marshal(params)
+ if err != nil {
+ return nil, errors.Wrap(err, "Error marshalling params to JSON")
+ }
+ reqBody = bytes.NewReader(json)
+ } else {
+ reqBody = nil
+ }
+ req, err := http.NewRequest(method, apiURL+uri, reqBody)
+ if err != nil {
+ return nil, errors.Wrap(err, "HTTP request creation failed")
+ }
+ req.Header.Add("X-Auth-Key", api.APIKey)
+ req.Header.Add("X-Auth-Email", api.APIEmail)
+ // Could be application/json or multipart/form-data
+ // req.Header.Add("Content-Type", "application/json")
+ client := &http.Client{}
+ resp, err := client.Do(req)
+ if err != nil {
+ return nil, errors.Wrap(err, "HTTP request failed")
+ }
+ defer resp.Body.Close()
+ resBody, err := ioutil.ReadAll(resp.Body)
+ if resp.StatusCode != http.StatusOK {
+ if err != nil {
+ return nil, errors.Wrap(err, "Error returned from API")
+ } else if resBody != nil {
+ return nil, errors.New(string(resBody))
+ } else {
+ return nil, errors.New(resp.Status)
+ }
+ }
+ return resBody, nil
+}
+
+// Response is a template for API responses. Each endpoint returns its own
+// response type, which includes these common fields alongside a result.
+type Response struct {
+ Success bool `json:"success"`
+ Errors []string `json:"errors"`
+ Messages []string `json:"messages"`
+}
+
+type ResultInfo struct {
+ Page int `json:"page"`
+ PerPage int `json:"per_page"`
+ Count int `json:"count"`
+ Total int `json:"total_count"`
+}
+
+// An Organization describes a multi-user organization. (Enterprise only.)
+type Organization struct {
+ ID string
+ Name string
+ Status string
+ Permissions []string
+ Roles []string
+}
+
+// A User describes a user account.
+type User struct {
+ ID string `json:"id"`
+ Email string `json:"email"`
+ FirstName string `json:"first_name"`
+ LastName string `json:"last_name"`
+ Username string `json:"username"`
+ Telephone string `json:"telephone"`
+ Country string `json:"country"`
+ Zipcode string `json:"zipcode"`
+ CreatedOn string `json:"created_on"` // Should this be a time.Time?
+ ModifiedOn string `json:"modified_on"`
+ APIKey string `json:"api_key"`
+ TwoFA bool `json:"two_factor_authentication_enabled"`
+ Betas []string `json:"betas"`
+ Organizations []Organization `json:"organizations"`
+}
+
+type UserResponse struct {
+ Success bool `json:"success"`
+ Errors []string `json:"errors"`
+ Messages []string `json:"messages"`
+ Result User `json:"result"`
+}
+
+type Owner struct {
+ ID string `json:"id"`
+ Email string `json:"email"`
+ OwnerType string `json:"owner_type"`
+}
+
+// A Zone describes a CloudFlare zone.
+type Zone struct {
+ ID string `json:"id"`
+ Name string `json:"name"`
+ DevMode int `json:"development_mode"`
+ OriginalNS []string `json:"original_name_servers"`
+ OriginalRegistrar string `json:"original_registrar"`
+ OriginalDNSHost string `json:"original_dnshost"`
+ CreatedOn string `json:"created_on"`
+ ModifiedOn string `json:"modified_on"`
+ NameServers []string `json:"name_servers"`
+ Owner Owner `json:"owner"`
+ Permissions []string `json:"permissions"`
+ Plan ZonePlan `json:"plan"`
+ Status string `json:"status"`
+ Paused bool `json:"paused"`
+ Type string `json:"type"`
+ Host struct {
+ Name string
+ Website string
+ } `json:"host"`
+ VanityNS []string `json:"vanity_name_servers"`
+ Betas []string `json:"betas"`
+ DeactReason string `json:"deactivation_reason"`
+ Meta ZoneMeta `json:"meta"`
+}
+
+// Contains metadata about a zone.
+type ZoneMeta struct {
+ // custom_certificate_quota is broken - sometimes it's a string, sometimes a number!
+ // CustCertQuota int `json:"custom_certificate_quota"`
+ PageRuleQuota int `json:"page_rule_quota"`
+ WildcardProxiable bool `json:"wildcard_proxiable"`
+ PhishingDetected bool `json:"phishing_detected"`
+}
+
+// Contains the plan information for a zone.
+type ZonePlan struct {
+ ID string `json:"id"`
+ Name string `json:"name"`
+ Price int `json:"price"`
+ Currency string `json:"currency"`
+ Frequency string `json:"frequency"`
+ LegacyID string `json:"legacy_id"`
+ IsSubscribed bool `json:"is_subscribed"`
+ CanSubscribe bool `json:"can_subscribe"`
+}
+
+type ZoneResponse struct {
+ Success bool `json:"success"`
+ Errors []string `json:"errors"`
+ Messages []string `json:"messages"`
+ Result []Zone `json:"result"`
+}
+
+type ZonePlanResponse struct {
+ Success bool `json:"success"`
+ Errors []string `json:"errors"`
+ Messages []string `json:"messages"`
+ Result []ZonePlan `json:"result"`
+}
+
+// type zoneSetting struct {
+// ID string `json:"id"`
+// Editable bool `json:"editable"`
+// ModifiedOn string `json:"modified_on"`
+// }
+// type zoneSettingStringVal struct {
+// zoneSetting
+// Value string `json:"value"`
+// }
+// type zoneSettingIntVal struct {
+// zoneSetting
+// Value int64 `json:"value"`
+// }
+
+type ZoneSetting struct {
+ ID string `json:"id"`
+ Editable bool `json:"editable"`
+ ModifiedOn string `json:"modified_on"`
+ Value interface{} `json:"value"`
+ TimeRemaining int `json:"time_remaining"`
+}
+
+type ZoneSettingResponse struct {
+ Success bool `json:"success"`
+ Errors []string `json:"errors"`
+ Messages []string `json:"messages"`
+ Result []ZoneSetting `json:"result"`
+}
+
+// Describes a DNS record for a zone.
+type DNSRecord struct {
+ ID string `json:"id,omitempty"`
+ Type string `json:"type,omitempty"`
+ Name string `json:"name,omitempty"`
+ Content string `json:"content,omitempty"`
+ Proxiable bool `json:"proxiable,omitempty"`
+ Proxied bool `json:"proxied,omitempty"`
+ TTL int `json:"ttl,omitempty"`
+ Locked bool `json:"locked,omitempty"`
+ ZoneID string `json:"zone_id,omitempty"`
+ ZoneName string `json:"zone_name,omitempty"`
+ CreatedOn string `json:"created_on,omitempty"`
+ ModifiedOn string `json:"modified_on,omitempty"`
+ Data interface{} `json:"data,omitempty"` // data returned by: SRV, LOC
+ Meta interface{} `json:"meta,omitempty"`
+ Priority int `json:"priority,omitempty"`
+}
+
+// The response for creating or updating a DNS record.
+type DNSRecordResponse struct {
+ Success bool `json:"success"`
+ Errors []interface{} `json:"errors"`
+ Messages []string `json:"messages"`
+ Result DNSRecord `json:"result"`
+}
+
+// The response for listing DNS records.
+type DNSListResponse struct {
+ Success bool `json:"success"`
+ Errors []interface{} `json:"errors"`
+ Messages []string `json:"messages"`
+ Result []DNSRecord `json:"result"`
+}
+
+// Railgun status for a zone.
+type ZoneRailgun struct {
+ ID string `json:"id"`
+ Name string `json:"name"`
+ Enabled bool `json:"enabled"`
+ Connected bool `json:"connected"`
+}
+
+type ZoneRailgunResponse struct {
+ Success bool `json:"success"`
+ Errors []string `json:"errors"`
+ Messages []string `json:"messages"`
+ Result []ZoneRailgun `json:"result"`
+}
+
+// Custom SSL certificates for a zone.
+type ZoneCustomSSL struct {
+ ID string `json:"id"`
+ Hosts []string `json:"hosts"`
+ Issuer string `json:"issuer"`
+ Priority int `json:"priority"`
+ Status string `json:"status"`
+ BundleMethod string `json:"bundle_method"`
+ ZoneID string `json:"zone_id"`
+ Permissions []string `json:"permissions"`
+ UploadedOn string `json:"uploaded_on"`
+ ModifiedOn string `json:"modified_on"`
+ ExpiresOn string `json:"expires_on"`
+ KeylessServer KeylessSSL `json:"keyless_server"`
+}
+
+type ZoneCustomSSLResponse struct {
+ Success bool `json:"success"`
+ Errors []string `json:"errors"`
+ Messages []string `json:"messages"`
+ Result []ZoneCustomSSL `json:"result"`
+}
+
+type KeylessSSL struct {
+ ID string `json:"id"`
+ Name string `json:"name"`
+ Host string `json:"host"`
+ Port int `json:"port"`
+ Status string `json:"status"`
+ Enabled bool `json:"enabled"`
+ Permissions []string `json:"permissions"`
+ CreatedOn string `json:"created_on"`
+ ModifiedOn string `json:"modified_on"`
+}
+
+type KeylessSSLResponse struct {
+ Success bool `json:"success"`
+ Errors []string `json:"errors"`
+ Messages []string `json:"messages"`
+ Result []KeylessSSL `json:"result"`
+}
+
+type Railgun struct {
+ ID string `json:"id"`
+ Name string `json:"name"`
+ Status string `json:"status"`
+ Enabled bool `json:"enabled"`
+ ZonesConnected int `json:"zones_connected"`
+ Build string `json:"build"`
+ Version string `json:"version"`
+ Revision string `json:"revision"`
+ ActivationKey string `json:"activation_key"`
+ ActivatedOn string `json:"activated_on"`
+ CreatedOn string `json:"created_on"`
+ ModifiedOn string `json:"modified_on"`
+ // XXX: UpgradeInfo struct {
+ // version string
+ // url string
+ // } `json:"upgrade_info"`
+}
+
+type RailgunResponse struct {
+ Success bool `json:"success"`
+ Errors []string `json:"errors"`
+ Messages []string `json:"messages"`
+ Result []Railgun `json:"result"`
+}
+
+// Custom error pages.
+type CustomPage struct {
+ CreatedOn string `json:"created_on"`
+ ModifiedOn string `json:"modified_on"`
+ URL string `json:"url"`
+ State string `json:"state"`
+ RequiredTokens []string `json:"required_tokens"`
+ PreviewTarget string `json:"preview_target"`
+ Description string `json:"description"`
+}
+
+type CustomPageResponse struct {
+ Success bool `json:"success"`
+ Errors []string `json:"errors"`
+ Messages []string `json:"messages"`
+ Result []CustomPage `json:"result"`
+}
+
+// WAF packages
+type WAFPackage struct {
+ ID string `json:"id"`
+ Name string `json:"name"`
+ Description string `json:"description"`
+ ZoneID string `json:"zone_id"`
+ DetectionMode string `json:"detection_mode"`
+ Sensitivity string `json:"sensitivity"`
+ ActionMode string `json:"action_mode"`
+}
+
+type WAFPackagesResponse struct {
+ Result []WAFPackage `json:"result"`
+ Success bool `json:"success"`
+ ResultInfo struct {
+ Page uint `json:"page"`
+ PerPage uint `json:"per_page"`
+ Count uint `json:"count"`
+ TotalCount uint `json:"total_count"`
+ } `json:"result_info"`
+}
+
+type WAFRule struct {
+ ID string `json:"id"`
+ Description string `json:"description"`
+ Priority string `json:"priority"`
+ PackageID string `json:"package_id"`
+ Group struct {
+ ID string `json:"id"`
+ Name string `json:"name"`
+ } `json:"group"`
+ Mode string `json:"mode"`
+ DefaultMode string `json:"default_mode"`
+ AllowedModes []string `json:"allowed_modes"`
+}
+
+type WAFRulesResponse struct {
+ Result []WAFRule `json:"result"`
+ Success bool `json:"success"`
+ ResultInfo struct {
+ Page uint `json:"page"`
+ PerPage uint `json:"per_page"`
+ Count uint `json:"count"`
+ TotalCount uint `json:"total_count"`
+ } `json:"result_info"`
+}
+
+type PurgeCacheRequest struct {
+ Everything bool `json:"purge_everything,omitempty"`
+ Files []string `json:"files,omitempty"`
+ Tags []string `json:"tags,omitempty"`
+}
+
+type PurgeCacheResponse struct {
+ Success bool `json:"success"`
+ Errors []string `json:"errors"`
+ Messages []string `json:"messages"`
+}
+
+// IPRanges contains a list of IPv4 and IPv6 CIDRs.
+type IPRanges struct {
+ IPv4CIDRs []string `json:"ipv4_cidrs"`
+ IPv6CIDRs []string `json:"ipv6_cidrs"`
+}
+
+// IPsResponse is the API response containing a list of IPs
+type IPsResponse struct {
+ Success bool `json:"success"`
+ Errors []string `json:"errors"`
+ Messages []string `json:"messages"`
+ Result IPRanges `json:"result"`
+}
diff --git a/vendor/github.com/mitchellh/cloudflare-go/cpage.go b/vendor/github.com/mitchellh/cloudflare-go/cpage.go
new file mode 100644
index 000000000000..73ea3e8384ac
--- /dev/null
+++ b/vendor/github.com/mitchellh/cloudflare-go/cpage.go
@@ -0,0 +1,10 @@
+package cloudflare
+
+// https://api.cloudflare.com/#custom-pages-for-a-zone-available-custom-pages
+// GET /zones/:zone_identifier/custom_pages
+
+// https://api.cloudflare.com/#custom-pages-for-a-zone-custom-page-details
+// GET /zones/:zone_identifier/custom_pages/:identifier
+
+// https://api.cloudflare.com/#custom-pages-for-a-zone-update-custom-page-url
+// PUT /zones/:zone_identifier/custom_pages/:identifier
diff --git a/vendor/github.com/mitchellh/cloudflare-go/dns.go b/vendor/github.com/mitchellh/cloudflare-go/dns.go
new file mode 100644
index 000000000000..b51c9c0f474a
--- /dev/null
+++ b/vendor/github.com/mitchellh/cloudflare-go/dns.go
@@ -0,0 +1,134 @@
+package cloudflare
+
+import (
+ "encoding/json"
+ "net/url"
+
+ "github.com/pkg/errors"
+)
+
+/*
+Create a DNS record.
+
+API reference:
+ https://api.cloudflare.com/#dns-records-for-a-zone-create-dns-record
+ POST /zones/:zone_identifier/dns_records
+*/
+func (api *API) CreateDNSRecord(zoneID string, rr DNSRecord) (DNSRecord, error) {
+ uri := "/zones/" + zoneID + "/dns_records"
+ res, err := api.makeRequest("POST", uri, rr)
+ if err != nil {
+ return DNSRecord{}, errors.Wrap(err, errMakeRequestError)
+ }
+ var r DNSRecordResponse
+ err = json.Unmarshal(res, &r)
+ if err != nil {
+ return DNSRecord{}, errors.Wrap(err, errUnmarshalError)
+ }
+ return r.Result, nil
+}
+
+/*
+Fetches DNS records for a zone.
+
+API reference:
+ https://api.cloudflare.com/#dns-records-for-a-zone-list-dns-records
+ GET /zones/:zone_identifier/dns_records
+*/
+func (api *API) DNSRecords(zoneID string, rr DNSRecord) ([]DNSRecord, error) {
+ // Construct a query string
+ v := url.Values{}
+ if rr.Name != "" {
+ v.Set("name", rr.Name)
+ }
+ if rr.Type != "" {
+ v.Set("type", rr.Type)
+ }
+ if rr.Content != "" {
+ v.Set("content", rr.Content)
+ }
+ var query string
+ if len(v) > 0 {
+ query = "?" + v.Encode()
+ }
+ uri := "/zones/" + zoneID + "/dns_records" + query
+ res, err := api.makeRequest("GET", uri, nil)
+ if err != nil {
+ return []DNSRecord{}, errors.Wrap(err, errMakeRequestError)
+ }
+ var r DNSListResponse
+ err = json.Unmarshal(res, &r)
+ if err != nil {
+ return []DNSRecord{}, errors.Wrap(err, errUnmarshalError)
+ }
+ return r.Result, nil
+}
+
+/*
+Fetches a single DNS record.
+
+API reference:
+ https://api.cloudflare.com/#dns-records-for-a-zone-dns-record-details
+ GET /zones/:zone_identifier/dns_records/:identifier
+*/
+func (api *API) DNSRecord(zoneID, recordID string) (DNSRecord, error) {
+ uri := "/zones/" + zoneID + "/dns_records/" + recordID
+ res, err := api.makeRequest("GET", uri, nil)
+ if err != nil {
+ return DNSRecord{}, errors.Wrap(err, errMakeRequestError)
+ }
+ var r DNSRecordResponse
+ err = json.Unmarshal(res, &r)
+ if err != nil {
+ return DNSRecord{}, errors.Wrap(err, errUnmarshalError)
+ }
+ return r.Result, nil
+}
+
+/*
+Change a DNS record.
+
+API reference:
+ https://api.cloudflare.com/#dns-records-for-a-zone-update-dns-record
+ PUT /zones/:zone_identifier/dns_records/:identifier
+*/
+func (api *API) UpdateDNSRecord(zoneID, recordID string, rr DNSRecord) error {
+ rec, err := api.DNSRecord(zoneID, recordID)
+ if err != nil {
+ return err
+ }
+ rr.Name = rec.Name
+ rr.Type = rec.Type
+ uri := "/zones/" + zoneID + "/dns_records/" + recordID
+ res, err := api.makeRequest("PUT", uri, rr)
+ if err != nil {
+ return errors.Wrap(err, errMakeRequestError)
+ }
+ var r DNSRecordResponse
+ err = json.Unmarshal(res, &r)
+ if err != nil {
+ return errors.Wrap(err, errUnmarshalError)
+ }
+ return nil
+}
+
+/*
+Delete a DNS record.
+
+API reference:
+ https://api.cloudflare.com/#dns-records-for-a-zone-delete-dns-record
+ DELETE /zones/:zone_identifier/dns_records/:identifier
+*/
+func (api *API) DeleteDNSRecord(zoneID, recordID string) error {
+ uri := "/zones/" + zoneID + "/dns_records/" + recordID
+ res, err := api.makeRequest("DELETE", uri, nil)
+ if err != nil {
+ return errors.Wrap(err, errMakeRequestError)
+ }
+ var r DNSRecordResponse
+ err = json.Unmarshal(res, &r)
+ if err != nil {
+ return errors.Wrap(err, errUnmarshalError)
+ }
+ return nil
+}
diff --git a/vendor/github.com/mitchellh/cloudflare-go/ips.go b/vendor/github.com/mitchellh/cloudflare-go/ips.go
new file mode 100644
index 000000000000..52f79f6a5619
--- /dev/null
+++ b/vendor/github.com/mitchellh/cloudflare-go/ips.go
@@ -0,0 +1,36 @@
+package cloudflare
+
+import (
+ "encoding/json"
+ "io/ioutil"
+ "net/http"
+
+ "github.com/pkg/errors"
+)
+
+/*
+IPs gets a list of CloudFlare's IP ranges
+
+This does not require logging in to the API.
+
+API reference:
+ https://api.cloudflare.com/#cloudflare-ips
+ GET /client/v4/ips
+*/
+func IPs() (IPRanges, error) {
+ resp, err := http.Get(apiURL + "/ips")
+ if err != nil {
+ return IPRanges{}, errors.Wrap(err, "HTTP request failed")
+ }
+ defer resp.Body.Close()
+ body, err := ioutil.ReadAll(resp.Body)
+ if err != nil {
+ return IPRanges{}, errors.Wrap(err, "Response body could not be read")
+ }
+ var r IPsResponse
+ err = json.Unmarshal(body, &r)
+ if err != nil {
+ return IPRanges{}, errors.Wrap(err, errUnmarshalError)
+ }
+ return r.Result, nil
+}
diff --git a/vendor/github.com/mitchellh/cloudflare-go/keyless.go b/vendor/github.com/mitchellh/cloudflare-go/keyless.go
new file mode 100644
index 000000000000..f12c3910cdb4
--- /dev/null
+++ b/vendor/github.com/mitchellh/cloudflare-go/keyless.go
@@ -0,0 +1,26 @@
+package cloudflare
+
+// https://api.cloudflare.com/#keyless-ssl-for-a-zone-create-a-keyless-ssl-configuration
+// POST /zones/:zone_identifier/keyless_certificates
+func (c *API) CreateKeyless() {
+}
+
+// https://api.cloudflare.com/#keyless-ssl-for-a-zone-list-keyless-ssls
+// GET /zones/:zone_identifier/keyless_certificates
+func (c *API) ListKeyless() {
+}
+
+// https://api.cloudflare.com/#keyless-ssl-for-a-zone-keyless-ssl-details
+// GET /zones/:zone_identifier/keyless_certificates/:identifier
+func (c *API) Keyless() {
+}
+
+// https://api.cloudflare.com/#keyless-ssl-for-a-zone-update-keyless-configuration
+// PATCH /zones/:zone_identifier/keyless_certificates/:identifier
+func (c *API) UpdateKeyless() {
+}
+
+// https://api.cloudflare.com/#keyless-ssl-for-a-zone-delete-keyless-configuration
+// DELETE /zones/:zone_identifier/keyless_certificates/:identifier
+func (c *API) DeleteKeyless() {
+}
diff --git a/vendor/github.com/mitchellh/cloudflare-go/pagerules.go b/vendor/github.com/mitchellh/cloudflare-go/pagerules.go
new file mode 100644
index 000000000000..fae180d712fb
--- /dev/null
+++ b/vendor/github.com/mitchellh/cloudflare-go/pagerules.go
@@ -0,0 +1,231 @@
+package cloudflare
+
+import (
+ "encoding/json"
+
+ "github.com/pkg/errors"
+)
+
+/*
+PageRuleTarget is the target to evaluate on a request.
+
+Currently Target must always be "url" and Operator must be "matches". Value
+is the URL pattern to match against.
+*/
+type PageRuleTarget struct {
+ Target string `json:"target"`
+ Constraint struct {
+ Operator string `json:"operator"`
+ Value string `json:"value"`
+ } `json:"constraint"`
+}
+
+/*
+PageRuleAction is the action to take when the target is matched.
+
+Valid IDs are:
+
+ always_online
+ always_use_https
+ browser_cache_ttl
+ browser_check
+ cache_level
+ disable_apps
+ disable_performance
+ disable_security
+ edge_cache_ttl
+ email_obfuscation
+ forwarding_url
+ ip_geolocation
+ mirage
+ railgun
+ rocket_loader
+ security_level
+ server_side_exclude
+ smart_errors
+ ssl
+ waf
+*/
+type PageRuleAction struct {
+ ID string `json:"id"`
+ Value interface{} `json:"value"`
+}
+
+// PageRuleActions maps API action IDs to human-readable strings
+var PageRuleActions = map[string]string{
+ "always_online": "Always Online", // Value of type string
+ "always_use_https": "Always Use HTTPS", // Value of type interface{}
+ "browser_cache_ttl": "Browser Cache TTL", // Value of type int
+ "browser_check": "Browser Integrity Check", // Value of type string
+ "cache_level": "Cache Level", // Value of type string
+ "disable_apps": "Disable Apps", // Value of type interface{}
+ "disable_performance": "Disable Performance", // Value of type interface{}
+ "disable_security": "Disable Security", // Value of type interface{}
+ "edge_cache_ttl": "Edge Cache TTL", // Value of type int
+ "email_obfuscation": "Email Obfuscation", // Value of type string
+ "forwarding_url": "Forwarding URL", // Value of type map[string]interface
+ "ip_geolocation": "IP Geolocation Header", // Value of type string
+ "mirage": "Mirage", // Value of type string
+ "railgun": "Railgun", // Value of type string
+ "rocket_loader": "Rocket Loader", // Value of type string
+ "security_level": "Security Level", // Value of type string
+ "server_side_exclude": "Server Side Excludes", // Value of type string
+ "smart_errors": "Smart Errors", // Value of type string
+ "ssl": "SSL", // Value of type string
+ "waf": "Web Application Firewall", // Value of type string
+}
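+
+// A minimal illustrative sketch (not part of the upstream library) of building a
+// rule from these pieces; "example.com" and the variable names are assumptions:
+//
+//	target := PageRuleTarget{Target: "url"}
+//	target.Constraint.Operator = "matches"
+//	target.Constraint.Value = "example.com/images/*"
+//	rule := PageRule{
+//		Targets:  []PageRuleTarget{target},
+//		Actions:  []PageRuleAction{{ID: "cache_level", Value: "cache_everything"}},
+//		Priority: 1,
+//		Status:   "active",
+//	}
+//	err := api.CreatePageRule(zoneID, rule)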
+
+// PageRule describes a Page Rule.
+type PageRule struct {
+ ID string `json:"id,omitempty"`
+ Targets []PageRuleTarget `json:"targets"`
+ Actions []PageRuleAction `json:"actions"`
+ Priority int `json:"priority"`
+ Status string `json:"status"` // can be: active, paused
+ ModifiedOn string `json:"modified_on,omitempty"`
+ CreatedOn string `json:"created_on,omitempty"`
+}
+
+// PageRuleDetailResponse is the API response, containing a single PageRule.
+type PageRuleDetailResponse struct {
+ Success bool `json:"success"`
+ Errors []string `json:"errors"`
+ Messages []string `json:"messages"`
+ Result PageRule `json:"result"`
+}
+
+// PageRulesResponse is the API response, containing an array of PageRules.
+type PageRulesResponse struct {
+ Success bool `json:"success"`
+ Errors []string `json:"errors"`
+ Messages []string `json:"messages"`
+ Result []PageRule `json:"result"`
+}
+
+/*
+CreatePageRule creates a new Page Rule for a zone.
+
+API reference:
+ https://api.cloudflare.com/#page-rules-for-a-zone-create-a-page-rule
+ POST /zones/:zone_identifier/pagerules
+*/
+func (api *API) CreatePageRule(zoneID string, rule PageRule) error {
+ uri := "/zones/" + zoneID + "/pagerules"
+ res, err := api.makeRequest("POST", uri, rule)
+ if err != nil {
+ return errors.Wrap(err, errMakeRequestError)
+ }
+ var r PageRuleDetailResponse
+ err = json.Unmarshal(res, &r)
+ if err != nil {
+ return errors.Wrap(err, errUnmarshalError)
+ }
+ return nil
+}
+
+/*
+ListPageRules returns all Page Rules for a zone.
+
+API reference:
+ https://api.cloudflare.com/#page-rules-for-a-zone-list-page-rules
+ GET /zones/:zone_identifier/pagerules
+*/
+func (api *API) ListPageRules(zoneID string) ([]PageRule, error) {
+ uri := "/zones/" + zoneID + "/pagerules"
+ res, err := api.makeRequest("GET", uri, nil)
+ if err != nil {
+ return []PageRule{}, errors.Wrap(err, errMakeRequestError)
+ }
+ var r PageRulesResponse
+ err = json.Unmarshal(res, &r)
+ if err != nil {
+ return []PageRule{}, errors.Wrap(err, errUnmarshalError)
+ }
+ return r.Result, nil
+}
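+
+// examplePageRules is an editor's illustrative sketch, not part of the
+// upstream library: it lists the Page Rules for a zone and resolves each
+// action ID to its human-readable label via PageRuleActions.
+func examplePageRules(api *API, zoneID string) (map[string]string, error) {
+	labels := make(map[string]string)
+	rules, err := api.ListPageRules(zoneID)
+	if err != nil {
+		return nil, err
+	}
+	for _, rule := range rules {
+		for _, action := range rule.Actions {
+			labels[action.ID] = PageRuleActions[action.ID]
+		}
+	}
+	return labels, nil
+}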
+
+/*
+PageRule fetches detail about one Page Rule for a zone.
+
+API reference:
+ https://api.cloudflare.com/#page-rules-for-a-zone-page-rule-details
+ GET /zones/:zone_identifier/pagerules/:identifier
+*/
+func (api *API) PageRule(zoneID, ruleID string) (PageRule, error) {
+ uri := "/zones/" + zoneID + "/pagerules/" + ruleID
+ res, err := api.makeRequest("GET", uri, nil)
+ if err != nil {
+ return PageRule{}, errors.Wrap(err, errMakeRequestError)
+ }
+ var r PageRuleDetailResponse
+ err = json.Unmarshal(res, &r)
+ if err != nil {
+ return PageRule{}, errors.Wrap(err, errUnmarshalError)
+ }
+ return r.Result, nil
+}
+
+/*
+ChangePageRule lets you change individual settings for a Page Rule. This is in
+contrast to UpdatePageRule, which replaces the entire Page Rule.
+
+API reference:
+ https://api.cloudflare.com/#page-rules-for-a-zone-change-a-page-rule
+ PATCH /zones/:zone_identifier/pagerules/:identifier
+*/
+func (api *API) ChangePageRule(zoneID, ruleID string, rule PageRule) error {
+ uri := "/zones/" + zoneID + "/pagerules/" + ruleID
+ res, err := api.makeRequest("PATCH", uri, rule)
+ if err != nil {
+ return errors.Wrap(err, errMakeRequestError)
+ }
+ var r PageRuleDetailResponse
+ err = json.Unmarshal(res, &r)
+ if err != nil {
+ return errors.Wrap(err, errUnmarshalError)
+ }
+ return nil
+}
+
+/*
+UpdatePageRule lets you replace a Page Rule. This is in contrast to
+ChangePageRule which lets you change individual settings.
+
+API reference:
+ https://api.cloudflare.com/#page-rules-for-a-zone-update-a-page-rule
+ PUT /zones/:zone_identifier/pagerules/:identifier
+*/
+func (api *API) UpdatePageRule(zoneID, ruleID string, rule PageRule) error {
+ uri := "/zones/" + zoneID + "/pagerules/" + ruleID
+ res, err := api.makeRequest("PUT", uri, nil)
+ if err != nil {
+ return errors.Wrap(err, errMakeRequestError)
+ }
+ var r PageRuleDetailResponse
+ err = json.Unmarshal(res, &r)
+ if err != nil {
+ return errors.Wrap(err, errUnmarshalError)
+ }
+ return nil
+}
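+
+// examplePageRuleReplace is an editor's illustrative sketch, not part of the
+// upstream library: it replaces an existing Page Rule wholesale via
+// UpdatePageRule (PUT); ChangePageRule (PATCH) would merge individual
+// settings into the existing rule instead.
+func examplePageRuleReplace(api *API, zoneID, ruleID string, rule PageRule) error {
+	return api.UpdatePageRule(zoneID, ruleID, rule)
+}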
+
+/*
+DeletePageRule deletes a Page Rule for a zone.
+
+API reference:
+ https://api.cloudflare.com/#page-rules-for-a-zone-delete-a-page-rule
+ DELETE /zones/:zone_identifier/pagerules/:identifier
+*/
+func (api *API) DeletePageRule(zoneID, ruleID string) error {
+ uri := "/zones/" + zoneID + "/pagerules/" + ruleID
+ res, err := api.makeRequest("DELETE", uri, nil)
+ if err != nil {
+ return errors.Wrap(err, errMakeRequestError)
+ }
+ var r PageRuleDetailResponse
+ err = json.Unmarshal(res, &r)
+ if err != nil {
+ return errors.Wrap(err, errUnmarshalError)
+ }
+ return nil
+}
diff --git a/vendor/github.com/mitchellh/cloudflare-go/railgun.go b/vendor/github.com/mitchellh/cloudflare-go/railgun.go
new file mode 100644
index 000000000000..a4769d206973
--- /dev/null
+++ b/vendor/github.com/mitchellh/cloudflare-go/railgun.go
@@ -0,0 +1,37 @@
+package cloudflare
+
+// Railgun
+
+// https://api.cloudflare.com/#railgun-create-railgun
+// POST /railguns
+func (c *API) CreateRailgun() {
+}
+
+// https://api.cloudflare.com/#railgun-railgun-details
+// GET /railguns/:identifier
+
+// https://api.cloudflare.com/#railgun-get-zones-connected-to-a-railgun
+// GET /railguns/:identifier/zones
+
+// https://api.cloudflare.com/#railgun-enable-or-disable-a-railgun
+// PATCH /railguns/:identifier
+
+// https://api.cloudflare.com/#railgun-delete-railgun
+// DELETE /railguns/:identifier
+
+// Zone railgun info
+
+// https://api.cloudflare.com/#railguns-for-a-zone-get-available-railguns
+// GET /zones/:zone_identifier/railguns
+func (c *API) Railguns() {
+}
+
+// https://api.cloudflare.com/#railguns-for-a-zone-get-railgun-details
+// GET /zones/:zone_identifier/railguns/:identifier
+func (c *API) Railgun() {
+}
+
+// https://api.cloudflare.com/#railguns-for-a-zone-connect-or-disconnect-a-railgun
+// PATCH /zones/:zone_identifier/railguns/:identifier
+func (c *API) ZoneRailgun(connected bool) {
+}
diff --git a/vendor/github.com/mitchellh/cloudflare-go/ssl.go b/vendor/github.com/mitchellh/cloudflare-go/ssl.go
new file mode 100644
index 000000000000..8dfc74f99cae
--- /dev/null
+++ b/vendor/github.com/mitchellh/cloudflare-go/ssl.go
@@ -0,0 +1,31 @@
+package cloudflare
+
+// https://api.cloudflare.com/#custom-ssl-for-a-zone-create-ssl-configuration
+// POST /zones/:zone_identifier/custom_certificates
+func (c *API) CreateSSL() {
+}
+
+// https://api.cloudflare.com/#custom-ssl-for-a-zone-list-ssl-configurations
+// GET /zones/:zone_identifier/custom_certificates
+func (c *API) ListSSL() {
+}
+
+// https://api.cloudflare.com/#custom-ssl-for-a-zone-ssl-configuration-details
+// GET /zones/:zone_identifier/custom_certificates/:identifier
+func (c *API) SSLDetails() {
+}
+
+// https://api.cloudflare.com/#custom-ssl-for-a-zone-update-ssl-configuration
+// PATCH /zones/:zone_identifier/custom_certificates/:identifier
+func (c *API) UpdateSSL() {
+}
+
+// https://api.cloudflare.com/#custom-ssl-for-a-zone-re-prioritize-ssl-certificates
+// PUT /zones/:zone_identifier/custom_certificates/prioritize
+func (c *API) ReprioSSL() {
+}
+
+// https://api.cloudflare.com/#custom-ssl-for-a-zone-delete-an-ssl-certificate
+// DELETE /zones/:zone_identifier/custom_certificates/:identifier
+func (c *API) DeleteSSL() {
+}
diff --git a/vendor/github.com/mitchellh/cloudflare-go/user.go b/vendor/github.com/mitchellh/cloudflare-go/user.go
new file mode 100644
index 000000000000..8c2344453e4d
--- /dev/null
+++ b/vendor/github.com/mitchellh/cloudflare-go/user.go
@@ -0,0 +1,35 @@
+package cloudflare
+
+import (
+ "encoding/json"
+
+ "github.com/pkg/errors"
+)
+
+/*
+UserDetails returns information about the logged-in user.
+
+API reference: https://api.cloudflare.com/#user-user-details
+*/
+func (api API) UserDetails() (User, error) {
+ var r UserResponse
+ res, err := api.makeRequest("GET", "/user", nil)
+ if err != nil {
+ return User{}, errors.Wrap(err, errMakeRequestError)
+ }
+ err = json.Unmarshal(res, &r)
+ if err != nil {
+ return User{}, errors.Wrap(err, errUnmarshalError)
+ }
+ return r.Result, nil
+}
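+
+// exampleUserDetails is an editor's illustrative sketch, not part of the
+// upstream library: it fetches the authenticated user's details and returns
+// only the error, since the User fields are defined elsewhere in this package.
+func exampleUserDetails(api API) error {
+	_, err := api.UserDetails()
+	return err
+}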
+
+/*
+UpdateUser updates user properties. Not yet implemented.
+
+API reference: https://api.cloudflare.com/#user-update-user
+*/
+func (api API) UpdateUser() (User, error) {
+ // api.makeRequest("PATCH", "/user", user)
+ return User{}, nil
+}
diff --git a/vendor/github.com/mitchellh/cloudflare-go/waf.go b/vendor/github.com/mitchellh/cloudflare-go/waf.go
new file mode 100644
index 000000000000..f3dbe5cfeb6a
--- /dev/null
+++ b/vendor/github.com/mitchellh/cloudflare-go/waf.go
@@ -0,0 +1,55 @@
+package cloudflare
+
+import (
+ "encoding/json"
+
+ "github.com/pkg/errors"
+)
+
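+// ListWAFPackages returns the WAF packages defined for the given zone.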
+func (api *API) ListWAFPackages(zoneID string) ([]WAFPackage, error) {
+ var p WAFPackagesResponse
+ var packages []WAFPackage
+ var res []byte
+ var err error
+ uri := "/zones/" + zoneID + "/firewall/waf/packages"
+ res, err = api.makeRequest("GET", uri, nil)
+ if err != nil {
+ return []WAFPackage{}, errors.Wrap(err, errMakeRequestError)
+ }
+ err = json.Unmarshal(res, &p)
+ if err != nil {
+ return []WAFPackage{}, errors.Wrap(err, errUnmarshalError)
+ }
+ if !p.Success {
+ // TODO: Provide an actual error message instead of always returning nil
+ return []WAFPackage{}, err
+ }
+ for pi := range p.Result {
+ packages = append(packages, p.Result[pi])
+ }
+ return packages, nil
+}
+
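+// ListWAFRules returns the WAF rules contained in the given WAF package for a zone.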
+func (api *API) ListWAFRules(zoneID, packageID string) ([]WAFRule, error) {
+ var r WAFRulesResponse
+ var rules []WAFRule
+ var res []byte
+ var err error
+ uri := "/zones/" + zoneID + "/firewall/waf/packages/" + packageID + "/rules"
+ res, err = api.makeRequest("GET", uri, nil)
+ if err != nil {
+ return []WAFRule{}, errors.Wrap(err, errMakeRequestError)
+ }
+ err = json.Unmarshal(res, &r)
+ if err != nil {
+ return []WAFRule{}, errors.Wrap(err, errUnmarshalError)
+ }
+ if !r.Success {
+ // TODO: Provide an actual error message instead of always returning nil
+ return []WAFRule{}, err
+ }
+ for ri := range r.Result {
+ rules = append(rules, r.Result[ri])
+ }
+ return rules, nil
+}
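+
+// exampleListWAF is an editor's illustrative sketch, not part of the upstream
+// library: it walks every WAF package in a zone and lists the rules of each.
+// It assumes WAFPackage exposes an ID field, which is defined elsewhere in
+// this package.
+func exampleListWAF(api *API, zoneID string) error {
+	packages, err := api.ListWAFPackages(zoneID)
+	if err != nil {
+		return err
+	}
+	for _, pkg := range packages {
+		if _, err := api.ListWAFRules(zoneID, pkg.ID); err != nil {
+			return err
+		}
+	}
+	return nil
+}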
diff --git a/vendor/github.com/mitchellh/cloudflare-go/zone.go b/vendor/github.com/mitchellh/cloudflare-go/zone.go
new file mode 100644
index 000000000000..7a681c5deb8f
--- /dev/null
+++ b/vendor/github.com/mitchellh/cloudflare-go/zone.go
@@ -0,0 +1,145 @@
+package cloudflare
+
+import (
+ "encoding/json"
+ "net/url"
+
+ "github.com/pkg/errors"
+)
+
+/*
+CreateZone creates a zone on an account. Not yet implemented.
+
+API reference: https://api.cloudflare.com/#zone-create-a-zone
+*/
+func (api *API) CreateZone(z Zone) {
+ // res, err := api.makeRequest("POST", "/zones", z)
+}
+
+/*
+ListZones lists zones on an account. Optionally takes a list of zone names to filter the results.
+
+API reference: https://api.cloudflare.com/#zone-list-zones
+*/
+func (api *API) ListZones(z ...string) ([]Zone, error) {
+ v := url.Values{}
+ var res []byte
+ var r ZoneResponse
+ var zones []Zone
+ var err error
+ if len(z) > 0 {
+ for _, zone := range z {
+ v.Set("name", zone)
+ res, err = api.makeRequest("GET", "/zones?"+v.Encode(), nil)
+ if err != nil {
+ return []Zone{}, errors.Wrap(err, errMakeRequestError)
+ }
+ err = json.Unmarshal(res, &r)
+ if err != nil {
+ return []Zone{}, errors.Wrap(err, errUnmarshalError)
+ }
+ if !r.Success {
+ // TODO: Provide an actual error message instead of always returning nil
+ return []Zone{}, err
+ }
+ for zi := range r.Result {
+ zones = append(zones, r.Result[zi])
+ }
+ }
+ } else {
+ res, err = api.makeRequest("GET", "/zones", nil)
+ if err != nil {
+ return []Zone{}, errors.Wrap(err, errMakeRequestError)
+ }
+ err = json.Unmarshal(res, &r)
+ if err != nil {
+ return []Zone{}, errors.Wrap(err, errUnmarshalError)
+ }
+ zones = r.Result
+ }
+
+ return zones, nil
+}
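+
+// exampleListZones is an editor's illustrative sketch, not part of the
+// upstream library: with no arguments ListZones returns every zone on the
+// account; passing one or more zone names filters the results.
+func exampleListZones(api *API) ([]Zone, error) {
+	return api.ListZones("example.com")
+}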
+
+/*
+ZoneDetails fetches information about a zone.
+
+API reference:
+	https://api.cloudflare.com/#zone-zone-details
+	GET /zones/:id
+*/
+func (api *API) ZoneDetails(z Zone) {
+ // XXX: Should we make the user get the zone ID themselves with ListZones, or do the hard work here?
+ // ListZones gives the same information as this endpoint anyway so perhaps this is of limited use?
+ // Maybe for users who already know the ID or fetched it in another call.
+ type result struct {
+ Response
+ Result Zone `json:"result"`
+ }
+ // If z has an ID then query for that directly, else call ListZones to
+ // fetch by name.
+ // var zone Zone
+ if z.ID != "" {
+ // res, _ := makeRequest(c, "GET", "/zones/"+z.ID, nil)
+ // zone = res.Result
+ } else {
+ // zones, err := ListZones(c, z.Name)
+ // if err != nil {
+ // return
+ // }
+ // Only one zone should have been returned
+ // zone := zones[0]
+ }
+}
+
+// https://api.cloudflare.com/#zone-edit-zone-properties
+// PATCH /zones/:id
+func EditZone() {
+}
+
+// https://api.cloudflare.com/#zone-purge-all-files
+// DELETE /zones/:id/purge_cache
+func (api *API) PurgeEverything(zoneID string) (PurgeCacheResponse, error) {
+ uri := "/zones/" + zoneID + "/purge_cache"
+ res, err := api.makeRequest("DELETE", uri, PurgeCacheRequest{true, nil, nil})
+ if err != nil {
+ return PurgeCacheResponse{}, errors.Wrap(err, errMakeRequestError)
+ }
+ var r PurgeCacheResponse
+ err = json.Unmarshal(res, &r)
+ if err != nil {
+ return PurgeCacheResponse{}, errors.Wrap(err, errUnmarshalError)
+ }
+ return r, nil
+}
+
+// https://api.cloudflare.com/#zone-purge-individual-files-by-url-and-cache-tags
+// DELETE /zones/:id/purge_cache
+func (api *API) PurgeCache(zoneID string, pcr PurgeCacheRequest) (PurgeCacheResponse, error) {
+ uri := "/zones/" + zoneID + "/purge_cache"
+ res, err := api.makeRequest("DELETE", uri, pcr)
+ if err != nil {
+ return PurgeCacheResponse{}, errors.Wrap(err, errMakeRequestError)
+ }
+ var r PurgeCacheResponse
+ err = json.Unmarshal(res, &r)
+ if err != nil {
+ return PurgeCacheResponse{}, errors.Wrap(err, errUnmarshalError)
+ }
+ return r, nil
+}
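+
+// examplePurge is an editor's illustrative sketch, not part of the upstream
+// library: it flushes the entire cache for a zone. To purge selectively,
+// build a PurgeCacheRequest (defined elsewhere in this package) and pass it
+// to PurgeCache instead.
+func examplePurge(api *API, zoneID string) error {
+	_, err := api.PurgeEverything(zoneID)
+	return err
+}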
+
+// https://api.cloudflare.com/#zone-delete-a-zone
+// DELETE /zones/:id
+func DeleteZone() {
+}
+
+// Zone Plan
+// https://api.cloudflare.com/#zone-plan-available-plans
+// https://api.cloudflare.com/#zone-plan-plan-details
+
+// Zone Settings
+// https://api.cloudflare.com/#zone-settings-for-a-zone-get-all-zone-settings
+// e.g.
+// https://api.cloudflare.com/#zone-settings-for-a-zone-get-always-online-setting
+// https://api.cloudflare.com/#zone-settings-for-a-zone-change-always-online-setting
diff --git a/vendor/github.com/pearkes/cloudflare/README.md b/vendor/github.com/pearkes/cloudflare/README.md
deleted file mode 100644
index a1f967aefbd1..000000000000
--- a/vendor/github.com/pearkes/cloudflare/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
-## cloudflare
-
-This package provides the `cloudflare` package which offers
-an interface to the CloudFlare gAPI.
-
-It's intentionally designed to make heavy use of built-ins and strings
-in place of custom data structures and proper types. It also only implements
-specific endpoints, and doesn't have full API coverage.
-
-**For those reasons, I recommend looking elsewhere if you just need
-a standard CloudFlare API client.**
-
-### Documentation
-
-The full documentation is available on [Godoc](http://godoc.org/github.com/pearkes/cloudflare)
diff --git a/vendor/github.com/pearkes/cloudflare/api.go b/vendor/github.com/pearkes/cloudflare/api.go
deleted file mode 100644
index ed7fe93f70a8..000000000000
--- a/vendor/github.com/pearkes/cloudflare/api.go
+++ /dev/null
@@ -1,119 +0,0 @@
-package cloudflare
-
-import (
- "encoding/json"
- "fmt"
- "io/ioutil"
- "net/http"
- "net/url"
- "os"
-
- "github.com/hashicorp/go-cleanhttp"
-)
-
-// Client provides a client to the CloudflAre API
-type Client struct {
- // Access Token
- Token string
-
- // User Email
- Email string
-
- // URL to the DO API to use
- URL string
-
- // HttpClient is the client to use. Default will be
- // used if not provided.
- Http *http.Client
-}
-
-// NewClient returns a new cloudflare client,
-// requires an authorization token. You can generate
-// an OAuth token by visiting the Apps & API section
-// of the CloudflAre control panel for your account.
-func NewClient(email string, token string) (*Client, error) {
- // If it exists, grab teh token from the environment
- if token == "" {
- token = os.Getenv("CLOUDFLARE_TOKEN")
- }
-
- if email == "" {
- email = os.Getenv("CLOUDFLARE_EMAIL")
- }
-
- client := Client{
- Token: token,
- Email: email,
- URL: "https://www.cloudflare.com/api_json.html",
- Http: cleanhttp.DefaultClient(),
- }
- return &client, nil
-}
-
-// Creates a new request with the params
-func (c *Client) NewRequest(params map[string]string, method string, action string) (*http.Request, error) {
- p := url.Values{}
- u, err := url.Parse(c.URL)
-
- if err != nil {
- return nil, fmt.Errorf("Error parsing base URL: %s", err)
- }
-
- // Build up our request parameters
- for k, v := range params {
- p.Add(k, v)
- }
-
- // Add authentication details
- p.Add("tkn", c.Token)
- p.Add("email", c.Email)
-
- // The "action" to take against the API
- p.Add("a", action)
-
- // Add the params to our URL
- u.RawQuery = p.Encode()
-
- // Build the request
- req, err := http.NewRequest(method, u.String(), nil)
-
- if err != nil {
- return nil, fmt.Errorf("Error creating request: %s", err)
- }
-
- return req, nil
-
-}
-
-// decodeBody is used to JSON decode a body
-func decodeBody(resp *http.Response, out interface{}) error {
- body, err := ioutil.ReadAll(resp.Body)
-
- if err != nil {
- return err
- }
-
- if err = json.Unmarshal(body, &out); err != nil {
- return err
- }
-
- return nil
-}
-
-// checkResp wraps http.Client.Do() and verifies that the
-// request was successful. A non-200 request returns an error
-// formatted to included any validation problems or otherwise
-func checkResp(resp *http.Response, err error) (*http.Response, error) {
- // If the err is already there, there was an error higher
- // up the chain, so just return that
- if err != nil {
- return resp, err
- }
-
- switch i := resp.StatusCode; {
- case i == 200:
- return resp, nil
- default:
- return nil, fmt.Errorf("API Error: %s", resp.Status)
- }
-}
diff --git a/vendor/github.com/pearkes/cloudflare/record.go b/vendor/github.com/pearkes/cloudflare/record.go
deleted file mode 100644
index a3bed92a22d7..000000000000
--- a/vendor/github.com/pearkes/cloudflare/record.go
+++ /dev/null
@@ -1,334 +0,0 @@
-package cloudflare
-
-import (
- "errors"
- "fmt"
- "strings"
-)
-
-type RecordsResponse struct {
- Response struct {
- Recs struct {
- Records []Record `json:"objs"`
- } `json:"recs"`
- } `json:"response"`
- Result string `json:"result"`
- Message string `json:"msg"`
-}
-
-func (r *RecordsResponse) FindRecord(id string) (*Record, error) {
- if r.Result == "error" {
- return nil, fmt.Errorf("API Error: %s", r.Message)
- }
-
- objs := r.Response.Recs.Records
- notFoundErr := errors.New("Record not found")
-
- // No objects, return nil
- if len(objs) < 0 {
- return nil, notFoundErr
- }
-
- for _, v := range objs {
- // We have a match, return that
- if v.Id == id {
- return &v, nil
- }
- }
-
- return nil, notFoundErr
-}
-
-func (r *RecordsResponse) FindRecordByName(name string, wildcard bool) ([]Record, error) {
- if r.Result == "error" {
- return nil, fmt.Errorf("API Error: %s", r.Message)
- }
-
- objs := r.Response.Recs.Records
- notFoundErr := errors.New("Record not found")
-
- // No objects, return nil
- if len(objs) < 0 {
- return nil, notFoundErr
- }
-
- var recs []Record
- suffix := "." + name
-
- for _, v := range objs {
- if v.Name == name {
- recs = append(recs, v)
- } else if wildcard && strings.HasSuffix(v.Name, suffix) {
- recs = append(recs, v)
- }
- }
-
- return recs, nil
-}
-
-type RecordResponse struct {
- Response struct {
- Rec struct {
- Record Record `json:"obj"`
- } `json:"rec"`
- } `json:"response"`
- Result string `json:"result"`
- Message string `json:"msg"`
-}
-
-func (r *RecordResponse) GetRecord() (*Record, error) {
- if r.Result == "error" {
- return nil, fmt.Errorf("API Error: %s", r.Message)
- }
-
- return &r.Response.Rec.Record, nil
-}
-
-// Record is used to represent a retrieved Record. All properties
-// are set as strings.
-type Record struct {
- Id string `json:"rec_id"`
- Domain string `json:"zone_name"`
- Name string `json:"display_name"`
- FullName string `json:"name"`
- Value string `json:"content"`
- Type string `json:"type"`
- Priority string `json:"prio"`
- Ttl string `json:"ttl"`
-}
-
-// CreateRecord contains the request parameters to create a new
-// record.
-type CreateRecord struct {
- Type string
- Name string
- Content string
- Ttl string
- Priority string
-}
-
-// CreateRecord creates a record from the parameters specified and
-// returns an error if it fails. If no error and the name is returned,
-// the Record was succesfully created.
-func (c *Client) CreateRecord(domain string, opts *CreateRecord) (*Record, error) {
- // Make the request parameters
- params := make(map[string]string)
- params["z"] = domain
-
- params["type"] = opts.Type
-
- if opts.Name != "" {
- params["name"] = opts.Name
- }
-
- if opts.Content != "" {
- params["content"] = opts.Content
- }
-
- if opts.Priority != "" {
- params["prio"] = opts.Priority
- }
-
- if opts.Ttl != "" {
- params["ttl"] = opts.Ttl
- } else {
- params["ttl"] = "1"
- }
-
- req, err := c.NewRequest(params, "POST", "rec_new")
- if err != nil {
- return nil, err
- }
-
- resp, err := checkResp(c.Http.Do(req))
-
- if err != nil {
- return nil, fmt.Errorf("Error creating record: %s", err)
- }
-
- recordResp := new(RecordResponse)
-
- err = decodeBody(resp, &recordResp)
-
- if err != nil {
- return nil, fmt.Errorf("Error parsing record response: %s", err)
- }
- record, err := recordResp.GetRecord()
- if err != nil {
- return nil, err
- }
-
- // The request was successful
- return record, nil
-}
-
-// DestroyRecord destroys a record by the ID specified and
-// returns an error if it fails. If no error is returned,
-// the Record was succesfully destroyed.
-func (c *Client) DestroyRecord(domain string, id string) error {
- params := make(map[string]string)
-
- params["z"] = domain
- params["id"] = id
-
- req, err := c.NewRequest(params, "POST", "rec_delete")
- if err != nil {
- return err
- }
-
- resp, err := checkResp(c.Http.Do(req))
-
- if err != nil {
- return fmt.Errorf("Error deleting record: %s", err)
- }
-
- recordResp := new(RecordResponse)
-
- err = decodeBody(resp, &recordResp)
-
- if err != nil {
- return fmt.Errorf("Error parsing record response: %s", err)
- }
- _, err = recordResp.GetRecord()
- if err != nil {
- return err
- }
-
- // The request was successful
- return nil
-}
-
-// UpdateRecord contains the request parameters to update a
-// record.
-type UpdateRecord struct {
- Type string
- Name string
- Content string
- Ttl string
- Priority string
-}
-
-// UpdateRecord destroys a record by the ID specified and
-// returns an error if it fails. If no error is returned,
-// the Record was succesfully updated.
-func (c *Client) UpdateRecord(domain string, id string, opts *UpdateRecord) error {
- params := make(map[string]string)
- params["z"] = domain
- params["id"] = id
-
- params["type"] = opts.Type
-
- if opts.Name != "" {
- params["name"] = opts.Name
- }
-
- if opts.Content != "" {
- params["content"] = opts.Content
- }
-
- if opts.Priority != "" {
- params["prio"] = opts.Priority
- }
-
- if opts.Ttl != "" {
- params["ttl"] = opts.Ttl
- } else {
- params["ttl"] = "1"
- }
-
- req, err := c.NewRequest(params, "POST", "rec_edit")
- if err != nil {
- return err
- }
-
- resp, err := checkResp(c.Http.Do(req))
-
- if err != nil {
- return fmt.Errorf("Error updating record: %s", err)
- }
-
- recordResp := new(RecordResponse)
-
- err = decodeBody(resp, &recordResp)
-
- if err != nil {
- return fmt.Errorf("Error parsing record response: %s", err)
- }
- _, err = recordResp.GetRecord()
- if err != nil {
- return err
- }
-
- // The request was successful
- return nil
-}
-
-func (c *Client) RetrieveRecordsByName(domain string, name string, wildcard bool) ([]Record, error) {
- params := make(map[string]string)
- // The zone we want
- params["z"] = domain
-
- req, err := c.NewRequest(params, "GET", "rec_load_all")
-
- if err != nil {
- return nil, err
- }
-
- resp, err := checkResp(c.Http.Do(req))
- if err != nil {
- return nil, fmt.Errorf("Error retrieving record: %s", err)
- }
-
- records := new(RecordsResponse)
-
- err = decodeBody(resp, records)
-
- if err != nil {
- return nil, fmt.Errorf("Error decoding record response: %s", err)
- }
-
- record, err := records.FindRecordByName(name, wildcard)
- if err != nil {
- return nil, err
- }
-
- // The request was successful
- return record, nil
-}
-
-// RetrieveRecord gets a record by the ID specified and
-// returns a Record and an error. An error will be returned for failed
-// requests with a nil Record.
-func (c *Client) RetrieveRecord(domain string, id string) (*Record, error) {
- params := make(map[string]string)
- // The zone we want
- params["z"] = domain
- params["id"] = id
-
- req, err := c.NewRequest(params, "GET", "rec_load_all")
-
- if err != nil {
- return nil, err
- }
-
- resp, err := checkResp(c.Http.Do(req))
- if err != nil {
- return nil, fmt.Errorf("Error retrieving record: %s", err)
- }
-
- records := new(RecordsResponse)
-
- err = decodeBody(resp, records)
-
- if err != nil {
- return nil, fmt.Errorf("Error decoding record response: %s", err)
- }
-
- record, err := records.FindRecord(id)
- if err != nil {
- return nil, err
- }
-
- // The request was successful
- return record, nil
-}
diff --git a/vendor/github.com/pkg/errors/.gitignore b/vendor/github.com/pkg/errors/.gitignore
new file mode 100644
index 000000000000..daf913b1b347
--- /dev/null
+++ b/vendor/github.com/pkg/errors/.gitignore
@@ -0,0 +1,24 @@
+# Compiled Object files, Static and Dynamic libs (Shared Objects)
+*.o
+*.a
+*.so
+
+# Folders
+_obj
+_test
+
+# Architecture specific extensions/prefixes
+*.[568vq]
+[568vq].out
+
+*.cgo1.go
+*.cgo2.c
+_cgo_defun.c
+_cgo_gotypes.go
+_cgo_export.*
+
+_testmain.go
+
+*.exe
+*.test
+*.prof
diff --git a/vendor/github.com/pkg/errors/.travis.yml b/vendor/github.com/pkg/errors/.travis.yml
new file mode 100644
index 000000000000..13f087a7d97d
--- /dev/null
+++ b/vendor/github.com/pkg/errors/.travis.yml
@@ -0,0 +1,10 @@
+language: go
+go_import_path: github.com/pkg/errors
+go:
+ - 1.4.3
+ - 1.5.4
+ - 1.6.1
+ - tip
+
+script:
+ - go test -v ./...
diff --git a/vendor/github.com/pkg/errors/LICENSE b/vendor/github.com/pkg/errors/LICENSE
new file mode 100644
index 000000000000..fafcaafdc75b
--- /dev/null
+++ b/vendor/github.com/pkg/errors/LICENSE
@@ -0,0 +1,24 @@
+Copyright (c) 2015, Dave Cheney
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+* Redistributions of source code must retain the above copyright notice, this
+ list of conditions and the following disclaimer.
+
+* Redistributions in binary form must reproduce the above copyright notice,
+ this list of conditions and the following disclaimer in the documentation
+ and/or other materials provided with the distribution.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
diff --git a/vendor/github.com/pkg/errors/README.md b/vendor/github.com/pkg/errors/README.md
new file mode 100644
index 000000000000..a9f7d88ef5fb
--- /dev/null
+++ b/vendor/github.com/pkg/errors/README.md
@@ -0,0 +1,52 @@
+# errors [![Travis-CI](https://travis-ci.org/pkg/errors.svg)](https://travis-ci.org/pkg/errors) [![GoDoc](https://godoc.org/github.com/pkg/errors?status.svg)](http://godoc.org/github.com/pkg/errors) [![Report card](https://goreportcard.com/badge/github.com/pkg/errors)](https://goreportcard.com/report/github.com/pkg/errors)
+
+Package errors implements functions for manipulating errors.
+
+The traditional error handling idiom in Go is roughly akin to
+```
+if err != nil {
+ return err
+}
+```
+which applied recursively up the call stack results in error reports without context or debugging information. The errors package allows programmers to add context to the failure path in their code in a way that does not destroy the original value of the error.
+
+## Adding context to an error
+
+The errors.Wrap function returns a new error that adds context to the original error. For example
+```
+_, err := ioutil.ReadAll(r)
+if err != nil {
+ return errors.Wrap(err, "read failed")
+}
+```
+In addition, `errors.Wrap` records the file and line where it was called, allowing the programmer to retrieve the path to the original error.
+
+## Retrieving the cause of an error
+
+Using `errors.Wrap` constructs a stack of errors, adding context to the preceding error. Depending on the nature of the error it may be necessary to reverse the operation of errors.Wrap to retrieve the original error for inspection. Any error value which implements this interface can be inspected by `errors.Cause`.
+```
+type causer interface {
+ Cause() error
+}
+```
+`errors.Cause` will recursively retrieve the topmost error which does not implement `causer`, which is assumed to be the original cause. For example:
+```
+switch err := errors.Cause(err).(type) {
+case *MyError:
+ // handle specifically
+default:
+ // unknown error
+}
+```
+
+Would you like to know more? Read the [blog post](http://dave.cheney.net/2016/04/27/dont-just-check-errors-handle-them-gracefully).
+
+## Contributing
+
+We welcome pull requests, bug fixes and issue reports. With that said, the bar for adding new symbols to this package is intentionally set high.
+
+Before proposing a change, please discuss your change by raising an issue.
+
+## Licence
+
+BSD-2-Clause
diff --git a/vendor/github.com/pkg/errors/errors.go b/vendor/github.com/pkg/errors/errors.go
new file mode 100644
index 000000000000..7ec1c5dd5728
--- /dev/null
+++ b/vendor/github.com/pkg/errors/errors.go
@@ -0,0 +1,248 @@
+// Package errors implements functions for manipulating errors.
+//
+// The traditional error handling idiom in Go is roughly akin to
+//
+// if err != nil {
+// return err
+// }
+//
+// which applied recursively up the call stack results in error reports
+// without context or debugging information. The errors package allows
+// programmers to add context to the failure path in their code in a way
+// that does not destroy the original value of the error.
+//
+// Adding context to an error
+//
+// The errors.Wrap function returns a new error that adds context to the
+// original error. For example
+//
+// _, err := ioutil.ReadAll(r)
+// if err != nil {
+// return errors.Wrap(err, "read failed")
+// }
+//
+// In addition, errors.Wrap records the file and line where it was called,
+// allowing the programmer to retrieve the path to the original error.
+//
+// Retrieving the cause of an error
+//
+// Using errors.Wrap constructs a stack of errors, adding context to the
+// preceding error. Depending on the nature of the error it may be necessary
+// to reverse the operation of errors.Wrap to retrieve the original error
+// for inspection. Any error value which implements this interface
+//
+// type causer interface {
+// Cause() error
+// }
+//
+// can be inspected by errors.Cause. errors.Cause will recursively retrieve
+// the topmost error which does not implement causer, which is assumed to be
+// the original cause. For example:
+//
+// switch err := errors.Cause(err).(type) {
+// case *MyError:
+// // handle specifically
+// default:
+// // unknown error
+// }
+package errors
+
+import (
+ "errors"
+ "fmt"
+ "io"
+ "os"
+ "runtime"
+ "strings"
+)
+
+// location represents a program counter that
+// implements the Location() method.
+type location uintptr
+
+func (l location) Location() (string, int) {
+ pc := uintptr(l) - 1
+ fn := runtime.FuncForPC(pc)
+ if fn == nil {
+ return "unknown", 0
+ }
+
+ file, line := fn.FileLine(pc)
+
+ // Here we want to get the source file path relative to the compile time
+ // GOPATH. As of Go 1.6.x there is no direct way to know the compiled
+ // GOPATH at runtime, but we can infer the number of path segments in the
+ // GOPATH. We note that fn.Name() returns the function name qualified by
+ // the import path, which does not include the GOPATH. Thus we can trim
+ // segments from the beginning of the file path until the number of path
+ // separators remaining is one more than the number of path separators in
+ // the function name. For example, given:
+ //
+ // GOPATH /home/user
+ // file /home/user/src/pkg/sub/file.go
+ // fn.Name() pkg/sub.Type.Method
+ //
+ // We want to produce:
+ //
+ // pkg/sub/file.go
+ //
+ // From this we can easily see that fn.Name() has one less path separator
+ // than our desired output. We count separators from the end of the file
+ // path until it finds two more than in the function name and then move
+ // one character forward to preserve the initial path segment without a
+ // leading separator.
+ const sep = "/"
+ goal := strings.Count(fn.Name(), sep) + 2
+ i := len(file)
+ for n := 0; n < goal; n++ {
+ i = strings.LastIndex(file[:i], sep)
+ if i == -1 {
+ // not enough separators found, set i so that the slice expression
+ // below leaves file unmodified
+ i = -len(sep)
+ break
+ }
+ }
+ // get back to 0 or trim the leading separator
+ file = file[i+len(sep):]
+
+ return file, line
+}
+
+// New returns an error that formats as the given text.
+func New(text string) error {
+ pc, _, _, _ := runtime.Caller(1)
+ return struct {
+ error
+ location
+ }{
+ errors.New(text),
+ location(pc),
+ }
+}
+
+type cause struct {
+ cause error
+ message string
+}
+
+func (c cause) Error() string { return c.Message() + ": " + c.Cause().Error() }
+func (c cause) Cause() error { return c.cause }
+func (c cause) Message() string { return c.message }
+
+// Errorf formats according to a format specifier and returns the string
+// as a value that satisfies error.
+func Errorf(format string, args ...interface{}) error {
+ pc, _, _, _ := runtime.Caller(1)
+ return struct {
+ error
+ location
+ }{
+ fmt.Errorf(format, args...),
+ location(pc),
+ }
+}
+
+// Wrap returns an error annotating the cause with message.
+// If cause is nil, Wrap returns nil.
+func Wrap(cause error, message string) error {
+ if cause == nil {
+ return nil
+ }
+ pc, _, _, _ := runtime.Caller(1)
+ return wrap(cause, message, pc)
+}
+
+// Wrapf returns an error annotating the cause with the format specifier.
+// If cause is nil, Wrapf returns nil.
+func Wrapf(cause error, format string, args ...interface{}) error {
+ if cause == nil {
+ return nil
+ }
+ pc, _, _, _ := runtime.Caller(1)
+ return wrap(cause, fmt.Sprintf(format, args...), pc)
+}
+
+func wrap(err error, msg string, pc uintptr) error {
+ return struct {
+ cause
+ location
+ }{
+ cause{
+ cause: err,
+ message: msg,
+ },
+ location(pc),
+ }
+}
+
+type causer interface {
+ Cause() error
+}
+
+// Cause returns the underlying cause of the error, if possible.
+// An error value has a cause if it implements the following
+// interface:
+//
+// type Causer interface {
+// Cause() error
+// }
+//
+// If the error does not implement Cause, the original error will
+// be returned. If the error is nil, nil will be returned without further
+// investigation.
+func Cause(err error) error {
+ for err != nil {
+ cause, ok := err.(causer)
+ if !ok {
+ break
+ }
+ err = cause.Cause()
+ }
+ return err
+}
+
+// Print prints the error to Stderr.
+// If the error implements the Causer interface described in Cause
+// Print will recurse into the error's cause.
+// If the error implements the interface:
+//
+// type Location interface {
+// Location() (file string, line int)
+// }
+//
+// Print will also print the file and line of the error.
+func Print(err error) {
+ Fprint(os.Stderr, err)
+}
+
+// Fprint prints the error to the supplied writer.
+// The format of the output is the same as Print.
+// If err is nil, nothing is printed.
+func Fprint(w io.Writer, err error) {
+ type location interface {
+ Location() (string, int)
+ }
+ type message interface {
+ Message() string
+ }
+
+ for err != nil {
+ if err, ok := err.(location); ok {
+ file, line := err.Location()
+ fmt.Fprintf(w, "%s:%d: ", file, line)
+ }
+ switch err := err.(type) {
+ case message:
+ fmt.Fprintln(w, err.Message())
+ default:
+ fmt.Fprintln(w, err.Error())
+ }
+
+ cause, ok := err.(causer)
+ if !ok {
+ break
+ }
+ err = cause.Cause()
+ }
+}
diff --git a/vendor/github.com/rackspace/gophercloud/auth_options.go b/vendor/github.com/rackspace/gophercloud/auth_options.go
index d26e16ac1c43..07ace1366ba3 100644
--- a/vendor/github.com/rackspace/gophercloud/auth_options.go
+++ b/vendor/github.com/rackspace/gophercloud/auth_options.go
@@ -42,6 +42,11 @@ type AuthOptions struct {
// re-authenticate automatically if/when your token expires. If you set it to
// false, it will not cache these settings, but re-authentication will not be
// possible. This setting defaults to false.
+ //
+ // NOTE: The reauth function will try to re-authenticate endlessly if left unchecked.
+ // The way to limit the number of attempts is to provide a custom HTTP client to the provider client
+ // and provide a transport that implements the RoundTripper interface and stores the number of failed retries.
+ // For an example of this, see here: https://github.com/rackspace/rack/blob/1.0.0/auth/clients.go#L311
AllowReauth bool
// TokenID allows users to authenticate (possibly as another user) with an
diff --git a/vendor/github.com/rackspace/gophercloud/openstack/networking/v2/extensions/security/groups/requests.go b/vendor/github.com/rackspace/gophercloud/openstack/networking/v2/extensions/security/groups/requests.go
new file mode 100644
index 000000000000..2712ac1621f9
--- /dev/null
+++ b/vendor/github.com/rackspace/gophercloud/openstack/networking/v2/extensions/security/groups/requests.go
@@ -0,0 +1,131 @@
+package groups
+
+import (
+ "fmt"
+
+ "github.com/rackspace/gophercloud"
+ "github.com/rackspace/gophercloud/pagination"
+)
+
+// ListOpts allows the filtering and sorting of paginated collections through
+// the API. Filtering is achieved by passing in struct field values that map to
+// the security group attributes you want to see returned. SortKey allows you to
+// sort by a particular network attribute. SortDir sets the direction, and is
+// either `asc' or `desc'. Marker and Limit are used for pagination.
+type ListOpts struct {
+ ID string `q:"id"`
+ Name string `q:"name"`
+ TenantID string `q:"tenant_id"`
+ Limit int `q:"limit"`
+ Marker string `q:"marker"`
+ SortKey string `q:"sort_key"`
+ SortDir string `q:"sort_dir"`
+}
+
+// List returns a Pager which allows you to iterate over a collection of
+// security groups. It accepts a ListOpts struct, which allows you to filter
+// and sort the returned collection for greater efficiency.
+func List(c *gophercloud.ServiceClient, opts ListOpts) pagination.Pager {
+ q, err := gophercloud.BuildQueryString(&opts)
+ if err != nil {
+ return pagination.Pager{Err: err}
+ }
+ u := rootURL(c) + q.String()
+ return pagination.NewPager(c, u, func(r pagination.PageResult) pagination.Page {
+ return SecGroupPage{pagination.LinkedPageBase{PageResult: r}}
+ })
+}
+
+var (
+ errNameRequired = fmt.Errorf("Name is required")
+)
+
+// CreateOpts contains all the values needed to create a new security group.
+type CreateOpts struct {
+ // Required. Human-readable name for the security group. Does not have to be unique.
+ Name string
+
+ // Required for admins. Indicates the owner of the security group.
+ TenantID string
+
+ // Optional. Describes the security group.
+ Description string
+}
+
+// Create is an operation which provisions a new security group with default
+// security group rules for the IPv4 and IPv6 ether types.
+func Create(c *gophercloud.ServiceClient, opts CreateOpts) CreateResult {
+ var res CreateResult
+
+ // Validate required opts
+ if opts.Name == "" {
+ res.Err = errNameRequired
+ return res
+ }
+
+ type secgroup struct {
+ Name string `json:"name"`
+ TenantID string `json:"tenant_id,omitempty"`
+ Description string `json:"description,omitempty"`
+ }
+
+ type request struct {
+ SecGroup secgroup `json:"security_group"`
+ }
+
+ reqBody := request{SecGroup: secgroup{
+ Name: opts.Name,
+ TenantID: opts.TenantID,
+ Description: opts.Description,
+ }}
+
+ _, res.Err = c.Post(rootURL(c), reqBody, &res.Body, nil)
+ return res
+}
+
+// Get retrieves a particular security group based on its unique ID.
+func Get(c *gophercloud.ServiceClient, id string) GetResult {
+ var res GetResult
+ _, res.Err = c.Get(resourceURL(c, id), &res.Body, nil)
+ return res
+}
+
+// Delete will permanently delete a particular security group based on its unique ID.
+func Delete(c *gophercloud.ServiceClient, id string) DeleteResult {
+ var res DeleteResult
+ _, res.Err = c.Delete(resourceURL(c, id), nil)
+ return res
+}
+
+// IDFromName is a convenience function that returns a security group's ID given its name.
+func IDFromName(client *gophercloud.ServiceClient, name string) (string, error) {
+ securityGroupCount := 0
+ securityGroupID := ""
+ if name == "" {
+ return "", fmt.Errorf("A security group name must be provided.")
+ }
+ pager := List(client, ListOpts{})
+ pager.EachPage(func(page pagination.Page) (bool, error) {
+ securityGroupList, err := ExtractGroups(page)
+ if err != nil {
+ return false, err
+ }
+
+ for _, s := range securityGroupList {
+ if s.Name == name {
+ securityGroupCount++
+ securityGroupID = s.ID
+ }
+ }
+ return true, nil
+ })
+
+ switch securityGroupCount {
+ case 0:
+ return "", fmt.Errorf("Unable to find security group: %s", name)
+ case 1:
+ return securityGroupID, nil
+ default:
+ return "", fmt.Errorf("Found %d security groups matching %s", securityGroupCount, name)
+ }
+}
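+
+// exampleCreateGroup is an editor's illustrative sketch, not part of
+// gophercloud itself: it provisions a security group and returns its ID,
+// assuming an authenticated networking v2 ServiceClient.
+func exampleCreateGroup(client *gophercloud.ServiceClient) (string, error) {
+	group, err := Create(client, CreateOpts{
+		Name:        "web",
+		Description: "allow web traffic",
+	}).Extract()
+	if err != nil {
+		return "", err
+	}
+	return group.ID, nil
+}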
diff --git a/vendor/github.com/rackspace/gophercloud/openstack/networking/v2/extensions/security/groups/results.go b/vendor/github.com/rackspace/gophercloud/openstack/networking/v2/extensions/security/groups/results.go
new file mode 100644
index 000000000000..49db261c22ef
--- /dev/null
+++ b/vendor/github.com/rackspace/gophercloud/openstack/networking/v2/extensions/security/groups/results.go
@@ -0,0 +1,108 @@
+package groups
+
+import (
+ "github.com/mitchellh/mapstructure"
+ "github.com/rackspace/gophercloud"
+ "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/security/rules"
+ "github.com/rackspace/gophercloud/pagination"
+)
+
+// SecGroup represents a container for security group rules.
+type SecGroup struct {
+ // The UUID for the security group.
+ ID string
+
+ // Human-readable name for the security group. Might not be unique. Cannot be
+ // named "default" as that is automatically created for a tenant.
+ Name string
+
+ // The security group description.
+ Description string
+
+ // A slice of security group rules that dictate the permitted behaviour for
+ // traffic entering and leaving the group.
+ Rules []rules.SecGroupRule `json:"security_group_rules" mapstructure:"security_group_rules"`
+
+ // Owner of the security group. Only admin users can specify a TenantID
+ // other than their own.
+ TenantID string `json:"tenant_id" mapstructure:"tenant_id"`
+}
+
+// SecGroupPage is the page returned by a pager when traversing over a
+// collection of security groups.
+type SecGroupPage struct {
+ pagination.LinkedPageBase
+}
+
+// NextPageURL is invoked when a paginated collection of security groups has
+// reached the end of a page and the pager seeks to traverse over a new one. In
+// order to do this, it needs to construct the next page's URL.
+func (p SecGroupPage) NextPageURL() (string, error) {
+ type resp struct {
+ Links []gophercloud.Link `mapstructure:"security_groups_links"`
+ }
+
+ var r resp
+ err := mapstructure.Decode(p.Body, &r)
+ if err != nil {
+ return "", err
+ }
+
+ return gophercloud.ExtractNextURL(r.Links)
+}
+
+// IsEmpty checks whether a SecGroupPage struct is empty.
+func (p SecGroupPage) IsEmpty() (bool, error) {
+ is, err := ExtractGroups(p)
+ if err != nil {
+ return true, nil
+ }
+ return len(is) == 0, nil
+}
+
+// ExtractGroups accepts a Page struct, specifically a SecGroupPage struct,
+// and extracts the elements into a slice of SecGroup structs. In other words,
+// a generic collection is mapped into a relevant slice.
+func ExtractGroups(page pagination.Page) ([]SecGroup, error) {
+ var resp struct {
+ SecGroups []SecGroup `mapstructure:"security_groups" json:"security_groups"`
+ }
+
+ err := mapstructure.Decode(page.(SecGroupPage).Body, &resp)
+
+ return resp.SecGroups, err
+}
+
+type commonResult struct {
+ gophercloud.Result
+}
+
+// Extract is a function that accepts a result and extracts a security group.
+func (r commonResult) Extract() (*SecGroup, error) {
+ if r.Err != nil {
+ return nil, r.Err
+ }
+
+ var res struct {
+ SecGroup *SecGroup `mapstructure:"security_group" json:"security_group"`
+ }
+
+ err := mapstructure.Decode(r.Body, &res)
+
+ return res.SecGroup, err
+}
+
+// CreateResult represents the result of a create operation.
+type CreateResult struct {
+ commonResult
+}
+
+// GetResult represents the result of a get operation.
+type GetResult struct {
+ commonResult
+}
+
+// DeleteResult represents the result of a delete operation.
+type DeleteResult struct {
+ gophercloud.ErrResult
+}
diff --git a/vendor/github.com/rackspace/gophercloud/openstack/networking/v2/extensions/security/groups/urls.go b/vendor/github.com/rackspace/gophercloud/openstack/networking/v2/extensions/security/groups/urls.go
new file mode 100644
index 000000000000..84f7324f0901
--- /dev/null
+++ b/vendor/github.com/rackspace/gophercloud/openstack/networking/v2/extensions/security/groups/urls.go
@@ -0,0 +1,13 @@
+package groups
+
+import "github.com/rackspace/gophercloud"
+
+const rootPath = "security-groups"
+
+func rootURL(c *gophercloud.ServiceClient) string {
+ return c.ServiceURL(rootPath)
+}
+
+func resourceURL(c *gophercloud.ServiceClient, id string) string {
+ return c.ServiceURL(rootPath, id)
+}
diff --git a/vendor/github.com/rackspace/gophercloud/openstack/networking/v2/extensions/security/rules/requests.go b/vendor/github.com/rackspace/gophercloud/openstack/networking/v2/extensions/security/rules/requests.go
new file mode 100644
index 000000000000..e06934a09afb
--- /dev/null
+++ b/vendor/github.com/rackspace/gophercloud/openstack/networking/v2/extensions/security/rules/requests.go
@@ -0,0 +1,174 @@
+package rules
+
+import (
+ "fmt"
+
+ "github.com/rackspace/gophercloud"
+ "github.com/rackspace/gophercloud/pagination"
+)
+
+// ListOpts allows the filtering and sorting of paginated collections through
+// the API. Filtering is achieved by passing in struct field values that map to
+// the security group attributes you want to see returned. SortKey allows you to
+// sort by a particular network attribute. SortDir sets the direction, and is
+// either `asc' or `desc'. Marker and Limit are used for pagination.
+type ListOpts struct {
+ Direction string `q:"direction"`
+ EtherType string `q:"ethertype"`
+ ID string `q:"id"`
+ PortRangeMax int `q:"port_range_max"`
+ PortRangeMin int `q:"port_range_min"`
+ Protocol string `q:"protocol"`
+ RemoteGroupID string `q:"remote_group_id"`
+ RemoteIPPrefix string `q:"remote_ip_prefix"`
+ SecGroupID string `q:"security_group_id"`
+ TenantID string `q:"tenant_id"`
+ Limit int `q:"limit"`
+ Marker string `q:"marker"`
+ SortKey string `q:"sort_key"`
+ SortDir string `q:"sort_dir"`
+}
+
+// List returns a Pager which allows you to iterate over a collection of
+// security group rules. It accepts a ListOpts struct, which allows you to filter
+// and sort the returned collection for greater efficiency.
+func List(c *gophercloud.ServiceClient, opts ListOpts) pagination.Pager {
+ q, err := gophercloud.BuildQueryString(&opts)
+ if err != nil {
+ return pagination.Pager{Err: err}
+ }
+ u := rootURL(c) + q.String()
+ return pagination.NewPager(c, u, func(r pagination.PageResult) pagination.Page {
+ return SecGroupRulePage{pagination.LinkedPageBase{PageResult: r}}
+ })
+}
+
+// Errors
+var (
+ errValidDirectionRequired = fmt.Errorf("A valid Direction is required")
+ errValidEtherTypeRequired = fmt.Errorf("A valid EtherType is required")
+ errSecGroupIDRequired = fmt.Errorf("A valid SecGroupID is required")
+ errValidProtocolRequired = fmt.Errorf("A valid Protocol is required")
+)
+
+// Constants useful for CreateOpts
+const (
+ DirIngress = "ingress"
+ DirEgress = "egress"
+ Ether4 = "IPv4"
+ Ether6 = "IPv6"
+ ProtocolTCP = "tcp"
+ ProtocolUDP = "udp"
+ ProtocolICMP = "icmp"
+)
+
+// CreateOpts contains all the values needed to create a new security group rule.
+type CreateOpts struct {
+ // Required. Must be either "ingress" or "egress": the direction in which the
+ // security group rule is applied.
+ Direction string
+
+ // Required. Must be "IPv4" or "IPv6", and addresses represented in CIDR must
+ // match the ingress or egress rules.
+ EtherType string
+
+ // Required. The security group ID to associate with this security group rule.
+ SecGroupID string
+
+ // Optional. The maximum port number in the range that is matched by the
+ // security group rule. The PortRangeMin attribute constrains the PortRangeMax
+ // attribute. If the protocol is ICMP, this value must be an ICMP type.
+ PortRangeMax int
+
+ // Optional. The minimum port number in the range that is matched by the
+ // security group rule. If the protocol is TCP or UDP, this value must be
+ // less than or equal to the value of the PortRangeMax attribute. If the
+ // protocol is ICMP, this value must be an ICMP type.
+ PortRangeMin int
+
+ // Optional. The protocol that is matched by the security group rule. Valid
+ // values are "tcp", "udp", "icmp" or an empty string.
+ Protocol string
+
+ // Optional. The remote group ID to be associated with this security group
+ // rule. You can specify either RemoteGroupID or RemoteIPPrefix.
+ RemoteGroupID string
+
+ // Optional. The remote IP prefix to be associated with this security group
+ // rule. You can specify either RemoteGroupID or RemoteIPPrefix. This
+ // attribute matches the specified IP prefix as the source IP address of the
+ // IP packet.
+ RemoteIPPrefix string
+
+ // Required for admins. Indicates the owner of the security group rule.
+ TenantID string
+}
+
+// Create is an operation which adds a new security group rule and associates it
+// with an existing security group (whose ID is specified in CreateOpts).
+func Create(c *gophercloud.ServiceClient, opts CreateOpts) CreateResult {
+ var res CreateResult
+
+ // Validate required opts
+ if opts.Direction != DirIngress && opts.Direction != DirEgress {
+ res.Err = errValidDirectionRequired
+ return res
+ }
+ if opts.EtherType != Ether4 && opts.EtherType != Ether6 {
+ res.Err = errValidEtherTypeRequired
+ return res
+ }
+ if opts.SecGroupID == "" {
+ res.Err = errSecGroupIDRequired
+ return res
+ }
+ if opts.Protocol != "" && opts.Protocol != ProtocolTCP && opts.Protocol != ProtocolUDP && opts.Protocol != ProtocolICMP {
+ res.Err = errValidProtocolRequired
+ return res
+ }
+
+ type secrule struct {
+ Direction string `json:"direction"`
+ EtherType string `json:"ethertype"`
+ SecGroupID string `json:"security_group_id"`
+ PortRangeMax int `json:"port_range_max,omitempty"`
+ PortRangeMin int `json:"port_range_min,omitempty"`
+ Protocol string `json:"protocol,omitempty"`
+ RemoteGroupID string `json:"remote_group_id,omitempty"`
+ RemoteIPPrefix string `json:"remote_ip_prefix,omitempty"`
+ TenantID string `json:"tenant_id,omitempty"`
+ }
+
+ type request struct {
+ SecRule secrule `json:"security_group_rule"`
+ }
+
+ reqBody := request{SecRule: secrule{
+ Direction: opts.Direction,
+ EtherType: opts.EtherType,
+ SecGroupID: opts.SecGroupID,
+ PortRangeMax: opts.PortRangeMax,
+ PortRangeMin: opts.PortRangeMin,
+ Protocol: opts.Protocol,
+ RemoteGroupID: opts.RemoteGroupID,
+ RemoteIPPrefix: opts.RemoteIPPrefix,
+ TenantID: opts.TenantID,
+ }}
+
+ _, res.Err = c.Post(rootURL(c), reqBody, &res.Body, nil)
+ return res
+}
+
+// Get retrieves a particular security group rule based on its unique ID.
+func Get(c *gophercloud.ServiceClient, id string) GetResult {
+ var res GetResult
+ _, res.Err = c.Get(resourceURL(c, id), &res.Body, nil)
+ return res
+}
+
+// Delete will permanently delete a particular security group rule based on its unique ID.
+func Delete(c *gophercloud.ServiceClient, id string) DeleteResult {
+ var res DeleteResult
+ _, res.Err = c.Delete(resourceURL(c, id), nil)
+ return res
+}
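+
+// exampleCreateRule is an editor's illustrative sketch, not part of
+// gophercloud itself: it adds an ingress IPv4 TCP rule for port 443 to an
+// existing security group, using the constants defined above.
+func exampleCreateRule(client *gophercloud.ServiceClient, secGroupID string) (*SecGroupRule, error) {
+	return Create(client, CreateOpts{
+		Direction:    DirIngress,
+		EtherType:    Ether4,
+		SecGroupID:   secGroupID,
+		Protocol:     ProtocolTCP,
+		PortRangeMin: 443,
+		PortRangeMax: 443,
+	}).Extract()
+}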
diff --git a/vendor/github.com/rackspace/gophercloud/openstack/networking/v2/extensions/security/rules/results.go b/vendor/github.com/rackspace/gophercloud/openstack/networking/v2/extensions/security/rules/results.go
new file mode 100644
index 000000000000..6e1385768932
--- /dev/null
+++ b/vendor/github.com/rackspace/gophercloud/openstack/networking/v2/extensions/security/rules/results.go
@@ -0,0 +1,133 @@
+package rules
+
+import (
+ "github.com/mitchellh/mapstructure"
+ "github.com/rackspace/gophercloud"
+ "github.com/rackspace/gophercloud/pagination"
+)
+
+// SecGroupRule represents a rule to dictate the behaviour of incoming or
+// outgoing traffic for a particular security group.
+type SecGroupRule struct {
+ // The UUID for this security group rule.
+ ID string
+
+ // The direction in which the security group rule is applied. The only values
+ // allowed are "ingress" or "egress". For a compute instance, an ingress
+ // security group rule is applied to incoming (ingress) traffic for that
+ // instance. An egress rule is applied to traffic leaving the instance.
+ Direction string
+
+ // Must be IPv4 or IPv6, and addresses represented in CIDR must match the
+ // ingress or egress rules.
+ EtherType string `json:"ethertype" mapstructure:"ethertype"`
+
+ // The security group ID to associate with this security group rule.
+ SecGroupID string `json:"security_group_id" mapstructure:"security_group_id"`
+
+ // The minimum port number in the range that is matched by the security group
+ // rule. If the protocol is TCP or UDP, this value must be less than or equal
+ // to the value of the PortRangeMax attribute. If the protocol is ICMP, this
+ // value must be an ICMP type.
+ PortRangeMin int `json:"port_range_min" mapstructure:"port_range_min"`
+
+ // The maximum port number in the range that is matched by the security group
+ // rule. The PortRangeMin attribute constrains the PortRangeMax attribute. If
+ // the protocol is ICMP, this value must be an ICMP type.
+ PortRangeMax int `json:"port_range_max" mapstructure:"port_range_max"`
+
+ // The protocol that is matched by the security group rule. Valid values are
+ // "tcp", "udp", "icmp" or an empty string.
+ Protocol string
+
+ // The remote group ID to be associated with this security group rule. You
+ // can specify either RemoteGroupID or RemoteIPPrefix.
+ RemoteGroupID string `json:"remote_group_id" mapstructure:"remote_group_id"`
+
+ // The remote IP prefix to be associated with this security group rule. You
+ // can specify either RemoteGroupID or RemoteIPPrefix . This attribute
+ // matches the specified IP prefix as the source IP address of the IP packet.
+ RemoteIPPrefix string `json:"remote_ip_prefix" mapstructure:"remote_ip_prefix"`
+
+ // The owner of this security group rule.
+ TenantID string `json:"tenant_id" mapstructure:"tenant_id"`
+}
+
+// SecGroupRulePage is the page returned by a pager when traversing over a
+// collection of security group rules.
+type SecGroupRulePage struct {
+ pagination.LinkedPageBase
+}
+
+// NextPageURL is invoked when a paginated collection of security group rules has
+// reached the end of a page and the pager seeks to traverse over a new one. In
+// order to do this, it needs to construct the next page's URL.
+func (p SecGroupRulePage) NextPageURL() (string, error) {
+ type resp struct {
+ Links []gophercloud.Link `mapstructure:"security_group_rules_links"`
+ }
+
+ var r resp
+ err := mapstructure.Decode(p.Body, &r)
+ if err != nil {
+ return "", err
+ }
+
+ return gophercloud.ExtractNextURL(r.Links)
+}
+
+// IsEmpty checks whether a SecGroupRulePage struct is empty.
+func (p SecGroupRulePage) IsEmpty() (bool, error) {
+ is, err := ExtractRules(p)
+ if err != nil {
+ return true, nil
+ }
+ return len(is) == 0, nil
+}
+
+// ExtractRules accepts a Page struct, specifically a SecGroupRulePage struct,
+// and extracts the elements into a slice of SecGroupRule structs. In other words,
+// a generic collection is mapped into a relevant slice.
+func ExtractRules(page pagination.Page) ([]SecGroupRule, error) {
+ var resp struct {
+ SecGroupRules []SecGroupRule `mapstructure:"security_group_rules" json:"security_group_rules"`
+ }
+
+ err := mapstructure.Decode(page.(SecGroupRulePage).Body, &resp)
+
+ return resp.SecGroupRules, err
+}
+
+type commonResult struct {
+ gophercloud.Result
+}
+
+// Extract is a function that accepts a result and extracts a security rule.
+func (r commonResult) Extract() (*SecGroupRule, error) {
+ if r.Err != nil {
+ return nil, r.Err
+ }
+
+ var res struct {
+ SecGroupRule *SecGroupRule `mapstructure:"security_group_rule" json:"security_group_rule"`
+ }
+
+ err := mapstructure.Decode(r.Body, &res)
+
+ return res.SecGroupRule, err
+}
+
+// CreateResult represents the result of a create operation.
+type CreateResult struct {
+ commonResult
+}
+
+// GetResult represents the result of a get operation.
+type GetResult struct {
+ commonResult
+}
+
+// DeleteResult represents the result of a delete operation.
+type DeleteResult struct {
+ gophercloud.ErrResult
+}
diff --git a/vendor/github.com/rackspace/gophercloud/openstack/networking/v2/extensions/security/rules/urls.go b/vendor/github.com/rackspace/gophercloud/openstack/networking/v2/extensions/security/rules/urls.go
new file mode 100644
index 000000000000..8e2b2bb28d26
--- /dev/null
+++ b/vendor/github.com/rackspace/gophercloud/openstack/networking/v2/extensions/security/rules/urls.go
@@ -0,0 +1,13 @@
+package rules
+
+import "github.com/rackspace/gophercloud"
+
+const rootPath = "security-group-rules"
+
+func rootURL(c *gophercloud.ServiceClient) string {
+ return c.ServiceURL(rootPath)
+}
+
+func resourceURL(c *gophercloud.ServiceClient, id string) string {
+ return c.ServiceURL(rootPath, id)
+}
diff --git a/vendor/github.com/sethvargo/go-fastly/Makefile b/vendor/github.com/sethvargo/go-fastly/Makefile
index 8f3ad1e6e940..8391618cc0a3 100644
--- a/vendor/github.com/sethvargo/go-fastly/Makefile
+++ b/vendor/github.com/sethvargo/go-fastly/Makefile
@@ -4,18 +4,18 @@ default: test
# test runs the test suite and vets the code
test: generate
- go list $(TEST) | xargs -n1 go test -timeout=30s -parallel=8 $(TESTARGS)
+ go list $(TEST) | xargs -n1 go test -timeout=30s -parallel=12 $(TESTARGS)
# updatedeps installs all the dependencies the library needs to run and build
updatedeps:
- go list ./... \
- | xargs go list -f '{{ join .Deps "\n" }}{{ printf "\n" }}{{ join .TestImports "\n" }}' \
- | grep -v github.com/sethvargo/go-fastly \
- | xargs go get -f -u -v
+ go list ./... \
+ | xargs go list -f '{{ join .Deps "\n" }}{{ printf "\n" }}{{ join .TestImports "\n" }}' \
+ | grep -v github.com/sethvargo/go-fastly \
+ | xargs go get -f -u -v
# generate runs `go generate` to build the dynamically generated source files
generate:
- find . -type f -name '.DS_Store' -delete
- go generate ./...
+ find . -type f -name '.DS_Store' -delete
+ go generate ./...
.PHONY: default bin dev dist test testrace updatedeps generate
diff --git a/vendor/github.com/sethvargo/go-fastly/backend.go b/vendor/github.com/sethvargo/go-fastly/backend.go
index 9c4d967dda16..6a734894da63 100644
--- a/vendor/github.com/sethvargo/go-fastly/backend.go
+++ b/vendor/github.com/sethvargo/go-fastly/backend.go
@@ -1,35 +1,36 @@
package fastly
import (
- "fmt"
- "sort"
+ "fmt"
+ "sort"
)
// Backend represents a backend response from the Fastly API.
type Backend struct {
- ServiceID string `mapstructure:"service_id"`
- Version string `mapstructure:"version"`
-
- Name string `mapstructure:"name"`
- Address string `mapstructure:"address"`
- Port uint `mapstructure:"port"`
- ConnectTimeout uint `mapstructure:"connect_timeout"`
- MaxConn uint `mapstructure:"max_conn"`
- ErrorThreshold uint `mapstructure:"error_threshold"`
- FirstByteTimeout uint `mapstructure:"first_byte_timeout"`
- BetweenBytesTimeout uint `mapstructure:"between_bytes_timeout"`
- AutoLoadbalance bool `mapstructure:"auto_loadbalance"`
- Weight uint `mapstructure:"weight"`
- RequestCondition string `mapstructure:"request_condition"`
- HealthCheck string `mapstructure:"healthcheck"`
- UseSSL bool `mapstructure:"use_ssl"`
- SSLCheckCert bool `mapstructure:"ssl_check_cert"`
- SSLHostname string `mapstructure:"ssl_hostname"`
- SSLCertHostname string `mapstructure:"ssl_cert_hostname"`
- SSLSNIHostname string `mapstructure:"ssl_sni_hostname"`
- MinTLSVersion string `mapstructure:"min_tls_version"`
- MaxTLSVersion string `mapstructure:"max_tls_version"`
- SSLCiphers []string `mapstructure:"ssl_ciphers"`
+ ServiceID string `mapstructure:"service_id"`
+ Version string `mapstructure:"version"`
+
+ Name string `mapstructure:"name"`
+ Address string `mapstructure:"address"`
+ Port uint `mapstructure:"port"`
+ ConnectTimeout uint `mapstructure:"connect_timeout"`
+ MaxConn uint `mapstructure:"max_conn"`
+ ErrorThreshold uint `mapstructure:"error_threshold"`
+ FirstByteTimeout uint `mapstructure:"first_byte_timeout"`
+ BetweenBytesTimeout uint `mapstructure:"between_bytes_timeout"`
+ AutoLoadbalance bool `mapstructure:"auto_loadbalance"`
+ Weight uint `mapstructure:"weight"`
+ RequestCondition string `mapstructure:"request_condition"`
+ HealthCheck string `mapstructure:"healthcheck"`
+ Hostname string `mapstructure:"hostname"`
+ UseSSL bool `mapstructure:"use_ssl"`
+ SSLCheckCert bool `mapstructure:"ssl_check_cert"`
+ SSLHostname string `mapstructure:"ssl_hostname"`
+ SSLCertHostname string `mapstructure:"ssl_cert_hostname"`
+ SSLSNIHostname string `mapstructure:"ssl_sni_hostname"`
+ MinTLSVersion string `mapstructure:"min_tls_version"`
+ MaxTLSVersion string `mapstructure:"max_tls_version"`
+ SSLCiphers []string `mapstructure:"ssl_ciphers"`
}
// backendsByName is a sortable list of backends.
@@ -39,228 +40,228 @@ type backendsByName []*Backend
func (s backendsByName) Len() int { return len(s) }
func (s backendsByName) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s backendsByName) Less(i, j int) bool {
- return s[i].Name < s[j].Name
+ return s[i].Name < s[j].Name
}
// ListBackendsInput is used as input to the ListBackends function.
type ListBackendsInput struct {
- // Service is the ID of the service (required).
- Service string
+ // Service is the ID of the service (required).
+ Service string
- // Version is the specific configuration version (required).
- Version string
+ // Version is the specific configuration version (required).
+ Version string
}
// ListBackends returns the list of backends for the configuration version.
func (c *Client) ListBackends(i *ListBackendsInput) ([]*Backend, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/backend", i.Service, i.Version)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var bs []*Backend
- if err := decodeJSON(&bs, resp.Body); err != nil {
- return nil, err
- }
- sort.Stable(backendsByName(bs))
- return bs, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/backend", i.Service, i.Version)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var bs []*Backend
+ if err := decodeJSON(&bs, resp.Body); err != nil {
+ return nil, err
+ }
+ sort.Stable(backendsByName(bs))
+ return bs, nil
}
// CreateBackendInput is used as input to the CreateBackend function.
type CreateBackendInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
-
- Name string `form:"name,omitempty"`
- Address string `form:"address,omitempty"`
- Port uint `form:"port,omitempty"`
- ConnectTimeout uint `form:"connect_timeout,omitempty"`
- MaxConn uint `form:"max_conn,omitempty"`
- ErrorThreshold uint `form:"error_threshold,omitempty"`
- FirstByteTimeout uint `form:"first_byte_timeout,omitempty"`
- BetweenBytesTimeout uint `form:"between_bytes_timeout,omitempty"`
- AutoLoadbalance bool `form:"auto_loadbalance,omitempty"`
- Weight uint `form:"weight,omitempty"`
- RequestCondition string `form:"request_condition,omitempty"`
- HealthCheck string `form:"healthcheck,omitempty"`
- UseSSL bool `form:"use_ssl,omitempty"`
- SSLCheckCert bool `form:"ssl_check_cert,omitempty"`
- SSLHostname string `form:"ssl_hostname,omitempty"`
- SSLCertHostname string `form:"ssl_cert_hostname,omitempty"`
- SSLSNIHostname string `form:"ssl_sni_hostname,omitempty"`
- MinTLSVersion string `form:"min_tls_version,omitempty"`
- MaxTLSVersion string `form:"max_tls_version,omitempty"`
- SSLCiphers []string `form:"ssl_ciphers,omitempty"`
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ Name string `form:"name,omitempty"`
+ Address string `form:"address,omitempty"`
+ Port uint `form:"port,omitempty"`
+ ConnectTimeout uint `form:"connect_timeout,omitempty"`
+ MaxConn uint `form:"max_conn,omitempty"`
+ ErrorThreshold uint `form:"error_threshold,omitempty"`
+ FirstByteTimeout uint `form:"first_byte_timeout,omitempty"`
+ BetweenBytesTimeout uint `form:"between_bytes_timeout,omitempty"`
+ AutoLoadbalance bool `form:"auto_loadbalance,omitempty"`
+ Weight uint `form:"weight,omitempty"`
+ RequestCondition string `form:"request_condition,omitempty"`
+ HealthCheck string `form:"healthcheck,omitempty"`
+ UseSSL bool `form:"use_ssl,omitempty"`
+ SSLCheckCert bool `form:"ssl_check_cert,omitempty"`
+ SSLHostname string `form:"ssl_hostname,omitempty"`
+ SSLCertHostname string `form:"ssl_cert_hostname,omitempty"`
+ SSLSNIHostname string `form:"ssl_sni_hostname,omitempty"`
+ MinTLSVersion string `form:"min_tls_version,omitempty"`
+ MaxTLSVersion string `form:"max_tls_version,omitempty"`
+ SSLCiphers []string `form:"ssl_ciphers,omitempty"`
}
// CreateBackend creates a new Fastly backend.
func (c *Client) CreateBackend(i *CreateBackendInput) (*Backend, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/backend", i.Service, i.Version)
- resp, err := c.PostForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var b *Backend
- if err := decodeJSON(&b, resp.Body); err != nil {
- return nil, err
- }
- return b, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/backend", i.Service, i.Version)
+ resp, err := c.PostForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var b *Backend
+ if err := decodeJSON(&b, resp.Body); err != nil {
+ return nil, err
+ }
+ return b, nil
}
// GetBackendInput is used as input to the GetBackend function.
type GetBackendInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the backend to fetch.
- Name string
+ // Name is the name of the backend to fetch.
+ Name string
}
// GetBackend gets the backend configuration with the given parameters.
func (c *Client) GetBackend(i *GetBackendInput) (*Backend, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/backend/%s", i.Service, i.Version, i.Name)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var b *Backend
- if err := decodeJSON(&b, resp.Body); err != nil {
- return nil, err
- }
- return b, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/backend/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var b *Backend
+ if err := decodeJSON(&b, resp.Body); err != nil {
+ return nil, err
+ }
+ return b, nil
}
// UpdateBackendInput is used as input to the UpdateBackend function.
type UpdateBackendInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
-
- // Name is the name of the backend to update.
- Name string
-
- NewName string `form:"name,omitempty"`
- Address string `form:"address,omitempty"`
- Port uint `form:"port,omitempty"`
- ConnectTimeout uint `form:"connect_timeout,omitempty"`
- MaxConn uint `form:"max_conn,omitempty"`
- ErrorThreshold uint `form:"error_threshold,omitempty"`
- FirstByteTimeout uint `form:"first_byte_timeout,omitempty"`
- BetweenBytesTimeout uint `form:"between_bytes_timeout,omitempty"`
- AutoLoadbalance bool `form:"auto_loadbalance,omitempty"`
- Weight uint `form:"weight,omitempty"`
- RequestCondition string `form:"request_condition,omitempty"`
- HealthCheck string `form:"healthcheck,omitempty"`
- UseSSL bool `form:"use_ssl,omitempty"`
- SSLCheckCert bool `form:"ssl_check_cert,omitempty"`
- SSLHostname string `form:"ssl_hostname,omitempty"`
- SSLCertHostname string `form:"ssl_cert_hostname,omitempty"`
- SSLSNIHostname string `form:"ssl_sni_hostname,omitempty"`
- MinTLSVersion string `form:"min_tls_version,omitempty"`
- MaxTLSVersion string `form:"max_tls_version,omitempty"`
- SSLCiphers []string `form:"ssl_ciphers,omitempty"`
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ // Name is the name of the backend to update.
+ Name string
+
+ NewName string `form:"name,omitempty"`
+ Address string `form:"address,omitempty"`
+ Port uint `form:"port,omitempty"`
+ ConnectTimeout uint `form:"connect_timeout,omitempty"`
+ MaxConn uint `form:"max_conn,omitempty"`
+ ErrorThreshold uint `form:"error_threshold,omitempty"`
+ FirstByteTimeout uint `form:"first_byte_timeout,omitempty"`
+ BetweenBytesTimeout uint `form:"between_bytes_timeout,omitempty"`
+ AutoLoadbalance bool `form:"auto_loadbalance,omitempty"`
+ Weight uint `form:"weight,omitempty"`
+ RequestCondition string `form:"request_condition,omitempty"`
+ HealthCheck string `form:"healthcheck,omitempty"`
+ UseSSL bool `form:"use_ssl,omitempty"`
+ SSLCheckCert bool `form:"ssl_check_cert,omitempty"`
+ SSLHostname string `form:"ssl_hostname,omitempty"`
+ SSLCertHostname string `form:"ssl_cert_hostname,omitempty"`
+ SSLSNIHostname string `form:"ssl_sni_hostname,omitempty"`
+ MinTLSVersion string `form:"min_tls_version,omitempty"`
+ MaxTLSVersion string `form:"max_tls_version,omitempty"`
+ SSLCiphers []string `form:"ssl_ciphers,omitempty"`
}
// UpdateBackend updates a specific backend.
func (c *Client) UpdateBackend(i *UpdateBackendInput) (*Backend, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/backend/%s", i.Service, i.Version, i.Name)
- resp, err := c.PutForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var b *Backend
- if err := decodeJSON(&b, resp.Body); err != nil {
- return nil, err
- }
- return b, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/backend/%s", i.Service, i.Version, i.Name)
+ resp, err := c.PutForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var b *Backend
+ if err := decodeJSON(&b, resp.Body); err != nil {
+ return nil, err
+ }
+ return b, nil
}
// DeleteBackendInput is the input parameter to DeleteBackend.
type DeleteBackendInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the backend to delete (required).
- Name string
+ // Name is the name of the backend to delete (required).
+ Name string
}
// DeleteBackend deletes the given backend version.
func (c *Client) DeleteBackend(i *DeleteBackendInput) error {
- if i.Service == "" {
- return ErrMissingService
- }
-
- if i.Version == "" {
- return ErrMissingVersion
- }
-
- if i.Name == "" {
- return ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/backend/%s", i.Service, i.Version, i.Name)
- resp, err := c.Delete(path, nil)
- if err != nil {
- return err
- }
-
- var r *statusResp
- if err := decodeJSON(&r, resp.Body); err != nil {
- return err
- }
- if !r.Ok() {
- return fmt.Errorf("Not Ok")
- }
- return nil
+ if i.Service == "" {
+ return ErrMissingService
+ }
+
+ if i.Version == "" {
+ return ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/backend/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Delete(path, nil)
+ if err != nil {
+ return err
+ }
+
+ var r *statusResp
+ if err := decodeJSON(&r, resp.Body); err != nil {
+ return err
+ }
+ if !r.Ok() {
+ return fmt.Errorf("Not Ok")
+ }
+ return nil
}
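// A minimal usage sketch of the backend API above, assuming a valid API key;
// the key, service ID, and version values below are placeholders, not taken
// from this patch.
package main

import (
	"fmt"
	"log"

	fastly "github.com/sethvargo/go-fastly"
)

func main() {
	client, err := fastly.NewClient("my-api-key") // placeholder key
	if err != nil {
		log.Fatal(err)
	}

	// List all backends attached to a given service configuration version.
	backends, err := client.ListBackends(&fastly.ListBackendsInput{
		Service: "example-service-id", // placeholder service ID
		Version: "1",                  // placeholder version
	})
	if err != nil {
		log.Fatal(err)
	}

	for _, b := range backends {
		fmt.Printf("%s -> %s:%d\n", b.Name, b.Address, b.Port)
	}
}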
diff --git a/vendor/github.com/sethvargo/go-fastly/billing.go b/vendor/github.com/sethvargo/go-fastly/billing.go
index 719989d013bc..30eea20aa848 100644
--- a/vendor/github.com/sethvargo/go-fastly/billing.go
+++ b/vendor/github.com/sethvargo/go-fastly/billing.go
@@ -1,81 +1,81 @@
package fastly
import (
- "fmt"
- "time"
+ "fmt"
+ "time"
)
// Billing is the top-level representation of a billing response from the Fastly
// API.
type Billing struct {
- InvoiceID string `mapstructure:"invoice_id"`
- StartTime *time.Time `mapstructure:"start_time"`
- EndTime *time.Time `mapstructure:"end_time"`
- Status *BillingStatus `mapstructure:"status"`
- Total *BillingTotal `mapstructure:"total"`
+ InvoiceID string `mapstructure:"invoice_id"`
+ StartTime *time.Time `mapstructure:"start_time"`
+ EndTime *time.Time `mapstructure:"end_time"`
+ Status *BillingStatus `mapstructure:"status"`
+ Total *BillingTotal `mapstructure:"total"`
}
// BillingStatus is a representation of the status of the bill from the Fastly
// API.
type BillingStatus struct {
- InvoiceID string `mapstructure:"invoice_id"`
- Status string `mapstructure:"status"`
- SentAt *time.Time `mapstructure:"sent_at"`
+ InvoiceID string `mapstructure:"invoice_id"`
+ Status string `mapstructure:"status"`
+ SentAt *time.Time `mapstructure:"sent_at"`
}
// BillingTotal is a representation of the status of the usage for this bill from
// the Fastly API.
type BillingTotal struct {
- PlanName string `mapstructure:"plan_name"`
- PlanCode string `mapstructure:"plan_code"`
- PlanMinimum string `mapstructure:"plan_minimum"`
- Bandwidth float64 `mapstructure:"bandwidth"`
- BandwidthCost float64 `mapstructure:"bandwidth_cost"`
- Requests uint64 `mapstructure:"requests"`
- RequestsCost float64 `mapstructure:"requests_cost"`
- IncurredCost float64 `mapstructure:"incurred_cost"`
- Overage float64 `mapstructure:"overage"`
- Extras []*BillingExtra `mapstructure:"extras"`
- ExtrasCost float64 `mapstructure:"extras_cost"`
- CostBeforeDiscount float64 `mapstructure:"cost_before_discount"`
- Discount float64 `mapstructure:"discount"`
- Cost float64 `mapstructure:"cost"`
- Terms string `mapstructure:"terms"`
+ PlanName string `mapstructure:"plan_name"`
+ PlanCode string `mapstructure:"plan_code"`
+ PlanMinimum string `mapstructure:"plan_minimum"`
+ Bandwidth float64 `mapstructure:"bandwidth"`
+ BandwidthCost float64 `mapstructure:"bandwidth_cost"`
+ Requests uint64 `mapstructure:"requests"`
+ RequestsCost float64 `mapstructure:"requests_cost"`
+ IncurredCost float64 `mapstructure:"incurred_cost"`
+ Overage float64 `mapstructure:"overage"`
+ Extras []*BillingExtra `mapstructure:"extras"`
+ ExtrasCost float64 `mapstructure:"extras_cost"`
+ CostBeforeDiscount float64 `mapstructure:"cost_before_discount"`
+ Discount float64 `mapstructure:"discount"`
+ Cost float64 `mapstructure:"cost"`
+ Terms string `mapstructure:"terms"`
}
// BillingExtra is a representation of extras (such as SSL addons) from the
// Fastly API.
type BillingExtra struct {
- Name string `mapstructure:"name"`
- Setup float64 `mapstructure:"setup"`
- Recurring float64 `mapstructure:"recurring"`
+ Name string `mapstructure:"name"`
+ Setup float64 `mapstructure:"setup"`
+ Recurring float64 `mapstructure:"recurring"`
}
// GetBillingInput is used as input to the GetBilling function.
type GetBillingInput struct {
- Year uint16
- Month uint8
+ Year uint16
+ Month uint8
}
// GetBilling returns the billing information for the current account.
func (c *Client) GetBilling(i *GetBillingInput) (*Billing, error) {
- if i.Year == 0 {
- return nil, ErrMissingYear
- }
+ if i.Year == 0 {
+ return nil, ErrMissingYear
+ }
- if i.Month == 0 {
- return nil, ErrMissingMonth
- }
+ if i.Month == 0 {
+ return nil, ErrMissingMonth
+ }
- path := fmt.Sprintf("/billing/year/%d/month/%02d", i.Year, i.Month)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
+ path := fmt.Sprintf("/billing/year/%d/month/%02d", i.Year, i.Month)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
- var b *Billing
- if err := decodeJSON(&b, resp.Body); err != nil {
- return nil, err
- }
- return b, nil
+ var b *Billing
+ if err := decodeJSON(&b, resp.Body); err != nil {
+ return nil, err
+ }
+ return b, nil
}
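// A minimal usage sketch of GetBilling, assuming a valid API key; the year
// and month values are arbitrary examples.
package main

import (
	"fmt"
	"log"

	fastly "github.com/sethvargo/go-fastly"
)

func main() {
	client, err := fastly.NewClient("my-api-key") // placeholder key
	if err != nil {
		log.Fatal(err)
	}

	// Year and Month are required; zero values return ErrMissingYear or
	// ErrMissingMonth.
	b, err := client.GetBilling(&fastly.GetBillingInput{
		Year:  2016,
		Month: 4,
	})
	if err != nil {
		log.Fatal(err)
	}

	if b.Total != nil {
		fmt.Println(b.InvoiceID, b.Total.Cost)
	}
}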
diff --git a/vendor/github.com/sethvargo/go-fastly/cache_setting.go b/vendor/github.com/sethvargo/go-fastly/cache_setting.go
index e6dbf0b8df90..3f5aebe24cf4 100644
--- a/vendor/github.com/sethvargo/go-fastly/cache_setting.go
+++ b/vendor/github.com/sethvargo/go-fastly/cache_setting.go
@@ -1,19 +1,19 @@
package fastly
import (
- "fmt"
- "sort"
+ "fmt"
+ "sort"
)
const (
- // CacheSettingActionCache sets the cache to cache.
- CacheSettingActionCache CacheSettingAction = "cache"
+ // CacheSettingActionCache sets the cache to cache.
+ CacheSettingActionCache CacheSettingAction = "cache"
- // CacheSettingActionPass sets the cache to pass through.
- CacheSettingActionPass CacheSettingAction = "pass"
+ // CacheSettingActionPass sets the cache to pass through.
+ CacheSettingActionPass CacheSettingAction = "pass"
- // CacheSettingActionRestart sets the cache to restart the request.
- CacheSettingActionRestart CacheSettingAction = "restart"
+ // CacheSettingActionRestart sets the cache to restart the request.
+ CacheSettingActionRestart CacheSettingAction = "restart"
)
// CacheSettingAction is the type of cache action.
@@ -21,14 +21,14 @@ type CacheSettingAction string
// CacheSetting represents a response from Fastly's API for cache settings.
type CacheSetting struct {
- ServiceID string `mapstructure:"service_id"`
- Version string `mapstructure:"version"`
-
- Name string `mapstructure:"name"`
- Action CacheSettingAction `mapstructure:"action"`
- TTL uint `mapstructure:"ttl"`
- StaleTTL uint `mapstructure:"stale_ttl"`
- CacheCondition string `mapstructure:"cache_condition"`
+ ServiceID string `mapstructure:"service_id"`
+ Version string `mapstructure:"version"`
+
+ Name string `mapstructure:"name"`
+ Action CacheSettingAction `mapstructure:"action"`
+ TTL uint `mapstructure:"ttl"`
+ StaleTTL uint `mapstructure:"stale_ttl"`
+ CacheCondition string `mapstructure:"cache_condition"`
}
// cacheSettingsByName is a sortable list of cache settings.
@@ -38,200 +38,200 @@ type cacheSettingsByName []*CacheSetting
func (s cacheSettingsByName) Len() int { return len(s) }
func (s cacheSettingsByName) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s cacheSettingsByName) Less(i, j int) bool {
- return s[i].Name < s[j].Name
+ return s[i].Name < s[j].Name
}
// ListCacheSettingsInput is used as input to the ListCacheSettings function.
type ListCacheSettingsInput struct {
- // Service is the ID of the service (required).
- Service string
+ // Service is the ID of the service (required).
+ Service string
- // Version is the specific configuration version (required).
- Version string
+ // Version is the specific configuration version (required).
+ Version string
}
// ListCacheSettings returns the list of cache settings for the configuration
// version.
func (c *Client) ListCacheSettings(i *ListCacheSettingsInput) ([]*CacheSetting, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/cache_settings", i.Service, i.Version)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var cs []*CacheSetting
- if err := decodeJSON(&cs, resp.Body); err != nil {
- return nil, err
- }
- sort.Stable(cacheSettingsByName(cs))
- return cs, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/cache_settings", i.Service, i.Version)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var cs []*CacheSetting
+ if err := decodeJSON(&cs, resp.Body); err != nil {
+ return nil, err
+ }
+ sort.Stable(cacheSettingsByName(cs))
+ return cs, nil
}
// CreateCacheSettingInput is used as input to the CreateCacheSetting function.
type CreateCacheSettingInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
-
- Name string `form:"name,omitempty"`
- Action CacheSettingAction `form:"action,omitempty"`
- TTL uint `form:"ttl,omitempty"`
- StaleTTL uint `form:"stale_ttl,omitempty"`
- CacheCondition string `form:"cache_condition,omitempty"`
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ Name string `form:"name,omitempty"`
+ Action CacheSettingAction `form:"action,omitempty"`
+ TTL uint `form:"ttl,omitempty"`
+ StaleTTL uint `form:"stale_ttl,omitempty"`
+ CacheCondition string `form:"cache_condition,omitempty"`
}
// CreateCacheSetting creates a new Fastly cache setting.
func (c *Client) CreateCacheSetting(i *CreateCacheSettingInput) (*CacheSetting, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/cache_settings", i.Service, i.Version)
- resp, err := c.PostForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var cs *CacheSetting
- if err := decodeJSON(&cs, resp.Body); err != nil {
- return nil, err
- }
- return cs, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/cache_settings", i.Service, i.Version)
+ resp, err := c.PostForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var cs *CacheSetting
+ if err := decodeJSON(&cs, resp.Body); err != nil {
+ return nil, err
+ }
+ return cs, nil
}
// GetCacheSettingInput is used as input to the GetCacheSetting function.
type GetCacheSettingInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the cache setting to fetch.
- Name string
+ // Name is the name of the cache setting to fetch.
+ Name string
}
// GetCacheSetting gets the cache setting configuration with the given
// parameters.
func (c *Client) GetCacheSetting(i *GetCacheSettingInput) (*CacheSetting, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/cache_settings/%s", i.Service, i.Version, i.Name)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var cs *CacheSetting
- if err := decodeJSON(&cs, resp.Body); err != nil {
- return nil, err
- }
- return cs, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/cache_settings/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var cs *CacheSetting
+ if err := decodeJSON(&cs, resp.Body); err != nil {
+ return nil, err
+ }
+ return cs, nil
}
// UpdateCacheSettingInput is used as input to the UpdateCacheSetting function.
type UpdateCacheSettingInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
-
- // Name is the name of the cache setting to update.
- Name string
-
- NewName string `form:"name,omitempty"`
- Action CacheSettingAction `form:"action,omitempty"`
- TTL uint `form:"ttl,omitempty"`
- StateTTL uint `form:"stale_ttl,omitempty"`
- CacheCondition string `form:"cache_condition,omitempty"`
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ // Name is the name of the cache setting to update.
+ Name string
+
+ NewName string `form:"name,omitempty"`
+ Action CacheSettingAction `form:"action,omitempty"`
+ TTL uint `form:"ttl,omitempty"`
+ StateTTL uint `form:"stale_ttl,omitempty"`
+ CacheCondition string `form:"cache_condition,omitempty"`
}
// UpdateCacheSetting updates a specific cache setting.
func (c *Client) UpdateCacheSetting(i *UpdateCacheSettingInput) (*CacheSetting, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/cache_settings/%s", i.Service, i.Version, i.Name)
- resp, err := c.PutForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var cs *CacheSetting
- if err := decodeJSON(&cs, resp.Body); err != nil {
- return nil, err
- }
- return cs, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/cache_settings/%s", i.Service, i.Version, i.Name)
+ resp, err := c.PutForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var cs *CacheSetting
+ if err := decodeJSON(&cs, resp.Body); err != nil {
+ return nil, err
+ }
+ return cs, nil
}
// DeleteCacheSettingInput is the input parameter to DeleteCacheSetting.
type DeleteCacheSettingInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the cache setting to delete (required).
- Name string
+ // Name is the name of the cache setting to delete (required).
+ Name string
}
// DeleteCacheSetting deletes the given cache setting version.
func (c *Client) DeleteCacheSetting(i *DeleteCacheSettingInput) error {
- if i.Service == "" {
- return ErrMissingService
- }
-
- if i.Version == "" {
- return ErrMissingVersion
- }
-
- if i.Name == "" {
- return ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/cache_settings/%s", i.Service, i.Version, i.Name)
- resp, err := c.Delete(path, nil)
- if err != nil {
- return err
- }
-
- var r *statusResp
- if err := decodeJSON(&r, resp.Body); err != nil {
- return err
- }
- if !r.Ok() {
- return fmt.Errorf("Not Ok")
- }
- return nil
+ if i.Service == "" {
+ return ErrMissingService
+ }
+
+ if i.Version == "" {
+ return ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/cache_settings/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Delete(path, nil)
+ if err != nil {
+ return err
+ }
+
+ var r *statusResp
+ if err := decodeJSON(&r, resp.Body); err != nil {
+ return err
+ }
+ if !r.Ok() {
+ return fmt.Errorf("Not Ok")
+ }
+ return nil
}
diff --git a/vendor/github.com/sethvargo/go-fastly/client.go b/vendor/github.com/sethvargo/go-fastly/client.go
index 99bd27a6263e..8006beecadf3 100644
--- a/vendor/github.com/sethvargo/go-fastly/client.go
+++ b/vendor/github.com/sethvargo/go-fastly/client.go
@@ -1,18 +1,18 @@
package fastly
import (
- "encoding/json"
- "fmt"
- "io"
- "net/http"
- "net/url"
- "os"
- "runtime"
- "strings"
-
- "github.com/ajg/form"
- "github.com/hashicorp/go-cleanhttp"
- "github.com/mitchellh/mapstructure"
+ "encoding/json"
+ "fmt"
+ "io"
+ "net/http"
+ "net/url"
+ "os"
+ "runtime"
+ "strings"
+
+ "github.com/ajg/form"
+ "github.com/hashicorp/go-cleanhttp"
+ "github.com/mitchellh/mapstructure"
)
// APIKeyEnvVar is the name of the environment variable where the Fastly API
@@ -30,37 +30,37 @@ const DefaultEndpoint = "https://api.fastly.com"
var ProjectURL = "github.com/sethvargo/go-fastly"
// ProjectVersion is the version of this library.
-var ProjectVersion = "0.1"
+var ProjectVersion = "0.2"
// UserAgent is the user agent for this particular client.
var UserAgent = fmt.Sprintf("FastlyGo/%s (+%s; %s)",
- ProjectVersion, ProjectURL, runtime.Version())
+ ProjectVersion, ProjectURL, runtime.Version())
// Client is the main entrypoint to the Fastly golang API library.
type Client struct {
- // Address is the address of Fastly's API endpoint.
- Address string
+ // Address is the address of Fastly's API endpoint.
+ Address string
- // HTTPClient is the HTTP client to use. If one is not provided, a default
- // client will be used.
- HTTPClient *http.Client
+ // HTTPClient is the HTTP client to use. If one is not provided, a default
+ // client will be used.
+ HTTPClient *http.Client
- // apiKey is the Fastly API key to authenticate requests.
- apiKey string
+ // apiKey is the Fastly API key to authenticate requests.
+ apiKey string
- // url is the parsed URL from Address
- url *url.URL
+ // url is the parsed URL from Address
+ url *url.URL
}
// DefaultClient instantiates a new Fastly API client. This function requires
// the environment variable `FASTLY_API_KEY` is set and contains a valid API key
// to authenticate with Fastly.
func DefaultClient() *Client {
- client, err := NewClient(os.Getenv(APIKeyEnvVar))
- if err != nil {
- panic(err)
- }
- return client
+ client, err := NewClient(os.Getenv(APIKeyEnvVar))
+ if err != nil {
+ panic(err)
+ }
+ return client
}
// NewClient creates a new API client with the given key. Because Fastly allows
@@ -68,145 +68,145 @@ func DefaultClient() *Client {
// token is not supplied. Attempts to make a request that requires an API key
// will return a 403 response.
func NewClient(key string) (*Client, error) {
- client := &Client{apiKey: key}
- return client.init()
+ client := &Client{apiKey: key}
+ return client.init()
}
func (c *Client) init() (*Client, error) {
- if len(c.Address) == 0 {
- c.Address = DefaultEndpoint
- }
+ if len(c.Address) == 0 {
+ c.Address = DefaultEndpoint
+ }
- u, err := url.Parse(c.Address)
- if err != nil {
- return nil, err
- }
- c.url = u
+ u, err := url.Parse(c.Address)
+ if err != nil {
+ return nil, err
+ }
+ c.url = u
- if c.HTTPClient == nil {
- c.HTTPClient = cleanhttp.DefaultClient()
- }
+ if c.HTTPClient == nil {
+ c.HTTPClient = cleanhttp.DefaultClient()
+ }
- return c, nil
+ return c, nil
}
// Get issues an HTTP GET request.
func (c *Client) Get(p string, ro *RequestOptions) (*http.Response, error) {
- return c.Request("GET", p, ro)
+ return c.Request("GET", p, ro)
}
// Head issues an HTTP HEAD request.
func (c *Client) Head(p string, ro *RequestOptions) (*http.Response, error) {
- return c.Request("HEAD", p, ro)
+ return c.Request("HEAD", p, ro)
}
// Post issues an HTTP POST request.
func (c *Client) Post(p string, ro *RequestOptions) (*http.Response, error) {
- return c.Request("POST", p, ro)
+ return c.Request("POST", p, ro)
}
// PostForm issues an HTTP POST request with the given interface form-encoded.
func (c *Client) PostForm(p string, i interface{}, ro *RequestOptions) (*http.Response, error) {
- return c.RequestForm("POST", p, i, ro)
+ return c.RequestForm("POST", p, i, ro)
}
// Put issues an HTTP PUT request.
func (c *Client) Put(p string, ro *RequestOptions) (*http.Response, error) {
- return c.Request("PUT", p, ro)
+ return c.Request("PUT", p, ro)
}
// PutForm issues an HTTP PUT request with the given interface form-encoded.
func (c *Client) PutForm(p string, i interface{}, ro *RequestOptions) (*http.Response, error) {
- return c.RequestForm("PUT", p, i, ro)
+ return c.RequestForm("PUT", p, i, ro)
}
// Delete issues an HTTP DELETE request.
func (c *Client) Delete(p string, ro *RequestOptions) (*http.Response, error) {
- return c.Request("DELETE", p, ro)
+ return c.Request("DELETE", p, ro)
}
// Request makes an HTTP request against the HTTPClient using the given verb,
// Path, and request options.
func (c *Client) Request(verb, p string, ro *RequestOptions) (*http.Response, error) {
- req, err := c.RawRequest(verb, p, ro)
- if err != nil {
- return nil, err
- }
+ req, err := c.RawRequest(verb, p, ro)
+ if err != nil {
+ return nil, err
+ }
- resp, err := checkResp(c.HTTPClient.Do(req))
- if err != nil {
- return resp, err
- }
+ resp, err := checkResp(c.HTTPClient.Do(req))
+ if err != nil {
+ return resp, err
+ }
- return resp, nil
+ return resp, nil
}
// RequestForm makes an HTTP request with the given interface being encoded as
// form data.
func (c *Client) RequestForm(verb, p string, i interface{}, ro *RequestOptions) (*http.Response, error) {
- values, err := form.EncodeToValues(i)
- if err != nil {
- return nil, err
- }
+ values, err := form.EncodeToValues(i)
+ if err != nil {
+ return nil, err
+ }
- if ro == nil {
- ro = new(RequestOptions)
- }
+ if ro == nil {
+ ro = new(RequestOptions)
+ }
- if ro.Headers == nil {
- ro.Headers = make(map[string]string)
- }
- ro.Headers["Content-Type"] = "application/x-www-form-urlencoded"
+ if ro.Headers == nil {
+ ro.Headers = make(map[string]string)
+ }
+ ro.Headers["Content-Type"] = "application/x-www-form-urlencoded"
- // There is a super-jank implementation in the form library where fields with
- // a "dot" are replaced with "/.". That is then URL encoded and Fastly just
- // dies. We fix that here.
- body := strings.Replace(values.Encode(), "%5C.", ".", -1)
+ // There is a super-jank implementation in the form library where fields with
+ // a "dot" are replaced with "/.". That is then URL encoded and Fastly just
+ // dies. We fix that here.
+ body := strings.Replace(values.Encode(), "%5C.", ".", -1)
- ro.Body = strings.NewReader(body)
- ro.BodyLength = int64(len(body))
+ ro.Body = strings.NewReader(body)
+ ro.BodyLength = int64(len(body))
- return c.Request(verb, p, ro)
+ return c.Request(verb, p, ro)
}
// checkResp wraps an HTTP request from the default client and verifies that the
// request was successful. A non-2xx response returns an error formatted to
// include any validation problems or other failure details.
func checkResp(resp *http.Response, err error) (*http.Response, error) {
- // If the err is already there, there was an error higher up the chain, so
- // just return that.
- if err != nil {
- return resp, err
- }
-
- switch resp.StatusCode {
- case 200, 201, 202, 204, 205, 206:
- return resp, nil
- default:
- return resp, NewHTTPError(resp)
- }
+ // If the err is already there, there was an error higher up the chain, so
+ // just return that.
+ if err != nil {
+ return resp, err
+ }
+
+ switch resp.StatusCode {
+ case 200, 201, 202, 204, 205, 206:
+ return resp, nil
+ default:
+ return resp, NewHTTPError(resp)
+ }
}
// decodeJSON is used to decode an HTTP response body into an interface as JSON.
func decodeJSON(out interface{}, body io.ReadCloser) error {
- defer body.Close()
-
- var parsed interface{}
- dec := json.NewDecoder(body)
- if err := dec.Decode(&parsed); err != nil {
- return err
- }
-
- decoder, err := mapstructure.NewDecoder(&mapstructure.DecoderConfig{
- DecodeHook: mapstructure.ComposeDecodeHookFunc(
- mapToHTTPHeaderHookFunc(),
- stringToTimeHookFunc(),
- ),
- WeaklyTypedInput: true,
- Result: out,
- })
- if err != nil {
- return err
- }
- return decoder.Decode(parsed)
+ defer body.Close()
+
+ var parsed interface{}
+ dec := json.NewDecoder(body)
+ if err := dec.Decode(&parsed); err != nil {
+ return err
+ }
+
+ decoder, err := mapstructure.NewDecoder(&mapstructure.DecoderConfig{
+ DecodeHook: mapstructure.ComposeDecodeHookFunc(
+ mapToHTTPHeaderHookFunc(),
+ stringToTimeHookFunc(),
+ ),
+ WeaklyTypedInput: true,
+ Result: out,
+ })
+ if err != nil {
+ return err
+ }
+ return decoder.Decode(parsed)
}
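// A minimal sketch of the low-level request path exposed by the client above;
// the request path "/current_customer" is an assumed example and not taken
// from this patch.
package main

import (
	"fmt"
	"io/ioutil"
	"log"

	fastly "github.com/sethvargo/go-fastly"
)

func main() {
	client, err := fastly.NewClient("my-api-key") // placeholder key
	if err != nil {
		log.Fatal(err)
	}

	// Get issues a raw HTTP GET; typed helpers such as ListBackends wrap this
	// call and decode the JSON body for you.
	resp, err := client.Get("/current_customer", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(body))
}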
diff --git a/vendor/github.com/sethvargo/go-fastly/condition.go b/vendor/github.com/sethvargo/go-fastly/condition.go
index 5e86779be739..b88b61ad0aa0 100644
--- a/vendor/github.com/sethvargo/go-fastly/condition.go
+++ b/vendor/github.com/sethvargo/go-fastly/condition.go
@@ -1,19 +1,19 @@
package fastly
import (
- "fmt"
- "sort"
+ "fmt"
+ "sort"
)
// Condition represents a condition response from the Fastly API.
type Condition struct {
- ServiceID string `mapstructure:"service_id"`
- Version string `mapstructure:"version"`
+ ServiceID string `mapstructure:"service_id"`
+ Version string `mapstructure:"version"`
- Name string `mapstructure:"name"`
- Statement string `mapstructure:"statement"`
- Type string `mapstructure:"type"`
- Priority int `mapstructure:"priority"`
+ Name string `mapstructure:"name"`
+ Statement string `mapstructure:"statement"`
+ Type string `mapstructure:"type"`
+ Priority int `mapstructure:"priority"`
}
// conditionsByName is a sortable list of conditions.
@@ -23,195 +23,195 @@ type conditionsByName []*Condition
func (s conditionsByName) Len() int { return len(s) }
func (s conditionsByName) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s conditionsByName) Less(i, j int) bool {
- return s[i].Name < s[j].Name
+ return s[i].Name < s[j].Name
}
// ListConditionsInput is used as input to the ListConditions function.
type ListConditionsInput struct {
- // Service is the ID of the service (required).
- Service string
+ // Service is the ID of the service (required).
+ Service string
- // Version is the specific configuration version (required).
- Version string
+ // Version is the specific configuration version (required).
+ Version string
}
// ListConditions returns the list of conditions for the configuration version.
func (c *Client) ListConditions(i *ListConditionsInput) ([]*Condition, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/condition", i.Service, i.Version)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var cs []*Condition
- if err := decodeJSON(&cs, resp.Body); err != nil {
- return nil, err
- }
- sort.Stable(conditionsByName(cs))
- return cs, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/condition", i.Service, i.Version)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var cs []*Condition
+ if err := decodeJSON(&cs, resp.Body); err != nil {
+ return nil, err
+ }
+ sort.Stable(conditionsByName(cs))
+ return cs, nil
}
// CreateConditionInput is used as input to the CreateCondition function.
type CreateConditionInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
-
- Name string `form:"name,omitempty"`
- Statement string `form:"statement,omitempty"`
- Type string `form:"type,omitempty"`
- Priority int `form:"priority,omitempty"`
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ Name string `form:"name,omitempty"`
+ Statement string `form:"statement,omitempty"`
+ Type string `form:"type,omitempty"`
+ Priority int `form:"priority,omitempty"`
}
// CreateCondition creates a new Fastly condition.
func (c *Client) CreateCondition(i *CreateConditionInput) (*Condition, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/condition", i.Service, i.Version)
- resp, err := c.PostForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var co *Condition
- if err := decodeJSON(&co, resp.Body); err != nil {
- return nil, err
- }
- return co, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/condition", i.Service, i.Version)
+ resp, err := c.PostForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var co *Condition
+ if err := decodeJSON(&co, resp.Body); err != nil {
+ return nil, err
+ }
+ return co, nil
}
// GetConditionInput is used as input to the GetCondition function.
type GetConditionInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the condition to fetch.
- Name string
+ // Name is the name of the condition to fetch.
+ Name string
}
// GetCondition gets the condition configuration with the given parameters.
func (c *Client) GetCondition(i *GetConditionInput) (*Condition, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/condition/%s", i.Service, i.Version, i.Name)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var co *Condition
- if err := decodeJSON(&co, resp.Body); err != nil {
- return nil, err
- }
- return co, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/condition/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var co *Condition
+ if err := decodeJSON(&co, resp.Body); err != nil {
+ return nil, err
+ }
+ return co, nil
}
// UpdateConditionInput is used as input to the UpdateCondition function.
type UpdateConditionInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the condition to update.
- Name string
+ // Name is the name of the condition to update.
+ Name string
- Statement string `form:"statement,omitempty"`
- Type string `form:"type,omitempty"`
- Priority int `form:"priority,omitempty"`
+ Statement string `form:"statement,omitempty"`
+ Type string `form:"type,omitempty"`
+ Priority int `form:"priority,omitempty"`
}
// UpdateCondition updates a specific condition.
func (c *Client) UpdateCondition(i *UpdateConditionInput) (*Condition, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/condition/%s", i.Service, i.Version, i.Name)
- resp, err := c.PutForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var co *Condition
- if err := decodeJSON(&co, resp.Body); err != nil {
- return nil, err
- }
- return co, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/condition/%s", i.Service, i.Version, i.Name)
+ resp, err := c.PutForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var co *Condition
+ if err := decodeJSON(&co, resp.Body); err != nil {
+ return nil, err
+ }
+ return co, nil
}
// DeleteConditionInput is the input parameter to DeleteCondition.
type DeleteConditionInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the condition to delete (required).
- Name string
+ // Name is the name of the condition to delete (required).
+ Name string
}
// DeleteCondition deletes the given condition version.
func (c *Client) DeleteCondition(i *DeleteConditionInput) error {
- if i.Service == "" {
- return ErrMissingService
- }
-
- if i.Version == "" {
- return ErrMissingVersion
- }
-
- if i.Name == "" {
- return ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/condition/%s", i.Service, i.Version, i.Name)
- resp, err := c.Delete(path, nil)
- if err != nil {
- return err
- }
-
- var r *statusResp
- if err := decodeJSON(&r, resp.Body); err != nil {
- return err
- }
- if !r.Ok() {
- return fmt.Errorf("Not Ok")
- }
- return nil
+ if i.Service == "" {
+ return ErrMissingService
+ }
+
+ if i.Version == "" {
+ return ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/condition/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Delete(path, nil)
+ if err != nil {
+ return err
+ }
+
+ var r *statusResp
+ if err := decodeJSON(&r, resp.Body); err != nil {
+ return err
+ }
+ if !r.Ok() {
+ return fmt.Errorf("Not Ok")
+ }
+ return nil
}
diff --git a/vendor/github.com/sethvargo/go-fastly/content.go b/vendor/github.com/sethvargo/go-fastly/content.go
index 01f4f8354e9f..91c3e25b7192 100644
--- a/vendor/github.com/sethvargo/go-fastly/content.go
+++ b/vendor/github.com/sethvargo/go-fastly/content.go
@@ -4,46 +4,46 @@ import "net/http"
// EdgeCheck represents an edge check response from the Fastly API.
type EdgeCheck struct {
- Hash string `mapstructure:"hash"`
- Server string `mapstructure:"server"`
- ResponseTime float64 `mapstructure:"response_time"`
- Request *EdgeCheckRequest `mapstructure:"request"`
- Response *EdgeCheckResponse `mapstructure:"response"`
+ Hash string `mapstructure:"hash"`
+ Server string `mapstructure:"server"`
+ ResponseTime float64 `mapstructure:"response_time"`
+ Request *EdgeCheckRequest `mapstructure:"request"`
+ Response *EdgeCheckResponse `mapstructure:"response"`
}
// EdgeCheckRequest is the request part of an EdgeCheck response.
type EdgeCheckRequest struct {
- URL string `mapstructure:"url"`
- Method string `mapstructure:"method"`
- Headers *http.Header `mapstructure:"headers"`
+ URL string `mapstructure:"url"`
+ Method string `mapstructure:"method"`
+ Headers *http.Header `mapstructure:"headers"`
}
// EdgeCheckResponse is the response part of an EdgeCheck response.
type EdgeCheckResponse struct {
- Status uint `mapstructure:"status"`
- Headers *http.Header `mapstructure:"headers"`
+ Status uint `mapstructure:"status"`
+ Headers *http.Header `mapstructure:"headers"`
}
// EdgeCheckInput is used as input to the EdgeCheck function.
type EdgeCheckInput struct {
- URL string `form:"url,omitempty"`
+ URL string `form:"url,omitempty"`
}
// EdgeCheck queries the edge cache for all of Fastly's servers for the given
// URL.
func (c *Client) EdgeCheck(i *EdgeCheckInput) ([]*EdgeCheck, error) {
- resp, err := c.Get("/content/edge_check", &RequestOptions{
- Params: map[string]string{
- "url": i.URL,
- },
- })
- if err != nil {
- return nil, err
- }
+ resp, err := c.Get("/content/edge_check", &RequestOptions{
+ Params: map[string]string{
+ "url": i.URL,
+ },
+ })
+ if err != nil {
+ return nil, err
+ }
- var e []*EdgeCheck
- if err := decodeJSON(&e, resp.Body); err != nil {
- return nil, err
- }
- return e, nil
+ var e []*EdgeCheck
+ if err := decodeJSON(&e, resp.Body); err != nil {
+ return nil, err
+ }
+ return e, nil
}
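// A minimal usage sketch of EdgeCheck, assuming a valid API key; the URL
// below is a placeholder.
package main

import (
	"fmt"
	"log"

	fastly "github.com/sethvargo/go-fastly"
)

func main() {
	client, err := fastly.NewClient("my-api-key") // placeholder key
	if err != nil {
		log.Fatal(err)
	}

	// EdgeCheck returns one entry per Fastly edge server that was queried.
	checks, err := client.EdgeCheck(&fastly.EdgeCheckInput{
		URL: "www.example.com/image.jpg", // placeholder URL
	})
	if err != nil {
		log.Fatal(err)
	}

	for _, c := range checks {
		if c.Response != nil {
			fmt.Printf("%s: status %d\n", c.Server, c.Response.Status)
		}
	}
}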
diff --git a/vendor/github.com/sethvargo/go-fastly/decode_hooks.go b/vendor/github.com/sethvargo/go-fastly/decode_hooks.go
index ed9e0c6d7aaa..aba5172b90cc 100644
--- a/vendor/github.com/sethvargo/go-fastly/decode_hooks.go
+++ b/vendor/github.com/sethvargo/go-fastly/decode_hooks.go
@@ -1,64 +1,64 @@
package fastly
import (
- "fmt"
- "net/http"
- "reflect"
- "time"
+ "fmt"
+ "net/http"
+ "reflect"
+ "time"
- "github.com/mitchellh/mapstructure"
+ "github.com/mitchellh/mapstructure"
)
// mapToHTTPHeaderHookFunc returns a function that converts maps into an
// http.Header value.
func mapToHTTPHeaderHookFunc() mapstructure.DecodeHookFunc {
- return func(
- f reflect.Type,
- t reflect.Type,
- data interface{}) (interface{}, error) {
- if f.Kind() != reflect.Map {
- return data, nil
- }
- if t != reflect.TypeOf(new(http.Header)) {
- return data, nil
- }
+ return func(
+ f reflect.Type,
+ t reflect.Type,
+ data interface{}) (interface{}, error) {
+ if f.Kind() != reflect.Map {
+ return data, nil
+ }
+ if t != reflect.TypeOf(new(http.Header)) {
+ return data, nil
+ }
- typed, ok := data.(map[string]interface{})
- if !ok {
- return nil, fmt.Errorf("cannot convert %T to http.Header", data)
- }
+ typed, ok := data.(map[string]interface{})
+ if !ok {
+ return nil, fmt.Errorf("cannot convert %T to http.Header", data)
+ }
- n := map[string][]string{}
- for k, v := range typed {
- switch v.(type) {
- case string:
- n[k] = []string{v.(string)}
- case []string:
- n[k] = v.([]string)
- default:
- return nil, fmt.Errorf("cannot convert %T to http.Header", v)
- }
- }
+ n := map[string][]string{}
+ for k, v := range typed {
+ switch v.(type) {
+ case string:
+ n[k] = []string{v.(string)}
+ case []string:
+ n[k] = v.([]string)
+ default:
+ return nil, fmt.Errorf("cannot convert %T to http.Header", v)
+ }
+ }
- return n, nil
- }
+ return n, nil
+ }
}
// stringToTimeHookFunc returns a function that converts strings to a time.Time
// value.
func stringToTimeHookFunc() mapstructure.DecodeHookFunc {
- return func(
- f reflect.Type,
- t reflect.Type,
- data interface{}) (interface{}, error) {
- if f.Kind() != reflect.String {
- return data, nil
- }
- if t != reflect.TypeOf(time.Now()) {
- return data, nil
- }
+ return func(
+ f reflect.Type,
+ t reflect.Type,
+ data interface{}) (interface{}, error) {
+ if f.Kind() != reflect.String {
+ return data, nil
+ }
+ if t != reflect.TypeOf(time.Now()) {
+ return data, nil
+ }
- // Convert it by parsing
- return time.Parse(time.RFC3339, data.(string))
- }
+ // Convert it by parsing
+ return time.Parse(time.RFC3339, data.(string))
+ }
}
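// A standalone sketch of the same decode-hook pattern used above: a
// string-to-time.Time hook composed into a mapstructure decoder. The input
// map and field names are arbitrary examples.
package main

import (
	"fmt"
	"log"
	"reflect"
	"time"

	"github.com/mitchellh/mapstructure"
)

type record struct {
	CreatedAt time.Time `mapstructure:"created_at"`
}

func main() {
	// Convert RFC3339 strings into time.Time values; leave everything else alone.
	hook := func(f reflect.Type, t reflect.Type, data interface{}) (interface{}, error) {
		if f.Kind() != reflect.String || t != reflect.TypeOf(time.Time{}) {
			return data, nil
		}
		return time.Parse(time.RFC3339, data.(string))
	}

	var out record
	dec, err := mapstructure.NewDecoder(&mapstructure.DecoderConfig{
		DecodeHook:       hook,
		WeaklyTypedInput: true,
		Result:           &out,
	})
	if err != nil {
		log.Fatal(err)
	}

	if err := dec.Decode(map[string]interface{}{
		"created_at": "2016-04-25T10:00:00Z",
	}); err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.CreatedAt)
}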
diff --git a/vendor/github.com/sethvargo/go-fastly/dictionary.go b/vendor/github.com/sethvargo/go-fastly/dictionary.go
index 94a4db458843..dd3c11bdffa6 100644
--- a/vendor/github.com/sethvargo/go-fastly/dictionary.go
+++ b/vendor/github.com/sethvargo/go-fastly/dictionary.go
@@ -1,18 +1,18 @@
package fastly
import (
- "fmt"
- "sort"
+ "fmt"
+ "sort"
)
// Dictionary represents a dictionary response from the Fastly API.
type Dictionary struct {
- ServiceID string `mapstructure:"service_id"`
- Version string `mapstructure:"version"`
+ ServiceID string `mapstructure:"service_id"`
+ Version string `mapstructure:"version"`
- ID string `mapstructure:"id"`
- Name string `mapstructure:"name"`
- Address string `mapstructure:"address"`
+ ID string `mapstructure:"id"`
+ Name string `mapstructure:"name"`
+ Address string `mapstructure:"address"`
}
// dictionariesByName is a sortable list of dictionaries.
@@ -22,185 +22,185 @@ type dictionariesByName []*Dictionary
func (s dictionariesByName) Len() int { return len(s) }
func (s dictionariesByName) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s dictionariesByName) Less(i, j int) bool {
- return s[i].Name < s[j].Name
+ return s[i].Name < s[j].Name
}
// ListDictionariesInput is used as input to the ListDictionaries function.
type ListDictionariesInput struct {
- // Service is the ID of the service (required).
- Service string
+ // Service is the ID of the service (required).
+ Service string
- // Version is the specific configuration version (required).
- Version string
+ // Version is the specific configuration version (required).
+ Version string
}
// ListDictionaries returns the list of dictionaries for the configuration version.
func (c *Client) ListDictionaries(i *ListDictionariesInput) ([]*Dictionary, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/dictionary", i.Service, i.Version)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var bs []*Dictionary
- if err := decodeJSON(&bs, resp.Body); err != nil {
- return nil, err
- }
- sort.Stable(dictionariesByName(bs))
- return bs, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/dictionary", i.Service, i.Version)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var bs []*Dictionary
+ if err := decodeJSON(&bs, resp.Body); err != nil {
+ return nil, err
+ }
+ sort.Stable(dictionariesByName(bs))
+ return bs, nil
}
// CreateDictionaryInput is used as input to the CreateDictionary function.
type CreateDictionaryInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- Name string `form:"name,omitempty"`
+ Name string `form:"name,omitempty"`
}
// CreateDictionary creates a new Fastly dictionary.
func (c *Client) CreateDictionary(i *CreateDictionaryInput) (*Dictionary, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/dictionary", i.Service, i.Version)
- resp, err := c.PostForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var b *Dictionary
- if err := decodeJSON(&b, resp.Body); err != nil {
- return nil, err
- }
- return b, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/dictionary", i.Service, i.Version)
+ resp, err := c.PostForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var b *Dictionary
+ if err := decodeJSON(&b, resp.Body); err != nil {
+ return nil, err
+ }
+ return b, nil
}
// GetDictionaryInput is used as input to the GetDictionary function.
type GetDictionaryInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the dictionary to fetch.
- Name string
+ // Name is the name of the dictionary to fetch.
+ Name string
}
// GetDictionary gets the dictionary configuration with the given parameters.
func (c *Client) GetDictionary(i *GetDictionaryInput) (*Dictionary, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/dictionary/%s", i.Service, i.Version, i.Name)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var b *Dictionary
- if err := decodeJSON(&b, resp.Body); err != nil {
- return nil, err
- }
- return b, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/dictionary/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var b *Dictionary
+ if err := decodeJSON(&b, resp.Body); err != nil {
+ return nil, err
+ }
+ return b, nil
}
// UpdateDictionaryInput is used as input to the UpdateDictionary function.
type UpdateDictionaryInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the dictionary to update.
- Name string
+ // Name is the name of the dictionary to update.
+ Name string
- NewName string `form:"name,omitempty"`
+ NewName string `form:"name,omitempty"`
}
// UpdateDictionary updates a specific dictionary.
func (c *Client) UpdateDictionary(i *UpdateDictionaryInput) (*Dictionary, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/dictionary/%s", i.Service, i.Version, i.Name)
- resp, err := c.PutForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var b *Dictionary
- if err := decodeJSON(&b, resp.Body); err != nil {
- return nil, err
- }
- return b, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/dictionary/%s", i.Service, i.Version, i.Name)
+ resp, err := c.PutForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var b *Dictionary
+ if err := decodeJSON(&b, resp.Body); err != nil {
+ return nil, err
+ }
+ return b, nil
}
// DeleteDictionaryInput is the input parameter to DeleteDictionary.
type DeleteDictionaryInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the dictionary to delete (required).
- Name string
+ // Name is the name of the dictionary to delete (required).
+ Name string
}
// DeleteDictionary deletes the given dictionary version.
func (c *Client) DeleteDictionary(i *DeleteDictionaryInput) error {
- if i.Service == "" {
- return ErrMissingService
- }
-
- if i.Version == "" {
- return ErrMissingVersion
- }
-
- if i.Name == "" {
- return ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/dictionary/%s", i.Service, i.Version, i.Name)
- _, err := c.Delete(path, nil)
- if err != nil {
- return err
- }
-
- // Unlike other endpoints, the dictionary endpoint does not return a status
- // response - it just returns a 200 OK.
- return nil
+ if i.Service == "" {
+ return ErrMissingService
+ }
+
+ if i.Version == "" {
+ return ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/dictionary/%s", i.Service, i.Version, i.Name)
+ _, err := c.Delete(path, nil)
+ if err != nil {
+ return err
+ }
+
+ // Unlike other endpoints, the dictionary endpoint does not return a status
+ // response - it just returns a 200 OK.
+ return nil
}
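
A minimal usage sketch for the dictionary endpoints above; it is not part of this diff. NewClient is the package's client constructor, and the API key, service ID, version, and dictionary name are placeholders.

package main

import (
	"log"

	fastly "github.com/sethvargo/go-fastly"
)

func main() {
	// The token here is a placeholder; a real key comes from the Fastly account.
	client, err := fastly.NewClient("FASTLY_API_KEY")
	if err != nil {
		log.Fatal(err)
	}

	// Create a dictionary on a draft service version, then list it back.
	d, err := client.CreateDictionary(&fastly.CreateDictionaryInput{
		Service: "SERVICE_ID",
		Version: "1",
		Name:    "ip_blocklist",
	})
	if err != nil {
		log.Fatal(err)
	}

	dicts, err := client.ListDictionaries(&fastly.ListDictionariesInput{
		Service: "SERVICE_ID",
		Version: "1",
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("created %q; version now has %d dictionaries", d.Name, len(dicts))
}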
diff --git a/vendor/github.com/sethvargo/go-fastly/dictionary_item.go b/vendor/github.com/sethvargo/go-fastly/dictionary_item.go
index 03f58b0c37a8..f71fcf5148ea 100644
--- a/vendor/github.com/sethvargo/go-fastly/dictionary_item.go
+++ b/vendor/github.com/sethvargo/go-fastly/dictionary_item.go
@@ -1,17 +1,17 @@
package fastly
import (
- "fmt"
- "sort"
+ "fmt"
+ "sort"
)
// DictionaryItem represents a dictionary item response from the Fastly API.
type DictionaryItem struct {
- ServiceID string `mapstructure:"service_id"`
- DictionaryID string `mapstructure:"dictionary_id"`
+ ServiceID string `mapstructure:"service_id"`
+ DictionaryID string `mapstructure:"dictionary_id"`
- ItemKey string `mapstructure:"item_key"`
- ItemValue string `mapstructure:"item_value"`
+ ItemKey string `mapstructure:"item_key"`
+ ItemValue string `mapstructure:"item_value"`
}
// dictionaryItemsByKey is a sortable list of dictionary items.
@@ -21,187 +21,187 @@ type dictionaryItemsByKey []*DictionaryItem
func (s dictionaryItemsByKey) Len() int { return len(s) }
func (s dictionaryItemsByKey) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s dictionaryItemsByKey) Less(i, j int) bool {
- return s[i].ItemKey < s[j].ItemKey
+ return s[i].ItemKey < s[j].ItemKey
}
// ListDictionaryItemsInput is used as input to the ListDictionaryItems function.
type ListDictionaryItemsInput struct {
- // Service is the ID of the service (required).
- Service string
+ // Service is the ID of the service (required).
+ Service string
- // Dictionary is the ID of the dictionary to retrieve items for (required).
- Dictionary string
+ // Dictionary is the ID of the dictionary to retrieve items for (required).
+ Dictionary string
}
// ListDictionaryItems returns the list of dictionary items for the
// configuration version.
func (c *Client) ListDictionaryItems(i *ListDictionaryItemsInput) ([]*DictionaryItem, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Dictionary == "" {
- return nil, ErrMissingDictionary
- }
-
- path := fmt.Sprintf("/service/%s/dictionary/%s/items", i.Service, i.Dictionary)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var bs []*DictionaryItem
- if err := decodeJSON(&bs, resp.Body); err != nil {
- return nil, err
- }
- sort.Stable(dictionaryItemsByKey(bs))
- return bs, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Dictionary == "" {
+ return nil, ErrMissingDictionary
+ }
+
+ path := fmt.Sprintf("/service/%s/dictionary/%s/items", i.Service, i.Dictionary)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var bs []*DictionaryItem
+ if err := decodeJSON(&bs, resp.Body); err != nil {
+ return nil, err
+ }
+ sort.Stable(dictionaryItemsByKey(bs))
+ return bs, nil
}
// CreateDictionaryItemInput is used as input to the CreateDictionaryItem function.
type CreateDictionaryItemInput struct {
- // Service is the ID of the service. Dictionary is the ID of the dictionary.
- // Both fields are required.
- Service string
- Dictionary string
+ // Service is the ID of the service. Dictionary is the ID of the dictionary.
+ // Both fields are required.
+ Service string
+ Dictionary string
- ItemKey string `form:"item_key,omitempty"`
- ItemValue string `form:"item_value,omitempty"`
+ ItemKey string `form:"item_key,omitempty"`
+ ItemValue string `form:"item_value,omitempty"`
}
// CreateDictionaryItem creates a new Fastly dictionary item.
func (c *Client) CreateDictionaryItem(i *CreateDictionaryItemInput) (*DictionaryItem, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Dictionary == "" {
- return nil, ErrMissingDictionary
- }
-
- path := fmt.Sprintf("/service/%s/dictionary/%s/item", i.Service, i.Dictionary)
- resp, err := c.PostForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var b *DictionaryItem
- if err := decodeJSON(&b, resp.Body); err != nil {
- return nil, err
- }
- return b, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Dictionary == "" {
+ return nil, ErrMissingDictionary
+ }
+
+ path := fmt.Sprintf("/service/%s/dictionary/%s/item", i.Service, i.Dictionary)
+ resp, err := c.PostForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var b *DictionaryItem
+ if err := decodeJSON(&b, resp.Body); err != nil {
+ return nil, err
+ }
+ return b, nil
}
// GetDictionaryItemInput is used as input to the GetDictionaryItem function.
type GetDictionaryItemInput struct {
- // Service is the ID of the service. Dictionary is the ID of the dictionary.
- // Both fields are required.
- Service string
- Dictionary string
+ // Service is the ID of the service. Dictionary is the ID of the dictionary.
+ // Both fields are required.
+ Service string
+ Dictionary string
- // ItemKey is the name of the dictionary item to fetch.
- ItemKey string
+ // ItemKey is the name of the dictionary item to fetch.
+ ItemKey string
}
// GetDictionaryItem gets the dictionary item with the given parameters.
func (c *Client) GetDictionaryItem(i *GetDictionaryItemInput) (*DictionaryItem, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Dictionary == "" {
- return nil, ErrMissingDictionary
- }
-
- if i.ItemKey == "" {
- return nil, ErrMissingItemKey
- }
-
- path := fmt.Sprintf("/service/%s/dictionary/%s/item/%s", i.Service, i.Dictionary, i.ItemKey)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var b *DictionaryItem
- if err := decodeJSON(&b, resp.Body); err != nil {
- return nil, err
- }
- return b, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Dictionary == "" {
+ return nil, ErrMissingDictionary
+ }
+
+ if i.ItemKey == "" {
+ return nil, ErrMissingItemKey
+ }
+
+ path := fmt.Sprintf("/service/%s/dictionary/%s/item/%s", i.Service, i.Dictionary, i.ItemKey)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var b *DictionaryItem
+ if err := decodeJSON(&b, resp.Body); err != nil {
+ return nil, err
+ }
+ return b, nil
}
// UpdateDictionaryItemInput is used as input to the UpdateDictionaryItem function.
type UpdateDictionaryItemInput struct {
- // Service is the ID of the service. Dictionary is the ID of the dictionary.
- // Both fields are required.
- Service string
- Dictionary string
+ // Service is the ID of the service. Dictionary is the ID of the dictionary.
+ // Both fields are required.
+ Service string
+ Dictionary string
- // ItemKey is the name of the dictionary item to fetch.
- ItemKey string
+ // ItemKey is the name of the dictionary item to fetch.
+ ItemKey string
- ItemValue string `form:"item_value,omitempty"`
+ ItemValue string `form:"item_value,omitempty"`
}
// UpdateDictionaryItem updates a specific dictionary item.
func (c *Client) UpdateDictionaryItem(i *UpdateDictionaryItemInput) (*DictionaryItem, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Dictionary == "" {
- return nil, ErrMissingDictionary
- }
-
- if i.ItemKey == "" {
- return nil, ErrMissingItemKey
- }
-
- path := fmt.Sprintf("/service/%s/dictionary/%s/item/%s", i.Service, i.Dictionary, i.ItemKey)
- resp, err := c.PutForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var b *DictionaryItem
- if err := decodeJSON(&b, resp.Body); err != nil {
- return nil, err
- }
- return b, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Dictionary == "" {
+ return nil, ErrMissingDictionary
+ }
+
+ if i.ItemKey == "" {
+ return nil, ErrMissingItemKey
+ }
+
+ path := fmt.Sprintf("/service/%s/dictionary/%s/item/%s", i.Service, i.Dictionary, i.ItemKey)
+ resp, err := c.PutForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var b *DictionaryItem
+ if err := decodeJSON(&b, resp.Body); err != nil {
+ return nil, err
+ }
+ return b, nil
}
// DeleteDictionaryItemInput is the input parameter to DeleteDictionaryItem.
type DeleteDictionaryItemInput struct {
- // Service is the ID of the service. Dictionary is the ID of the dictionary.
- // Both fields are required.
- Service string
- Dictionary string
+ // Service is the ID of the service. Dictionary is the ID of the dictionary.
+ // Both fields are required.
+ Service string
+ Dictionary string
- // ItemKey is the name of the dictionary item to delete.
- ItemKey string
+ // ItemKey is the name of the dictionary item to delete.
+ ItemKey string
}
// DeleteDictionaryItem deletes the given dictionary item.
func (c *Client) DeleteDictionaryItem(i *DeleteDictionaryItemInput) error {
- if i.Service == "" {
- return ErrMissingService
- }
-
- if i.Dictionary == "" {
- return ErrMissingDictionary
- }
-
- if i.ItemKey == "" {
- return ErrMissingItemKey
- }
-
- path := fmt.Sprintf("/service/%s/dictionary/%s/item/%s", i.Service, i.Dictionary, i.ItemKey)
- _, err := c.Delete(path, nil)
- if err != nil {
- return err
- }
-
- // Unlike other endpoints, the dictionary endpoint does not return a status
- // response - it just returns a 200 OK.
- return nil
+ if i.Service == "" {
+ return ErrMissingService
+ }
+
+ if i.Dictionary == "" {
+ return ErrMissingDictionary
+ }
+
+ if i.ItemKey == "" {
+ return ErrMissingItemKey
+ }
+
+ path := fmt.Sprintf("/service/%s/dictionary/%s/item/%s", i.Service, i.Dictionary, i.ItemKey)
+ _, err := c.Delete(path, nil)
+ if err != nil {
+ return err
+ }
+
+ // Unlike other endpoints, the dictionary endpoint does not return a status
+ // response - it just returns a 200 OK.
+ return nil
}
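
Dictionary items differ from the other resources in this vendor update in that they are addressed by service and dictionary ID rather than by service version, as the request paths above show. A hedged sketch, not part of this diff, assuming an already constructed *fastly.Client and the log import; the IDs, key, and value are placeholders.

// upsertBlockedIP is illustrative only and not part of this diff.
func upsertBlockedIP(client *fastly.Client) error {
	// Items hang off the dictionary ID directly; no service version appears
	// in the request path.
	if _, err := client.CreateDictionaryItem(&fastly.CreateDictionaryItemInput{
		Service:    "SERVICE_ID",
		Dictionary: "DICTIONARY_ID",
		ItemKey:    "203.0.113.7",
		ItemValue:  "blocked",
	}); err != nil {
		return err
	}

	item, err := client.GetDictionaryItem(&fastly.GetDictionaryItemInput{
		Service:    "SERVICE_ID",
		Dictionary: "DICTIONARY_ID",
		ItemKey:    "203.0.113.7",
	})
	if err != nil {
		return err
	}
	log.Printf("dictionary item %s=%s", item.ItemKey, item.ItemValue)
	return nil
}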
diff --git a/vendor/github.com/sethvargo/go-fastly/diff.go b/vendor/github.com/sethvargo/go-fastly/diff.go
index 615fdab013cf..3cb07b94bc63 100644
--- a/vendor/github.com/sethvargo/go-fastly/diff.go
+++ b/vendor/github.com/sethvargo/go-fastly/diff.go
@@ -4,54 +4,54 @@ import "fmt"
// Diff represents a diff of two versions as a response from the Fastly API.
type Diff struct {
- Format string `mapstructure:"format"`
- From string `mapstructure:"from"`
- To string `mapstructure:"to"`
- Diff string `mapstructure:"diff"`
+ Format string `mapstructure:"format"`
+ From string `mapstructure:"from"`
+ To string `mapstructure:"to"`
+ Diff string `mapstructure:"diff"`
}
// GetDiffInput is used as input to the GetDiff function.
type GetDiffInput struct {
- // Service is the ID of the service (required).
- Service string
+ // Service is the ID of the service (required).
+ Service string
- // From is the version to diff from. This can either be a string indicating a
- // positive number (e.g. "1") or a negative number from "-1" down ("-1" is the
- // latest version).
- From string
+ // From is the version to diff from. This can either be a string indicating a
+ // positive number (e.g. "1") or a negative number from "-1" down ("-1" is the
+ // latest version).
+ From string
- // To is the version to diff up to. The same rules for From apply.
- To string
+ // To is the version to diff up to. The same rules for From apply.
+ To string
- // Format is an optional field to specify the format with which the diff will
- // be returned. Acceptable values are "text" (default), "html", or
- // "html_simple".
- Format string
+ // Format is an optional field to specify the format with which the diff will
+ // be returned. Acceptable values are "text" (default), "html", or
+ // "html_simple".
+ Format string
}
// GetDiff returns the diff of the given versions.
func (c *Client) GetDiff(i *GetDiffInput) (*Diff, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.From == "" {
- return nil, ErrMissingFrom
- }
-
- if i.To == "" {
- return nil, ErrMissingTo
- }
-
- path := fmt.Sprintf("service/%s/diff/from/%s/to/%s", i.Service, i.From, i.To)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var d *Diff
- if err := decodeJSON(&d, resp.Body); err != nil {
- return nil, err
- }
- return d, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.From == "" {
+ return nil, ErrMissingFrom
+ }
+
+ if i.To == "" {
+ return nil, ErrMissingTo
+ }
+
+ path := fmt.Sprintf("service/%s/diff/from/%s/to/%s", i.Service, i.From, i.To)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var d *Diff
+ if err := decodeJSON(&d, resp.Body); err != nil {
+ return nil, err
+ }
+ return d, nil
}
diff --git a/vendor/github.com/sethvargo/go-fastly/director.go b/vendor/github.com/sethvargo/go-fastly/director.go
index 73de7fb11297..d5c3dee6390b 100644
--- a/vendor/github.com/sethvargo/go-fastly/director.go
+++ b/vendor/github.com/sethvargo/go-fastly/director.go
@@ -1,22 +1,22 @@
package fastly
import (
- "fmt"
- "sort"
+ "fmt"
+ "sort"
)
const (
- // DirectorTypeRandom is a director that does random direction.
- DirectorTypeRandom DirectorType = 1
+ // DirectorTypeRandom is a director that does random direction.
+ DirectorTypeRandom DirectorType = 1
- // DirectorTypeRoundRobin is a director that does round-robin direction.
- DirectorTypeRoundRobin DirectorType = 2
+ // DirectorTypeRoundRobin is a director that does round-robin direction.
+ DirectorTypeRoundRobin DirectorType = 2
- // DirectorTypeHash is a director that does hash direction.
- DirectorTypeHash DirectorType = 3
+ // DirectorTypeHash is a director that does hash direction.
+ DirectorTypeHash DirectorType = 3
- // DirectorTypeClient is a director that does client direction.
- DirectorTypeClient DirectorType = 4
+ // DirectorTypeClient is a director that does client direction.
+ DirectorTypeClient DirectorType = 4
)
// DirectorType is a type of director.
@@ -24,15 +24,15 @@ type DirectorType uint8
// Director represents a director response from the Fastly API.
type Director struct {
- ServiceID string `mapstructure:"service_id"`
- Version string `mapstructure:"version"`
-
- Name string `mapstructure:"name"`
- Comment string `mapstructure:"comment"`
- Quorum uint `mapstructure:"quorum"`
- Type DirectorType `mapstructure:"type"`
- Retries uint `mapstructure:"retries"`
- Capacity uint `mapstructure:"capacity"`
+ ServiceID string `mapstructure:"service_id"`
+ Version string `mapstructure:"version"`
+
+ Name string `mapstructure:"name"`
+ Comment string `mapstructure:"comment"`
+ Quorum uint `mapstructure:"quorum"`
+ Type DirectorType `mapstructure:"type"`
+ Retries uint `mapstructure:"retries"`
+ Capacity uint `mapstructure:"capacity"`
}
// directorsByName is a sortable list of directors.
@@ -42,197 +42,197 @@ type directorsByName []*Director
func (s directorsByName) Len() int { return len(s) }
func (s directorsByName) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s directorsByName) Less(i, j int) bool {
- return s[i].Name < s[j].Name
+ return s[i].Name < s[j].Name
}
// ListDirectorsInput is used as input to the ListDirectors function.
type ListDirectorsInput struct {
- // Service is the ID of the service (required).
- Service string
+ // Service is the ID of the service (required).
+ Service string
- // Version is the specific configuration version (required).
- Version string
+ // Version is the specific configuration version (required).
+ Version string
}
// ListDirectors returns the list of directors for the configuration version.
func (c *Client) ListDirectors(i *ListDirectorsInput) ([]*Director, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/director", i.Service, i.Version)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var ds []*Director
- if err := decodeJSON(&ds, resp.Body); err != nil {
- return nil, err
- }
- sort.Stable(directorsByName(ds))
- return ds, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/director", i.Service, i.Version)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var ds []*Director
+ if err := decodeJSON(&ds, resp.Body); err != nil {
+ return nil, err
+ }
+ sort.Stable(directorsByName(ds))
+ return ds, nil
}
// CreateDirectorInput is used as input to the CreateDirector function.
type CreateDirectorInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
-
- Name string `form:"name,omitempty"`
- Comment string `form:"comment,omitempty"`
- Quorum uint `form:"quorum,omitempty"`
- Type DirectorType `form:"type,omitempty"`
- Retries uint `form:"retries,omitempty"`
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ Name string `form:"name,omitempty"`
+ Comment string `form:"comment,omitempty"`
+ Quorum uint `form:"quorum,omitempty"`
+ Type DirectorType `form:"type,omitempty"`
+ Retries uint `form:"retries,omitempty"`
}
// CreateDirector creates a new Fastly director.
func (c *Client) CreateDirector(i *CreateDirectorInput) (*Director, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/director", i.Service, i.Version)
- resp, err := c.PostForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var d *Director
- if err := decodeJSON(&d, resp.Body); err != nil {
- return nil, err
- }
- return d, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/director", i.Service, i.Version)
+ resp, err := c.PostForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var d *Director
+ if err := decodeJSON(&d, resp.Body); err != nil {
+ return nil, err
+ }
+ return d, nil
}
// GetDirectorInput is used as input to the GetDirector function.
type GetDirectorInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the director to fetch.
- Name string
+ // Name is the name of the director to fetch.
+ Name string
}
// GetDirector gets the director configuration with the given parameters.
func (c *Client) GetDirector(i *GetDirectorInput) (*Director, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/director/%s", i.Service, i.Version, i.Name)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var d *Director
- if err := decodeJSON(&d, resp.Body); err != nil {
- return nil, err
- }
- return d, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/director/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var d *Director
+ if err := decodeJSON(&d, resp.Body); err != nil {
+ return nil, err
+ }
+ return d, nil
}
// UpdateDirectorInput is used as input to the UpdateDirector function.
type UpdateDirectorInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
-
- // Name is the name of the director to update.
- Name string
-
- Comment string `form:"comment,omitempty"`
- Quorum uint `form:"quorum,omitempty"`
- Type DirectorType `form:"type,omitempty"`
- Retries uint `form:"retries,omitempty"`
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ // Name is the name of the director to update.
+ Name string
+
+ Comment string `form:"comment,omitempty"`
+ Quorum uint `form:"quorum,omitempty"`
+ Type DirectorType `form:"type,omitempty"`
+ Retries uint `form:"retries,omitempty"`
}
// UpdateDirector updates a specific director.
func (c *Client) UpdateDirector(i *UpdateDirectorInput) (*Director, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/director/%s", i.Service, i.Version, i.Name)
- resp, err := c.PutForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var d *Director
- if err := decodeJSON(&d, resp.Body); err != nil {
- return nil, err
- }
- return d, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/director/%s", i.Service, i.Version, i.Name)
+ resp, err := c.PutForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var d *Director
+ if err := decodeJSON(&d, resp.Body); err != nil {
+ return nil, err
+ }
+ return d, nil
}
// DeleteDirectorInput is the input parameter to DeleteDirector.
type DeleteDirectorInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the director to delete (required).
- Name string
+ // Name is the name of the director to delete (required).
+ Name string
}
// DeleteDirector deletes the given director version.
func (c *Client) DeleteDirector(i *DeleteDirectorInput) error {
- if i.Service == "" {
- return ErrMissingService
- }
-
- if i.Version == "" {
- return ErrMissingVersion
- }
-
- if i.Name == "" {
- return ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/director/%s", i.Service, i.Version, i.Name)
- resp, err := c.Delete(path, nil)
- if err != nil {
- return err
- }
-
- var r *statusResp
- if err := decodeJSON(&r, resp.Body); err != nil {
- return err
- }
- if !r.Ok() {
- return fmt.Errorf("Not Ok")
- }
- return nil
+ if i.Service == "" {
+ return ErrMissingService
+ }
+
+ if i.Version == "" {
+ return ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/director/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Delete(path, nil)
+ if err != nil {
+ return err
+ }
+
+ var r *statusResp
+ if err := decodeJSON(&r, resp.Body); err != nil {
+ return err
+ }
+ if !r.Ok() {
+ return fmt.Errorf("Not Ok")
+ }
+ return nil
}
diff --git a/vendor/github.com/sethvargo/go-fastly/director_backend.go b/vendor/github.com/sethvargo/go-fastly/director_backend.go
index 16d5a5690b82..5cbde76b27b0 100644
--- a/vendor/github.com/sethvargo/go-fastly/director_backend.go
+++ b/vendor/github.com/sethvargo/go-fastly/director_backend.go
@@ -1,161 +1,161 @@
package fastly
import (
- "fmt"
- "time"
+ "fmt"
+ "time"
)
// DirectorBackend is the relationship between a director and a backend in the
// Fastly API.
type DirectorBackend struct {
- ServiceID string `mapstructure:"service_id"`
- Version string `mapstructure:"version"`
-
- Director string `mapstructure:"director_name"`
- Backend string `mapstructure:"backend_name"`
- CreatedAt *time.Time `mapstructure:"created_at"`
- UpdatedAt *time.Time `mapstructure:"updated_at"`
- DeletedAt *time.Time `mapstructure:"deleted_at"`
+ ServiceID string `mapstructure:"service_id"`
+ Version string `mapstructure:"version"`
+
+ Director string `mapstructure:"director_name"`
+ Backend string `mapstructure:"backend_name"`
+ CreatedAt *time.Time `mapstructure:"created_at"`
+ UpdatedAt *time.Time `mapstructure:"updated_at"`
+ DeletedAt *time.Time `mapstructure:"deleted_at"`
}
// CreateDirectorBackendInput is used as input to the CreateDirectorBackend
// function.
type CreateDirectorBackendInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Director is the name of the director (required).
- Director string
+ // Director is the name of the director (required).
+ Director string
- // Backend is the name of the backend (required).
- Backend string
+ // Backend is the name of the backend (required).
+ Backend string
}
// CreateDirectorBackend creates a new Fastly backend.
func (c *Client) CreateDirectorBackend(i *CreateDirectorBackendInput) (*DirectorBackend, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Director == "" {
- return nil, ErrMissingDirector
- }
-
- if i.Backend == "" {
- return nil, ErrMissingBackend
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/director/%s/backend/%s",
- i.Service, i.Version, i.Director, i.Backend)
- resp, err := c.PostForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var b *DirectorBackend
- if err := decodeJSON(&b, resp.Body); err != nil {
- return nil, err
- }
- return b, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Director == "" {
+ return nil, ErrMissingDirector
+ }
+
+ if i.Backend == "" {
+ return nil, ErrMissingBackend
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/director/%s/backend/%s",
+ i.Service, i.Version, i.Director, i.Backend)
+ resp, err := c.PostForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var b *DirectorBackend
+ if err := decodeJSON(&b, resp.Body); err != nil {
+ return nil, err
+ }
+ return b, nil
}
// GetDirectorBackendInput is used as input to the GetDirectorBackend function.
type GetDirectorBackendInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Director is the name of the director (required).
- Director string
+ // Director is the name of the director (required).
+ Director string
- // Backend is the name of the backend (required).
- Backend string
+ // Backend is the name of the backend (required).
+ Backend string
}
// GetDirectorBackend gets the backend configuration with the given parameters.
func (c *Client) GetDirectorBackend(i *GetDirectorBackendInput) (*DirectorBackend, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Director == "" {
- return nil, ErrMissingDirector
- }
-
- if i.Backend == "" {
- return nil, ErrMissingBackend
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/director/%s/backend/%s",
- i.Service, i.Version, i.Director, i.Backend)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var b *DirectorBackend
- if err := decodeJSON(&b, resp.Body); err != nil {
- return nil, err
- }
- return b, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Director == "" {
+ return nil, ErrMissingDirector
+ }
+
+ if i.Backend == "" {
+ return nil, ErrMissingBackend
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/director/%s/backend/%s",
+ i.Service, i.Version, i.Director, i.Backend)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var b *DirectorBackend
+ if err := decodeJSON(&b, resp.Body); err != nil {
+ return nil, err
+ }
+ return b, nil
}
// DeleteDirectorBackendInput is the input parameter to DeleteDirectorBackend.
type DeleteDirectorBackendInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Director is the name of the director (required).
- Director string
+ // Director is the name of the director (required).
+ Director string
- // Backend is the name of the backend (required).
- Backend string
+ // Backend is the name of the backend (required).
+ Backend string
}
// DeleteDirectorBackend deletes the given backend version.
func (c *Client) DeleteDirectorBackend(i *DeleteDirectorBackendInput) error {
- if i.Service == "" {
- return ErrMissingService
- }
-
- if i.Version == "" {
- return ErrMissingVersion
- }
-
- if i.Director == "" {
- return ErrMissingDirector
- }
-
- if i.Backend == "" {
- return ErrMissingBackend
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/director/%s/backend/%s",
- i.Service, i.Version, i.Director, i.Backend)
- resp, err := c.Delete(path, nil)
- if err != nil {
- return err
- }
-
- var r *statusResp
- if err := decodeJSON(&r, resp.Body); err != nil {
- return err
- }
- if !r.Ok() {
- return fmt.Errorf("Not Ok")
- }
- return nil
+ if i.Service == "" {
+ return ErrMissingService
+ }
+
+ if i.Version == "" {
+ return ErrMissingVersion
+ }
+
+ if i.Director == "" {
+ return ErrMissingDirector
+ }
+
+ if i.Backend == "" {
+ return ErrMissingBackend
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/director/%s/backend/%s",
+ i.Service, i.Version, i.Director, i.Backend)
+ resp, err := c.Delete(path, nil)
+ if err != nil {
+ return err
+ }
+
+ var r *statusResp
+ if err := decodeJSON(&r, resp.Body); err != nil {
+ return err
+ }
+ if !r.Ok() {
+ return fmt.Errorf("Not Ok")
+ }
+ return nil
}
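
The director and director-backend endpoints are used together: the director is created against a draft version, and existing backends are then linked to it by name. A hedged sketch, not part of this diff, assuming an already constructed *fastly.Client; the service ID, version, and names are placeholders.

// addDirectorWithBackend is illustrative only and not part of this diff.
func addDirectorWithBackend(client *fastly.Client) error {
	// Create the director on a draft version first...
	d, err := client.CreateDirector(&fastly.CreateDirectorInput{
		Service: "SERVICE_ID",
		Version: "1",
		Name:    "origin_pool",
		Type:    fastly.DirectorTypeRandom,
		Quorum:  75,
		Retries: 5,
	})
	if err != nil {
		return err
	}

	// ...then link an existing backend to it by name.
	_, err = client.CreateDirectorBackend(&fastly.CreateDirectorBackendInput{
		Service:  "SERVICE_ID",
		Version:  "1",
		Director: d.Name,
		Backend:  "origin0",
	})
	return err
}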
diff --git a/vendor/github.com/sethvargo/go-fastly/domain.go b/vendor/github.com/sethvargo/go-fastly/domain.go
index 68f39fefd26d..56356f8eb0e9 100644
--- a/vendor/github.com/sethvargo/go-fastly/domain.go
+++ b/vendor/github.com/sethvargo/go-fastly/domain.go
@@ -1,18 +1,18 @@
package fastly
import (
- "fmt"
- "sort"
+ "fmt"
+ "sort"
)
// Domain represents the domain name Fastly will serve content for.
type Domain struct {
- ServiceID string `mapstructure:"service_id"`
- Version string `mapstructure:"version"`
+ ServiceID string `mapstructure:"service_id"`
+ Version string `mapstructure:"version"`
- Name string `mapstructure:"name"`
- Comment string `mapstructure:"comment"`
- Locked bool `mapstructure:"locked"`
+ Name string `mapstructure:"name"`
+ Comment string `mapstructure:"comment"`
+ Locked bool `mapstructure:"locked"`
}
// domainsByName is a sortable list of backends.
@@ -22,190 +22,190 @@ type domainsByName []*Domain
func (s domainsByName) Len() int { return len(s) }
func (s domainsByName) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s domainsByName) Less(i, j int) bool {
- return s[i].Name < s[j].Name
+ return s[i].Name < s[j].Name
}
// ListDomainsInput is used as input to the ListDomains function.
type ListDomainsInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
}
-// ListDomains returns the list of domains for this account.
+// ListDomains returns the list of domains for this Service.
func (c *Client) ListDomains(i *ListDomainsInput) ([]*Domain, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/domain", i.Service, i.Version)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var ds []*Domain
- if err := decodeJSON(&ds, resp.Body); err != nil {
- return nil, err
- }
- sort.Stable(domainsByName(ds))
- return ds, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/domain", i.Service, i.Version)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var ds []*Domain
+ if err := decodeJSON(&ds, resp.Body); err != nil {
+ return nil, err
+ }
+ sort.Stable(domainsByName(ds))
+ return ds, nil
}
// CreateDomainInput is used as input to the CreateDomain function.
type CreateDomainInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the domain that the service will respond to (required).
- Name string `form:"name"`
+ // Name is the name of the domain that the service will respond to (required).
+ Name string `form:"name"`
- // Comment is a personal, freeform descriptive note.
- Comment string `form:"comment,omitempty"`
+ // Comment is a personal, freeform descriptive note.
+ Comment string `form:"comment,omitempty"`
}
// CreateDomain creates a new domain with the given information.
func (c *Client) CreateDomain(i *CreateDomainInput) (*Domain, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/domain", i.Service, i.Version)
- resp, err := c.PostForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var d *Domain
- if err := decodeJSON(&d, resp.Body); err != nil {
- return nil, err
- }
- return d, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/domain", i.Service, i.Version)
+ resp, err := c.PostForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var d *Domain
+ if err := decodeJSON(&d, resp.Body); err != nil {
+ return nil, err
+ }
+ return d, nil
}
// GetDomainInput is used as input to the GetDomain function.
type GetDomainInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the domain to fetch.
- Name string `form:"name"`
+ // Name is the name of the domain to fetch.
+ Name string `form:"name"`
}
// GetDomain retrieves information about the given domain name.
func (c *Client) GetDomain(i *GetDomainInput) (*Domain, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/domain/%s", i.Service, i.Version, i.Name)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var d *Domain
- if err := decodeJSON(&d, resp.Body); err != nil {
- return nil, err
- }
- return d, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/domain/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var d *Domain
+ if err := decodeJSON(&d, resp.Body); err != nil {
+ return nil, err
+ }
+ return d, nil
}
// UpdateDomainInput is used as input to the UpdateDomain function.
type UpdateDomainInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the domain that the service will respond to (required).
- Name string
+ // Name is the name of the domain that the service will respond to (required).
+ Name string
- // NewName is the updated name of the domain
- NewName string `form:"name"`
+ // NewName is the updated name of the domain
+ NewName string `form:"name"`
- // Comment is a personal, freeform descriptive note.
- Comment string `form:"comment,omitempty"`
+ // Comment is a personal, freeform descriptive note.
+ Comment string `form:"comment,omitempty"`
}
// UpdateDomain updates a single domain for the current service. The only allowed
// parameters are `Name` and `Comment`.
func (c *Client) UpdateDomain(i *UpdateDomainInput) (*Domain, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/domain/%s", i.Service, i.Version, i.Name)
- resp, err := c.PutForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var d *Domain
- if err := decodeJSON(&d, resp.Body); err != nil {
- return nil, err
- }
- return d, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/domain/%s", i.Service, i.Version, i.Name)
+ resp, err := c.PutForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var d *Domain
+ if err := decodeJSON(&d, resp.Body); err != nil {
+ return nil, err
+ }
+ return d, nil
}
// DeleteDomainInput is used as input to the DeleteDomain function.
type DeleteDomainInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the domain that the service will respond to (required).
- Name string `form:"name"`
+ // Name is the name of the domain that the service will respond to (required).
+ Name string `form:"name"`
}
// DeleteDomain removes a single domain by the given name.
func (c *Client) DeleteDomain(i *DeleteDomainInput) error {
- if i.Service == "" {
- return ErrMissingService
- }
-
- if i.Version == "" {
- return ErrMissingVersion
- }
-
- if i.Name == "" {
- return ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/domain/%s", i.Service, i.Version, i.Name)
- _, err := c.Delete(path, nil)
- if err != nil {
- return err
- }
- return nil
+ if i.Service == "" {
+ return ErrMissingService
+ }
+
+ if i.Version == "" {
+ return ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/domain/%s", i.Service, i.Version, i.Name)
+ _, err := c.Delete(path, nil)
+ if err != nil {
+ return err
+ }
+ return nil
}
diff --git a/vendor/github.com/sethvargo/go-fastly/errors.go b/vendor/github.com/sethvargo/go-fastly/errors.go
index d971796afffb..e5a617323432 100644
--- a/vendor/github.com/sethvargo/go-fastly/errors.go
+++ b/vendor/github.com/sethvargo/go-fastly/errors.go
@@ -1,10 +1,10 @@
package fastly
import (
- "bytes"
- "errors"
- "fmt"
- "net/http"
+ "bytes"
+ "errors"
+ "fmt"
+ "net/http"
)
// ErrMissingService is an error that is returned when an input struct requires
@@ -69,48 +69,48 @@ var _ error = (*HTTPError)(nil)
// HTTPError is a custom error type that wraps an HTTP status code with some
// helper functions.
type HTTPError struct {
- // StatusCode is the HTTP status code (2xx-5xx).
- StatusCode int
+ // StatusCode is the HTTP status code (2xx-5xx).
+ StatusCode int
- // Message and Detail are information returned by the Fastly API.
- Message string `mapstructure:"msg"`
- Detail string `mapstructure:"detail"`
+ // Message and Detail are information returned by the Fastly API.
+ Message string `mapstructure:"msg"`
+ Detail string `mapstructure:"detail"`
}
// NewHTTPError creates a new HTTP error from the given code.
func NewHTTPError(resp *http.Response) *HTTPError {
- var e *HTTPError
- if resp.Body != nil {
- decodeJSON(&e, resp.Body)
- }
- e.StatusCode = resp.StatusCode
- return e
+ var e *HTTPError
+ if resp.Body != nil {
+ decodeJSON(&e, resp.Body)
+ }
+ e.StatusCode = resp.StatusCode
+ return e
}
// Error implements the error interface and returns the string representing the
// error text that includes the status code and the corresponding status text.
func (e *HTTPError) Error() string {
- var r bytes.Buffer
- fmt.Fprintf(&r, "%d - %s", e.StatusCode, http.StatusText(e.StatusCode))
+ var r bytes.Buffer
+ fmt.Fprintf(&r, "%d - %s", e.StatusCode, http.StatusText(e.StatusCode))
- if e.Message != "" {
- fmt.Fprintf(&r, "\nMessage: %s", e.Message)
- }
+ if e.Message != "" {
+ fmt.Fprintf(&r, "\nMessage: %s", e.Message)
+ }
- if e.Detail != "" {
- fmt.Fprintf(&r, "\nDetail: %s", e.Detail)
- }
+ if e.Detail != "" {
+ fmt.Fprintf(&r, "\nDetail: %s", e.Detail)
+ }
- return r.String()
+ return r.String()
}
// String implements the stringer interface and returns the string representing
// the string text that includes the status code and corresponding status text.
func (e *HTTPError) String() string {
- return e.Error()
+ return e.Error()
}
// IsNotFound returns true if the HTTP error code is a 404, false otherwise.
func (e *HTTPError) IsNotFound() bool {
- return e.StatusCode == 404
+ return e.StatusCode == 404
}
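
Client calls that fail at the HTTP layer generally surface a *HTTPError, so callers can branch on the helpers above. A hedged sketch, not part of this diff, that treats a 404 from GetDictionary as absence; the client and arguments are assumed to come from elsewhere.

// dictionaryExists is illustrative only and not part of this diff.
func dictionaryExists(client *fastly.Client, service, version, name string) (bool, error) {
	_, err := client.GetDictionary(&fastly.GetDictionaryInput{
		Service: service,
		Version: version,
		Name:    name,
	})
	if err == nil {
		return true, nil
	}
	// IsNotFound lets callers treat a 404 as "does not exist" rather than as
	// a hard failure.
	if httpErr, ok := err.(*fastly.HTTPError); ok && httpErr.IsNotFound() {
		return false, nil
	}
	return false, err
}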
diff --git a/vendor/github.com/sethvargo/go-fastly/fastly.go b/vendor/github.com/sethvargo/go-fastly/fastly.go
index 693939a38488..438aa4ad0e74 100644
--- a/vendor/github.com/sethvargo/go-fastly/fastly.go
+++ b/vendor/github.com/sethvargo/go-fastly/fastly.go
@@ -1,23 +1,23 @@
package fastly
import (
- "bytes"
- "encoding"
+ "bytes"
+ "encoding"
)
type statusResp struct {
- Status string
- Msg string
+ Status string
+ Msg string
}
func (t *statusResp) Ok() bool {
- return t.Status == "ok"
+ return t.Status == "ok"
}
// Ensure Compatibool implements the proper interfaces.
var (
- _ encoding.TextMarshaler = new(Compatibool)
- _ encoding.TextUnmarshaler = new(Compatibool)
+ _ encoding.TextMarshaler = new(Compatibool)
+ _ encoding.TextUnmarshaler = new(Compatibool)
)
// Compatibool is a boolean value that marshals to 0/1 instead of true/false
@@ -26,16 +26,16 @@ type Compatibool bool
// MarshalText implements the encoding.TextMarshaler interface.
func (b Compatibool) MarshalText() ([]byte, error) {
- if b {
- return []byte("1"), nil
- }
- return []byte("0"), nil
+ if b {
+ return []byte("1"), nil
+ }
+ return []byte("0"), nil
}
// UnmarshalText implements the encoding.TextUnmarshaler interface.
func (b Compatibool) UnmarshalText(t []byte) error {
- if bytes.Equal(t, []byte("1")) {
- b = Compatibool(true)
- }
- return nil
+ if bytes.Equal(t, []byte("1")) {
+ b = Compatibool(true)
+ }
+ return nil
}
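
Per the comment above, Compatibool marshals to "0"/"1" rather than "true"/"false", which is what matters for form-encoded requests. Note that UnmarshalText, as vendored here, has a value receiver, so the assignment inside it never reaches the caller's variable; the sketch below, which is not part of this diff, only exercises the marshaling side.

package main

import (
	"fmt"
	"log"

	fastly "github.com/sethvargo/go-fastly"
)

func main() {
	enabled := fastly.Compatibool(true)
	text, err := enabled.MarshalText()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Compatibool(true) marshals to %q\n", text) // prints "1"
}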
diff --git a/vendor/github.com/sethvargo/go-fastly/ftp.go b/vendor/github.com/sethvargo/go-fastly/ftp.go
index 371cbda43d06..95cdf828ed8c 100644
--- a/vendor/github.com/sethvargo/go-fastly/ftp.go
+++ b/vendor/github.com/sethvargo/go-fastly/ftp.go
@@ -1,30 +1,30 @@
package fastly
import (
- "fmt"
- "sort"
- "time"
+ "fmt"
+ "sort"
+ "time"
)
// FTP represents an FTP logging response from the Fastly API.
type FTP struct {
- ServiceID string `mapstructure:"service_id"`
- Version string `mapstructure:"version"`
-
- Name string `mapstructure:"name"`
- Address string `mapstructure:"address"`
- Port uint `mapstructure:"port"`
- Username string `mapstructure:"user"`
- Password string `mapstructure:"password"`
- Directory string `mapstructure:"directory"`
- Period uint `mapstructure:"period"`
- GzipLevel uint8 `mapstructure:"gzip_level"`
- Format string `mapstructure:"format"`
- ResponseCondition string `mapstructure:"response_condition"`
- TimestampFormat string `mapstructure:"timestamp_format"`
- CreatedAt *time.Time `mapstructure:"created_at"`
- UpdatedAt *time.Time `mapstructure:"updated_at"`
- DeletedAt *time.Time `mapstructure:"deleted_at"`
+ ServiceID string `mapstructure:"service_id"`
+ Version string `mapstructure:"version"`
+
+ Name string `mapstructure:"name"`
+ Address string `mapstructure:"address"`
+ Port uint `mapstructure:"port"`
+ Username string `mapstructure:"user"`
+ Password string `mapstructure:"password"`
+ Path string `mapstructure:"path"`
+ Period uint `mapstructure:"period"`
+ GzipLevel uint8 `mapstructure:"gzip_level"`
+ Format string `mapstructure:"format"`
+ ResponseCondition string `mapstructure:"response_condition"`
+ TimestampFormat string `mapstructure:"timestamp_format"`
+ CreatedAt *time.Time `mapstructure:"created_at"`
+ UpdatedAt *time.Time `mapstructure:"updated_at"`
+ DeletedAt *time.Time `mapstructure:"deleted_at"`
}
// ftpsByName is a sortable list of ftps.
@@ -34,210 +34,210 @@ type ftpsByName []*FTP
func (s ftpsByName) Len() int { return len(s) }
func (s ftpsByName) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s ftpsByName) Less(i, j int) bool {
- return s[i].Name < s[j].Name
+ return s[i].Name < s[j].Name
}
// ListFTPsInput is used as input to the ListFTPs function.
type ListFTPsInput struct {
- // Service is the ID of the service (required).
- Service string
+ // Service is the ID of the service (required).
+ Service string
- // Version is the specific configuration version (required).
- Version string
+ // Version is the specific configuration version (required).
+ Version string
}
// ListFTPs returns the list of ftps for the configuration version.
func (c *Client) ListFTPs(i *ListFTPsInput) ([]*FTP, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/logging/ftp", i.Service, i.Version)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var ftps []*FTP
- if err := decodeJSON(&ftps, resp.Body); err != nil {
- return nil, err
- }
- sort.Stable(ftpsByName(ftps))
- return ftps, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/ftp", i.Service, i.Version)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var ftps []*FTP
+ if err := decodeJSON(&ftps, resp.Body); err != nil {
+ return nil, err
+ }
+ sort.Stable(ftpsByName(ftps))
+ return ftps, nil
}
// CreateFTPInput is used as input to the CreateFTP function.
type CreateFTPInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
-
- Name string `form:"name,omitempty"`
- Address string `form:"address,omitempty"`
- Port uint `form:"port,omitempty"`
- Username string `form:"user,omitempty"`
- Password string `form:"password,omitempty"`
- Directory string `form:"directory,omitempty"`
- Period uint `form:"period,omitempty"`
- GzipLevel uint8 `form:"gzip_level,omitempty"`
- Format string `form:"format,omitempty"`
- ResponseCondition string `form:"response_condition,omitempty"`
- TimestampFormat string `form:"timestamp_format,omitempty"`
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ Name string `form:"name,omitempty"`
+ Address string `form:"address,omitempty"`
+ Port uint `form:"port,omitempty"`
+ Username string `form:"user,omitempty"`
+ Password string `form:"password,omitempty"`
+ Path string `form:"path,omitempty"`
+ Period uint `form:"period,omitempty"`
+ GzipLevel uint8 `form:"gzip_level,omitempty"`
+ Format string `form:"format,omitempty"`
+ ResponseCondition string `form:"response_condition,omitempty"`
+ TimestampFormat string `form:"timestamp_format,omitempty"`
}
// CreateFTP creates a new Fastly FTP.
func (c *Client) CreateFTP(i *CreateFTPInput) (*FTP, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/logging/ftp", i.Service, i.Version)
- resp, err := c.PostForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var ftp *FTP
- if err := decodeJSON(&ftp, resp.Body); err != nil {
- return nil, err
- }
- return ftp, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/ftp", i.Service, i.Version)
+ resp, err := c.PostForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var ftp *FTP
+ if err := decodeJSON(&ftp, resp.Body); err != nil {
+ return nil, err
+ }
+ return ftp, nil
}
// GetFTPInput is used as input to the GetFTP function.
type GetFTPInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the FTP to fetch.
- Name string
+ // Name is the name of the FTP to fetch.
+ Name string
}
// GetFTP gets the FTP configuration with the given parameters.
func (c *Client) GetFTP(i *GetFTPInput) (*FTP, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/logging/ftp/%s", i.Service, i.Version, i.Name)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var b *FTP
- if err := decodeJSON(&b, resp.Body); err != nil {
- return nil, err
- }
- return b, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/ftp/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var b *FTP
+ if err := decodeJSON(&b, resp.Body); err != nil {
+ return nil, err
+ }
+ return b, nil
}
// UpdateFTPInput is used as input to the UpdateFTP function.
type UpdateFTPInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
-
- // Name is the name of the FTP to update.
- Name string
-
- NewName string `form:"name,omitempty"`
- Address string `form:"address,omitempty"`
- Port uint `form:"port,omitempty"`
- Username string `form:"user,omitempty"`
- Password string `form:"password,omitempty"`
- Directory string `form:"directory,omitempty"`
- Period uint `form:"period,omitempty"`
- GzipLevel uint8 `form:"gzip_level,omitempty"`
- Format string `form:"format,omitempty"`
- ResponseCondition string `form:"response_condition,omitempty"`
- TimestampFormat string `form:"timestamp_format,omitempty"`
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ // Name is the name of the FTP to update.
+ Name string
+
+ NewName string `form:"name,omitempty"`
+ Address string `form:"address,omitempty"`
+ Port uint `form:"port,omitempty"`
+ Username string `form:"user,omitempty"`
+ Password string `form:"password,omitempty"`
+ Path string `form:"path,omitempty"`
+ Period uint `form:"period,omitempty"`
+ GzipLevel uint8 `form:"gzip_level,omitempty"`
+ Format string `form:"format,omitempty"`
+ ResponseCondition string `form:"response_condition,omitempty"`
+ TimestampFormat string `form:"timestamp_format,omitempty"`
}
// UpdateFTP updates a specific FTP.
func (c *Client) UpdateFTP(i *UpdateFTPInput) (*FTP, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/logging/ftp/%s", i.Service, i.Version, i.Name)
- resp, err := c.PutForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var b *FTP
- if err := decodeJSON(&b, resp.Body); err != nil {
- return nil, err
- }
- return b, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/ftp/%s", i.Service, i.Version, i.Name)
+ resp, err := c.PutForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var b *FTP
+ if err := decodeJSON(&b, resp.Body); err != nil {
+ return nil, err
+ }
+ return b, nil
}
// DeleteFTPInput is the input parameter to DeleteFTP.
type DeleteFTPInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the FTP to delete (required).
- Name string
+ // Name is the name of the FTP to delete (required).
+ Name string
}
// DeleteFTP deletes the given FTP version.
func (c *Client) DeleteFTP(i *DeleteFTPInput) error {
- if i.Service == "" {
- return ErrMissingService
- }
-
- if i.Version == "" {
- return ErrMissingVersion
- }
-
- if i.Name == "" {
- return ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/logging/ftp/%s", i.Service, i.Version, i.Name)
- resp, err := c.Delete(path, nil)
- if err != nil {
- return err
- }
-
- var r *statusResp
- if err := decodeJSON(&r, resp.Body); err != nil {
- return err
- }
- if !r.Ok() {
- return fmt.Errorf("Not Ok")
- }
- return nil
+ if i.Service == "" {
+ return ErrMissingService
+ }
+
+ if i.Version == "" {
+ return ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/ftp/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Delete(path, nil)
+ if err != nil {
+ return err
+ }
+
+ var r *statusResp
+ if err := decodeJSON(&r, resp.Body); err != nil {
+ return err
+ }
+ if !r.Ok() {
+ return fmt.Errorf("Not Ok")
+ }
+ return nil
}
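For orientation, here is a minimal usage sketch of the reworked FTP logging input, showing the new `Path` field that replaces the old `Directory` field. The `fastly.NewClient` constructor and all token, service, and version values are assumptions for illustration, not part of this change.

```go
package main

import (
	"log"

	fastly "github.com/sethvargo/go-fastly"
)

func main() {
	// NewClient taking an API token is an assumption of this sketch;
	// the service ID and version are placeholders.
	client, err := fastly.NewClient("FASTLY_API_KEY")
	if err != nil {
		log.Fatal(err)
	}

	// Path replaces the old Directory form field.
	ftp, err := client.CreateFTP(&fastly.CreateFTPInput{
		Service:  "SERVICE_ID",
		Version:  "1",
		Name:     "ftp-logs",
		Address:  "ftp.example.com",
		Username: "logger",
		Password: "secret",
		Path:     "/logs",
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("created FTP logging endpoint %q", ftp.Name)
}
```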
diff --git a/vendor/github.com/sethvargo/go-fastly/gcs.go b/vendor/github.com/sethvargo/go-fastly/gcs.go
new file mode 100644
index 000000000000..ef813ac5129e
--- /dev/null
+++ b/vendor/github.com/sethvargo/go-fastly/gcs.go
@@ -0,0 +1,236 @@
+package fastly
+
+import (
+ "fmt"
+ "sort"
+)
+
+// GCS represents a GCS logging response from the Fastly API.
+type GCS struct {
+ ServiceID string `mapstructure:"service_id"`
+ Version string `mapstructure:"version"`
+
+ Name string `mapstructure:"name"`
+ Bucket string `mapstructure:"bucket_name"`
+ User string `mapstructure:"user"`
+ SecretKey string `mapstructure:"secret_key"`
+ Path string `mapstructure:"path"`
+ Period uint `mapstructure:"period"`
+ GzipLevel uint8 `mapstructure:"gzip_level"`
+ Format string `mapstructure:"format"`
+ ResponseCondition string `mapstructure:"response_condition"`
+ TimestampFormat string `mapstructure:"timestamp_format"`
+}
+
+// gcsesByName is a sortable list of gcses.
+type gcsesByName []*GCS
+
+// Len, Swap, and Less implement the sortable interface.
+func (s gcsesByName) Len() int { return len(s) }
+func (s gcsesByName) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
+func (s gcsesByName) Less(i, j int) bool {
+ return s[i].Name < s[j].Name
+}
+
+// ListGCSsInput is used as input to the ListGCSs function.
+type ListGCSsInput struct {
+ // Service is the ID of the service (required).
+ Service string
+
+ // Version is the specific configuration version (required).
+ Version string
+}
+
+// ListGCSs returns the list of gcses for the configuration version.
+func (c *Client) ListGCSs(i *ListGCSsInput) ([]*GCS, error) {
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/gcs", i.Service, i.Version)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var gcses []*GCS
+ if err := decodeJSON(&gcses, resp.Body); err != nil {
+ return nil, err
+ }
+ sort.Stable(gcsesByName(gcses))
+ return gcses, nil
+}
+
+// CreateGCSInput is used as input to the CreateGCS function.
+type CreateGCSInput struct {
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ Name string `form:"name,omitempty"`
+ Bucket string `form:"bucket_name,omitempty"`
+ User string `form:"user,omitempty"`
+ SecretKey string `form:"secret_key,omitempty"`
+ Path string `form:"path,omitempty"`
+ Period uint `form:"period,omitempty"`
+ GzipLevel uint8 `form:"gzip_level,omitempty"`
+ Format string `form:"format,omitempty"`
+ ResponseCondition string `form:"response_condition,omitempty"`
+ TimestampFormat string `form:"timestamp_format,omitempty"`
+}
+
+// CreateGCS creates a new Fastly GCS.
+func (c *Client) CreateGCS(i *CreateGCSInput) (*GCS, error) {
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/gcs", i.Service, i.Version)
+ resp, err := c.PostForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var gcs *GCS
+ if err := decodeJSON(&gcs, resp.Body); err != nil {
+ return nil, err
+ }
+ return gcs, nil
+}
+
+// GetGCSInput is used as input to the GetGCS function.
+type GetGCSInput struct {
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ // Name is the name of the GCS to fetch.
+ Name string
+}
+
+// GetGCS gets the GCS configuration with the given parameters.
+func (c *Client) GetGCS(i *GetGCSInput) (*GCS, error) {
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/gcs/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var b *GCS
+ if err := decodeJSON(&b, resp.Body); err != nil {
+ return nil, err
+ }
+ return b, nil
+}
+
+// UpdateGCSInput is used as input to the UpdateGCS function.
+type UpdateGCSInput struct {
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ // Name is the name of the GCS to update.
+ Name string
+
+ NewName string `form:"name,omitempty"`
+ Bucket string `form:"bucket_name,omitempty"`
+ User string `form:"user,omitempty"`
+ SecretKey string `form:"secret_key,omitempty"`
+ Path string `form:"path,omitempty"`
+ Period uint `form:"period,omitempty"`
+ GzipLevel uint8 `form:"gzip_level,omitempty"`
+ Format string `form:"format,omitempty"`
+ ResponseCondition string `form:"response_condition,omitempty"`
+ TimestampFormat string `form:"timestamp_format,omitempty"`
+}
+
+// UpdateGCS updates a specific GCS.
+func (c *Client) UpdateGCS(i *UpdateGCSInput) (*GCS, error) {
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/gcs/%s", i.Service, i.Version, i.Name)
+ resp, err := c.PutForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var b *GCS
+ if err := decodeJSON(&b, resp.Body); err != nil {
+ return nil, err
+ }
+ return b, nil
+}
+
+// DeleteGCSInput is the input parameter to DeleteGCS.
+type DeleteGCSInput struct {
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ // Name is the name of the GCS to delete (required).
+ Name string
+}
+
+// DeleteGCS deletes the given GCS version.
+func (c *Client) DeleteGCS(i *DeleteGCSInput) error {
+ if i.Service == "" {
+ return ErrMissingService
+ }
+
+ if i.Version == "" {
+ return ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/gcs/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Delete(path, nil)
+ if err != nil {
+ return err
+ }
+
+ var r *statusResp
+ if err := decodeJSON(&r, resp.Body); err != nil {
+ return err
+ }
+ if !r.Ok() {
+ return fmt.Errorf("Not Ok")
+ }
+ return nil
+}
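The new gcs.go file above follows the same CRUD shape as the other logging endpoints. A rough sketch of creating and then listing GCS logging endpoints might look like this; the client constructor and all identifiers are again assumptions for illustration.

```go
package main

import (
	"log"

	fastly "github.com/sethvargo/go-fastly"
)

func main() {
	client, err := fastly.NewClient("FASTLY_API_KEY") // constructor assumed
	if err != nil {
		log.Fatal(err)
	}

	// Bucket, User, SecretKey, and Path are illustrative values.
	_, err = client.CreateGCS(&fastly.CreateGCSInput{
		Service:   "SERVICE_ID",
		Version:   "1",
		Name:      "gcs-logs",
		Bucket:    "my-log-bucket",
		User:      "logs@project.iam.gserviceaccount.com",
		SecretKey: "PRIVATE_KEY",
		Path:      "fastly/",
	})
	if err != nil {
		log.Fatal(err)
	}

	// ListGCSs returns the endpoints sorted by name.
	gcses, err := client.ListGCSs(&fastly.ListGCSsInput{
		Service: "SERVICE_ID",
		Version: "1",
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, g := range gcses {
		log.Println(g.Name)
	}
}
```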
diff --git a/vendor/github.com/sethvargo/go-fastly/gzip.go b/vendor/github.com/sethvargo/go-fastly/gzip.go
new file mode 100644
index 000000000000..2b9f6d80b3d7
--- /dev/null
+++ b/vendor/github.com/sethvargo/go-fastly/gzip.go
@@ -0,0 +1,218 @@
+package fastly
+
+import (
+ "fmt"
+ "sort"
+)
+
+// Gzip represents a Gzip configuration response from the Fastly API.
+type Gzip struct {
+ ServiceID string `mapstructure:"service_id"`
+ Version string `mapstructure:"version"`
+
+ Name string `mapstructure:"name"`
+ ContentTypes string `mapstructure:"content_types"`
+ Extensions string `mapstructure:"extensions"`
+ CacheCondition string `mapstructure:"cache_condition"`
+}
+
+// gzipsByName is a sortable list of gzips.
+type gzipsByName []*Gzip
+
+// Len, Swap, and Less implement the sortable interface.
+func (s gzipsByName) Len() int { return len(s) }
+func (s gzipsByName) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
+func (s gzipsByName) Less(i, j int) bool {
+ return s[i].Name < s[j].Name
+}
+
+// ListGzipsInput is used as input to the ListGzips function.
+type ListGzipsInput struct {
+ // Service is the ID of the service (required).
+ Service string
+
+ // Version is the specific configuration version (required).
+ Version string
+}
+
+// ListGzips returns the list of gzips for the configuration version.
+func (c *Client) ListGzips(i *ListGzipsInput) ([]*Gzip, error) {
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/gzip", i.Service, i.Version)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var gzips []*Gzip
+ if err := decodeJSON(&gzips, resp.Body); err != nil {
+ return nil, err
+ }
+ sort.Stable(gzipsByName(gzips))
+ return gzips, nil
+}
+
+// CreateGzipInput is used as input to the CreateGzip function.
+type CreateGzipInput struct {
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ Name string `form:"name,omitempty"`
+ ContentTypes string `form:"content_types"`
+ Extensions string `form:"extensions"`
+ CacheCondition string `form:"cache_condition,omitempty"`
+}
+
+// CreateGzip creates a new Fastly Gzip.
+func (c *Client) CreateGzip(i *CreateGzipInput) (*Gzip, error) {
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/gzip", i.Service, i.Version)
+ resp, err := c.PostForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var gzip *Gzip
+ if err := decodeJSON(&gzip, resp.Body); err != nil {
+ return nil, err
+ }
+ return gzip, nil
+}
+
+// GetGzipInput is used as input to the GetGzip function.
+type GetGzipInput struct {
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ // Name is the name of the Gzip to fetch.
+ Name string
+}
+
+// GetGzip gets the Gzip configuration with the given parameters.
+func (c *Client) GetGzip(i *GetGzipInput) (*Gzip, error) {
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/gzip/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var b *Gzip
+ if err := decodeJSON(&b, resp.Body); err != nil {
+ return nil, err
+ }
+ return b, nil
+}
+
+// UpdateGzipInput is used as input to the UpdateGzip function.
+type UpdateGzipInput struct {
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ // Name is the name of the Gzip to update.
+ Name string
+
+ NewName string `form:"name,omitempty"`
+ ContentTypes string `form:"content_types,omitempty"`
+ Extensions string `form:"extensions,omitempty"`
+ CacheCondition string `form:"cache_condition,omitempty"`
+}
+
+// UpdateGzip updates a specific Gzip.
+func (c *Client) UpdateGzip(i *UpdateGzipInput) (*Gzip, error) {
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/gzip/%s", i.Service, i.Version, i.Name)
+ resp, err := c.PutForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var b *Gzip
+ if err := decodeJSON(&b, resp.Body); err != nil {
+ return nil, err
+ }
+ return b, nil
+}
+
+// DeleteGzipInput is the input parameter to DeleteGzip.
+type DeleteGzipInput struct {
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ // Name is the name of the Gzip to delete (required).
+ Name string
+}
+
+// DeleteGzip deletes the given Gzip version.
+func (c *Client) DeleteGzip(i *DeleteGzipInput) error {
+ if i.Service == "" {
+ return ErrMissingService
+ }
+
+ if i.Version == "" {
+ return ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/gzip/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Delete(path, nil)
+ if err != nil {
+ return err
+ }
+
+ var r *statusResp
+ if err := decodeJSON(&r, resp.Body); err != nil {
+ return err
+ }
+ if !r.Ok() {
+ return fmt.Errorf("Not Ok")
+ }
+ return nil
+}
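A comparable sketch for the new gzip settings API; the content types and extensions shown are illustrative values only.

```go
package main

import (
	"log"

	fastly "github.com/sethvargo/go-fastly"
)

func main() {
	client, err := fastly.NewClient("FASTLY_API_KEY") // constructor assumed
	if err != nil {
		log.Fatal(err)
	}

	// ContentTypes and Extensions are written space-separated here for
	// illustration; the exact expected format is the API's, not this sketch's.
	gz, err := client.CreateGzip(&fastly.CreateGzipInput{
		Service:      "SERVICE_ID",
		Version:      "1",
		Name:         "gzip-text",
		ContentTypes: "text/html text/css application/json",
		Extensions:   "html css json",
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("gzip rule %q created", gz.Name)
}
```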
diff --git a/vendor/github.com/sethvargo/go-fastly/header.go b/vendor/github.com/sethvargo/go-fastly/header.go
index 8357ad0ed9cd..476d1195ed85 100644
--- a/vendor/github.com/sethvargo/go-fastly/header.go
+++ b/vendor/github.com/sethvargo/go-fastly/header.go
@@ -1,48 +1,48 @@
package fastly
import (
- "fmt"
- "sort"
+ "fmt"
+ "sort"
)
const (
- // HeaderActionSet is a header action that sets or resets a header.
- HeaderActionSet HeaderAction = "set"
+ // HeaderActionSet is a header action that sets or resets a header.
+ HeaderActionSet HeaderAction = "set"
- // HeaderActionAppend is a header action that appends to an existing header.
- HeaderActionAppend HeaderAction = "append"
+ // HeaderActionAppend is a header action that appends to an existing header.
+ HeaderActionAppend HeaderAction = "append"
- // HeaderActionDelete is a header action that deletes a header.
- HeaderActionDelete HeaderAction = "delete"
+ // HeaderActionDelete is a header action that deletes a header.
+ HeaderActionDelete HeaderAction = "delete"
- // HeaderActionRegex is a header action that performs a single regex
- // replacement on a header.
- HeaderActionRegex HeaderAction = "regex"
+ // HeaderActionRegex is a header action that performs a single regex
+ // replacement on a header.
+ HeaderActionRegex HeaderAction = "regex"
- // HeaderActionRegexRepeat is a header action that performs a global regex
- // replacement on a header.
- HeaderActionRegexRepeat HeaderAction = "regex_repeat"
+ // HeaderActionRegexRepeat is a header action that performs a global regex
+ // replacement on a header.
+ HeaderActionRegexRepeat HeaderAction = "regex_repeat"
)
// HeaderAction is a type of header action.
type HeaderAction string
const (
- // HeaderTypeRequest is a header type that performs on the request before
- // lookups.
- HeaderTypeRequest HeaderType = "request"
+ // HeaderTypeRequest is a header type that performs on the request before
+ // lookups.
+ HeaderTypeRequest HeaderType = "request"
- // HeaderTypeFetch is a header type that performs on the request to the origin
- // server.
- HeaderTypeFetch HeaderType = "fetch"
+ // HeaderTypeFetch is a header type that performs on the request to the origin
+ // server.
+ HeaderTypeFetch HeaderType = "fetch"
- // HeaderTypeCache is a header type that performs on the response before it's
- // store in the cache.
- HeaderTypeCache HeaderType = "cache"
+ // HeaderTypeCache is a header type that performs on the response before it's
+	// stored in the cache.
+ HeaderTypeCache HeaderType = "cache"
- // HeaderTypeResponse is a header type that performs on the response before
- // delivering to the client.
- HeaderTypeResponse HeaderType = "response"
+ // HeaderTypeResponse is a header type that performs on the response before
+ // delivering to the client.
+ HeaderTypeResponse HeaderType = "response"
)
// HeaderType is a type of header.
@@ -50,21 +50,21 @@ type HeaderType string
// Header represents a header response from the Fastly API.
type Header struct {
- ServiceID string `mapstructure:"service_id"`
- Version string `mapstructure:"version"`
-
- Name string `mapstructure:"name"`
- Action HeaderAction `mapstructure:"action"`
- IgnoreIfSet bool `mapstructure:"ignore_if_set"`
- Type HeaderType `mapstructure:"type"`
- Destination string `mapstructure:"dst"`
- Source string `mapstructure:"src"`
- Regex string `mapstructure:"regex"`
- Substitution string `mapstructure:"substitution"`
- Priority uint `mapstructure:"priority"`
- RequestCondition string `mapstructure:"request_condition"`
- CacheCondition string `mapstructure:"cache_condition"`
- ResponseCondition string `mapstructure:"response_condition"`
+ ServiceID string `mapstructure:"service_id"`
+ Version string `mapstructure:"version"`
+
+ Name string `mapstructure:"name"`
+ Action HeaderAction `mapstructure:"action"`
+ IgnoreIfSet bool `mapstructure:"ignore_if_set"`
+ Type HeaderType `mapstructure:"type"`
+ Destination string `mapstructure:"dst"`
+ Source string `mapstructure:"src"`
+ Regex string `mapstructure:"regex"`
+ Substitution string `mapstructure:"substitution"`
+ Priority uint `mapstructure:"priority"`
+ RequestCondition string `mapstructure:"request_condition"`
+ CacheCondition string `mapstructure:"cache_condition"`
+ ResponseCondition string `mapstructure:"response_condition"`
}
// headersByName is a sortable list of headers.
@@ -74,212 +74,212 @@ type headersByName []*Header
func (s headersByName) Len() int { return len(s) }
func (s headersByName) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s headersByName) Less(i, j int) bool {
- return s[i].Name < s[j].Name
+ return s[i].Name < s[j].Name
}
// ListHeadersInput is used as input to the ListHeaders function.
type ListHeadersInput struct {
- // Service is the ID of the service (required).
- Service string
+ // Service is the ID of the service (required).
+ Service string
- // Version is the specific configuration version (required).
- Version string
+ // Version is the specific configuration version (required).
+ Version string
}
// ListHeaders returns the list of headers for the configuration version.
func (c *Client) ListHeaders(i *ListHeadersInput) ([]*Header, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/header", i.Service, i.Version)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var bs []*Header
- if err := decodeJSON(&bs, resp.Body); err != nil {
- return nil, err
- }
- sort.Stable(headersByName(bs))
- return bs, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/header", i.Service, i.Version)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var bs []*Header
+ if err := decodeJSON(&bs, resp.Body); err != nil {
+ return nil, err
+ }
+ sort.Stable(headersByName(bs))
+ return bs, nil
}
// CreateHeaderInput is used as input to the CreateHeader function.
type CreateHeaderInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
-
- Name string `form:"name,omitempty"`
- Action HeaderAction `form:"action,omitempty"`
- IgnoreIfSet bool `form:"ignore_if_set,omitempty"`
- Type HeaderType `form:"type,omitempty"`
- Destination string `form:"dst,omitempty"`
- Source string `form:"src,omitempty"`
- Regex string `form:"regex,omitempty"`
- Substitution string `form:"substitution,omitempty"`
- Priority uint `form:"priority,omitempty"`
- RequestCondition string `form:"request_condition,omitempty"`
- CacheCondition string `form:"cache_condition,omitempty"`
- ResponseCondition string `form:"response_condition,omitempty"`
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ Name string `form:"name,omitempty"`
+ Action HeaderAction `form:"action,omitempty"`
+ IgnoreIfSet bool `form:"ignore_if_set,omitempty"`
+ Type HeaderType `form:"type,omitempty"`
+ Destination string `form:"dst,omitempty"`
+ Source string `form:"src,omitempty"`
+ Regex string `form:"regex,omitempty"`
+ Substitution string `form:"substitution,omitempty"`
+ Priority uint `form:"priority,omitempty"`
+ RequestCondition string `form:"request_condition,omitempty"`
+ CacheCondition string `form:"cache_condition,omitempty"`
+ ResponseCondition string `form:"response_condition,omitempty"`
}
// CreateHeader creates a new Fastly header.
func (c *Client) CreateHeader(i *CreateHeaderInput) (*Header, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/header", i.Service, i.Version)
- resp, err := c.PostForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var b *Header
- if err := decodeJSON(&b, resp.Body); err != nil {
- return nil, err
- }
- return b, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/header", i.Service, i.Version)
+ resp, err := c.PostForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var b *Header
+ if err := decodeJSON(&b, resp.Body); err != nil {
+ return nil, err
+ }
+ return b, nil
}
// GetHeaderInput is used as input to the GetHeader function.
type GetHeaderInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the header to fetch.
- Name string
+ // Name is the name of the header to fetch.
+ Name string
}
// GetHeader gets the header configuration with the given parameters.
func (c *Client) GetHeader(i *GetHeaderInput) (*Header, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/header/%s", i.Service, i.Version, i.Name)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var b *Header
- if err := decodeJSON(&b, resp.Body); err != nil {
- return nil, err
- }
- return b, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/header/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var b *Header
+ if err := decodeJSON(&b, resp.Body); err != nil {
+ return nil, err
+ }
+ return b, nil
}
// UpdateHeaderInput is used as input to the UpdateHeader function.
type UpdateHeaderInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
-
- // Name is the name of the header to update.
- Name string
-
- NewName string `form:"name,omitempty"`
- Action HeaderAction `form:"action,omitempty"`
- IgnoreIfSet bool `form:"ignore_if_set,omitempty"`
- Type HeaderType `form:"type,omitempty"`
- Destination string `form:"dst,omitempty"`
- Source string `form:"src,omitempty"`
- Regex string `form:"regex,omitempty"`
- Substitution string `form:"substitution,omitempty"`
- Priority uint `form:"priority,omitempty"`
- RequestCondition string `form:"request_condition,omitempty"`
- CacheCondition string `form:"cache_condition,omitempty"`
- ResponseCondition string `form:"response_condition,omitempty"`
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ // Name is the name of the header to update.
+ Name string
+
+ NewName string `form:"name,omitempty"`
+ Action HeaderAction `form:"action,omitempty"`
+ IgnoreIfSet bool `form:"ignore_if_set,omitempty"`
+ Type HeaderType `form:"type,omitempty"`
+ Destination string `form:"dst,omitempty"`
+ Source string `form:"src,omitempty"`
+ Regex string `form:"regex,omitempty"`
+ Substitution string `form:"substitution,omitempty"`
+ Priority uint `form:"priority,omitempty"`
+ RequestCondition string `form:"request_condition,omitempty"`
+ CacheCondition string `form:"cache_condition,omitempty"`
+ ResponseCondition string `form:"response_condition,omitempty"`
}
// UpdateHeader updates a specific header.
func (c *Client) UpdateHeader(i *UpdateHeaderInput) (*Header, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/header/%s", i.Service, i.Version, i.Name)
- resp, err := c.PutForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var b *Header
- if err := decodeJSON(&b, resp.Body); err != nil {
- return nil, err
- }
- return b, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/header/%s", i.Service, i.Version, i.Name)
+ resp, err := c.PutForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var b *Header
+ if err := decodeJSON(&b, resp.Body); err != nil {
+ return nil, err
+ }
+ return b, nil
}
// DeleteHeaderInput is the input parameter to DeleteHeader.
type DeleteHeaderInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the header to delete (required).
- Name string
+ // Name is the name of the header to delete (required).
+ Name string
}
// DeleteHeader deletes the given header version.
func (c *Client) DeleteHeader(i *DeleteHeaderInput) error {
- if i.Service == "" {
- return ErrMissingService
- }
-
- if i.Version == "" {
- return ErrMissingVersion
- }
-
- if i.Name == "" {
- return ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/header/%s", i.Service, i.Version, i.Name)
- resp, err := c.Delete(path, nil)
- if err != nil {
- return err
- }
-
- var r *statusResp
- if err := decodeJSON(&r, resp.Body); err != nil {
- return err
- }
- if !r.Ok() {
- return fmt.Errorf("Not Ok")
- }
- return nil
+ if i.Service == "" {
+ return ErrMissingService
+ }
+
+ if i.Version == "" {
+ return ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/header/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Delete(path, nil)
+ if err != nil {
+ return err
+ }
+
+ var r *statusResp
+ if err := decodeJSON(&r, resp.Body); err != nil {
+ return err
+ }
+ if !r.Ok() {
+ return fmt.Errorf("Not Ok")
+ }
+ return nil
}
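The header API exposes the action and type constants directly, so a response-header rule can be created roughly as follows; the header name and value are placeholders.

```go
package main

import (
	"log"

	fastly "github.com/sethvargo/go-fastly"
)

func main() {
	client, err := fastly.NewClient("FASTLY_API_KEY") // constructor assumed
	if err != nil {
		log.Fatal(err)
	}

	// Set a response header using the exported action and type constants.
	h, err := client.CreateHeader(&fastly.CreateHeaderInput{
		Service:     "SERVICE_ID",
		Version:     "1",
		Name:        "hsts-header",
		Action:      fastly.HeaderActionSet,
		Type:        fastly.HeaderTypeResponse,
		Destination: "http.Strict-Transport-Security",
		Source:      `"max-age=31557600"`,
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("header rule %q created", h.Name)
}
```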
diff --git a/vendor/github.com/sethvargo/go-fastly/health_check.go b/vendor/github.com/sethvargo/go-fastly/health_check.go
index c31fa359fbfc..5a091d2c238e 100644
--- a/vendor/github.com/sethvargo/go-fastly/health_check.go
+++ b/vendor/github.com/sethvargo/go-fastly/health_check.go
@@ -1,26 +1,26 @@
package fastly
import (
- "fmt"
- "sort"
+ "fmt"
+ "sort"
)
// HealthCheck represents a health check response from the Fastly API.
type HealthCheck struct {
- ServiceID string `mapstructure:"service_id"`
- Version string `mapstructure:"version"`
-
- Name string `mapstructure:"name"`
- Method string `mapstructure:"method"`
- Host string `mapstructure:"host"`
- Path string `mapstructure:"path"`
- HTTPVersion string `mapstructure:"http_version"`
- Timeout uint `mapstructure:"timeout"`
- CheckInterval uint `mapstructure:"check_interval"`
- ExpectedResponse uint `mapstructure:"expected_response"`
- Window uint `mapstructure:"window"`
- Threshold uint `mapstructure:"threshold"`
- Initial uint `mapstructure:"initial"`
+ ServiceID string `mapstructure:"service_id"`
+ Version string `mapstructure:"version"`
+
+ Name string `mapstructure:"name"`
+ Method string `mapstructure:"method"`
+ Host string `mapstructure:"host"`
+ Path string `mapstructure:"path"`
+ HTTPVersion string `mapstructure:"http_version"`
+ Timeout uint `mapstructure:"timeout"`
+ CheckInterval uint `mapstructure:"check_interval"`
+ ExpectedResponse uint `mapstructure:"expected_response"`
+ Window uint `mapstructure:"window"`
+ Threshold uint `mapstructure:"threshold"`
+ Initial uint `mapstructure:"initial"`
}
// healthChecksByName is a sortable list of health checks.
@@ -30,211 +30,211 @@ type healthChecksByName []*HealthCheck
func (s healthChecksByName) Len() int { return len(s) }
func (s healthChecksByName) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s healthChecksByName) Less(i, j int) bool {
- return s[i].Name < s[j].Name
+ return s[i].Name < s[j].Name
}
// ListHealthChecksInput is used as input to the ListHealthChecks function.
type ListHealthChecksInput struct {
- // Service is the ID of the service (required).
- Service string
+ // Service is the ID of the service (required).
+ Service string
- // Version is the specific configuration version (required).
- Version string
+ // Version is the specific configuration version (required).
+ Version string
}
// ListHealthChecks returns the list of health checks for the configuration
// version.
func (c *Client) ListHealthChecks(i *ListHealthChecksInput) ([]*HealthCheck, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/healthcheck", i.Service, i.Version)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var hcs []*HealthCheck
- if err := decodeJSON(&hcs, resp.Body); err != nil {
- return nil, err
- }
- sort.Stable(healthChecksByName(hcs))
- return hcs, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/healthcheck", i.Service, i.Version)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var hcs []*HealthCheck
+ if err := decodeJSON(&hcs, resp.Body); err != nil {
+ return nil, err
+ }
+ sort.Stable(healthChecksByName(hcs))
+ return hcs, nil
}
// CreateHealthCheckInput is used as input to the CreateHealthCheck function.
type CreateHealthCheckInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
-
- Name string `form:"name,omitempty"`
- Method string `form:"method,omitempty"`
- Host string `form:"host,omitempty"`
- Path string `form:"path,omitempty"`
- HTTPVersion string `form:"http_version,omitempty"`
- Timeout uint `form:"timeout,omitempty"`
- CheckInterval uint `form:"check_interval,omitempty"`
- ExpectedResponse uint `form:"expected_response,omitempty"`
- Window uint `form:"window,omitempty"`
- Threshold uint `form:"threshold,omitempty"`
- Initial uint `form:"initial,omitempty"`
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ Name string `form:"name,omitempty"`
+ Method string `form:"method,omitempty"`
+ Host string `form:"host,omitempty"`
+ Path string `form:"path,omitempty"`
+ HTTPVersion string `form:"http_version,omitempty"`
+ Timeout uint `form:"timeout,omitempty"`
+ CheckInterval uint `form:"check_interval,omitempty"`
+ ExpectedResponse uint `form:"expected_response,omitempty"`
+ Window uint `form:"window,omitempty"`
+ Threshold uint `form:"threshold,omitempty"`
+ Initial uint `form:"initial,omitempty"`
}
// CreateHealthCheck creates a new Fastly health check.
func (c *Client) CreateHealthCheck(i *CreateHealthCheckInput) (*HealthCheck, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/healthcheck", i.Service, i.Version)
- resp, err := c.PostForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var h *HealthCheck
- if err := decodeJSON(&h, resp.Body); err != nil {
- return nil, err
- }
- return h, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/healthcheck", i.Service, i.Version)
+ resp, err := c.PostForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var h *HealthCheck
+ if err := decodeJSON(&h, resp.Body); err != nil {
+ return nil, err
+ }
+ return h, nil
}
// GetHealthCheckInput is used as input to the GetHealthCheck function.
type GetHealthCheckInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the health check to fetch.
- Name string
+ // Name is the name of the health check to fetch.
+ Name string
}
// GetHealthCheck gets the health check configuration with the given parameters.
func (c *Client) GetHealthCheck(i *GetHealthCheckInput) (*HealthCheck, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/healthcheck/%s", i.Service, i.Version, i.Name)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var h *HealthCheck
- if err := decodeJSON(&h, resp.Body); err != nil {
- return nil, err
- }
- return h, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/healthcheck/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var h *HealthCheck
+ if err := decodeJSON(&h, resp.Body); err != nil {
+ return nil, err
+ }
+ return h, nil
}
// UpdateHealthCheckInput is used as input to the UpdateHealthCheck function.
type UpdateHealthCheckInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
-
- // Name is the name of the health check to update.
- Name string
-
- NewName string `form:"name,omitempty"`
- Method string `form:"method,omitempty"`
- Host string `form:"host,omitempty"`
- Path string `form:"path,omitempty"`
- HTTPVersion string `form:"http_version,omitempty"`
- Timeout uint `form:"timeout,omitempty"`
- CheckInterval uint `form:"check_interval,omitempty"`
- ExpectedResponse uint `form:"expected_response,omitempty"`
- Window uint `form:"window,omitempty"`
- Threshold uint `form:"threshold,omitempty"`
- Initial uint `form:"initial,omitempty"`
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ // Name is the name of the health check to update.
+ Name string
+
+ NewName string `form:"name,omitempty"`
+ Method string `form:"method,omitempty"`
+ Host string `form:"host,omitempty"`
+ Path string `form:"path,omitempty"`
+ HTTPVersion string `form:"http_version,omitempty"`
+ Timeout uint `form:"timeout,omitempty"`
+ CheckInterval uint `form:"check_interval,omitempty"`
+ ExpectedResponse uint `form:"expected_response,omitempty"`
+ Window uint `form:"window,omitempty"`
+ Threshold uint `form:"threshold,omitempty"`
+ Initial uint `form:"initial,omitempty"`
}
// UpdateHealthCheck updates a specific health check.
func (c *Client) UpdateHealthCheck(i *UpdateHealthCheckInput) (*HealthCheck, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/healthcheck/%s", i.Service, i.Version, i.Name)
- resp, err := c.PutForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var h *HealthCheck
- if err := decodeJSON(&h, resp.Body); err != nil {
- return nil, err
- }
- return h, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/healthcheck/%s", i.Service, i.Version, i.Name)
+ resp, err := c.PutForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var h *HealthCheck
+ if err := decodeJSON(&h, resp.Body); err != nil {
+ return nil, err
+ }
+ return h, nil
}
// DeleteHealthCheckInput is the input parameter to DeleteHealthCheck.
type DeleteHealthCheckInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the health check to delete (required).
- Name string
+ // Name is the name of the health check to delete (required).
+ Name string
}
// DeleteHealthCheck deletes the given health check.
func (c *Client) DeleteHealthCheck(i *DeleteHealthCheckInput) error {
- if i.Service == "" {
- return ErrMissingService
- }
-
- if i.Version == "" {
- return ErrMissingVersion
- }
-
- if i.Name == "" {
- return ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/healthcheck/%s", i.Service, i.Version, i.Name)
- resp, err := c.Delete(path, nil)
- if err != nil {
- return err
- }
-
- var r *statusResp
- if err := decodeJSON(&r, resp.Body); err != nil {
- return err
- }
- if !r.Ok() {
- return fmt.Errorf("Not Ok")
- }
- return nil
+ if i.Service == "" {
+ return ErrMissingService
+ }
+
+ if i.Version == "" {
+ return ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/healthcheck/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Delete(path, nil)
+ if err != nil {
+ return err
+ }
+
+ var r *statusResp
+ if err := decodeJSON(&r, resp.Body); err != nil {
+ return err
+ }
+ if !r.Ok() {
+ return fmt.Errorf("Not Ok")
+ }
+ return nil
}
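A hedged sketch of creating a health check against an origin; the method, interval, timeout, window, and threshold values are placeholders rather than recommended settings.

```go
package main

import (
	"log"

	fastly "github.com/sethvargo/go-fastly"
)

func main() {
	client, err := fastly.NewClient("FASTLY_API_KEY") // constructor assumed
	if err != nil {
		log.Fatal(err)
	}

	// Numeric values below are illustrative only.
	hc, err := client.CreateHealthCheck(&fastly.CreateHealthCheckInput{
		Service:       "SERVICE_ID",
		Version:       "1",
		Name:          "origin-health",
		Method:        "HEAD",
		Host:          "origin.example.com",
		Path:          "/healthz",
		CheckInterval: 5000,
		Timeout:       2000,
		Window:        5,
		Threshold:     3,
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("health check %q created", hc.Name)
}
```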
diff --git a/vendor/github.com/sethvargo/go-fastly/ip.go b/vendor/github.com/sethvargo/go-fastly/ip.go
index bd7789398418..d7d362c7a357 100644
--- a/vendor/github.com/sethvargo/go-fastly/ip.go
+++ b/vendor/github.com/sethvargo/go-fastly/ip.go
@@ -5,14 +5,14 @@ type IPAddrs []string
// IPs returns the list of public IP addresses for Fastly's network.
func (c *Client) IPs() (IPAddrs, error) {
- resp, err := c.Get("/public-ip-list", nil)
- if err != nil {
- return nil, err
- }
+ resp, err := c.Get("/public-ip-list", nil)
+ if err != nil {
+ return nil, err
+ }
- var m map[string][]string
- if err := decodeJSON(&m, resp.Body); err != nil {
- return nil, err
- }
- return IPAddrs(m["addresses"]), nil
+ var m map[string][]string
+ if err := decodeJSON(&m, resp.Body); err != nil {
+ return nil, err
+ }
+ return IPAddrs(m["addresses"]), nil
}
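Since `IPs` takes no input struct, listing Fastly's public address ranges is a one-call sketch, with client construction assumed as before.

```go
package main

import (
	"fmt"
	"log"

	fastly "github.com/sethvargo/go-fastly"
)

func main() {
	client, err := fastly.NewClient("FASTLY_API_KEY") // constructor assumed
	if err != nil {
		log.Fatal(err)
	}

	// IPs returns the "addresses" list from /public-ip-list as a []string.
	addrs, err := client.IPs()
	if err != nil {
		log.Fatal(err)
	}
	for _, addr := range addrs {
		fmt.Println(addr)
	}
}
```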
diff --git a/vendor/github.com/sethvargo/go-fastly/logentries.go b/vendor/github.com/sethvargo/go-fastly/logentries.go
index 771f6a299ca8..99688b657fd4 100644
--- a/vendor/github.com/sethvargo/go-fastly/logentries.go
+++ b/vendor/github.com/sethvargo/go-fastly/logentries.go
@@ -1,25 +1,25 @@
package fastly
import (
- "fmt"
- "sort"
- "time"
+ "fmt"
+ "sort"
+ "time"
)
// Logentries represents a logentries response from the Fastly API.
type Logentries struct {
- ServiceID string `mapstructure:"service_id"`
- Version string `mapstructure:"version"`
-
- Name string `mapstructure:"name"`
- Port uint `mapstructure:"port"`
- UseTLS bool `mapstructure:"use_tls"`
- Token string `mapstructure:"token"`
- Format string `mapstructure:"format"`
- ResponseCondition string `mapstructure:"response_condition"`
- CreatedAt *time.Time `mapstructure:"created_at"`
- UpdatedAt *time.Time `mapstructure:"updated_at"`
- DeletedAt *time.Time `mapstructure:"deleted_at"`
+ ServiceID string `mapstructure:"service_id"`
+ Version string `mapstructure:"version"`
+
+ Name string `mapstructure:"name"`
+ Port uint `mapstructure:"port"`
+ UseTLS bool `mapstructure:"use_tls"`
+ Token string `mapstructure:"token"`
+ Format string `mapstructure:"format"`
+ ResponseCondition string `mapstructure:"response_condition"`
+ CreatedAt *time.Time `mapstructure:"created_at"`
+ UpdatedAt *time.Time `mapstructure:"updated_at"`
+ DeletedAt *time.Time `mapstructure:"deleted_at"`
}
// logentriesByName is a sortable list of logentries.
@@ -29,200 +29,200 @@ type logentriesByName []*Logentries
func (s logentriesByName) Len() int { return len(s) }
func (s logentriesByName) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s logentriesByName) Less(i, j int) bool {
- return s[i].Name < s[j].Name
+ return s[i].Name < s[j].Name
}
// ListLogentriesInput is used as input to the ListLogentries function.
type ListLogentriesInput struct {
- // Service is the ID of the service (required).
- Service string
+ // Service is the ID of the service (required).
+ Service string
- // Version is the specific configuration version (required).
- Version string
+ // Version is the specific configuration version (required).
+ Version string
}
// ListLogentries returns the list of logentries for the configuration version.
func (c *Client) ListLogentries(i *ListLogentriesInput) ([]*Logentries, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/logging/logentries", i.Service, i.Version)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var ls []*Logentries
- if err := decodeJSON(&ls, resp.Body); err != nil {
- return nil, err
- }
- sort.Stable(logentriesByName(ls))
- return ls, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/logentries", i.Service, i.Version)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var ls []*Logentries
+ if err := decodeJSON(&ls, resp.Body); err != nil {
+ return nil, err
+ }
+ sort.Stable(logentriesByName(ls))
+ return ls, nil
}
// CreateLogentriesInput is used as input to the CreateLogentries function.
type CreateLogentriesInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
-
- Name string `form:"name,omitempty"`
- Port uint `form:"port,omitempty"`
- UseTLS Compatibool `form:"use_tls,omitempty"`
- Token string `form:"token,omitempty"`
- Format string `form:"format,omitempty"`
- ResponseCondition string `form:"response_condition,omitempty"`
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ Name string `form:"name,omitempty"`
+ Port uint `form:"port,omitempty"`
+ UseTLS Compatibool `form:"use_tls,omitempty"`
+ Token string `form:"token,omitempty"`
+ Format string `form:"format,omitempty"`
+ ResponseCondition string `form:"response_condition,omitempty"`
}
// CreateLogentries creates a new Fastly logentries.
func (c *Client) CreateLogentries(i *CreateLogentriesInput) (*Logentries, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/logging/logentries", i.Service, i.Version)
- resp, err := c.PostForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var l *Logentries
- if err := decodeJSON(&l, resp.Body); err != nil {
- return nil, err
- }
- return l, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/logentries", i.Service, i.Version)
+ resp, err := c.PostForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var l *Logentries
+ if err := decodeJSON(&l, resp.Body); err != nil {
+ return nil, err
+ }
+ return l, nil
}
// GetLogentriesInput is used as input to the GetLogentries function.
type GetLogentriesInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the logentries to fetch.
- Name string
+ // Name is the name of the logentries to fetch.
+ Name string
}
// GetLogentries gets the logentries configuration with the given parameters.
func (c *Client) GetLogentries(i *GetLogentriesInput) (*Logentries, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/logging/logentries/%s", i.Service, i.Version, i.Name)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var l *Logentries
- if err := decodeJSON(&l, resp.Body); err != nil {
- return nil, err
- }
- return l, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/logentries/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var l *Logentries
+ if err := decodeJSON(&l, resp.Body); err != nil {
+ return nil, err
+ }
+ return l, nil
}
// UpdateLogentriesInput is used as input to the UpdateLogentries function.
type UpdateLogentriesInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
-
- // Name is the name of the logentries to update.
- Name string
-
- NewName string `form:"name,omitempty"`
- Port uint `form:"port,omitempty"`
- UseTLS Compatibool `form:"use_tls,omitempty"`
- Token string `form:"token,omitempty"`
- Format string `form:"format,omitempty"`
- ResponseCondition string `form:"response_condition,omitempty"`
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ // Name is the name of the logentries to update.
+ Name string
+
+ NewName string `form:"name,omitempty"`
+ Port uint `form:"port,omitempty"`
+ UseTLS Compatibool `form:"use_tls,omitempty"`
+ Token string `form:"token,omitempty"`
+ Format string `form:"format,omitempty"`
+ ResponseCondition string `form:"response_condition,omitempty"`
}
// UpdateLogentries updates a specific logentries.
func (c *Client) UpdateLogentries(i *UpdateLogentriesInput) (*Logentries, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/logging/logentries/%s", i.Service, i.Version, i.Name)
- resp, err := c.PutForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var l *Logentries
- if err := decodeJSON(&l, resp.Body); err != nil {
- return nil, err
- }
- return l, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/logentries/%s", i.Service, i.Version, i.Name)
+ resp, err := c.PutForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var l *Logentries
+ if err := decodeJSON(&l, resp.Body); err != nil {
+ return nil, err
+ }
+ return l, nil
}
// DeleteLogentriesInput is the input parameter to DeleteLogentries.
type DeleteLogentriesInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the logentries to delete (required).
- Name string
+ // Name is the name of the logentries to delete (required).
+ Name string
}
// DeleteLogentries deletes the given logentries version.
func (c *Client) DeleteLogentries(i *DeleteLogentriesInput) error {
- if i.Service == "" {
- return ErrMissingService
- }
-
- if i.Version == "" {
- return ErrMissingVersion
- }
-
- if i.Name == "" {
- return ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/logging/logentries/%s", i.Service, i.Version, i.Name)
- resp, err := c.Delete(path, nil)
- if err != nil {
- return err
- }
-
- var r *statusResp
- if err := decodeJSON(&r, resp.Body); err != nil {
- return err
- }
- if !r.Ok() {
- return fmt.Errorf("Not Ok")
- }
- return nil
+ if i.Service == "" {
+ return ErrMissingService
+ }
+
+ if i.Version == "" {
+ return ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/logentries/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Delete(path, nil)
+ if err != nil {
+ return err
+ }
+
+ var r *statusResp
+ if err := decodeJSON(&r, resp.Body); err != nil {
+ return err
+ }
+ if !r.Ok() {
+ return fmt.Errorf("Not Ok")
+ }
+ return nil
}
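Finally, a sketch of the logentries endpoint, which uses the `Compatibool` form type for `UseTLS`. The conversion below assumes `Compatibool` is a bool-based type, and the token, service, and version are placeholders.

```go
package main

import (
	"log"

	fastly "github.com/sethvargo/go-fastly"
)

func main() {
	client, err := fastly.NewClient("FASTLY_API_KEY") // constructor assumed
	if err != nil {
		log.Fatal(err)
	}

	// Compatibool is assumed here to convert from a Go bool.
	le, err := client.CreateLogentries(&fastly.CreateLogentriesInput{
		Service: "SERVICE_ID",
		Version: "1",
		Name:    "logentries",
		Port:    20000,
		UseTLS:  fastly.Compatibool(true),
		Token:   "LOGENTRIES_TOKEN",
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("logentries endpoint %q created", le.Name)

	// Cleanup by name, mirroring the Delete pattern shared by all endpoints.
	if err := client.DeleteLogentries(&fastly.DeleteLogentriesInput{
		Service: "SERVICE_ID",
		Version: "1",
		Name:    le.Name,
	}); err != nil {
		log.Fatal(err)
	}
}
```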
diff --git a/vendor/github.com/sethvargo/go-fastly/papertrail.go b/vendor/github.com/sethvargo/go-fastly/papertrail.go
index ec34c81f95d1..294c5b1c1f3a 100644
--- a/vendor/github.com/sethvargo/go-fastly/papertrail.go
+++ b/vendor/github.com/sethvargo/go-fastly/papertrail.go
@@ -1,24 +1,24 @@
package fastly
import (
- "fmt"
- "sort"
- "time"
+ "fmt"
+ "sort"
+ "time"
)
// Papertrail represents a papertrail response from the Fastly API.
type Papertrail struct {
- ServiceID string `mapstructure:"service_id"`
- Version string `mapstructure:"version"`
-
- Name string `mapstructure:"name"`
- Address string `mapstructure:"address"`
- Port uint `mapstructure:"port"`
- Format string `mapstructure:"format"`
- ResponseCondition string `mapstructure:"response_condition"`
- CreatedAt *time.Time `mapstructure:"created_at"`
- UpdatedAt *time.Time `mapstructure:"updated_at"`
- DeletedAt *time.Time `mapstructure:"deleted_at"`
+ ServiceID string `mapstructure:"service_id"`
+ Version string `mapstructure:"version"`
+
+ Name string `mapstructure:"name"`
+ Address string `mapstructure:"address"`
+ Port uint `mapstructure:"port"`
+ Format string `mapstructure:"format"`
+ ResponseCondition string `mapstructure:"response_condition"`
+ CreatedAt *time.Time `mapstructure:"created_at"`
+ UpdatedAt *time.Time `mapstructure:"updated_at"`
+ DeletedAt *time.Time `mapstructure:"deleted_at"`
}
// papertrailsByName is a sortable list of papertrails.
@@ -28,204 +28,204 @@ type papertrailsByName []*Papertrail
func (s papertrailsByName) Len() int { return len(s) }
func (s papertrailsByName) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s papertrailsByName) Less(i, j int) bool {
- return s[i].Name < s[j].Name
+ return s[i].Name < s[j].Name
}
// ListPapertrailsInput is used as input to the ListPapertrails function.
type ListPapertrailsInput struct {
- // Service is the ID of the service (required).
- Service string
+ // Service is the ID of the service (required).
+ Service string
- // Version is the specific configuration version (required).
- Version string
+ // Version is the specific configuration version (required).
+ Version string
}
// ListPapertrails returns the list of papertrails for the configuration version.
func (c *Client) ListPapertrails(i *ListPapertrailsInput) ([]*Papertrail, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/logging/papertrail", i.Service, i.Version)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var ps []*Papertrail
- if err := decodeJSON(&ps, resp.Body); err != nil {
- return nil, err
- }
- sort.Stable(papertrailsByName(ps))
- return ps, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/papertrail", i.Service, i.Version)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var ps []*Papertrail
+ if err := decodeJSON(&ps, resp.Body); err != nil {
+ return nil, err
+ }
+ sort.Stable(papertrailsByName(ps))
+ return ps, nil
}
// CreatePapertrailInput is used as input to the CreatePapertrail function.
type CreatePapertrailInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
-
- Name string `form:"name,omitempty"`
- Address string `form:"address,omitempty"`
- Port uint `form:"port,omitempty"`
- Format string `form:"format,omitempty"`
- ResponseCondition string `form:"response_condition,omitempty"`
- CreatedAt *time.Time `form:"created_at,omitempty"`
- UpdatedAt *time.Time `form:"updated_at,omitempty"`
- DeletedAt *time.Time `form:"deleted_at,omitempty"`
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ Name string `form:"name,omitempty"`
+ Address string `form:"address,omitempty"`
+ Port uint `form:"port,omitempty"`
+ Format string `form:"format,omitempty"`
+ ResponseCondition string `form:"response_condition,omitempty"`
+ CreatedAt *time.Time `form:"created_at,omitempty"`
+ UpdatedAt *time.Time `form:"updated_at,omitempty"`
+ DeletedAt *time.Time `form:"deleted_at,omitempty"`
}
// CreatePapertrail creates a new Fastly papertrail.
func (c *Client) CreatePapertrail(i *CreatePapertrailInput) (*Papertrail, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/logging/papertrail", i.Service, i.Version)
- resp, err := c.PostForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var p *Papertrail
- if err := decodeJSON(&p, resp.Body); err != nil {
- return nil, err
- }
- return p, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/papertrail", i.Service, i.Version)
+ resp, err := c.PostForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var p *Papertrail
+ if err := decodeJSON(&p, resp.Body); err != nil {
+ return nil, err
+ }
+ return p, nil
}
// GetPapertrailInput is used as input to the GetPapertrail function.
type GetPapertrailInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the papertrail to fetch.
- Name string
+ // Name is the name of the papertrail to fetch.
+ Name string
}
// GetPapertrail gets the papertrail configuration with the given parameters.
func (c *Client) GetPapertrail(i *GetPapertrailInput) (*Papertrail, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/logging/papertrail/%s", i.Service, i.Version, i.Name)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var p *Papertrail
- if err := decodeJSON(&p, resp.Body); err != nil {
- return nil, err
- }
- return p, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/papertrail/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var p *Papertrail
+ if err := decodeJSON(&p, resp.Body); err != nil {
+ return nil, err
+ }
+ return p, nil
}
// UpdatePapertrailInput is used as input to the UpdatePapertrail function.
type UpdatePapertrailInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
-
- // Name is the name of the papertrail to update.
- Name string
-
- NewName string `form:"name,omitempty"`
- Address string `form:"address,omitempty"`
- Port uint `form:"port,omitempty"`
- Format string `form:"format,omitempty"`
- ResponseCondition string `form:"response_condition,omitempty"`
- CreatedAt *time.Time `form:"created_at,omitempty"`
- UpdatedAt *time.Time `form:"updated_at,omitempty"`
- DeletedAt *time.Time `form:"deleted_at,omitempty"`
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ // Name is the name of the papertrail to update.
+ Name string
+
+ NewName string `form:"name,omitempty"`
+ Address string `form:"address,omitempty"`
+ Port uint `form:"port,omitempty"`
+ Format string `form:"format,omitempty"`
+ ResponseCondition string `form:"response_condition,omitempty"`
+ CreatedAt *time.Time `form:"created_at,omitempty"`
+ UpdatedAt *time.Time `form:"updated_at,omitempty"`
+ DeletedAt *time.Time `form:"deleted_at,omitempty"`
}
// UpdatePapertrail updates a specific papertrail.
func (c *Client) UpdatePapertrail(i *UpdatePapertrailInput) (*Papertrail, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/logging/papertrail/%s", i.Service, i.Version, i.Name)
- resp, err := c.PutForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var p *Papertrail
- if err := decodeJSON(&p, resp.Body); err != nil {
- return nil, err
- }
- return p, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/papertrail/%s", i.Service, i.Version, i.Name)
+ resp, err := c.PutForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var p *Papertrail
+ if err := decodeJSON(&p, resp.Body); err != nil {
+ return nil, err
+ }
+ return p, nil
}
// DeletePapertrailInput is the input parameter to DeletePapertrail.
type DeletePapertrailInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the papertrail to delete (required).
- Name string
+ // Name is the name of the papertrail to delete (required).
+ Name string
}
// DeletePapertrail deletes the given papertrail version.
func (c *Client) DeletePapertrail(i *DeletePapertrailInput) error {
- if i.Service == "" {
- return ErrMissingService
- }
-
- if i.Version == "" {
- return ErrMissingVersion
- }
-
- if i.Name == "" {
- return ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/logging/papertrail/%s", i.Service, i.Version, i.Name)
- resp, err := c.Delete(path, nil)
- if err != nil {
- return err
- }
-
- var r *statusResp
- if err := decodeJSON(&r, resp.Body); err != nil {
- return err
- }
- if !r.Ok() {
- return fmt.Errorf("Not Ok")
- }
- return nil
+ if i.Service == "" {
+ return ErrMissingService
+ }
+
+ if i.Version == "" {
+ return ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/papertrail/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Delete(path, nil)
+ if err != nil {
+ return err
+ }
+
+ var r *statusResp
+ if err := decodeJSON(&r, resp.Body); err != nil {
+ return err
+ }
+ if !r.Ok() {
+ return fmt.Errorf("Not Ok")
+ }
+ return nil
}
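A short usage sketch of the papertrail API above, not part of the vendored change: it assumes a client from fastly.NewClient and uses a placeholder service ID, version, log destination, and format string.

```go
package main

import (
	"fmt"
	"log"

	"github.com/sethvargo/go-fastly"
)

func main() {
	client, err := fastly.NewClient("FASTLY_API_KEY") // placeholder key
	if err != nil {
		log.Fatal(err)
	}

	// Create a Papertrail logging endpoint on a draft configuration version.
	_, err = client.CreatePapertrail(&fastly.CreatePapertrailInput{
		Service: "SU1Z0isxPaozGVKXdv0eY", // placeholder service ID
		Version: "2",
		Name:    "example-papertrail",
		Address: "logs.papertrailapp.com", // placeholder destination
		Port:    12345,
		Format:  "%h %l %u %t %r %>s", // placeholder log format
	})
	if err != nil {
		log.Fatal(err)
	}

	// List the endpoints; ListPapertrails returns them sorted by name.
	ps, err := client.ListPapertrails(&fastly.ListPapertrailsInput{
		Service: "SU1Z0isxPaozGVKXdv0eY",
		Version: "2",
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range ps {
		fmt.Println(p.Name, p.Address, p.Port)
	}
}
```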
diff --git a/vendor/github.com/sethvargo/go-fastly/purge.go b/vendor/github.com/sethvargo/go-fastly/purge.go
index 83172f7f8ddf..8109ce8dd1d4 100644
--- a/vendor/github.com/sethvargo/go-fastly/purge.go
+++ b/vendor/github.com/sethvargo/go-fastly/purge.go
@@ -4,127 +4,127 @@ import "fmt"
// Purge is a response from a purge request.
type Purge struct {
- // Status is the status of the purge, usually "ok".
- Status string `mapstructure:"status"`
+ // Status is the status of the purge, usually "ok".
+ Status string `mapstructure:"status"`
- // ID is the unique ID of the purge request.
- ID string `mapstructure:"id"`
+ // ID is the unique ID of the purge request.
+ ID string `mapstructure:"id"`
}
// PurgeInput is used as input to the Purge function.
type PurgeInput struct {
- // URL is the URL to purge (required).
- URL string
+ // URL is the URL to purge (required).
+ URL string
- // Soft performs a soft purge.
- Soft bool
+ // Soft performs a soft purge.
+ Soft bool
}
// Purge instantly purges an individual URL.
func (c *Client) Purge(i *PurgeInput) (*Purge, error) {
- if i.URL == "" {
- return nil, ErrMissingURL
- }
-
- req, err := c.RawRequest("PURGE", i.URL, nil)
- if err != nil {
- return nil, err
- }
-
- if i.Soft {
- req.Header.Set("Fastly-Soft-Purge", "1")
- }
-
- resp, err := checkResp(c.HTTPClient.Do(req))
- if err != nil {
- return nil, err
- }
-
- var r *Purge
- if err := decodeJSON(&r, resp.Body); err != nil {
- return nil, err
- }
- return r, nil
+ if i.URL == "" {
+ return nil, ErrMissingURL
+ }
+
+ req, err := c.RawRequest("PURGE", i.URL, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ if i.Soft {
+ req.Header.Set("Fastly-Soft-Purge", "1")
+ }
+
+ resp, err := checkResp(c.HTTPClient.Do(req))
+ if err != nil {
+ return nil, err
+ }
+
+ var r *Purge
+ if err := decodeJSON(&r, resp.Body); err != nil {
+ return nil, err
+ }
+ return r, nil
}
// PurgeKeyInput is used as input to the Purge function.
type PurgeKeyInput struct {
- // Service is the ID of the service (required).
- Service string
+ // Service is the ID of the service (required).
+ Service string
- // Key is the key to purge (required).
- Key string
+ // Key is the key to purge (required).
+ Key string
- // Soft performs a soft purge.
- Soft bool
+ // Soft performs a soft purge.
+ Soft bool
}
// PurgeKey instantly purges a particular service of items tagged with a key.
func (c *Client) PurgeKey(i *PurgeKeyInput) (*Purge, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Key == "" {
- return nil, ErrMissingKey
- }
-
- path := fmt.Sprintf("/service/%s/purge/%s", i.Service, i.Key)
- req, err := c.RawRequest("POST", path, nil)
- if err != nil {
- return nil, err
- }
-
- if i.Soft {
- req.Header.Set("Fastly-Soft-Purge", "1")
- }
-
- resp, err := checkResp(c.HTTPClient.Do(req))
- if err != nil {
- return nil, err
- }
-
- var r *Purge
- if err := decodeJSON(&r, resp.Body); err != nil {
- return nil, err
- }
- return r, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Key == "" {
+ return nil, ErrMissingKey
+ }
+
+ path := fmt.Sprintf("/service/%s/purge/%s", i.Service, i.Key)
+ req, err := c.RawRequest("POST", path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ if i.Soft {
+ req.Header.Set("Fastly-Soft-Purge", "1")
+ }
+
+ resp, err := checkResp(c.HTTPClient.Do(req))
+ if err != nil {
+ return nil, err
+ }
+
+ var r *Purge
+ if err := decodeJSON(&r, resp.Body); err != nil {
+ return nil, err
+ }
+ return r, nil
}
// PurgeAllInput is used as input to the Purge function.
type PurgeAllInput struct {
- // Service is the ID of the service (required).
- Service string
+ // Service is the ID of the service (required).
+ Service string
- // Soft performs a soft purge.
- Soft bool
+ // Soft performs a soft purge.
+ Soft bool
}
// PurgeAll instantly purges everything from a service.
func (c *Client) PurgeAll(i *PurgeAllInput) (*Purge, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- path := fmt.Sprintf("/service/%s/purge_all", i.Service)
- req, err := c.RawRequest("POST", path, nil)
- if err != nil {
- return nil, err
- }
-
- if i.Soft {
- req.Header.Set("Fastly-Soft-Purge", "1")
- }
-
- resp, err := checkResp(c.HTTPClient.Do(req))
- if err != nil {
- return nil, err
- }
-
- var r *Purge
- if err := decodeJSON(&r, resp.Body); err != nil {
- return nil, err
- }
- return r, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ path := fmt.Sprintf("/service/%s/purge_all", i.Service)
+ req, err := c.RawRequest("POST", path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ if i.Soft {
+ req.Header.Set("Fastly-Soft-Purge", "1")
+ }
+
+ resp, err := checkResp(c.HTTPClient.Do(req))
+ if err != nil {
+ return nil, err
+ }
+
+ var r *Purge
+ if err := decodeJSON(&r, resp.Body); err != nil {
+ return nil, err
+ }
+ return r, nil
}
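A minimal sketch of the purge API above, not part of the vendored change; the API key, URL, service ID, and surrogate key are placeholders.

```go
package main

import (
	"fmt"
	"log"

	"github.com/sethvargo/go-fastly"
)

func main() {
	client, err := fastly.NewClient("FASTLY_API_KEY") // placeholder key
	if err != nil {
		log.Fatal(err)
	}

	// Soft-purge a single URL; Purge issues a PURGE request directly against it
	// and sets the Fastly-Soft-Purge header because Soft is true.
	p, err := client.Purge(&fastly.PurgeInput{
		URL:  "http://www.example.com/image.jpg", // placeholder URL
		Soft: true,
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("purge status:", p.Status, "id:", p.ID)

	// Purge everything on a service tagged with a surrogate key.
	if _, err := client.PurgeKey(&fastly.PurgeKeyInput{
		Service: "SU1Z0isxPaozGVKXdv0eY", // placeholder service ID
		Key:     "homepage",              // placeholder surrogate key
	}); err != nil {
		log.Fatal(err)
	}
}
```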
diff --git a/vendor/github.com/sethvargo/go-fastly/request.go b/vendor/github.com/sethvargo/go-fastly/request.go
index 2337e8267580..3e1de3a3ba32 100644
--- a/vendor/github.com/sethvargo/go-fastly/request.go
+++ b/vendor/github.com/sethvargo/go-fastly/request.go
@@ -1,68 +1,68 @@
package fastly
import (
- "io"
- "net/http"
- "net/url"
- "path"
+ "io"
+ "net/http"
+ "net/url"
+ "path"
)
// RequestOptions is the list of options to pass to the request.
type RequestOptions struct {
- // Params is a map of key-value pairs that will be added to the Request.
- Params map[string]string
+ // Params is a map of key-value pairs that will be added to the Request.
+ Params map[string]string
- // Headers is a map of key-value pairs that will be added to the Request.
- Headers map[string]string
+ // Headers is a map of key-value pairs that will be added to the Request.
+ Headers map[string]string
- // Body is an io.Reader object that will be streamed or uploaded with the
- // Request. BodyLength is the final size of the Body.
- Body io.Reader
- BodyLength int64
+ // Body is an io.Reader object that will be streamed or uploaded with the
+ // Request. BodyLength is the final size of the Body.
+ Body io.Reader
+ BodyLength int64
}
// RawRequest accepts a verb, URL, and RequestOptions struct and returns the
// constructed http.Request and any errors that occurred
func (c *Client) RawRequest(verb, p string, ro *RequestOptions) (*http.Request, error) {
- // Ensure we have request options.
- if ro == nil {
- ro = new(RequestOptions)
- }
+ // Ensure we have request options.
+ if ro == nil {
+ ro = new(RequestOptions)
+ }
- // Append the path to the URL.
- u := *c.url
- u.Path = path.Join(c.url.Path, p)
+ // Append the path to the URL.
+ u := *c.url
+ u.Path = path.Join(c.url.Path, p)
- // Add the token and other params.
- var params = make(url.Values)
- for k, v := range ro.Params {
- params.Add(k, v)
- }
- u.RawQuery = params.Encode()
+ // Add the token and other params.
+ var params = make(url.Values)
+ for k, v := range ro.Params {
+ params.Add(k, v)
+ }
+ u.RawQuery = params.Encode()
- // Create the request object.
- request, err := http.NewRequest(verb, u.String(), ro.Body)
- if err != nil {
- return nil, err
- }
+ // Create the request object.
+ request, err := http.NewRequest(verb, u.String(), ro.Body)
+ if err != nil {
+ return nil, err
+ }
- // Set the API key.
- if len(c.apiKey) > 0 {
- request.Header.Set(APIKeyHeader, c.apiKey)
- }
+ // Set the API key.
+ if len(c.apiKey) > 0 {
+ request.Header.Set(APIKeyHeader, c.apiKey)
+ }
- // Set the User-Agent.
- request.Header.Set("User-Agent", UserAgent)
+ // Set the User-Agent.
+ request.Header.Set("User-Agent", UserAgent)
- // Add any custom headers.
- for k, v := range ro.Headers {
- request.Header.Add(k, v)
- }
+ // Add any custom headers.
+ for k, v := range ro.Headers {
+ request.Header.Add(k, v)
+ }
- // Add Content-Length if we have it.
- if ro.BodyLength > 0 {
- request.ContentLength = ro.BodyLength
- }
+ // Add Content-Length if we have it.
+ if ro.BodyLength > 0 {
+ request.ContentLength = ro.BodyLength
+ }
- return request, nil
+ return request, nil
}
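A sketch of using RawRequest for an endpoint the typed client does not wrap, not part of the vendored change; the path and query parameter are illustrative placeholders, and the response is executed with the exported HTTPClient field.

```go
package main

import (
	"fmt"
	"io/ioutil"
	"log"

	"github.com/sethvargo/go-fastly"
)

func main() {
	client, err := fastly.NewClient("FASTLY_API_KEY") // placeholder key
	if err != nil {
		log.Fatal(err)
	}

	// RawRequest joins the path onto the API base URL, encodes the query
	// parameters, sets the API key and User-Agent headers, and adds any
	// custom headers supplied in RequestOptions.
	req, err := client.RawRequest("GET", "/stats", &fastly.RequestOptions{
		Params:  map[string]string{"from": "1 day ago"}, // placeholder query
		Headers: map[string]string{"Accept": "application/json"},
	})
	if err != nil {
		log.Fatal(err)
	}

	resp, err := client.HTTPClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.StatusCode, len(body), "bytes")
}
```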
diff --git a/vendor/github.com/sethvargo/go-fastly/request_setting.go b/vendor/github.com/sethvargo/go-fastly/request_setting.go
index 409286b72c55..b952c4aff723 100644
--- a/vendor/github.com/sethvargo/go-fastly/request_setting.go
+++ b/vendor/github.com/sethvargo/go-fastly/request_setting.go
@@ -1,37 +1,37 @@
package fastly
import (
- "fmt"
- "sort"
+ "fmt"
+ "sort"
)
const (
- // RequestSettingActionLookup sets request handling to lookup via the cache.
- RequestSettingActionLookup RequestSettingAction = "lookup"
+ // RequestSettingActionLookup sets request handling to lookup via the cache.
+ RequestSettingActionLookup RequestSettingAction = "lookup"
- // RequestSettingActionPass sets request handling to pass the cache.
- RequestSettingActionPass RequestSettingAction = "pass"
+ // RequestSettingActionPass sets request handling to pass the cache.
+ RequestSettingActionPass RequestSettingAction = "pass"
)
// RequestSettingAction is a type of request setting action.
type RequestSettingAction string
const (
- // RequestSettingXFFClear clears any X-Forwarded-For headers.
- RequestSettingXFFClear RequestSettingXFF = "clear"
+ // RequestSettingXFFClear clears any X-Forwarded-For headers.
+ RequestSettingXFFClear RequestSettingXFF = "clear"
- // RequestSettingXFFLeave leaves any X-Forwarded-For headers untouched.
- RequestSettingXFFLeave RequestSettingXFF = "leave"
+ // RequestSettingXFFLeave leaves any X-Forwarded-For headers untouched.
+ RequestSettingXFFLeave RequestSettingXFF = "leave"
- // RequestSettingXFFAppend adds Fastly X-Forwarded-For headers.
- RequestSettingXFFAppend RequestSettingXFF = "append"
+ // RequestSettingXFFAppend adds Fastly X-Forwarded-For headers.
+ RequestSettingXFFAppend RequestSettingXFF = "append"
- // RequestSettingXFFAppendAll appends all Fastly X-Forwarded-For headers.
- RequestSettingXFFAppendAll RequestSettingXFF = "append_all"
+ // RequestSettingXFFAppendAll appends all Fastly X-Forwarded-For headers.
+ RequestSettingXFFAppendAll RequestSettingXFF = "append_all"
- // RequestSettingXFFOverwrite clears any X-Forwarded-For headers and replaces
- // with Fastly ones.
- RequestSettingXFFOverwrite RequestSettingXFF = "overwrite"
+ // RequestSettingXFFOverwrite clears any X-Forwarded-For headers and replaces
+ // with Fastly ones.
+ RequestSettingXFFOverwrite RequestSettingXFF = "overwrite"
)
// RequestSettingXFF is a type of X-Forwarded-For value to set.
@@ -39,21 +39,21 @@ type RequestSettingXFF string
// RequestSetting represents a request setting response from the Fastly API.
type RequestSetting struct {
- ServiceID string `mapstructure:"service_id"`
- Version string `mapstructure:"version"`
-
- Name string `mapstructure:"name"`
- ForceMiss bool `mapstructure:"force_miss"`
- ForceSSL bool `mapstructure:"force_ssl"`
- Action RequestSettingAction `mapstructure:"action"`
- BypassBusyWait bool `mapstructure:"bypass_busy_wait"`
- MaxStaleAge uint `mapstructure:"max_stale_age"`
- HashKeys string `mapstructure:"hash_keys"`
- XForwardedFor RequestSettingXFF `mapstructure:"xff"`
- TimerSupport bool `mapstructure:"timer_support"`
- GeoHeaders bool `mapstructure:"geo_headers"`
- DefaultHost string `mapstructure:"default_host"`
- RequestCondition string `mapstructure:"request_condition"`
+ ServiceID string `mapstructure:"service_id"`
+ Version string `mapstructure:"version"`
+
+ Name string `mapstructure:"name"`
+ ForceMiss bool `mapstructure:"force_miss"`
+ ForceSSL bool `mapstructure:"force_ssl"`
+ Action RequestSettingAction `mapstructure:"action"`
+ BypassBusyWait bool `mapstructure:"bypass_busy_wait"`
+ MaxStaleAge uint `mapstructure:"max_stale_age"`
+ HashKeys string `mapstructure:"hash_keys"`
+ XForwardedFor RequestSettingXFF `mapstructure:"xff"`
+ TimerSupport bool `mapstructure:"timer_support"`
+ GeoHeaders bool `mapstructure:"geo_headers"`
+ DefaultHost string `mapstructure:"default_host"`
+ RequestCondition string `mapstructure:"request_condition"`
}
// requestSettingsByName is a sortable list of request settings.
@@ -63,217 +63,217 @@ type requestSettingsByName []*RequestSetting
func (s requestSettingsByName) Len() int { return len(s) }
func (s requestSettingsByName) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s requestSettingsByName) Less(i, j int) bool {
- return s[i].Name < s[j].Name
+ return s[i].Name < s[j].Name
}
// ListRequestSettingsInput is used as input to the ListRequestSettings
// function.
type ListRequestSettingsInput struct {
- // Service is the ID of the service (required).
- Service string
+ // Service is the ID of the service (required).
+ Service string
- // Version is the specific configuration version (required).
- Version string
+ // Version is the specific configuration version (required).
+ Version string
}
// ListRequestSettings returns the list of request settings for the
// configuration version.
func (c *Client) ListRequestSettings(i *ListRequestSettingsInput) ([]*RequestSetting, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/request_settings", i.Service, i.Version)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var bs []*RequestSetting
- if err := decodeJSON(&bs, resp.Body); err != nil {
- return nil, err
- }
- sort.Stable(requestSettingsByName(bs))
- return bs, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/request_settings", i.Service, i.Version)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var bs []*RequestSetting
+ if err := decodeJSON(&bs, resp.Body); err != nil {
+ return nil, err
+ }
+ sort.Stable(requestSettingsByName(bs))
+ return bs, nil
}
// CreateRequestSettingInput is used as input to the CreateRequestSetting
// function.
type CreateRequestSettingInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
-
- Name string `form:"name,omitempty"`
- ForceMiss Compatibool `form:"force_miss,omitempty"`
- ForceSSL Compatibool `form:"force_ssl,omitempty"`
- Action RequestSettingAction `form:"action,omitempty"`
- BypassBusyWait Compatibool `form:"bypass_busy_wait,omitempty"`
- MaxStaleAge uint `form:"max_stale_age,omitempty"`
- HashKeys string `form:"hash_keys,omitempty"`
- XForwardedFor RequestSettingXFF `form:"xff,omitempty"`
- TimerSupport Compatibool `form:"timer_support,omitempty"`
- GeoHeaders Compatibool `form:"geo_headers,omitempty"`
- DefaultHost string `form:"default_host,omitempty"`
- RequestCondition string `form:"request_condition,omitempty"`
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ Name string `form:"name,omitempty"`
+ ForceMiss Compatibool `form:"force_miss,omitempty"`
+ ForceSSL Compatibool `form:"force_ssl,omitempty"`
+ Action RequestSettingAction `form:"action,omitempty"`
+ BypassBusyWait Compatibool `form:"bypass_busy_wait,omitempty"`
+ MaxStaleAge uint `form:"max_stale_age,omitempty"`
+ HashKeys string `form:"hash_keys,omitempty"`
+ XForwardedFor RequestSettingXFF `form:"xff,omitempty"`
+ TimerSupport Compatibool `form:"timer_support,omitempty"`
+ GeoHeaders Compatibool `form:"geo_headers,omitempty"`
+ DefaultHost string `form:"default_host,omitempty"`
+ RequestCondition string `form:"request_condition,omitempty"`
}
// CreateRequestSetting creates a new Fastly request setting.
func (c *Client) CreateRequestSetting(i *CreateRequestSettingInput) (*RequestSetting, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/request_settings", i.Service, i.Version)
- resp, err := c.PostForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var b *RequestSetting
- if err := decodeJSON(&b, resp.Body); err != nil {
- return nil, err
- }
- return b, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/request_settings", i.Service, i.Version)
+ resp, err := c.PostForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var b *RequestSetting
+ if err := decodeJSON(&b, resp.Body); err != nil {
+ return nil, err
+ }
+ return b, nil
}
// GetRequestSettingInput is used as input to the GetRequestSetting function.
type GetRequestSettingInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the request settings to fetch.
- Name string
+ // Name is the name of the request settings to fetch.
+ Name string
}
// GetRequestSetting gets the request settings configuration with the given
// parameters.
func (c *Client) GetRequestSetting(i *GetRequestSettingInput) (*RequestSetting, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/request_settings/%s", i.Service, i.Version, i.Name)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var b *RequestSetting
- if err := decodeJSON(&b, resp.Body); err != nil {
- return nil, err
- }
- return b, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/request_settings/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var b *RequestSetting
+ if err := decodeJSON(&b, resp.Body); err != nil {
+ return nil, err
+ }
+ return b, nil
}
// UpdateRequestSettingInput is used as input to the UpdateRequestSetting
// function.
type UpdateRequestSettingInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
-
- // Name is the name of the request settings to update.
- Name string
-
- NewName string `form:"name,omitempty"`
- ForceMiss Compatibool `form:"force_miss,omitempty"`
- ForceSSL Compatibool `form:"force_ssl,omitempty"`
- Action RequestSettingAction `form:"action,omitempty"`
- BypassBusyWait Compatibool `form:"bypass_busy_wait,omitempty"`
- MaxStaleAge uint `form:"max_stale_age,omitempty"`
- HashKeys string `form:"hash_keys,omitempty"`
- XForwardedFor RequestSettingXFF `form:"xff,omitempty"`
- TimerSupport Compatibool `form:"timer_support,omitempty"`
- GeoHeaders Compatibool `form:"geo_headers,omitempty"`
- DefaultHost string `form:"default_host,omitempty"`
- RequestCondition string `form:"request_condition,omitempty"`
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ // Name is the name of the request settings to update.
+ Name string
+
+ NewName string `form:"name,omitempty"`
+ ForceMiss Compatibool `form:"force_miss,omitempty"`
+ ForceSSL Compatibool `form:"force_ssl,omitempty"`
+ Action RequestSettingAction `form:"action,omitempty"`
+ BypassBusyWait Compatibool `form:"bypass_busy_wait,omitempty"`
+ MaxStaleAge uint `form:"max_stale_age,omitempty"`
+ HashKeys string `form:"hash_keys,omitempty"`
+ XForwardedFor RequestSettingXFF `form:"xff,omitempty"`
+ TimerSupport Compatibool `form:"timer_support,omitempty"`
+ GeoHeaders Compatibool `form:"geo_headers,omitempty"`
+ DefaultHost string `form:"default_host,omitempty"`
+ RequestCondition string `form:"request_condition,omitempty"`
}
// UpdateRequestSetting updates a specific request setting.
func (c *Client) UpdateRequestSetting(i *UpdateRequestSettingInput) (*RequestSetting, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/request_settings/%s", i.Service, i.Version, i.Name)
- resp, err := c.PutForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var b *RequestSetting
- if err := decodeJSON(&b, resp.Body); err != nil {
- return nil, err
- }
- return b, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/request_settings/%s", i.Service, i.Version, i.Name)
+ resp, err := c.PutForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var b *RequestSetting
+ if err := decodeJSON(&b, resp.Body); err != nil {
+ return nil, err
+ }
+ return b, nil
}
// DeleteRequestSettingInput is the input parameter to DeleteRequestSetting.
type DeleteRequestSettingInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the request settings to delete (required).
- Name string
+ // Name is the name of the request settings to delete (required).
+ Name string
}
// DeleteRequestSetting deletes the given request settings version.
func (c *Client) DeleteRequestSetting(i *DeleteRequestSettingInput) error {
- if i.Service == "" {
- return ErrMissingService
- }
-
- if i.Version == "" {
- return ErrMissingVersion
- }
-
- if i.Name == "" {
- return ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/request_settings/%s", i.Service, i.Version, i.Name)
- resp, err := c.Delete(path, nil)
- if err != nil {
- return err
- }
-
- var r *statusResp
- if err := decodeJSON(&r, resp.Body); err != nil {
- return err
- }
- if !r.Ok() {
- return fmt.Errorf("Not Ok")
- }
- return nil
+ if i.Service == "" {
+ return ErrMissingService
+ }
+
+ if i.Version == "" {
+ return ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/request_settings/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Delete(path, nil)
+ if err != nil {
+ return err
+ }
+
+ var r *statusResp
+ if err := decodeJSON(&r, resp.Body); err != nil {
+ return err
+ }
+ if !r.Ok() {
+ return fmt.Errorf("Not Ok")
+ }
+ return nil
}
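An illustrative sketch of creating a request setting with the API above, not part of the vendored change; the service ID, version, and setting name are placeholders.

```go
package main

import (
	"fmt"
	"log"

	"github.com/sethvargo/go-fastly"
)

func main() {
	client, err := fastly.NewClient("FASTLY_API_KEY") // placeholder key
	if err != nil {
		log.Fatal(err)
	}

	// Create a request setting that forces TLS, passes the cache, and appends
	// Fastly's X-Forwarded-For header on the way to the origin. ForceSSL is a
	// Compatibool, so a plain boolean literal is assignable.
	rs, err := client.CreateRequestSetting(&fastly.CreateRequestSettingInput{
		Service:       "SU1Z0isxPaozGVKXdv0eY", // placeholder service ID
		Version:       "2",
		Name:          "force-tls",
		ForceSSL:      true,
		Action:        fastly.RequestSettingActionPass,
		XForwardedFor: fastly.RequestSettingXFFAppend,
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created request setting:", rs.Name)
}
```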
diff --git a/vendor/github.com/sethvargo/go-fastly/response_object.go b/vendor/github.com/sethvargo/go-fastly/response_object.go
index 539b2ca45f32..1579d58c8405 100644
--- a/vendor/github.com/sethvargo/go-fastly/response_object.go
+++ b/vendor/github.com/sethvargo/go-fastly/response_object.go
@@ -1,22 +1,22 @@
package fastly
import (
- "fmt"
- "sort"
+ "fmt"
+ "sort"
)
// ResponseObject represents a response object response from the Fastly API.
type ResponseObject struct {
- ServiceID string `mapstructure:"service_id"`
- Version string `mapstructure:"version"`
-
- Name string `mapstructure:"name"`
- Status uint `mapstructure:"status"`
- Response string `mapstructure:"response"`
- Content string `mapstructure:"content"`
- ContentType string `mapstructure:"content_type"`
- RequestCondition string `mapstructure:"request_condition"`
- CacheCondition string `mapstructure:"cache_condition"`
+ ServiceID string `mapstructure:"service_id"`
+ Version string `mapstructure:"version"`
+
+ Name string `mapstructure:"name"`
+ Status uint `mapstructure:"status"`
+ Response string `mapstructure:"response"`
+ Content string `mapstructure:"content"`
+ ContentType string `mapstructure:"content_type"`
+ RequestCondition string `mapstructure:"request_condition"`
+ CacheCondition string `mapstructure:"cache_condition"`
}
// responseObjectsByName is a sortable list of response objects.
@@ -26,207 +26,207 @@ type responseObjectsByName []*ResponseObject
func (s responseObjectsByName) Len() int { return len(s) }
func (s responseObjectsByName) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s responseObjectsByName) Less(i, j int) bool {
- return s[i].Name < s[j].Name
+ return s[i].Name < s[j].Name
}
// ListResponseObjectsInput is used as input to the ListResponseObjects
// function.
type ListResponseObjectsInput struct {
- // Service is the ID of the service (required).
- Service string
+ // Service is the ID of the service (required).
+ Service string
- // Version is the specific configuration version (required).
- Version string
+ // Version is the specific configuration version (required).
+ Version string
}
// ListResponseObjects returns the list of response objects for the
// configuration version.
func (c *Client) ListResponseObjects(i *ListResponseObjectsInput) ([]*ResponseObject, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/response_object", i.Service, i.Version)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var bs []*ResponseObject
- if err := decodeJSON(&bs, resp.Body); err != nil {
- return nil, err
- }
- sort.Stable(responseObjectsByName(bs))
- return bs, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/response_object", i.Service, i.Version)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var bs []*ResponseObject
+ if err := decodeJSON(&bs, resp.Body); err != nil {
+ return nil, err
+ }
+ sort.Stable(responseObjectsByName(bs))
+ return bs, nil
}
// CreateResponseObjectInput is used as input to the CreateResponseObject
// function.
type CreateResponseObjectInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
-
- Name string `form:"name,omitempty"`
- Status uint `form:"status,omitempty"`
- Response string `form:"response,omitempty"`
- Content string `form:"content,omitempty"`
- ContentType string `form:"content_type,omitempty"`
- RequestCondition string `form:"request_condition,omitempty"`
- CacheCondition string `form:"cache_condition,omitempty"`
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ Name string `form:"name,omitempty"`
+ Status uint `form:"status,omitempty"`
+ Response string `form:"response,omitempty"`
+ Content string `form:"content,omitempty"`
+ ContentType string `form:"content_type,omitempty"`
+ RequestCondition string `form:"request_condition,omitempty"`
+ CacheCondition string `form:"cache_condition,omitempty"`
}
// CreateResponseObject creates a new Fastly response object.
func (c *Client) CreateResponseObject(i *CreateResponseObjectInput) (*ResponseObject, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/response_object", i.Service, i.Version)
- resp, err := c.PostForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var b *ResponseObject
- if err := decodeJSON(&b, resp.Body); err != nil {
- return nil, err
- }
- return b, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/response_object", i.Service, i.Version)
+ resp, err := c.PostForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var b *ResponseObject
+ if err := decodeJSON(&b, resp.Body); err != nil {
+ return nil, err
+ }
+ return b, nil
}
// GetResponseObjectInput is used as input to the GetResponseObject function.
type GetResponseObjectInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the response object to fetch.
- Name string
+ // Name is the name of the response object to fetch.
+ Name string
}
// GetResponseObject gets the response object configuration with the given
// parameters.
func (c *Client) GetResponseObject(i *GetResponseObjectInput) (*ResponseObject, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/response_object/%s", i.Service, i.Version, i.Name)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var b *ResponseObject
- if err := decodeJSON(&b, resp.Body); err != nil {
- return nil, err
- }
- return b, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/response_object/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var b *ResponseObject
+ if err := decodeJSON(&b, resp.Body); err != nil {
+ return nil, err
+ }
+ return b, nil
}
// UpdateResponseObjectInput is used as input to the UpdateResponseObject
// function.
type UpdateResponseObjectInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
-
- // Name is the name of the response object to update.
- Name string
-
- NewName string `form:"name,omitempty"`
- Status uint `form:"status,omitempty"`
- Response string `form:"response,omitempty"`
- Content string `form:"content,omitempty"`
- ContentType string `form:"content_type,omitempty"`
- RequestCondition string `form:"request_condition,omitempty"`
- CacheCondition string `form:"cache_condition,omitempty"`
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ // Name is the name of the response object to update.
+ Name string
+
+ NewName string `form:"name,omitempty"`
+ Status uint `form:"status,omitempty"`
+ Response string `form:"response,omitempty"`
+ Content string `form:"content,omitempty"`
+ ContentType string `form:"content_type,omitempty"`
+ RequestCondition string `form:"request_condition,omitempty"`
+ CacheCondition string `form:"cache_condition,omitempty"`
}
// UpdateResponseObject updates a specific response object.
func (c *Client) UpdateResponseObject(i *UpdateResponseObjectInput) (*ResponseObject, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/response_object/%s", i.Service, i.Version, i.Name)
- resp, err := c.PutForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var b *ResponseObject
- if err := decodeJSON(&b, resp.Body); err != nil {
- return nil, err
- }
- return b, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/response_object/%s", i.Service, i.Version, i.Name)
+ resp, err := c.PutForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var b *ResponseObject
+ if err := decodeJSON(&b, resp.Body); err != nil {
+ return nil, err
+ }
+ return b, nil
}
// DeleteResponseObjectInput is the input parameter to DeleteResponseObject.
type DeleteResponseObjectInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the response object to delete (required).
- Name string
+ // Name is the name of the response object to delete (required).
+ Name string
}
// DeleteResponseObject deletes the given response object version.
func (c *Client) DeleteResponseObject(i *DeleteResponseObjectInput) error {
- if i.Service == "" {
- return ErrMissingService
- }
-
- if i.Version == "" {
- return ErrMissingVersion
- }
-
- if i.Name == "" {
- return ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/response_object/%s", i.Service, i.Version, i.Name)
- resp, err := c.Delete(path, nil)
- if err != nil {
- return err
- }
-
- var r *statusResp
- if err := decodeJSON(&r, resp.Body); err != nil {
- return err
- }
- if !r.Ok() {
- return fmt.Errorf("Not Ok")
- }
- return nil
+ if i.Service == "" {
+ return ErrMissingService
+ }
+
+ if i.Version == "" {
+ return ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/response_object/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Delete(path, nil)
+ if err != nil {
+ return err
+ }
+
+ var r *statusResp
+ if err := decodeJSON(&r, resp.Body); err != nil {
+ return err
+ }
+ if !r.Ok() {
+ return fmt.Errorf("Not Ok")
+ }
+ return nil
}
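A sketch of creating a synthetic response object with the API above, not part of the vendored change; the service ID, version, names, and condition are placeholders.

```go
package main

import (
	"fmt"
	"log"

	"github.com/sethvargo/go-fastly"
)

func main() {
	client, err := fastly.NewClient("FASTLY_API_KEY") // placeholder key
	if err != nil {
		log.Fatal(err)
	}

	// Serve a synthetic 503 maintenance page straight from the edge, gated by
	// a request condition assumed to be defined elsewhere on the version.
	ro, err := client.CreateResponseObject(&fastly.CreateResponseObjectInput{
		Service:          "SU1Z0isxPaozGVKXdv0eY", // placeholder service ID
		Version:          "2",
		Name:             "maintenance-page",
		Status:           503,
		Response:         "Service Unavailable",
		Content:          "<h1>Back soon</h1>",
		ContentType:      "text/html",
		RequestCondition: "in-maintenance-window", // placeholder condition name
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created response object:", ro.Name, ro.Status)
}
```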
diff --git a/vendor/github.com/sethvargo/go-fastly/s3.go b/vendor/github.com/sethvargo/go-fastly/s3.go
index ef98d0edb82d..96d848825866 100644
--- a/vendor/github.com/sethvargo/go-fastly/s3.go
+++ b/vendor/github.com/sethvargo/go-fastly/s3.go
@@ -1,29 +1,30 @@
package fastly
import (
- "fmt"
- "sort"
- "time"
+ "fmt"
+ "sort"
+ "time"
)
// S3 represents an S3 response from the Fastly API.
type S3 struct {
- ServiceID string `mapstructure:"service_id"`
- Version string `mapstructure:"version"`
-
- Name string `mapstructure:"name"`
- BucketName string `mapstructure:"bucket_name"`
- AccessKey string `mapstructure:"access_key"`
- SecretKey string `mapstructure:"secret_key"`
- Path string `mapstructure:"path"`
- Period uint `mapstructure:"period"`
- GzipLevel uint `mapstructure:"gzip_level"`
- Format string `mapstructure:"format"`
- ResponseCondition string `mapstructure:"response_condition"`
- TimestampFormat string `mapstructure:"timestamp_format"`
- CreatedAt *time.Time `mapstructure:"created_at"`
- UpdatedAt *time.Time `mapstructure:"updated_at"`
- DeletedAt *time.Time `mapstructure:"deleted_at"`
+ ServiceID string `mapstructure:"service_id"`
+ Version string `mapstructure:"version"`
+
+ Name string `mapstructure:"name"`
+ BucketName string `mapstructure:"bucket_name"`
+ Domain string `mapstructure:"domain"`
+ AccessKey string `mapstructure:"access_key"`
+ SecretKey string `mapstructure:"secret_key"`
+ Path string `mapstructure:"path"`
+ Period uint `mapstructure:"period"`
+ GzipLevel uint `mapstructure:"gzip_level"`
+ Format string `mapstructure:"format"`
+ ResponseCondition string `mapstructure:"response_condition"`
+ TimestampFormat string `mapstructure:"timestamp_format"`
+ CreatedAt *time.Time `mapstructure:"created_at"`
+ UpdatedAt *time.Time `mapstructure:"updated_at"`
+ DeletedAt *time.Time `mapstructure:"deleted_at"`
}
// s3sByName is a sortable list of S3s.
@@ -33,208 +34,210 @@ type s3sByName []*S3
func (s s3sByName) Len() int { return len(s) }
func (s s3sByName) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s s3sByName) Less(i, j int) bool {
- return s[i].Name < s[j].Name
+ return s[i].Name < s[j].Name
}
// ListS3sInput is used as input to the ListS3s function.
type ListS3sInput struct {
- // Service is the ID of the service (required).
- Service string
+ // Service is the ID of the service (required).
+ Service string
- // Version is the specific configuration version (required).
- Version string
+ // Version is the specific configuration version (required).
+ Version string
}
// ListS3s returns the list of S3s for the configuration version.
func (c *Client) ListS3s(i *ListS3sInput) ([]*S3, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/logging/s3", i.Service, i.Version)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var s3s []*S3
- if err := decodeJSON(&s3s, resp.Body); err != nil {
- return nil, err
- }
- sort.Stable(s3sByName(s3s))
- return s3s, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/s3", i.Service, i.Version)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var s3s []*S3
+ if err := decodeJSON(&s3s, resp.Body); err != nil {
+ return nil, err
+ }
+ sort.Stable(s3sByName(s3s))
+ return s3s, nil
}
// CreateS3Input is used as input to the CreateS3 function.
type CreateS3Input struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
-
- Name string `form:"name,omitempty"`
- BucketName string `form:"bucket_name,omitempty"`
- AccessKey string `form:"access_key,omitempty"`
- SecretKey string `form:"secret_key,omitempty"`
- Path string `form:"path,omitempty"`
- Period uint `form:"period,omitempty"`
- GzipLevel uint `form:"gzip_level,omitempty"`
- Format string `form:"format,omitempty"`
- ResponseCondition string `form:"response_condition,omitempty"`
- TimestampFormat string `form:"timestamp_format,omitempty"`
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ Name string `form:"name,omitempty"`
+ BucketName string `form:"bucket_name,omitempty"`
+ Domain string `form:"domain,omitempty"`
+ AccessKey string `form:"access_key,omitempty"`
+ SecretKey string `form:"secret_key,omitempty"`
+ Path string `form:"path,omitempty"`
+ Period uint `form:"period,omitempty"`
+ GzipLevel uint `form:"gzip_level,omitempty"`
+ Format string `form:"format,omitempty"`
+ ResponseCondition string `form:"response_condition,omitempty"`
+ TimestampFormat string `form:"timestamp_format,omitempty"`
}
// CreateS3 creates a new Fastly S3.
func (c *Client) CreateS3(i *CreateS3Input) (*S3, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/logging/s3", i.Service, i.Version)
- resp, err := c.PostForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var s3 *S3
- if err := decodeJSON(&s3, resp.Body); err != nil {
- return nil, err
- }
- return s3, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/s3", i.Service, i.Version)
+ resp, err := c.PostForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var s3 *S3
+ if err := decodeJSON(&s3, resp.Body); err != nil {
+ return nil, err
+ }
+ return s3, nil
}
// GetS3Input is used as input to the GetS3 function.
type GetS3Input struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the S3 to fetch.
- Name string
+ // Name is the name of the S3 to fetch.
+ Name string
}
// GetS3 gets the S3 configuration with the given parameters.
func (c *Client) GetS3(i *GetS3Input) (*S3, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/logging/s3/%s", i.Service, i.Version, i.Name)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var s3 *S3
- if err := decodeJSON(&s3, resp.Body); err != nil {
- return nil, err
- }
- return s3, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/s3/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var s3 *S3
+ if err := decodeJSON(&s3, resp.Body); err != nil {
+ return nil, err
+ }
+ return s3, nil
}
// UpdateS3Input is used as input to the UpdateS3 function.
type UpdateS3Input struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
-
- // Name is the name of the S3 to update.
- Name string
-
- NewName string `form:"name,omitempty"`
- BucketName string `form:"bucket_name,omitempty"`
- AccessKey string `form:"access_key,omitempty"`
- SecretKey string `form:"secret_key,omitempty"`
- Path string `form:"path,omitempty"`
- Period uint `form:"period,omitempty"`
- GzipLevel uint `form:"gzip_level,omitempty"`
- Format string `form:"format,omitempty"`
- ResponseCondition string `form:"response_condition,omitempty"`
- TimestampFormat string `form:"timestamp_format,omitempty"`
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ // Name is the name of the S3 to update.
+ Name string
+
+ NewName string `form:"name,omitempty"`
+ BucketName string `form:"bucket_name,omitempty"`
+ Domain string `form:"domain,omitempty"`
+ AccessKey string `form:"access_key,omitempty"`
+ SecretKey string `form:"secret_key,omitempty"`
+ Path string `form:"path,omitempty"`
+ Period uint `form:"period,omitempty"`
+ GzipLevel uint `form:"gzip_level,omitempty"`
+ Format string `form:"format,omitempty"`
+ ResponseCondition string `form:"response_condition,omitempty"`
+ TimestampFormat string `form:"timestamp_format,omitempty"`
}
// UpdateS3 updates a specific S3.
func (c *Client) UpdateS3(i *UpdateS3Input) (*S3, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/logging/s3/%s", i.Service, i.Version, i.Name)
- resp, err := c.PutForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var s3 *S3
- if err := decodeJSON(&s3, resp.Body); err != nil {
- return nil, err
- }
- return s3, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/s3/%s", i.Service, i.Version, i.Name)
+ resp, err := c.PutForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var s3 *S3
+ if err := decodeJSON(&s3, resp.Body); err != nil {
+ return nil, err
+ }
+ return s3, nil
}
// DeleteS3Input is the input parameter to DeleteS3.
type DeleteS3Input struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the S3 to delete (required).
- Name string
+ // Name is the name of the S3 to delete (required).
+ Name string
}
// DeleteS3 deletes the given S3 version.
func (c *Client) DeleteS3(i *DeleteS3Input) error {
- if i.Service == "" {
- return ErrMissingService
- }
-
- if i.Version == "" {
- return ErrMissingVersion
- }
-
- if i.Name == "" {
- return ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/logging/s3/%s", i.Service, i.Version, i.Name)
- resp, err := c.Delete(path, nil)
- if err != nil {
- return err
- }
-
- var r *statusResp
- if err := decodeJSON(&r, resp.Body); err != nil {
- return err
- }
- if !r.Ok() {
- return fmt.Errorf("Not Ok")
- }
- return nil
+ if i.Service == "" {
+ return ErrMissingService
+ }
+
+ if i.Version == "" {
+ return ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/s3/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Delete(path, nil)
+ if err != nil {
+ return err
+ }
+
+ var r *statusResp
+ if err := decodeJSON(&r, resp.Body); err != nil {
+ return err
+ }
+ if !r.Ok() {
+ return fmt.Errorf("Not Ok")
+ }
+ return nil
}
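A sketch exercising the new Domain field added to the S3 logging structs above, not part of the vendored change; the service ID, version, bucket, endpoint, and credentials are placeholders. Domain appears to let callers point the log shipper at a region-specific or S3-compatible endpoint rather than the default.

```go
package main

import (
	"fmt"
	"log"

	"github.com/sethvargo/go-fastly"
)

func main() {
	client, err := fastly.NewClient("FASTLY_API_KEY") // placeholder key
	if err != nil {
		log.Fatal(err)
	}

	// Ship gzipped access logs to a bucket every hour, using the Domain field
	// introduced in this change to select the S3 endpoint.
	s3, err := client.CreateS3(&fastly.CreateS3Input{
		Service:         "SU1Z0isxPaozGVKXdv0eY",      // placeholder service ID
		Version:         "2",
		Name:            "example-s3-logs",
		BucketName:      "my-log-bucket",              // placeholder bucket
		Domain:          "s3.us-west-1.amazonaws.com", // placeholder endpoint
		AccessKey:       "AKIA-PLACEHOLDER",           // placeholder credentials
		SecretKey:       "SECRET-PLACEHOLDER",
		Path:            "/fastly/",
		Period:          3600,
		GzipLevel:       9,
		TimestampFormat: "%Y-%m-%dT%H:%M:%S.000",
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created S3 logging endpoint:", s3.Name)
}
```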
diff --git a/vendor/github.com/sethvargo/go-fastly/service.go b/vendor/github.com/sethvargo/go-fastly/service.go
index a1fecdfc19d3..a05d224fa729 100644
--- a/vendor/github.com/sethvargo/go-fastly/service.go
+++ b/vendor/github.com/sethvargo/go-fastly/service.go
@@ -1,28 +1,31 @@
package fastly
import (
- "fmt"
- "sort"
+ "fmt"
+ "sort"
)
// Service represents a single service for the Fastly account.
type Service struct {
- ID string `mapstructure:"id"`
- Name string `mapstructure:"name"`
- Comment string `mapstructure:"comment"`
- CustomerID string `mapstructure:"customer_id"`
- ActiveVersion uint `mapstructure:"version"`
- Versions []*Version `mapstructure:"versions"`
+ ID string `mapstructure:"id"`
+ Name string `mapstructure:"name"`
+ Comment string `mapstructure:"comment"`
+ CustomerID string `mapstructure:"customer_id"`
+ CreatedAt string `mapstructure:"created_at"`
+ UpdatedAt string `mapstructure:"updated_at"`
+ DeletedAt string `mapstructure:"deleted_at"`
+ ActiveVersion uint `mapstructure:"version"`
+ Versions []*Version `mapstructure:"versions"`
}
type ServiceDetail struct {
- ID string `mapstructure:"id"`
- Name string `mapstructure:"name"`
- Comment string `mapstructure:"comment"`
- CustomerID string `mapstructure:"customer_id"`
- ActiveVersion Version `mapstructure:"active_version"`
- Version Version `mapstructure:"version"`
- Versions []*Version `mapstructure:"versions"`
+ ID string `mapstructure:"id"`
+ Name string `mapstructure:"name"`
+ Comment string `mapstructure:"comment"`
+ CustomerID string `mapstructure:"customer_id"`
+ ActiveVersion Version `mapstructure:"active_version"`
+ Version Version `mapstructure:"version"`
+ Versions []*Version `mapstructure:"versions"`
}
// servicesByName is a sortable list of services.
@@ -32,7 +35,7 @@ type servicesByName []*Service
func (s servicesByName) Len() int { return len(s) }
func (s servicesByName) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s servicesByName) Less(i, j int) bool {
- return s[i].Name < s[j].Name
+ return s[i].Name < s[j].Name
}
// ListServicesInput is used as input to the ListServices function.
@@ -40,166 +43,166 @@ type ListServicesInput struct{}
// ListServices returns the full list of services for the current account.
func (c *Client) ListServices(i *ListServicesInput) ([]*Service, error) {
- resp, err := c.Get("/service", nil)
- if err != nil {
- return nil, err
- }
-
- var s []*Service
- if err := decodeJSON(&s, resp.Body); err != nil {
- return nil, err
- }
- sort.Stable(servicesByName(s))
- return s, nil
+ resp, err := c.Get("/service", nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var s []*Service
+ if err := decodeJSON(&s, resp.Body); err != nil {
+ return nil, err
+ }
+ sort.Stable(servicesByName(s))
+ return s, nil
}
// CreateServiceInput is used as input to the CreateService function.
type CreateServiceInput struct {
- Name string `form:"name,omitempty"`
- Comment string `form:"comment,omitempty"`
+ Name string `form:"name,omitempty"`
+ Comment string `form:"comment,omitempty"`
}
// CreateService creates a new service with the given information.
func (c *Client) CreateService(i *CreateServiceInput) (*Service, error) {
- resp, err := c.PostForm("/service", i, nil)
- if err != nil {
- return nil, err
- }
-
- var s *Service
- if err := decodeJSON(&s, resp.Body); err != nil {
- return nil, err
- }
- return s, nil
+ resp, err := c.PostForm("/service", i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var s *Service
+ if err := decodeJSON(&s, resp.Body); err != nil {
+ return nil, err
+ }
+ return s, nil
}
// GetServiceInput is used as input to the GetService function.
type GetServiceInput struct {
- ID string
+ ID string
}
// GetService retrieves the service information for the service with the given
// id. If no service exists for the given id, the API returns a 400 response
// (not a 404).
func (c *Client) GetService(i *GetServiceInput) (*Service, error) {
- if i.ID == "" {
- return nil, ErrMissingID
- }
-
- path := fmt.Sprintf("/service/%s", i.ID)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var s *Service
- if err := decodeJSON(&s, resp.Body); err != nil {
- return nil, err
- }
-
- return s, nil
+ if i.ID == "" {
+ return nil, ErrMissingID
+ }
+
+ path := fmt.Sprintf("/service/%s", i.ID)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var s *Service
+ if err := decodeJSON(&s, resp.Body); err != nil {
+ return nil, err
+ }
+
+ return s, nil
}
// GetServiceDetails retrieves the details for the service with the given id. If no
// service exists for the given id, the API returns a 400 response (not a 404).
func (c *Client) GetServiceDetails(i *GetServiceInput) (*ServiceDetail, error) {
- if i.ID == "" {
- return nil, ErrMissingID
- }
-
- path := fmt.Sprintf("/service/%s/details", i.ID)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var s *ServiceDetail
- if err := decodeJSON(&s, resp.Body); err != nil {
- return nil, err
- }
-
- return s, nil
+ if i.ID == "" {
+ return nil, ErrMissingID
+ }
+
+ path := fmt.Sprintf("/service/%s/details", i.ID)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var s *ServiceDetail
+ if err := decodeJSON(&s, resp.Body); err != nil {
+ return nil, err
+ }
+
+ return s, nil
}
// UpdateServiceInput is used as input to the UpdateService function.
type UpdateServiceInput struct {
- ID string
+ ID string
- Name string `form:"name,omitempty"`
- Comment string `form:"comment,omitempty"`
+ Name string `form:"name,omitempty"`
+ Comment string `form:"comment,omitempty"`
}
// UpdateService updates the service with the given input.
func (c *Client) UpdateService(i *UpdateServiceInput) (*Service, error) {
- if i.ID == "" {
- return nil, ErrMissingID
- }
-
- path := fmt.Sprintf("/service/%s", i.ID)
- resp, err := c.PutForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var s *Service
- if err := decodeJSON(&s, resp.Body); err != nil {
- return nil, err
- }
- return s, nil
+ if i.ID == "" {
+ return nil, ErrMissingID
+ }
+
+ path := fmt.Sprintf("/service/%s", i.ID)
+ resp, err := c.PutForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var s *Service
+ if err := decodeJSON(&s, resp.Body); err != nil {
+ return nil, err
+ }
+ return s, nil
}
// DeleteServiceInput is used as input to the DeleteService function.
type DeleteServiceInput struct {
- ID string
+ ID string
}
// DeleteService deletes the service with the given input.
func (c *Client) DeleteService(i *DeleteServiceInput) error {
- if i.ID == "" {
- return ErrMissingID
- }
-
- path := fmt.Sprintf("/service/%s", i.ID)
- resp, err := c.Delete(path, nil)
- if err != nil {
- return err
- }
-
- var r *statusResp
- if err := decodeJSON(&r, resp.Body); err != nil {
- return err
- }
- if !r.Ok() {
- return fmt.Errorf("Not Ok")
- }
- return nil
+ if i.ID == "" {
+ return ErrMissingID
+ }
+
+ path := fmt.Sprintf("/service/%s", i.ID)
+ resp, err := c.Delete(path, nil)
+ if err != nil {
+ return err
+ }
+
+ var r *statusResp
+ if err := decodeJSON(&r, resp.Body); err != nil {
+ return err
+ }
+ if !r.Ok() {
+ return fmt.Errorf("Not Ok")
+ }
+ return nil
}
// SearchServiceInput is used as input to the SearchService function.
type SearchServiceInput struct {
- Name string
+ Name string
}
// SearchService gets a specific service by name. If no service exists by that
// name, the API returns a 400 response (not a 404).
func (c *Client) SearchService(i *SearchServiceInput) (*Service, error) {
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- resp, err := c.Get("/service/search", &RequestOptions{
- Params: map[string]string{
- "name": i.Name,
- },
- })
- if err != nil {
- return nil, err
- }
-
- var s *Service
- if err := decodeJSON(&s, resp.Body); err != nil {
- return nil, err
- }
-
- return s, nil
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ resp, err := c.Get("/service/search", &RequestOptions{
+ Params: map[string]string{
+ "name": i.Name,
+ },
+ })
+ if err != nil {
+ return nil, err
+ }
+
+ var s *Service
+ if err := decodeJSON(&s, resp.Body); err != nil {
+ return nil, err
+ }
+
+ return s, nil
}
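As an illustrative sketch only (not part of the vendored patch): exercising the service helpers above, assuming the package is imported as `fastly` and `c` is an already-configured `*fastly.Client`.

func exampleServiceLifecycle(c *fastly.Client) error {
	// Create a service, then fetch it by ID and by name.
	svc, err := c.CreateService(&fastly.CreateServiceInput{
		Name:    "example-service",
		Comment: "created from a usage sketch",
	})
	if err != nil {
		return err
	}

	// GetService returns a 400 (not a 404) when the ID does not exist.
	if _, err := c.GetService(&fastly.GetServiceInput{ID: svc.ID}); err != nil {
		return err
	}
	if _, err := c.SearchService(&fastly.SearchServiceInput{Name: svc.Name}); err != nil {
		return err
	}

	// ListServices returns all services for the account, sorted by name.
	if _, err := c.ListServices(&fastly.ListServicesInput{}); err != nil {
		return err
	}

	return c.DeleteService(&fastly.DeleteServiceInput{ID: svc.ID})
}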
diff --git a/vendor/github.com/sethvargo/go-fastly/settings.go b/vendor/github.com/sethvargo/go-fastly/settings.go
index b2b742f5e6b2..02ea6f234e00 100644
--- a/vendor/github.com/sethvargo/go-fastly/settings.go
+++ b/vendor/github.com/sethvargo/go-fastly/settings.go
@@ -4,74 +4,74 @@ import "fmt"
// Settings represents a settings response from the Fastly API.
type Settings struct {
- ServiceID string `mapstructure:"service_id"`
- Version string `mapstructure:"version"`
+ ServiceID string `mapstructure:"service_id"`
+ Version string `mapstructure:"version"`
- DefaultTTL uint `mapstructure:"general.default_ttl"`
- DefaultHost string `mapstructure:"general.default_host"`
+ DefaultTTL uint `mapstructure:"general.default_ttl"`
+ DefaultHost string `mapstructure:"general.default_host"`
}
// GetSettingsInput is used as input to the GetSettings function.
type GetSettingsInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
}
// GetSettings gets the settings configuration with the given parameters.
func (c *Client) GetSettings(i *GetSettingsInput) (*Settings, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
- path := fmt.Sprintf("/service/%s/version/%s/settings", i.Service, i.Version)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
+ path := fmt.Sprintf("/service/%s/version/%s/settings", i.Service, i.Version)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
- var b *Settings
- if err := decodeJSON(&b, resp.Body); err != nil {
- return nil, err
- }
- return b, nil
+ var b *Settings
+ if err := decodeJSON(&b, resp.Body); err != nil {
+ return nil, err
+ }
+ return b, nil
}
// UpdateSettingsInput is used as input to the UpdateSettings function.
type UpdateSettingsInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- DefaultTTL uint `form:"general.default_ttl,omitempty"`
- DefaultHost string `form:"general.default_host,omitempty"`
+ DefaultTTL uint `form:"general.default_ttl,omitempty"`
+ DefaultHost string `form:"general.default_host,omitempty"`
}
// UpdateSettings updates the settings configuration with the given parameters.
func (c *Client) UpdateSettings(i *UpdateSettingsInput) (*Settings, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
- path := fmt.Sprintf("/service/%s/version/%s/settings", i.Service, i.Version)
- resp, err := c.PutForm(path, i, nil)
- if err != nil {
- return nil, err
- }
+ path := fmt.Sprintf("/service/%s/version/%s/settings", i.Service, i.Version)
+ resp, err := c.PutForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
- var b *Settings
- if err := decodeJSON(&b, resp.Body); err != nil {
- return nil, err
- }
- return b, nil
+ var b *Settings
+ if err := decodeJSON(&b, resp.Body); err != nil {
+ return nil, err
+ }
+ return b, nil
}
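A minimal sketch of reading and updating the general settings shown above, under the same assumptions (imported as `fastly`, configured `*fastly.Client`); the service ID and version values are placeholders supplied by the caller.

func exampleSettings(c *fastly.Client, serviceID, version string) error {
	// Fetch the current settings for the service version.
	current, err := c.GetSettings(&fastly.GetSettingsInput{
		Service: serviceID,
		Version: version,
	})
	if err != nil {
		return err
	}
	_ = current // e.g. inspect current.DefaultTTL before changing it

	// Raise the default TTL; zero-valued fields are omitted from the form.
	_, err = c.UpdateSettings(&fastly.UpdateSettingsInput{
		Service:    serviceID,
		Version:    version,
		DefaultTTL: 3600,
	})
	return err
}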
diff --git a/vendor/github.com/sethvargo/go-fastly/sumologic.go b/vendor/github.com/sethvargo/go-fastly/sumologic.go
index 39ff9b1834c2..371e7a6e8880 100644
--- a/vendor/github.com/sethvargo/go-fastly/sumologic.go
+++ b/vendor/github.com/sethvargo/go-fastly/sumologic.go
@@ -1,24 +1,24 @@
package fastly
import (
- "fmt"
- "sort"
- "time"
+ "fmt"
+ "sort"
+ "time"
)
// Sumologic represents a sumologic response from the Fastly API.
type Sumologic struct {
- ServiceID string `mapstructure:"service_id"`
- Version string `mapstructure:"version"`
-
- Name string `mapstructure:"name"`
- Address string `mapstructure:"address"`
- URL string `mapstructure:"url"`
- Format string `mapstructure:"format"`
- ResponseCondition string `mapstructure:"response_condition"`
- CreatedAt *time.Time `mapstructure:"created_at"`
- UpdatedAt *time.Time `mapstructure:"updated_at"`
- DeletedAt *time.Time `mapstructure:"deleted_at"`
+ ServiceID string `mapstructure:"service_id"`
+ Version string `mapstructure:"version"`
+
+ Name string `mapstructure:"name"`
+ Address string `mapstructure:"address"`
+ URL string `mapstructure:"url"`
+ Format string `mapstructure:"format"`
+ ResponseCondition string `mapstructure:"response_condition"`
+ CreatedAt *time.Time `mapstructure:"created_at"`
+ UpdatedAt *time.Time `mapstructure:"updated_at"`
+ DeletedAt *time.Time `mapstructure:"deleted_at"`
}
// sumologicsByName is a sortable list of sumologics.
@@ -28,198 +28,198 @@ type sumologicsByName []*Sumologic
func (s sumologicsByName) Len() int { return len(s) }
func (s sumologicsByName) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s sumologicsByName) Less(i, j int) bool {
- return s[i].Name < s[j].Name
+ return s[i].Name < s[j].Name
}
// ListSumologicsInput is used as input to the ListSumologics function.
type ListSumologicsInput struct {
- // Service is the ID of the service (required).
- Service string
+ // Service is the ID of the service (required).
+ Service string
- // Version is the specific configuration version (required).
- Version string
+ // Version is the specific configuration version (required).
+ Version string
}
// ListSumologics returns the list of sumologics for the configuration version.
func (c *Client) ListSumologics(i *ListSumologicsInput) ([]*Sumologic, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/logging/sumologic", i.Service, i.Version)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var ss []*Sumologic
- if err := decodeJSON(&ss, resp.Body); err != nil {
- return nil, err
- }
- sort.Stable(sumologicsByName(ss))
- return ss, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/sumologic", i.Service, i.Version)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var ss []*Sumologic
+ if err := decodeJSON(&ss, resp.Body); err != nil {
+ return nil, err
+ }
+ sort.Stable(sumologicsByName(ss))
+ return ss, nil
}
// CreateSumologicInput is used as input to the CreateSumologic function.
type CreateSumologicInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
-
- Name string `form:"name,omitempty"`
- Address string `form:"address,omitempty"`
- URL string `form:"url,omitempty"`
- Format string `form:"format,omitempty"`
- ResponseCondition string `form:"response_condition,omitempty"`
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ Name string `form:"name,omitempty"`
+ Address string `form:"address,omitempty"`
+ URL string `form:"url,omitempty"`
+ Format string `form:"format,omitempty"`
+ ResponseCondition string `form:"response_condition,omitempty"`
}
// CreateSumologic creates a new Fastly sumologic.
func (c *Client) CreateSumologic(i *CreateSumologicInput) (*Sumologic, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/logging/sumologic", i.Service, i.Version)
- resp, err := c.PostForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var s *Sumologic
- if err := decodeJSON(&s, resp.Body); err != nil {
- return nil, err
- }
- return s, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/sumologic", i.Service, i.Version)
+ resp, err := c.PostForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var s *Sumologic
+ if err := decodeJSON(&s, resp.Body); err != nil {
+ return nil, err
+ }
+ return s, nil
}
// GetSumologicInput is used as input to the GetSumologic function.
type GetSumologicInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the sumologic to fetch.
- Name string
+ // Name is the name of the sumologic to fetch.
+ Name string
}
// GetSumologic gets the sumologic configuration with the given parameters.
func (c *Client) GetSumologic(i *GetSumologicInput) (*Sumologic, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/logging/sumologic/%s", i.Service, i.Version, i.Name)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var s *Sumologic
- if err := decodeJSON(&s, resp.Body); err != nil {
- return nil, err
- }
- return s, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/sumologic/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var s *Sumologic
+ if err := decodeJSON(&s, resp.Body); err != nil {
+ return nil, err
+ }
+ return s, nil
}
// UpdateSumologicInput is used as input to the UpdateSumologic function.
type UpdateSumologicInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
-
- // Name is the name of the sumologic to update.
- Name string
-
- NewName string `form:"name,omitempty"`
- Address string `form:"address,omitempty"`
- URL string `form:"url,omitempty"`
- Format string `form:"format,omitempty"`
- ResponseCondition string `form:"response_condition,omitempty"`
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ // Name is the name of the sumologic to update.
+ Name string
+
+ NewName string `form:"name,omitempty"`
+ Address string `form:"address,omitempty"`
+ URL string `form:"url,omitempty"`
+ Format string `form:"format,omitempty"`
+ ResponseCondition string `form:"response_condition,omitempty"`
}
// UpdateSumologic updates a specific sumologic.
func (c *Client) UpdateSumologic(i *UpdateSumologicInput) (*Sumologic, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/logging/sumologic/%s", i.Service, i.Version, i.Name)
- resp, err := c.PutForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var s *Sumologic
- if err := decodeJSON(&s, resp.Body); err != nil {
- return nil, err
- }
- return s, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/sumologic/%s", i.Service, i.Version, i.Name)
+ resp, err := c.PutForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var s *Sumologic
+ if err := decodeJSON(&s, resp.Body); err != nil {
+ return nil, err
+ }
+ return s, nil
}
// DeleteSumologicInput is the input parameter to DeleteSumologic.
type DeleteSumologicInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the sumologic to delete (required).
- Name string
+ // Name is the name of the sumologic to delete (required).
+ Name string
}
// DeleteSumologic deletes the named sumologic from the given configuration version.
func (c *Client) DeleteSumologic(i *DeleteSumologicInput) error {
- if i.Service == "" {
- return ErrMissingService
- }
-
- if i.Version == "" {
- return ErrMissingVersion
- }
-
- if i.Name == "" {
- return ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/logging/sumologic/%s", i.Service, i.Version, i.Name)
- resp, err := c.Delete(path, nil)
- if err != nil {
- return err
- }
-
- var r *statusResp
- if err := decodeJSON(&r, resp.Body); err != nil {
- return err
- }
- if !r.Ok() {
- return fmt.Errorf("Not Ok")
- }
- return nil
+ if i.Service == "" {
+ return ErrMissingService
+ }
+
+ if i.Version == "" {
+ return ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/sumologic/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Delete(path, nil)
+ if err != nil {
+ return err
+ }
+
+ var r *statusResp
+ if err := decodeJSON(&r, resp.Body); err != nil {
+ return err
+ }
+ if !r.Ok() {
+ return fmt.Errorf("Not Ok")
+ }
+ return nil
}
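A sketch of the read side of this endpoint, again assuming a configured `*fastly.Client` and the package imported as `fastly`.

func exampleSumologics(c *fastly.Client, serviceID, version string) error {
	// ListSumologics returns the endpoints for the version, sorted by name.
	all, err := c.ListSumologics(&fastly.ListSumologicsInput{
		Service: serviceID,
		Version: version,
	})
	if err != nil {
		return err
	}

	// Fetch each endpoint individually by name.
	for _, s := range all {
		if _, err := c.GetSumologic(&fastly.GetSumologicInput{
			Service: serviceID,
			Version: version,
			Name:    s.Name,
		}); err != nil {
			return err
		}
	}
	return nil
}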
diff --git a/vendor/github.com/sethvargo/go-fastly/syslog.go b/vendor/github.com/sethvargo/go-fastly/syslog.go
index 429576beae2b..3e4183014e38 100644
--- a/vendor/github.com/sethvargo/go-fastly/syslog.go
+++ b/vendor/github.com/sethvargo/go-fastly/syslog.go
@@ -1,27 +1,27 @@
package fastly
import (
- "fmt"
- "sort"
- "time"
+ "fmt"
+ "sort"
+ "time"
)
// Syslog represents a syslog response from the Fastly API.
type Syslog struct {
- ServiceID string `mapstructure:"service_id"`
- Version string `mapstructure:"version"`
-
- Name string `mapstructure:"name"`
- Address string `mapstructure:"address"`
- Port uint `mapstructure:"port"`
- UseTLS bool `mapstructure:"use_tls"`
- TLSCACert string `mapstructure:"tls_ca_cert"`
- Token string `mapstructure:"token"`
- Format string `mapstructure:"format"`
- ResponseCondition string `mapstructure:"response_condition"`
- CreatedAt *time.Time `mapstructure:"created_at"`
- UpdatedAt *time.Time `mapstructure:"updated_at"`
- DeletedAt *time.Time `mapstructure:"deleted_at"`
+ ServiceID string `mapstructure:"service_id"`
+ Version string `mapstructure:"version"`
+
+ Name string `mapstructure:"name"`
+ Address string `mapstructure:"address"`
+ Port uint `mapstructure:"port"`
+ UseTLS bool `mapstructure:"use_tls"`
+ TLSCACert string `mapstructure:"tls_ca_cert"`
+ Token string `mapstructure:"token"`
+ Format string `mapstructure:"format"`
+ ResponseCondition string `mapstructure:"response_condition"`
+ CreatedAt *time.Time `mapstructure:"created_at"`
+ UpdatedAt *time.Time `mapstructure:"updated_at"`
+ DeletedAt *time.Time `mapstructure:"deleted_at"`
}
// syslogsByName is a sortable list of syslogs.
@@ -31,204 +31,204 @@ type syslogsByName []*Syslog
func (s syslogsByName) Len() int { return len(s) }
func (s syslogsByName) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s syslogsByName) Less(i, j int) bool {
- return s[i].Name < s[j].Name
+ return s[i].Name < s[j].Name
}
// ListSyslogsInput is used as input to the ListSyslogs function.
type ListSyslogsInput struct {
- // Service is the ID of the service (required).
- Service string
+ // Service is the ID of the service (required).
+ Service string
- // Version is the specific configuration version (required).
- Version string
+ // Version is the specific configuration version (required).
+ Version string
}
// ListSyslogs returns the list of syslogs for the configuration version.
func (c *Client) ListSyslogs(i *ListSyslogsInput) ([]*Syslog, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/logging/syslog", i.Service, i.Version)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var ss []*Syslog
- if err := decodeJSON(&ss, resp.Body); err != nil {
- return nil, err
- }
- sort.Stable(syslogsByName(ss))
- return ss, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/syslog", i.Service, i.Version)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var ss []*Syslog
+ if err := decodeJSON(&ss, resp.Body); err != nil {
+ return nil, err
+ }
+ sort.Stable(syslogsByName(ss))
+ return ss, nil
}
// CreateSyslogInput is used as input to the CreateSyslog function.
type CreateSyslogInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
-
- Name string `form:"name,omitempty"`
- Address string `form:"address,omitempty"`
- Port uint `form:"port,omitempty"`
- UseTLS Compatibool `form:"use_tls,omitempty"`
- TLSCACert string `form:"tls_ca_cert,omitempty"`
- Token string `form:"token,omitempty"`
- Format string `form:"format,omitempty"`
- ResponseCondition string `form:"response_condition,omitempty"`
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ Name string `form:"name,omitempty"`
+ Address string `form:"address,omitempty"`
+ Port uint `form:"port,omitempty"`
+ UseTLS Compatibool `form:"use_tls,omitempty"`
+ TLSCACert string `form:"tls_ca_cert,omitempty"`
+ Token string `form:"token,omitempty"`
+ Format string `form:"format,omitempty"`
+ ResponseCondition string `form:"response_condition,omitempty"`
}
// CreateSyslog creates a new Fastly syslog.
func (c *Client) CreateSyslog(i *CreateSyslogInput) (*Syslog, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/logging/syslog", i.Service, i.Version)
- resp, err := c.PostForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var s *Syslog
- if err := decodeJSON(&s, resp.Body); err != nil {
- return nil, err
- }
- return s, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/syslog", i.Service, i.Version)
+ resp, err := c.PostForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var s *Syslog
+ if err := decodeJSON(&s, resp.Body); err != nil {
+ return nil, err
+ }
+ return s, nil
}
// GetSyslogInput is used as input to the GetSyslog function.
type GetSyslogInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the syslog to fetch.
- Name string
+ // Name is the name of the syslog to fetch.
+ Name string
}
// GetSyslog gets the syslog configuration with the given parameters.
func (c *Client) GetSyslog(i *GetSyslogInput) (*Syslog, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/logging/syslog/%s", i.Service, i.Version, i.Name)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var s *Syslog
- if err := decodeJSON(&s, resp.Body); err != nil {
- return nil, err
- }
- return s, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/syslog/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var s *Syslog
+ if err := decodeJSON(&s, resp.Body); err != nil {
+ return nil, err
+ }
+ return s, nil
}
// UpdateSyslogInput is used as input to the UpdateSyslog function.
type UpdateSyslogInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
-
- // Name is the name of the syslog to update.
- Name string
-
- NewName string `form:"name,omitempty"`
- Address string `form:"address,omitempty"`
- Port uint `form:"port,omitempty"`
- UseTLS Compatibool `form:"use_tls,omitempty"`
- TLSCACert string `form:"tls_ca_cert,omitempty"`
- Token string `form:"token,omitempty"`
- Format string `form:"format,omitempty"`
- ResponseCondition string `form:"response_condition,omitempty"`
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ // Name is the name of the syslog to update.
+ Name string
+
+ NewName string `form:"name,omitempty"`
+ Address string `form:"address,omitempty"`
+ Port uint `form:"port,omitempty"`
+ UseTLS Compatibool `form:"use_tls,omitempty"`
+ TLSCACert string `form:"tls_ca_cert,omitempty"`
+ Token string `form:"token,omitempty"`
+ Format string `form:"format,omitempty"`
+ ResponseCondition string `form:"response_condition,omitempty"`
}
// UpdateSyslog updates a specific syslog.
func (c *Client) UpdateSyslog(i *UpdateSyslogInput) (*Syslog, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/logging/syslog/%s", i.Service, i.Version, i.Name)
- resp, err := c.PutForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var s *Syslog
- if err := decodeJSON(&s, resp.Body); err != nil {
- return nil, err
- }
- return s, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/syslog/%s", i.Service, i.Version, i.Name)
+ resp, err := c.PutForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var s *Syslog
+ if err := decodeJSON(&s, resp.Body); err != nil {
+ return nil, err
+ }
+ return s, nil
}
// DeleteSyslogInput is the input parameter to DeleteSyslog.
type DeleteSyslogInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the syslog to delete (required).
- Name string
+ // Name is the name of the syslog to delete (required).
+ Name string
}
// DeleteSyslog deletes the named syslog from the given configuration version.
func (c *Client) DeleteSyslog(i *DeleteSyslogInput) error {
- if i.Service == "" {
- return ErrMissingService
- }
-
- if i.Version == "" {
- return ErrMissingVersion
- }
-
- if i.Name == "" {
- return ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/logging/syslog/%s", i.Service, i.Version, i.Name)
- resp, err := c.Delete(path, nil)
- if err != nil {
- return err
- }
-
- var r *statusResp
- if err := decodeJSON(&r, resp.Body); err != nil {
- return err
- }
- if !r.Ok() {
- return fmt.Errorf("Not Ok")
- }
- return nil
+ if i.Service == "" {
+ return ErrMissingService
+ }
+
+ if i.Version == "" {
+ return ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/logging/syslog/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Delete(path, nil)
+ if err != nil {
+ return err
+ }
+
+ var r *statusResp
+ if err := decodeJSON(&r, resp.Body); err != nil {
+ return err
+ }
+ if !r.Ok() {
+ return fmt.Errorf("Not Ok")
+ }
+ return nil
}
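A sketch of creating and then renaming a syslog endpoint with the helpers above (same assumptions; the host name, port, and log format are placeholders).

func exampleSyslog(c *fastly.Client, serviceID, version string) error {
	// Create a plain (non-TLS) syslog endpoint.
	s, err := c.CreateSyslog(&fastly.CreateSyslogInput{
		Service: serviceID,
		Version: version,
		Name:    "example-syslog",
		Address: "logs.example.com",
		Port:    514,
		Format:  "%h %l %u %t \"%r\" %>s %b",
	})
	if err != nil {
		return err
	}

	// Rename it: Name selects the endpoint, NewName is the form field.
	_, err = c.UpdateSyslog(&fastly.UpdateSyslogInput{
		Service: serviceID,
		Version: version,
		Name:    s.Name,
		NewName: "example-syslog-renamed",
	})
	return err
}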
diff --git a/vendor/github.com/sethvargo/go-fastly/vcl.go b/vendor/github.com/sethvargo/go-fastly/vcl.go
index 72805f71ac0d..63be5a65202c 100644
--- a/vendor/github.com/sethvargo/go-fastly/vcl.go
+++ b/vendor/github.com/sethvargo/go-fastly/vcl.go
@@ -1,18 +1,18 @@
package fastly
import (
- "fmt"
- "sort"
+ "fmt"
+ "sort"
)
// VCL represents a response about VCL from the Fastly API.
type VCL struct {
- ServiceID string `mapstructure:"service_id"`
- Version string `mapstructure:"version"`
+ ServiceID string `mapstructure:"service_id"`
+ Version string `mapstructure:"version"`
- Name string `mapstructure:"name"`
- Main bool `mapstructure:"main"`
- Content string `mapstructure:"content"`
+ Name string `mapstructure:"name"`
+ Main bool `mapstructure:"main"`
+ Content string `mapstructure:"content"`
}
// vclsByName is a sortable list of VCLs.
@@ -22,261 +22,261 @@ type vclsByName []*VCL
func (s vclsByName) Len() int { return len(s) }
func (s vclsByName) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s vclsByName) Less(i, j int) bool {
- return s[i].Name < s[j].Name
+ return s[i].Name < s[j].Name
}
// ListVCLsInput is used as input to the ListVCLs function.
type ListVCLsInput struct {
- // Service is the ID of the service (required).
- Service string
+ // Service is the ID of the service (required).
+ Service string
- // Version is the specific configuration version (required).
- Version string
+ // Version is the specific configuration version (required).
+ Version string
}
// ListVCLs returns the list of VCLs for the configuration version.
func (c *Client) ListVCLs(i *ListVCLsInput) ([]*VCL, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/vcl", i.Service, i.Version)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var vcls []*VCL
- if err := decodeJSON(&vcls, resp.Body); err != nil {
- return nil, err
- }
- sort.Stable(vclsByName(vcls))
- return vcls, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/vcl", i.Service, i.Version)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var vcls []*VCL
+ if err := decodeJSON(&vcls, resp.Body); err != nil {
+ return nil, err
+ }
+ sort.Stable(vclsByName(vcls))
+ return vcls, nil
}
// GetVCLInput is used as input to the GetVCL function.
type GetVCLInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the VCL to fetch.
- Name string
+ // Name is the name of the VCL to fetch.
+ Name string
}
// GetVCL gets the VCL configuration with the given parameters.
func (c *Client) GetVCL(i *GetVCLInput) (*VCL, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/vcl/%s", i.Service, i.Version, i.Name)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var vcl *VCL
- if err := decodeJSON(&vcl, resp.Body); err != nil {
- return nil, err
- }
- return vcl, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/vcl/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var vcl *VCL
+ if err := decodeJSON(&vcl, resp.Body); err != nil {
+ return nil, err
+ }
+ return vcl, nil
}
// GetGeneratedVCLInput is used as input to the GetGeneratedVCL function.
type GetGeneratedVCLInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
}
// GetGeneratedVCL gets the VCL configuration with the given parameters.
func (c *Client) GetGeneratedVCL(i *GetGeneratedVCLInput) (*VCL, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/generated_vcl", i.Service, i.Version)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var vcl *VCL
- if err := decodeJSON(&vcl, resp.Body); err != nil {
- return nil, err
- }
- return vcl, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/generated_vcl", i.Service, i.Version)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var vcl *VCL
+ if err := decodeJSON(&vcl, resp.Body); err != nil {
+ return nil, err
+ }
+ return vcl, nil
}
// CreateVCLInput is used as input to the CreateVCL function.
type CreateVCLInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- Name string `form:"name,omitempty"`
- Content string `form:"content,omitempty"`
+ Name string `form:"name,omitempty"`
+ Content string `form:"content,omitempty"`
}
// CreateVCL creates a new Fastly VCL.
func (c *Client) CreateVCL(i *CreateVCLInput) (*VCL, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/vcl", i.Service, i.Version)
- resp, err := c.PostForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var vcl *VCL
- if err := decodeJSON(&vcl, resp.Body); err != nil {
- return nil, err
- }
- return vcl, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/vcl", i.Service, i.Version)
+ resp, err := c.PostForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var vcl *VCL
+ if err := decodeJSON(&vcl, resp.Body); err != nil {
+ return nil, err
+ }
+ return vcl, nil
}
// UpdateVCLInput is used as input to the UpdateVCL function.
type UpdateVCLInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the VCL to update (required).
- Name string
+ // Name is the name of the VCL to update (required).
+ Name string
- NewName string `form:"name,omitempty"`
- Content string `form:"content,omitempty"`
+ NewName string `form:"name,omitempty"`
+ Content string `form:"content,omitempty"`
}
// UpdateVCL updates a specific VCL.
func (c *Client) UpdateVCL(i *UpdateVCLInput) (*VCL, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/vcl/%s", i.Service, i.Version, i.Name)
- resp, err := c.PutForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var vcl *VCL
- if err := decodeJSON(&vcl, resp.Body); err != nil {
- return nil, err
- }
- return vcl, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/vcl/%s", i.Service, i.Version, i.Name)
+ resp, err := c.PutForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var vcl *VCL
+ if err := decodeJSON(&vcl, resp.Body); err != nil {
+ return nil, err
+ }
+ return vcl, nil
}
// ActivateVCLInput is used as input to the ActivateVCL function.
type ActivateVCLInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the VCL to mark as main (required).
- Name string
+ // Name is the name of the VCL to mark as main (required).
+ Name string
}
// ActivateVCL marks the given VCL as the main VCL for the configuration version.
func (c *Client) ActivateVCL(i *ActivateVCLInput) (*VCL, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/vcl/%s/main", i.Service, i.Version, i.Name)
- resp, err := c.Put(path, nil)
- if err != nil {
- return nil, err
- }
-
- var vcl *VCL
- if err := decodeJSON(&vcl, resp.Body); err != nil {
- return nil, err
- }
- return vcl, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/vcl/%s/main", i.Service, i.Version, i.Name)
+ resp, err := c.Put(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var vcl *VCL
+ if err := decodeJSON(&vcl, resp.Body); err != nil {
+ return nil, err
+ }
+ return vcl, nil
}
// DeleteVCLInput is the input parameter to DeleteVCL.
type DeleteVCLInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the VCL to delete (required).
- Name string
+ // Name is the name of the VCL to delete (required).
+ Name string
}
// DeleteVCL deletes the named VCL from the given configuration version.
func (c *Client) DeleteVCL(i *DeleteVCLInput) error {
- if i.Service == "" {
- return ErrMissingService
- }
-
- if i.Version == "" {
- return ErrMissingVersion
- }
-
- if i.Name == "" {
- return ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/vcl/%s", i.Service, i.Version, i.Name)
- resp, err := c.Delete(path, nil)
- if err != nil {
- return err
- }
-
- var r *statusResp
- if err := decodeJSON(&r, resp.Body); err != nil {
- return err
- }
- if !r.Ok() {
- return fmt.Errorf("Not Ok")
- }
- return nil
+ if i.Service == "" {
+ return ErrMissingService
+ }
+
+ if i.Version == "" {
+ return ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/vcl/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Delete(path, nil)
+ if err != nil {
+ return err
+ }
+
+ var r *statusResp
+ if err := decodeJSON(&r, resp.Body); err != nil {
+ return err
+ }
+ if !r.Ok() {
+ return fmt.Errorf("Not Ok")
+ }
+ return nil
}
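A sketch of uploading custom VCL and marking it as the main VCL for a version, using the helpers above (same assumptions; the VCL body is a placeholder).

func exampleVCL(c *fastly.Client, serviceID, version string) error {
	// Upload a VCL file to the (unlocked) configuration version.
	vcl, err := c.CreateVCL(&fastly.CreateVCLInput{
		Service: serviceID,
		Version: version,
		Name:    "main",
		Content: "sub vcl_recv {\n  # custom logic here\n}\n",
	})
	if err != nil {
		return err
	}

	// ActivateVCL marks this VCL as the main VCL for the version.
	_, err = c.ActivateVCL(&fastly.ActivateVCLInput{
		Service: serviceID,
		Version: version,
		Name:    vcl.Name,
	})
	return err
}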
diff --git a/vendor/github.com/sethvargo/go-fastly/version.go b/vendor/github.com/sethvargo/go-fastly/version.go
index 40eb0f251195..8b54c9ceea53 100644
--- a/vendor/github.com/sethvargo/go-fastly/version.go
+++ b/vendor/github.com/sethvargo/go-fastly/version.go
@@ -1,20 +1,20 @@
package fastly
import (
- "fmt"
- "sort"
+ "fmt"
+ "sort"
)
// Version represents a distinct configuration version.
type Version struct {
- Number string `mapstructure:"number"`
- Comment string `mapstructure:"comment"`
- ServiceID string `mapstructure:"service_id"`
- Active bool `mapstructure:"active"`
- Locked bool `mapstructure:"locked"`
- Deployed bool `mapstructure:"deployed"`
- Staging bool `mapstructure:"staging"`
- Testing bool `mapstructure:"testing"`
+ Number string `mapstructure:"number"`
+ Comment string `mapstructure:"comment"`
+ ServiceID string `mapstructure:"service_id"`
+ Active bool `mapstructure:"active"`
+ Locked bool `mapstructure:"locked"`
+ Deployed bool `mapstructure:"deployed"`
+ Staging bool `mapstructure:"staging"`
+ Testing bool `mapstructure:"testing"`
}
// versionsByNumber is a sortable list of versions. This is used by the version
@@ -25,65 +25,65 @@ type versionsByNumber []*Version
func (s versionsByNumber) Len() int { return len(s) }
func (s versionsByNumber) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s versionsByNumber) Less(i, j int) bool {
- return s[i].Number < s[j].Number
+ return s[i].Number < s[j].Number
}
// ListVersionsInput is the input to the ListVersions function.
type ListVersionsInput struct {
- // Service is the ID of the service (required).
- Service string
+ // Service is the ID of the service (required).
+ Service string
}
// ListVersions returns the full list of all versions of the given service.
func (c *Client) ListVersions(i *ListVersionsInput) ([]*Version, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- path := fmt.Sprintf("/service/%s/version", i.Service)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var e []*Version
- if err := decodeJSON(&e, resp.Body); err != nil {
- return nil, err
- }
- sort.Sort(versionsByNumber(e))
-
- return e, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ path := fmt.Sprintf("/service/%s/version", i.Service)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var e []*Version
+ if err := decodeJSON(&e, resp.Body); err != nil {
+ return nil, err
+ }
+ sort.Sort(versionsByNumber(e))
+
+ return e, nil
}
// LatestVersionInput is the input to the LatestVersion function.
type LatestVersionInput struct {
- // Service is the ID of the service (required).
- Service string
+ // Service is the ID of the service (required).
+ Service string
}
// LatestVersion fetches the latest version. If there are no versions, this
// function will return nil (but not an error).
func (c *Client) LatestVersion(i *LatestVersionInput) (*Version, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- list, err := c.ListVersions(&ListVersionsInput{Service: i.Service})
- if err != nil {
- return nil, err
- }
- if len(list) < 1 {
- return nil, nil
- }
-
- e := list[len(list)-1]
- return e, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ list, err := c.ListVersions(&ListVersionsInput{Service: i.Service})
+ if err != nil {
+ return nil, err
+ }
+ if len(list) < 1 {
+ return nil, nil
+ }
+
+ e := list[len(list)-1]
+ return e, nil
}
// CreateVersionInput is the input to the CreateVersion function.
type CreateVersionInput struct {
- // Service is the ID of the service (required).
- Service string
+ // Service is the ID of the service (required).
+ Service string
}
// CreateVersion constructs a new version. There are no request parameters, but
@@ -91,245 +91,245 @@ type CreateVersionInput struct {
// preferred in almost all scenarios, since `Create()` creates a _blank_
// configuration whereas `Clone()` builds off of an existing configuration.
func (c *Client) CreateVersion(i *CreateVersionInput) (*Version, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- path := fmt.Sprintf("/service/%s/version", i.Service)
- resp, err := c.Post(path, nil)
- if err != nil {
- return nil, err
- }
-
- var e *Version
- if err := decodeJSON(&e, resp.Body); err != nil {
- return nil, err
- }
- return e, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ path := fmt.Sprintf("/service/%s/version", i.Service)
+ resp, err := c.Post(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var e *Version
+ if err := decodeJSON(&e, resp.Body); err != nil {
+ return nil, err
+ }
+ return e, nil
}
// GetVersionInput is the input to the GetVersion function.
type GetVersionInput struct {
- // Service is the ID of the service (required).
- Service string
+ // Service is the ID of the service (required).
+ Service string
- // Version is the version number to fetch (required).
- Version string
+ // Version is the version number to fetch (required).
+ Version string
}
// GetVersion fetches a version with the given information.
func (c *Client) GetVersion(i *GetVersionInput) (*Version, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s", i.Service, i.Version)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var e *Version
- if err := decodeJSON(&e, resp.Body); err != nil {
- return nil, err
- }
- return e, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s", i.Service, i.Version)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var e *Version
+ if err := decodeJSON(&e, resp.Body); err != nil {
+ return nil, err
+ }
+ return e, nil
}
// UpdateVersionInput is the input to the UpdateVersion function.
type UpdateVersionInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- Comment string `form:"comment,omitempty"`
+ Comment string `form:"comment,omitempty"`
}
// UpdateVersion updates the given version.
func (c *Client) UpdateVersion(i *UpdateVersionInput) (*Version, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s", i.Service, i.Version)
- resp, err := c.PutForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var e *Version
- if err := decodeJSON(&e, resp.Body); err != nil {
- return nil, err
- }
- return e, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s", i.Service, i.Version)
+ resp, err := c.PutForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var e *Version
+ if err := decodeJSON(&e, resp.Body); err != nil {
+ return nil, err
+ }
+ return e, nil
}
// ActivateVersionInput is the input to the ActivateVersion function.
type ActivateVersionInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
}
// ActivateVersion activates the given version.
func (c *Client) ActivateVersion(i *ActivateVersionInput) (*Version, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/activate", i.Service, i.Version)
- resp, err := c.Put(path, nil)
- if err != nil {
- return nil, err
- }
-
- var e *Version
- if err := decodeJSON(&e, resp.Body); err != nil {
- return nil, err
- }
- return e, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/activate", i.Service, i.Version)
+ resp, err := c.Put(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var e *Version
+ if err := decodeJSON(&e, resp.Body); err != nil {
+ return nil, err
+ }
+ return e, nil
}
// DeactivateVersionInput is the input to the DeactivateVersion function.
type DeactivateVersionInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
}
// DeactivateVersion deactivates the given version.
func (c *Client) DeactivateVersion(i *DeactivateVersionInput) (*Version, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/deactivate", i.Service, i.Version)
- resp, err := c.Put(path, nil)
- if err != nil {
- return nil, err
- }
-
- var e *Version
- if err := decodeJSON(&e, resp.Body); err != nil {
- return nil, err
- }
- return e, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/deactivate", i.Service, i.Version)
+ resp, err := c.Put(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var e *Version
+ if err := decodeJSON(&e, resp.Body); err != nil {
+ return nil, err
+ }
+ return e, nil
}
// CloneVersionInput is the input to the CloneVersion function.
type CloneVersionInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
}
// CloneVersion creates a clone of the given version and returns a new
// configuration version with all the same configuration options, but an
// incremented number.
func (c *Client) CloneVersion(i *CloneVersionInput) (*Version, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/clone", i.Service, i.Version)
- resp, err := c.Put(path, nil)
- if err != nil {
- return nil, err
- }
-
- var e *Version
- if err := decodeJSON(&e, resp.Body); err != nil {
- return nil, err
- }
- return e, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/clone", i.Service, i.Version)
+ resp, err := c.Put(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var e *Version
+ if err := decodeJSON(&e, resp.Body); err != nil {
+ return nil, err
+ }
+ return e, nil
}
// ValidateVersionInput is the input to the ValidateVersion function.
type ValidateVersionInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
}
// ValidateVersion validates if the given version is okay.
func (c *Client) ValidateVersion(i *ValidateVersionInput) (bool, string, error) {
- var msg string
+ var msg string
- if i.Service == "" {
- return false, msg, ErrMissingService
- }
+ if i.Service == "" {
+ return false, msg, ErrMissingService
+ }
- if i.Version == "" {
- return false, msg, ErrMissingVersion
- }
+ if i.Version == "" {
+ return false, msg, ErrMissingVersion
+ }
- path := fmt.Sprintf("/service/%s/version/%s/validate", i.Service, i.Version)
- resp, err := c.Get(path, nil)
- if err != nil {
- return false, msg, err
- }
+ path := fmt.Sprintf("/service/%s/version/%s/validate", i.Service, i.Version)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return false, msg, err
+ }
- var r *statusResp
- if err := decodeJSON(&r, resp.Body); err != nil {
- return false, msg, err
- }
+ var r *statusResp
+ if err := decodeJSON(&r, resp.Body); err != nil {
+ return false, msg, err
+ }
- msg = r.Msg
- return r.Ok(), msg, nil
+ msg = r.Msg
+ return r.Ok(), msg, nil
}
// LockVersionInput is the input to the LockVersion function.
type LockVersionInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
}
// LockVersion locks the specified version.
func (c *Client) LockVersion(i *LockVersionInput) (*Version, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/lock", i.Service, i.Version)
- resp, err := c.Put(path, nil)
- if err != nil {
- return nil, err
- }
-
- var e *Version
- if err := decodeJSON(&e, resp.Body); err != nil {
- return nil, err
- }
- return e, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/lock", i.Service, i.Version)
+ resp, err := c.Put(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var e *Version
+ if err := decodeJSON(&e, resp.Body); err != nil {
+ return nil, err
+ }
+ return e, nil
}
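Tying the version helpers together, a sketch of the clone, validate, and activate flow described in the comments above (same assumptions; the `fmt` import is elided for brevity).

func exampleActivateNewVersion(c *fastly.Client, serviceID string) error {
	// LatestVersion returns nil (without an error) when no versions exist.
	latest, err := c.LatestVersion(&fastly.LatestVersionInput{Service: serviceID})
	if err != nil {
		return err
	}
	if latest == nil {
		return nil
	}

	// Clone rather than Create: the clone carries over the existing configuration.
	clone, err := c.CloneVersion(&fastly.CloneVersionInput{
		Service: serviceID,
		Version: latest.Number,
	})
	if err != nil {
		return err
	}

	// Validate the clone before activating it.
	ok, msg, err := c.ValidateVersion(&fastly.ValidateVersionInput{
		Service: serviceID,
		Version: clone.Number,
	})
	if err != nil {
		return err
	}
	if !ok {
		return fmt.Errorf("version %s failed validation: %s", clone.Number, msg)
	}

	_, err = c.ActivateVersion(&fastly.ActivateVersionInput{
		Service: serviceID,
		Version: clone.Number,
	})
	return err
}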
diff --git a/vendor/github.com/sethvargo/go-fastly/wordpress.go b/vendor/github.com/sethvargo/go-fastly/wordpress.go
index abd20e70037e..661d61348c7f 100644
--- a/vendor/github.com/sethvargo/go-fastly/wordpress.go
+++ b/vendor/github.com/sethvargo/go-fastly/wordpress.go
@@ -1,18 +1,18 @@
package fastly
import (
- "fmt"
- "sort"
+ "fmt"
+ "sort"
)
// Wordpress represents a wordpress response from the Fastly API.
type Wordpress struct {
- ServiceID string `mapstructure:"service_id"`
- Version string `mapstructure:"version"`
+ ServiceID string `mapstructure:"service_id"`
+ Version string `mapstructure:"version"`
- Name string `mapstructure:"name"`
- Path string `mapstructure:"path"`
- Comment string `mapstructure:"comment"`
+ Name string `mapstructure:"name"`
+ Path string `mapstructure:"path"`
+ Comment string `mapstructure:"comment"`
}
// wordpressesByName is a sortable list of wordpresses.
@@ -22,193 +22,193 @@ type wordpressesByName []*Wordpress
func (s wordpressesByName) Len() int { return len(s) }
func (s wordpressesByName) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s wordpressesByName) Less(i, j int) bool {
- return s[i].Name < s[j].Name
+ return s[i].Name < s[j].Name
}
// ListWordpressesInput is used as input to the ListWordpresses function.
type ListWordpressesInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
}
// ListWordpresses returns the list of wordpresses for the configuration version.
func (c *Client) ListWordpresses(i *ListWordpressesInput) ([]*Wordpress, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/wordpress", i.Service, i.Version)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var bs []*Wordpress
- if err := decodeJSON(&bs, resp.Body); err != nil {
- return nil, err
- }
- sort.Stable(wordpressesByName(bs))
- return bs, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/wordpress", i.Service, i.Version)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var bs []*Wordpress
+ if err := decodeJSON(&bs, resp.Body); err != nil {
+ return nil, err
+ }
+ sort.Stable(wordpressesByName(bs))
+ return bs, nil
}
// CreateWordpressInput is used as input to the CreateWordpress function.
type CreateWordpressInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
-
- Name string `form:"name,omitempty"`
- Path string `form:"path,omitempty"`
- Comment string `form:"comment,omitempty"`
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
+
+ Name string `form:"name,omitempty"`
+ Path string `form:"path,omitempty"`
+ Comment string `form:"comment,omitempty"`
}
// CreateWordpress creates a new Fastly wordpress.
func (c *Client) CreateWordpress(i *CreateWordpressInput) (*Wordpress, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/wordpress", i.Service, i.Version)
- resp, err := c.PostForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var b *Wordpress
- if err := decodeJSON(&b, resp.Body); err != nil {
- return nil, err
- }
- return b, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/wordpress", i.Service, i.Version)
+ resp, err := c.PostForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var b *Wordpress
+ if err := decodeJSON(&b, resp.Body); err != nil {
+ return nil, err
+ }
+ return b, nil
}
// GetWordpressInput is used as input to the GetWordpress function.
type GetWordpressInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the wordpress to fetch.
- Name string
+ // Name is the name of the wordpress to fetch.
+ Name string
}
// GetWordpress gets the wordpress configuration with the given parameters.
func (c *Client) GetWordpress(i *GetWordpressInput) (*Wordpress, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/wordpress/%s", i.Service, i.Version, i.Name)
- resp, err := c.Get(path, nil)
- if err != nil {
- return nil, err
- }
-
- var b *Wordpress
- if err := decodeJSON(&b, resp.Body); err != nil {
- return nil, err
- }
- return b, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/wordpress/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Get(path, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var b *Wordpress
+ if err := decodeJSON(&b, resp.Body); err != nil {
+ return nil, err
+ }
+ return b, nil
}
// UpdateWordpressInput is used as input to the UpdateWordpress function.
type UpdateWordpressInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the wordpress to update.
- Name string
+ // Name is the name of the wordpress to update.
+ Name string
- NewName string `form:"name,omitempty"`
- Path string `form:"path,omitempty"`
- Comment string `form:"comment,omitempty"`
+ NewName string `form:"name,omitempty"`
+ Path string `form:"path,omitempty"`
+ Comment string `form:"comment,omitempty"`
}
// UpdateWordpress updates a specific wordpress.
func (c *Client) UpdateWordpress(i *UpdateWordpressInput) (*Wordpress, error) {
- if i.Service == "" {
- return nil, ErrMissingService
- }
-
- if i.Version == "" {
- return nil, ErrMissingVersion
- }
-
- if i.Name == "" {
- return nil, ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/wordpress/%s", i.Service, i.Version, i.Name)
- resp, err := c.PutForm(path, i, nil)
- if err != nil {
- return nil, err
- }
-
- var b *Wordpress
- if err := decodeJSON(&b, resp.Body); err != nil {
- return nil, err
- }
- return b, nil
+ if i.Service == "" {
+ return nil, ErrMissingService
+ }
+
+ if i.Version == "" {
+ return nil, ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return nil, ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/wordpress/%s", i.Service, i.Version, i.Name)
+ resp, err := c.PutForm(path, i, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ var b *Wordpress
+ if err := decodeJSON(&b, resp.Body); err != nil {
+ return nil, err
+ }
+ return b, nil
}
// DeleteWordpressInput is the input parameter to DeleteWordpress.
type DeleteWordpressInput struct {
- // Service is the ID of the service. Version is the specific configuration
- // version. Both fields are required.
- Service string
- Version string
+ // Service is the ID of the service. Version is the specific configuration
+ // version. Both fields are required.
+ Service string
+ Version string
- // Name is the name of the wordpress to delete (required).
- Name string
+ // Name is the name of the wordpress to delete (required).
+ Name string
}
// DeleteWordpress deletes the given wordpress version.
func (c *Client) DeleteWordpress(i *DeleteWordpressInput) error {
- if i.Service == "" {
- return ErrMissingService
- }
-
- if i.Version == "" {
- return ErrMissingVersion
- }
-
- if i.Name == "" {
- return ErrMissingName
- }
-
- path := fmt.Sprintf("/service/%s/version/%s/wordpress/%s", i.Service, i.Version, i.Name)
- resp, err := c.Delete(path, nil)
- if err != nil {
- return err
- }
-
- var r *statusResp
- if err := decodeJSON(&r, resp.Body); err != nil {
- return err
- }
- if !r.Ok() {
- return fmt.Errorf("Not Ok")
- }
- return nil
+ if i.Service == "" {
+ return ErrMissingService
+ }
+
+ if i.Version == "" {
+ return ErrMissingVersion
+ }
+
+ if i.Name == "" {
+ return ErrMissingName
+ }
+
+ path := fmt.Sprintf("/service/%s/version/%s/wordpress/%s", i.Service, i.Version, i.Name)
+ resp, err := c.Delete(path, nil)
+ if err != nil {
+ return err
+ }
+
+ var r *statusResp
+ if err := decodeJSON(&r, resp.Body); err != nil {
+ return err
+ }
+ if !r.Ok() {
+ return fmt.Errorf("Not Ok")
+ }
+ return nil
}
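A short usage sketch for the wordpress helpers above, again assuming an existing *fastly.Client (imports: log plus the fastly package); the object name and path are placeholders:

func addAndListWordpress(client *fastly.Client, serviceID, version string) error {
	// Create a wordpress object on the given configuration version.
	if _, err := client.CreateWordpress(&fastly.CreateWordpressInput{
		Service: serviceID,
		Version: version,
		Name:    "example-wordpress",
		Path:    "/blog",
	}); err != nil {
		return err
	}

	// List the objects back; ListWordpresses sorts results by name.
	wps, err := client.ListWordpresses(&fastly.ListWordpressesInput{
		Service: serviceID,
		Version: version,
	})
	if err != nil {
		return err
	}
	for _, wp := range wps {
		log.Printf("wordpress %q serves %s", wp.Name, wp.Path)
	}
	return nil
}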
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/APIDiscoveryService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/APIDiscoveryService.go
index 0802f3351db6..490d36d650e6 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/APIDiscoveryService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/APIDiscoveryService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/AccountService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/AccountService.go
index 3c55ac0a55b2..35ef09546e2a 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/AccountService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/AccountService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -894,7 +894,7 @@ func (s *AccountService) NewLockAccountParams(account string, domainid string) *
return p
}
-// Locks an account
+// This deprecated function locks an account. Use the DisableAccount API instead.
func (s *AccountService) LockAccount(p *LockAccountParams) (*LockAccountResponse, error) {
resp, err := s.cs.newRequest("lockAccount", p.toURLValues())
if err != nil {
@@ -1130,12 +1130,18 @@ func (s *AccountService) NewListAccountsParams() *ListAccountsParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *AccountService) GetAccountID(name string) (string, error) {
+func (s *AccountService) GetAccountID(name string, opts ...OptionFunc) (string, error) {
p := &ListAccountsParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListAccounts(p)
if err != nil {
return "", err
@@ -1160,13 +1166,13 @@ func (s *AccountService) GetAccountID(name string) (string, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *AccountService) GetAccountByName(name string) (*Account, int, error) {
- id, err := s.GetAccountID(name)
+func (s *AccountService) GetAccountByName(name string, opts ...OptionFunc) (*Account, int, error) {
+ id, err := s.GetAccountID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetAccountByID(id)
+ r, count, err := s.GetAccountByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -1174,12 +1180,18 @@ func (s *AccountService) GetAccountByName(name string) (*Account, int, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *AccountService) GetAccountByID(id string) (*Account, int, error) {
+func (s *AccountService) GetAccountByID(id string, opts ...OptionFunc) (*Account, int, error) {
p := &ListAccountsParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListAccounts(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -1512,7 +1524,7 @@ func (s *AccountService) NewAddAccountToProjectParams(projectid string) *AddAcco
return p
}
-// Adds acoount to a project
+// Adds account to a project
func (s *AccountService) AddAccountToProject(p *AddAccountToProjectParams) (*AddAccountToProjectResponse, error) {
resp, err := s.cs.newRequest("addAccountToProject", p.toURLValues())
if err != nil {
@@ -1716,28 +1728,24 @@ func (s *AccountService) NewListProjectAccountsParams(projectid string) *ListPro
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *AccountService) GetProjectAccountID(keyword string, projectid string) (string, error) {
+func (s *AccountService) GetProjectAccountID(keyword string, projectid string, opts ...OptionFunc) (string, error) {
p := &ListProjectAccountsParams{}
p.p = make(map[string]interface{})
p.p["keyword"] = keyword
p.p["projectid"] = projectid
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListProjectAccounts(p)
if err != nil {
return "", err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListProjectAccounts(p)
- if err != nil {
- return "", err
- }
- }
-
if l.Count == 0 {
return "", fmt.Errorf("No match found for %s: %+v", keyword, l)
}
@@ -1831,3 +1839,65 @@ type ProjectAccount struct {
Vpclimit string `json:"vpclimit,omitempty"`
Vpctotal int64 `json:"vpctotal,omitempty"`
}
+
+type GetSolidFireAccountIdParams struct {
+ p map[string]interface{}
+}
+
+func (p *GetSolidFireAccountIdParams) toURLValues() url.Values {
+ u := url.Values{}
+ if p.p == nil {
+ return u
+ }
+ if v, found := p.p["accountid"]; found {
+ u.Set("accountid", v.(string))
+ }
+ if v, found := p.p["storageid"]; found {
+ u.Set("storageid", v.(string))
+ }
+ return u
+}
+
+func (p *GetSolidFireAccountIdParams) SetAccountid(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["accountid"] = v
+ return
+}
+
+func (p *GetSolidFireAccountIdParams) SetStorageid(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["storageid"] = v
+ return
+}
+
+// You should always use this function to get a new GetSolidFireAccountIdParams instance,
+// as then you are sure you have configured all required params
+func (s *AccountService) NewGetSolidFireAccountIdParams(accountid string, storageid string) *GetSolidFireAccountIdParams {
+ p := &GetSolidFireAccountIdParams{}
+ p.p = make(map[string]interface{})
+ p.p["accountid"] = accountid
+ p.p["storageid"] = storageid
+ return p
+}
+
+// Get SolidFire Account ID
+func (s *AccountService) GetSolidFireAccountId(p *GetSolidFireAccountIdParams) (*GetSolidFireAccountIdResponse, error) {
+ resp, err := s.cs.newRequest("getSolidFireAccountId", p.toURLValues())
+ if err != nil {
+ return nil, err
+ }
+
+ var r GetSolidFireAccountIdResponse
+ if err := json.Unmarshal(resp, &r); err != nil {
+ return nil, err
+ }
+ return &r, nil
+}
+
+type GetSolidFireAccountIdResponse struct {
+ SolidFireAccountId int64 `json:"solidFireAccountId,omitempty"`
+}
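The AccountService hunks above switch the courtesy lookup helpers to accept variadic OptionFunc values, which are applied to the list params before the API call (the old implicit projectid=-1 retry is dropped). A hedged sketch of the calling pattern; it assumes the generated client exposes the service as cs.Account, as elsewhere in the package, and simply forwards whatever options the caller supplies (imports: fmt plus the cloudstack package):

// lookupAccount resolves an account by name, forwarding any OptionFunc
// values (for example a project-scoping option) to the underlying
// listAccounts call.
func lookupAccount(cs *cloudstack.CloudStackClient, name string, opts ...cloudstack.OptionFunc) (*cloudstack.Account, error) {
	acct, count, err := cs.Account.GetAccountByName(name, opts...)
	if err != nil {
		return nil, fmt.Errorf("looking up account %q (matches %d): %v", name, count, err)
	}
	return acct, nil
}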
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/AddressService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/AddressService.go
index fb5c586f7988..9ffc6d9b81db 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/AddressService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/AddressService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -257,7 +257,7 @@ func (s *AddressService) NewDisassociateIpAddressParams(id string) *Disassociate
return p
}
-// Disassociates an ip address from the account.
+// Disassociates an IP address from the account.
func (s *AddressService) DisassociateIpAddress(p *DisassociateIpAddressParams) (*DisassociateIpAddressResponse, error) {
resp, err := s.cs.newRequest("disassociateIpAddress", p.toURLValues())
if err != nil {
@@ -365,6 +365,9 @@ func (p *ListPublicIpAddressesParams) toURLValues() url.Values {
if v, found := p.p["projectid"]; found {
u.Set("projectid", v.(string))
}
+ if v, found := p.p["state"]; found {
+ u.Set("state", v.(string))
+ }
if v, found := p.p["tags"]; found {
i := 0
for k, vv := range v.(map[string]string) {
@@ -529,6 +532,14 @@ func (p *ListPublicIpAddressesParams) SetProjectid(v string) {
return
}
+func (p *ListPublicIpAddressesParams) SetState(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["state"] = v
+ return
+}
+
func (p *ListPublicIpAddressesParams) SetTags(v map[string]string) {
if p.p == nil {
p.p = make(map[string]interface{})
@@ -570,12 +581,18 @@ func (s *AddressService) NewListPublicIpAddressesParams() *ListPublicIpAddresses
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *AddressService) GetPublicIpAddressByID(id string) (*PublicIpAddress, int, error) {
+func (s *AddressService) GetPublicIpAddressByID(id string, opts ...OptionFunc) (*PublicIpAddress, int, error) {
p := &ListPublicIpAddressesParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListPublicIpAddresses(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -586,21 +603,6 @@ func (s *AddressService) GetPublicIpAddressByID(id string) (*PublicIpAddress, in
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListPublicIpAddresses(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
@@ -729,7 +731,7 @@ func (s *AddressService) NewUpdateIpAddressParams(id string) *UpdateIpAddressPar
return p
}
-// Updates an ip address
+// Updates an IP address
func (s *AddressService) UpdateIpAddress(p *UpdateIpAddressParams) (*UpdateIpAddressResponse, error) {
resp, err := s.cs.newRequest("updateIpAddress", p.toURLValues())
if err != nil {
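One small functional addition in AddressService above is the new state filter on listPublicIpAddresses. A minimal, hedged sketch; cs is assumed to be a *cloudstack.CloudStackClient exposing the service as cs.Address, and "Allocated" is an example state value, not confirmed by this diff:

func listAllocatedIPs(cs *cloudstack.CloudStackClient) error {
	p := cs.Address.NewListPublicIpAddressesParams()
	// SetState is new in this revision.
	p.SetState("Allocated")
	_, err := cs.Address.ListPublicIpAddresses(p)
	return err
}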
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/AffinityGroupService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/AffinityGroupService.go
index fd36305905d4..5511cbcad644 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/AffinityGroupService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/AffinityGroupService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -45,6 +45,9 @@ func (p *CreateAffinityGroupParams) toURLValues() url.Values {
if v, found := p.p["name"]; found {
u.Set("name", v.(string))
}
+ if v, found := p.p["projectid"]; found {
+ u.Set("projectid", v.(string))
+ }
if v, found := p.p["type"]; found {
u.Set("type", v.(string))
}
@@ -83,6 +86,14 @@ func (p *CreateAffinityGroupParams) SetName(v string) {
return
}
+func (p *CreateAffinityGroupParams) SetProjectid(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["projectid"] = v
+ return
+}
+
func (p *CreateAffinityGroupParams) SetType(v string) {
if p.p == nil {
p.p = make(map[string]interface{})
@@ -143,6 +154,8 @@ type CreateAffinityGroupResponse struct {
Domainid string `json:"domainid,omitempty"`
Id string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
Type string `json:"type,omitempty"`
VirtualmachineIds []string `json:"virtualmachineIds,omitempty"`
}
@@ -168,6 +181,9 @@ func (p *DeleteAffinityGroupParams) toURLValues() url.Values {
if v, found := p.p["name"]; found {
u.Set("name", v.(string))
}
+ if v, found := p.p["projectid"]; found {
+ u.Set("projectid", v.(string))
+ }
return u
}
@@ -203,6 +219,14 @@ func (p *DeleteAffinityGroupParams) SetName(v string) {
return
}
+func (p *DeleteAffinityGroupParams) SetProjectid(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["projectid"] = v
+ return
+}
+
// You should always use this function to get a new DeleteAffinityGroupParams instance,
// as then you are sure you have configured all required params
func (s *AffinityGroupService) NewDeleteAffinityGroupParams() *DeleteAffinityGroupParams {
@@ -286,6 +310,9 @@ func (p *ListAffinityGroupsParams) toURLValues() url.Values {
vv := strconv.Itoa(v.(int))
u.Set("pagesize", vv)
}
+ if v, found := p.p["projectid"]; found {
+ u.Set("projectid", v.(string))
+ }
if v, found := p.p["type"]; found {
u.Set("type", v.(string))
}
@@ -367,6 +394,14 @@ func (p *ListAffinityGroupsParams) SetPagesize(v int) {
return
}
+func (p *ListAffinityGroupsParams) SetProjectid(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["projectid"] = v
+ return
+}
+
func (p *ListAffinityGroupsParams) SetType(v string) {
if p.p == nil {
p.p = make(map[string]interface{})
@@ -392,12 +427,18 @@ func (s *AffinityGroupService) NewListAffinityGroupsParams() *ListAffinityGroups
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *AffinityGroupService) GetAffinityGroupID(name string) (string, error) {
+func (s *AffinityGroupService) GetAffinityGroupID(name string, opts ...OptionFunc) (string, error) {
p := &ListAffinityGroupsParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListAffinityGroups(p)
if err != nil {
return "", err
@@ -422,13 +463,13 @@ func (s *AffinityGroupService) GetAffinityGroupID(name string) (string, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *AffinityGroupService) GetAffinityGroupByName(name string) (*AffinityGroup, int, error) {
- id, err := s.GetAffinityGroupID(name)
+func (s *AffinityGroupService) GetAffinityGroupByName(name string, opts ...OptionFunc) (*AffinityGroup, int, error) {
+ id, err := s.GetAffinityGroupID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetAffinityGroupByID(id)
+ r, count, err := s.GetAffinityGroupByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -436,12 +477,18 @@ func (s *AffinityGroupService) GetAffinityGroupByName(name string) (*AffinityGro
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *AffinityGroupService) GetAffinityGroupByID(id string) (*AffinityGroup, int, error) {
+func (s *AffinityGroupService) GetAffinityGroupByID(id string, opts ...OptionFunc) (*AffinityGroup, int, error) {
p := &ListAffinityGroupsParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListAffinityGroups(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -488,6 +535,8 @@ type AffinityGroup struct {
Domainid string `json:"domainid,omitempty"`
Id string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
Type string `json:"type,omitempty"`
VirtualmachineIds []string `json:"virtualmachineIds,omitempty"`
}
@@ -592,6 +641,8 @@ type UpdateVMAffinityGroupResponse struct {
Domainid string `json:"domainid,omitempty"`
Id string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
Type string `json:"type,omitempty"`
VirtualmachineIds []string `json:"virtualmachineIds,omitempty"`
} `json:"affinitygroup,omitempty"`
@@ -728,6 +779,8 @@ type UpdateVMAffinityGroupResponse struct {
Resourcetype string `json:"resourcetype,omitempty"`
Value string `json:"value,omitempty"`
} `json:"tags,omitempty"`
+ Virtualmachinecount int `json:"virtualmachinecount,omitempty"`
+ Virtualmachineids []string `json:"virtualmachineids,omitempty"`
} `json:"securitygroup,omitempty"`
Serviceofferingid string `json:"serviceofferingid,omitempty"`
Serviceofferingname string `json:"serviceofferingname,omitempty"`
@@ -748,6 +801,8 @@ type UpdateVMAffinityGroupResponse struct {
Templatedisplaytext string `json:"templatedisplaytext,omitempty"`
Templateid string `json:"templateid,omitempty"`
Templatename string `json:"templatename,omitempty"`
+ Userid string `json:"userid,omitempty"`
+ Username string `json:"username,omitempty"`
Vgpu string `json:"vgpu,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
Zonename string `json:"zonename,omitempty"`
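The affinity group hunks add project support (projectid on create, delete, and list, plus the Project/Projectid response fields). A hedged example of a project-scoped listing, assuming the client exposes the service as cs.AffinityGroup:

func listProjectAffinityGroups(cs *cloudstack.CloudStackClient, projectID string) error {
	p := cs.AffinityGroup.NewListAffinityGroupsParams()
	// Project scoping is new in this revision.
	p.SetProjectid(projectID)
	_, err := cs.AffinityGroup.ListAffinityGroups(p)
	return err
}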
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/AlertService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/AlertService.go
index 2cfd4b291e51..a91f17cdf5da 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/AlertService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/AlertService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -113,12 +113,18 @@ func (s *AlertService) NewListAlertsParams() *ListAlertsParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *AlertService) GetAlertID(name string) (string, error) {
+func (s *AlertService) GetAlertID(name string, opts ...OptionFunc) (string, error) {
p := &ListAlertsParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListAlerts(p)
if err != nil {
return "", err
@@ -143,13 +149,13 @@ func (s *AlertService) GetAlertID(name string) (string, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *AlertService) GetAlertByName(name string) (*Alert, int, error) {
- id, err := s.GetAlertID(name)
+func (s *AlertService) GetAlertByName(name string, opts ...OptionFunc) (*Alert, int, error) {
+ id, err := s.GetAlertID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetAlertByID(id)
+ r, count, err := s.GetAlertByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -157,12 +163,18 @@ func (s *AlertService) GetAlertByName(name string) (*Alert, int, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *AlertService) GetAlertByID(id string) (*Alert, int, error) {
+func (s *AlertService) GetAlertByID(id string, opts ...OptionFunc) (*Alert, int, error) {
p := &ListAlertsParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListAlerts(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/AsyncjobService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/AsyncjobService.go
index ca21e7f04317..b3e1fd89a91d 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/AsyncjobService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/AsyncjobService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/AuthenticationService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/AuthenticationService.go
new file mode 100644
index 000000000000..b97e70cfac17
--- /dev/null
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/AuthenticationService.go
@@ -0,0 +1,156 @@
+//
+// Copyright 2016, Sander van Harmelen
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+
+package cloudstack
+
+import (
+ "encoding/json"
+ "net/url"
+ "strconv"
+)
+
+type LoginParams struct {
+ p map[string]interface{}
+}
+
+func (p *LoginParams) toURLValues() url.Values {
+ u := url.Values{}
+ if p.p == nil {
+ return u
+ }
+ if v, found := p.p["domain"]; found {
+ u.Set("domain", v.(string))
+ }
+ if v, found := p.p["domainId"]; found {
+ vv := strconv.FormatInt(v.(int64), 10)
+ u.Set("domainId", vv)
+ }
+ if v, found := p.p["password"]; found {
+ u.Set("password", v.(string))
+ }
+ if v, found := p.p["username"]; found {
+ u.Set("username", v.(string))
+ }
+ return u
+}
+
+func (p *LoginParams) SetDomain(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["domain"] = v
+ return
+}
+
+func (p *LoginParams) SetDomainId(v int64) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["domainId"] = v
+ return
+}
+
+func (p *LoginParams) SetPassword(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["password"] = v
+ return
+}
+
+func (p *LoginParams) SetUsername(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["username"] = v
+ return
+}
+
+// You should always use this function to get a new LoginParams instance,
+// as then you are sure you have configured all required params
+func (s *AuthenticationService) NewLoginParams(password string, username string) *LoginParams {
+ p := &LoginParams{}
+ p.p = make(map[string]interface{})
+ p.p["password"] = password
+ p.p["username"] = username
+ return p
+}
+
+// Logs a user into the CloudStack. A successful login attempt will generate a JSESSIONID cookie value that can be passed in subsequent Query command calls until the "logout" command has been issued or the session has expired.
+func (s *AuthenticationService) Login(p *LoginParams) (*LoginResponse, error) {
+ resp, err := s.cs.newRequest("login", p.toURLValues())
+ if err != nil {
+ return nil, err
+ }
+
+ var r LoginResponse
+ if err := json.Unmarshal(resp, &r); err != nil {
+ return nil, err
+ }
+ return &r, nil
+}
+
+type LoginResponse struct {
+ Account string `json:"account,omitempty"`
+ Domainid string `json:"domainid,omitempty"`
+ Firstname string `json:"firstname,omitempty"`
+ Lastname string `json:"lastname,omitempty"`
+ Registered string `json:"registered,omitempty"`
+ Sessionkey string `json:"sessionkey,omitempty"`
+ Timeout int `json:"timeout,omitempty"`
+ Timezone string `json:"timezone,omitempty"`
+ Type string `json:"type,omitempty"`
+ Userid string `json:"userid,omitempty"`
+ Username string `json:"username,omitempty"`
+}
+
+type LogoutParams struct {
+ p map[string]interface{}
+}
+
+func (p *LogoutParams) toURLValues() url.Values {
+ u := url.Values{}
+ if p.p == nil {
+ return u
+ }
+ return u
+}
+
+// You should always use this function to get a new LogoutParams instance,
+// as then you are sure you have configured all required params
+func (s *AuthenticationService) NewLogoutParams() *LogoutParams {
+ p := &LogoutParams{}
+ p.p = make(map[string]interface{})
+ return p
+}
+
+// Logs out the user
+func (s *AuthenticationService) Logout(p *LogoutParams) (*LogoutResponse, error) {
+ resp, err := s.cs.newRequest("logout", p.toURLValues())
+ if err != nil {
+ return nil, err
+ }
+
+ var r LogoutResponse
+ if err := json.Unmarshal(resp, &r); err != nil {
+ return nil, err
+ }
+ return &r, nil
+}
+
+type LogoutResponse struct {
+ Description string `json:"description,omitempty"`
+}
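The new AuthenticationService wraps the session-based login and logout commands. A minimal sketch of the flow; it assumes the generated client exposes the service as cs.Authentication, consistent with the other services in the package:

func withSession(cs *cloudstack.CloudStackClient, username, password string) error {
	// Note the generated constructor takes password before username.
	r, err := cs.Authentication.Login(cs.Authentication.NewLoginParams(password, username))
	if err != nil {
		return err
	}
	// r.Sessionkey stays valid until Logout is called or the session expires.
	_ = r.Sessionkey

	_, err = cs.Authentication.Logout(cs.Authentication.NewLogoutParams())
	return err
}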
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/AutoScaleService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/AutoScaleService.go
index 65a1d2e05b31..eaec8dd038e4 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/AutoScaleService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/AutoScaleService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -1134,12 +1134,18 @@ func (s *AutoScaleService) NewListCountersParams() *ListCountersParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *AutoScaleService) GetCounterID(name string) (string, error) {
+func (s *AutoScaleService) GetCounterID(name string, opts ...OptionFunc) (string, error) {
p := &ListCountersParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListCounters(p)
if err != nil {
return "", err
@@ -1164,13 +1170,13 @@ func (s *AutoScaleService) GetCounterID(name string) (string, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *AutoScaleService) GetCounterByName(name string) (*Counter, int, error) {
- id, err := s.GetCounterID(name)
+func (s *AutoScaleService) GetCounterByName(name string, opts ...OptionFunc) (*Counter, int, error) {
+ id, err := s.GetCounterID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetCounterByID(id)
+ r, count, err := s.GetCounterByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -1178,12 +1184,18 @@ func (s *AutoScaleService) GetCounterByName(name string) (*Counter, int, error)
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *AutoScaleService) GetCounterByID(id string) (*Counter, int, error) {
+func (s *AutoScaleService) GetCounterByID(id string, opts ...OptionFunc) (*Counter, int, error) {
p := &ListCountersParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListCounters(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -1366,12 +1378,18 @@ func (s *AutoScaleService) NewListConditionsParams() *ListConditionsParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *AutoScaleService) GetConditionByID(id string) (*Condition, int, error) {
+func (s *AutoScaleService) GetConditionByID(id string, opts ...OptionFunc) (*Condition, int, error) {
p := &ListConditionsParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListConditions(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -1570,12 +1588,18 @@ func (s *AutoScaleService) NewListAutoScalePoliciesParams() *ListAutoScalePolici
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *AutoScaleService) GetAutoScalePolicyByID(id string) (*AutoScalePolicy, int, error) {
+func (s *AutoScaleService) GetAutoScalePolicyByID(id string, opts ...OptionFunc) (*AutoScalePolicy, int, error) {
p := &ListAutoScalePoliciesParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListAutoScalePolicies(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -1808,12 +1832,18 @@ func (s *AutoScaleService) NewListAutoScaleVmProfilesParams() *ListAutoScaleVmPr
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *AutoScaleService) GetAutoScaleVmProfileByID(id string) (*AutoScaleVmProfile, int, error) {
+func (s *AutoScaleService) GetAutoScaleVmProfileByID(id string, opts ...OptionFunc) (*AutoScaleVmProfile, int, error) {
p := &ListAutoScaleVmProfilesParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListAutoScaleVmProfiles(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -1824,21 +1854,6 @@ func (s *AutoScaleService) GetAutoScaleVmProfileByID(id string) (*AutoScaleVmPro
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListAutoScaleVmProfiles(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
@@ -2064,12 +2079,18 @@ func (s *AutoScaleService) NewListAutoScaleVmGroupsParams() *ListAutoScaleVmGrou
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *AutoScaleService) GetAutoScaleVmGroupByID(id string) (*AutoScaleVmGroup, int, error) {
+func (s *AutoScaleService) GetAutoScaleVmGroupByID(id string, opts ...OptionFunc) (*AutoScaleVmGroup, int, error) {
p := &ListAutoScaleVmGroupsParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListAutoScaleVmGroups(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -2080,21 +2101,6 @@ func (s *AutoScaleService) GetAutoScaleVmGroupByID(id string) (*AutoScaleVmGroup
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListAutoScaleVmGroups(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/BaremetalService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/BaremetalService.go
index a37aa1c608d8..c1b7eaaba8f1 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/BaremetalService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/BaremetalService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -504,6 +504,9 @@ func (p *ListBaremetalDhcpParams) toURLValues() url.Values {
vv := strconv.Itoa(v.(int))
u.Set("pagesize", vv)
}
+ if v, found := p.p["physicalnetworkid"]; found {
+ u.Set("physicalnetworkid", v.(string))
+ }
return u
}
@@ -547,11 +550,20 @@ func (p *ListBaremetalDhcpParams) SetPagesize(v int) {
return
}
+func (p *ListBaremetalDhcpParams) SetPhysicalnetworkid(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["physicalnetworkid"] = v
+ return
+}
+
// You should always use this function to get a new ListBaremetalDhcpParams instance,
// as then you are sure you have configured all required params
-func (s *BaremetalService) NewListBaremetalDhcpParams() *ListBaremetalDhcpParams {
+func (s *BaremetalService) NewListBaremetalDhcpParams(physicalnetworkid string) *ListBaremetalDhcpParams {
p := &ListBaremetalDhcpParams{}
p.p = make(map[string]interface{})
+ p.p["physicalnetworkid"] = physicalnetworkid
return p
}
@@ -606,6 +618,9 @@ func (p *ListBaremetalPxeServersParams) toURLValues() url.Values {
vv := strconv.Itoa(v.(int))
u.Set("pagesize", vv)
}
+ if v, found := p.p["physicalnetworkid"]; found {
+ u.Set("physicalnetworkid", v.(string))
+ }
return u
}
@@ -641,11 +656,20 @@ func (p *ListBaremetalPxeServersParams) SetPagesize(v int) {
return
}
+func (p *ListBaremetalPxeServersParams) SetPhysicalnetworkid(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["physicalnetworkid"] = v
+ return
+}
+
// You should always use this function to get a new ListBaremetalPxeServersParams instance,
// as then you are sure you have configured all required params
-func (s *BaremetalService) NewListBaremetalPxeServersParams() *ListBaremetalPxeServersParams {
+func (s *BaremetalService) NewListBaremetalPxeServersParams(physicalnetworkid string) *ListBaremetalPxeServersParams {
p := &ListBaremetalPxeServersParams{}
p.p = make(map[string]interface{})
+ p.p["physicalnetworkid"] = physicalnetworkid
return p
}
@@ -674,3 +698,221 @@ type BaremetalPxeServer struct {
Provider string `json:"provider,omitempty"`
Url string `json:"url,omitempty"`
}
+
+type AddBaremetalRctParams struct {
+ p map[string]interface{}
+}
+
+func (p *AddBaremetalRctParams) toURLValues() url.Values {
+ u := url.Values{}
+ if p.p == nil {
+ return u
+ }
+ if v, found := p.p["baremetalrcturl"]; found {
+ u.Set("baremetalrcturl", v.(string))
+ }
+ return u
+}
+
+func (p *AddBaremetalRctParams) SetBaremetalrcturl(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["baremetalrcturl"] = v
+ return
+}
+
+// You should always use this function to get a new AddBaremetalRctParams instance,
+// as then you are sure you have configured all required params
+func (s *BaremetalService) NewAddBaremetalRctParams(baremetalrcturl string) *AddBaremetalRctParams {
+ p := &AddBaremetalRctParams{}
+ p.p = make(map[string]interface{})
+ p.p["baremetalrcturl"] = baremetalrcturl
+ return p
+}
+
+// adds baremetal rack configuration text
+func (s *BaremetalService) AddBaremetalRct(p *AddBaremetalRctParams) (*AddBaremetalRctResponse, error) {
+ resp, err := s.cs.newRequest("addBaremetalRct", p.toURLValues())
+ if err != nil {
+ return nil, err
+ }
+
+ var r AddBaremetalRctResponse
+ if err := json.Unmarshal(resp, &r); err != nil {
+ return nil, err
+ }
+
+	// If we have an async client, we need to wait for the async result
+ if s.cs.async {
+ b, err := s.cs.GetAsyncJobResult(r.JobID, s.cs.timeout)
+ if err != nil {
+ if err == AsyncTimeoutErr {
+ return &r, err
+ }
+ return nil, err
+ }
+
+ b, err = getRawValue(b)
+ if err != nil {
+ return nil, err
+ }
+
+ if err := json.Unmarshal(b, &r); err != nil {
+ return nil, err
+ }
+ }
+ return &r, nil
+}
+
+type AddBaremetalRctResponse struct {
+ JobID string `json:"jobid,omitempty"`
+ Id string `json:"id,omitempty"`
+ Url string `json:"url,omitempty"`
+}
+
+type DeleteBaremetalRctParams struct {
+ p map[string]interface{}
+}
+
+func (p *DeleteBaremetalRctParams) toURLValues() url.Values {
+ u := url.Values{}
+ if p.p == nil {
+ return u
+ }
+ if v, found := p.p["id"]; found {
+ u.Set("id", v.(string))
+ }
+ return u
+}
+
+func (p *DeleteBaremetalRctParams) SetId(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["id"] = v
+ return
+}
+
+// You should always use this function to get a new DeleteBaremetalRctParams instance,
+// as then you are sure you have configured all required params
+func (s *BaremetalService) NewDeleteBaremetalRctParams(id string) *DeleteBaremetalRctParams {
+ p := &DeleteBaremetalRctParams{}
+ p.p = make(map[string]interface{})
+ p.p["id"] = id
+ return p
+}
+
+// deletes baremetal rack configuration text
+func (s *BaremetalService) DeleteBaremetalRct(p *DeleteBaremetalRctParams) (*DeleteBaremetalRctResponse, error) {
+ resp, err := s.cs.newRequest("deleteBaremetalRct", p.toURLValues())
+ if err != nil {
+ return nil, err
+ }
+
+ var r DeleteBaremetalRctResponse
+ if err := json.Unmarshal(resp, &r); err != nil {
+ return nil, err
+ }
+
+	// If we have an async client, we need to wait for the async result
+ if s.cs.async {
+ b, err := s.cs.GetAsyncJobResult(r.JobID, s.cs.timeout)
+ if err != nil {
+ if err == AsyncTimeoutErr {
+ return &r, err
+ }
+ return nil, err
+ }
+
+ if err := json.Unmarshal(b, &r); err != nil {
+ return nil, err
+ }
+ }
+ return &r, nil
+}
+
+type DeleteBaremetalRctResponse struct {
+ JobID string `json:"jobid,omitempty"`
+ Displaytext string `json:"displaytext,omitempty"`
+ Success bool `json:"success,omitempty"`
+}
+
+type ListBaremetalRctParams struct {
+ p map[string]interface{}
+}
+
+func (p *ListBaremetalRctParams) toURLValues() url.Values {
+ u := url.Values{}
+ if p.p == nil {
+ return u
+ }
+ if v, found := p.p["keyword"]; found {
+ u.Set("keyword", v.(string))
+ }
+ if v, found := p.p["page"]; found {
+ vv := strconv.Itoa(v.(int))
+ u.Set("page", vv)
+ }
+ if v, found := p.p["pagesize"]; found {
+ vv := strconv.Itoa(v.(int))
+ u.Set("pagesize", vv)
+ }
+ return u
+}
+
+func (p *ListBaremetalRctParams) SetKeyword(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["keyword"] = v
+ return
+}
+
+func (p *ListBaremetalRctParams) SetPage(v int) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["page"] = v
+ return
+}
+
+func (p *ListBaremetalRctParams) SetPagesize(v int) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["pagesize"] = v
+ return
+}
+
+// You should always use this function to get a new ListBaremetalRctParams instance,
+// as then you are sure you have configured all required params
+func (s *BaremetalService) NewListBaremetalRctParams() *ListBaremetalRctParams {
+ p := &ListBaremetalRctParams{}
+ p.p = make(map[string]interface{})
+ return p
+}
+
+// list baremetal rack configuration
+func (s *BaremetalService) ListBaremetalRct(p *ListBaremetalRctParams) (*ListBaremetalRctResponse, error) {
+ resp, err := s.cs.newRequest("listBaremetalRct", p.toURLValues())
+ if err != nil {
+ return nil, err
+ }
+
+ var r ListBaremetalRctResponse
+ if err := json.Unmarshal(resp, &r); err != nil {
+ return nil, err
+ }
+ return &r, nil
+}
+
+type ListBaremetalRctResponse struct {
+ Count int `json:"count"`
+ BaremetalRct []*BaremetalRct `json:"baremetalrct"`
+}
+
+type BaremetalRct struct {
+ Id string `json:"id,omitempty"`
+ Url string `json:"url,omitempty"`
+}
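BaremetalService gains the rack configuration text (RCT) calls. A hedged registration sketch, assuming the client exposes the service as cs.Baremetal; the URL argument is a placeholder supplied by the caller:

func registerRackConfig(cs *cloudstack.CloudStackClient, rctURL string) (string, error) {
	p := cs.Baremetal.NewAddBaremetalRctParams(rctURL)
	// addBaremetalRct is asynchronous; with an async client the call
	// blocks on GetAsyncJobResult, as shown in the hunk above.
	r, err := cs.Baremetal.AddBaremetalRct(p)
	if err != nil {
		return "", err
	}
	return r.Id, nil
}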
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/BigSwitchVNSService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/BigSwitchVNSService.go
deleted file mode 100644
index f6bc551066f0..000000000000
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/BigSwitchVNSService.go
+++ /dev/null
@@ -1,281 +0,0 @@
-//
-// Copyright 2014, Sander van Harmelen
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-//
-
-package cloudstack
-
-import (
- "encoding/json"
- "net/url"
- "strconv"
-)
-
-type AddBigSwitchVnsDeviceParams struct {
- p map[string]interface{}
-}
-
-func (p *AddBigSwitchVnsDeviceParams) toURLValues() url.Values {
- u := url.Values{}
- if p.p == nil {
- return u
- }
- if v, found := p.p["hostname"]; found {
- u.Set("hostname", v.(string))
- }
- if v, found := p.p["physicalnetworkid"]; found {
- u.Set("physicalnetworkid", v.(string))
- }
- return u
-}
-
-func (p *AddBigSwitchVnsDeviceParams) SetHostname(v string) {
- if p.p == nil {
- p.p = make(map[string]interface{})
- }
- p.p["hostname"] = v
- return
-}
-
-func (p *AddBigSwitchVnsDeviceParams) SetPhysicalnetworkid(v string) {
- if p.p == nil {
- p.p = make(map[string]interface{})
- }
- p.p["physicalnetworkid"] = v
- return
-}
-
-// You should always use this function to get a new AddBigSwitchVnsDeviceParams instance,
-// as then you are sure you have configured all required params
-func (s *BigSwitchVNSService) NewAddBigSwitchVnsDeviceParams(hostname string, physicalnetworkid string) *AddBigSwitchVnsDeviceParams {
- p := &AddBigSwitchVnsDeviceParams{}
- p.p = make(map[string]interface{})
- p.p["hostname"] = hostname
- p.p["physicalnetworkid"] = physicalnetworkid
- return p
-}
-
-// Adds a BigSwitch VNS device
-func (s *BigSwitchVNSService) AddBigSwitchVnsDevice(p *AddBigSwitchVnsDeviceParams) (*AddBigSwitchVnsDeviceResponse, error) {
- resp, err := s.cs.newRequest("addBigSwitchVnsDevice", p.toURLValues())
- if err != nil {
- return nil, err
- }
-
- var r AddBigSwitchVnsDeviceResponse
- if err := json.Unmarshal(resp, &r); err != nil {
- return nil, err
- }
-
- // If we have a async client, we need to wait for the async result
- if s.cs.async {
- b, err := s.cs.GetAsyncJobResult(r.JobID, s.cs.timeout)
- if err != nil {
- if err == AsyncTimeoutErr {
- return &r, err
- }
- return nil, err
- }
-
- b, err = getRawValue(b)
- if err != nil {
- return nil, err
- }
-
- if err := json.Unmarshal(b, &r); err != nil {
- return nil, err
- }
- }
- return &r, nil
-}
-
-type AddBigSwitchVnsDeviceResponse struct {
- JobID string `json:"jobid,omitempty"`
- Bigswitchdevicename string `json:"bigswitchdevicename,omitempty"`
- Hostname string `json:"hostname,omitempty"`
- Physicalnetworkid string `json:"physicalnetworkid,omitempty"`
- Provider string `json:"provider,omitempty"`
- Vnsdeviceid string `json:"vnsdeviceid,omitempty"`
-}
-
-type DeleteBigSwitchVnsDeviceParams struct {
- p map[string]interface{}
-}
-
-func (p *DeleteBigSwitchVnsDeviceParams) toURLValues() url.Values {
- u := url.Values{}
- if p.p == nil {
- return u
- }
- if v, found := p.p["vnsdeviceid"]; found {
- u.Set("vnsdeviceid", v.(string))
- }
- return u
-}
-
-func (p *DeleteBigSwitchVnsDeviceParams) SetVnsdeviceid(v string) {
- if p.p == nil {
- p.p = make(map[string]interface{})
- }
- p.p["vnsdeviceid"] = v
- return
-}
-
-// You should always use this function to get a new DeleteBigSwitchVnsDeviceParams instance,
-// as then you are sure you have configured all required params
-func (s *BigSwitchVNSService) NewDeleteBigSwitchVnsDeviceParams(vnsdeviceid string) *DeleteBigSwitchVnsDeviceParams {
- p := &DeleteBigSwitchVnsDeviceParams{}
- p.p = make(map[string]interface{})
- p.p["vnsdeviceid"] = vnsdeviceid
- return p
-}
-
-// delete a bigswitch vns device
-func (s *BigSwitchVNSService) DeleteBigSwitchVnsDevice(p *DeleteBigSwitchVnsDeviceParams) (*DeleteBigSwitchVnsDeviceResponse, error) {
- resp, err := s.cs.newRequest("deleteBigSwitchVnsDevice", p.toURLValues())
- if err != nil {
- return nil, err
- }
-
- var r DeleteBigSwitchVnsDeviceResponse
- if err := json.Unmarshal(resp, &r); err != nil {
- return nil, err
- }
-
- // If we have a async client, we need to wait for the async result
- if s.cs.async {
- b, err := s.cs.GetAsyncJobResult(r.JobID, s.cs.timeout)
- if err != nil {
- if err == AsyncTimeoutErr {
- return &r, err
- }
- return nil, err
- }
-
- if err := json.Unmarshal(b, &r); err != nil {
- return nil, err
- }
- }
- return &r, nil
-}
-
-type DeleteBigSwitchVnsDeviceResponse struct {
- JobID string `json:"jobid,omitempty"`
- Displaytext string `json:"displaytext,omitempty"`
- Success bool `json:"success,omitempty"`
-}
-
-type ListBigSwitchVnsDevicesParams struct {
- p map[string]interface{}
-}
-
-func (p *ListBigSwitchVnsDevicesParams) toURLValues() url.Values {
- u := url.Values{}
- if p.p == nil {
- return u
- }
- if v, found := p.p["keyword"]; found {
- u.Set("keyword", v.(string))
- }
- if v, found := p.p["page"]; found {
- vv := strconv.Itoa(v.(int))
- u.Set("page", vv)
- }
- if v, found := p.p["pagesize"]; found {
- vv := strconv.Itoa(v.(int))
- u.Set("pagesize", vv)
- }
- if v, found := p.p["physicalnetworkid"]; found {
- u.Set("physicalnetworkid", v.(string))
- }
- if v, found := p.p["vnsdeviceid"]; found {
- u.Set("vnsdeviceid", v.(string))
- }
- return u
-}
-
-func (p *ListBigSwitchVnsDevicesParams) SetKeyword(v string) {
- if p.p == nil {
- p.p = make(map[string]interface{})
- }
- p.p["keyword"] = v
- return
-}
-
-func (p *ListBigSwitchVnsDevicesParams) SetPage(v int) {
- if p.p == nil {
- p.p = make(map[string]interface{})
- }
- p.p["page"] = v
- return
-}
-
-func (p *ListBigSwitchVnsDevicesParams) SetPagesize(v int) {
- if p.p == nil {
- p.p = make(map[string]interface{})
- }
- p.p["pagesize"] = v
- return
-}
-
-func (p *ListBigSwitchVnsDevicesParams) SetPhysicalnetworkid(v string) {
- if p.p == nil {
- p.p = make(map[string]interface{})
- }
- p.p["physicalnetworkid"] = v
- return
-}
-
-func (p *ListBigSwitchVnsDevicesParams) SetVnsdeviceid(v string) {
- if p.p == nil {
- p.p = make(map[string]interface{})
- }
- p.p["vnsdeviceid"] = v
- return
-}
-
-// You should always use this function to get a new ListBigSwitchVnsDevicesParams instance,
-// as then you are sure you have configured all required params
-func (s *BigSwitchVNSService) NewListBigSwitchVnsDevicesParams() *ListBigSwitchVnsDevicesParams {
- p := &ListBigSwitchVnsDevicesParams{}
- p.p = make(map[string]interface{})
- return p
-}
-
-// Lists BigSwitch Vns devices
-func (s *BigSwitchVNSService) ListBigSwitchVnsDevices(p *ListBigSwitchVnsDevicesParams) (*ListBigSwitchVnsDevicesResponse, error) {
- resp, err := s.cs.newRequest("listBigSwitchVnsDevices", p.toURLValues())
- if err != nil {
- return nil, err
- }
-
- var r ListBigSwitchVnsDevicesResponse
- if err := json.Unmarshal(resp, &r); err != nil {
- return nil, err
- }
- return &r, nil
-}
-
-type ListBigSwitchVnsDevicesResponse struct {
- Count int `json:"count"`
- BigSwitchVnsDevices []*BigSwitchVnsDevice `json:"bigswitchvnsdevice"`
-}
-
-type BigSwitchVnsDevice struct {
- Bigswitchdevicename string `json:"bigswitchdevicename,omitempty"`
- Hostname string `json:"hostname,omitempty"`
- Physicalnetworkid string `json:"physicalnetworkid,omitempty"`
- Provider string `json:"provider,omitempty"`
- Vnsdeviceid string `json:"vnsdeviceid,omitempty"`
-}
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/CertificateService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/CertificateService.go
index dc125a02f207..a13fcdfc21a1 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/CertificateService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/CertificateService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/CloudIdentifierService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/CloudIdentifierService.go
index 0572bcdee153..99eeba2c8ecc 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/CloudIdentifierService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/CloudIdentifierService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/ClusterService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/ClusterService.go
index 24d0214ee0ff..a4fe1fe9c2f0 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/ClusterService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/ClusterService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -51,6 +51,15 @@ func (p *AddClusterParams) toURLValues() url.Values {
if v, found := p.p["hypervisor"]; found {
u.Set("hypervisor", v.(string))
}
+ if v, found := p.p["ovm3cluster"]; found {
+ u.Set("ovm3cluster", v.(string))
+ }
+ if v, found := p.p["ovm3pool"]; found {
+ u.Set("ovm3pool", v.(string))
+ }
+ if v, found := p.p["ovm3vip"]; found {
+ u.Set("ovm3vip", v.(string))
+ }
if v, found := p.p["password"]; found {
u.Set("password", v.(string))
}
@@ -132,6 +141,30 @@ func (p *AddClusterParams) SetHypervisor(v string) {
return
}
+func (p *AddClusterParams) SetOvm3cluster(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["ovm3cluster"] = v
+ return
+}
+
+func (p *AddClusterParams) SetOvm3pool(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["ovm3pool"] = v
+ return
+}
+
+func (p *AddClusterParams) SetOvm3vip(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["ovm3vip"] = v
+ return
+}
+
func (p *AddClusterParams) SetPassword(v string) {
if p.p == nil {
p.p = make(map[string]interface{})
@@ -260,6 +293,7 @@ type AddClusterResponse struct {
Managedstate string `json:"managedstate,omitempty"`
Memoryovercommitratio string `json:"memoryovercommitratio,omitempty"`
Name string `json:"name,omitempty"`
+ Ovm3vip string `json:"ovm3vip,omitempty"`
Podid string `json:"podid,omitempty"`
Podname string `json:"podname,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
@@ -439,6 +473,7 @@ type UpdateClusterResponse struct {
Managedstate string `json:"managedstate,omitempty"`
Memoryovercommitratio string `json:"memoryovercommitratio,omitempty"`
Name string `json:"name,omitempty"`
+ Ovm3vip string `json:"ovm3vip,omitempty"`
Podid string `json:"podid,omitempty"`
Podname string `json:"podname,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
@@ -601,12 +636,18 @@ func (s *ClusterService) NewListClustersParams() *ListClustersParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *ClusterService) GetClusterID(name string) (string, error) {
+func (s *ClusterService) GetClusterID(name string, opts ...OptionFunc) (string, error) {
p := &ListClustersParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListClusters(p)
if err != nil {
return "", err
@@ -631,13 +672,13 @@ func (s *ClusterService) GetClusterID(name string) (string, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *ClusterService) GetClusterByName(name string) (*Cluster, int, error) {
- id, err := s.GetClusterID(name)
+func (s *ClusterService) GetClusterByName(name string, opts ...OptionFunc) (*Cluster, int, error) {
+ id, err := s.GetClusterID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetClusterByID(id)
+ r, count, err := s.GetClusterByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -645,12 +686,18 @@ func (s *ClusterService) GetClusterByName(name string) (*Cluster, int, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *ClusterService) GetClusterByID(id string) (*Cluster, int, error) {
+func (s *ClusterService) GetClusterByID(id string, opts ...OptionFunc) (*Cluster, int, error) {
p := &ListClustersParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListClusters(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -711,6 +758,7 @@ type Cluster struct {
Managedstate string `json:"managedstate,omitempty"`
Memoryovercommitratio string `json:"memoryovercommitratio,omitempty"`
Name string `json:"name,omitempty"`
+ Ovm3vip string `json:"ovm3vip,omitempty"`
Podid string `json:"podid,omitempty"`
Podname string `json:"podname,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/ConfigurationService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/ConfigurationService.go
index d4a2f55d765e..b4a6a7b1bfdd 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/ConfigurationService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/ConfigurationService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -322,6 +322,8 @@ type ListCapabilitiesResponse struct {
type Capability struct {
Allowusercreateprojects bool `json:"allowusercreateprojects,omitempty"`
+ Allowuserexpungerecovervm bool `json:"allowuserexpungerecovervm,omitempty"`
+ Allowuserviewdestroyedvm bool `json:"allowuserviewdestroyedvm,omitempty"`
Apilimitinterval int `json:"apilimitinterval,omitempty"`
Apilimitmax int `json:"apilimitmax,omitempty"`
Cloudstackversion string `json:"cloudstackversion,omitempty"`
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/DiskOfferingService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/DiskOfferingService.go
index e97c784f8a3b..6511df393f6c 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/DiskOfferingService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/DiskOfferingService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -86,6 +86,9 @@ func (p *CreateDiskOfferingParams) toURLValues() url.Values {
if v, found := p.p["name"]; found {
u.Set("name", v.(string))
}
+ if v, found := p.p["provisioningtype"]; found {
+ u.Set("provisioningtype", v.(string))
+ }
if v, found := p.p["storagetype"]; found {
u.Set("storagetype", v.(string))
}
@@ -207,6 +210,14 @@ func (p *CreateDiskOfferingParams) SetName(v string) {
return
}
+func (p *CreateDiskOfferingParams) SetProvisioningtype(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["provisioningtype"] = v
+ return
+}
+
func (p *CreateDiskOfferingParams) SetStoragetype(v string) {
if p.p == nil {
p.p = make(map[string]interface{})
@@ -266,6 +277,7 @@ type CreateDiskOfferingResponse struct {
Maxiops int64 `json:"maxiops,omitempty"`
Miniops int64 `json:"miniops,omitempty"`
Name string `json:"name,omitempty"`
+ Provisioningtype string `json:"provisioningtype,omitempty"`
Storagetype string `json:"storagetype,omitempty"`
Tags string `json:"tags,omitempty"`
}
@@ -381,6 +393,7 @@ type UpdateDiskOfferingResponse struct {
Maxiops int64 `json:"maxiops,omitempty"`
Miniops int64 `json:"miniops,omitempty"`
Name string `json:"name,omitempty"`
+ Provisioningtype string `json:"provisioningtype,omitempty"`
Storagetype string `json:"storagetype,omitempty"`
Tags string `json:"tags,omitempty"`
}
@@ -451,9 +464,17 @@ func (p *ListDiskOfferingsParams) toURLValues() url.Values {
if v, found := p.p["id"]; found {
u.Set("id", v.(string))
}
+ if v, found := p.p["isrecursive"]; found {
+ vv := strconv.FormatBool(v.(bool))
+ u.Set("isrecursive", vv)
+ }
if v, found := p.p["keyword"]; found {
u.Set("keyword", v.(string))
}
+ if v, found := p.p["listall"]; found {
+ vv := strconv.FormatBool(v.(bool))
+ u.Set("listall", vv)
+ }
if v, found := p.p["name"]; found {
u.Set("name", v.(string))
}
@@ -484,6 +505,14 @@ func (p *ListDiskOfferingsParams) SetId(v string) {
return
}
+func (p *ListDiskOfferingsParams) SetIsrecursive(v bool) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["isrecursive"] = v
+ return
+}
+
func (p *ListDiskOfferingsParams) SetKeyword(v string) {
if p.p == nil {
p.p = make(map[string]interface{})
@@ -492,6 +521,14 @@ func (p *ListDiskOfferingsParams) SetKeyword(v string) {
return
}
+func (p *ListDiskOfferingsParams) SetListall(v bool) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["listall"] = v
+ return
+}
+
func (p *ListDiskOfferingsParams) SetName(v string) {
if p.p == nil {
p.p = make(map[string]interface{})
@@ -525,12 +562,18 @@ func (s *DiskOfferingService) NewListDiskOfferingsParams() *ListDiskOfferingsPar
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *DiskOfferingService) GetDiskOfferingID(name string) (string, error) {
+func (s *DiskOfferingService) GetDiskOfferingID(name string, opts ...OptionFunc) (string, error) {
p := &ListDiskOfferingsParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListDiskOfferings(p)
if err != nil {
return "", err
@@ -555,13 +598,13 @@ func (s *DiskOfferingService) GetDiskOfferingID(name string) (string, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *DiskOfferingService) GetDiskOfferingByName(name string) (*DiskOffering, int, error) {
- id, err := s.GetDiskOfferingID(name)
+func (s *DiskOfferingService) GetDiskOfferingByName(name string, opts ...OptionFunc) (*DiskOffering, int, error) {
+ id, err := s.GetDiskOfferingID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetDiskOfferingByID(id)
+ r, count, err := s.GetDiskOfferingByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -569,12 +612,18 @@ func (s *DiskOfferingService) GetDiskOfferingByName(name string) (*DiskOffering,
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *DiskOfferingService) GetDiskOfferingByID(id string) (*DiskOffering, int, error) {
+func (s *DiskOfferingService) GetDiskOfferingByID(id string, opts ...OptionFunc) (*DiskOffering, int, error) {
p := &ListDiskOfferingsParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListDiskOfferings(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -633,6 +682,7 @@ type DiskOffering struct {
Maxiops int64 `json:"maxiops,omitempty"`
Miniops int64 `json:"miniops,omitempty"`
Name string `json:"name,omitempty"`
+ Provisioningtype string `json:"provisioningtype,omitempty"`
Storagetype string `json:"storagetype,omitempty"`
Tags string `json:"tags,omitempty"`
}
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/DomainService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/DomainService.go
index 8839764998bb..01107241b6fa 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/DomainService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/DomainService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -104,14 +104,51 @@ func (s *DomainService) CreateDomain(p *CreateDomainParams) (*CreateDomainRespon
}
type CreateDomainResponse struct {
- Haschild bool `json:"haschild,omitempty"`
- Id string `json:"id,omitempty"`
- Level int `json:"level,omitempty"`
- Name string `json:"name,omitempty"`
- Networkdomain string `json:"networkdomain,omitempty"`
- Parentdomainid string `json:"parentdomainid,omitempty"`
- Parentdomainname string `json:"parentdomainname,omitempty"`
- Path string `json:"path,omitempty"`
+ Cpuavailable string `json:"cpuavailable,omitempty"`
+ Cpulimit string `json:"cpulimit,omitempty"`
+ Cputotal int64 `json:"cputotal,omitempty"`
+ Haschild bool `json:"haschild,omitempty"`
+ Id string `json:"id,omitempty"`
+ Ipavailable string `json:"ipavailable,omitempty"`
+ Iplimit string `json:"iplimit,omitempty"`
+ Iptotal int64 `json:"iptotal,omitempty"`
+ Level int `json:"level,omitempty"`
+ Memoryavailable string `json:"memoryavailable,omitempty"`
+ Memorylimit string `json:"memorylimit,omitempty"`
+ Memorytotal int64 `json:"memorytotal,omitempty"`
+ Name string `json:"name,omitempty"`
+ Networkavailable string `json:"networkavailable,omitempty"`
+ Networkdomain string `json:"networkdomain,omitempty"`
+ Networklimit string `json:"networklimit,omitempty"`
+ Networktotal int64 `json:"networktotal,omitempty"`
+ Parentdomainid string `json:"parentdomainid,omitempty"`
+ Parentdomainname string `json:"parentdomainname,omitempty"`
+ Path string `json:"path,omitempty"`
+ Primarystorageavailable string `json:"primarystorageavailable,omitempty"`
+ Primarystoragelimit string `json:"primarystoragelimit,omitempty"`
+ Primarystoragetotal int64 `json:"primarystoragetotal,omitempty"`
+ Projectavailable string `json:"projectavailable,omitempty"`
+ Projectlimit string `json:"projectlimit,omitempty"`
+ Projecttotal int64 `json:"projecttotal,omitempty"`
+ Secondarystorageavailable string `json:"secondarystorageavailable,omitempty"`
+ Secondarystoragelimit string `json:"secondarystoragelimit,omitempty"`
+ Secondarystoragetotal int64 `json:"secondarystoragetotal,omitempty"`
+ Snapshotavailable string `json:"snapshotavailable,omitempty"`
+ Snapshotlimit string `json:"snapshotlimit,omitempty"`
+ Snapshottotal int64 `json:"snapshottotal,omitempty"`
+ State string `json:"state,omitempty"`
+ Templateavailable string `json:"templateavailable,omitempty"`
+ Templatelimit string `json:"templatelimit,omitempty"`
+ Templatetotal int64 `json:"templatetotal,omitempty"`
+ Vmavailable string `json:"vmavailable,omitempty"`
+ Vmlimit string `json:"vmlimit,omitempty"`
+ Vmtotal int64 `json:"vmtotal,omitempty"`
+ Volumeavailable string `json:"volumeavailable,omitempty"`
+ Volumelimit string `json:"volumelimit,omitempty"`
+ Volumetotal int64 `json:"volumetotal,omitempty"`
+ Vpcavailable string `json:"vpcavailable,omitempty"`
+ Vpclimit string `json:"vpclimit,omitempty"`
+ Vpctotal int64 `json:"vpctotal,omitempty"`
}
type UpdateDomainParams struct {
@@ -183,14 +220,51 @@ func (s *DomainService) UpdateDomain(p *UpdateDomainParams) (*UpdateDomainRespon
}
type UpdateDomainResponse struct {
- Haschild bool `json:"haschild,omitempty"`
- Id string `json:"id,omitempty"`
- Level int `json:"level,omitempty"`
- Name string `json:"name,omitempty"`
- Networkdomain string `json:"networkdomain,omitempty"`
- Parentdomainid string `json:"parentdomainid,omitempty"`
- Parentdomainname string `json:"parentdomainname,omitempty"`
- Path string `json:"path,omitempty"`
+ Cpuavailable string `json:"cpuavailable,omitempty"`
+ Cpulimit string `json:"cpulimit,omitempty"`
+ Cputotal int64 `json:"cputotal,omitempty"`
+ Haschild bool `json:"haschild,omitempty"`
+ Id string `json:"id,omitempty"`
+ Ipavailable string `json:"ipavailable,omitempty"`
+ Iplimit string `json:"iplimit,omitempty"`
+ Iptotal int64 `json:"iptotal,omitempty"`
+ Level int `json:"level,omitempty"`
+ Memoryavailable string `json:"memoryavailable,omitempty"`
+ Memorylimit string `json:"memorylimit,omitempty"`
+ Memorytotal int64 `json:"memorytotal,omitempty"`
+ Name string `json:"name,omitempty"`
+ Networkavailable string `json:"networkavailable,omitempty"`
+ Networkdomain string `json:"networkdomain,omitempty"`
+ Networklimit string `json:"networklimit,omitempty"`
+ Networktotal int64 `json:"networktotal,omitempty"`
+ Parentdomainid string `json:"parentdomainid,omitempty"`
+ Parentdomainname string `json:"parentdomainname,omitempty"`
+ Path string `json:"path,omitempty"`
+ Primarystorageavailable string `json:"primarystorageavailable,omitempty"`
+ Primarystoragelimit string `json:"primarystoragelimit,omitempty"`
+ Primarystoragetotal int64 `json:"primarystoragetotal,omitempty"`
+ Projectavailable string `json:"projectavailable,omitempty"`
+ Projectlimit string `json:"projectlimit,omitempty"`
+ Projecttotal int64 `json:"projecttotal,omitempty"`
+ Secondarystorageavailable string `json:"secondarystorageavailable,omitempty"`
+ Secondarystoragelimit string `json:"secondarystoragelimit,omitempty"`
+ Secondarystoragetotal int64 `json:"secondarystoragetotal,omitempty"`
+ Snapshotavailable string `json:"snapshotavailable,omitempty"`
+ Snapshotlimit string `json:"snapshotlimit,omitempty"`
+ Snapshottotal int64 `json:"snapshottotal,omitempty"`
+ State string `json:"state,omitempty"`
+ Templateavailable string `json:"templateavailable,omitempty"`
+ Templatelimit string `json:"templatelimit,omitempty"`
+ Templatetotal int64 `json:"templatetotal,omitempty"`
+ Vmavailable string `json:"vmavailable,omitempty"`
+ Vmlimit string `json:"vmlimit,omitempty"`
+ Vmtotal int64 `json:"vmtotal,omitempty"`
+ Volumeavailable string `json:"volumeavailable,omitempty"`
+ Volumelimit string `json:"volumelimit,omitempty"`
+ Volumetotal int64 `json:"volumetotal,omitempty"`
+ Vpcavailable string `json:"vpcavailable,omitempty"`
+ Vpclimit string `json:"vpclimit,omitempty"`
+ Vpctotal int64 `json:"vpctotal,omitempty"`
}
type DeleteDomainParams struct {
@@ -374,12 +448,18 @@ func (s *DomainService) NewListDomainsParams() *ListDomainsParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *DomainService) GetDomainID(name string) (string, error) {
+func (s *DomainService) GetDomainID(name string, opts ...OptionFunc) (string, error) {
p := &ListDomainsParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListDomains(p)
if err != nil {
return "", err
@@ -404,13 +484,13 @@ func (s *DomainService) GetDomainID(name string) (string, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *DomainService) GetDomainByName(name string) (*Domain, int, error) {
- id, err := s.GetDomainID(name)
+func (s *DomainService) GetDomainByName(name string, opts ...OptionFunc) (*Domain, int, error) {
+ id, err := s.GetDomainID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetDomainByID(id)
+ r, count, err := s.GetDomainByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -418,12 +498,18 @@ func (s *DomainService) GetDomainByName(name string) (*Domain, int, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *DomainService) GetDomainByID(id string) (*Domain, int, error) {
+func (s *DomainService) GetDomainByID(id string, opts ...OptionFunc) (*Domain, int, error) {
p := &ListDomainsParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListDomains(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -464,14 +550,51 @@ type ListDomainsResponse struct {
}
type Domain struct {
- Haschild bool `json:"haschild,omitempty"`
- Id string `json:"id,omitempty"`
- Level int `json:"level,omitempty"`
- Name string `json:"name,omitempty"`
- Networkdomain string `json:"networkdomain,omitempty"`
- Parentdomainid string `json:"parentdomainid,omitempty"`
- Parentdomainname string `json:"parentdomainname,omitempty"`
- Path string `json:"path,omitempty"`
+ Cpuavailable string `json:"cpuavailable,omitempty"`
+ Cpulimit string `json:"cpulimit,omitempty"`
+ Cputotal int64 `json:"cputotal,omitempty"`
+ Haschild bool `json:"haschild,omitempty"`
+ Id string `json:"id,omitempty"`
+ Ipavailable string `json:"ipavailable,omitempty"`
+ Iplimit string `json:"iplimit,omitempty"`
+ Iptotal int64 `json:"iptotal,omitempty"`
+ Level int `json:"level,omitempty"`
+ Memoryavailable string `json:"memoryavailable,omitempty"`
+ Memorylimit string `json:"memorylimit,omitempty"`
+ Memorytotal int64 `json:"memorytotal,omitempty"`
+ Name string `json:"name,omitempty"`
+ Networkavailable string `json:"networkavailable,omitempty"`
+ Networkdomain string `json:"networkdomain,omitempty"`
+ Networklimit string `json:"networklimit,omitempty"`
+ Networktotal int64 `json:"networktotal,omitempty"`
+ Parentdomainid string `json:"parentdomainid,omitempty"`
+ Parentdomainname string `json:"parentdomainname,omitempty"`
+ Path string `json:"path,omitempty"`
+ Primarystorageavailable string `json:"primarystorageavailable,omitempty"`
+ Primarystoragelimit string `json:"primarystoragelimit,omitempty"`
+ Primarystoragetotal int64 `json:"primarystoragetotal,omitempty"`
+ Projectavailable string `json:"projectavailable,omitempty"`
+ Projectlimit string `json:"projectlimit,omitempty"`
+ Projecttotal int64 `json:"projecttotal,omitempty"`
+ Secondarystorageavailable string `json:"secondarystorageavailable,omitempty"`
+ Secondarystoragelimit string `json:"secondarystoragelimit,omitempty"`
+ Secondarystoragetotal int64 `json:"secondarystoragetotal,omitempty"`
+ Snapshotavailable string `json:"snapshotavailable,omitempty"`
+ Snapshotlimit string `json:"snapshotlimit,omitempty"`
+ Snapshottotal int64 `json:"snapshottotal,omitempty"`
+ State string `json:"state,omitempty"`
+ Templateavailable string `json:"templateavailable,omitempty"`
+ Templatelimit string `json:"templatelimit,omitempty"`
+ Templatetotal int64 `json:"templatetotal,omitempty"`
+ Vmavailable string `json:"vmavailable,omitempty"`
+ Vmlimit string `json:"vmlimit,omitempty"`
+ Vmtotal int64 `json:"vmtotal,omitempty"`
+ Volumeavailable string `json:"volumeavailable,omitempty"`
+ Volumelimit string `json:"volumelimit,omitempty"`
+ Volumetotal int64 `json:"volumetotal,omitempty"`
+ Vpcavailable string `json:"vpcavailable,omitempty"`
+ Vpclimit string `json:"vpclimit,omitempty"`
+ Vpctotal int64 `json:"vpctotal,omitempty"`
}
type ListDomainChildrenParams struct {
@@ -576,12 +699,18 @@ func (s *DomainService) NewListDomainChildrenParams() *ListDomainChildrenParams
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *DomainService) GetDomainChildrenID(name string) (string, error) {
+func (s *DomainService) GetDomainChildrenID(name string, opts ...OptionFunc) (string, error) {
p := &ListDomainChildrenParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListDomainChildren(p)
if err != nil {
return "", err
@@ -606,13 +735,13 @@ func (s *DomainService) GetDomainChildrenID(name string) (string, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *DomainService) GetDomainChildrenByName(name string) (*DomainChildren, int, error) {
- id, err := s.GetDomainChildrenID(name)
+func (s *DomainService) GetDomainChildrenByName(name string, opts ...OptionFunc) (*DomainChildren, int, error) {
+ id, err := s.GetDomainChildrenID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetDomainChildrenByID(id)
+ r, count, err := s.GetDomainChildrenByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -620,12 +749,18 @@ func (s *DomainService) GetDomainChildrenByName(name string) (*DomainChildren, i
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *DomainService) GetDomainChildrenByID(id string) (*DomainChildren, int, error) {
+func (s *DomainService) GetDomainChildrenByID(id string, opts ...OptionFunc) (*DomainChildren, int, error) {
p := &ListDomainChildrenParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListDomainChildren(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -666,12 +801,151 @@ type ListDomainChildrenResponse struct {
}
type DomainChildren struct {
- Haschild bool `json:"haschild,omitempty"`
- Id string `json:"id,omitempty"`
- Level int `json:"level,omitempty"`
- Name string `json:"name,omitempty"`
- Networkdomain string `json:"networkdomain,omitempty"`
- Parentdomainid string `json:"parentdomainid,omitempty"`
- Parentdomainname string `json:"parentdomainname,omitempty"`
- Path string `json:"path,omitempty"`
+ Cpuavailable string `json:"cpuavailable,omitempty"`
+ Cpulimit string `json:"cpulimit,omitempty"`
+ Cputotal int64 `json:"cputotal,omitempty"`
+ Haschild bool `json:"haschild,omitempty"`
+ Id string `json:"id,omitempty"`
+ Ipavailable string `json:"ipavailable,omitempty"`
+ Iplimit string `json:"iplimit,omitempty"`
+ Iptotal int64 `json:"iptotal,omitempty"`
+ Level int `json:"level,omitempty"`
+ Memoryavailable string `json:"memoryavailable,omitempty"`
+ Memorylimit string `json:"memorylimit,omitempty"`
+ Memorytotal int64 `json:"memorytotal,omitempty"`
+ Name string `json:"name,omitempty"`
+ Networkavailable string `json:"networkavailable,omitempty"`
+ Networkdomain string `json:"networkdomain,omitempty"`
+ Networklimit string `json:"networklimit,omitempty"`
+ Networktotal int64 `json:"networktotal,omitempty"`
+ Parentdomainid string `json:"parentdomainid,omitempty"`
+ Parentdomainname string `json:"parentdomainname,omitempty"`
+ Path string `json:"path,omitempty"`
+ Primarystorageavailable string `json:"primarystorageavailable,omitempty"`
+ Primarystoragelimit string `json:"primarystoragelimit,omitempty"`
+ Primarystoragetotal int64 `json:"primarystoragetotal,omitempty"`
+ Projectavailable string `json:"projectavailable,omitempty"`
+ Projectlimit string `json:"projectlimit,omitempty"`
+ Projecttotal int64 `json:"projecttotal,omitempty"`
+ Secondarystorageavailable string `json:"secondarystorageavailable,omitempty"`
+ Secondarystoragelimit string `json:"secondarystoragelimit,omitempty"`
+ Secondarystoragetotal int64 `json:"secondarystoragetotal,omitempty"`
+ Snapshotavailable string `json:"snapshotavailable,omitempty"`
+ Snapshotlimit string `json:"snapshotlimit,omitempty"`
+ Snapshottotal int64 `json:"snapshottotal,omitempty"`
+ State string `json:"state,omitempty"`
+ Templateavailable string `json:"templateavailable,omitempty"`
+ Templatelimit string `json:"templatelimit,omitempty"`
+ Templatetotal int64 `json:"templatetotal,omitempty"`
+ Vmavailable string `json:"vmavailable,omitempty"`
+ Vmlimit string `json:"vmlimit,omitempty"`
+ Vmtotal int64 `json:"vmtotal,omitempty"`
+ Volumeavailable string `json:"volumeavailable,omitempty"`
+ Volumelimit string `json:"volumelimit,omitempty"`
+ Volumetotal int64 `json:"volumetotal,omitempty"`
+ Vpcavailable string `json:"vpcavailable,omitempty"`
+ Vpclimit string `json:"vpclimit,omitempty"`
+ Vpctotal int64 `json:"vpctotal,omitempty"`
+}
+
+type LinkDomainToLdapParams struct {
+ p map[string]interface{}
+}
+
+func (p *LinkDomainToLdapParams) toURLValues() url.Values {
+ u := url.Values{}
+ if p.p == nil {
+ return u
+ }
+ if v, found := p.p["accounttype"]; found {
+ vv := strconv.Itoa(v.(int))
+ u.Set("accounttype", vv)
+ }
+ if v, found := p.p["admin"]; found {
+ u.Set("admin", v.(string))
+ }
+ if v, found := p.p["domainid"]; found {
+ u.Set("domainid", v.(string))
+ }
+ if v, found := p.p["name"]; found {
+ u.Set("name", v.(string))
+ }
+ if v, found := p.p["type"]; found {
+ u.Set("type", v.(string))
+ }
+ return u
+}
+
+func (p *LinkDomainToLdapParams) SetAccounttype(v int) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["accounttype"] = v
+ return
+}
+
+func (p *LinkDomainToLdapParams) SetAdmin(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["admin"] = v
+ return
+}
+
+func (p *LinkDomainToLdapParams) SetDomainid(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["domainid"] = v
+ return
+}
+
+func (p *LinkDomainToLdapParams) SetName(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["name"] = v
+ return
+}
+
+func (p *LinkDomainToLdapParams) SetType(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["domainType"] = v
+ return
+}
+
+// You should always use this function to get a new LinkDomainToLdapParams instance,
+// as then you are sure you have configured all required params
+func (s *DomainService) NewLinkDomainToLdapParams(accounttype int, domainid string, name string, domainType string) *LinkDomainToLdapParams {
+ p := &LinkDomainToLdapParams{}
+ p.p = make(map[string]interface{})
+ p.p["accounttype"] = accounttype
+ p.p["domainid"] = domainid
+ p.p["name"] = name
+ p.p["domainType"] = domainType
+ return p
+}
+
+// link an existing cloudstack domain to group or OU in ldap
+func (s *DomainService) LinkDomainToLdap(p *LinkDomainToLdapParams) (*LinkDomainToLdapResponse, error) {
+ resp, err := s.cs.newRequest("linkDomainToLdap", p.toURLValues())
+ if err != nil {
+ return nil, err
+ }
+
+ var r LinkDomainToLdapResponse
+ if err := json.Unmarshal(resp, &r); err != nil {
+ return nil, err
+ }
+ return &r, nil
+}
+
+type LinkDomainToLdapResponse struct {
+ Accountid string `json:"accountid,omitempty"`
+ Accounttype int `json:"accounttype,omitempty"`
+ Domainid int64 `json:"domainid,omitempty"`
+ Name string `json:"name,omitempty"`
+ Type string `json:"type,omitempty"`
}
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/EventService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/EventService.go
index 23e53bb54801..dee829713382 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/EventService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/EventService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -216,12 +216,18 @@ func (s *EventService) NewListEventsParams() *ListEventsParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *EventService) GetEventByID(id string) (*Event, int, error) {
+func (s *EventService) GetEventByID(id string, opts ...OptionFunc) (*Event, int, error) {
p := &ListEventsParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListEvents(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -232,21 +238,6 @@ func (s *EventService) GetEventByID(id string) (*Event, int, error) {
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListEvents(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/FirewallService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/FirewallService.go
index 08cb56418342..3c7c17ddb2a2 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/FirewallService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/FirewallService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -24,6 +24,56 @@ import (
"strings"
)
+// Helper function for maintaining backwards compatibility
+func convertFirewallServiceResponse(b []byte) ([]byte, error) {
+ var raw map[string]interface{}
+ if err := json.Unmarshal(b, &raw); err != nil {
+ return nil, err
+ }
+
+ if _, ok := raw["firewallrule"]; ok {
+ return convertFirewallServiceListResponse(b)
+ }
+
+ for _, k := range []string{"endport", "startport"} {
+ if sVal, ok := raw[k].(string); ok {
+ iVal, err := strconv.Atoi(sVal)
+ if err != nil {
+ return nil, err
+ }
+ raw[k] = iVal
+ }
+ }
+
+ return json.Marshal(raw)
+}
+
+// Helper function for maintaining backwards compatibility
+func convertFirewallServiceListResponse(b []byte) ([]byte, error) {
+ var rawList struct {
+ Count int `json:"count"`
+ FirewallRules []map[string]interface{} `json:"firewallrule"`
+ }
+
+ if err := json.Unmarshal(b, &rawList); err != nil {
+ return nil, err
+ }
+
+ for _, r := range rawList.FirewallRules {
+ for _, k := range []string{"endport", "startport"} {
+ if sVal, ok := r[k].(string); ok {
+ iVal, err := strconv.Atoi(sVal)
+ if err != nil {
+ return nil, err
+ }
+ r[k] = iVal
+ }
+ }
+ }
+
+ return json.Marshal(rawList)
+}
+
type ListPortForwardingRulesParams struct {
p map[string]interface{}
}
@@ -198,12 +248,18 @@ func (s *FirewallService) NewListPortForwardingRulesParams() *ListPortForwarding
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *FirewallService) GetPortForwardingRuleByID(id string) (*PortForwardingRule, int, error) {
+func (s *FirewallService) GetPortForwardingRuleByID(id string, opts ...OptionFunc) (*PortForwardingRule, int, error) {
p := &ListPortForwardingRulesParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListPortForwardingRules(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -214,21 +270,6 @@ func (s *FirewallService) GetPortForwardingRuleByID(id string) (*PortForwardingR
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListPortForwardingRules(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
@@ -246,6 +287,11 @@ func (s *FirewallService) ListPortForwardingRules(p *ListPortForwardingRulesPara
return nil, err
}
+ resp, err = convertFirewallServiceResponse(resp)
+ if err != nil {
+ return nil, err
+ }
+
var r ListPortForwardingRulesResponse
if err := json.Unmarshal(resp, &r); err != nil {
return nil, err
@@ -460,6 +506,11 @@ func (s *FirewallService) CreatePortForwardingRule(p *CreatePortForwardingRulePa
return nil, err
}
+ resp, err = convertFirewallServiceResponse(resp)
+ if err != nil {
+ return nil, err
+ }
+
var r CreatePortForwardingRuleResponse
if err := json.Unmarshal(resp, &r); err != nil {
return nil, err
@@ -480,6 +531,11 @@ func (s *FirewallService) CreatePortForwardingRule(p *CreatePortForwardingRulePa
return nil, err
}
+ b, err = convertFirewallServiceResponse(b)
+ if err != nil {
+ return nil, err
+ }
+
if err := json.Unmarshal(b, &r); err != nil {
return nil, err
}
@@ -558,6 +614,11 @@ func (s *FirewallService) DeletePortForwardingRule(p *DeletePortForwardingRulePa
return nil, err
}
+ resp, err = convertFirewallServiceResponse(resp)
+ if err != nil {
+ return nil, err
+ }
+
var r DeletePortForwardingRuleResponse
if err := json.Unmarshal(resp, &r); err != nil {
return nil, err
@@ -573,6 +634,11 @@ func (s *FirewallService) DeletePortForwardingRule(p *DeletePortForwardingRulePa
return nil, err
}
+ b, err = convertFirewallServiceResponse(b)
+ if err != nil {
+ return nil, err
+ }
+
if err := json.Unmarshal(b, &r); err != nil {
return nil, err
}
@@ -605,24 +671,16 @@ func (p *UpdatePortForwardingRuleParams) toURLValues() url.Values {
if v, found := p.p["id"]; found {
u.Set("id", v.(string))
}
- if v, found := p.p["ipaddressid"]; found {
- u.Set("ipaddressid", v.(string))
- }
- if v, found := p.p["privateip"]; found {
- u.Set("privateip", v.(string))
- }
if v, found := p.p["privateport"]; found {
- u.Set("privateport", v.(string))
- }
- if v, found := p.p["protocol"]; found {
- u.Set("protocol", v.(string))
- }
- if v, found := p.p["publicport"]; found {
- u.Set("publicport", v.(string))
+ vv := strconv.Itoa(v.(int))
+ u.Set("privateport", vv)
}
if v, found := p.p["virtualmachineid"]; found {
u.Set("virtualmachineid", v.(string))
}
+ if v, found := p.p["vmguestip"]; found {
+ u.Set("vmguestip", v.(string))
+ }
return u
}
@@ -650,23 +708,7 @@ func (p *UpdatePortForwardingRuleParams) SetId(v string) {
return
}
-func (p *UpdatePortForwardingRuleParams) SetIpaddressid(v string) {
- if p.p == nil {
- p.p = make(map[string]interface{})
- }
- p.p["ipaddressid"] = v
- return
-}
-
-func (p *UpdatePortForwardingRuleParams) SetPrivateip(v string) {
- if p.p == nil {
- p.p = make(map[string]interface{})
- }
- p.p["privateip"] = v
- return
-}
-
-func (p *UpdatePortForwardingRuleParams) SetPrivateport(v string) {
+func (p *UpdatePortForwardingRuleParams) SetPrivateport(v int) {
if p.p == nil {
p.p = make(map[string]interface{})
}
@@ -674,27 +716,19 @@ func (p *UpdatePortForwardingRuleParams) SetPrivateport(v string) {
return
}
-func (p *UpdatePortForwardingRuleParams) SetProtocol(v string) {
- if p.p == nil {
- p.p = make(map[string]interface{})
- }
- p.p["protocol"] = v
- return
-}
-
-func (p *UpdatePortForwardingRuleParams) SetPublicport(v string) {
+func (p *UpdatePortForwardingRuleParams) SetVirtualmachineid(v string) {
if p.p == nil {
p.p = make(map[string]interface{})
}
- p.p["publicport"] = v
+ p.p["virtualmachineid"] = v
return
}
-func (p *UpdatePortForwardingRuleParams) SetVirtualmachineid(v string) {
+func (p *UpdatePortForwardingRuleParams) SetVmguestip(v string) {
if p.p == nil {
p.p = make(map[string]interface{})
}
- p.p["virtualmachineid"] = v
+ p.p["vmguestip"] = v
return
}
@@ -707,13 +741,18 @@ func (s *FirewallService) NewUpdatePortForwardingRuleParams(id string) *UpdatePo
return p
}
-// Updates a port forwarding rule. Only the private port and the virtual machine can be updated.
+// Updates a port forwarding rule. Only the private port and the virtual machine can be updated.
func (s *FirewallService) UpdatePortForwardingRule(p *UpdatePortForwardingRuleParams) (*UpdatePortForwardingRuleResponse, error) {
resp, err := s.cs.newRequest("updatePortForwardingRule", p.toURLValues())
if err != nil {
return nil, err
}
+ resp, err = convertFirewallServiceResponse(resp)
+ if err != nil {
+ return nil, err
+ }
+
var r UpdatePortForwardingRuleResponse
if err := json.Unmarshal(resp, &r); err != nil {
return nil, err
@@ -734,6 +773,11 @@ func (s *FirewallService) UpdatePortForwardingRule(p *UpdatePortForwardingRulePa
return nil, err
}
+ b, err = convertFirewallServiceResponse(b)
+ if err != nil {
+ return nil, err
+ }
+
if err := json.Unmarshal(b, &r); err != nil {
return nil, err
}
@@ -900,13 +944,18 @@ func (s *FirewallService) NewCreateFirewallRuleParams(ipaddressid string, protoc
return p
}
-// Creates a firewall rule for a given ip address
+// Creates a firewall rule for a given IP address
func (s *FirewallService) CreateFirewallRule(p *CreateFirewallRuleParams) (*CreateFirewallRuleResponse, error) {
resp, err := s.cs.newRequest("createFirewallRule", p.toURLValues())
if err != nil {
return nil, err
}
+ resp, err = convertFirewallServiceResponse(resp)
+ if err != nil {
+ return nil, err
+ }
+
var r CreateFirewallRuleResponse
if err := json.Unmarshal(resp, &r); err != nil {
return nil, err
@@ -927,6 +976,11 @@ func (s *FirewallService) CreateFirewallRule(p *CreateFirewallRuleParams) (*Crea
return nil, err
}
+ b, err = convertFirewallServiceResponse(b)
+ if err != nil {
+ return nil, err
+ }
+
if err := json.Unmarshal(b, &r); err != nil {
return nil, err
}
@@ -937,7 +991,7 @@ func (s *FirewallService) CreateFirewallRule(p *CreateFirewallRuleParams) (*Crea
type CreateFirewallRuleResponse struct {
JobID string `json:"jobid,omitempty"`
Cidrlist string `json:"cidrlist,omitempty"`
- Endport string `json:"endport,omitempty"`
+ Endport int `json:"endport,omitempty"`
Fordisplay bool `json:"fordisplay,omitempty"`
Icmpcode int `json:"icmpcode,omitempty"`
Icmptype int `json:"icmptype,omitempty"`
@@ -946,7 +1000,7 @@ type CreateFirewallRuleResponse struct {
Ipaddressid string `json:"ipaddressid,omitempty"`
Networkid string `json:"networkid,omitempty"`
Protocol string `json:"protocol,omitempty"`
- Startport string `json:"startport,omitempty"`
+ Startport int `json:"startport,omitempty"`
State string `json:"state,omitempty"`
Tags []struct {
Account string `json:"account,omitempty"`
@@ -1001,6 +1055,11 @@ func (s *FirewallService) DeleteFirewallRule(p *DeleteFirewallRuleParams) (*Dele
return nil, err
}
+ resp, err = convertFirewallServiceResponse(resp)
+ if err != nil {
+ return nil, err
+ }
+
var r DeleteFirewallRuleResponse
if err := json.Unmarshal(resp, &r); err != nil {
return nil, err
@@ -1016,6 +1075,11 @@ func (s *FirewallService) DeleteFirewallRule(p *DeleteFirewallRuleParams) (*Dele
return nil, err
}
+ b, err = convertFirewallServiceResponse(b)
+ if err != nil {
+ return nil, err
+ }
+
if err := json.Unmarshal(b, &r); err != nil {
return nil, err
}
@@ -1203,12 +1267,18 @@ func (s *FirewallService) NewListFirewallRulesParams() *ListFirewallRulesParams
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *FirewallService) GetFirewallRuleByID(id string) (*FirewallRule, int, error) {
+func (s *FirewallService) GetFirewallRuleByID(id string, opts ...OptionFunc) (*FirewallRule, int, error) {
p := &ListFirewallRulesParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListFirewallRules(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -1219,21 +1289,6 @@ func (s *FirewallService) GetFirewallRuleByID(id string) (*FirewallRule, int, er
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListFirewallRules(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
@@ -1251,6 +1306,11 @@ func (s *FirewallService) ListFirewallRules(p *ListFirewallRulesParams) (*ListFi
return nil, err
}
+ resp, err = convertFirewallServiceResponse(resp)
+ if err != nil {
+ return nil, err
+ }
+
var r ListFirewallRulesResponse
if err := json.Unmarshal(resp, &r); err != nil {
return nil, err
@@ -1265,7 +1325,7 @@ type ListFirewallRulesResponse struct {
type FirewallRule struct {
Cidrlist string `json:"cidrlist,omitempty"`
- Endport string `json:"endport,omitempty"`
+ Endport int `json:"endport,omitempty"`
Fordisplay bool `json:"fordisplay,omitempty"`
Icmpcode int `json:"icmpcode,omitempty"`
Icmptype int `json:"icmptype,omitempty"`
@@ -1274,7 +1334,7 @@ type FirewallRule struct {
Ipaddressid string `json:"ipaddressid,omitempty"`
Networkid string `json:"networkid,omitempty"`
Protocol string `json:"protocol,omitempty"`
- Startport string `json:"startport,omitempty"`
+ Startport int `json:"startport,omitempty"`
State string `json:"state,omitempty"`
Tags []struct {
Account string `json:"account,omitempty"`
@@ -1352,6 +1412,11 @@ func (s *FirewallService) UpdateFirewallRule(p *UpdateFirewallRuleParams) (*Upda
return nil, err
}
+ resp, err = convertFirewallServiceResponse(resp)
+ if err != nil {
+ return nil, err
+ }
+
var r UpdateFirewallRuleResponse
if err := json.Unmarshal(resp, &r); err != nil {
return nil, err
@@ -1372,6 +1437,11 @@ func (s *FirewallService) UpdateFirewallRule(p *UpdateFirewallRuleParams) (*Upda
return nil, err
}
+ b, err = convertFirewallServiceResponse(b)
+ if err != nil {
+ return nil, err
+ }
+
if err := json.Unmarshal(b, &r); err != nil {
return nil, err
}
@@ -1382,7 +1452,7 @@ func (s *FirewallService) UpdateFirewallRule(p *UpdateFirewallRuleParams) (*Upda
type UpdateFirewallRuleResponse struct {
JobID string `json:"jobid,omitempty"`
Cidrlist string `json:"cidrlist,omitempty"`
- Endport string `json:"endport,omitempty"`
+ Endport int `json:"endport,omitempty"`
Fordisplay bool `json:"fordisplay,omitempty"`
Icmpcode int `json:"icmpcode,omitempty"`
Icmptype int `json:"icmptype,omitempty"`
@@ -1391,7 +1461,7 @@ type UpdateFirewallRuleResponse struct {
Ipaddressid string `json:"ipaddressid,omitempty"`
Networkid string `json:"networkid,omitempty"`
Protocol string `json:"protocol,omitempty"`
- Startport string `json:"startport,omitempty"`
+ Startport int `json:"startport,omitempty"`
State string `json:"state,omitempty"`
Tags []struct {
Account string `json:"account,omitempty"`
@@ -1541,6 +1611,11 @@ func (s *FirewallService) CreateEgressFirewallRule(p *CreateEgressFirewallRulePa
return nil, err
}
+ resp, err = convertFirewallServiceResponse(resp)
+ if err != nil {
+ return nil, err
+ }
+
var r CreateEgressFirewallRuleResponse
if err := json.Unmarshal(resp, &r); err != nil {
return nil, err
@@ -1561,6 +1636,11 @@ func (s *FirewallService) CreateEgressFirewallRule(p *CreateEgressFirewallRulePa
return nil, err
}
+ b, err = convertFirewallServiceResponse(b)
+ if err != nil {
+ return nil, err
+ }
+
if err := json.Unmarshal(b, &r); err != nil {
return nil, err
}
@@ -1571,7 +1651,7 @@ func (s *FirewallService) CreateEgressFirewallRule(p *CreateEgressFirewallRulePa
type CreateEgressFirewallRuleResponse struct {
JobID string `json:"jobid,omitempty"`
Cidrlist string `json:"cidrlist,omitempty"`
- Endport string `json:"endport,omitempty"`
+ Endport int `json:"endport,omitempty"`
Fordisplay bool `json:"fordisplay,omitempty"`
Icmpcode int `json:"icmpcode,omitempty"`
Icmptype int `json:"icmptype,omitempty"`
@@ -1580,7 +1660,7 @@ type CreateEgressFirewallRuleResponse struct {
Ipaddressid string `json:"ipaddressid,omitempty"`
Networkid string `json:"networkid,omitempty"`
Protocol string `json:"protocol,omitempty"`
- Startport string `json:"startport,omitempty"`
+ Startport int `json:"startport,omitempty"`
State string `json:"state,omitempty"`
Tags []struct {
Account string `json:"account,omitempty"`
@@ -1628,13 +1708,18 @@ func (s *FirewallService) NewDeleteEgressFirewallRuleParams(id string) *DeleteEg
return p
}
-// Deletes an ggress firewall rule
+// Deletes an egress firewall rule
func (s *FirewallService) DeleteEgressFirewallRule(p *DeleteEgressFirewallRuleParams) (*DeleteEgressFirewallRuleResponse, error) {
resp, err := s.cs.newRequest("deleteEgressFirewallRule", p.toURLValues())
if err != nil {
return nil, err
}
+ resp, err = convertFirewallServiceResponse(resp)
+ if err != nil {
+ return nil, err
+ }
+
var r DeleteEgressFirewallRuleResponse
if err := json.Unmarshal(resp, &r); err != nil {
return nil, err
@@ -1650,6 +1735,11 @@ func (s *FirewallService) DeleteEgressFirewallRule(p *DeleteEgressFirewallRulePa
return nil, err
}
+ b, err = convertFirewallServiceResponse(b)
+ if err != nil {
+ return nil, err
+ }
+
if err := json.Unmarshal(b, &r); err != nil {
return nil, err
}
@@ -1685,9 +1775,6 @@ func (p *ListEgressFirewallRulesParams) toURLValues() url.Values {
if v, found := p.p["id"]; found {
u.Set("id", v.(string))
}
- if v, found := p.p["id"]; found {
- u.Set("id", v.(string))
- }
if v, found := p.p["ipaddressid"]; found {
u.Set("ipaddressid", v.(string))
}
@@ -1705,9 +1792,6 @@ func (p *ListEgressFirewallRulesParams) toURLValues() url.Values {
if v, found := p.p["networkid"]; found {
u.Set("networkid", v.(string))
}
- if v, found := p.p["networkid"]; found {
- u.Set("networkid", v.(string))
- }
if v, found := p.p["page"]; found {
vv := strconv.Itoa(v.(int))
u.Set("page", vv)
@@ -1843,12 +1927,18 @@ func (s *FirewallService) NewListEgressFirewallRulesParams() *ListEgressFirewall
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *FirewallService) GetEgressFirewallRuleByID(id string) (*EgressFirewallRule, int, error) {
+func (s *FirewallService) GetEgressFirewallRuleByID(id string, opts ...OptionFunc) (*EgressFirewallRule, int, error) {
p := &ListEgressFirewallRulesParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListEgressFirewallRules(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -1859,21 +1949,6 @@ func (s *FirewallService) GetEgressFirewallRuleByID(id string) (*EgressFirewallR
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListEgressFirewallRules(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
@@ -1884,13 +1959,18 @@ func (s *FirewallService) GetEgressFirewallRuleByID(id string) (*EgressFirewallR
return nil, l.Count, fmt.Errorf("There is more then one result for EgressFirewallRule UUID: %s!", id)
}
-// Lists all egress firewall rules for network id.
+// Lists all egress firewall rules for network ID.
func (s *FirewallService) ListEgressFirewallRules(p *ListEgressFirewallRulesParams) (*ListEgressFirewallRulesResponse, error) {
resp, err := s.cs.newRequest("listEgressFirewallRules", p.toURLValues())
if err != nil {
return nil, err
}
+ resp, err = convertFirewallServiceResponse(resp)
+ if err != nil {
+ return nil, err
+ }
+
var r ListEgressFirewallRulesResponse
if err := json.Unmarshal(resp, &r); err != nil {
return nil, err
@@ -1905,7 +1985,7 @@ type ListEgressFirewallRulesResponse struct {
type EgressFirewallRule struct {
Cidrlist string `json:"cidrlist,omitempty"`
- Endport string `json:"endport,omitempty"`
+ Endport int `json:"endport,omitempty"`
Fordisplay bool `json:"fordisplay,omitempty"`
Icmpcode int `json:"icmpcode,omitempty"`
Icmptype int `json:"icmptype,omitempty"`
@@ -1914,7 +1994,7 @@ type EgressFirewallRule struct {
Ipaddressid string `json:"ipaddressid,omitempty"`
Networkid string `json:"networkid,omitempty"`
Protocol string `json:"protocol,omitempty"`
- Startport string `json:"startport,omitempty"`
+ Startport int `json:"startport,omitempty"`
State string `json:"state,omitempty"`
Tags []struct {
Account string `json:"account,omitempty"`
@@ -1992,6 +2072,11 @@ func (s *FirewallService) UpdateEgressFirewallRule(p *UpdateEgressFirewallRulePa
return nil, err
}
+ resp, err = convertFirewallServiceResponse(resp)
+ if err != nil {
+ return nil, err
+ }
+
var r UpdateEgressFirewallRuleResponse
if err := json.Unmarshal(resp, &r); err != nil {
return nil, err
@@ -2012,6 +2097,11 @@ func (s *FirewallService) UpdateEgressFirewallRule(p *UpdateEgressFirewallRulePa
return nil, err
}
+ b, err = convertFirewallServiceResponse(b)
+ if err != nil {
+ return nil, err
+ }
+
if err := json.Unmarshal(b, &r); err != nil {
return nil, err
}
@@ -2022,7 +2112,7 @@ func (s *FirewallService) UpdateEgressFirewallRule(p *UpdateEgressFirewallRulePa
type UpdateEgressFirewallRuleResponse struct {
JobID string `json:"jobid,omitempty"`
Cidrlist string `json:"cidrlist,omitempty"`
- Endport string `json:"endport,omitempty"`
+ Endport int `json:"endport,omitempty"`
Fordisplay bool `json:"fordisplay,omitempty"`
Icmpcode int `json:"icmpcode,omitempty"`
Icmptype int `json:"icmptype,omitempty"`
@@ -2031,7 +2121,7 @@ type UpdateEgressFirewallRuleResponse struct {
Ipaddressid string `json:"ipaddressid,omitempty"`
Networkid string `json:"networkid,omitempty"`
Protocol string `json:"protocol,omitempty"`
- Startport string `json:"startport,omitempty"`
+ Startport int `json:"startport,omitempty"`
State string `json:"state,omitempty"`
Tags []struct {
Account string `json:"account,omitempty"`
@@ -2134,6 +2224,11 @@ func (s *FirewallService) AddPaloAltoFirewall(p *AddPaloAltoFirewallParams) (*Ad
return nil, err
}
+ resp, err = convertFirewallServiceResponse(resp)
+ if err != nil {
+ return nil, err
+ }
+
var r AddPaloAltoFirewallResponse
if err := json.Unmarshal(resp, &r); err != nil {
return nil, err
@@ -2154,6 +2249,11 @@ func (s *FirewallService) AddPaloAltoFirewall(p *AddPaloAltoFirewallParams) (*Ad
return nil, err
}
+ b, err = convertFirewallServiceResponse(b)
+ if err != nil {
+ return nil, err
+ }
+
if err := json.Unmarshal(b, &r); err != nil {
return nil, err
}
@@ -2220,6 +2320,11 @@ func (s *FirewallService) DeletePaloAltoFirewall(p *DeletePaloAltoFirewallParams
return nil, err
}
+ resp, err = convertFirewallServiceResponse(resp)
+ if err != nil {
+ return nil, err
+ }
+
var r DeletePaloAltoFirewallResponse
if err := json.Unmarshal(resp, &r); err != nil {
return nil, err
@@ -2235,6 +2340,11 @@ func (s *FirewallService) DeletePaloAltoFirewall(p *DeletePaloAltoFirewallParams
return nil, err
}
+ b, err = convertFirewallServiceResponse(b)
+ if err != nil {
+ return nil, err
+ }
+
if err := json.Unmarshal(b, &r); err != nil {
return nil, err
}
@@ -2299,6 +2409,11 @@ func (s *FirewallService) ConfigurePaloAltoFirewall(p *ConfigurePaloAltoFirewall
return nil, err
}
+ resp, err = convertFirewallServiceResponse(resp)
+ if err != nil {
+ return nil, err
+ }
+
var r ConfigurePaloAltoFirewallResponse
if err := json.Unmarshal(resp, &r); err != nil {
return nil, err
@@ -2319,6 +2434,11 @@ func (s *FirewallService) ConfigurePaloAltoFirewall(p *ConfigurePaloAltoFirewall
return nil, err
}
+ b, err = convertFirewallServiceResponse(b)
+ if err != nil {
+ return nil, err
+ }
+
if err := json.Unmarshal(b, &r); err != nil {
return nil, err
}
@@ -2430,6 +2550,11 @@ func (s *FirewallService) ListPaloAltoFirewalls(p *ListPaloAltoFirewallsParams)
return nil, err
}
+ resp, err = convertFirewallServiceResponse(resp)
+ if err != nil {
+ return nil, err
+ }
+
var r ListPaloAltoFirewallsResponse
if err := json.Unmarshal(resp, &r); err != nil {
return nil, err
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/GuestOSService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/GuestOSService.go
index 1cbff4db0e5b..0a8ce305da49 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/GuestOSService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/GuestOSService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -113,12 +113,18 @@ func (s *GuestOSService) NewListOsTypesParams() *ListOsTypesParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *GuestOSService) GetOsTypeByID(id string) (*OsType, int, error) {
+func (s *GuestOSService) GetOsTypeByID(id string, opts ...OptionFunc) (*OsType, int, error) {
p := &ListOsTypesParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListOsTypes(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -243,12 +249,18 @@ func (s *GuestOSService) NewListOsCategoriesParams() *ListOsCategoriesParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *GuestOSService) GetOsCategoryID(name string) (string, error) {
+func (s *GuestOSService) GetOsCategoryID(name string, opts ...OptionFunc) (string, error) {
p := &ListOsCategoriesParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListOsCategories(p)
if err != nil {
return "", err
@@ -273,13 +285,13 @@ func (s *GuestOSService) GetOsCategoryID(name string) (string, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *GuestOSService) GetOsCategoryByName(name string) (*OsCategory, int, error) {
- id, err := s.GetOsCategoryID(name)
+func (s *GuestOSService) GetOsCategoryByName(name string, opts ...OptionFunc) (*OsCategory, int, error) {
+ id, err := s.GetOsCategoryID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetOsCategoryByID(id)
+ r, count, err := s.GetOsCategoryByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -287,12 +299,18 @@ func (s *GuestOSService) GetOsCategoryByName(name string) (*OsCategory, int, err
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *GuestOSService) GetOsCategoryByID(id string) (*OsCategory, int, error) {
+func (s *GuestOSService) GetOsCategoryByID(id string, opts ...OptionFunc) (*OsCategory, int, error) {
p := &ListOsCategoriesParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListOsCategories(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -687,12 +705,18 @@ func (s *GuestOSService) NewListGuestOsMappingParams() *ListGuestOsMappingParams
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *GuestOSService) GetGuestOsMappingByID(id string) (*GuestOsMapping, int, error) {
+func (s *GuestOSService) GetGuestOsMappingByID(id string, opts ...OptionFunc) (*GuestOsMapping, int, error) {
p := &ListGuestOsMappingParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListGuestOsMapping(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/HostService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/HostService.go
index 427f278f56be..6d0836696be6 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/HostService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/HostService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -176,22 +176,23 @@ func (s *HostService) AddHost(p *AddHostParams) (*AddHostResponse, error) {
}
type AddHostResponse struct {
- Averageload int64 `json:"averageload,omitempty"`
- Capabilities string `json:"capabilities,omitempty"`
- Clusterid string `json:"clusterid,omitempty"`
- Clustername string `json:"clustername,omitempty"`
- Clustertype string `json:"clustertype,omitempty"`
- Cpuallocated string `json:"cpuallocated,omitempty"`
- Cpunumber int `json:"cpunumber,omitempty"`
- Cpusockets int `json:"cpusockets,omitempty"`
- Cpuspeed int64 `json:"cpuspeed,omitempty"`
- Cpuused string `json:"cpuused,omitempty"`
- Cpuwithoverprovisioning string `json:"cpuwithoverprovisioning,omitempty"`
- Created string `json:"created,omitempty"`
- Disconnected string `json:"disconnected,omitempty"`
- Disksizeallocated int64 `json:"disksizeallocated,omitempty"`
- Disksizetotal int64 `json:"disksizetotal,omitempty"`
- Events string `json:"events,omitempty"`
+ Averageload int64 `json:"averageload,omitempty"`
+ Capabilities string `json:"capabilities,omitempty"`
+ Clusterid string `json:"clusterid,omitempty"`
+ Clustername string `json:"clustername,omitempty"`
+ Clustertype string `json:"clustertype,omitempty"`
+ Cpuallocated string `json:"cpuallocated,omitempty"`
+ Cpunumber int `json:"cpunumber,omitempty"`
+ Cpusockets int `json:"cpusockets,omitempty"`
+ Cpuspeed int64 `json:"cpuspeed,omitempty"`
+ Cpuused string `json:"cpuused,omitempty"`
+ Cpuwithoverprovisioning string `json:"cpuwithoverprovisioning,omitempty"`
+ Created string `json:"created,omitempty"`
+ Details map[string]string `json:"details,omitempty"`
+ Disconnected string `json:"disconnected,omitempty"`
+ Disksizeallocated int64 `json:"disksizeallocated,omitempty"`
+ Disksizetotal int64 `json:"disksizetotal,omitempty"`
+ Events string `json:"events,omitempty"`
Gpugroup []struct {
Gpugroupname string `json:"gpugroupname,omitempty"`
Vgpu []struct {
@@ -302,23 +303,24 @@ func (s *HostService) ReconnectHost(p *ReconnectHostParams) (*ReconnectHostRespo
}
type ReconnectHostResponse struct {
- JobID string `json:"jobid,omitempty"`
- Averageload int64 `json:"averageload,omitempty"`
- Capabilities string `json:"capabilities,omitempty"`
- Clusterid string `json:"clusterid,omitempty"`
- Clustername string `json:"clustername,omitempty"`
- Clustertype string `json:"clustertype,omitempty"`
- Cpuallocated string `json:"cpuallocated,omitempty"`
- Cpunumber int `json:"cpunumber,omitempty"`
- Cpusockets int `json:"cpusockets,omitempty"`
- Cpuspeed int64 `json:"cpuspeed,omitempty"`
- Cpuused string `json:"cpuused,omitempty"`
- Cpuwithoverprovisioning string `json:"cpuwithoverprovisioning,omitempty"`
- Created string `json:"created,omitempty"`
- Disconnected string `json:"disconnected,omitempty"`
- Disksizeallocated int64 `json:"disksizeallocated,omitempty"`
- Disksizetotal int64 `json:"disksizetotal,omitempty"`
- Events string `json:"events,omitempty"`
+ JobID string `json:"jobid,omitempty"`
+ Averageload int64 `json:"averageload,omitempty"`
+ Capabilities string `json:"capabilities,omitempty"`
+ Clusterid string `json:"clusterid,omitempty"`
+ Clustername string `json:"clustername,omitempty"`
+ Clustertype string `json:"clustertype,omitempty"`
+ Cpuallocated string `json:"cpuallocated,omitempty"`
+ Cpunumber int `json:"cpunumber,omitempty"`
+ Cpusockets int `json:"cpusockets,omitempty"`
+ Cpuspeed int64 `json:"cpuspeed,omitempty"`
+ Cpuused string `json:"cpuused,omitempty"`
+ Cpuwithoverprovisioning string `json:"cpuwithoverprovisioning,omitempty"`
+ Created string `json:"created,omitempty"`
+ Details map[string]string `json:"details,omitempty"`
+ Disconnected string `json:"disconnected,omitempty"`
+ Disksizeallocated int64 `json:"disksizeallocated,omitempty"`
+ Disksizetotal int64 `json:"disksizetotal,omitempty"`
+ Events string `json:"events,omitempty"`
Gpugroup []struct {
Gpugroupname string `json:"gpugroupname,omitempty"`
Vgpu []struct {
@@ -454,22 +456,23 @@ func (s *HostService) UpdateHost(p *UpdateHostParams) (*UpdateHostResponse, erro
}
type UpdateHostResponse struct {
- Averageload int64 `json:"averageload,omitempty"`
- Capabilities string `json:"capabilities,omitempty"`
- Clusterid string `json:"clusterid,omitempty"`
- Clustername string `json:"clustername,omitempty"`
- Clustertype string `json:"clustertype,omitempty"`
- Cpuallocated string `json:"cpuallocated,omitempty"`
- Cpunumber int `json:"cpunumber,omitempty"`
- Cpusockets int `json:"cpusockets,omitempty"`
- Cpuspeed int64 `json:"cpuspeed,omitempty"`
- Cpuused string `json:"cpuused,omitempty"`
- Cpuwithoverprovisioning string `json:"cpuwithoverprovisioning,omitempty"`
- Created string `json:"created,omitempty"`
- Disconnected string `json:"disconnected,omitempty"`
- Disksizeallocated int64 `json:"disksizeallocated,omitempty"`
- Disksizetotal int64 `json:"disksizetotal,omitempty"`
- Events string `json:"events,omitempty"`
+ Averageload int64 `json:"averageload,omitempty"`
+ Capabilities string `json:"capabilities,omitempty"`
+ Clusterid string `json:"clusterid,omitempty"`
+ Clustername string `json:"clustername,omitempty"`
+ Clustertype string `json:"clustertype,omitempty"`
+ Cpuallocated string `json:"cpuallocated,omitempty"`
+ Cpunumber int `json:"cpunumber,omitempty"`
+ Cpusockets int `json:"cpusockets,omitempty"`
+ Cpuspeed int64 `json:"cpuspeed,omitempty"`
+ Cpuused string `json:"cpuused,omitempty"`
+ Cpuwithoverprovisioning string `json:"cpuwithoverprovisioning,omitempty"`
+ Created string `json:"created,omitempty"`
+ Details map[string]string `json:"details,omitempty"`
+ Disconnected string `json:"disconnected,omitempty"`
+ Disksizeallocated int64 `json:"disksizeallocated,omitempty"`
+ Disksizetotal int64 `json:"disksizetotal,omitempty"`
+ Events string `json:"events,omitempty"`
Gpugroup []struct {
Gpugroupname string `json:"gpugroupname,omitempty"`
Vgpu []struct {
@@ -655,23 +658,24 @@ func (s *HostService) PrepareHostForMaintenance(p *PrepareHostForMaintenancePara
}
type PrepareHostForMaintenanceResponse struct {
- JobID string `json:"jobid,omitempty"`
- Averageload int64 `json:"averageload,omitempty"`
- Capabilities string `json:"capabilities,omitempty"`
- Clusterid string `json:"clusterid,omitempty"`
- Clustername string `json:"clustername,omitempty"`
- Clustertype string `json:"clustertype,omitempty"`
- Cpuallocated string `json:"cpuallocated,omitempty"`
- Cpunumber int `json:"cpunumber,omitempty"`
- Cpusockets int `json:"cpusockets,omitempty"`
- Cpuspeed int64 `json:"cpuspeed,omitempty"`
- Cpuused string `json:"cpuused,omitempty"`
- Cpuwithoverprovisioning string `json:"cpuwithoverprovisioning,omitempty"`
- Created string `json:"created,omitempty"`
- Disconnected string `json:"disconnected,omitempty"`
- Disksizeallocated int64 `json:"disksizeallocated,omitempty"`
- Disksizetotal int64 `json:"disksizetotal,omitempty"`
- Events string `json:"events,omitempty"`
+ JobID string `json:"jobid,omitempty"`
+ Averageload int64 `json:"averageload,omitempty"`
+ Capabilities string `json:"capabilities,omitempty"`
+ Clusterid string `json:"clusterid,omitempty"`
+ Clustername string `json:"clustername,omitempty"`
+ Clustertype string `json:"clustertype,omitempty"`
+ Cpuallocated string `json:"cpuallocated,omitempty"`
+ Cpunumber int `json:"cpunumber,omitempty"`
+ Cpusockets int `json:"cpusockets,omitempty"`
+ Cpuspeed int64 `json:"cpuspeed,omitempty"`
+ Cpuused string `json:"cpuused,omitempty"`
+ Cpuwithoverprovisioning string `json:"cpuwithoverprovisioning,omitempty"`
+ Created string `json:"created,omitempty"`
+ Details map[string]string `json:"details,omitempty"`
+ Disconnected string `json:"disconnected,omitempty"`
+ Disksizeallocated int64 `json:"disksizeallocated,omitempty"`
+ Disksizetotal int64 `json:"disksizetotal,omitempty"`
+ Events string `json:"events,omitempty"`
Gpugroup []struct {
Gpugroupname string `json:"gpugroupname,omitempty"`
Vgpu []struct {
@@ -782,23 +786,24 @@ func (s *HostService) CancelHostMaintenance(p *CancelHostMaintenanceParams) (*Ca
}
type CancelHostMaintenanceResponse struct {
- JobID string `json:"jobid,omitempty"`
- Averageload int64 `json:"averageload,omitempty"`
- Capabilities string `json:"capabilities,omitempty"`
- Clusterid string `json:"clusterid,omitempty"`
- Clustername string `json:"clustername,omitempty"`
- Clustertype string `json:"clustertype,omitempty"`
- Cpuallocated string `json:"cpuallocated,omitempty"`
- Cpunumber int `json:"cpunumber,omitempty"`
- Cpusockets int `json:"cpusockets,omitempty"`
- Cpuspeed int64 `json:"cpuspeed,omitempty"`
- Cpuused string `json:"cpuused,omitempty"`
- Cpuwithoverprovisioning string `json:"cpuwithoverprovisioning,omitempty"`
- Created string `json:"created,omitempty"`
- Disconnected string `json:"disconnected,omitempty"`
- Disksizeallocated int64 `json:"disksizeallocated,omitempty"`
- Disksizetotal int64 `json:"disksizetotal,omitempty"`
- Events string `json:"events,omitempty"`
+ JobID string `json:"jobid,omitempty"`
+ Averageload int64 `json:"averageload,omitempty"`
+ Capabilities string `json:"capabilities,omitempty"`
+ Clusterid string `json:"clusterid,omitempty"`
+ Clustername string `json:"clustername,omitempty"`
+ Clustertype string `json:"clustertype,omitempty"`
+ Cpuallocated string `json:"cpuallocated,omitempty"`
+ Cpunumber int `json:"cpunumber,omitempty"`
+ Cpusockets int `json:"cpusockets,omitempty"`
+ Cpuspeed int64 `json:"cpuspeed,omitempty"`
+ Cpuused string `json:"cpuused,omitempty"`
+ Cpuwithoverprovisioning string `json:"cpuwithoverprovisioning,omitempty"`
+ Created string `json:"created,omitempty"`
+ Details map[string]string `json:"details,omitempty"`
+ Disconnected string `json:"disconnected,omitempty"`
+ Disksizeallocated int64 `json:"disksizeallocated,omitempty"`
+ Disksizetotal int64 `json:"disksizetotal,omitempty"`
+ Events string `json:"events,omitempty"`
Gpugroup []struct {
Gpugroupname string `json:"gpugroupname,omitempty"`
Vgpu []struct {
@@ -1032,12 +1037,18 @@ func (s *HostService) NewListHostsParams() *ListHostsParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *HostService) GetHostID(name string) (string, error) {
+func (s *HostService) GetHostID(name string, opts ...OptionFunc) (string, error) {
p := &ListHostsParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListHosts(p)
if err != nil {
return "", err
@@ -1062,13 +1073,13 @@ func (s *HostService) GetHostID(name string) (string, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *HostService) GetHostByName(name string) (*Host, int, error) {
- id, err := s.GetHostID(name)
+func (s *HostService) GetHostByName(name string, opts ...OptionFunc) (*Host, int, error) {
+ id, err := s.GetHostID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetHostByID(id)
+ r, count, err := s.GetHostByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -1076,12 +1087,18 @@ func (s *HostService) GetHostByName(name string) (*Host, int, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *HostService) GetHostByID(id string) (*Host, int, error) {
+func (s *HostService) GetHostByID(id string, opts ...OptionFunc) (*Host, int, error) {
p := &ListHostsParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListHosts(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -1122,22 +1139,23 @@ type ListHostsResponse struct {
}
type Host struct {
- Averageload int64 `json:"averageload,omitempty"`
- Capabilities string `json:"capabilities,omitempty"`
- Clusterid string `json:"clusterid,omitempty"`
- Clustername string `json:"clustername,omitempty"`
- Clustertype string `json:"clustertype,omitempty"`
- Cpuallocated string `json:"cpuallocated,omitempty"`
- Cpunumber int `json:"cpunumber,omitempty"`
- Cpusockets int `json:"cpusockets,omitempty"`
- Cpuspeed int64 `json:"cpuspeed,omitempty"`
- Cpuused string `json:"cpuused,omitempty"`
- Cpuwithoverprovisioning string `json:"cpuwithoverprovisioning,omitempty"`
- Created string `json:"created,omitempty"`
- Disconnected string `json:"disconnected,omitempty"`
- Disksizeallocated int64 `json:"disksizeallocated,omitempty"`
- Disksizetotal int64 `json:"disksizetotal,omitempty"`
- Events string `json:"events,omitempty"`
+ Averageload int64 `json:"averageload,omitempty"`
+ Capabilities string `json:"capabilities,omitempty"`
+ Clusterid string `json:"clusterid,omitempty"`
+ Clustername string `json:"clustername,omitempty"`
+ Clustertype string `json:"clustertype,omitempty"`
+ Cpuallocated string `json:"cpuallocated,omitempty"`
+ Cpunumber int `json:"cpunumber,omitempty"`
+ Cpusockets int `json:"cpusockets,omitempty"`
+ Cpuspeed int64 `json:"cpuspeed,omitempty"`
+ Cpuused string `json:"cpuused,omitempty"`
+ Cpuwithoverprovisioning string `json:"cpuwithoverprovisioning,omitempty"`
+ Created string `json:"created,omitempty"`
+ Details map[string]string `json:"details,omitempty"`
+ Disconnected string `json:"disconnected,omitempty"`
+ Disksizeallocated int64 `json:"disksizeallocated,omitempty"`
+ Disksizetotal int64 `json:"disksizetotal,omitempty"`
+ Events string `json:"events,omitempty"`
Gpugroup []struct {
Gpugroupname string `json:"gpugroupname,omitempty"`
Vgpu []struct {
@@ -1181,6 +1199,122 @@ type Host struct {
Zonename string `json:"zonename,omitempty"`
}
+type ListHostTagsParams struct {
+ p map[string]interface{}
+}
+
+func (p *ListHostTagsParams) toURLValues() url.Values {
+ u := url.Values{}
+ if p.p == nil {
+ return u
+ }
+ if v, found := p.p["keyword"]; found {
+ u.Set("keyword", v.(string))
+ }
+ if v, found := p.p["page"]; found {
+ vv := strconv.Itoa(v.(int))
+ u.Set("page", vv)
+ }
+ if v, found := p.p["pagesize"]; found {
+ vv := strconv.Itoa(v.(int))
+ u.Set("pagesize", vv)
+ }
+ return u
+}
+
+func (p *ListHostTagsParams) SetKeyword(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["keyword"] = v
+ return
+}
+
+func (p *ListHostTagsParams) SetPage(v int) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["page"] = v
+ return
+}
+
+func (p *ListHostTagsParams) SetPagesize(v int) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["pagesize"] = v
+ return
+}
+
+// You should always use this function to get a new ListHostTagsParams instance,
+// as then you are sure you have configured all required params
+func (s *HostService) NewListHostTagsParams() *ListHostTagsParams {
+ p := &ListHostTagsParams{}
+ p.p = make(map[string]interface{})
+ return p
+}
+
+// This is a courtesy helper function, which in some cases may not work as expected!
+func (s *HostService) GetHostTagID(keyword string, opts ...OptionFunc) (string, error) {
+ p := &ListHostTagsParams{}
+ p.p = make(map[string]interface{})
+
+ p.p["keyword"] = keyword
+
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
+ l, err := s.ListHostTags(p)
+ if err != nil {
+ return "", err
+ }
+
+ if l.Count == 0 {
+ return "", fmt.Errorf("No match found for %s: %+v", keyword, l)
+ }
+
+ if l.Count == 1 {
+ return l.HostTags[0].Id, nil
+ }
+
+ if l.Count > 1 {
+ for _, v := range l.HostTags {
+ if v.Name == keyword {
+ return v.Id, nil
+ }
+ }
+ }
+ return "", fmt.Errorf("Could not find an exact match for %s: %+v", keyword, l)
+}
+
+// Lists host tags
+func (s *HostService) ListHostTags(p *ListHostTagsParams) (*ListHostTagsResponse, error) {
+ resp, err := s.cs.newRequest("listHostTags", p.toURLValues())
+ if err != nil {
+ return nil, err
+ }
+
+ var r ListHostTagsResponse
+ if err := json.Unmarshal(resp, &r); err != nil {
+ return nil, err
+ }
+ return &r, nil
+}
+
+type ListHostTagsResponse struct {
+ Count int `json:"count"`
+ HostTags []*HostTag `json:"hosttag"`
+}
+
+type HostTag struct {
+ Hostid int64 `json:"hostid,omitempty"`
+ Id string `json:"id,omitempty"`
+ Name string `json:"name,omitempty"`
+}
+
type FindHostsForMigrationParams struct {
p map[string]interface{}
}
@@ -1396,6 +1530,10 @@ func (p *UpdateHostPasswordParams) toURLValues() url.Values {
if v, found := p.p["password"]; found {
u.Set("password", v.(string))
}
+ if v, found := p.p["update_passwd_on_host"]; found {
+ vv := strconv.FormatBool(v.(bool))
+ u.Set("update_passwd_on_host", vv)
+ }
if v, found := p.p["username"]; found {
u.Set("username", v.(string))
}
@@ -1426,6 +1564,14 @@ func (p *UpdateHostPasswordParams) SetPassword(v string) {
return
}
+func (p *UpdateHostPasswordParams) SetUpdate_passwd_on_host(v bool) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["update_passwd_on_host"] = v
+ return
+}
+
func (p *UpdateHostPasswordParams) SetUsername(v string) {
if p.p == nil {
p.p = make(map[string]interface{})
@@ -1693,22 +1839,23 @@ func (s *HostService) AddBaremetalHost(p *AddBaremetalHostParams) (*AddBaremetal
}
type AddBaremetalHostResponse struct {
- Averageload int64 `json:"averageload,omitempty"`
- Capabilities string `json:"capabilities,omitempty"`
- Clusterid string `json:"clusterid,omitempty"`
- Clustername string `json:"clustername,omitempty"`
- Clustertype string `json:"clustertype,omitempty"`
- Cpuallocated string `json:"cpuallocated,omitempty"`
- Cpunumber int `json:"cpunumber,omitempty"`
- Cpusockets int `json:"cpusockets,omitempty"`
- Cpuspeed int64 `json:"cpuspeed,omitempty"`
- Cpuused string `json:"cpuused,omitempty"`
- Cpuwithoverprovisioning string `json:"cpuwithoverprovisioning,omitempty"`
- Created string `json:"created,omitempty"`
- Disconnected string `json:"disconnected,omitempty"`
- Disksizeallocated int64 `json:"disksizeallocated,omitempty"`
- Disksizetotal int64 `json:"disksizetotal,omitempty"`
- Events string `json:"events,omitempty"`
+ Averageload int64 `json:"averageload,omitempty"`
+ Capabilities string `json:"capabilities,omitempty"`
+ Clusterid string `json:"clusterid,omitempty"`
+ Clustername string `json:"clustername,omitempty"`
+ Clustertype string `json:"clustertype,omitempty"`
+ Cpuallocated string `json:"cpuallocated,omitempty"`
+ Cpunumber int `json:"cpunumber,omitempty"`
+ Cpusockets int `json:"cpusockets,omitempty"`
+ Cpuspeed int64 `json:"cpuspeed,omitempty"`
+ Cpuused string `json:"cpuused,omitempty"`
+ Cpuwithoverprovisioning string `json:"cpuwithoverprovisioning,omitempty"`
+ Created string `json:"created,omitempty"`
+ Details map[string]string `json:"details,omitempty"`
+ Disconnected string `json:"disconnected,omitempty"`
+ Disksizeallocated int64 `json:"disksizeallocated,omitempty"`
+ Disksizetotal int64 `json:"disksizetotal,omitempty"`
+ Events string `json:"events,omitempty"`
Gpugroup []struct {
Gpugroupname string `json:"gpugroupname,omitempty"`
Vgpu []struct {
@@ -2044,3 +2191,106 @@ type DedicatedHost struct {
Hostname string `json:"hostname,omitempty"`
Id string `json:"id,omitempty"`
}
+
+type AddGloboDnsHostParams struct {
+ p map[string]interface{}
+}
+
+func (p *AddGloboDnsHostParams) toURLValues() url.Values {
+ u := url.Values{}
+ if p.p == nil {
+ return u
+ }
+ if v, found := p.p["password"]; found {
+ u.Set("password", v.(string))
+ }
+ if v, found := p.p["physicalnetworkid"]; found {
+ u.Set("physicalnetworkid", v.(string))
+ }
+ if v, found := p.p["url"]; found {
+ u.Set("url", v.(string))
+ }
+ if v, found := p.p["username"]; found {
+ u.Set("username", v.(string))
+ }
+ return u
+}
+
+func (p *AddGloboDnsHostParams) SetPassword(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["password"] = v
+ return
+}
+
+func (p *AddGloboDnsHostParams) SetPhysicalnetworkid(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["physicalnetworkid"] = v
+ return
+}
+
+func (p *AddGloboDnsHostParams) SetUrl(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["url"] = v
+ return
+}
+
+func (p *AddGloboDnsHostParams) SetUsername(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["username"] = v
+ return
+}
+
+// You should always use this function to get a new AddGloboDnsHostParams instance,
+// as then you are sure you have configured all required params
+func (s *HostService) NewAddGloboDnsHostParams(password string, physicalnetworkid string, url string, username string) *AddGloboDnsHostParams {
+ p := &AddGloboDnsHostParams{}
+ p.p = make(map[string]interface{})
+ p.p["password"] = password
+ p.p["physicalnetworkid"] = physicalnetworkid
+ p.p["url"] = url
+ p.p["username"] = username
+ return p
+}
+
+// Adds the GloboDNS external host
+func (s *HostService) AddGloboDnsHost(p *AddGloboDnsHostParams) (*AddGloboDnsHostResponse, error) {
+ resp, err := s.cs.newRequest("addGloboDnsHost", p.toURLValues())
+ if err != nil {
+ return nil, err
+ }
+
+ var r AddGloboDnsHostResponse
+ if err := json.Unmarshal(resp, &r); err != nil {
+ return nil, err
+ }
+
+ // If we have an async client, we need to wait for the async result
+ if s.cs.async {
+ b, err := s.cs.GetAsyncJobResult(r.JobID, s.cs.timeout)
+ if err != nil {
+ if err == AsyncTimeoutErr {
+ return &r, err
+ }
+ return nil, err
+ }
+
+ if err := json.Unmarshal(b, &r); err != nil {
+ return nil, err
+ }
+ }
+ return &r, nil
+}
+
+type AddGloboDnsHostResponse struct {
+ JobID string `json:"jobid,omitempty"`
+ Displaytext string `json:"displaytext,omitempty"`
+ Success bool `json:"success,omitempty"`
+}
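
Beyond the field and option changes, this HostService diff adds two new APIs: `listHostTags` (a params type, the list call, and a `GetHostTagID` courtesy helper) and `addGloboDnsHost`. A short usage sketch — the endpoint, keys, UUIDs, and credentials are placeholders, and `NewAsyncClient` is the package's standard constructor rather than something introduced by this diff:

```go
package main

import (
	"fmt"
	"log"

	"github.com/xanzy/go-cloudstack/cloudstack"
)

func main() {
	// Placeholder endpoint and keys; the async client waits for jobs such as
	// addGloboDnsHost to complete before returning.
	cs := cloudstack.NewAsyncClient("https://cloud.example.com/client/api", "api-key", "secret-key", true)

	// List host tags using the params type added above.
	lp := cs.Host.NewListHostTagsParams()
	lp.SetPagesize(20)
	tags, err := cs.Host.ListHostTags(lp)
	if err != nil {
		log.Fatal(err)
	}
	for _, t := range tags.HostTags {
		fmt.Printf("tag %s (id %s) on host %d\n", t.Name, t.Id, t.Hostid)
	}

	// Register the GloboDNS external host; all four arguments are required,
	// which is why they go through the params constructor.
	gp := cs.Host.NewAddGloboDnsHostParams("dns-password", "physical-network-uuid", "http://globodns.example.com", "dns-user")
	r, err := cs.Host.AddGloboDnsHost(gp)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("GloboDNS host added:", r.Success, r.Displaytext)
}
```
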
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/HypervisorService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/HypervisorService.go
index aaabfadaabcc..c172615c9874 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/HypervisorService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/HypervisorService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -236,12 +236,18 @@ func (s *HypervisorService) NewListHypervisorCapabilitiesParams() *ListHyperviso
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *HypervisorService) GetHypervisorCapabilityByID(id string) (*HypervisorCapability, int, error) {
+func (s *HypervisorService) GetHypervisorCapabilityByID(id string, opts ...OptionFunc) (*HypervisorCapability, int, error) {
p := &ListHypervisorCapabilitiesParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListHypervisorCapabilities(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/ISOService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/ISOService.go
index 62348a7f0000..7cf1c92ec5c0 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/ISOService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/ISOService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -112,6 +112,8 @@ type AttachIsoResponse struct {
Domainid string `json:"domainid,omitempty"`
Id string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
Type string `json:"type,omitempty"`
VirtualmachineIds []string `json:"virtualmachineIds,omitempty"`
} `json:"affinitygroup,omitempty"`
@@ -248,6 +250,8 @@ type AttachIsoResponse struct {
Resourcetype string `json:"resourcetype,omitempty"`
Value string `json:"value,omitempty"`
} `json:"tags,omitempty"`
+ Virtualmachinecount int `json:"virtualmachinecount,omitempty"`
+ Virtualmachineids []string `json:"virtualmachineids,omitempty"`
} `json:"securitygroup,omitempty"`
Serviceofferingid string `json:"serviceofferingid,omitempty"`
Serviceofferingname string `json:"serviceofferingname,omitempty"`
@@ -268,6 +272,8 @@ type AttachIsoResponse struct {
Templatedisplaytext string `json:"templatedisplaytext,omitempty"`
Templateid string `json:"templateid,omitempty"`
Templatename string `json:"templatename,omitempty"`
+ Userid string `json:"userid,omitempty"`
+ Username string `json:"username,omitempty"`
Vgpu string `json:"vgpu,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
Zonename string `json:"zonename,omitempty"`
@@ -349,6 +355,8 @@ type DetachIsoResponse struct {
Domainid string `json:"domainid,omitempty"`
Id string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
Type string `json:"type,omitempty"`
VirtualmachineIds []string `json:"virtualmachineIds,omitempty"`
} `json:"affinitygroup,omitempty"`
@@ -485,6 +493,8 @@ type DetachIsoResponse struct {
Resourcetype string `json:"resourcetype,omitempty"`
Value string `json:"value,omitempty"`
} `json:"tags,omitempty"`
+ Virtualmachinecount int `json:"virtualmachinecount,omitempty"`
+ Virtualmachineids []string `json:"virtualmachineids,omitempty"`
} `json:"securitygroup,omitempty"`
Serviceofferingid string `json:"serviceofferingid,omitempty"`
Serviceofferingname string `json:"serviceofferingname,omitempty"`
@@ -505,6 +515,8 @@ type DetachIsoResponse struct {
Templatedisplaytext string `json:"templatedisplaytext,omitempty"`
Templateid string `json:"templateid,omitempty"`
Templatename string `json:"templatename,omitempty"`
+ Userid string `json:"userid,omitempty"`
+ Username string `json:"username,omitempty"`
Vgpu string `json:"vgpu,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
Zonename string `json:"zonename,omitempty"`
@@ -742,7 +754,7 @@ func (s *ISOService) NewListIsosParams() *ListIsosParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *ISOService) GetIsoID(name string, isofilter string, zoneid string) (string, error) {
+func (s *ISOService) GetIsoID(name string, isofilter string, zoneid string, opts ...OptionFunc) (string, error) {
p := &ListIsosParams{}
p.p = make(map[string]interface{})
@@ -750,21 +762,17 @@ func (s *ISOService) GetIsoID(name string, isofilter string, zoneid string) (str
p.p["isofilter"] = isofilter
p.p["zoneid"] = zoneid
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListIsos(p)
if err != nil {
return "", err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListIsos(p)
- if err != nil {
- return "", err
- }
- }
-
if l.Count == 0 {
return "", fmt.Errorf("No match found for %s: %+v", name, l)
}
@@ -784,13 +792,13 @@ func (s *ISOService) GetIsoID(name string, isofilter string, zoneid string) (str
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *ISOService) GetIsoByName(name string, isofilter string, zoneid string) (*Iso, int, error) {
- id, err := s.GetIsoID(name, isofilter, zoneid)
+func (s *ISOService) GetIsoByName(name string, isofilter string, zoneid string, opts ...OptionFunc) (*Iso, int, error) {
+ id, err := s.GetIsoID(name, isofilter, zoneid, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetIsoByID(id)
+ r, count, err := s.GetIsoByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -798,12 +806,18 @@ func (s *ISOService) GetIsoByName(name string, isofilter string, zoneid string)
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *ISOService) GetIsoByID(id string) (*Iso, int, error) {
+func (s *ISOService) GetIsoByID(id string, opts ...OptionFunc) (*Iso, int, error) {
p := &ListIsosParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListIsos(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -814,21 +828,6 @@ func (s *ISOService) GetIsoByID(id string) (*Iso, int, error) {
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListIsos(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
@@ -1214,6 +1213,10 @@ func (p *UpdateIsoParams) toURLValues() url.Values {
vv := strconv.FormatBool(v.(bool))
u.Set("passwordenabled", vv)
}
+ if v, found := p.p["requireshvm"]; found {
+ vv := strconv.FormatBool(v.(bool))
+ u.Set("requireshvm", vv)
+ }
if v, found := p.p["sortkey"]; found {
vv := strconv.Itoa(v.(int))
u.Set("sortkey", vv)
@@ -1301,6 +1304,14 @@ func (p *UpdateIsoParams) SetPasswordenabled(v bool) {
return
}
+func (p *UpdateIsoParams) SetRequireshvm(v bool) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["requireshvm"] = v
+ return
+}
+
func (p *UpdateIsoParams) SetSortkey(v int) {
if p.p == nil {
p.p = make(map[string]interface{})
@@ -1515,7 +1526,7 @@ func (s *ISOService) NewCopyIsoParams(destzoneid string, id string) *CopyIsoPara
return p
}
-// Copies an iso from one zone to another.
+// Copies an ISO from one zone to another.
func (s *ISOService) CopyIso(p *CopyIsoParams) (*CopyIsoResponse, error) {
resp, err := s.cs.newRequest("copyIso", p.toURLValues())
if err != nil {
@@ -1703,7 +1714,7 @@ func (s *ISOService) NewUpdateIsoPermissionsParams(id string) *UpdateIsoPermissi
return p
}
-// Updates iso permissions
+// Updates ISO permissions
func (s *ISOService) UpdateIsoPermissions(p *UpdateIsoPermissionsParams) (*UpdateIsoPermissionsResponse, error) {
resp, err := s.cs.newRequest("updateIsoPermissions", p.toURLValues())
if err != nil {
@@ -1755,13 +1766,19 @@ func (s *ISOService) NewListIsoPermissionsParams(id string) *ListIsoPermissionsP
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *ISOService) GetIsoPermissionByID(id string) (*IsoPermission, int, error) {
+func (s *ISOService) GetIsoPermissionByID(id string, opts ...OptionFunc) (*IsoPermission, int, error) {
p := &ListIsoPermissionsParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListIsoPermissions(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
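
The ISO hunks above also remove the implicit `projectid = "-1"` retry from `GetIsoID` and `GetIsoByID`, so a caller that needs a project-scoped lookup now passes an option instead. A hedged sketch of that, assuming `OptionFunc` has the signature these hunks imply (`func(*CloudStackClient, interface{}) error`) and that the generated `ListIsosParams` exposes a `SetProjectid` setter; `withProject` itself is hypothetical and not part of the library:

```go
package main

import (
	"fmt"
	"log"

	"github.com/xanzy/go-cloudstack/cloudstack"
)

// withProject builds an option that sets the projectid on any params value
// exposing the generated SetProjectid setter, replacing the removed fallback.
func withProject(projectID string) cloudstack.OptionFunc {
	return func(_ *cloudstack.CloudStackClient, p interface{}) error {
		ps, ok := p.(interface{ SetProjectid(string) })
		if !ok {
			return fmt.Errorf("params %T do not support projectid", p)
		}
		ps.SetProjectid(projectID)
		return nil
	}
}

func main() {
	// Placeholder endpoint, keys, and UUIDs.
	cs := cloudstack.NewClient("https://cloud.example.com/client/api", "api-key", "secret-key", true)

	iso, count, err := cs.ISO.GetIsoByName("debian-8", "executable", "zone-uuid", withProject("project-uuid"))
	if err != nil {
		log.Fatalf("ISO lookup failed (%d matches): %v", count, err)
	}
	fmt.Println(iso.Id, iso.Name)
}
```
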
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/ImageStoreService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/ImageStoreService.go
index 83194d8204ad..1795e4c9bac8 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/ImageStoreService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/ImageStoreService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -131,6 +131,183 @@ type AddImageStoreResponse struct {
Zonename string `json:"zonename,omitempty"`
}
+type AddImageStoreS3Params struct {
+ p map[string]interface{}
+}
+
+func (p *AddImageStoreS3Params) toURLValues() url.Values {
+ u := url.Values{}
+ if p.p == nil {
+ return u
+ }
+ if v, found := p.p["accesskey"]; found {
+ u.Set("accesskey", v.(string))
+ }
+ if v, found := p.p["bucket"]; found {
+ u.Set("bucket", v.(string))
+ }
+ if v, found := p.p["connectiontimeout"]; found {
+ vv := strconv.Itoa(v.(int))
+ u.Set("connectiontimeout", vv)
+ }
+ if v, found := p.p["connectionttl"]; found {
+ vv := strconv.Itoa(v.(int))
+ u.Set("connectionttl", vv)
+ }
+ if v, found := p.p["endpoint"]; found {
+ u.Set("endpoint", v.(string))
+ }
+ if v, found := p.p["maxerrorretry"]; found {
+ vv := strconv.Itoa(v.(int))
+ u.Set("maxerrorretry", vv)
+ }
+ if v, found := p.p["s3signer"]; found {
+ u.Set("s3signer", v.(string))
+ }
+ if v, found := p.p["secretkey"]; found {
+ u.Set("secretkey", v.(string))
+ }
+ if v, found := p.p["sockettimeout"]; found {
+ vv := strconv.Itoa(v.(int))
+ u.Set("sockettimeout", vv)
+ }
+ if v, found := p.p["usehttps"]; found {
+ vv := strconv.FormatBool(v.(bool))
+ u.Set("usehttps", vv)
+ }
+ if v, found := p.p["usetcpkeepalive"]; found {
+ vv := strconv.FormatBool(v.(bool))
+ u.Set("usetcpkeepalive", vv)
+ }
+ return u
+}
+
+func (p *AddImageStoreS3Params) SetAccesskey(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["accesskey"] = v
+ return
+}
+
+func (p *AddImageStoreS3Params) SetBucket(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["bucket"] = v
+ return
+}
+
+func (p *AddImageStoreS3Params) SetConnectiontimeout(v int) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["connectiontimeout"] = v
+ return
+}
+
+func (p *AddImageStoreS3Params) SetConnectionttl(v int) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["connectionttl"] = v
+ return
+}
+
+func (p *AddImageStoreS3Params) SetEndpoint(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["endpoint"] = v
+ return
+}
+
+func (p *AddImageStoreS3Params) SetMaxerrorretry(v int) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["maxerrorretry"] = v
+ return
+}
+
+func (p *AddImageStoreS3Params) SetS3signer(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["s3signer"] = v
+ return
+}
+
+func (p *AddImageStoreS3Params) SetSecretkey(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["secretkey"] = v
+ return
+}
+
+func (p *AddImageStoreS3Params) SetSockettimeout(v int) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["sockettimeout"] = v
+ return
+}
+
+func (p *AddImageStoreS3Params) SetUsehttps(v bool) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["usehttps"] = v
+ return
+}
+
+func (p *AddImageStoreS3Params) SetUsetcpkeepalive(v bool) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["usetcpkeepalive"] = v
+ return
+}
+
+// You should always use this function to get a new AddImageStoreS3Params instance,
+// as then you are sure you have configured all required params
+func (s *ImageStoreService) NewAddImageStoreS3Params(accesskey string, bucket string, endpoint string, secretkey string) *AddImageStoreS3Params {
+ p := &AddImageStoreS3Params{}
+ p.p = make(map[string]interface{})
+ p.p["accesskey"] = accesskey
+ p.p["bucket"] = bucket
+ p.p["endpoint"] = endpoint
+ p.p["secretkey"] = secretkey
+ return p
+}
+
+// Adds S3 Image Store
+func (s *ImageStoreService) AddImageStoreS3(p *AddImageStoreS3Params) (*AddImageStoreS3Response, error) {
+ resp, err := s.cs.newRequest("addImageStoreS3", p.toURLValues())
+ if err != nil {
+ return nil, err
+ }
+
+ var r AddImageStoreS3Response
+ if err := json.Unmarshal(resp, &r); err != nil {
+ return nil, err
+ }
+ return &r, nil
+}
+
+type AddImageStoreS3Response struct {
+ Details []string `json:"details,omitempty"`
+ Id string `json:"id,omitempty"`
+ Name string `json:"name,omitempty"`
+ Protocol string `json:"protocol,omitempty"`
+ Providername string `json:"providername,omitempty"`
+ Scope string `json:"scope,omitempty"`
+ Url string `json:"url,omitempty"`
+ Zoneid string `json:"zoneid,omitempty"`
+ Zonename string `json:"zonename,omitempty"`
+}
+
type ListImageStoresParams struct {
p map[string]interface{}
}
@@ -242,12 +419,18 @@ func (s *ImageStoreService) NewListImageStoresParams() *ListImageStoresParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *ImageStoreService) GetImageStoreID(name string) (string, error) {
+func (s *ImageStoreService) GetImageStoreID(name string, opts ...OptionFunc) (string, error) {
p := &ListImageStoresParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListImageStores(p)
if err != nil {
return "", err
@@ -272,13 +455,13 @@ func (s *ImageStoreService) GetImageStoreID(name string) (string, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *ImageStoreService) GetImageStoreByName(name string) (*ImageStore, int, error) {
- id, err := s.GetImageStoreID(name)
+func (s *ImageStoreService) GetImageStoreByName(name string, opts ...OptionFunc) (*ImageStore, int, error) {
+ id, err := s.GetImageStoreID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetImageStoreByID(id)
+ r, count, err := s.GetImageStoreByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -286,12 +469,18 @@ func (s *ImageStoreService) GetImageStoreByName(name string) (*ImageStore, int,
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *ImageStoreService) GetImageStoreByID(id string) (*ImageStore, int, error) {
+func (s *ImageStoreService) GetImageStoreByID(id string, opts ...OptionFunc) (*ImageStore, int, error) {
p := &ListImageStoresParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListImageStores(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -375,7 +564,7 @@ func (s *ImageStoreService) NewDeleteImageStoreParams(id string) *DeleteImageSto
return p
}
-// Deletes an image store .
+// Deletes an image store or Secondary Storage.
func (s *ImageStoreService) DeleteImageStore(p *DeleteImageStoreParams) (*DeleteImageStoreResponse, error) {
resp, err := s.cs.newRequest("deleteImageStore", p.toURLValues())
if err != nil {
@@ -612,12 +801,18 @@ func (s *ImageStoreService) NewListSecondaryStagingStoresParams() *ListSecondary
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *ImageStoreService) GetSecondaryStagingStoreID(name string) (string, error) {
+func (s *ImageStoreService) GetSecondaryStagingStoreID(name string, opts ...OptionFunc) (string, error) {
p := &ListSecondaryStagingStoresParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListSecondaryStagingStores(p)
if err != nil {
return "", err
@@ -642,13 +837,13 @@ func (s *ImageStoreService) GetSecondaryStagingStoreID(name string) (string, err
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *ImageStoreService) GetSecondaryStagingStoreByName(name string) (*SecondaryStagingStore, int, error) {
- id, err := s.GetSecondaryStagingStoreID(name)
+func (s *ImageStoreService) GetSecondaryStagingStoreByName(name string, opts ...OptionFunc) (*SecondaryStagingStore, int, error) {
+ id, err := s.GetSecondaryStagingStoreID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetSecondaryStagingStoreByID(id)
+ r, count, err := s.GetSecondaryStagingStoreByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -656,12 +851,18 @@ func (s *ImageStoreService) GetSecondaryStagingStoreByName(name string) (*Second
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *ImageStoreService) GetSecondaryStagingStoreByID(id string) (*SecondaryStagingStore, int, error) {
+func (s *ImageStoreService) GetSecondaryStagingStoreByID(id string, opts ...OptionFunc) (*SecondaryStagingStore, int, error) {
p := &ListSecondaryStagingStoresParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListSecondaryStagingStores(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
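
The new `addImageStoreS3` call above follows the usual generated layout: the four required arguments go through the params constructor, everything else through setters. A usage sketch — endpoint, keys, and bucket are placeholders, and `NewClient` plus the `cs.ImageStore` service field are the package's standard plumbing rather than part of this diff:

```go
package main

import (
	"fmt"
	"log"

	"github.com/xanzy/go-cloudstack/cloudstack"
)

func main() {
	// Placeholder endpoint and keys.
	cs := cloudstack.NewClient("https://cloud.example.com/client/api", "api-key", "secret-key", true)

	// accesskey, bucket, endpoint, and secretkey are required; the remaining
	// S3 options are set individually.
	p := cs.ImageStore.NewAddImageStoreS3Params("s3-access-key", "templates-bucket", "s3.example.com", "s3-secret-key")
	p.SetUsehttps(true)
	p.SetConnectiontimeout(30)

	r, err := cs.ImageStore.AddImageStoreS3(p)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("added S3 image store %s (%s) via provider %s\n", r.Name, r.Id, r.Providername)
}
```
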
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/InternalLBService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/InternalLBService.go
index b4fceda57dc6..bc0505c41bbc 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/InternalLBService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/InternalLBService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -273,12 +273,18 @@ func (s *InternalLBService) NewListInternalLoadBalancerElementsParams() *ListInt
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *InternalLBService) GetInternalLoadBalancerElementByID(id string) (*InternalLoadBalancerElement, int, error) {
+func (s *InternalLBService) GetInternalLoadBalancerElementByID(id string, opts ...OptionFunc) (*InternalLoadBalancerElement, int, error) {
p := &ListInternalLoadBalancerElementsParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListInternalLoadBalancerElements(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -415,8 +421,10 @@ type StopInternalLoadBalancerVMResponse struct {
Guestmacaddress string `json:"guestmacaddress,omitempty"`
Guestnetmask string `json:"guestnetmask,omitempty"`
Guestnetworkid string `json:"guestnetworkid,omitempty"`
+ Guestnetworkname string `json:"guestnetworkname,omitempty"`
Hostid string `json:"hostid,omitempty"`
Hostname string `json:"hostname,omitempty"`
+ Hypervisor string `json:"hypervisor,omitempty"`
Id string `json:"id,omitempty"`
Ip6dns1 string `json:"ip6dns1,omitempty"`
Ip6dns2 string `json:"ip6dns2,omitempty"`
@@ -467,6 +475,7 @@ type StopInternalLoadBalancerVMResponse struct {
Templateid string `json:"templateid,omitempty"`
Version string `json:"version,omitempty"`
Vpcid string `json:"vpcid,omitempty"`
+ Vpcname string `json:"vpcname,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
Zonename string `json:"zonename,omitempty"`
}
@@ -550,8 +559,10 @@ type StartInternalLoadBalancerVMResponse struct {
Guestmacaddress string `json:"guestmacaddress,omitempty"`
Guestnetmask string `json:"guestnetmask,omitempty"`
Guestnetworkid string `json:"guestnetworkid,omitempty"`
+ Guestnetworkname string `json:"guestnetworkname,omitempty"`
Hostid string `json:"hostid,omitempty"`
Hostname string `json:"hostname,omitempty"`
+ Hypervisor string `json:"hypervisor,omitempty"`
Id string `json:"id,omitempty"`
Ip6dns1 string `json:"ip6dns1,omitempty"`
Ip6dns2 string `json:"ip6dns2,omitempty"`
@@ -602,6 +613,7 @@ type StartInternalLoadBalancerVMResponse struct {
Templateid string `json:"templateid,omitempty"`
Version string `json:"version,omitempty"`
Vpcid string `json:"vpcid,omitempty"`
+ Vpcname string `json:"vpcname,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
Zonename string `json:"zonename,omitempty"`
}
@@ -819,27 +831,23 @@ func (s *InternalLBService) NewListInternalLoadBalancerVMsParams() *ListInternal
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *InternalLBService) GetInternalLoadBalancerVMID(name string) (string, error) {
+func (s *InternalLBService) GetInternalLoadBalancerVMID(name string, opts ...OptionFunc) (string, error) {
p := &ListInternalLoadBalancerVMsParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListInternalLoadBalancerVMs(p)
if err != nil {
return "", err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListInternalLoadBalancerVMs(p)
- if err != nil {
- return "", err
- }
- }
-
if l.Count == 0 {
return "", fmt.Errorf("No match found for %s: %+v", name, l)
}
@@ -859,13 +867,13 @@ func (s *InternalLBService) GetInternalLoadBalancerVMID(name string) (string, er
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *InternalLBService) GetInternalLoadBalancerVMByName(name string) (*InternalLoadBalancerVM, int, error) {
- id, err := s.GetInternalLoadBalancerVMID(name)
+func (s *InternalLBService) GetInternalLoadBalancerVMByName(name string, opts ...OptionFunc) (*InternalLoadBalancerVM, int, error) {
+ id, err := s.GetInternalLoadBalancerVMID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetInternalLoadBalancerVMByID(id)
+ r, count, err := s.GetInternalLoadBalancerVMByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -873,12 +881,18 @@ func (s *InternalLBService) GetInternalLoadBalancerVMByName(name string) (*Inter
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *InternalLBService) GetInternalLoadBalancerVMByID(id string) (*InternalLoadBalancerVM, int, error) {
+func (s *InternalLBService) GetInternalLoadBalancerVMByID(id string, opts ...OptionFunc) (*InternalLoadBalancerVM, int, error) {
p := &ListInternalLoadBalancerVMsParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListInternalLoadBalancerVMs(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -889,21 +903,6 @@ func (s *InternalLBService) GetInternalLoadBalancerVMByID(id string) (*InternalL
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListInternalLoadBalancerVMs(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
@@ -945,8 +944,10 @@ type InternalLoadBalancerVM struct {
Guestmacaddress string `json:"guestmacaddress,omitempty"`
Guestnetmask string `json:"guestnetmask,omitempty"`
Guestnetworkid string `json:"guestnetworkid,omitempty"`
+ Guestnetworkname string `json:"guestnetworkname,omitempty"`
Hostid string `json:"hostid,omitempty"`
Hostname string `json:"hostname,omitempty"`
+ Hypervisor string `json:"hypervisor,omitempty"`
Id string `json:"id,omitempty"`
Ip6dns1 string `json:"ip6dns1,omitempty"`
Ip6dns2 string `json:"ip6dns2,omitempty"`
@@ -997,6 +998,7 @@ type InternalLoadBalancerVM struct {
Templateid string `json:"templateid,omitempty"`
Version string `json:"version,omitempty"`
Vpcid string `json:"vpcid,omitempty"`
+ Vpcname string `json:"vpcname,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
Zonename string `json:"zonename,omitempty"`
}
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/LDAPService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/LDAPService.go
index f7890854a276..18935c294af4 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/LDAPService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/LDAPService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/LimitService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/LimitService.go
index 6aa7e7292f6d..276abecf8a4b 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/LimitService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/LimitService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/LoadBalancerService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/LoadBalancerService.go
index 03d1ec4ecbfd..c70cf5fb528d 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/LoadBalancerService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/LoadBalancerService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -628,7 +628,7 @@ func (s *LoadBalancerService) NewCreateLBStickinessPolicyParams(lbruleid string,
return p
}
-// Creates a Load Balancer stickiness policy
+// Creates a load balancer stickiness policy
func (s *LoadBalancerService) CreateLBStickinessPolicy(p *CreateLBStickinessPolicyParams) (*CreateLBStickinessPolicyResponse, error) {
resp, err := s.cs.newRequest("createLBStickinessPolicy", p.toURLValues())
if err != nil {
@@ -738,7 +738,7 @@ func (s *LoadBalancerService) NewUpdateLBStickinessPolicyParams(id string) *Upda
return p
}
-// Updates LB Stickiness policy
+// Updates load balancer stickiness policy
func (s *LoadBalancerService) UpdateLBStickinessPolicy(p *UpdateLBStickinessPolicyParams) (*UpdateLBStickinessPolicyResponse, error) {
resp, err := s.cs.newRequest("updateLBStickinessPolicy", p.toURLValues())
if err != nil {
@@ -825,7 +825,7 @@ func (s *LoadBalancerService) NewDeleteLBStickinessPolicyParams(id string) *Dele
return p
}
-// Deletes a LB stickiness policy.
+// Deletes a load balancer stickiness policy.
func (s *LoadBalancerService) DeleteLBStickinessPolicy(p *DeleteLBStickinessPolicyParams) (*DeleteLBStickinessPolicyResponse, error) {
resp, err := s.cs.newRequest("deleteLBStickinessPolicy", p.toURLValues())
if err != nil {
@@ -1067,27 +1067,23 @@ func (s *LoadBalancerService) NewListLoadBalancerRulesParams() *ListLoadBalancer
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *LoadBalancerService) GetLoadBalancerRuleID(name string) (string, error) {
+func (s *LoadBalancerService) GetLoadBalancerRuleID(name string, opts ...OptionFunc) (string, error) {
p := &ListLoadBalancerRulesParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListLoadBalancerRules(p)
if err != nil {
return "", err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListLoadBalancerRules(p)
- if err != nil {
- return "", err
- }
- }
-
if l.Count == 0 {
return "", fmt.Errorf("No match found for %s: %+v", name, l)
}
@@ -1107,13 +1103,13 @@ func (s *LoadBalancerService) GetLoadBalancerRuleID(name string) (string, error)
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *LoadBalancerService) GetLoadBalancerRuleByName(name string) (*LoadBalancerRule, int, error) {
- id, err := s.GetLoadBalancerRuleID(name)
+func (s *LoadBalancerService) GetLoadBalancerRuleByName(name string, opts ...OptionFunc) (*LoadBalancerRule, int, error) {
+ id, err := s.GetLoadBalancerRuleID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetLoadBalancerRuleByID(id)
+ r, count, err := s.GetLoadBalancerRuleByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -1121,12 +1117,18 @@ func (s *LoadBalancerService) GetLoadBalancerRuleByName(name string) (*LoadBalan
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *LoadBalancerService) GetLoadBalancerRuleByID(id string) (*LoadBalancerRule, int, error) {
+func (s *LoadBalancerService) GetLoadBalancerRuleByID(id string, opts ...OptionFunc) (*LoadBalancerRule, int, error) {
p := &ListLoadBalancerRulesParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListLoadBalancerRules(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -1137,21 +1139,6 @@ func (s *LoadBalancerService) GetLoadBalancerRuleByID(id string) (*LoadBalancerR
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListLoadBalancerRules(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
@@ -1305,12 +1292,18 @@ func (s *LoadBalancerService) NewListLBStickinessPoliciesParams() *ListLBStickin
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *LoadBalancerService) GetLBStickinessPolicyByID(id string) (*LBStickinessPolicy, int, error) {
+func (s *LoadBalancerService) GetLBStickinessPolicyByID(id string, opts ...OptionFunc) (*LBStickinessPolicy, int, error) {
p := &ListLBStickinessPoliciesParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListLBStickinessPolicies(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -1331,7 +1324,7 @@ func (s *LoadBalancerService) GetLBStickinessPolicyByID(id string) (*LBStickines
return nil, l.Count, fmt.Errorf("There is more than one result for LBStickinessPolicy UUID: %s!", id)
}
-// Lists LBStickiness policies.
+// Lists load balancer stickiness policies.
func (s *LoadBalancerService) ListLBStickinessPolicies(p *ListLBStickinessPoliciesParams) (*ListLBStickinessPoliciesResponse, error) {
resp, err := s.cs.newRequest("listLBStickinessPolicies", p.toURLValues())
if err != nil {
@@ -1460,12 +1453,18 @@ func (s *LoadBalancerService) NewListLBHealthCheckPoliciesParams() *ListLBHealth
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *LoadBalancerService) GetLBHealthCheckPolicyByID(id string) (*LBHealthCheckPolicy, int, error) {
+func (s *LoadBalancerService) GetLBHealthCheckPolicyByID(id string, opts ...OptionFunc) (*LBHealthCheckPolicy, int, error) {
p := &ListLBHealthCheckPoliciesParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListLBHealthCheckPolicies(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -1486,7 +1485,7 @@ func (s *LoadBalancerService) GetLBHealthCheckPolicyByID(id string) (*LBHealthCh
return nil, l.Count, fmt.Errorf("There is more than one result for LBHealthCheckPolicy UUID: %s!", id)
}
-// Lists load balancer HealthCheck policies.
+// Lists load balancer health check policies.
func (s *LoadBalancerService) ListLBHealthCheckPolicies(p *ListLBHealthCheckPoliciesParams) (*ListLBHealthCheckPoliciesResponse, error) {
resp, err := s.cs.newRequest("listLBHealthCheckPolicies", p.toURLValues())
if err != nil {
@@ -1638,7 +1637,7 @@ func (s *LoadBalancerService) NewCreateLBHealthCheckPolicyParams(lbruleid string
return p
}
-// Creates a Load Balancer healthcheck policy
+// Creates a load balancer health check policy
func (s *LoadBalancerService) CreateLBHealthCheckPolicy(p *CreateLBHealthCheckPolicyParams) (*CreateLBHealthCheckPolicyResponse, error) {
resp, err := s.cs.newRequest("createLBHealthCheckPolicy", p.toURLValues())
if err != nil {
@@ -1747,7 +1746,7 @@ func (s *LoadBalancerService) NewUpdateLBHealthCheckPolicyParams(id string) *Upd
return p
}
-// Updates LB HealthCheck policy
+// Updates load balancer health check policy
func (s *LoadBalancerService) UpdateLBHealthCheckPolicy(p *UpdateLBHealthCheckPolicyParams) (*UpdateLBHealthCheckPolicyResponse, error) {
resp, err := s.cs.newRequest("updateLBHealthCheckPolicy", p.toURLValues())
if err != nil {
@@ -1833,7 +1832,7 @@ func (s *LoadBalancerService) NewDeleteLBHealthCheckPolicyParams(id string) *Del
return p
}
-// Deletes a load balancer HealthCheck policy.
+// Deletes a load balancer health check policy.
func (s *LoadBalancerService) DeleteLBHealthCheckPolicy(p *DeleteLBHealthCheckPolicyParams) (*DeleteLBHealthCheckPolicyResponse, error) {
resp, err := s.cs.newRequest("deleteLBHealthCheckPolicy", p.toURLValues())
if err != nil {
@@ -1960,13 +1959,19 @@ func (s *LoadBalancerService) NewListLoadBalancerRuleInstancesParams(id string)
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *LoadBalancerService) GetLoadBalancerRuleInstanceByID(id string) (*LoadBalancerRuleInstance, int, error) {
+func (s *LoadBalancerService) GetLoadBalancerRuleInstanceByID(id string, opts ...OptionFunc) (*LoadBalancerRuleInstance, int, error) {
p := &ListLoadBalancerRuleInstancesParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListLoadBalancerRuleInstances(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -2267,7 +2272,7 @@ func (s *LoadBalancerService) NewUploadSslCertParams(certificate string, private
return p
}
-// Upload a certificate to cloudstack
+// Upload a certificate to CloudStack
func (s *LoadBalancerService) UploadSslCert(p *UploadSslCertParams) (*UploadSslCertResponse, error) {
resp, err := s.cs.newRequest("uploadSslCert", p.toURLValues())
if err != nil {
@@ -2326,7 +2331,7 @@ func (s *LoadBalancerService) NewDeleteSslCertParams(id string) *DeleteSslCertPa
return p
}
-// Delete a certificate to cloudstack
+// Delete a certificate from CloudStack
func (s *LoadBalancerService) DeleteSslCert(p *DeleteSslCertParams) (*DeleteSslCertResponse, error) {
resp, err := s.cs.newRequest("deleteSslCert", p.toURLValues())
if err != nil {
@@ -2485,7 +2490,7 @@ func (s *LoadBalancerService) NewAssignCertToLoadBalancerParams(certid string, l
return p
}
-// Assigns a certificate to a Load Balancer Rule
+// Assigns a certificate to a load balancer rule
func (s *LoadBalancerService) AssignCertToLoadBalancer(p *AssignCertToLoadBalancerParams) (*AssignCertToLoadBalancerResponse, error) {
resp, err := s.cs.newRequest("assignCertToLoadBalancer", p.toURLValues())
if err != nil {
@@ -2552,7 +2557,7 @@ func (s *LoadBalancerService) NewRemoveCertFromLoadBalancerParams(lbruleid strin
return p
}
-// Removes a certificate from a Load Balancer Rule
+// Removes a certificate from a load balancer rule
func (s *LoadBalancerService) RemoveCertFromLoadBalancer(p *RemoveCertFromLoadBalancerParams) (*RemoveCertFromLoadBalancerResponse, error) {
resp, err := s.cs.newRequest("removeCertFromLoadBalancer", p.toURLValues())
if err != nil {
@@ -3656,27 +3661,23 @@ func (s *LoadBalancerService) NewListGlobalLoadBalancerRulesParams() *ListGlobal
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *LoadBalancerService) GetGlobalLoadBalancerRuleID(keyword string) (string, error) {
+func (s *LoadBalancerService) GetGlobalLoadBalancerRuleID(keyword string, opts ...OptionFunc) (string, error) {
p := &ListGlobalLoadBalancerRulesParams{}
p.p = make(map[string]interface{})
p.p["keyword"] = keyword
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListGlobalLoadBalancerRules(p)
if err != nil {
return "", err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListGlobalLoadBalancerRules(p)
- if err != nil {
- return "", err
- }
- }
-
if l.Count == 0 {
return "", fmt.Errorf("No match found for %s: %+v", keyword, l)
}
@@ -3696,13 +3697,13 @@ func (s *LoadBalancerService) GetGlobalLoadBalancerRuleID(keyword string) (strin
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *LoadBalancerService) GetGlobalLoadBalancerRuleByName(name string) (*GlobalLoadBalancerRule, int, error) {
- id, err := s.GetGlobalLoadBalancerRuleID(name)
+func (s *LoadBalancerService) GetGlobalLoadBalancerRuleByName(name string, opts ...OptionFunc) (*GlobalLoadBalancerRule, int, error) {
+ id, err := s.GetGlobalLoadBalancerRuleID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetGlobalLoadBalancerRuleByID(id)
+ r, count, err := s.GetGlobalLoadBalancerRuleByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -3710,12 +3711,18 @@ func (s *LoadBalancerService) GetGlobalLoadBalancerRuleByName(name string) (*Glo
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *LoadBalancerService) GetGlobalLoadBalancerRuleByID(id string) (*GlobalLoadBalancerRule, int, error) {
+func (s *LoadBalancerService) GetGlobalLoadBalancerRuleByID(id string, opts ...OptionFunc) (*GlobalLoadBalancerRule, int, error) {
p := &ListGlobalLoadBalancerRulesParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListGlobalLoadBalancerRules(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -3726,21 +3733,6 @@ func (s *LoadBalancerService) GetGlobalLoadBalancerRuleByID(id string) (*GlobalL
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListGlobalLoadBalancerRules(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
@@ -4135,7 +4127,7 @@ func (s *LoadBalancerService) NewCreateLoadBalancerParams(algorithm string, inst
return p
}
-// Creates a Load Balancer
+// Creates a load balancer
func (s *LoadBalancerService) CreateLoadBalancer(p *CreateLoadBalancerParams) (*CreateLoadBalancerResponse, error) {
resp, err := s.cs.newRequest("createLoadBalancer", p.toURLValues())
if err != nil {
@@ -4416,27 +4408,23 @@ func (s *LoadBalancerService) NewListLoadBalancersParams() *ListLoadBalancersPar
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *LoadBalancerService) GetLoadBalancerID(name string) (string, error) {
+func (s *LoadBalancerService) GetLoadBalancerID(name string, opts ...OptionFunc) (string, error) {
p := &ListLoadBalancersParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListLoadBalancers(p)
if err != nil {
return "", err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListLoadBalancers(p)
- if err != nil {
- return "", err
- }
- }
-
if l.Count == 0 {
return "", fmt.Errorf("No match found for %s: %+v", name, l)
}
@@ -4456,13 +4444,13 @@ func (s *LoadBalancerService) GetLoadBalancerID(name string) (string, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *LoadBalancerService) GetLoadBalancerByName(name string) (*LoadBalancer, int, error) {
- id, err := s.GetLoadBalancerID(name)
+func (s *LoadBalancerService) GetLoadBalancerByName(name string, opts ...OptionFunc) (*LoadBalancer, int, error) {
+ id, err := s.GetLoadBalancerID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetLoadBalancerByID(id)
+ r, count, err := s.GetLoadBalancerByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -4470,12 +4458,18 @@ func (s *LoadBalancerService) GetLoadBalancerByName(name string) (*LoadBalancer,
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *LoadBalancerService) GetLoadBalancerByID(id string) (*LoadBalancer, int, error) {
+func (s *LoadBalancerService) GetLoadBalancerByID(id string, opts ...OptionFunc) (*LoadBalancer, int, error) {
p := &ListLoadBalancersParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListLoadBalancers(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -4486,21 +4480,6 @@ func (s *LoadBalancerService) GetLoadBalancerByID(id string) (*LoadBalancer, int
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListLoadBalancers(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
@@ -4511,7 +4490,7 @@ func (s *LoadBalancerService) GetLoadBalancerByID(id string) (*LoadBalancer, int
return nil, l.Count, fmt.Errorf("There is more then one result for LoadBalancer UUID: %s!", id)
}
-// Lists Load Balancers
+// Lists load balancers
func (s *LoadBalancerService) ListLoadBalancers(p *ListLoadBalancersParams) (*ListLoadBalancersResponse, error) {
resp, err := s.cs.newRequest("listLoadBalancers", p.toURLValues())
if err != nil {
@@ -4691,7 +4670,7 @@ func (s *LoadBalancerService) NewUpdateLoadBalancerParams(id string) *UpdateLoad
return p
}
-// Updates a Load Balancer
+// Updates a load balancer
func (s *LoadBalancerService) UpdateLoadBalancer(p *UpdateLoadBalancerParams) (*UpdateLoadBalancerResponse, error) {
resp, err := s.cs.newRequest("updateLoadBalancer", p.toURLValues())
if err != nil {
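The hunks above replace the implicit "search all projects" retry inside the courtesy Get* helpers with a variadic `opts ...OptionFunc` parameter, so callers now scope a lookup explicitly. A minimal sketch of the new signature, assuming the package also exposes a project-scoping helper such as `WithProject` (not shown in this diff), the usual `NewAsyncClient` constructor, and the `cs.LoadBalancer` service field:

package main

import (
	"fmt"
	"log"

	"github.com/xanzy/go-cloudstack/cloudstack"
)

func main() {
	// Hypothetical endpoint and keys; replace with real credentials.
	cs := cloudstack.NewAsyncClient("https://cloud.example.com/client/api", "apiKey", "secretKey", true)

	// The old behaviour retried the lookup with projectid=-1 when nothing was
	// found; with this change the caller opts in to project scoping instead.
	lb, count, err := cs.LoadBalancer.GetLoadBalancerByID(
		"7d1f1a9c-1234-5678-9abc-def012345678", // hypothetical UUID
		cloudstack.WithProject("my-project"),   // assumed option helper, not part of this diff
	)
	if err != nil {
		log.Fatalf("lookup failed (count %d): %v", count, err)
	}
	fmt.Println("found load balancer:", lb.Name)
}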
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/LoginService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/LoginService.go
deleted file mode 100644
index bd6db4a97ed5..000000000000
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/LoginService.go
+++ /dev/null
@@ -1,17 +0,0 @@
-//
-// Copyright 2014, Sander van Harmelen
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-//
-
-package cloudstack
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/LogoutService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/LogoutService.go
deleted file mode 100644
index bd6db4a97ed5..000000000000
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/LogoutService.go
+++ /dev/null
@@ -1,17 +0,0 @@
-//
-// Copyright 2014, Sander van Harmelen
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-//
-
-package cloudstack
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/NATService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/NATService.go
index 25ad9a1627a6..daaf10edeb6b 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/NATService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/NATService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -90,7 +90,7 @@ func (s *NATService) NewEnableStaticNatParams(ipaddressid string, virtualmachine
return p
}
-// Enables static nat for given ip address
+// Enables static NAT for a given IP address
func (s *NATService) EnableStaticNat(p *EnableStaticNatParams) (*EnableStaticNatResponse, error) {
resp, err := s.cs.newRequest("enableStaticNat", p.toURLValues())
if err != nil {
@@ -202,7 +202,7 @@ func (s *NATService) NewCreateIpForwardingRuleParams(ipaddressid string, protoco
return p
}
-// Creates an ip forwarding rule
+// Creates an IP forwarding rule
func (s *NATService) CreateIpForwardingRule(p *CreateIpForwardingRuleParams) (*CreateIpForwardingRuleResponse, error) {
resp, err := s.cs.newRequest("createIpForwardingRule", p.toURLValues())
if err != nil {
@@ -300,7 +300,7 @@ func (s *NATService) NewDeleteIpForwardingRuleParams(id string) *DeleteIpForward
return p
}
-// Deletes an ip forwarding rule
+// Deletes an IP forwarding rule
func (s *NATService) DeleteIpForwardingRule(p *DeleteIpForwardingRuleParams) (*DeleteIpForwardingRuleResponse, error) {
resp, err := s.cs.newRequest("deleteIpForwardingRule", p.toURLValues())
if err != nil {
@@ -481,12 +481,18 @@ func (s *NATService) NewListIpForwardingRulesParams() *ListIpForwardingRulesPara
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *NATService) GetIpForwardingRuleByID(id string) (*IpForwardingRule, int, error) {
+func (s *NATService) GetIpForwardingRuleByID(id string, opts ...OptionFunc) (*IpForwardingRule, int, error) {
p := &ListIpForwardingRulesParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListIpForwardingRules(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -497,21 +503,6 @@ func (s *NATService) GetIpForwardingRuleByID(id string) (*IpForwardingRule, int,
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListIpForwardingRules(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
@@ -522,7 +513,7 @@ func (s *NATService) GetIpForwardingRuleByID(id string) (*IpForwardingRule, int,
return nil, l.Count, fmt.Errorf("There is more then one result for IpForwardingRule UUID: %s!", id)
}
-// List the ip forwarding rules
+// List the IP forwarding rules
func (s *NATService) ListIpForwardingRules(p *ListIpForwardingRulesParams) (*ListIpForwardingRulesResponse, error) {
resp, err := s.cs.newRequest("listIpForwardingRules", p.toURLValues())
if err != nil {
@@ -604,7 +595,7 @@ func (s *NATService) NewDisableStaticNatParams(ipaddressid string) *DisableStati
return p
}
-// Disables static rule for given ip address
+// Disables the static rule for a given IP address
func (s *NATService) DisableStaticNat(p *DisableStaticNatParams) (*DisableStaticNatResponse, error) {
resp, err := s.cs.newRequest("disableStaticNat", p.toURLValues())
if err != nil {
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/NetworkACLService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/NetworkACLService.go
index cc748f0612e1..d195d443b333 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/NetworkACLService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/NetworkACLService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -407,7 +407,7 @@ func (s *NetworkACLService) NewUpdateNetworkACLItemParams(id string) *UpdateNetw
return p
}
-// Updates ACL Item with specified Id
+// Updates ACL item with specified ID
func (s *NetworkACLService) UpdateNetworkACLItem(p *UpdateNetworkACLItemParams) (*UpdateNetworkACLItemResponse, error) {
resp, err := s.cs.newRequest("updateNetworkACLItem", p.toURLValues())
if err != nil {
@@ -502,7 +502,7 @@ func (s *NetworkACLService) NewDeleteNetworkACLParams(id string) *DeleteNetworkA
return p
}
-// Deletes a Network ACL
+// Deletes a network ACL
func (s *NetworkACLService) DeleteNetworkACL(p *DeleteNetworkACLParams) (*DeleteNetworkACLResponse, error) {
resp, err := s.cs.newRequest("deleteNetworkACL", p.toURLValues())
if err != nil {
@@ -744,12 +744,18 @@ func (s *NetworkACLService) NewListNetworkACLsParams() *ListNetworkACLsParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *NetworkACLService) GetNetworkACLByID(id string) (*NetworkACL, int, error) {
+func (s *NetworkACLService) GetNetworkACLByID(id string, opts ...OptionFunc) (*NetworkACL, int, error) {
p := &ListNetworkACLsParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListNetworkACLs(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -760,21 +766,6 @@ func (s *NetworkACLService) GetNetworkACLByID(id string) (*NetworkACL, int, erro
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListNetworkACLs(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
@@ -899,7 +890,7 @@ func (s *NetworkACLService) NewCreateNetworkACLListParams(name string, vpcid str
return p
}
-// Creates a Network ACL for the given VPC
+// Creates a network ACL for the given VPC
func (s *NetworkACLService) CreateNetworkACLList(p *CreateNetworkACLListParams) (*CreateNetworkACLListResponse, error) {
resp, err := s.cs.newRequest("createNetworkACLList", p.toURLValues())
if err != nil {
@@ -974,7 +965,7 @@ func (s *NetworkACLService) NewDeleteNetworkACLListParams(id string) *DeleteNetw
return p
}
-// Deletes a Network ACL
+// Deletes a network ACL
func (s *NetworkACLService) DeleteNetworkACLList(p *DeleteNetworkACLListParams) (*DeleteNetworkACLListResponse, error) {
resp, err := s.cs.newRequest("deleteNetworkACLList", p.toURLValues())
if err != nil {
@@ -1063,7 +1054,7 @@ func (s *NetworkACLService) NewReplaceNetworkACLListParams(aclid string) *Replac
return p
}
-// Replaces ACL associated with a Network or private gateway
+// Replaces ACL associated with a network or private gateway
func (s *NetworkACLService) ReplaceNetworkACLList(p *ReplaceNetworkACLListParams) (*ReplaceNetworkACLListResponse, error) {
resp, err := s.cs.newRequest("replaceNetworkACLList", p.toURLValues())
if err != nil {
@@ -1267,27 +1258,23 @@ func (s *NetworkACLService) NewListNetworkACLListsParams() *ListNetworkACLListsP
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *NetworkACLService) GetNetworkACLListID(name string) (string, error) {
+func (s *NetworkACLService) GetNetworkACLListID(name string, opts ...OptionFunc) (string, error) {
p := &ListNetworkACLListsParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListNetworkACLLists(p)
if err != nil {
return "", err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListNetworkACLLists(p)
- if err != nil {
- return "", err
- }
- }
-
if l.Count == 0 {
return "", fmt.Errorf("No match found for %s: %+v", name, l)
}
@@ -1307,13 +1294,13 @@ func (s *NetworkACLService) GetNetworkACLListID(name string) (string, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *NetworkACLService) GetNetworkACLListByName(name string) (*NetworkACLList, int, error) {
- id, err := s.GetNetworkACLListID(name)
+func (s *NetworkACLService) GetNetworkACLListByName(name string, opts ...OptionFunc) (*NetworkACLList, int, error) {
+ id, err := s.GetNetworkACLListID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetNetworkACLListByID(id)
+ r, count, err := s.GetNetworkACLListByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -1321,12 +1308,18 @@ func (s *NetworkACLService) GetNetworkACLListByName(name string) (*NetworkACLLis
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *NetworkACLService) GetNetworkACLListByID(id string) (*NetworkACLList, int, error) {
+func (s *NetworkACLService) GetNetworkACLListByID(id string, opts ...OptionFunc) (*NetworkACLList, int, error) {
p := &ListNetworkACLListsParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListNetworkACLLists(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -1337,21 +1330,6 @@ func (s *NetworkACLService) GetNetworkACLListByID(id string) (*NetworkACLList, i
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListNetworkACLLists(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
@@ -1444,7 +1422,7 @@ func (s *NetworkACLService) NewUpdateNetworkACLListParams(id string) *UpdateNetw
return p
}
-// Updates Network ACL list
+// Updates network ACL list
func (s *NetworkACLService) UpdateNetworkACLList(p *UpdateNetworkACLListParams) (*UpdateNetworkACLListResponse, error) {
resp, err := s.cs.newRequest("updateNetworkACLList", p.toURLValues())
if err != nil {
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/NetworkDeviceService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/NetworkDeviceService.go
index 7b8e64bcc431..878d5286ef5a 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/NetworkDeviceService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/NetworkDeviceService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/NetworkOfferingService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/NetworkOfferingService.go
index 0e57e8a7d10f..43b03c1c8385 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/NetworkOfferingService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/NetworkOfferingService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -808,12 +808,18 @@ func (s *NetworkOfferingService) NewListNetworkOfferingsParams() *ListNetworkOff
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *NetworkOfferingService) GetNetworkOfferingID(name string) (string, error) {
+func (s *NetworkOfferingService) GetNetworkOfferingID(name string, opts ...OptionFunc) (string, error) {
p := &ListNetworkOfferingsParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListNetworkOfferings(p)
if err != nil {
return "", err
@@ -838,13 +844,13 @@ func (s *NetworkOfferingService) GetNetworkOfferingID(name string) (string, erro
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *NetworkOfferingService) GetNetworkOfferingByName(name string) (*NetworkOffering, int, error) {
- id, err := s.GetNetworkOfferingID(name)
+func (s *NetworkOfferingService) GetNetworkOfferingByName(name string, opts ...OptionFunc) (*NetworkOffering, int, error) {
+ id, err := s.GetNetworkOfferingID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetNetworkOfferingByID(id)
+ r, count, err := s.GetNetworkOfferingByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -852,12 +858,18 @@ func (s *NetworkOfferingService) GetNetworkOfferingByName(name string) (*Network
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *NetworkOfferingService) GetNetworkOfferingByID(id string) (*NetworkOffering, int, error) {
+func (s *NetworkOfferingService) GetNetworkOfferingByID(id string, opts ...OptionFunc) (*NetworkOffering, int, error) {
p := &ListNetworkOfferingsParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListNetworkOfferings(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/NetworkService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/NetworkService.go
index 1e9bda43c449..43acd05aa3a3 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/NetworkService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/NetworkService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -82,10 +82,9 @@ func (p *DedicatePublicIpRangeParams) SetProjectid(v string) {
// You should always use this function to get a new DedicatePublicIpRangeParams instance,
// as then you are sure you have configured all required params
-func (s *NetworkService) NewDedicatePublicIpRangeParams(account string, domainid string, id string) *DedicatePublicIpRangeParams {
+func (s *NetworkService) NewDedicatePublicIpRangeParams(domainid string, id string) *DedicatePublicIpRangeParams {
p := &DedicatePublicIpRangeParams{}
p.p = make(map[string]interface{})
- p.p["account"] = account
p.p["domainid"] = domainid
p.p["id"] = id
return p
@@ -258,9 +257,6 @@ func (p *CreateNetworkParams) toURLValues() url.Values {
if v, found := p.p["vlan"]; found {
u.Set("vlan", v.(string))
}
- if v, found := p.p["vlan"]; found {
- u.Set("vlan", v.(string))
- }
if v, found := p.p["vpcid"]; found {
u.Set("vpcid", v.(string))
}
@@ -939,27 +935,23 @@ func (s *NetworkService) NewListNetworksParams() *ListNetworksParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *NetworkService) GetNetworkID(keyword string) (string, error) {
+func (s *NetworkService) GetNetworkID(keyword string, opts ...OptionFunc) (string, error) {
p := &ListNetworksParams{}
p.p = make(map[string]interface{})
p.p["keyword"] = keyword
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListNetworks(p)
if err != nil {
return "", err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListNetworks(p)
- if err != nil {
- return "", err
- }
- }
-
if l.Count == 0 {
return "", fmt.Errorf("No match found for %s: %+v", keyword, l)
}
@@ -979,13 +971,13 @@ func (s *NetworkService) GetNetworkID(keyword string) (string, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *NetworkService) GetNetworkByName(name string) (*Network, int, error) {
- id, err := s.GetNetworkID(name)
+func (s *NetworkService) GetNetworkByName(name string, opts ...OptionFunc) (*Network, int, error) {
+ id, err := s.GetNetworkID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetNetworkByID(id)
+ r, count, err := s.GetNetworkByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -993,12 +985,18 @@ func (s *NetworkService) GetNetworkByName(name string) (*Network, int, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *NetworkService) GetNetworkByID(id string) (*Network, int, error) {
+func (s *NetworkService) GetNetworkByID(id string, opts ...OptionFunc) (*Network, int, error) {
p := &ListNetworksParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListNetworks(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -1009,21 +1007,6 @@ func (s *NetworkService) GetNetworkByID(id string) (*Network, int, error) {
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListNetworks(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
@@ -1175,7 +1158,7 @@ func (s *NetworkService) NewRestartNetworkParams(id string) *RestartNetworkParam
return p
}
-// Restarts the network; includes 1) restarting network elements - virtual routers, dhcp servers 2) reapplying all public ips 3) reapplying loadBalancing/portForwarding rules
+// Restarts the network; includes 1) restarting network elements - virtual routers, DHCP servers 2) reapplying all public IPs 3) reapplying loadBalancing/portForwarding rules
func (s *NetworkService) RestartNetwork(p *RestartNetworkParams) (*RestartNetworkResponse, error) {
resp, err := s.cs.newRequest("restartNetwork", p.toURLValues())
if err != nil {
@@ -1805,12 +1788,18 @@ func (s *NetworkService) NewListPhysicalNetworksParams() *ListPhysicalNetworksPa
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *NetworkService) GetPhysicalNetworkID(name string) (string, error) {
+func (s *NetworkService) GetPhysicalNetworkID(name string, opts ...OptionFunc) (string, error) {
p := &ListPhysicalNetworksParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListPhysicalNetworks(p)
if err != nil {
return "", err
@@ -1835,13 +1824,13 @@ func (s *NetworkService) GetPhysicalNetworkID(name string) (string, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *NetworkService) GetPhysicalNetworkByName(name string) (*PhysicalNetwork, int, error) {
- id, err := s.GetPhysicalNetworkID(name)
+func (s *NetworkService) GetPhysicalNetworkByName(name string, opts ...OptionFunc) (*PhysicalNetwork, int, error) {
+ id, err := s.GetPhysicalNetworkID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetPhysicalNetworkByID(id)
+ r, count, err := s.GetPhysicalNetworkByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -1849,12 +1838,18 @@ func (s *NetworkService) GetPhysicalNetworkByName(name string) (*PhysicalNetwork
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *NetworkService) GetPhysicalNetworkByID(id string) (*PhysicalNetwork, int, error) {
+func (s *NetworkService) GetPhysicalNetworkByID(id string, opts ...OptionFunc) (*PhysicalNetwork, int, error) {
p := &ListPhysicalNetworksParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListPhysicalNetworks(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -2414,12 +2409,18 @@ func (s *NetworkService) NewListNetworkServiceProvidersParams() *ListNetworkServ
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *NetworkService) GetNetworkServiceProviderID(name string) (string, error) {
+func (s *NetworkService) GetNetworkServiceProviderID(name string, opts ...OptionFunc) (string, error) {
p := &ListNetworkServiceProvidersParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListNetworkServiceProviders(p)
if err != nil {
return "", err
@@ -2866,12 +2867,18 @@ func (s *NetworkService) NewListStorageNetworkIpRangeParams() *ListStorageNetwor
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *NetworkService) GetStorageNetworkIpRangeByID(id string) (*StorageNetworkIpRange, int, error) {
+func (s *NetworkService) GetStorageNetworkIpRangeByID(id string, opts ...OptionFunc) (*StorageNetworkIpRange, int, error) {
p := &ListStorageNetworkIpRangeParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListStorageNetworkIpRange(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -3115,13 +3122,19 @@ func (s *NetworkService) NewListPaloAltoFirewallNetworksParams(lbdeviceid string
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *NetworkService) GetPaloAltoFirewallNetworkID(keyword string, lbdeviceid string) (string, error) {
+func (s *NetworkService) GetPaloAltoFirewallNetworkID(keyword string, lbdeviceid string, opts ...OptionFunc) (string, error) {
p := &ListPaloAltoFirewallNetworksParams{}
p.p = make(map[string]interface{})
p.p["keyword"] = keyword
p.p["lbdeviceid"] = lbdeviceid
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListPaloAltoFirewallNetworks(p)
if err != nil {
return "", err
@@ -3310,13 +3323,19 @@ func (s *NetworkService) NewListNetscalerLoadBalancerNetworksParams(lbdeviceid s
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *NetworkService) GetNetscalerLoadBalancerNetworkID(keyword string, lbdeviceid string) (string, error) {
+func (s *NetworkService) GetNetscalerLoadBalancerNetworkID(keyword string, lbdeviceid string, opts ...OptionFunc) (string, error) {
p := &ListNetscalerLoadBalancerNetworksParams{}
p.p = make(map[string]interface{})
p.p["keyword"] = keyword
p.p["lbdeviceid"] = lbdeviceid
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListNetscalerLoadBalancerNetworks(p)
if err != nil {
return "", err
@@ -3505,13 +3524,19 @@ func (s *NetworkService) NewListNiciraNvpDeviceNetworksParams(nvpdeviceid string
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *NetworkService) GetNiciraNvpDeviceNetworkID(keyword string, nvpdeviceid string) (string, error) {
+func (s *NetworkService) GetNiciraNvpDeviceNetworkID(keyword string, nvpdeviceid string, opts ...OptionFunc) (string, error) {
p := &ListNiciraNvpDeviceNetworksParams{}
p.p = make(map[string]interface{})
p.p["keyword"] = keyword
p.p["nvpdeviceid"] = nvpdeviceid
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListNiciraNvpDeviceNetworks(p)
if err != nil {
return "", err
@@ -3709,3 +3734,291 @@ type ListNetworkIsolationMethodsResponse struct {
type NetworkIsolationMethod struct {
Name string `json:"name,omitempty"`
}
+
+type AddOpenDaylightControllerParams struct {
+ p map[string]interface{}
+}
+
+func (p *AddOpenDaylightControllerParams) toURLValues() url.Values {
+ u := url.Values{}
+ if p.p == nil {
+ return u
+ }
+ if v, found := p.p["password"]; found {
+ u.Set("password", v.(string))
+ }
+ if v, found := p.p["physicalnetworkid"]; found {
+ u.Set("physicalnetworkid", v.(string))
+ }
+ if v, found := p.p["url"]; found {
+ u.Set("url", v.(string))
+ }
+ if v, found := p.p["username"]; found {
+ u.Set("username", v.(string))
+ }
+ return u
+}
+
+func (p *AddOpenDaylightControllerParams) SetPassword(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["password"] = v
+ return
+}
+
+func (p *AddOpenDaylightControllerParams) SetPhysicalnetworkid(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["physicalnetworkid"] = v
+ return
+}
+
+func (p *AddOpenDaylightControllerParams) SetUrl(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["url"] = v
+ return
+}
+
+func (p *AddOpenDaylightControllerParams) SetUsername(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["username"] = v
+ return
+}
+
+// You should always use this function to get a new AddOpenDaylightControllerParams instance,
+// as then you are sure you have configured all required params
+func (s *NetworkService) NewAddOpenDaylightControllerParams(password string, physicalnetworkid string, url string, username string) *AddOpenDaylightControllerParams {
+ p := &AddOpenDaylightControllerParams{}
+ p.p = make(map[string]interface{})
+ p.p["password"] = password
+ p.p["physicalnetworkid"] = physicalnetworkid
+ p.p["url"] = url
+ p.p["username"] = username
+ return p
+}
+
+// Adds an OpenDaylight controller
+func (s *NetworkService) AddOpenDaylightController(p *AddOpenDaylightControllerParams) (*AddOpenDaylightControllerResponse, error) {
+ resp, err := s.cs.newRequest("addOpenDaylightController", p.toURLValues())
+ if err != nil {
+ return nil, err
+ }
+
+ var r AddOpenDaylightControllerResponse
+ if err := json.Unmarshal(resp, &r); err != nil {
+ return nil, err
+ }
+
+ // If we have an async client, we need to wait for the async result
+ if s.cs.async {
+ b, err := s.cs.GetAsyncJobResult(r.JobID, s.cs.timeout)
+ if err != nil {
+ if err == AsyncTimeoutErr {
+ return &r, err
+ }
+ return nil, err
+ }
+
+ b, err = getRawValue(b)
+ if err != nil {
+ return nil, err
+ }
+
+ if err := json.Unmarshal(b, &r); err != nil {
+ return nil, err
+ }
+ }
+ return &r, nil
+}
+
+type AddOpenDaylightControllerResponse struct {
+ JobID string `json:"jobid,omitempty"`
+ Id string `json:"id,omitempty"`
+ Name string `json:"name,omitempty"`
+ Physicalnetworkid string `json:"physicalnetworkid,omitempty"`
+ Url string `json:"url,omitempty"`
+ Username string `json:"username,omitempty"`
+}
+
+type DeleteOpenDaylightControllerParams struct {
+ p map[string]interface{}
+}
+
+func (p *DeleteOpenDaylightControllerParams) toURLValues() url.Values {
+ u := url.Values{}
+ if p.p == nil {
+ return u
+ }
+ if v, found := p.p["id"]; found {
+ u.Set("id", v.(string))
+ }
+ return u
+}
+
+func (p *DeleteOpenDaylightControllerParams) SetId(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["id"] = v
+ return
+}
+
+// You should always use this function to get a new DeleteOpenDaylightControllerParams instance,
+// as then you are sure you have configured all required params
+func (s *NetworkService) NewDeleteOpenDaylightControllerParams(id string) *DeleteOpenDaylightControllerParams {
+ p := &DeleteOpenDaylightControllerParams{}
+ p.p = make(map[string]interface{})
+ p.p["id"] = id
+ return p
+}
+
+// Removes an OpenDaylight controller
+func (s *NetworkService) DeleteOpenDaylightController(p *DeleteOpenDaylightControllerParams) (*DeleteOpenDaylightControllerResponse, error) {
+ resp, err := s.cs.newRequest("deleteOpenDaylightController", p.toURLValues())
+ if err != nil {
+ return nil, err
+ }
+
+ var r DeleteOpenDaylightControllerResponse
+ if err := json.Unmarshal(resp, &r); err != nil {
+ return nil, err
+ }
+
+ // If we have an async client, we need to wait for the async result
+ if s.cs.async {
+ b, err := s.cs.GetAsyncJobResult(r.JobID, s.cs.timeout)
+ if err != nil {
+ if err == AsyncTimeoutErr {
+ return &r, err
+ }
+ return nil, err
+ }
+
+ b, err = getRawValue(b)
+ if err != nil {
+ return nil, err
+ }
+
+ if err := json.Unmarshal(b, &r); err != nil {
+ return nil, err
+ }
+ }
+ return &r, nil
+}
+
+type DeleteOpenDaylightControllerResponse struct {
+ JobID string `json:"jobid,omitempty"`
+ Id string `json:"id,omitempty"`
+ Name string `json:"name,omitempty"`
+ Physicalnetworkid string `json:"physicalnetworkid,omitempty"`
+ Url string `json:"url,omitempty"`
+ Username string `json:"username,omitempty"`
+}
+
+type ListOpenDaylightControllersParams struct {
+ p map[string]interface{}
+}
+
+func (p *ListOpenDaylightControllersParams) toURLValues() url.Values {
+ u := url.Values{}
+ if p.p == nil {
+ return u
+ }
+ if v, found := p.p["id"]; found {
+ u.Set("id", v.(string))
+ }
+ if v, found := p.p["physicalnetworkid"]; found {
+ u.Set("physicalnetworkid", v.(string))
+ }
+ return u
+}
+
+func (p *ListOpenDaylightControllersParams) SetId(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["id"] = v
+ return
+}
+
+func (p *ListOpenDaylightControllersParams) SetPhysicalnetworkid(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["physicalnetworkid"] = v
+ return
+}
+
+// You should always use this function to get a new ListOpenDaylightControllersParams instance,
+// as then you are sure you have configured all required params
+func (s *NetworkService) NewListOpenDaylightControllersParams() *ListOpenDaylightControllersParams {
+ p := &ListOpenDaylightControllersParams{}
+ p.p = make(map[string]interface{})
+ return p
+}
+
+// This is a courtesy helper function, which in some cases may not work as expected!
+func (s *NetworkService) GetOpenDaylightControllerByID(id string, opts ...OptionFunc) (*OpenDaylightController, int, error) {
+ p := &ListOpenDaylightControllersParams{}
+ p.p = make(map[string]interface{})
+
+ p.p["id"] = id
+
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
+ l, err := s.ListOpenDaylightControllers(p)
+ if err != nil {
+ if strings.Contains(err.Error(), fmt.Sprintf(
+ "Invalid parameter id value=%s due to incorrect long value format, "+
+ "or entity does not exist", id)) {
+ return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
+ }
+ return nil, -1, err
+ }
+
+ if l.Count == 0 {
+ return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
+ }
+
+ if l.Count == 1 {
+ return l.OpenDaylightControllers[0], l.Count, nil
+ }
+ return nil, l.Count, fmt.Errorf("There is more then one result for OpenDaylightController UUID: %s!", id)
+}
+
+// Lists OpenDaylight controllers
+func (s *NetworkService) ListOpenDaylightControllers(p *ListOpenDaylightControllersParams) (*ListOpenDaylightControllersResponse, error) {
+ resp, err := s.cs.newRequest("listOpenDaylightControllers", p.toURLValues())
+ if err != nil {
+ return nil, err
+ }
+
+ var r ListOpenDaylightControllersResponse
+ if err := json.Unmarshal(resp, &r); err != nil {
+ return nil, err
+ }
+ return &r, nil
+}
+
+type ListOpenDaylightControllersResponse struct {
+ Count int `json:"count"`
+ OpenDaylightControllers []*OpenDaylightController `json:"opendaylightcontroller"`
+}
+
+type OpenDaylightController struct {
+ Id string `json:"id,omitempty"`
+ Name string `json:"name,omitempty"`
+ Physicalnetworkid string `json:"physicalnetworkid,omitempty"`
+ Url string `json:"url,omitempty"`
+ Username string `json:"username,omitempty"`
+}
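The block above adds OpenDaylight controller support to NetworkService. A brief usage sketch against the new calls, assuming a configured *cloudstack.CloudStackClient with the usual `Network` service field; the credentials, controller URL, and physical network ID are placeholders:

package example

import (
	"fmt"
	"log"

	"github.com/xanzy/go-cloudstack/cloudstack"
)

// registerOpenDaylightController registers a controller on a physical network
// and reads it back with the new courtesy helper added above.
func registerOpenDaylightController(cs *cloudstack.CloudStackClient) {
	p := cs.Network.NewAddOpenDaylightControllerParams(
		"secret",                       // password (placeholder)
		"phys-net-uuid",                // physicalnetworkid (placeholder)
		"https://odl.example.com:8443", // url (placeholder)
		"admin",                        // username (placeholder)
	)
	ctrl, err := cs.Network.AddOpenDaylightController(p)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("registered OpenDaylight controller %s (%s)\n", ctrl.Name, ctrl.Id)

	// The lookup helper follows the same variadic OptionFunc pattern as the
	// other Get*ByID functions in this change.
	if _, count, err := cs.Network.GetOpenDaylightControllerByID(ctrl.Id); err != nil {
		log.Fatalf("lookup failed (count %d): %v", count, err)
	}
}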
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/NicService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/NicService.go
index 3ef49d6adfb9..d83e07567e68 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/NicService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/NicService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -175,6 +175,260 @@ type RemoveIpFromNicResponse struct {
Success bool `json:"success,omitempty"`
}
+type UpdateVmNicIpParams struct {
+ p map[string]interface{}
+}
+
+func (p *UpdateVmNicIpParams) toURLValues() url.Values {
+ u := url.Values{}
+ if p.p == nil {
+ return u
+ }
+ if v, found := p.p["ipaddress"]; found {
+ u.Set("ipaddress", v.(string))
+ }
+ if v, found := p.p["nicid"]; found {
+ u.Set("nicid", v.(string))
+ }
+ return u
+}
+
+func (p *UpdateVmNicIpParams) SetIpaddress(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["ipaddress"] = v
+ return
+}
+
+func (p *UpdateVmNicIpParams) SetNicid(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["nicid"] = v
+ return
+}
+
+// You should always use this function to get a new UpdateVmNicIpParams instance,
+// as then you are sure you have configured all required params
+func (s *NicService) NewUpdateVmNicIpParams(nicid string) *UpdateVmNicIpParams {
+ p := &UpdateVmNicIpParams{}
+ p.p = make(map[string]interface{})
+ p.p["nicid"] = nicid
+ return p
+}
+
+// Update the default IP of a VM NIC
+func (s *NicService) UpdateVmNicIp(p *UpdateVmNicIpParams) (*UpdateVmNicIpResponse, error) {
+ resp, err := s.cs.newRequest("updateVmNicIp", p.toURLValues())
+ if err != nil {
+ return nil, err
+ }
+
+ var r UpdateVmNicIpResponse
+ if err := json.Unmarshal(resp, &r); err != nil {
+ return nil, err
+ }
+
+ // If we have an async client, we need to wait for the async result
+ if s.cs.async {
+ b, err := s.cs.GetAsyncJobResult(r.JobID, s.cs.timeout)
+ if err != nil {
+ if err == AsyncTimeoutErr {
+ return &r, err
+ }
+ return nil, err
+ }
+
+ b, err = getRawValue(b)
+ if err != nil {
+ return nil, err
+ }
+
+ if err := json.Unmarshal(b, &r); err != nil {
+ return nil, err
+ }
+ }
+ return &r, nil
+}
+
+type UpdateVmNicIpResponse struct {
+ JobID string `json:"jobid,omitempty"`
+ Account string `json:"account,omitempty"`
+ Affinitygroup []struct {
+ Account string `json:"account,omitempty"`
+ Description string `json:"description,omitempty"`
+ Domain string `json:"domain,omitempty"`
+ Domainid string `json:"domainid,omitempty"`
+ Id string `json:"id,omitempty"`
+ Name string `json:"name,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
+ Type string `json:"type,omitempty"`
+ VirtualmachineIds []string `json:"virtualmachineIds,omitempty"`
+ } `json:"affinitygroup,omitempty"`
+ Cpunumber int `json:"cpunumber,omitempty"`
+ Cpuspeed int `json:"cpuspeed,omitempty"`
+ Cpuused string `json:"cpuused,omitempty"`
+ Created string `json:"created,omitempty"`
+ Details map[string]string `json:"details,omitempty"`
+ Diskioread int64 `json:"diskioread,omitempty"`
+ Diskiowrite int64 `json:"diskiowrite,omitempty"`
+ Diskkbsread int64 `json:"diskkbsread,omitempty"`
+ Diskkbswrite int64 `json:"diskkbswrite,omitempty"`
+ Diskofferingid string `json:"diskofferingid,omitempty"`
+ Diskofferingname string `json:"diskofferingname,omitempty"`
+ Displayname string `json:"displayname,omitempty"`
+ Displayvm bool `json:"displayvm,omitempty"`
+ Domain string `json:"domain,omitempty"`
+ Domainid string `json:"domainid,omitempty"`
+ Forvirtualnetwork bool `json:"forvirtualnetwork,omitempty"`
+ Group string `json:"group,omitempty"`
+ Groupid string `json:"groupid,omitempty"`
+ Guestosid string `json:"guestosid,omitempty"`
+ Haenable bool `json:"haenable,omitempty"`
+ Hostid string `json:"hostid,omitempty"`
+ Hostname string `json:"hostname,omitempty"`
+ Hypervisor string `json:"hypervisor,omitempty"`
+ Id string `json:"id,omitempty"`
+ Instancename string `json:"instancename,omitempty"`
+ Isdynamicallyscalable bool `json:"isdynamicallyscalable,omitempty"`
+ Isodisplaytext string `json:"isodisplaytext,omitempty"`
+ Isoid string `json:"isoid,omitempty"`
+ Isoname string `json:"isoname,omitempty"`
+ Keypair string `json:"keypair,omitempty"`
+ Memory int `json:"memory,omitempty"`
+ Name string `json:"name,omitempty"`
+ Networkkbsread int64 `json:"networkkbsread,omitempty"`
+ Networkkbswrite int64 `json:"networkkbswrite,omitempty"`
+ Nic []struct {
+ Broadcasturi string `json:"broadcasturi,omitempty"`
+ Deviceid string `json:"deviceid,omitempty"`
+ Gateway string `json:"gateway,omitempty"`
+ Id string `json:"id,omitempty"`
+ Ip6address string `json:"ip6address,omitempty"`
+ Ip6cidr string `json:"ip6cidr,omitempty"`
+ Ip6gateway string `json:"ip6gateway,omitempty"`
+ Ipaddress string `json:"ipaddress,omitempty"`
+ Isdefault bool `json:"isdefault,omitempty"`
+ Isolationuri string `json:"isolationuri,omitempty"`
+ Macaddress string `json:"macaddress,omitempty"`
+ Netmask string `json:"netmask,omitempty"`
+ Networkid string `json:"networkid,omitempty"`
+ Networkname string `json:"networkname,omitempty"`
+ Secondaryip []struct {
+ Id string `json:"id,omitempty"`
+ Ipaddress string `json:"ipaddress,omitempty"`
+ } `json:"secondaryip,omitempty"`
+ Traffictype string `json:"traffictype,omitempty"`
+ Type string `json:"type,omitempty"`
+ Virtualmachineid string `json:"virtualmachineid,omitempty"`
+ } `json:"nic,omitempty"`
+ Ostypeid int64 `json:"ostypeid,omitempty"`
+ Password string `json:"password,omitempty"`
+ Passwordenabled bool `json:"passwordenabled,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
+ Publicip string `json:"publicip,omitempty"`
+ Publicipid string `json:"publicipid,omitempty"`
+ Rootdeviceid int64 `json:"rootdeviceid,omitempty"`
+ Rootdevicetype string `json:"rootdevicetype,omitempty"`
+ Securitygroup []struct {
+ Account string `json:"account,omitempty"`
+ Description string `json:"description,omitempty"`
+ Domain string `json:"domain,omitempty"`
+ Domainid string `json:"domainid,omitempty"`
+ Egressrule []struct {
+ Account string `json:"account,omitempty"`
+ Cidr string `json:"cidr,omitempty"`
+ Endport int `json:"endport,omitempty"`
+ Icmpcode int `json:"icmpcode,omitempty"`
+ Icmptype int `json:"icmptype,omitempty"`
+ Protocol string `json:"protocol,omitempty"`
+ Ruleid string `json:"ruleid,omitempty"`
+ Securitygroupname string `json:"securitygroupname,omitempty"`
+ Startport int `json:"startport,omitempty"`
+ Tags []struct {
+ Account string `json:"account,omitempty"`
+ Customer string `json:"customer,omitempty"`
+ Domain string `json:"domain,omitempty"`
+ Domainid string `json:"domainid,omitempty"`
+ Key string `json:"key,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
+ Resourceid string `json:"resourceid,omitempty"`
+ Resourcetype string `json:"resourcetype,omitempty"`
+ Value string `json:"value,omitempty"`
+ } `json:"tags,omitempty"`
+ } `json:"egressrule,omitempty"`
+ Id string `json:"id,omitempty"`
+ Ingressrule []struct {
+ Account string `json:"account,omitempty"`
+ Cidr string `json:"cidr,omitempty"`
+ Endport int `json:"endport,omitempty"`
+ Icmpcode int `json:"icmpcode,omitempty"`
+ Icmptype int `json:"icmptype,omitempty"`
+ Protocol string `json:"protocol,omitempty"`
+ Ruleid string `json:"ruleid,omitempty"`
+ Securitygroupname string `json:"securitygroupname,omitempty"`
+ Startport int `json:"startport,omitempty"`
+ Tags []struct {
+ Account string `json:"account,omitempty"`
+ Customer string `json:"customer,omitempty"`
+ Domain string `json:"domain,omitempty"`
+ Domainid string `json:"domainid,omitempty"`
+ Key string `json:"key,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
+ Resourceid string `json:"resourceid,omitempty"`
+ Resourcetype string `json:"resourcetype,omitempty"`
+ Value string `json:"value,omitempty"`
+ } `json:"tags,omitempty"`
+ } `json:"ingressrule,omitempty"`
+ Name string `json:"name,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
+ Tags []struct {
+ Account string `json:"account,omitempty"`
+ Customer string `json:"customer,omitempty"`
+ Domain string `json:"domain,omitempty"`
+ Domainid string `json:"domainid,omitempty"`
+ Key string `json:"key,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
+ Resourceid string `json:"resourceid,omitempty"`
+ Resourcetype string `json:"resourcetype,omitempty"`
+ Value string `json:"value,omitempty"`
+ } `json:"tags,omitempty"`
+ Virtualmachinecount int `json:"virtualmachinecount,omitempty"`
+ Virtualmachineids []string `json:"virtualmachineids,omitempty"`
+ } `json:"securitygroup,omitempty"`
+ Serviceofferingid string `json:"serviceofferingid,omitempty"`
+ Serviceofferingname string `json:"serviceofferingname,omitempty"`
+ Servicestate string `json:"servicestate,omitempty"`
+ State string `json:"state,omitempty"`
+ Tags []struct {
+ Account string `json:"account,omitempty"`
+ Customer string `json:"customer,omitempty"`
+ Domain string `json:"domain,omitempty"`
+ Domainid string `json:"domainid,omitempty"`
+ Key string `json:"key,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
+ Resourceid string `json:"resourceid,omitempty"`
+ Resourcetype string `json:"resourcetype,omitempty"`
+ Value string `json:"value,omitempty"`
+ } `json:"tags,omitempty"`
+ Templatedisplaytext string `json:"templatedisplaytext,omitempty"`
+ Templateid string `json:"templateid,omitempty"`
+ Templatename string `json:"templatename,omitempty"`
+ Userid string `json:"userid,omitempty"`
+ Username string `json:"username,omitempty"`
+ Vgpu string `json:"vgpu,omitempty"`
+ Zoneid string `json:"zoneid,omitempty"`
+ Zonename string `json:"zonename,omitempty"`
+}
+
type ListNicsParams struct {
p map[string]interface{}
}
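The UpdateVmNicIp additions above let a caller change the primary IP address on an existing NIC. A short sketch of the new call, assuming a configured client with the usual `Nic` service field; the NIC UUID and address are placeholders:

package example

import (
	"fmt"
	"log"

	"github.com/xanzy/go-cloudstack/cloudstack"
)

// setNicIP updates the primary IP of an existing NIC using the params type
// and call added above; omitting SetIpaddress lets CloudStack pick an address.
func setNicIP(cs *cloudstack.CloudStackClient) {
	p := cs.Nic.NewUpdateVmNicIpParams("nic-uuid") // placeholder NIC ID
	p.SetIpaddress("10.1.2.25")                    // placeholder address

	vm, err := cs.Nic.UpdateVmNicIp(p)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("updated NIC on VM %s (%s)\n", vm.Name, vm.Id)
}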
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/NiciraNVPService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/NiciraNVPService.go
index a4cb2050b7ec..2efcd1587fde 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/NiciraNVPService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/NiciraNVPService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/OvsElementService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/OvsElementService.go
index 3c81420b7e2e..a86d61f2814f 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/OvsElementService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/OvsElementService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -205,12 +205,18 @@ func (s *OvsElementService) NewListOvsElementsParams() *ListOvsElementsParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *OvsElementService) GetOvsElementByID(id string) (*OvsElement, int, error) {
+func (s *OvsElementService) GetOvsElementByID(id string, opts ...OptionFunc) (*OvsElement, int, error) {
p := &ListOvsElementsParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListOvsElements(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/PodService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/PodService.go
index f52919095bb3..ce7b94a3ba73 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/PodService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/PodService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -463,12 +463,18 @@ func (s *PodService) NewListPodsParams() *ListPodsParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *PodService) GetPodID(name string) (string, error) {
+func (s *PodService) GetPodID(name string, opts ...OptionFunc) (string, error) {
p := &ListPodsParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListPods(p)
if err != nil {
return "", err
@@ -493,13 +499,13 @@ func (s *PodService) GetPodID(name string) (string, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *PodService) GetPodByName(name string) (*Pod, int, error) {
- id, err := s.GetPodID(name)
+func (s *PodService) GetPodByName(name string, opts ...OptionFunc) (*Pod, int, error) {
+ id, err := s.GetPodID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetPodByID(id)
+ r, count, err := s.GetPodByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -507,12 +513,18 @@ func (s *PodService) GetPodByName(name string) (*Pod, int, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *PodService) GetPodByID(id string) (*Pod, int, error) {
+func (s *PodService) GetPodByID(id string, opts ...OptionFunc) (*Pod, int, error) {
p := &ListPodsParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListPods(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/PoolService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/PoolService.go
index c42224ec8439..78b643b17fe6 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/PoolService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/PoolService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -168,12 +168,18 @@ func (s *PoolService) NewListStoragePoolsParams() *ListStoragePoolsParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *PoolService) GetStoragePoolID(name string) (string, error) {
+func (s *PoolService) GetStoragePoolID(name string, opts ...OptionFunc) (string, error) {
p := &ListStoragePoolsParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListStoragePools(p)
if err != nil {
return "", err
@@ -198,13 +204,13 @@ func (s *PoolService) GetStoragePoolID(name string) (string, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *PoolService) GetStoragePoolByName(name string) (*StoragePool, int, error) {
- id, err := s.GetStoragePoolID(name)
+func (s *PoolService) GetStoragePoolByName(name string, opts ...OptionFunc) (*StoragePool, int, error) {
+ id, err := s.GetStoragePoolID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetStoragePoolByID(id)
+ r, count, err := s.GetStoragePoolByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -212,12 +218,18 @@ func (s *PoolService) GetStoragePoolByName(name string) (*StoragePool, int, erro
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *PoolService) GetStoragePoolByID(id string) (*StoragePool, int, error) {
+func (s *PoolService) GetStoragePoolByID(id string, opts ...OptionFunc) (*StoragePool, int, error) {
p := &ListStoragePoolsParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListStoragePools(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -514,6 +526,10 @@ func (p *UpdateStoragePoolParams) toURLValues() url.Values {
vv := strconv.FormatInt(v.(int64), 10)
u.Set("capacityiops", vv)
}
+ if v, found := p.p["enabled"]; found {
+ vv := strconv.FormatBool(v.(bool))
+ u.Set("enabled", vv)
+ }
if v, found := p.p["id"]; found {
u.Set("id", v.(string))
}
@@ -540,6 +556,14 @@ func (p *UpdateStoragePoolParams) SetCapacityiops(v int64) {
return
}
+func (p *UpdateStoragePoolParams) SetEnabled(v bool) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["enabled"] = v
+ return
+}
+
func (p *UpdateStoragePoolParams) SetId(v string) {
if p.p == nil {
p.p = make(map[string]interface{})
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/PortableIPService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/PortableIPService.go
index 3dabf38ba24c..2896ae2bb3fa 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/PortableIPService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/PortableIPService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -319,12 +319,18 @@ func (s *PortableIPService) NewListPortableIpRangesParams() *ListPortableIpRange
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *PortableIPService) GetPortableIpRangeByID(id string) (*PortableIpRange, int, error) {
+func (s *PortableIPService) GetPortableIpRangeByID(id string, opts ...OptionFunc) (*PortableIpRange, int, error) {
p := &ListPortableIpRangesParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListPortableIpRanges(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/ProjectService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/ProjectService.go
index b826868a911a..3269b47fa820 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/ProjectService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/ProjectService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -805,12 +805,18 @@ func (s *ProjectService) NewListProjectsParams() *ListProjectsParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *ProjectService) GetProjectID(name string) (string, error) {
+func (s *ProjectService) GetProjectID(name string, opts ...OptionFunc) (string, error) {
p := &ListProjectsParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListProjects(p)
if err != nil {
return "", err
@@ -835,13 +841,13 @@ func (s *ProjectService) GetProjectID(name string) (string, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *ProjectService) GetProjectByName(name string) (*Project, int, error) {
- id, err := s.GetProjectID(name)
+func (s *ProjectService) GetProjectByName(name string, opts ...OptionFunc) (*Project, int, error) {
+ id, err := s.GetProjectID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetProjectByID(id)
+ r, count, err := s.GetProjectByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -849,12 +855,18 @@ func (s *ProjectService) GetProjectByName(name string) (*Project, int, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *ProjectService) GetProjectByID(id string) (*Project, int, error) {
+func (s *ProjectService) GetProjectByID(id string, opts ...OptionFunc) (*Project, int, error) {
p := &ListProjectsParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListProjects(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -1098,12 +1110,18 @@ func (s *ProjectService) NewListProjectInvitationsParams() *ListProjectInvitatio
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *ProjectService) GetProjectInvitationByID(id string) (*ProjectInvitation, int, error) {
+func (s *ProjectService) GetProjectInvitationByID(id string, opts ...OptionFunc) (*ProjectInvitation, int, error) {
p := &ListProjectInvitationsParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListProjectInvitations(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -1114,21 +1132,6 @@ func (s *ProjectService) GetProjectInvitationByID(id string) (*ProjectInvitation
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListProjectInvitations(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
@@ -1139,7 +1142,7 @@ func (s *ProjectService) GetProjectInvitationByID(id string) (*ProjectInvitation
return nil, l.Count, fmt.Errorf("There is more than one result for ProjectInvitation UUID: %s!", id)
}
-// Lists projects and provides detailed information for listed projects
+// Lists project invitations and provides detailed information for listed invitations
func (s *ProjectService) ListProjectInvitations(p *ListProjectInvitationsParams) (*ListProjectInvitationsResponse, error) {
resp, err := s.cs.newRequest("listProjectInvitations", p.toURLValues())
if err != nil {
@@ -1302,7 +1305,7 @@ func (s *ProjectService) NewDeleteProjectInvitationParams(id string) *DeleteProj
return p
}
-// Accepts or declines project invitation
+// Deletes project invitation
func (s *ProjectService) DeleteProjectInvitation(p *DeleteProjectInvitationParams) (*DeleteProjectInvitationResponse, error) {
resp, err := s.cs.newRequest("deleteProjectInvitation", p.toURLValues())
if err != nil {
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/QuotaService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/QuotaService.go
new file mode 100644
index 000000000000..896a4d79838a
--- /dev/null
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/QuotaService.go
@@ -0,0 +1,60 @@
+//
+// Copyright 2016, Sander van Harmelen
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+
+package cloudstack
+
+import (
+ "encoding/json"
+ "net/url"
+)
+
+type QuotaIsEnabledParams struct {
+ p map[string]interface{}
+}
+
+func (p *QuotaIsEnabledParams) toURLValues() url.Values {
+ u := url.Values{}
+ if p.p == nil {
+ return u
+ }
+ return u
+}
+
+// You should always use this function to get a new QuotaIsEnabledParams instance,
+// as then you are sure you have configured all required params
+func (s *QuotaService) NewQuotaIsEnabledParams() *QuotaIsEnabledParams {
+ p := &QuotaIsEnabledParams{}
+ p.p = make(map[string]interface{})
+ return p
+}
+
+// Return true if the plugin is enabled
+func (s *QuotaService) QuotaIsEnabled(p *QuotaIsEnabledParams) (*QuotaIsEnabledResponse, error) {
+ resp, err := s.cs.newRequest("quotaIsEnabled", p.toURLValues())
+ if err != nil {
+ return nil, err
+ }
+
+ var r QuotaIsEnabledResponse
+ if err := json.Unmarshal(resp, &r); err != nil {
+ return nil, err
+ }
+ return &r, nil
+}
+
+type QuotaIsEnabledResponse struct {
+ Isenabled bool `json:"isenabled,omitempty"`
+}
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/RegionService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/RegionService.go
index 18cede7fdd46..a434f851361e 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/RegionService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/RegionService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/ResourcemetadataService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/ResourcemetadataService.go
index e61b0ecc0016..136b9d3b177c 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/ResourcemetadataService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/ResourcemetadataService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/ResourcetagsService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/ResourcetagsService.go
index abfd86245f44..c9cf1c0f7804 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/ResourcetagsService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/ResourcetagsService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -24,6 +24,122 @@ import (
"strings"
)
+type ListStorageTagsParams struct {
+ p map[string]interface{}
+}
+
+func (p *ListStorageTagsParams) toURLValues() url.Values {
+ u := url.Values{}
+ if p.p == nil {
+ return u
+ }
+ if v, found := p.p["keyword"]; found {
+ u.Set("keyword", v.(string))
+ }
+ if v, found := p.p["page"]; found {
+ vv := strconv.Itoa(v.(int))
+ u.Set("page", vv)
+ }
+ if v, found := p.p["pagesize"]; found {
+ vv := strconv.Itoa(v.(int))
+ u.Set("pagesize", vv)
+ }
+ return u
+}
+
+func (p *ListStorageTagsParams) SetKeyword(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["keyword"] = v
+ return
+}
+
+func (p *ListStorageTagsParams) SetPage(v int) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["page"] = v
+ return
+}
+
+func (p *ListStorageTagsParams) SetPagesize(v int) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["pagesize"] = v
+ return
+}
+
+// You should always use this function to get a new ListStorageTagsParams instance,
+// as then you are sure you have configured all required params
+func (s *ResourcetagsService) NewListStorageTagsParams() *ListStorageTagsParams {
+ p := &ListStorageTagsParams{}
+ p.p = make(map[string]interface{})
+ return p
+}
+
+// This is a courtesy helper function, which in some cases may not work as expected!
+func (s *ResourcetagsService) GetStorageTagID(keyword string, opts ...OptionFunc) (string, error) {
+ p := &ListStorageTagsParams{}
+ p.p = make(map[string]interface{})
+
+ p.p["keyword"] = keyword
+
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
+ l, err := s.ListStorageTags(p)
+ if err != nil {
+ return "", err
+ }
+
+ if l.Count == 0 {
+ return "", fmt.Errorf("No match found for %s: %+v", keyword, l)
+ }
+
+ if l.Count == 1 {
+ return l.StorageTags[0].Id, nil
+ }
+
+ if l.Count > 1 {
+ for _, v := range l.StorageTags {
+ if v.Name == keyword {
+ return v.Id, nil
+ }
+ }
+ }
+ return "", fmt.Errorf("Could not find an exact match for %s: %+v", keyword, l)
+}
+
+// Lists storage tags
+func (s *ResourcetagsService) ListStorageTags(p *ListStorageTagsParams) (*ListStorageTagsResponse, error) {
+ resp, err := s.cs.newRequest("listStorageTags", p.toURLValues())
+ if err != nil {
+ return nil, err
+ }
+
+ var r ListStorageTagsResponse
+ if err := json.Unmarshal(resp, &r); err != nil {
+ return nil, err
+ }
+ return &r, nil
+}
+
+type ListStorageTagsResponse struct {
+ Count int `json:"count"`
+ StorageTags []*StorageTag `json:"storagetag"`
+}
+
+type StorageTag struct {
+ Id string `json:"id,omitempty"`
+ Name string `json:"name,omitempty"`
+ Poolid int64 `json:"poolid,omitempty"`
+}
+
type CreateTagsParams struct {
p map[string]interface{}
}
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/RouterService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/RouterService.go
index 6bd59e496612..bbbbc4d7d4e1 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/RouterService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/RouterService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -103,8 +103,10 @@ type StartRouterResponse struct {
Guestmacaddress string `json:"guestmacaddress,omitempty"`
Guestnetmask string `json:"guestnetmask,omitempty"`
Guestnetworkid string `json:"guestnetworkid,omitempty"`
+ Guestnetworkname string `json:"guestnetworkname,omitempty"`
Hostid string `json:"hostid,omitempty"`
Hostname string `json:"hostname,omitempty"`
+ Hypervisor string `json:"hypervisor,omitempty"`
Id string `json:"id,omitempty"`
Ip6dns1 string `json:"ip6dns1,omitempty"`
Ip6dns2 string `json:"ip6dns2,omitempty"`
@@ -155,6 +157,7 @@ type StartRouterResponse struct {
Templateid string `json:"templateid,omitempty"`
Version string `json:"version,omitempty"`
Vpcid string `json:"vpcid,omitempty"`
+ Vpcname string `json:"vpcname,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
Zonename string `json:"zonename,omitempty"`
}
@@ -238,8 +241,10 @@ type RebootRouterResponse struct {
Guestmacaddress string `json:"guestmacaddress,omitempty"`
Guestnetmask string `json:"guestnetmask,omitempty"`
Guestnetworkid string `json:"guestnetworkid,omitempty"`
+ Guestnetworkname string `json:"guestnetworkname,omitempty"`
Hostid string `json:"hostid,omitempty"`
Hostname string `json:"hostname,omitempty"`
+ Hypervisor string `json:"hypervisor,omitempty"`
Id string `json:"id,omitempty"`
Ip6dns1 string `json:"ip6dns1,omitempty"`
Ip6dns2 string `json:"ip6dns2,omitempty"`
@@ -290,6 +295,7 @@ type RebootRouterResponse struct {
Templateid string `json:"templateid,omitempty"`
Version string `json:"version,omitempty"`
Vpcid string `json:"vpcid,omitempty"`
+ Vpcname string `json:"vpcname,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
Zonename string `json:"zonename,omitempty"`
}
@@ -385,8 +391,10 @@ type StopRouterResponse struct {
Guestmacaddress string `json:"guestmacaddress,omitempty"`
Guestnetmask string `json:"guestnetmask,omitempty"`
Guestnetworkid string `json:"guestnetworkid,omitempty"`
+ Guestnetworkname string `json:"guestnetworkname,omitempty"`
Hostid string `json:"hostid,omitempty"`
Hostname string `json:"hostname,omitempty"`
+ Hypervisor string `json:"hypervisor,omitempty"`
Id string `json:"id,omitempty"`
Ip6dns1 string `json:"ip6dns1,omitempty"`
Ip6dns2 string `json:"ip6dns2,omitempty"`
@@ -437,6 +445,7 @@ type StopRouterResponse struct {
Templateid string `json:"templateid,omitempty"`
Version string `json:"version,omitempty"`
Vpcid string `json:"vpcid,omitempty"`
+ Vpcname string `json:"vpcname,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
Zonename string `json:"zonename,omitempty"`
}
@@ -520,8 +529,10 @@ type DestroyRouterResponse struct {
Guestmacaddress string `json:"guestmacaddress,omitempty"`
Guestnetmask string `json:"guestnetmask,omitempty"`
Guestnetworkid string `json:"guestnetworkid,omitempty"`
+ Guestnetworkname string `json:"guestnetworkname,omitempty"`
Hostid string `json:"hostid,omitempty"`
Hostname string `json:"hostname,omitempty"`
+ Hypervisor string `json:"hypervisor,omitempty"`
Id string `json:"id,omitempty"`
Ip6dns1 string `json:"ip6dns1,omitempty"`
Ip6dns2 string `json:"ip6dns2,omitempty"`
@@ -572,6 +583,7 @@ type DestroyRouterResponse struct {
Templateid string `json:"templateid,omitempty"`
Version string `json:"version,omitempty"`
Vpcid string `json:"vpcid,omitempty"`
+ Vpcname string `json:"vpcname,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
Zonename string `json:"zonename,omitempty"`
}
@@ -646,8 +658,10 @@ type ChangeServiceForRouterResponse struct {
Guestmacaddress string `json:"guestmacaddress,omitempty"`
Guestnetmask string `json:"guestnetmask,omitempty"`
Guestnetworkid string `json:"guestnetworkid,omitempty"`
+ Guestnetworkname string `json:"guestnetworkname,omitempty"`
Hostid string `json:"hostid,omitempty"`
Hostname string `json:"hostname,omitempty"`
+ Hypervisor string `json:"hypervisor,omitempty"`
Id string `json:"id,omitempty"`
Ip6dns1 string `json:"ip6dns1,omitempty"`
Ip6dns2 string `json:"ip6dns2,omitempty"`
@@ -698,6 +712,7 @@ type ChangeServiceForRouterResponse struct {
Templateid string `json:"templateid,omitempty"`
Version string `json:"version,omitempty"`
Vpcid string `json:"vpcid,omitempty"`
+ Vpcname string `json:"vpcname,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
Zonename string `json:"zonename,omitempty"`
}
@@ -937,27 +952,23 @@ func (s *RouterService) NewListRoutersParams() *ListRoutersParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *RouterService) GetRouterID(name string) (string, error) {
+func (s *RouterService) GetRouterID(name string, opts ...OptionFunc) (string, error) {
p := &ListRoutersParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListRouters(p)
if err != nil {
return "", err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListRouters(p)
- if err != nil {
- return "", err
- }
- }
-
if l.Count == 0 {
return "", fmt.Errorf("No match found for %s: %+v", name, l)
}
@@ -977,13 +988,13 @@ func (s *RouterService) GetRouterID(name string) (string, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *RouterService) GetRouterByName(name string) (*Router, int, error) {
- id, err := s.GetRouterID(name)
+func (s *RouterService) GetRouterByName(name string, opts ...OptionFunc) (*Router, int, error) {
+ id, err := s.GetRouterID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetRouterByID(id)
+ r, count, err := s.GetRouterByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -991,12 +1002,18 @@ func (s *RouterService) GetRouterByName(name string) (*Router, int, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *RouterService) GetRouterByID(id string) (*Router, int, error) {
+func (s *RouterService) GetRouterByID(id string, opts ...OptionFunc) (*Router, int, error) {
p := &ListRoutersParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListRouters(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -1007,21 +1024,6 @@ func (s *RouterService) GetRouterByID(id string) (*Router, int, error) {
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListRouters(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
@@ -1063,8 +1065,10 @@ type Router struct {
Guestmacaddress string `json:"guestmacaddress,omitempty"`
Guestnetmask string `json:"guestnetmask,omitempty"`
Guestnetworkid string `json:"guestnetworkid,omitempty"`
+ Guestnetworkname string `json:"guestnetworkname,omitempty"`
Hostid string `json:"hostid,omitempty"`
Hostname string `json:"hostname,omitempty"`
+ Hypervisor string `json:"hypervisor,omitempty"`
Id string `json:"id,omitempty"`
Ip6dns1 string `json:"ip6dns1,omitempty"`
Ip6dns2 string `json:"ip6dns2,omitempty"`
@@ -1115,6 +1119,7 @@ type Router struct {
Templateid string `json:"templateid,omitempty"`
Version string `json:"version,omitempty"`
Vpcid string `json:"vpcid,omitempty"`
+ Vpcname string `json:"vpcname,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
Zonename string `json:"zonename,omitempty"`
}
@@ -1209,12 +1214,18 @@ func (s *RouterService) NewListVirtualRouterElementsParams() *ListVirtualRouterE
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *RouterService) GetVirtualRouterElementByID(id string) (*VirtualRouterElement, int, error) {
+func (s *RouterService) GetVirtualRouterElementByID(id string, opts ...OptionFunc) (*VirtualRouterElement, int, error) {
p := &ListVirtualRouterElementsParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListVirtualRouterElements(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/S3Service.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/S3Service.go
deleted file mode 100644
index d2cbb9326b4b..000000000000
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/S3Service.go
+++ /dev/null
@@ -1,281 +0,0 @@
-//
-// Copyright 2014, Sander van Harmelen
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-//
-
-package cloudstack
-
-import (
- "encoding/json"
- "fmt"
- "net/url"
- "strconv"
-)
-
-type AddS3Params struct {
- p map[string]interface{}
-}
-
-func (p *AddS3Params) toURLValues() url.Values {
- u := url.Values{}
- if p.p == nil {
- return u
- }
- if v, found := p.p["accesskey"]; found {
- u.Set("accesskey", v.(string))
- }
- if v, found := p.p["bucket"]; found {
- u.Set("bucket", v.(string))
- }
- if v, found := p.p["connectiontimeout"]; found {
- vv := strconv.Itoa(v.(int))
- u.Set("connectiontimeout", vv)
- }
- if v, found := p.p["endpoint"]; found {
- u.Set("endpoint", v.(string))
- }
- if v, found := p.p["maxerrorretry"]; found {
- vv := strconv.Itoa(v.(int))
- u.Set("maxerrorretry", vv)
- }
- if v, found := p.p["secretkey"]; found {
- u.Set("secretkey", v.(string))
- }
- if v, found := p.p["sockettimeout"]; found {
- vv := strconv.Itoa(v.(int))
- u.Set("sockettimeout", vv)
- }
- if v, found := p.p["usehttps"]; found {
- vv := strconv.FormatBool(v.(bool))
- u.Set("usehttps", vv)
- }
- return u
-}
-
-func (p *AddS3Params) SetAccesskey(v string) {
- if p.p == nil {
- p.p = make(map[string]interface{})
- }
- p.p["accesskey"] = v
- return
-}
-
-func (p *AddS3Params) SetBucket(v string) {
- if p.p == nil {
- p.p = make(map[string]interface{})
- }
- p.p["bucket"] = v
- return
-}
-
-func (p *AddS3Params) SetConnectiontimeout(v int) {
- if p.p == nil {
- p.p = make(map[string]interface{})
- }
- p.p["connectiontimeout"] = v
- return
-}
-
-func (p *AddS3Params) SetEndpoint(v string) {
- if p.p == nil {
- p.p = make(map[string]interface{})
- }
- p.p["endpoint"] = v
- return
-}
-
-func (p *AddS3Params) SetMaxerrorretry(v int) {
- if p.p == nil {
- p.p = make(map[string]interface{})
- }
- p.p["maxerrorretry"] = v
- return
-}
-
-func (p *AddS3Params) SetSecretkey(v string) {
- if p.p == nil {
- p.p = make(map[string]interface{})
- }
- p.p["secretkey"] = v
- return
-}
-
-func (p *AddS3Params) SetSockettimeout(v int) {
- if p.p == nil {
- p.p = make(map[string]interface{})
- }
- p.p["sockettimeout"] = v
- return
-}
-
-func (p *AddS3Params) SetUsehttps(v bool) {
- if p.p == nil {
- p.p = make(map[string]interface{})
- }
- p.p["usehttps"] = v
- return
-}
-
-// You should always use this function to get a new AddS3Params instance,
-// as then you are sure you have configured all required params
-func (s *S3Service) NewAddS3Params(accesskey string, bucket string, secretkey string) *AddS3Params {
- p := &AddS3Params{}
- p.p = make(map[string]interface{})
- p.p["accesskey"] = accesskey
- p.p["bucket"] = bucket
- p.p["secretkey"] = secretkey
- return p
-}
-
-// Adds S3
-func (s *S3Service) AddS3(p *AddS3Params) (*AddS3Response, error) {
- resp, err := s.cs.newRequest("addS3", p.toURLValues())
- if err != nil {
- return nil, err
- }
-
- var r AddS3Response
- if err := json.Unmarshal(resp, &r); err != nil {
- return nil, err
- }
- return &r, nil
-}
-
-type AddS3Response struct {
- Details []string `json:"details,omitempty"`
- Id string `json:"id,omitempty"`
- Name string `json:"name,omitempty"`
- Protocol string `json:"protocol,omitempty"`
- Providername string `json:"providername,omitempty"`
- Scope string `json:"scope,omitempty"`
- Url string `json:"url,omitempty"`
- Zoneid string `json:"zoneid,omitempty"`
- Zonename string `json:"zonename,omitempty"`
-}
-
-type ListS3sParams struct {
- p map[string]interface{}
-}
-
-func (p *ListS3sParams) toURLValues() url.Values {
- u := url.Values{}
- if p.p == nil {
- return u
- }
- if v, found := p.p["keyword"]; found {
- u.Set("keyword", v.(string))
- }
- if v, found := p.p["page"]; found {
- vv := strconv.Itoa(v.(int))
- u.Set("page", vv)
- }
- if v, found := p.p["pagesize"]; found {
- vv := strconv.Itoa(v.(int))
- u.Set("pagesize", vv)
- }
- return u
-}
-
-func (p *ListS3sParams) SetKeyword(v string) {
- if p.p == nil {
- p.p = make(map[string]interface{})
- }
- p.p["keyword"] = v
- return
-}
-
-func (p *ListS3sParams) SetPage(v int) {
- if p.p == nil {
- p.p = make(map[string]interface{})
- }
- p.p["page"] = v
- return
-}
-
-func (p *ListS3sParams) SetPagesize(v int) {
- if p.p == nil {
- p.p = make(map[string]interface{})
- }
- p.p["pagesize"] = v
- return
-}
-
-// You should always use this function to get a new ListS3sParams instance,
-// as then you are sure you have configured all required params
-func (s *S3Service) NewListS3sParams() *ListS3sParams {
- p := &ListS3sParams{}
- p.p = make(map[string]interface{})
- return p
-}
-
-// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *S3Service) GetS3ID(keyword string) (string, error) {
- p := &ListS3sParams{}
- p.p = make(map[string]interface{})
-
- p.p["keyword"] = keyword
-
- l, err := s.ListS3s(p)
- if err != nil {
- return "", err
- }
-
- if l.Count == 0 {
- return "", fmt.Errorf("No match found for %s: %+v", keyword, l)
- }
-
- if l.Count == 1 {
- return l.S3s[0].Id, nil
- }
-
- if l.Count > 1 {
- for _, v := range l.S3s {
- if v.Name == keyword {
- return v.Id, nil
- }
- }
- }
- return "", fmt.Errorf("Could not find an exact match for %s: %+v", keyword, l)
-}
-
-// Lists S3s
-func (s *S3Service) ListS3s(p *ListS3sParams) (*ListS3sResponse, error) {
- resp, err := s.cs.newRequest("listS3s", p.toURLValues())
- if err != nil {
- return nil, err
- }
-
- var r ListS3sResponse
- if err := json.Unmarshal(resp, &r); err != nil {
- return nil, err
- }
- return &r, nil
-}
-
-type ListS3sResponse struct {
- Count int `json:"count"`
- S3s []*S3 `json:"s3"`
-}
-
-type S3 struct {
- Details []string `json:"details,omitempty"`
- Id string `json:"id,omitempty"`
- Name string `json:"name,omitempty"`
- Protocol string `json:"protocol,omitempty"`
- Providername string `json:"providername,omitempty"`
- Scope string `json:"scope,omitempty"`
- Url string `json:"url,omitempty"`
- Zoneid string `json:"zoneid,omitempty"`
- Zonename string `json:"zonename,omitempty"`
-}
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/SSHService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/SSHService.go
index eb97dfcb86e1..2221a5921c27 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/SSHService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/SSHService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -143,6 +143,8 @@ type ResetSSHKeyForVirtualMachineResponse struct {
Domainid string `json:"domainid,omitempty"`
Id string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
Type string `json:"type,omitempty"`
VirtualmachineIds []string `json:"virtualmachineIds,omitempty"`
} `json:"affinitygroup,omitempty"`
@@ -279,6 +281,8 @@ type ResetSSHKeyForVirtualMachineResponse struct {
Resourcetype string `json:"resourcetype,omitempty"`
Value string `json:"value,omitempty"`
} `json:"tags,omitempty"`
+ Virtualmachinecount int `json:"virtualmachinecount,omitempty"`
+ Virtualmachineids []string `json:"virtualmachineids,omitempty"`
} `json:"securitygroup,omitempty"`
Serviceofferingid string `json:"serviceofferingid,omitempty"`
Serviceofferingname string `json:"serviceofferingname,omitempty"`
@@ -299,6 +303,8 @@ type ResetSSHKeyForVirtualMachineResponse struct {
Templatedisplaytext string `json:"templatedisplaytext,omitempty"`
Templateid string `json:"templateid,omitempty"`
Templatename string `json:"templatename,omitempty"`
+ Userid string `json:"userid,omitempty"`
+ Username string `json:"username,omitempty"`
Vgpu string `json:"vgpu,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
Zonename string `json:"zonename,omitempty"`
@@ -400,6 +406,9 @@ func (s *SSHService) RegisterSSHKeyPair(p *RegisterSSHKeyPairParams) (*RegisterS
}
type RegisterSSHKeyPairResponse struct {
+ Account string `json:"account,omitempty"`
+ Domain string `json:"domain,omitempty"`
+ Domainid string `json:"domainid,omitempty"`
Fingerprint string `json:"fingerprint,omitempty"`
Name string `json:"name,omitempty"`
}
@@ -729,6 +738,9 @@ type ListSSHKeyPairsResponse struct {
}
type SSHKeyPair struct {
+ Account string `json:"account,omitempty"`
+ Domain string `json:"domain,omitempty"`
+ Domainid string `json:"domainid,omitempty"`
Fingerprint string `json:"fingerprint,omitempty"`
Name string `json:"name,omitempty"`
}
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/SecurityGroupService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/SecurityGroupService.go
index 282cd64bde53..8b43f4813ec2 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/SecurityGroupService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/SecurityGroupService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -181,6 +181,8 @@ type CreateSecurityGroupResponse struct {
Resourcetype string `json:"resourcetype,omitempty"`
Value string `json:"value,omitempty"`
} `json:"tags,omitempty"`
+ Virtualmachinecount int `json:"virtualmachinecount,omitempty"`
+ Virtualmachineids []string `json:"virtualmachineids,omitempty"`
}
type DeleteSecurityGroupParams struct {
@@ -1015,27 +1017,23 @@ func (s *SecurityGroupService) NewListSecurityGroupsParams() *ListSecurityGroups
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *SecurityGroupService) GetSecurityGroupID(keyword string) (string, error) {
+func (s *SecurityGroupService) GetSecurityGroupID(keyword string, opts ...OptionFunc) (string, error) {
p := &ListSecurityGroupsParams{}
p.p = make(map[string]interface{})
p.p["keyword"] = keyword
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListSecurityGroups(p)
if err != nil {
return "", err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListSecurityGroups(p)
- if err != nil {
- return "", err
- }
- }
-
if l.Count == 0 {
return "", fmt.Errorf("No match found for %s: %+v", keyword, l)
}
@@ -1055,13 +1053,13 @@ func (s *SecurityGroupService) GetSecurityGroupID(keyword string) (string, error
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *SecurityGroupService) GetSecurityGroupByName(name string) (*SecurityGroup, int, error) {
- id, err := s.GetSecurityGroupID(name)
+func (s *SecurityGroupService) GetSecurityGroupByName(name string, opts ...OptionFunc) (*SecurityGroup, int, error) {
+ id, err := s.GetSecurityGroupID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetSecurityGroupByID(id)
+ r, count, err := s.GetSecurityGroupByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -1069,12 +1067,18 @@ func (s *SecurityGroupService) GetSecurityGroupByName(name string) (*SecurityGro
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *SecurityGroupService) GetSecurityGroupByID(id string) (*SecurityGroup, int, error) {
+func (s *SecurityGroupService) GetSecurityGroupByID(id string, opts ...OptionFunc) (*SecurityGroup, int, error) {
p := &ListSecurityGroupsParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListSecurityGroups(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -1085,21 +1089,6 @@ func (s *SecurityGroupService) GetSecurityGroupByID(id string) (*SecurityGroup,
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListSecurityGroups(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
@@ -1196,4 +1185,6 @@ type SecurityGroup struct {
Resourcetype string `json:"resourcetype,omitempty"`
Value string `json:"value,omitempty"`
} `json:"tags,omitempty"`
+ Virtualmachinecount int `json:"virtualmachinecount,omitempty"`
+ Virtualmachineids []string `json:"virtualmachineids,omitempty"`
}
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/ServiceOfferingService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/ServiceOfferingService.go
index 500064d8f86f..b3b18548f215 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/ServiceOfferingService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/ServiceOfferingService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -112,6 +112,9 @@ func (p *CreateServiceOfferingParams) toURLValues() url.Values {
vv := strconv.FormatBool(v.(bool))
u.Set("offerha", vv)
}
+ if v, found := p.p["provisioningtype"]; found {
+ u.Set("provisioningtype", v.(string))
+ }
if v, found := p.p["serviceofferingdetails"]; found {
i := 0
for k, vv := range v.(map[string]string) {
@@ -300,6 +303,14 @@ func (p *CreateServiceOfferingParams) SetOfferha(v bool) {
return
}
+func (p *CreateServiceOfferingParams) SetProvisioningtype(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["provisioningtype"] = v
+ return
+}
+
func (p *CreateServiceOfferingParams) SetServiceofferingdetails(v map[string]string) {
if p.p == nil {
p.p = make(map[string]interface{})
@@ -387,6 +398,7 @@ type CreateServiceOfferingResponse struct {
Name string `json:"name,omitempty"`
Networkrate int `json:"networkrate,omitempty"`
Offerha bool `json:"offerha,omitempty"`
+ Provisioningtype string `json:"provisioningtype,omitempty"`
Serviceofferingdetails map[string]string `json:"serviceofferingdetails,omitempty"`
Storagetype string `json:"storagetype,omitempty"`
Systemvmtype string `json:"systemvmtype,omitempty"`
@@ -551,6 +563,7 @@ type UpdateServiceOfferingResponse struct {
Name string `json:"name,omitempty"`
Networkrate int `json:"networkrate,omitempty"`
Offerha bool `json:"offerha,omitempty"`
+ Provisioningtype string `json:"provisioningtype,omitempty"`
Serviceofferingdetails map[string]string `json:"serviceofferingdetails,omitempty"`
Storagetype string `json:"storagetype,omitempty"`
Systemvmtype string `json:"systemvmtype,omitempty"`
@@ -572,6 +585,10 @@ func (p *ListServiceOfferingsParams) toURLValues() url.Values {
if v, found := p.p["id"]; found {
u.Set("id", v.(string))
}
+ if v, found := p.p["isrecursive"]; found {
+ vv := strconv.FormatBool(v.(bool))
+ u.Set("isrecursive", vv)
+ }
if v, found := p.p["issystem"]; found {
vv := strconv.FormatBool(v.(bool))
u.Set("issystem", vv)
@@ -579,6 +596,10 @@ func (p *ListServiceOfferingsParams) toURLValues() url.Values {
if v, found := p.p["keyword"]; found {
u.Set("keyword", v.(string))
}
+ if v, found := p.p["listall"]; found {
+ vv := strconv.FormatBool(v.(bool))
+ u.Set("listall", vv)
+ }
if v, found := p.p["name"]; found {
u.Set("name", v.(string))
}
@@ -615,6 +636,14 @@ func (p *ListServiceOfferingsParams) SetId(v string) {
return
}
+func (p *ListServiceOfferingsParams) SetIsrecursive(v bool) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["isrecursive"] = v
+ return
+}
+
func (p *ListServiceOfferingsParams) SetIssystem(v bool) {
if p.p == nil {
p.p = make(map[string]interface{})
@@ -631,6 +660,14 @@ func (p *ListServiceOfferingsParams) SetKeyword(v string) {
return
}
+func (p *ListServiceOfferingsParams) SetListall(v bool) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["listall"] = v
+ return
+}
+
func (p *ListServiceOfferingsParams) SetName(v string) {
if p.p == nil {
p.p = make(map[string]interface{})
@@ -680,12 +717,18 @@ func (s *ServiceOfferingService) NewListServiceOfferingsParams() *ListServiceOff
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *ServiceOfferingService) GetServiceOfferingID(name string) (string, error) {
+func (s *ServiceOfferingService) GetServiceOfferingID(name string, opts ...OptionFunc) (string, error) {
p := &ListServiceOfferingsParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListServiceOfferings(p)
if err != nil {
return "", err
@@ -710,13 +753,13 @@ func (s *ServiceOfferingService) GetServiceOfferingID(name string) (string, erro
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *ServiceOfferingService) GetServiceOfferingByName(name string) (*ServiceOffering, int, error) {
- id, err := s.GetServiceOfferingID(name)
+func (s *ServiceOfferingService) GetServiceOfferingByName(name string, opts ...OptionFunc) (*ServiceOffering, int, error) {
+ id, err := s.GetServiceOfferingID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetServiceOfferingByID(id)
+ r, count, err := s.GetServiceOfferingByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -724,12 +767,18 @@ func (s *ServiceOfferingService) GetServiceOfferingByName(name string) (*Service
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *ServiceOfferingService) GetServiceOfferingByID(id string) (*ServiceOffering, int, error) {
+func (s *ServiceOfferingService) GetServiceOfferingByID(id string, opts ...OptionFunc) (*ServiceOffering, int, error) {
p := &ListServiceOfferingsParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListServiceOfferings(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -796,6 +845,7 @@ type ServiceOffering struct {
Name string `json:"name,omitempty"`
Networkrate int `json:"networkrate,omitempty"`
Offerha bool `json:"offerha,omitempty"`
+ Provisioningtype string `json:"provisioningtype,omitempty"`
Serviceofferingdetails map[string]string `json:"serviceofferingdetails,omitempty"`
Storagetype string `json:"storagetype,omitempty"`
Systemvmtype string `json:"systemvmtype,omitempty"`
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/SnapshotService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/SnapshotService.go
index 0b23ad1e7b8c..a92db27e5b49 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/SnapshotService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/SnapshotService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -39,6 +39,9 @@ func (p *CreateSnapshotParams) toURLValues() url.Values {
if v, found := p.p["domainid"]; found {
u.Set("domainid", v.(string))
}
+ if v, found := p.p["name"]; found {
+ u.Set("name", v.(string))
+ }
if v, found := p.p["policyid"]; found {
u.Set("policyid", v.(string))
}
@@ -68,6 +71,14 @@ func (p *CreateSnapshotParams) SetDomainid(v string) {
return
}
+func (p *CreateSnapshotParams) SetName(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["name"] = v
+ return
+}
+
func (p *CreateSnapshotParams) SetPolicyid(v string) {
if p.p == nil {
p.p = make(map[string]interface{})
@@ -144,6 +155,7 @@ type CreateSnapshotResponse struct {
Id string `json:"id,omitempty"`
Intervaltype string `json:"intervaltype,omitempty"`
Name string `json:"name,omitempty"`
+ Physicalsize int64 `json:"physicalsize,omitempty"`
Project string `json:"project,omitempty"`
Projectid string `json:"projectid,omitempty"`
Revertable bool `json:"revertable,omitempty"`
@@ -362,27 +374,23 @@ func (s *SnapshotService) NewListSnapshotsParams() *ListSnapshotsParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *SnapshotService) GetSnapshotID(name string) (string, error) {
+func (s *SnapshotService) GetSnapshotID(name string, opts ...OptionFunc) (string, error) {
p := &ListSnapshotsParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListSnapshots(p)
if err != nil {
return "", err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListSnapshots(p)
- if err != nil {
- return "", err
- }
- }
-
if l.Count == 0 {
return "", fmt.Errorf("No match found for %s: %+v", name, l)
}
@@ -402,13 +410,13 @@ func (s *SnapshotService) GetSnapshotID(name string) (string, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *SnapshotService) GetSnapshotByName(name string) (*Snapshot, int, error) {
- id, err := s.GetSnapshotID(name)
+func (s *SnapshotService) GetSnapshotByName(name string, opts ...OptionFunc) (*Snapshot, int, error) {
+ id, err := s.GetSnapshotID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetSnapshotByID(id)
+ r, count, err := s.GetSnapshotByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -416,12 +424,18 @@ func (s *SnapshotService) GetSnapshotByName(name string) (*Snapshot, int, error)
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *SnapshotService) GetSnapshotByID(id string) (*Snapshot, int, error) {
+func (s *SnapshotService) GetSnapshotByID(id string, opts ...OptionFunc) (*Snapshot, int, error) {
p := &ListSnapshotsParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListSnapshots(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -432,21 +446,6 @@ func (s *SnapshotService) GetSnapshotByID(id string) (*Snapshot, int, error) {
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListSnapshots(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
@@ -484,6 +483,7 @@ type Snapshot struct {
Id string `json:"id,omitempty"`
Intervaltype string `json:"intervaltype,omitempty"`
Name string `json:"name,omitempty"`
+ Physicalsize int64 `json:"physicalsize,omitempty"`
Project string `json:"project,omitempty"`
Projectid string `json:"projectid,omitempty"`
Revertable bool `json:"revertable,omitempty"`
@@ -942,12 +942,18 @@ func (s *SnapshotService) NewListSnapshotPoliciesParams() *ListSnapshotPoliciesP
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *SnapshotService) GetSnapshotPolicyByID(id string) (*SnapshotPolicy, int, error) {
+func (s *SnapshotService) GetSnapshotPolicyByID(id string, opts ...OptionFunc) (*SnapshotPolicy, int, error) {
p := &ListSnapshotPoliciesParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListSnapshotPolicies(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -1072,6 +1078,7 @@ type RevertSnapshotResponse struct {
Id string `json:"id,omitempty"`
Intervaltype string `json:"intervaltype,omitempty"`
Name string `json:"name,omitempty"`
+ Physicalsize int64 `json:"physicalsize,omitempty"`
Project string `json:"project,omitempty"`
Projectid string `json:"projectid,omitempty"`
Revertable bool `json:"revertable,omitempty"`
@@ -1268,27 +1275,23 @@ func (s *SnapshotService) NewListVMSnapshotParams() *ListVMSnapshotParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *SnapshotService) GetVMSnapshotID(name string) (string, error) {
+func (s *SnapshotService) GetVMSnapshotID(name string, opts ...OptionFunc) (string, error) {
p := &ListVMSnapshotParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListVMSnapshot(p)
if err != nil {
return "", err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListVMSnapshot(p)
- if err != nil {
- return "", err
- }
- }
-
if l.Count == 0 {
return "", fmt.Errorf("No match found for %s: %+v", name, l)
}
@@ -1622,6 +1625,8 @@ type RevertToVMSnapshotResponse struct {
Domainid string `json:"domainid,omitempty"`
Id string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
Type string `json:"type,omitempty"`
VirtualmachineIds []string `json:"virtualmachineIds,omitempty"`
} `json:"affinitygroup,omitempty"`
@@ -1758,6 +1763,8 @@ type RevertToVMSnapshotResponse struct {
Resourcetype string `json:"resourcetype,omitempty"`
Value string `json:"value,omitempty"`
} `json:"tags,omitempty"`
+ Virtualmachinecount int `json:"virtualmachinecount,omitempty"`
+ Virtualmachineids []string `json:"virtualmachineids,omitempty"`
} `json:"securitygroup,omitempty"`
Serviceofferingid string `json:"serviceofferingid,omitempty"`
Serviceofferingname string `json:"serviceofferingname,omitempty"`
@@ -1778,6 +1785,8 @@ type RevertToVMSnapshotResponse struct {
Templatedisplaytext string `json:"templatedisplaytext,omitempty"`
Templateid string `json:"templateid,omitempty"`
Templatename string `json:"templatename,omitempty"`
+ Userid string `json:"userid,omitempty"`
+ Username string `json:"username,omitempty"`
Vgpu string `json:"vgpu,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
Zonename string `json:"zonename,omitempty"`
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/StoragePoolService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/StoragePoolService.go
index e5a0f6aba498..9b7b82c96fa7 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/StoragePoolService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/StoragePoolService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/StratosphereSSPService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/StratosphereSSPService.go
index 4d63b8f57257..4e638f843015 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/StratosphereSSPService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/StratosphereSSPService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/SwiftService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/SwiftService.go
index f1c743365295..fe70dd475505 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/SwiftService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/SwiftService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -182,12 +182,18 @@ func (s *SwiftService) NewListSwiftsParams() *ListSwiftsParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *SwiftService) GetSwiftID(keyword string) (string, error) {
+func (s *SwiftService) GetSwiftID(keyword string, opts ...OptionFunc) (string, error) {
p := &ListSwiftsParams{}
p.p = make(map[string]interface{})
p.p["keyword"] = keyword
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListSwifts(p)
if err != nil {
return "", err
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/SystemCapacityService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/SystemCapacityService.go
index fea76c8ab2a3..98444d658034 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/SystemCapacityService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/SystemCapacityService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/SystemVMService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/SystemVMService.go
index 35369654ed13..55f81356ce84 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/SystemVMService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/SystemVMService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -99,6 +99,7 @@ type StartSystemVmResponse struct {
Gateway string `json:"gateway,omitempty"`
Hostid string `json:"hostid,omitempty"`
Hostname string `json:"hostname,omitempty"`
+ Hypervisor string `json:"hypervisor,omitempty"`
Id string `json:"id,omitempty"`
Jobid string `json:"jobid,omitempty"`
Jobstatus int `json:"jobstatus,omitempty"`
@@ -196,6 +197,7 @@ type RebootSystemVmResponse struct {
Gateway string `json:"gateway,omitempty"`
Hostid string `json:"hostid,omitempty"`
Hostname string `json:"hostname,omitempty"`
+ Hypervisor string `json:"hypervisor,omitempty"`
Id string `json:"id,omitempty"`
Jobid string `json:"jobid,omitempty"`
Jobstatus int `json:"jobstatus,omitempty"`
@@ -305,6 +307,7 @@ type StopSystemVmResponse struct {
Gateway string `json:"gateway,omitempty"`
Hostid string `json:"hostid,omitempty"`
Hostname string `json:"hostname,omitempty"`
+ Hypervisor string `json:"hypervisor,omitempty"`
Id string `json:"id,omitempty"`
Jobid string `json:"jobid,omitempty"`
Jobstatus int `json:"jobstatus,omitempty"`
@@ -402,6 +405,7 @@ type DestroySystemVmResponse struct {
Gateway string `json:"gateway,omitempty"`
Hostid string `json:"hostid,omitempty"`
Hostname string `json:"hostname,omitempty"`
+ Hypervisor string `json:"hypervisor,omitempty"`
Id string `json:"id,omitempty"`
Jobid string `json:"jobid,omitempty"`
Jobstatus int `json:"jobstatus,omitempty"`
@@ -568,12 +572,18 @@ func (s *SystemVMService) NewListSystemVmsParams() *ListSystemVmsParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *SystemVMService) GetSystemVmID(name string) (string, error) {
+func (s *SystemVMService) GetSystemVmID(name string, opts ...OptionFunc) (string, error) {
p := &ListSystemVmsParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListSystemVms(p)
if err != nil {
return "", err
@@ -598,13 +608,13 @@ func (s *SystemVMService) GetSystemVmID(name string) (string, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *SystemVMService) GetSystemVmByName(name string) (*SystemVm, int, error) {
- id, err := s.GetSystemVmID(name)
+func (s *SystemVMService) GetSystemVmByName(name string, opts ...OptionFunc) (*SystemVm, int, error) {
+ id, err := s.GetSystemVmID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetSystemVmByID(id)
+ r, count, err := s.GetSystemVmByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -612,12 +622,18 @@ func (s *SystemVMService) GetSystemVmByName(name string) (*SystemVm, int, error)
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *SystemVMService) GetSystemVmByID(id string) (*SystemVm, int, error) {
+func (s *SystemVMService) GetSystemVmByID(id string, opts ...OptionFunc) (*SystemVm, int, error) {
p := &ListSystemVmsParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListSystemVms(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -665,6 +681,7 @@ type SystemVm struct {
Gateway string `json:"gateway,omitempty"`
Hostid string `json:"hostid,omitempty"`
Hostname string `json:"hostname,omitempty"`
+ Hypervisor string `json:"hypervisor,omitempty"`
Id string `json:"id,omitempty"`
Jobid string `json:"jobid,omitempty"`
Jobstatus int `json:"jobstatus,omitempty"`
@@ -774,6 +791,7 @@ type MigrateSystemVmResponse struct {
Gateway string `json:"gateway,omitempty"`
Hostid string `json:"hostid,omitempty"`
Hostname string `json:"hostname,omitempty"`
+ Hypervisor string `json:"hypervisor,omitempty"`
Id string `json:"id,omitempty"`
Jobid string `json:"jobid,omitempty"`
Jobstatus int `json:"jobstatus,omitempty"`
@@ -878,6 +896,7 @@ type ChangeServiceForSystemVmResponse struct {
Gateway string `json:"gateway,omitempty"`
Hostid string `json:"hostid,omitempty"`
Hostname string `json:"hostname,omitempty"`
+ Hypervisor string `json:"hypervisor,omitempty"`
Id string `json:"id,omitempty"`
Jobid string `json:"jobid,omitempty"`
Jobstatus int `json:"jobstatus,omitempty"`
@@ -1003,6 +1022,7 @@ type ScaleSystemVmResponse struct {
Gateway string `json:"gateway,omitempty"`
Hostid string `json:"hostid,omitempty"`
Hostname string `json:"hostname,omitempty"`
+ Hypervisor string `json:"hypervisor,omitempty"`
Id string `json:"id,omitempty"`
Jobid string `json:"jobid,omitempty"`
Jobstatus int `json:"jobstatus,omitempty"`
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/TemplateService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/TemplateService.go
index 860e592035ab..ad915c0ddc04 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/TemplateService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/TemplateService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -70,6 +70,9 @@ func (p *CreateTemplateParams) toURLValues() url.Values {
vv := strconv.FormatBool(v.(bool))
u.Set("passwordenabled", vv)
}
+ if v, found := p.p["projectid"]; found {
+ u.Set("projectid", v.(string))
+ }
if v, found := p.p["requireshvm"]; found {
vv := strconv.FormatBool(v.(bool))
u.Set("requireshvm", vv)
@@ -164,6 +167,14 @@ func (p *CreateTemplateParams) SetPasswordenabled(v bool) {
return
}
+func (p *CreateTemplateParams) SetProjectid(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["projectid"] = v
+ return
+}
+
func (p *CreateTemplateParams) SetRequireshvm(v bool) {
if p.p == nil {
p.p = make(map[string]interface{})
@@ -708,6 +719,10 @@ func (p *UpdateTemplateParams) toURLValues() url.Values {
vv := strconv.FormatBool(v.(bool))
u.Set("passwordenabled", vv)
}
+ if v, found := p.p["requireshvm"]; found {
+ vv := strconv.FormatBool(v.(bool))
+ u.Set("requireshvm", vv)
+ }
if v, found := p.p["sortkey"]; found {
vv := strconv.Itoa(v.(int))
u.Set("sortkey", vv)
@@ -795,6 +810,14 @@ func (p *UpdateTemplateParams) SetPasswordenabled(v bool) {
return
}
+func (p *UpdateTemplateParams) SetRequireshvm(v bool) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["requireshvm"] = v
+ return
+}
+
func (p *UpdateTemplateParams) SetSortkey(v int) {
if p.p == nil {
p.p = make(map[string]interface{})
@@ -1291,7 +1314,7 @@ func (s *TemplateService) NewListTemplatesParams(templatefilter string) *ListTem
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *TemplateService) GetTemplateID(name string, templatefilter string, zoneid string) (string, error) {
+func (s *TemplateService) GetTemplateID(name string, templatefilter string, zoneid string, opts ...OptionFunc) (string, error) {
p := &ListTemplatesParams{}
p.p = make(map[string]interface{})
@@ -1299,21 +1322,17 @@ func (s *TemplateService) GetTemplateID(name string, templatefilter string, zone
p.p["templatefilter"] = templatefilter
p.p["zoneid"] = zoneid
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListTemplates(p)
if err != nil {
return "", err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListTemplates(p)
- if err != nil {
- return "", err
- }
- }
-
if l.Count == 0 {
return "", fmt.Errorf("No match found for %s: %+v", name, l)
}
@@ -1333,13 +1352,13 @@ func (s *TemplateService) GetTemplateID(name string, templatefilter string, zone
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *TemplateService) GetTemplateByName(name string, templatefilter string, zoneid string) (*Template, int, error) {
- id, err := s.GetTemplateID(name, templatefilter, zoneid)
+func (s *TemplateService) GetTemplateByName(name string, templatefilter string, zoneid string, opts ...OptionFunc) (*Template, int, error) {
+ id, err := s.GetTemplateID(name, templatefilter, zoneid, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetTemplateByID(id, templatefilter)
+ r, count, err := s.GetTemplateByID(id, templatefilter, opts...)
if err != nil {
return nil, count, err
}
@@ -1347,13 +1366,19 @@ func (s *TemplateService) GetTemplateByName(name string, templatefilter string,
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *TemplateService) GetTemplateByID(id string, templatefilter string) (*Template, int, error) {
+func (s *TemplateService) GetTemplateByID(id string, templatefilter string, opts ...OptionFunc) (*Template, int, error) {
p := &ListTemplatesParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
p.p["templatefilter"] = templatefilter
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListTemplates(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -1364,21 +1389,6 @@ func (s *TemplateService) GetTemplateByID(id string, templatefilter string) (*Te
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListTemplates(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
@@ -1613,13 +1623,19 @@ func (s *TemplateService) NewListTemplatePermissionsParams(id string) *ListTempl
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *TemplateService) GetTemplatePermissionByID(id string) (*TemplatePermission, int, error) {
+func (s *TemplateService) GetTemplatePermissionByID(id string, opts ...OptionFunc) (*TemplatePermission, int, error) {
p := &ListTemplatePermissionsParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListTemplatePermissions(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -1794,6 +1810,9 @@ func (p *PrepareTemplateParams) toURLValues() url.Values {
if p.p == nil {
return u
}
+ if v, found := p.p["storageid"]; found {
+ u.Set("storageid", v.(string))
+ }
if v, found := p.p["templateid"]; found {
u.Set("templateid", v.(string))
}
@@ -1803,6 +1822,14 @@ func (p *PrepareTemplateParams) toURLValues() url.Values {
return u
}
+func (p *PrepareTemplateParams) SetStorageid(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["storageid"] = v
+ return
+}
+
func (p *PrepareTemplateParams) SetTemplateid(v string) {
if p.p == nil {
p.p = make(map[string]interface{})
@@ -1997,3 +2024,296 @@ type UpgradeRouterTemplateResponse struct {
Jobid string `json:"jobid,omitempty"`
Jobstatus int `json:"jobstatus,omitempty"`
}
+
+type GetUploadParamsForTemplateParams struct {
+ p map[string]interface{}
+}
+
+func (p *GetUploadParamsForTemplateParams) toURLValues() url.Values {
+ u := url.Values{}
+ if p.p == nil {
+ return u
+ }
+ if v, found := p.p["account"]; found {
+ u.Set("account", v.(string))
+ }
+ if v, found := p.p["bits"]; found {
+ vv := strconv.Itoa(v.(int))
+ u.Set("bits", vv)
+ }
+ if v, found := p.p["checksum"]; found {
+ u.Set("checksum", v.(string))
+ }
+ if v, found := p.p["details"]; found {
+ i := 0
+ for k, vv := range v.(map[string]string) {
+ u.Set(fmt.Sprintf("details[%d].key", i), k)
+ u.Set(fmt.Sprintf("details[%d].value", i), vv)
+ i++
+ }
+ }
+ if v, found := p.p["displaytext"]; found {
+ u.Set("displaytext", v.(string))
+ }
+ if v, found := p.p["domainid"]; found {
+ u.Set("domainid", v.(string))
+ }
+ if v, found := p.p["format"]; found {
+ u.Set("format", v.(string))
+ }
+ if v, found := p.p["hypervisor"]; found {
+ u.Set("hypervisor", v.(string))
+ }
+ if v, found := p.p["isdynamicallyscalable"]; found {
+ vv := strconv.FormatBool(v.(bool))
+ u.Set("isdynamicallyscalable", vv)
+ }
+ if v, found := p.p["isextractable"]; found {
+ vv := strconv.FormatBool(v.(bool))
+ u.Set("isextractable", vv)
+ }
+ if v, found := p.p["isfeatured"]; found {
+ vv := strconv.FormatBool(v.(bool))
+ u.Set("isfeatured", vv)
+ }
+ if v, found := p.p["ispublic"]; found {
+ vv := strconv.FormatBool(v.(bool))
+ u.Set("ispublic", vv)
+ }
+ if v, found := p.p["isrouting"]; found {
+ vv := strconv.FormatBool(v.(bool))
+ u.Set("isrouting", vv)
+ }
+ if v, found := p.p["name"]; found {
+ u.Set("name", v.(string))
+ }
+ if v, found := p.p["ostypeid"]; found {
+ u.Set("ostypeid", v.(string))
+ }
+ if v, found := p.p["passwordenabled"]; found {
+ vv := strconv.FormatBool(v.(bool))
+ u.Set("passwordenabled", vv)
+ }
+ if v, found := p.p["projectid"]; found {
+ u.Set("projectid", v.(string))
+ }
+ if v, found := p.p["requireshvm"]; found {
+ vv := strconv.FormatBool(v.(bool))
+ u.Set("requireshvm", vv)
+ }
+ if v, found := p.p["sshkeyenabled"]; found {
+ vv := strconv.FormatBool(v.(bool))
+ u.Set("sshkeyenabled", vv)
+ }
+ if v, found := p.p["templatetag"]; found {
+ u.Set("templatetag", v.(string))
+ }
+ if v, found := p.p["zoneid"]; found {
+ u.Set("zoneid", v.(string))
+ }
+ return u
+}
+
+func (p *GetUploadParamsForTemplateParams) SetAccount(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["account"] = v
+ return
+}
+
+func (p *GetUploadParamsForTemplateParams) SetBits(v int) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["bits"] = v
+ return
+}
+
+func (p *GetUploadParamsForTemplateParams) SetChecksum(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["checksum"] = v
+ return
+}
+
+func (p *GetUploadParamsForTemplateParams) SetDetails(v map[string]string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["details"] = v
+ return
+}
+
+func (p *GetUploadParamsForTemplateParams) SetDisplaytext(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["displaytext"] = v
+ return
+}
+
+func (p *GetUploadParamsForTemplateParams) SetDomainid(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["domainid"] = v
+ return
+}
+
+func (p *GetUploadParamsForTemplateParams) SetFormat(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["format"] = v
+ return
+}
+
+func (p *GetUploadParamsForTemplateParams) SetHypervisor(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["hypervisor"] = v
+ return
+}
+
+func (p *GetUploadParamsForTemplateParams) SetIsdynamicallyscalable(v bool) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["isdynamicallyscalable"] = v
+ return
+}
+
+func (p *GetUploadParamsForTemplateParams) SetIsextractable(v bool) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["isextractable"] = v
+ return
+}
+
+func (p *GetUploadParamsForTemplateParams) SetIsfeatured(v bool) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["isfeatured"] = v
+ return
+}
+
+func (p *GetUploadParamsForTemplateParams) SetIspublic(v bool) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["ispublic"] = v
+ return
+}
+
+func (p *GetUploadParamsForTemplateParams) SetIsrouting(v bool) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["isrouting"] = v
+ return
+}
+
+func (p *GetUploadParamsForTemplateParams) SetName(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["name"] = v
+ return
+}
+
+func (p *GetUploadParamsForTemplateParams) SetOstypeid(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["ostypeid"] = v
+ return
+}
+
+func (p *GetUploadParamsForTemplateParams) SetPasswordenabled(v bool) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["passwordenabled"] = v
+ return
+}
+
+func (p *GetUploadParamsForTemplateParams) SetProjectid(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["projectid"] = v
+ return
+}
+
+func (p *GetUploadParamsForTemplateParams) SetRequireshvm(v bool) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["requireshvm"] = v
+ return
+}
+
+func (p *GetUploadParamsForTemplateParams) SetSshkeyenabled(v bool) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["sshkeyenabled"] = v
+ return
+}
+
+func (p *GetUploadParamsForTemplateParams) SetTemplatetag(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["templatetag"] = v
+ return
+}
+
+func (p *GetUploadParamsForTemplateParams) SetZoneid(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["zoneid"] = v
+ return
+}
+
+// You should always use this function to get a new GetUploadParamsForTemplateParams instance,
+// as then you are sure you have configured all required params
+func (s *TemplateService) NewGetUploadParamsForTemplateParams(displaytext string, format string, hypervisor string, name string, ostypeid string, zoneid string) *GetUploadParamsForTemplateParams {
+ p := &GetUploadParamsForTemplateParams{}
+ p.p = make(map[string]interface{})
+ p.p["displaytext"] = displaytext
+ p.p["format"] = format
+ p.p["hypervisor"] = hypervisor
+ p.p["name"] = name
+ p.p["ostypeid"] = ostypeid
+ p.p["zoneid"] = zoneid
+ return p
+}
+
+// GetUploadParamsForTemplate uploads an existing template into the CloudStack cloud.
+func (s *TemplateService) GetUploadParamsForTemplate(p *GetUploadParamsForTemplateParams) (*GetUploadParamsForTemplateResponse, error) {
+ resp, err := s.cs.newRequest("getUploadParamsForTemplate", p.toURLValues())
+ if err != nil {
+ return nil, err
+ }
+
+ var r GetUploadParamsForTemplateResponse
+ if err := json.Unmarshal(resp, &r); err != nil {
+ return nil, err
+ }
+ return &r, nil
+}
+
+type GetUploadParamsForTemplateResponse struct {
+ Expires string `json:"expires,omitempty"`
+ Id string `json:"id,omitempty"`
+ Metadata string `json:"metadata,omitempty"`
+ PostURL string `json:"postURL,omitempty"`
+ Signature string `json:"signature,omitempty"`
+}
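
A hedged usage sketch for the newly generated getUploadParamsForTemplate call (not part of the diff): it returns the post URL, metadata and signature needed to push a template file directly to secondary storage. It assumes cs is the client from the earlier sketch; the display text, format, hypervisor and IDs are placeholders.

	// Illustrative only: request direct-upload parameters for a new template.
	p := cs.Template.NewGetUploadParamsForTemplateParams(
		"CentOS 7 imported", // displaytext
		"QCOW2",             // format
		"KVM",               // hypervisor
		"centos-7-imported", // name
		"ostype-uuid",       // ostypeid
		"zone-uuid",         // zoneid
	)
	r, err := cs.Template.GetUploadParamsForTemplate(p)
	if err != nil {
		log.Fatal(err)
	}
	// r.PostURL, r.Metadata, r.Signature and r.Expires are then used to POST
	// the template file straight to the secondary storage staging server.
	fmt.Println(r.PostURL)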
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/UCSService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/UCSService.go
index 5bf79c5e789d..0f95779ce989 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/UCSService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/UCSService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -202,12 +202,18 @@ func (s *UCSService) NewListUcsManagersParams() *ListUcsManagersParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *UCSService) GetUcsManagerID(keyword string) (string, error) {
+func (s *UCSService) GetUcsManagerID(keyword string, opts ...OptionFunc) (string, error) {
p := &ListUcsManagersParams{}
p.p = make(map[string]interface{})
p.p["keyword"] = keyword
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListUcsManagers(p)
if err != nil {
return "", err
@@ -232,13 +238,13 @@ func (s *UCSService) GetUcsManagerID(keyword string) (string, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *UCSService) GetUcsManagerByName(name string) (*UcsManager, int, error) {
- id, err := s.GetUcsManagerID(name)
+func (s *UCSService) GetUcsManagerByName(name string, opts ...OptionFunc) (*UcsManager, int, error) {
+ id, err := s.GetUcsManagerID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetUcsManagerByID(id)
+ r, count, err := s.GetUcsManagerByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -246,12 +252,18 @@ func (s *UCSService) GetUcsManagerByName(name string) (*UcsManager, int, error)
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *UCSService) GetUcsManagerByID(id string) (*UcsManager, int, error) {
+func (s *UCSService) GetUcsManagerByID(id string, opts ...OptionFunc) (*UcsManager, int, error) {
p := &ListUcsManagersParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListUcsManagers(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/UsageService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/UsageService.go
index ea468e1cb814..7bae32e1d48f 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/UsageService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/UsageService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -41,6 +41,9 @@ func (p *AddTrafficTypeParams) toURLValues() url.Values {
if v, found := p.p["kvmnetworklabel"]; found {
u.Set("kvmnetworklabel", v.(string))
}
+ if v, found := p.p["ovm3networklabel"]; found {
+ u.Set("ovm3networklabel", v.(string))
+ }
if v, found := p.p["physicalnetworkid"]; found {
u.Set("physicalnetworkid", v.(string))
}
@@ -83,6 +86,14 @@ func (p *AddTrafficTypeParams) SetKvmnetworklabel(v string) {
return
}
+func (p *AddTrafficTypeParams) SetOvm3networklabel(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["ovm3networklabel"] = v
+ return
+}
+
func (p *AddTrafficTypeParams) SetPhysicalnetworkid(v string) {
if p.p == nil {
p.p = make(map[string]interface{})
@@ -172,6 +183,7 @@ type AddTrafficTypeResponse struct {
Hypervnetworklabel string `json:"hypervnetworklabel,omitempty"`
Id string `json:"id,omitempty"`
Kvmnetworklabel string `json:"kvmnetworklabel,omitempty"`
+ Ovm3networklabel string `json:"ovm3networklabel,omitempty"`
Physicalnetworkid string `json:"physicalnetworkid,omitempty"`
Traffictype string `json:"traffictype,omitempty"`
Vmwarenetworklabel string `json:"vmwarenetworklabel,omitempty"`
@@ -313,13 +325,19 @@ func (s *UsageService) NewListTrafficTypesParams(physicalnetworkid string) *List
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *UsageService) GetTrafficTypeID(keyword string, physicalnetworkid string) (string, error) {
+func (s *UsageService) GetTrafficTypeID(keyword string, physicalnetworkid string, opts ...OptionFunc) (string, error) {
p := &ListTrafficTypesParams{}
p.p = make(map[string]interface{})
p.p["keyword"] = keyword
p.p["physicalnetworkid"] = physicalnetworkid
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListTrafficTypes(p)
if err != nil {
return "", err
@@ -390,6 +408,9 @@ func (p *UpdateTrafficTypeParams) toURLValues() url.Values {
if v, found := p.p["kvmnetworklabel"]; found {
u.Set("kvmnetworklabel", v.(string))
}
+ if v, found := p.p["ovm3networklabel"]; found {
+ u.Set("ovm3networklabel", v.(string))
+ }
if v, found := p.p["vmwarenetworklabel"]; found {
u.Set("vmwarenetworklabel", v.(string))
}
@@ -423,6 +444,14 @@ func (p *UpdateTrafficTypeParams) SetKvmnetworklabel(v string) {
return
}
+func (p *UpdateTrafficTypeParams) SetOvm3networklabel(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["ovm3networklabel"] = v
+ return
+}
+
func (p *UpdateTrafficTypeParams) SetVmwarenetworklabel(v string) {
if p.p == nil {
p.p = make(map[string]interface{})
@@ -487,6 +516,7 @@ type UpdateTrafficTypeResponse struct {
Hypervnetworklabel string `json:"hypervnetworklabel,omitempty"`
Id string `json:"id,omitempty"`
Kvmnetworklabel string `json:"kvmnetworklabel,omitempty"`
+ Ovm3networklabel string `json:"ovm3networklabel,omitempty"`
Physicalnetworkid string `json:"physicalnetworkid,omitempty"`
Traffictype string `json:"traffictype,omitempty"`
Vmwarenetworklabel string `json:"vmwarenetworklabel,omitempty"`
@@ -699,6 +729,9 @@ func (p *ListUsageRecordsParams) toURLValues() url.Values {
vv := strconv.FormatInt(v.(int64), 10)
u.Set("type", vv)
}
+ if v, found := p.p["usageid"]; found {
+ u.Set("usageid", v.(string))
+ }
return u
}
@@ -782,6 +815,14 @@ func (p *ListUsageRecordsParams) SetType(v int64) {
return
}
+func (p *ListUsageRecordsParams) SetUsageid(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["usageid"] = v
+ return
+}
+
// You should always use this function to get a new ListUsageRecordsParams instance,
// as then you are sure you have configured all required params
func (s *UsageService) NewListUsageRecordsParams(enddate string, startdate string) *ListUsageRecordsParams {
@@ -886,6 +927,58 @@ type UsageType struct {
Usagetypeid int `json:"usagetypeid,omitempty"`
}
+type RemoveRawUsageRecordsParams struct {
+ p map[string]interface{}
+}
+
+func (p *RemoveRawUsageRecordsParams) toURLValues() url.Values {
+ u := url.Values{}
+ if p.p == nil {
+ return u
+ }
+ if v, found := p.p["interval"]; found {
+ vv := strconv.Itoa(v.(int))
+ u.Set("interval", vv)
+ }
+ return u
+}
+
+func (p *RemoveRawUsageRecordsParams) SetInterval(v int) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["interval"] = v
+ return
+}
+
+// You should always use this function to get a new RemoveRawUsageRecordsParams instance,
+// as then you are sure you have configured all required params
+func (s *UsageService) NewRemoveRawUsageRecordsParams(interval int) *RemoveRawUsageRecordsParams {
+ p := &RemoveRawUsageRecordsParams{}
+ p.p = make(map[string]interface{})
+ p.p["interval"] = interval
+ return p
+}
+
+// RemoveRawUsageRecords safely removes raw records from the cloud_usage table.
+func (s *UsageService) RemoveRawUsageRecords(p *RemoveRawUsageRecordsParams) (*RemoveRawUsageRecordsResponse, error) {
+ resp, err := s.cs.newRequest("removeRawUsageRecords", p.toURLValues())
+ if err != nil {
+ return nil, err
+ }
+
+ var r RemoveRawUsageRecordsResponse
+ if err := json.Unmarshal(resp, &r); err != nil {
+ return nil, err
+ }
+ return &r, nil
+}
+
+type RemoveRawUsageRecordsResponse struct {
+ Displaytext string `json:"displaytext,omitempty"`
+ Success string `json:"success,omitempty"`
+}
+
type AddTrafficMonitorParams struct {
p map[string]interface{}
}
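
Another hedged sketch, assuming the same cs client and the Usage service field: the new removeRawUsageRecords call purges raw usage records from the cloud_usage table older than the given interval (in days, per the CloudStack API description) and normally requires root-admin access.

	// Illustrative only: purge raw usage records older than 30 days.
	p := cs.Usage.NewRemoveRawUsageRecordsParams(30)
	r, err := cs.Usage.RemoveRawUsageRecords(p)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(r.Displaytext, r.Success)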
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/UserService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/UserService.go
index e11e25f61f7f..cf091225d4b5 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/UserService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/UserService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -535,12 +535,18 @@ func (s *UserService) NewListUsersParams() *ListUsersParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *UserService) GetUserByID(id string) (*User, int, error) {
+func (s *UserService) GetUserByID(id string, opts ...OptionFunc) (*User, int, error) {
p := &ListUsersParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListUsers(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/VLANService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/VLANService.go
index 38ebb0136826..0809b6dae850 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/VLANService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/VLANService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -489,12 +489,18 @@ func (s *VLANService) NewListVlanIpRangesParams() *ListVlanIpRangesParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *VLANService) GetVlanIpRangeByID(id string) (*VlanIpRange, int, error) {
+func (s *VLANService) GetVlanIpRangeByID(id string, opts ...OptionFunc) (*VlanIpRange, int, error) {
p := &ListVlanIpRangesParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListVlanIpRanges(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -505,21 +511,6 @@ func (s *VLANService) GetVlanIpRangeByID(id string) (*VlanIpRange, int, error) {
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListVlanIpRanges(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
@@ -879,12 +870,18 @@ func (s *VLANService) NewListDedicatedGuestVlanRangesParams() *ListDedicatedGues
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *VLANService) GetDedicatedGuestVlanRangeByID(id string) (*DedicatedGuestVlanRange, int, error) {
+func (s *VLANService) GetDedicatedGuestVlanRangeByID(id string, opts ...OptionFunc) (*DedicatedGuestVlanRange, int, error) {
p := &ListDedicatedGuestVlanRangesParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListDedicatedGuestVlanRanges(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -895,21 +892,6 @@ func (s *VLANService) GetDedicatedGuestVlanRangeByID(id string) (*DedicatedGuest
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListDedicatedGuestVlanRanges(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/VMGroupService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/VMGroupService.go
index da57bef01956..998e9c004ecc 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/VMGroupService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/VMGroupService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -368,27 +368,23 @@ func (s *VMGroupService) NewListInstanceGroupsParams() *ListInstanceGroupsParams
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *VMGroupService) GetInstanceGroupID(name string) (string, error) {
+func (s *VMGroupService) GetInstanceGroupID(name string, opts ...OptionFunc) (string, error) {
p := &ListInstanceGroupsParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListInstanceGroups(p)
if err != nil {
return "", err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListInstanceGroups(p)
- if err != nil {
- return "", err
- }
- }
-
if l.Count == 0 {
return "", fmt.Errorf("No match found for %s: %+v", name, l)
}
@@ -408,13 +404,13 @@ func (s *VMGroupService) GetInstanceGroupID(name string) (string, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *VMGroupService) GetInstanceGroupByName(name string) (*InstanceGroup, int, error) {
- id, err := s.GetInstanceGroupID(name)
+func (s *VMGroupService) GetInstanceGroupByName(name string, opts ...OptionFunc) (*InstanceGroup, int, error) {
+ id, err := s.GetInstanceGroupID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetInstanceGroupByID(id)
+ r, count, err := s.GetInstanceGroupByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -422,12 +418,18 @@ func (s *VMGroupService) GetInstanceGroupByName(name string) (*InstanceGroup, in
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *VMGroupService) GetInstanceGroupByID(id string) (*InstanceGroup, int, error) {
+func (s *VMGroupService) GetInstanceGroupByID(id string, opts ...OptionFunc) (*InstanceGroup, int, error) {
p := &ListInstanceGroupsParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListInstanceGroups(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -438,21 +440,6 @@ func (s *VMGroupService) GetInstanceGroupByID(id string) (*InstanceGroup, int, e
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListInstanceGroups(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/VPCService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/VPCService.go
index 956bbbc46ae8..8108300281c8 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/VPCService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/VPCService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -295,12 +295,13 @@ type CreateVPCResponse struct {
Zonename string `json:"zonename,omitempty"`
Zonesnetworkspans []string `json:"zonesnetworkspans,omitempty"`
} `json:"network,omitempty"`
- Networkdomain string `json:"networkdomain,omitempty"`
- Project string `json:"project,omitempty"`
- Projectid string `json:"projectid,omitempty"`
- Regionlevelvpc bool `json:"regionlevelvpc,omitempty"`
- Restartrequired bool `json:"restartrequired,omitempty"`
- Service []struct {
+ Networkdomain string `json:"networkdomain,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
+ Redundantvpcrouter bool `json:"redundantvpcrouter,omitempty"`
+ Regionlevelvpc bool `json:"regionlevelvpc,omitempty"`
+ Restartrequired bool `json:"restartrequired,omitempty"`
+ Service []struct {
Capability []struct {
Canchooseservicecapability bool `json:"canchooseservicecapability,omitempty"`
Name string `json:"name,omitempty"`
@@ -577,27 +578,23 @@ func (s *VPCService) NewListVPCsParams() *ListVPCsParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *VPCService) GetVPCID(name string) (string, error) {
+func (s *VPCService) GetVPCID(name string, opts ...OptionFunc) (string, error) {
p := &ListVPCsParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListVPCs(p)
if err != nil {
return "", err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListVPCs(p)
- if err != nil {
- return "", err
- }
- }
-
if l.Count == 0 {
return "", fmt.Errorf("No match found for %s: %+v", name, l)
}
@@ -617,13 +614,13 @@ func (s *VPCService) GetVPCID(name string) (string, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *VPCService) GetVPCByName(name string) (*VPC, int, error) {
- id, err := s.GetVPCID(name)
+func (s *VPCService) GetVPCByName(name string, opts ...OptionFunc) (*VPC, int, error) {
+ id, err := s.GetVPCID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetVPCByID(id)
+ r, count, err := s.GetVPCByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -631,12 +628,18 @@ func (s *VPCService) GetVPCByName(name string) (*VPC, int, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *VPCService) GetVPCByID(id string) (*VPC, int, error) {
+func (s *VPCService) GetVPCByID(id string, opts ...OptionFunc) (*VPC, int, error) {
p := &ListVPCsParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListVPCs(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -647,21 +650,6 @@ func (s *VPCService) GetVPCByID(id string) (*VPC, int, error) {
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListVPCs(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
@@ -779,12 +767,13 @@ type VPC struct {
Zonename string `json:"zonename,omitempty"`
Zonesnetworkspans []string `json:"zonesnetworkspans,omitempty"`
} `json:"network,omitempty"`
- Networkdomain string `json:"networkdomain,omitempty"`
- Project string `json:"project,omitempty"`
- Projectid string `json:"projectid,omitempty"`
- Regionlevelvpc bool `json:"regionlevelvpc,omitempty"`
- Restartrequired bool `json:"restartrequired,omitempty"`
- Service []struct {
+ Networkdomain string `json:"networkdomain,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
+ Redundantvpcrouter bool `json:"redundantvpcrouter,omitempty"`
+ Regionlevelvpc bool `json:"regionlevelvpc,omitempty"`
+ Restartrequired bool `json:"restartrequired,omitempty"`
+ Service []struct {
Capability []struct {
Canchooseservicecapability bool `json:"canchooseservicecapability,omitempty"`
Name string `json:"name,omitempty"`
@@ -1086,12 +1075,13 @@ type UpdateVPCResponse struct {
Zonename string `json:"zonename,omitempty"`
Zonesnetworkspans []string `json:"zonesnetworkspans,omitempty"`
} `json:"network,omitempty"`
- Networkdomain string `json:"networkdomain,omitempty"`
- Project string `json:"project,omitempty"`
- Projectid string `json:"projectid,omitempty"`
- Regionlevelvpc bool `json:"regionlevelvpc,omitempty"`
- Restartrequired bool `json:"restartrequired,omitempty"`
- Service []struct {
+ Networkdomain string `json:"networkdomain,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
+ Redundantvpcrouter bool `json:"redundantvpcrouter,omitempty"`
+ Regionlevelvpc bool `json:"regionlevelvpc,omitempty"`
+ Restartrequired bool `json:"restartrequired,omitempty"`
+ Service []struct {
Capability []struct {
Canchooseservicecapability bool `json:"canchooseservicecapability,omitempty"`
Name string `json:"name,omitempty"`
@@ -1135,12 +1125,28 @@ func (p *RestartVPCParams) toURLValues() url.Values {
if p.p == nil {
return u
}
+ if v, found := p.p["cleanup"]; found {
+ vv := strconv.FormatBool(v.(bool))
+ u.Set("cleanup", vv)
+ }
if v, found := p.p["id"]; found {
u.Set("id", v.(string))
}
+ if v, found := p.p["makeredundant"]; found {
+ vv := strconv.FormatBool(v.(bool))
+ u.Set("makeredundant", vv)
+ }
return u
}
+func (p *RestartVPCParams) SetCleanup(v bool) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["cleanup"] = v
+ return
+}
+
func (p *RestartVPCParams) SetId(v string) {
if p.p == nil {
p.p = make(map[string]interface{})
@@ -1149,6 +1155,14 @@ func (p *RestartVPCParams) SetId(v string) {
return
}
+func (p *RestartVPCParams) SetMakeredundant(v bool) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["makeredundant"] = v
+ return
+}
+
// You should always use this function to get a new RestartVPCParams instance,
// as then you are sure you have configured all required params
func (s *VPCService) NewRestartVPCParams(id string) *RestartVPCParams {
@@ -1281,12 +1295,13 @@ type RestartVPCResponse struct {
Zonename string `json:"zonename,omitempty"`
Zonesnetworkspans []string `json:"zonesnetworkspans,omitempty"`
} `json:"network,omitempty"`
- Networkdomain string `json:"networkdomain,omitempty"`
- Project string `json:"project,omitempty"`
- Projectid string `json:"projectid,omitempty"`
- Regionlevelvpc bool `json:"regionlevelvpc,omitempty"`
- Restartrequired bool `json:"restartrequired,omitempty"`
- Service []struct {
+ Networkdomain string `json:"networkdomain,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
+ Redundantvpcrouter bool `json:"redundantvpcrouter,omitempty"`
+ Regionlevelvpc bool `json:"regionlevelvpc,omitempty"`
+ Restartrequired bool `json:"restartrequired,omitempty"`
+ Service []struct {
Capability []struct {
Canchooseservicecapability bool `json:"canchooseservicecapability,omitempty"`
Name string `json:"name,omitempty"`
@@ -1803,12 +1818,18 @@ func (s *VPCService) NewListVPCOfferingsParams() *ListVPCOfferingsParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *VPCService) GetVPCOfferingID(name string) (string, error) {
+func (s *VPCService) GetVPCOfferingID(name string, opts ...OptionFunc) (string, error) {
p := &ListVPCOfferingsParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListVPCOfferings(p)
if err != nil {
return "", err
@@ -1833,13 +1854,13 @@ func (s *VPCService) GetVPCOfferingID(name string) (string, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *VPCService) GetVPCOfferingByName(name string) (*VPCOffering, int, error) {
- id, err := s.GetVPCOfferingID(name)
+func (s *VPCService) GetVPCOfferingByName(name string, opts ...OptionFunc) (*VPCOffering, int, error) {
+ id, err := s.GetVPCOfferingID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetVPCOfferingByID(id)
+ r, count, err := s.GetVPCOfferingByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -1847,12 +1868,18 @@ func (s *VPCService) GetVPCOfferingByName(name string) (*VPCOffering, int, error
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *VPCService) GetVPCOfferingByID(id string) (*VPCOffering, int, error) {
+func (s *VPCService) GetVPCOfferingByID(id string, opts ...OptionFunc) (*VPCOffering, int, error) {
p := &ListVPCOfferingsParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListVPCOfferings(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -2268,12 +2295,18 @@ func (s *VPCService) NewListPrivateGatewaysParams() *ListPrivateGatewaysParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *VPCService) GetPrivateGatewayByID(id string) (*PrivateGateway, int, error) {
+func (s *VPCService) GetPrivateGatewayByID(id string, opts ...OptionFunc) (*PrivateGateway, int, error) {
p := &ListPrivateGatewaysParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListPrivateGateways(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -2284,21 +2317,6 @@ func (s *VPCService) GetPrivateGatewayByID(id string) (*PrivateGateway, int, err
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListPrivateGateways(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
@@ -2748,12 +2766,18 @@ func (s *VPCService) NewListStaticRoutesParams() *ListStaticRoutesParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *VPCService) GetStaticRouteByID(id string) (*StaticRoute, int, error) {
+func (s *VPCService) GetStaticRouteByID(id string, opts ...OptionFunc) (*StaticRoute, int, error) {
p := &ListStaticRoutesParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListStaticRoutes(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -2764,21 +2788,6 @@ func (s *VPCService) GetStaticRouteByID(id string) (*StaticRoute, int, error) {
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListStaticRoutes(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
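
A hedged sketch of the new restartVPC knobs, assuming the same cs client and a placeholder VPC ID: cleanup rebuilds the VPC routers and makeredundant brings the VPC back with a redundant router pair, which is also why Redundantvpcrouter now appears in the VPC response structs above.

	// Illustrative only: restart a VPC with cleanup and a redundant router pair.
	p := cs.VPC.NewRestartVPCParams("vpc-uuid")
	p.SetCleanup(true)
	p.SetMakeredundant(true)
	r, err := cs.VPC.RestartVPC(p)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(r.Redundantvpcrouter)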
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/VPNService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/VPNService.go
index 37550c4ebe0f..b25e0a94fb21 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/VPNService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/VPNService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -388,12 +388,18 @@ func (s *VPNService) NewListRemoteAccessVpnsParams() *ListRemoteAccessVpnsParams
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *VPNService) GetRemoteAccessVpnByID(id string) (*RemoteAccessVpn, int, error) {
+func (s *VPNService) GetRemoteAccessVpnByID(id string, opts ...OptionFunc) (*RemoteAccessVpn, int, error) {
p := &ListRemoteAccessVpnsParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListRemoteAccessVpns(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -404,21 +410,6 @@ func (s *VPNService) GetRemoteAccessVpnByID(id string) (*RemoteAccessVpn, int, e
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListRemoteAccessVpns(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
@@ -926,12 +917,18 @@ func (s *VPNService) NewListVpnUsersParams() *ListVpnUsersParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *VPNService) GetVpnUserByID(id string) (*VpnUser, int, error) {
+func (s *VPNService) GetVpnUserByID(id string, opts ...OptionFunc) (*VpnUser, int, error) {
p := &ListVpnUsersParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListVpnUsers(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -942,21 +939,6 @@ func (s *VPNService) GetVpnUserByID(id string) (*VpnUser, int, error) {
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListVpnUsers(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
@@ -1026,6 +1008,10 @@ func (p *CreateVpnCustomerGatewayParams) toURLValues() url.Values {
if v, found := p.p["esppolicy"]; found {
u.Set("esppolicy", v.(string))
}
+ if v, found := p.p["forceencap"]; found {
+ vv := strconv.FormatBool(v.(bool))
+ u.Set("forceencap", vv)
+ }
if v, found := p.p["gateway"]; found {
u.Set("gateway", v.(string))
}
@@ -1042,6 +1028,9 @@ func (p *CreateVpnCustomerGatewayParams) toURLValues() url.Values {
if v, found := p.p["name"]; found {
u.Set("name", v.(string))
}
+ if v, found := p.p["projectid"]; found {
+ u.Set("projectid", v.(string))
+ }
return u
}
@@ -1093,6 +1082,14 @@ func (p *CreateVpnCustomerGatewayParams) SetEsppolicy(v string) {
return
}
+func (p *CreateVpnCustomerGatewayParams) SetForceencap(v bool) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["forceencap"] = v
+ return
+}
+
func (p *CreateVpnCustomerGatewayParams) SetGateway(v string) {
if p.p == nil {
p.p = make(map[string]interface{})
@@ -1133,6 +1130,14 @@ func (p *CreateVpnCustomerGatewayParams) SetName(v string) {
return
}
+func (p *CreateVpnCustomerGatewayParams) SetProjectid(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["projectid"] = v
+ return
+}
+
// You should always use this function to get a new CreateVpnCustomerGatewayParams instance,
// as then you are sure you have configured all required params
func (s *VPNService) NewCreateVpnCustomerGatewayParams(cidrlist string, esppolicy string, gateway string, ikepolicy string, ipsecpsk string) *CreateVpnCustomerGatewayParams {
@@ -1189,6 +1194,7 @@ type CreateVpnCustomerGatewayResponse struct {
Dpd bool `json:"dpd,omitempty"`
Esplifetime int64 `json:"esplifetime,omitempty"`
Esppolicy string `json:"esppolicy,omitempty"`
+ Forceencap bool `json:"forceencap,omitempty"`
Gateway string `json:"gateway,omitempty"`
Id string `json:"id,omitempty"`
Ikelifetime int64 `json:"ikelifetime,omitempty"`
@@ -1405,6 +1411,7 @@ type CreateVpnConnectionResponse struct {
Dpd bool `json:"dpd,omitempty"`
Esplifetime int64 `json:"esplifetime,omitempty"`
Esppolicy string `json:"esppolicy,omitempty"`
+ Forceencap bool `json:"forceencap,omitempty"`
Fordisplay bool `json:"fordisplay,omitempty"`
Gateway string `json:"gateway,omitempty"`
Id string `json:"id,omitempty"`
@@ -1651,6 +1658,10 @@ func (p *UpdateVpnCustomerGatewayParams) toURLValues() url.Values {
if v, found := p.p["esppolicy"]; found {
u.Set("esppolicy", v.(string))
}
+ if v, found := p.p["forceencap"]; found {
+ vv := strconv.FormatBool(v.(bool))
+ u.Set("forceencap", vv)
+ }
if v, found := p.p["gateway"]; found {
u.Set("gateway", v.(string))
}
@@ -1721,6 +1732,14 @@ func (p *UpdateVpnCustomerGatewayParams) SetEsppolicy(v string) {
return
}
+func (p *UpdateVpnCustomerGatewayParams) SetForceencap(v bool) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["forceencap"] = v
+ return
+}
+
func (p *UpdateVpnCustomerGatewayParams) SetGateway(v string) {
if p.p == nil {
p.p = make(map[string]interface{})
@@ -1826,6 +1845,7 @@ type UpdateVpnCustomerGatewayResponse struct {
Dpd bool `json:"dpd,omitempty"`
Esplifetime int64 `json:"esplifetime,omitempty"`
Esppolicy string `json:"esppolicy,omitempty"`
+ Forceencap bool `json:"forceencap,omitempty"`
Gateway string `json:"gateway,omitempty"`
Id string `json:"id,omitempty"`
Ikelifetime int64 `json:"ikelifetime,omitempty"`
@@ -1936,6 +1956,7 @@ type ResetVpnConnectionResponse struct {
Dpd bool `json:"dpd,omitempty"`
Esplifetime int64 `json:"esplifetime,omitempty"`
Esppolicy string `json:"esppolicy,omitempty"`
+ Forceencap bool `json:"forceencap,omitempty"`
Fordisplay bool `json:"fordisplay,omitempty"`
Gateway string `json:"gateway,omitempty"`
Id string `json:"id,omitempty"`
@@ -2076,27 +2097,23 @@ func (s *VPNService) NewListVpnCustomerGatewaysParams() *ListVpnCustomerGateways
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *VPNService) GetVpnCustomerGatewayID(keyword string) (string, error) {
+func (s *VPNService) GetVpnCustomerGatewayID(keyword string, opts ...OptionFunc) (string, error) {
p := &ListVpnCustomerGatewaysParams{}
p.p = make(map[string]interface{})
p.p["keyword"] = keyword
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListVpnCustomerGateways(p)
if err != nil {
return "", err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListVpnCustomerGateways(p)
- if err != nil {
- return "", err
- }
- }
-
if l.Count == 0 {
return "", fmt.Errorf("No match found for %s: %+v", keyword, l)
}
@@ -2116,13 +2133,13 @@ func (s *VPNService) GetVpnCustomerGatewayID(keyword string) (string, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *VPNService) GetVpnCustomerGatewayByName(name string) (*VpnCustomerGateway, int, error) {
- id, err := s.GetVpnCustomerGatewayID(name)
+func (s *VPNService) GetVpnCustomerGatewayByName(name string, opts ...OptionFunc) (*VpnCustomerGateway, int, error) {
+ id, err := s.GetVpnCustomerGatewayID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetVpnCustomerGatewayByID(id)
+ r, count, err := s.GetVpnCustomerGatewayByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -2130,12 +2147,18 @@ func (s *VPNService) GetVpnCustomerGatewayByName(name string) (*VpnCustomerGatew
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *VPNService) GetVpnCustomerGatewayByID(id string) (*VpnCustomerGateway, int, error) {
+func (s *VPNService) GetVpnCustomerGatewayByID(id string, opts ...OptionFunc) (*VpnCustomerGateway, int, error) {
p := &ListVpnCustomerGatewaysParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListVpnCustomerGateways(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -2146,21 +2169,6 @@ func (s *VPNService) GetVpnCustomerGatewayByID(id string) (*VpnCustomerGateway,
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListVpnCustomerGateways(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
@@ -2198,6 +2206,7 @@ type VpnCustomerGateway struct {
Dpd bool `json:"dpd,omitempty"`
Esplifetime int64 `json:"esplifetime,omitempty"`
Esppolicy string `json:"esppolicy,omitempty"`
+ Forceencap bool `json:"forceencap,omitempty"`
Gateway string `json:"gateway,omitempty"`
Id string `json:"id,omitempty"`
Ikelifetime int64 `json:"ikelifetime,omitempty"`
@@ -2357,12 +2366,18 @@ func (s *VPNService) NewListVpnGatewaysParams() *ListVpnGatewaysParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *VPNService) GetVpnGatewayByID(id string) (*VpnGateway, int, error) {
+func (s *VPNService) GetVpnGatewayByID(id string, opts ...OptionFunc) (*VpnGateway, int, error) {
p := &ListVpnGatewaysParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListVpnGateways(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -2373,21 +2388,6 @@ func (s *VPNService) GetVpnGatewayByID(id string) (*VpnGateway, int, error) {
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListVpnGateways(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
@@ -2577,12 +2577,18 @@ func (s *VPNService) NewListVpnConnectionsParams() *ListVpnConnectionsParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *VPNService) GetVpnConnectionByID(id string) (*VpnConnection, int, error) {
+func (s *VPNService) GetVpnConnectionByID(id string, opts ...OptionFunc) (*VpnConnection, int, error) {
p := &ListVpnConnectionsParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListVpnConnections(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -2593,21 +2599,6 @@ func (s *VPNService) GetVpnConnectionByID(id string) (*VpnConnection, int, error
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListVpnConnections(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
@@ -2646,6 +2637,7 @@ type VpnConnection struct {
Dpd bool `json:"dpd,omitempty"`
Esplifetime int64 `json:"esplifetime,omitempty"`
Esppolicy string `json:"esppolicy,omitempty"`
+ Forceencap bool `json:"forceencap,omitempty"`
Fordisplay bool `json:"fordisplay,omitempty"`
Gateway string `json:"gateway,omitempty"`
Id string `json:"id,omitempty"`
@@ -2761,6 +2753,7 @@ type UpdateVpnConnectionResponse struct {
Dpd bool `json:"dpd,omitempty"`
Esplifetime int64 `json:"esplifetime,omitempty"`
Esppolicy string `json:"esppolicy,omitempty"`
+ Forceencap bool `json:"forceencap,omitempty"`
Fordisplay bool `json:"fordisplay,omitempty"`
Gateway string `json:"gateway,omitempty"`
Id string `json:"id,omitempty"`
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/VirtualMachineService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/VirtualMachineService.go
index 2265e0b91683..79783e2f92ff 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/VirtualMachineService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/VirtualMachineService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -440,6 +440,8 @@ type DeployVirtualMachineResponse struct {
Domainid string `json:"domainid,omitempty"`
Id string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
Type string `json:"type,omitempty"`
VirtualmachineIds []string `json:"virtualmachineIds,omitempty"`
} `json:"affinitygroup,omitempty"`
@@ -576,6 +578,8 @@ type DeployVirtualMachineResponse struct {
Resourcetype string `json:"resourcetype,omitempty"`
Value string `json:"value,omitempty"`
} `json:"tags,omitempty"`
+ Virtualmachinecount int `json:"virtualmachinecount,omitempty"`
+ Virtualmachineids []string `json:"virtualmachineids,omitempty"`
} `json:"securitygroup,omitempty"`
Serviceofferingid string `json:"serviceofferingid,omitempty"`
Serviceofferingname string `json:"serviceofferingname,omitempty"`
@@ -596,6 +600,8 @@ type DeployVirtualMachineResponse struct {
Templatedisplaytext string `json:"templatedisplaytext,omitempty"`
Templateid string `json:"templateid,omitempty"`
Templatename string `json:"templatename,omitempty"`
+ Userid string `json:"userid,omitempty"`
+ Username string `json:"username,omitempty"`
Vgpu string `json:"vgpu,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
Zonename string `json:"zonename,omitempty"`
@@ -645,7 +651,7 @@ func (s *VirtualMachineService) NewDestroyVirtualMachineParams(id string) *Destr
return p
}
-// Destroys a virtual machine. Once destroyed, only the administrator can recover it.
+// Destroys a virtual machine.
func (s *VirtualMachineService) DestroyVirtualMachine(p *DestroyVirtualMachineParams) (*DestroyVirtualMachineResponse, error) {
resp, err := s.cs.newRequest("destroyVirtualMachine", p.toURLValues())
if err != nil {
@@ -689,6 +695,8 @@ type DestroyVirtualMachineResponse struct {
Domainid string `json:"domainid,omitempty"`
Id string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
Type string `json:"type,omitempty"`
VirtualmachineIds []string `json:"virtualmachineIds,omitempty"`
} `json:"affinitygroup,omitempty"`
@@ -825,6 +833,8 @@ type DestroyVirtualMachineResponse struct {
Resourcetype string `json:"resourcetype,omitempty"`
Value string `json:"value,omitempty"`
} `json:"tags,omitempty"`
+ Virtualmachinecount int `json:"virtualmachinecount,omitempty"`
+ Virtualmachineids []string `json:"virtualmachineids,omitempty"`
} `json:"securitygroup,omitempty"`
Serviceofferingid string `json:"serviceofferingid,omitempty"`
Serviceofferingname string `json:"serviceofferingname,omitempty"`
@@ -845,6 +855,8 @@ type DestroyVirtualMachineResponse struct {
Templatedisplaytext string `json:"templatedisplaytext,omitempty"`
Templateid string `json:"templateid,omitempty"`
Templatename string `json:"templatename,omitempty"`
+ Userid string `json:"userid,omitempty"`
+ Username string `json:"username,omitempty"`
Vgpu string `json:"vgpu,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
Zonename string `json:"zonename,omitempty"`
@@ -926,6 +938,8 @@ type RebootVirtualMachineResponse struct {
Domainid string `json:"domainid,omitempty"`
Id string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
Type string `json:"type,omitempty"`
VirtualmachineIds []string `json:"virtualmachineIds,omitempty"`
} `json:"affinitygroup,omitempty"`
@@ -1062,6 +1076,8 @@ type RebootVirtualMachineResponse struct {
Resourcetype string `json:"resourcetype,omitempty"`
Value string `json:"value,omitempty"`
} `json:"tags,omitempty"`
+ Virtualmachinecount int `json:"virtualmachinecount,omitempty"`
+ Virtualmachineids []string `json:"virtualmachineids,omitempty"`
} `json:"securitygroup,omitempty"`
Serviceofferingid string `json:"serviceofferingid,omitempty"`
Serviceofferingname string `json:"serviceofferingname,omitempty"`
@@ -1082,6 +1098,8 @@ type RebootVirtualMachineResponse struct {
Templatedisplaytext string `json:"templatedisplaytext,omitempty"`
Templateid string `json:"templateid,omitempty"`
Templatename string `json:"templatename,omitempty"`
+ Userid string `json:"userid,omitempty"`
+ Username string `json:"username,omitempty"`
Vgpu string `json:"vgpu,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
Zonename string `json:"zonename,omitempty"`
@@ -1185,6 +1203,8 @@ type StartVirtualMachineResponse struct {
Domainid string `json:"domainid,omitempty"`
Id string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
Type string `json:"type,omitempty"`
VirtualmachineIds []string `json:"virtualmachineIds,omitempty"`
} `json:"affinitygroup,omitempty"`
@@ -1321,6 +1341,8 @@ type StartVirtualMachineResponse struct {
Resourcetype string `json:"resourcetype,omitempty"`
Value string `json:"value,omitempty"`
} `json:"tags,omitempty"`
+ Virtualmachinecount int `json:"virtualmachinecount,omitempty"`
+ Virtualmachineids []string `json:"virtualmachineids,omitempty"`
} `json:"securitygroup,omitempty"`
Serviceofferingid string `json:"serviceofferingid,omitempty"`
Serviceofferingname string `json:"serviceofferingname,omitempty"`
@@ -1341,6 +1363,8 @@ type StartVirtualMachineResponse struct {
Templatedisplaytext string `json:"templatedisplaytext,omitempty"`
Templateid string `json:"templateid,omitempty"`
Templatename string `json:"templatename,omitempty"`
+ Userid string `json:"userid,omitempty"`
+ Username string `json:"username,omitempty"`
Vgpu string `json:"vgpu,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
Zonename string `json:"zonename,omitempty"`
@@ -1434,6 +1458,8 @@ type StopVirtualMachineResponse struct {
Domainid string `json:"domainid,omitempty"`
Id string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
Type string `json:"type,omitempty"`
VirtualmachineIds []string `json:"virtualmachineIds,omitempty"`
} `json:"affinitygroup,omitempty"`
@@ -1570,6 +1596,8 @@ type StopVirtualMachineResponse struct {
Resourcetype string `json:"resourcetype,omitempty"`
Value string `json:"value,omitempty"`
} `json:"tags,omitempty"`
+ Virtualmachinecount int `json:"virtualmachinecount,omitempty"`
+ Virtualmachineids []string `json:"virtualmachineids,omitempty"`
} `json:"securitygroup,omitempty"`
Serviceofferingid string `json:"serviceofferingid,omitempty"`
Serviceofferingname string `json:"serviceofferingname,omitempty"`
@@ -1590,6 +1618,8 @@ type StopVirtualMachineResponse struct {
Templatedisplaytext string `json:"templatedisplaytext,omitempty"`
Templateid string `json:"templateid,omitempty"`
Templatename string `json:"templatename,omitempty"`
+ Userid string `json:"userid,omitempty"`
+ Username string `json:"username,omitempty"`
Vgpu string `json:"vgpu,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
Zonename string `json:"zonename,omitempty"`
@@ -1671,6 +1701,8 @@ type ResetPasswordForVirtualMachineResponse struct {
Domainid string `json:"domainid,omitempty"`
Id string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
Type string `json:"type,omitempty"`
VirtualmachineIds []string `json:"virtualmachineIds,omitempty"`
} `json:"affinitygroup,omitempty"`
@@ -1807,6 +1839,8 @@ type ResetPasswordForVirtualMachineResponse struct {
Resourcetype string `json:"resourcetype,omitempty"`
Value string `json:"value,omitempty"`
} `json:"tags,omitempty"`
+ Virtualmachinecount int `json:"virtualmachinecount,omitempty"`
+ Virtualmachineids []string `json:"virtualmachineids,omitempty"`
} `json:"securitygroup,omitempty"`
Serviceofferingid string `json:"serviceofferingid,omitempty"`
Serviceofferingname string `json:"serviceofferingname,omitempty"`
@@ -1827,6 +1861,8 @@ type ResetPasswordForVirtualMachineResponse struct {
Templatedisplaytext string `json:"templatedisplaytext,omitempty"`
Templateid string `json:"templateid,omitempty"`
Templatename string `json:"templatename,omitempty"`
+ Userid string `json:"userid,omitempty"`
+ Username string `json:"username,omitempty"`
Vgpu string `json:"vgpu,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
Zonename string `json:"zonename,omitempty"`
@@ -1869,6 +1905,9 @@ func (p *UpdateVirtualMachineParams) toURLValues() url.Values {
if v, found := p.p["id"]; found {
u.Set("id", v.(string))
}
+ if v, found := p.p["instancename"]; found {
+ u.Set("instancename", v.(string))
+ }
if v, found := p.p["isdynamicallyscalable"]; found {
vv := strconv.FormatBool(v.(bool))
u.Set("isdynamicallyscalable", vv)
@@ -1941,6 +1980,14 @@ func (p *UpdateVirtualMachineParams) SetId(v string) {
return
}
+func (p *UpdateVirtualMachineParams) SetInstancename(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["instancename"] = v
+ return
+}
+
func (p *UpdateVirtualMachineParams) SetIsdynamicallyscalable(v bool) {
if p.p == nil {
p.p = make(map[string]interface{})
@@ -2005,6 +2052,8 @@ type UpdateVirtualMachineResponse struct {
Domainid string `json:"domainid,omitempty"`
Id string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
Type string `json:"type,omitempty"`
VirtualmachineIds []string `json:"virtualmachineIds,omitempty"`
} `json:"affinitygroup,omitempty"`
@@ -2141,6 +2190,8 @@ type UpdateVirtualMachineResponse struct {
Resourcetype string `json:"resourcetype,omitempty"`
Value string `json:"value,omitempty"`
} `json:"tags,omitempty"`
+ Virtualmachinecount int `json:"virtualmachinecount,omitempty"`
+ Virtualmachineids []string `json:"virtualmachineids,omitempty"`
} `json:"securitygroup,omitempty"`
Serviceofferingid string `json:"serviceofferingid,omitempty"`
Serviceofferingname string `json:"serviceofferingname,omitempty"`
@@ -2161,6 +2212,8 @@ type UpdateVirtualMachineResponse struct {
Templatedisplaytext string `json:"templatedisplaytext,omitempty"`
Templateid string `json:"templateid,omitempty"`
Templatename string `json:"templatename,omitempty"`
+ Userid string `json:"userid,omitempty"`
+ Username string `json:"username,omitempty"`
Vgpu string `json:"vgpu,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
Zonename string `json:"zonename,omitempty"`
@@ -2222,6 +2275,9 @@ func (p *ListVirtualMachinesParams) toURLValues() url.Values {
vv := strconv.FormatBool(v.(bool))
u.Set("isrecursive", vv)
}
+ if v, found := p.p["keypair"]; found {
+ u.Set("keypair", v.(string))
+ }
if v, found := p.p["keyword"]; found {
u.Set("keyword", v.(string))
}
@@ -2275,6 +2331,9 @@ func (p *ListVirtualMachinesParams) toURLValues() url.Values {
if v, found := p.p["templateid"]; found {
u.Set("templateid", v.(string))
}
+ if v, found := p.p["userid"]; found {
+ u.Set("userid", v.(string))
+ }
if v, found := p.p["vpcid"]; found {
u.Set("vpcid", v.(string))
}
@@ -2388,6 +2447,14 @@ func (p *ListVirtualMachinesParams) SetIsrecursive(v bool) {
return
}
+func (p *ListVirtualMachinesParams) SetKeypair(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["keypair"] = v
+ return
+}
+
func (p *ListVirtualMachinesParams) SetKeyword(v string) {
if p.p == nil {
p.p = make(map[string]interface{})
@@ -2492,6 +2559,14 @@ func (p *ListVirtualMachinesParams) SetTemplateid(v string) {
return
}
+func (p *ListVirtualMachinesParams) SetUserid(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["userid"] = v
+ return
+}
+
func (p *ListVirtualMachinesParams) SetVpcid(v string) {
if p.p == nil {
p.p = make(map[string]interface{})
@@ -2517,27 +2592,23 @@ func (s *VirtualMachineService) NewListVirtualMachinesParams() *ListVirtualMachi
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *VirtualMachineService) GetVirtualMachineID(name string) (string, error) {
+func (s *VirtualMachineService) GetVirtualMachineID(name string, opts ...OptionFunc) (string, error) {
p := &ListVirtualMachinesParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListVirtualMachines(p)
if err != nil {
return "", err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListVirtualMachines(p)
- if err != nil {
- return "", err
- }
- }
-
if l.Count == 0 {
return "", fmt.Errorf("No match found for %s: %+v", name, l)
}
@@ -2557,13 +2628,13 @@ func (s *VirtualMachineService) GetVirtualMachineID(name string) (string, error)
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *VirtualMachineService) GetVirtualMachineByName(name string) (*VirtualMachine, int, error) {
- id, err := s.GetVirtualMachineID(name)
+func (s *VirtualMachineService) GetVirtualMachineByName(name string, opts ...OptionFunc) (*VirtualMachine, int, error) {
+ id, err := s.GetVirtualMachineID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetVirtualMachineByID(id)
+ r, count, err := s.GetVirtualMachineByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -2571,12 +2642,18 @@ func (s *VirtualMachineService) GetVirtualMachineByName(name string) (*VirtualMa
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *VirtualMachineService) GetVirtualMachineByID(id string) (*VirtualMachine, int, error) {
+func (s *VirtualMachineService) GetVirtualMachineByID(id string, opts ...OptionFunc) (*VirtualMachine, int, error) {
p := &ListVirtualMachinesParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListVirtualMachines(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -2587,21 +2664,6 @@ func (s *VirtualMachineService) GetVirtualMachineByID(id string) (*VirtualMachin
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListVirtualMachines(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
@@ -2640,6 +2702,8 @@ type VirtualMachine struct {
Domainid string `json:"domainid,omitempty"`
Id string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
Type string `json:"type,omitempty"`
VirtualmachineIds []string `json:"virtualmachineIds,omitempty"`
} `json:"affinitygroup,omitempty"`
@@ -2776,6 +2840,8 @@ type VirtualMachine struct {
Resourcetype string `json:"resourcetype,omitempty"`
Value string `json:"value,omitempty"`
} `json:"tags,omitempty"`
+ Virtualmachinecount int `json:"virtualmachinecount,omitempty"`
+ Virtualmachineids []string `json:"virtualmachineids,omitempty"`
} `json:"securitygroup,omitempty"`
Serviceofferingid string `json:"serviceofferingid,omitempty"`
Serviceofferingname string `json:"serviceofferingname,omitempty"`
@@ -2796,6 +2862,8 @@ type VirtualMachine struct {
Templatedisplaytext string `json:"templatedisplaytext,omitempty"`
Templateid string `json:"templateid,omitempty"`
Templatename string `json:"templatename,omitempty"`
+ Userid string `json:"userid,omitempty"`
+ Username string `json:"username,omitempty"`
Vgpu string `json:"vgpu,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
Zonename string `json:"zonename,omitempty"`
@@ -2938,6 +3006,8 @@ type RestoreVirtualMachineResponse struct {
Domainid string `json:"domainid,omitempty"`
Id string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
Type string `json:"type,omitempty"`
VirtualmachineIds []string `json:"virtualmachineIds,omitempty"`
} `json:"affinitygroup,omitempty"`
@@ -3074,6 +3144,8 @@ type RestoreVirtualMachineResponse struct {
Resourcetype string `json:"resourcetype,omitempty"`
Value string `json:"value,omitempty"`
} `json:"tags,omitempty"`
+ Virtualmachinecount int `json:"virtualmachinecount,omitempty"`
+ Virtualmachineids []string `json:"virtualmachineids,omitempty"`
} `json:"securitygroup,omitempty"`
Serviceofferingid string `json:"serviceofferingid,omitempty"`
Serviceofferingname string `json:"serviceofferingname,omitempty"`
@@ -3094,6 +3166,8 @@ type RestoreVirtualMachineResponse struct {
Templatedisplaytext string `json:"templatedisplaytext,omitempty"`
Templateid string `json:"templateid,omitempty"`
Templatename string `json:"templatename,omitempty"`
+ Userid string `json:"userid,omitempty"`
+ Username string `json:"username,omitempty"`
Vgpu string `json:"vgpu,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
Zonename string `json:"zonename,omitempty"`
@@ -3182,6 +3256,8 @@ type ChangeServiceForVirtualMachineResponse struct {
Domainid string `json:"domainid,omitempty"`
Id string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
Type string `json:"type,omitempty"`
VirtualmachineIds []string `json:"virtualmachineIds,omitempty"`
} `json:"affinitygroup,omitempty"`
@@ -3318,6 +3394,8 @@ type ChangeServiceForVirtualMachineResponse struct {
Resourcetype string `json:"resourcetype,omitempty"`
Value string `json:"value,omitempty"`
} `json:"tags,omitempty"`
+ Virtualmachinecount int `json:"virtualmachinecount,omitempty"`
+ Virtualmachineids []string `json:"virtualmachineids,omitempty"`
} `json:"securitygroup,omitempty"`
Serviceofferingid string `json:"serviceofferingid,omitempty"`
Serviceofferingname string `json:"serviceofferingname,omitempty"`
@@ -3338,6 +3416,8 @@ type ChangeServiceForVirtualMachineResponse struct {
Templatedisplaytext string `json:"templatedisplaytext,omitempty"`
Templateid string `json:"templateid,omitempty"`
Templatename string `json:"templatename,omitempty"`
+ Userid string `json:"userid,omitempty"`
+ Username string `json:"username,omitempty"`
Vgpu string `json:"vgpu,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
Zonename string `json:"zonename,omitempty"`
@@ -3541,6 +3621,8 @@ type AssignVirtualMachineResponse struct {
Domainid string `json:"domainid,omitempty"`
Id string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
Type string `json:"type,omitempty"`
VirtualmachineIds []string `json:"virtualmachineIds,omitempty"`
} `json:"affinitygroup,omitempty"`
@@ -3677,6 +3759,8 @@ type AssignVirtualMachineResponse struct {
Resourcetype string `json:"resourcetype,omitempty"`
Value string `json:"value,omitempty"`
} `json:"tags,omitempty"`
+ Virtualmachinecount int `json:"virtualmachinecount,omitempty"`
+ Virtualmachineids []string `json:"virtualmachineids,omitempty"`
} `json:"securitygroup,omitempty"`
Serviceofferingid string `json:"serviceofferingid,omitempty"`
Serviceofferingname string `json:"serviceofferingname,omitempty"`
@@ -3697,6 +3781,8 @@ type AssignVirtualMachineResponse struct {
Templatedisplaytext string `json:"templatedisplaytext,omitempty"`
Templateid string `json:"templateid,omitempty"`
Templatename string `json:"templatename,omitempty"`
+ Userid string `json:"userid,omitempty"`
+ Username string `json:"username,omitempty"`
Vgpu string `json:"vgpu,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
Zonename string `json:"zonename,omitempty"`
@@ -3800,6 +3886,8 @@ type MigrateVirtualMachineResponse struct {
Domainid string `json:"domainid,omitempty"`
Id string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
Type string `json:"type,omitempty"`
VirtualmachineIds []string `json:"virtualmachineIds,omitempty"`
} `json:"affinitygroup,omitempty"`
@@ -3936,6 +4024,8 @@ type MigrateVirtualMachineResponse struct {
Resourcetype string `json:"resourcetype,omitempty"`
Value string `json:"value,omitempty"`
} `json:"tags,omitempty"`
+ Virtualmachinecount int `json:"virtualmachinecount,omitempty"`
+ Virtualmachineids []string `json:"virtualmachineids,omitempty"`
} `json:"securitygroup,omitempty"`
Serviceofferingid string `json:"serviceofferingid,omitempty"`
Serviceofferingname string `json:"serviceofferingname,omitempty"`
@@ -3956,6 +4046,8 @@ type MigrateVirtualMachineResponse struct {
Templatedisplaytext string `json:"templatedisplaytext,omitempty"`
Templateid string `json:"templateid,omitempty"`
Templatename string `json:"templatename,omitempty"`
+ Userid string `json:"userid,omitempty"`
+ Username string `json:"username,omitempty"`
Vgpu string `json:"vgpu,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
Zonename string `json:"zonename,omitempty"`
@@ -4065,6 +4157,8 @@ type MigrateVirtualMachineWithVolumeResponse struct {
Domainid string `json:"domainid,omitempty"`
Id string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
Type string `json:"type,omitempty"`
VirtualmachineIds []string `json:"virtualmachineIds,omitempty"`
} `json:"affinitygroup,omitempty"`
@@ -4201,6 +4295,8 @@ type MigrateVirtualMachineWithVolumeResponse struct {
Resourcetype string `json:"resourcetype,omitempty"`
Value string `json:"value,omitempty"`
} `json:"tags,omitempty"`
+ Virtualmachinecount int `json:"virtualmachinecount,omitempty"`
+ Virtualmachineids []string `json:"virtualmachineids,omitempty"`
} `json:"securitygroup,omitempty"`
Serviceofferingid string `json:"serviceofferingid,omitempty"`
Serviceofferingname string `json:"serviceofferingname,omitempty"`
@@ -4221,6 +4317,8 @@ type MigrateVirtualMachineWithVolumeResponse struct {
Templatedisplaytext string `json:"templatedisplaytext,omitempty"`
Templateid string `json:"templateid,omitempty"`
Templatename string `json:"templatename,omitempty"`
+ Userid string `json:"userid,omitempty"`
+ Username string `json:"username,omitempty"`
Vgpu string `json:"vgpu,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
Zonename string `json:"zonename,omitempty"`
@@ -4281,6 +4379,8 @@ type RecoverVirtualMachineResponse struct {
Domainid string `json:"domainid,omitempty"`
Id string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
Type string `json:"type,omitempty"`
VirtualmachineIds []string `json:"virtualmachineIds,omitempty"`
} `json:"affinitygroup,omitempty"`
@@ -4417,6 +4517,8 @@ type RecoverVirtualMachineResponse struct {
Resourcetype string `json:"resourcetype,omitempty"`
Value string `json:"value,omitempty"`
} `json:"tags,omitempty"`
+ Virtualmachinecount int `json:"virtualmachinecount,omitempty"`
+ Virtualmachineids []string `json:"virtualmachineids,omitempty"`
} `json:"securitygroup,omitempty"`
Serviceofferingid string `json:"serviceofferingid,omitempty"`
Serviceofferingname string `json:"serviceofferingname,omitempty"`
@@ -4437,6 +4539,8 @@ type RecoverVirtualMachineResponse struct {
Templatedisplaytext string `json:"templatedisplaytext,omitempty"`
Templateid string `json:"templateid,omitempty"`
Templatename string `json:"templatename,omitempty"`
+ Userid string `json:"userid,omitempty"`
+ Username string `json:"username,omitempty"`
Vgpu string `json:"vgpu,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
Zonename string `json:"zonename,omitempty"`
@@ -4663,6 +4767,8 @@ type AddNicToVirtualMachineResponse struct {
Domainid string `json:"domainid,omitempty"`
Id string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
Type string `json:"type,omitempty"`
VirtualmachineIds []string `json:"virtualmachineIds,omitempty"`
} `json:"affinitygroup,omitempty"`
@@ -4799,6 +4905,8 @@ type AddNicToVirtualMachineResponse struct {
Resourcetype string `json:"resourcetype,omitempty"`
Value string `json:"value,omitempty"`
} `json:"tags,omitempty"`
+ Virtualmachinecount int `json:"virtualmachinecount,omitempty"`
+ Virtualmachineids []string `json:"virtualmachineids,omitempty"`
} `json:"securitygroup,omitempty"`
Serviceofferingid string `json:"serviceofferingid,omitempty"`
Serviceofferingname string `json:"serviceofferingname,omitempty"`
@@ -4819,6 +4927,8 @@ type AddNicToVirtualMachineResponse struct {
Templatedisplaytext string `json:"templatedisplaytext,omitempty"`
Templateid string `json:"templateid,omitempty"`
Templatename string `json:"templatename,omitempty"`
+ Userid string `json:"userid,omitempty"`
+ Username string `json:"username,omitempty"`
Vgpu string `json:"vgpu,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
Zonename string `json:"zonename,omitempty"`
@@ -4912,6 +5022,8 @@ type RemoveNicFromVirtualMachineResponse struct {
Domainid string `json:"domainid,omitempty"`
Id string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
Type string `json:"type,omitempty"`
VirtualmachineIds []string `json:"virtualmachineIds,omitempty"`
} `json:"affinitygroup,omitempty"`
@@ -5048,6 +5160,8 @@ type RemoveNicFromVirtualMachineResponse struct {
Resourcetype string `json:"resourcetype,omitempty"`
Value string `json:"value,omitempty"`
} `json:"tags,omitempty"`
+ Virtualmachinecount int `json:"virtualmachinecount,omitempty"`
+ Virtualmachineids []string `json:"virtualmachineids,omitempty"`
} `json:"securitygroup,omitempty"`
Serviceofferingid string `json:"serviceofferingid,omitempty"`
Serviceofferingname string `json:"serviceofferingname,omitempty"`
@@ -5068,6 +5182,8 @@ type RemoveNicFromVirtualMachineResponse struct {
Templatedisplaytext string `json:"templatedisplaytext,omitempty"`
Templateid string `json:"templateid,omitempty"`
Templatename string `json:"templatename,omitempty"`
+ Userid string `json:"userid,omitempty"`
+ Username string `json:"username,omitempty"`
Vgpu string `json:"vgpu,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
Zonename string `json:"zonename,omitempty"`
@@ -5161,6 +5277,8 @@ type UpdateDefaultNicForVirtualMachineResponse struct {
Domainid string `json:"domainid,omitempty"`
Id string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
+ Project string `json:"project,omitempty"`
+ Projectid string `json:"projectid,omitempty"`
Type string `json:"type,omitempty"`
VirtualmachineIds []string `json:"virtualmachineIds,omitempty"`
} `json:"affinitygroup,omitempty"`
@@ -5297,6 +5415,8 @@ type UpdateDefaultNicForVirtualMachineResponse struct {
Resourcetype string `json:"resourcetype,omitempty"`
Value string `json:"value,omitempty"`
} `json:"tags,omitempty"`
+ Virtualmachinecount int `json:"virtualmachinecount,omitempty"`
+ Virtualmachineids []string `json:"virtualmachineids,omitempty"`
} `json:"securitygroup,omitempty"`
Serviceofferingid string `json:"serviceofferingid,omitempty"`
Serviceofferingname string `json:"serviceofferingname,omitempty"`
@@ -5317,6 +5437,8 @@ type UpdateDefaultNicForVirtualMachineResponse struct {
Templatedisplaytext string `json:"templatedisplaytext,omitempty"`
Templateid string `json:"templateid,omitempty"`
Templatename string `json:"templatename,omitempty"`
+ Userid string `json:"userid,omitempty"`
+ Username string `json:"username,omitempty"`
Vgpu string `json:"vgpu,omitempty"`
Zoneid string `json:"zoneid,omitempty"`
Zonename string `json:"zonename,omitempty"`
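
The VirtualMachineService changes above add `keypair` and `userid` filters to ListVirtualMachinesParams and an `instancename` setter to UpdateVirtualMachineParams. A minimal sketch of how the new list filters could be used, assuming an already-configured *cloudstack.CloudStackClient; the keypair name and user ID are placeholders, and the `VirtualMachines` response field is assumed from the library's usual list-response convention:

package cloudstackexample

import (
	"fmt"

	"github.com/xanzy/go-cloudstack/cloudstack"
)

// listVMsByKeypair lists the virtual machines deployed with the given SSH
// keypair and owned by the given user, using the new keypair and userid
// parameters introduced above. Both values are caller-supplied placeholders.
func listVMsByKeypair(cs *cloudstack.CloudStackClient, keypair, userid string) error {
	p := cs.VirtualMachine.NewListVirtualMachinesParams()
	p.SetKeypair(keypair)
	p.SetUserid(userid)

	l, err := cs.VirtualMachine.ListVirtualMachines(p)
	if err != nil {
		return err
	}
	for _, vm := range l.VirtualMachines {
		fmt.Printf("%s (%s)\n", vm.Name, vm.Id)
	}
	return nil
}
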
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/VolumeService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/VolumeService.go
index eb4a7c4608a7..6ed16adb6d59 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/VolumeService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/VolumeService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -144,6 +144,7 @@ type AttachVolumeResponse struct {
Path string `json:"path,omitempty"`
Project string `json:"project,omitempty"`
Projectid string `json:"projectid,omitempty"`
+ Provisioningtype string `json:"provisioningtype,omitempty"`
Quiescevm bool `json:"quiescevm,omitempty"`
Serviceofferingdisplaytext string `json:"serviceofferingdisplaytext,omitempty"`
Serviceofferingid string `json:"serviceofferingid,omitempty"`
@@ -377,6 +378,7 @@ type UploadVolumeResponse struct {
Path string `json:"path,omitempty"`
Project string `json:"project,omitempty"`
Projectid string `json:"projectid,omitempty"`
+ Provisioningtype string `json:"provisioningtype,omitempty"`
Quiescevm bool `json:"quiescevm,omitempty"`
Serviceofferingdisplaytext string `json:"serviceofferingdisplaytext,omitempty"`
Serviceofferingid string `json:"serviceofferingid,omitempty"`
@@ -530,6 +532,7 @@ type DetachVolumeResponse struct {
Path string `json:"path,omitempty"`
Project string `json:"project,omitempty"`
Projectid string `json:"projectid,omitempty"`
+ Provisioningtype string `json:"provisioningtype,omitempty"`
Quiescevm bool `json:"quiescevm,omitempty"`
Serviceofferingdisplaytext string `json:"serviceofferingdisplaytext,omitempty"`
Serviceofferingid string `json:"serviceofferingid,omitempty"`
@@ -726,10 +729,9 @@ func (p *CreateVolumeParams) SetZoneid(v string) {
// You should always use this function to get a new CreateVolumeParams instance,
// as then you are sure you have configured all required params
-func (s *VolumeService) NewCreateVolumeParams(name string) *CreateVolumeParams {
+func (s *VolumeService) NewCreateVolumeParams() *CreateVolumeParams {
p := &CreateVolumeParams{}
p.p = make(map[string]interface{})
- p.p["name"] = name
return p
}
@@ -797,6 +799,7 @@ type CreateVolumeResponse struct {
Path string `json:"path,omitempty"`
Project string `json:"project,omitempty"`
Projectid string `json:"projectid,omitempty"`
+ Provisioningtype string `json:"provisioningtype,omitempty"`
Quiescevm bool `json:"quiescevm,omitempty"`
Serviceofferingdisplaytext string `json:"serviceofferingdisplaytext,omitempty"`
Serviceofferingid string `json:"serviceofferingid,omitempty"`
@@ -1123,27 +1126,23 @@ func (s *VolumeService) NewListVolumesParams() *ListVolumesParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *VolumeService) GetVolumeID(name string) (string, error) {
+func (s *VolumeService) GetVolumeID(name string, opts ...OptionFunc) (string, error) {
p := &ListVolumesParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListVolumes(p)
if err != nil {
return "", err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListVolumes(p)
- if err != nil {
- return "", err
- }
- }
-
if l.Count == 0 {
return "", fmt.Errorf("No match found for %s: %+v", name, l)
}
@@ -1163,13 +1162,13 @@ func (s *VolumeService) GetVolumeID(name string) (string, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *VolumeService) GetVolumeByName(name string) (*Volume, int, error) {
- id, err := s.GetVolumeID(name)
+func (s *VolumeService) GetVolumeByName(name string, opts ...OptionFunc) (*Volume, int, error) {
+ id, err := s.GetVolumeID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetVolumeByID(id)
+ r, count, err := s.GetVolumeByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -1177,12 +1176,18 @@ func (s *VolumeService) GetVolumeByName(name string) (*Volume, int, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *VolumeService) GetVolumeByID(id string) (*Volume, int, error) {
+func (s *VolumeService) GetVolumeByID(id string, opts ...OptionFunc) (*Volume, int, error) {
p := &ListVolumesParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListVolumes(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -1193,21 +1198,6 @@ func (s *VolumeService) GetVolumeByID(id string) (*Volume, int, error) {
return nil, -1, err
}
- if l.Count == 0 {
- // If no matches, search all projects
- p.p["projectid"] = "-1"
-
- l, err = s.ListVolumes(p)
- if err != nil {
- if strings.Contains(err.Error(), fmt.Sprintf(
- "Invalid parameter id value=%s due to incorrect long value format, "+
- "or entity does not exist", id)) {
- return nil, 0, fmt.Errorf("No match found for %s: %+v", id, l)
- }
- return nil, -1, err
- }
- }
-
if l.Count == 0 {
return nil, l.Count, fmt.Errorf("No match found for %s: %+v", id, l)
}
@@ -1266,6 +1256,7 @@ type Volume struct {
Path string `json:"path,omitempty"`
Project string `json:"project,omitempty"`
Projectid string `json:"projectid,omitempty"`
+ Provisioningtype string `json:"provisioningtype,omitempty"`
Quiescevm bool `json:"quiescevm,omitempty"`
Serviceofferingdisplaytext string `json:"serviceofferingdisplaytext,omitempty"`
Serviceofferingid string `json:"serviceofferingid,omitempty"`
@@ -1540,6 +1531,7 @@ type MigrateVolumeResponse struct {
Path string `json:"path,omitempty"`
Project string `json:"project,omitempty"`
Projectid string `json:"projectid,omitempty"`
+ Provisioningtype string `json:"provisioningtype,omitempty"`
Quiescevm bool `json:"quiescevm,omitempty"`
Serviceofferingdisplaytext string `json:"serviceofferingdisplaytext,omitempty"`
Serviceofferingid string `json:"serviceofferingid,omitempty"`
@@ -1590,6 +1582,14 @@ func (p *ResizeVolumeParams) toURLValues() url.Values {
if v, found := p.p["id"]; found {
u.Set("id", v.(string))
}
+ if v, found := p.p["maxiops"]; found {
+ vv := strconv.FormatInt(v.(int64), 10)
+ u.Set("maxiops", vv)
+ }
+ if v, found := p.p["miniops"]; found {
+ vv := strconv.FormatInt(v.(int64), 10)
+ u.Set("miniops", vv)
+ }
if v, found := p.p["shrinkok"]; found {
vv := strconv.FormatBool(v.(bool))
u.Set("shrinkok", vv)
@@ -1617,6 +1617,22 @@ func (p *ResizeVolumeParams) SetId(v string) {
return
}
+func (p *ResizeVolumeParams) SetMaxiops(v int64) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["maxiops"] = v
+ return
+}
+
+func (p *ResizeVolumeParams) SetMiniops(v int64) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["miniops"] = v
+ return
+}
+
func (p *ResizeVolumeParams) SetShrinkok(v bool) {
if p.p == nil {
p.p = make(map[string]interface{})
@@ -1706,6 +1722,7 @@ type ResizeVolumeResponse struct {
Path string `json:"path,omitempty"`
Project string `json:"project,omitempty"`
Projectid string `json:"projectid,omitempty"`
+ Provisioningtype string `json:"provisioningtype,omitempty"`
Quiescevm bool `json:"quiescevm,omitempty"`
Serviceofferingdisplaytext string `json:"serviceofferingdisplaytext,omitempty"`
Serviceofferingid string `json:"serviceofferingid,omitempty"`
@@ -1903,6 +1920,7 @@ type UpdateVolumeResponse struct {
Path string `json:"path,omitempty"`
Project string `json:"project,omitempty"`
Projectid string `json:"projectid,omitempty"`
+ Provisioningtype string `json:"provisioningtype,omitempty"`
Quiescevm bool `json:"quiescevm,omitempty"`
Serviceofferingdisplaytext string `json:"serviceofferingdisplaytext,omitempty"`
Serviceofferingid string `json:"serviceofferingid,omitempty"`
@@ -1937,3 +1955,321 @@ type UpdateVolumeResponse struct {
Zoneid string `json:"zoneid,omitempty"`
Zonename string `json:"zonename,omitempty"`
}
+
+type GetSolidFireVolumeSizeParams struct {
+ p map[string]interface{}
+}
+
+func (p *GetSolidFireVolumeSizeParams) toURLValues() url.Values {
+ u := url.Values{}
+ if p.p == nil {
+ return u
+ }
+ if v, found := p.p["storageid"]; found {
+ u.Set("storageid", v.(string))
+ }
+ if v, found := p.p["volumeid"]; found {
+ u.Set("volumeid", v.(string))
+ }
+ return u
+}
+
+func (p *GetSolidFireVolumeSizeParams) SetStorageid(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["storageid"] = v
+ return
+}
+
+func (p *GetSolidFireVolumeSizeParams) SetVolumeid(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["volumeid"] = v
+ return
+}
+
+// You should always use this function to get a new GetSolidFireVolumeSizeParams instance,
+// as then you are sure you have configured all required params
+func (s *VolumeService) NewGetSolidFireVolumeSizeParams(storageid string, volumeid string) *GetSolidFireVolumeSizeParams {
+ p := &GetSolidFireVolumeSizeParams{}
+ p.p = make(map[string]interface{})
+ p.p["storageid"] = storageid
+ p.p["volumeid"] = volumeid
+ return p
+}
+
+// Get the SF volume size including Hypervisor Snapshot Reserve
+func (s *VolumeService) GetSolidFireVolumeSize(p *GetSolidFireVolumeSizeParams) (*GetSolidFireVolumeSizeResponse, error) {
+ resp, err := s.cs.newRequest("getSolidFireVolumeSize", p.toURLValues())
+ if err != nil {
+ return nil, err
+ }
+
+ var r GetSolidFireVolumeSizeResponse
+ if err := json.Unmarshal(resp, &r); err != nil {
+ return nil, err
+ }
+ return &r, nil
+}
+
+type GetSolidFireVolumeSizeResponse struct {
+ SolidFireVolumeSize int64 `json:"solidFireVolumeSize,omitempty"`
+}
+
+type GetSolidFireVolumeAccessGroupIdParams struct {
+ p map[string]interface{}
+}
+
+func (p *GetSolidFireVolumeAccessGroupIdParams) toURLValues() url.Values {
+ u := url.Values{}
+ if p.p == nil {
+ return u
+ }
+ if v, found := p.p["clusterid"]; found {
+ u.Set("clusterid", v.(string))
+ }
+ if v, found := p.p["storageid"]; found {
+ u.Set("storageid", v.(string))
+ }
+ return u
+}
+
+func (p *GetSolidFireVolumeAccessGroupIdParams) SetClusterid(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["clusterid"] = v
+ return
+}
+
+func (p *GetSolidFireVolumeAccessGroupIdParams) SetStorageid(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["storageid"] = v
+ return
+}
+
+// You should always use this function to get a new GetSolidFireVolumeAccessGroupIdParams instance,
+// as then you are sure you have configured all required params
+func (s *VolumeService) NewGetSolidFireVolumeAccessGroupIdParams(clusterid string, storageid string) *GetSolidFireVolumeAccessGroupIdParams {
+ p := &GetSolidFireVolumeAccessGroupIdParams{}
+ p.p = make(map[string]interface{})
+ p.p["clusterid"] = clusterid
+ p.p["storageid"] = storageid
+ return p
+}
+
+// Get the SF Volume Access Group ID
+func (s *VolumeService) GetSolidFireVolumeAccessGroupId(p *GetSolidFireVolumeAccessGroupIdParams) (*GetSolidFireVolumeAccessGroupIdResponse, error) {
+ resp, err := s.cs.newRequest("getSolidFireVolumeAccessGroupId", p.toURLValues())
+ if err != nil {
+ return nil, err
+ }
+
+ var r GetSolidFireVolumeAccessGroupIdResponse
+ if err := json.Unmarshal(resp, &r); err != nil {
+ return nil, err
+ }
+ return &r, nil
+}
+
+type GetSolidFireVolumeAccessGroupIdResponse struct {
+ SolidFireVolumeAccessGroupId int64 `json:"solidFireVolumeAccessGroupId,omitempty"`
+}
+
+type GetSolidFireVolumeIscsiNameParams struct {
+ p map[string]interface{}
+}
+
+func (p *GetSolidFireVolumeIscsiNameParams) toURLValues() url.Values {
+ u := url.Values{}
+ if p.p == nil {
+ return u
+ }
+ if v, found := p.p["volumeid"]; found {
+ u.Set("volumeid", v.(string))
+ }
+ return u
+}
+
+func (p *GetSolidFireVolumeIscsiNameParams) SetVolumeid(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["volumeid"] = v
+ return
+}
+
+// You should always use this function to get a new GetSolidFireVolumeIscsiNameParams instance,
+// as then you are sure you have configured all required params
+func (s *VolumeService) NewGetSolidFireVolumeIscsiNameParams(volumeid string) *GetSolidFireVolumeIscsiNameParams {
+ p := &GetSolidFireVolumeIscsiNameParams{}
+ p.p = make(map[string]interface{})
+ p.p["volumeid"] = volumeid
+ return p
+}
+
+// Get SolidFire Volume's Iscsi Name
+func (s *VolumeService) GetSolidFireVolumeIscsiName(p *GetSolidFireVolumeIscsiNameParams) (*GetSolidFireVolumeIscsiNameResponse, error) {
+ resp, err := s.cs.newRequest("getSolidFireVolumeIscsiName", p.toURLValues())
+ if err != nil {
+ return nil, err
+ }
+
+ var r GetSolidFireVolumeIscsiNameResponse
+ if err := json.Unmarshal(resp, &r); err != nil {
+ return nil, err
+ }
+ return &r, nil
+}
+
+type GetSolidFireVolumeIscsiNameResponse struct {
+ SolidFireVolumeIscsiName string `json:"solidFireVolumeIscsiName,omitempty"`
+}
+
+type GetUploadParamsForVolumeParams struct {
+ p map[string]interface{}
+}
+
+func (p *GetUploadParamsForVolumeParams) toURLValues() url.Values {
+ u := url.Values{}
+ if p.p == nil {
+ return u
+ }
+ if v, found := p.p["account"]; found {
+ u.Set("account", v.(string))
+ }
+ if v, found := p.p["checksum"]; found {
+ u.Set("checksum", v.(string))
+ }
+ if v, found := p.p["diskofferingid"]; found {
+ u.Set("diskofferingid", v.(string))
+ }
+ if v, found := p.p["domainid"]; found {
+ u.Set("domainid", v.(string))
+ }
+ if v, found := p.p["format"]; found {
+ u.Set("format", v.(string))
+ }
+ if v, found := p.p["imagestoreuuid"]; found {
+ u.Set("imagestoreuuid", v.(string))
+ }
+ if v, found := p.p["name"]; found {
+ u.Set("name", v.(string))
+ }
+ if v, found := p.p["projectid"]; found {
+ u.Set("projectid", v.(string))
+ }
+ if v, found := p.p["zoneid"]; found {
+ u.Set("zoneid", v.(string))
+ }
+ return u
+}
+
+func (p *GetUploadParamsForVolumeParams) SetAccount(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["account"] = v
+ return
+}
+
+func (p *GetUploadParamsForVolumeParams) SetChecksum(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["checksum"] = v
+ return
+}
+
+func (p *GetUploadParamsForVolumeParams) SetDiskofferingid(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["diskofferingid"] = v
+ return
+}
+
+func (p *GetUploadParamsForVolumeParams) SetDomainid(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["domainid"] = v
+ return
+}
+
+func (p *GetUploadParamsForVolumeParams) SetFormat(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["format"] = v
+ return
+}
+
+func (p *GetUploadParamsForVolumeParams) SetImagestoreuuid(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["imagestoreuuid"] = v
+ return
+}
+
+func (p *GetUploadParamsForVolumeParams) SetName(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["name"] = v
+ return
+}
+
+func (p *GetUploadParamsForVolumeParams) SetProjectid(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["projectid"] = v
+ return
+}
+
+func (p *GetUploadParamsForVolumeParams) SetZoneid(v string) {
+ if p.p == nil {
+ p.p = make(map[string]interface{})
+ }
+ p.p["zoneid"] = v
+ return
+}
+
+// You should always use this function to get a new GetUploadParamsForVolumeParams instance,
+// as then you are sure you have configured all required params
+func (s *VolumeService) NewGetUploadParamsForVolumeParams(format string, name string, zoneid string) *GetUploadParamsForVolumeParams {
+ p := &GetUploadParamsForVolumeParams{}
+ p.p = make(map[string]interface{})
+ p.p["format"] = format
+ p.p["name"] = name
+ p.p["zoneid"] = zoneid
+ return p
+}
+
+// Upload a data disk to the cloudstack cloud.
+func (s *VolumeService) GetUploadParamsForVolume(p *GetUploadParamsForVolumeParams) (*GetUploadParamsForVolumeResponse, error) {
+ resp, err := s.cs.newRequest("getUploadParamsForVolume", p.toURLValues())
+ if err != nil {
+ return nil, err
+ }
+
+ var r GetUploadParamsForVolumeResponse
+ if err := json.Unmarshal(resp, &r); err != nil {
+ return nil, err
+ }
+ return &r, nil
+}
+
+type GetUploadParamsForVolumeResponse struct {
+ Expires string `json:"expires,omitempty"`
+ Id string `json:"id,omitempty"`
+ Metadata string `json:"metadata,omitempty"`
+ PostURL string `json:"postURL,omitempty"`
+ Signature string `json:"signature,omitempty"`
+}
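
The VolumeService changes above add the `provisioningtype` field, min/max IOPS setters on resize, and the SolidFire and getUploadParamsForVolume calls. A minimal sketch of the new upload-params call, assuming a configured client; the format, volume name, and zone ID are placeholders:

package cloudstackexample

import (
	"fmt"

	"github.com/xanzy/go-cloudstack/cloudstack"
)

// requestVolumeUpload sketches the new getUploadParamsForVolume call: it asks
// the management server for a pre-signed upload target for a data disk.
func requestVolumeUpload(cs *cloudstack.CloudStackClient, zoneID string) error {
	p := cs.Volume.NewGetUploadParamsForVolumeParams("QCOW2", "data-disk-1", zoneID)
	r, err := cs.Volume.GetUploadParamsForVolume(p)
	if err != nil {
		return err
	}
	// The response carries everything needed to POST the disk image directly:
	// the upload URL, a signature, opaque metadata, and an expiry timestamp.
	fmt.Println(r.PostURL, r.Signature, r.Metadata, r.Expires)
	return nil
}
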
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/ZoneService.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/ZoneService.go
index 56894dbdf018..e993481809dc 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/ZoneService.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/ZoneService.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -263,7 +263,6 @@ type CreateZoneResponse struct {
Resourcetype string `json:"resourcetype,omitempty"`
Value string `json:"value,omitempty"`
} `json:"tags,omitempty"`
- Vlan string `json:"vlan,omitempty"`
Zonetoken string `json:"zonetoken,omitempty"`
}
@@ -531,7 +530,6 @@ type UpdateZoneResponse struct {
Resourcetype string `json:"resourcetype,omitempty"`
Value string `json:"value,omitempty"`
} `json:"tags,omitempty"`
- Vlan string `json:"vlan,omitempty"`
Zonetoken string `json:"zonetoken,omitempty"`
}
@@ -726,12 +724,18 @@ func (s *ZoneService) NewListZonesParams() *ListZonesParams {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *ZoneService) GetZoneID(name string) (string, error) {
+func (s *ZoneService) GetZoneID(name string, opts ...OptionFunc) (string, error) {
p := &ListZonesParams{}
p.p = make(map[string]interface{})
p.p["name"] = name
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return "", err
+ }
+ }
+
l, err := s.ListZones(p)
if err != nil {
return "", err
@@ -756,13 +760,13 @@ func (s *ZoneService) GetZoneID(name string) (string, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *ZoneService) GetZoneByName(name string) (*Zone, int, error) {
- id, err := s.GetZoneID(name)
+func (s *ZoneService) GetZoneByName(name string, opts ...OptionFunc) (*Zone, int, error) {
+ id, err := s.GetZoneID(name, opts...)
if err != nil {
return nil, -1, err
}
- r, count, err := s.GetZoneByID(id)
+ r, count, err := s.GetZoneByID(id, opts...)
if err != nil {
return nil, count, err
}
@@ -770,12 +774,18 @@ func (s *ZoneService) GetZoneByName(name string) (*Zone, int, error) {
}
// This is a courtesy helper function, which in some cases may not work as expected!
-func (s *ZoneService) GetZoneByID(id string) (*Zone, int, error) {
+func (s *ZoneService) GetZoneByID(id string, opts ...OptionFunc) (*Zone, int, error) {
p := &ListZonesParams{}
p.p = make(map[string]interface{})
p.p["id"] = id
+ for _, fn := range opts {
+ if err := fn(s.cs, p); err != nil {
+ return nil, -1, err
+ }
+ }
+
l, err := s.ListZones(p)
if err != nil {
if strings.Contains(err.Error(), fmt.Sprintf(
@@ -860,7 +870,6 @@ type Zone struct {
Resourcetype string `json:"resourcetype,omitempty"`
Value string `json:"value,omitempty"`
} `json:"tags,omitempty"`
- Vlan string `json:"vlan,omitempty"`
Zonetoken string `json:"zonetoken,omitempty"`
}
diff --git a/vendor/github.com/xanzy/go-cloudstack/cloudstack/cloudstack.go b/vendor/github.com/xanzy/go-cloudstack/cloudstack/cloudstack.go
index 20eb3a798029..53e38bfed26a 100644
--- a/vendor/github.com/xanzy/go-cloudstack/cloudstack/cloudstack.go
+++ b/vendor/github.com/xanzy/go-cloudstack/cloudstack/cloudstack.go
@@ -1,5 +1,5 @@
//
-// Copyright 2014, Sander van Harmelen
+// Copyright 2016, Sander van Harmelen
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -28,11 +28,25 @@ import (
"io/ioutil"
"net/http"
"net/url"
+ "regexp"
"sort"
"strings"
"time"
)
+// UnlimitedResourceID is a special ID to define an unlimited resource
+const UnlimitedResourceID = "-1"
+
+var idRegex = regexp.MustCompile(`^([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}|-1)$`)
+
+// IsID returns true if the passed ID is either a UUID or the UnlimitedResourceID
+func IsID(id string) bool {
+ return idRegex.MatchString(id)
+}
+
+// OptionFunc can be passed to the courtesy helper functions to set additional parameters
+type OptionFunc func(*CloudStackClient, interface{}) error
+
type CSError struct {
ErrorCode int `json:"errorcode"`
CSErrorCode int `json:"cserrorcode"`
@@ -59,9 +73,9 @@ type CloudStackClient struct {
AffinityGroup *AffinityGroupService
Alert *AlertService
Asyncjob *AsyncjobService
+ Authentication *AuthenticationService
AutoScale *AutoScaleService
Baremetal *BaremetalService
- BigSwitchVNS *BigSwitchVNSService
Certificate *CertificateService
CloudIdentifier *CloudIdentifierService
Cluster *ClusterService
@@ -91,11 +105,11 @@ type CloudStackClient struct {
Pool *PoolService
PortableIP *PortableIPService
Project *ProjectService
+ Quota *QuotaService
Region *RegionService
Resourcemetadata *ResourcemetadataService
Resourcetags *ResourcetagsService
Router *RouterService
- S3 *S3Service
SSH *SSHService
SecurityGroup *SecurityGroupService
ServiceOffering *ServiceOfferingService
@@ -140,9 +154,9 @@ func newClient(apiurl string, apikey string, secret string, async bool, verifyss
cs.AffinityGroup = NewAffinityGroupService(cs)
cs.Alert = NewAlertService(cs)
cs.Asyncjob = NewAsyncjobService(cs)
+ cs.Authentication = NewAuthenticationService(cs)
cs.AutoScale = NewAutoScaleService(cs)
cs.Baremetal = NewBaremetalService(cs)
- cs.BigSwitchVNS = NewBigSwitchVNSService(cs)
cs.Certificate = NewCertificateService(cs)
cs.CloudIdentifier = NewCloudIdentifierService(cs)
cs.Cluster = NewClusterService(cs)
@@ -172,11 +186,11 @@ func newClient(apiurl string, apikey string, secret string, async bool, verifyss
cs.Pool = NewPoolService(cs)
cs.PortableIP = NewPortableIPService(cs)
cs.Project = NewProjectService(cs)
+ cs.Quota = NewQuotaService(cs)
cs.Region = NewRegionService(cs)
cs.Resourcemetadata = NewResourcemetadataService(cs)
cs.Resourcetags = NewResourcetagsService(cs)
cs.Router = NewRouterService(cs)
- cs.S3 = NewS3Service(cs)
cs.SSH = NewSSHService(cs)
cs.SecurityGroup = NewSecurityGroupService(cs)
cs.ServiceOffering = NewServiceOfferingService(cs)
@@ -369,6 +383,34 @@ func getRawValue(b json.RawMessage) (json.RawMessage, error) {
return nil, fmt.Errorf("Unable to extract the raw value from:\n\n%s\n\n", string(b))
}
+// ProjectIDSetter is an interface that every type that can set a project ID must implement
+type ProjectIDSetter interface {
+ SetProjectid(string)
+}
+
+// WithProject takes either a project name or ID and sets the `projectid` parameter
+func WithProject(project string) OptionFunc {
+ return func(cs *CloudStackClient, p interface{}) error {
+ ps, ok := p.(ProjectIDSetter)
+
+ if !ok || project == "" {
+ return nil
+ }
+
+ if !IsID(project) {
+ id, err := cs.Project.GetProjectID(project)
+ if err != nil {
+ return err
+ }
+ project = id
+ }
+
+ ps.SetProjectid(project)
+
+ return nil
+ }
+}
+
type APIDiscoveryService struct {
cs *CloudStackClient
}
@@ -417,6 +459,14 @@ func NewAsyncjobService(cs *CloudStackClient) *AsyncjobService {
return &AsyncjobService{cs: cs}
}
+type AuthenticationService struct {
+ cs *CloudStackClient
+}
+
+func NewAuthenticationService(cs *CloudStackClient) *AuthenticationService {
+ return &AuthenticationService{cs: cs}
+}
+
type AutoScaleService struct {
cs *CloudStackClient
}
@@ -433,14 +483,6 @@ func NewBaremetalService(cs *CloudStackClient) *BaremetalService {
return &BaremetalService{cs: cs}
}
-type BigSwitchVNSService struct {
- cs *CloudStackClient
-}
-
-func NewBigSwitchVNSService(cs *CloudStackClient) *BigSwitchVNSService {
- return &BigSwitchVNSService{cs: cs}
-}
-
type CertificateService struct {
cs *CloudStackClient
}
@@ -673,6 +715,14 @@ func NewProjectService(cs *CloudStackClient) *ProjectService {
return &ProjectService{cs: cs}
}
+type QuotaService struct {
+ cs *CloudStackClient
+}
+
+func NewQuotaService(cs *CloudStackClient) *QuotaService {
+ return &QuotaService{cs: cs}
+}
+
type RegionService struct {
cs *CloudStackClient
}
@@ -705,14 +755,6 @@ func NewRouterService(cs *CloudStackClient) *RouterService {
return &RouterService{cs: cs}
}
-type S3Service struct {
- cs *CloudStackClient
-}
-
-func NewS3Service(cs *CloudStackClient) *S3Service {
- return &S3Service{cs: cs}
-}
-
type SSHService struct {
cs *CloudStackClient
}
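
The cloudstack.go changes above introduce the OptionFunc type and the WithProject helper that the courtesy Get*By* functions now accept, replacing the removed fallback that silently retried lookups with projectid=-1. A minimal usage sketch, assuming a configured client; the project name and VM ID are placeholders:

package cloudstackexample

import (
	"fmt"

	"github.com/xanzy/go-cloudstack/cloudstack"
)

// findProjectVM sketches the new OptionFunc plumbing: WithProject accepts a
// project name or UUID and sets projectid on any params type that implements
// ProjectIDSetter, so a courtesy helper can search inside a specific project.
func findProjectVM(cs *cloudstack.CloudStackClient, vmID, project string) error {
	vm, count, err := cs.VirtualMachine.GetVirtualMachineByID(vmID, cloudstack.WithProject(project))
	if err != nil {
		return fmt.Errorf("lookup failed (%d matches): %v", count, err)
	}
	fmt.Println(vm.Name, vm.Id)
	return nil
}
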
diff --git a/vendor/golang.org/x/oauth2/internal/token.go b/vendor/golang.org/x/oauth2/internal/token.go
index 39caf6c6176c..739a89bfe96a 100644
--- a/vendor/golang.org/x/oauth2/internal/token.go
+++ b/vendor/golang.org/x/oauth2/internal/token.go
@@ -105,6 +105,7 @@ var brokenAuthHeaderProviders = []string{
"https://oauth.sandbox.trainingpeaks.com/",
"https://oauth.trainingpeaks.com/",
"https://oauth.vk.com/",
+ "https://openapi.baidu.com/",
"https://slack.com/",
"https://test-sandbox.auth.corp.google.com",
"https://test.salesforce.com/",
@@ -113,6 +114,8 @@ var brokenAuthHeaderProviders = []string{
"https://www.googleapis.com/",
"https://www.linkedin.com/",
"https://www.strava.com/oauth/",
+ "https://www.wunderlist.com/oauth/",
+ "https://api.patreon.com/",
}
func RegisterBrokenAuthHeaderProvider(tokenURL string) {
diff --git a/vendor/google.golang.org/api/compute/v1/compute-api.json b/vendor/google.golang.org/api/compute/v1/compute-api.json
index 8d5db8003114..84e1500aaa52 100644
--- a/vendor/google.golang.org/api/compute/v1/compute-api.json
+++ b/vendor/google.golang.org/api/compute/v1/compute-api.json
@@ -1,11 +1,11 @@
{
"kind": "discovery#restDescription",
- "etag": "\"bRFOOrZKfO9LweMbPqu0kcu6De8/c55dTQvv4NWDkglZO3_PlmckRzg\"",
+ "etag": "\"bRFOOrZKfO9LweMbPqu0kcu6De8/0HKk2qVFNFj4BfRYktkIsjDiv2o\"",
"discoveryVersion": "v1",
"id": "compute:v1",
"name": "compute",
"version": "v1",
- "revision": "20160120",
+ "revision": "20160302",
"title": "Compute Engine API",
"description": "API for the Google Compute Engine service.",
"ownerDomain": "google.com",
@@ -151,7 +151,7 @@
},
"name": {
"type": "string",
- "description": "Name of the resource; provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression [a-z]([-a-z0-9]*[a-z0-9])? which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.",
+ "description": "Name of the resource. Provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression [a-z]([-a-z0-9]*[a-z0-9])? which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.",
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"annotations": {
"required": [
@@ -222,7 +222,7 @@
"AddressList": {
"id": "AddressList",
"type": "object",
- "description": "Contains a list of address resources.",
+ "description": "Contains a list of addresses.",
"properties": {
"id": {
"type": "string",
@@ -230,7 +230,7 @@
},
"items": {
"type": "array",
- "description": "[Output Only] A list of Address resources.",
+ "description": "[Output Only] A list of addresses.",
"items": {
"$ref": "Address"
}
@@ -269,6 +269,7 @@
"type": "string",
"description": "[Output Only] A warning code, if applicable. For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response.",
"enum": [
+ "CLEANUP_FAILED",
"DEPRECATED_RESOURCE_USED",
"DISK_SIZE_LARGER_THAN_IMAGE_SIZE",
"INJECTED_KERNELS_DEPRECATED",
@@ -298,6 +299,7 @@
"",
"",
"",
+ "",
""
]
},
@@ -309,7 +311,7 @@
"properties": {
"key": {
"type": "string",
- "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource, and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
+ "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
},
"value": {
"type": "string",
@@ -354,7 +356,7 @@
},
"interface": {
"type": "string",
- "description": "Specifies the disk interface to use for attaching this disk, either SCSI or NVME. The default is SCSI. For performance characteristics of SCSI over NVMe, see Local SSD performance.",
+ "description": "Specifies the disk interface to use for attaching this disk, which is either SCSI or NVME. The default is SCSI. Persistent disks must always use SCSI and the request will fail if you attempt to attach a persistent disk in any other format than SCSI. Local SSDs can use either NVME or SCSI. For performance characteristics of SCSI over NVMe, see Local SSD performance.",
"enum": [
"NVME",
"SCSI"
@@ -438,10 +440,11 @@
"Autoscaler": {
"id": "Autoscaler",
"type": "object",
+ "description": "Represents an Autoscaler resource. Autoscalers allow you to automatically scale virtual machine instances in managed instance groups according to an autoscaling policy that you define. For more information, read Autoscaling Groups of Instances.",
"properties": {
"autoscalingPolicy": {
"$ref": "AutoscalingPolicy",
- "description": "Autoscaling configuration."
+ "description": "The configuration parameters for the autoscaling algorithm. You can define one or more of the policies for an autoscaler: cpuUtilization, customMetricUtilizations, and loadBalancingUtilization.\n\nIf none of these are specified, the default will be to autoscale based on cpuUtilization to 0.8 or 80%."
},
"creationTimestamp": {
"type": "string",
@@ -458,7 +461,7 @@
},
"kind": {
"type": "string",
- "description": "Type of the resource.",
+ "description": "[Output Only] Type of the resource. Always compute#autoscaler for autoscalers.",
"default": "compute#autoscaler"
},
"name": {
@@ -477,7 +480,7 @@
},
"target": {
"type": "string",
- "description": "URL of Instance Group Manager or Replica Pool which will be controlled by Autoscaler."
+ "description": "URL of the managed instance group that this autoscaler will scale."
},
"zone": {
"type": "string",
@@ -498,12 +501,12 @@
"description": "A map of scoped autoscaler lists.",
"additionalProperties": {
"$ref": "AutoscalersScopedList",
- "description": "Name of the scope containing this set of autoscalers."
+ "description": "[Output Only] Name of the scope containing this set of autoscalers."
}
},
"kind": {
"type": "string",
- "description": "Type of resource.",
+ "description": "[Output Only] Type of resource. Always compute#autoscalerAggregatedList for aggregated lists of autoscalers.",
"default": "compute#autoscalerAggregatedList"
},
"nextPageToken": {
@@ -519,7 +522,7 @@
"AutoscalerList": {
"id": "AutoscalerList",
"type": "object",
- "description": "Contains a list of persistent autoscaler resources.",
+ "description": "Contains a list of Autoscaler resources.",
"properties": {
"id": {
"type": "string",
@@ -534,7 +537,7 @@
},
"kind": {
"type": "string",
- "description": "Type of resource.",
+ "description": "[Output Only] Type of resource. Always compute#autoscalerList for lists of autoscalers.",
"default": "compute#autoscalerList"
},
"nextPageToken": {
@@ -553,19 +556,20 @@
"properties": {
"autoscalers": {
"type": "array",
- "description": "List of autoscalers contained in this scope.",
+ "description": "[Output Only] List of autoscalers contained in this scope.",
"items": {
"$ref": "Autoscaler"
}
},
"warning": {
"type": "object",
- "description": "Informational warning which replaces the list of autoscalers when the list is empty.",
+ "description": "[Output Only] Informational warning which replaces the list of autoscalers when the list is empty.",
"properties": {
"code": {
"type": "string",
"description": "[Output Only] A warning code, if applicable. For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response.",
"enum": [
+ "CLEANUP_FAILED",
"DEPRECATED_RESOURCE_USED",
"DISK_SIZE_LARGER_THAN_IMAGE_SIZE",
"INJECTED_KERNELS_DEPRECATED",
@@ -595,6 +599,7 @@
"",
"",
"",
+ "",
""
]
},
@@ -606,7 +611,7 @@
"properties": {
"key": {
"type": "string",
- "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource, and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
+ "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
},
"value": {
"type": "string",
@@ -630,16 +635,16 @@
"properties": {
"coolDownPeriodSec": {
"type": "integer",
- "description": "The number of seconds that the Autoscaler should wait between two succeeding changes to the number of virtual machines. You should define an interval that is at least as long as the initialization time of a virtual machine and the time it may take for replica pool to create the virtual machine. The default is 60 seconds.",
+ "description": "The number of seconds that the autoscaler should wait before it starts collecting information from a new instance. This prevents the autoscaler from collecting information when the instance is initializing, during which the collected usage would not be reliable. The default time autoscaler waits is 60 seconds.\n\nVirtual machine initialization times might vary because of numerous factors. We recommend that you test how long an instance may take to initialize. To do this, create an instance and time the startup process.",
"format": "int32"
},
"cpuUtilization": {
"$ref": "AutoscalingPolicyCpuUtilization",
- "description": "TODO(jbartosik): Add support for scaling based on muliple utilization metrics (take max recommendation). Exactly one utilization policy should be provided. Configuration parameters of CPU based autoscaling policy."
+ "description": "Defines the CPU utilization policy that allows the autoscaler to scale based on the average CPU utilization of a managed instance group."
},
"customMetricUtilizations": {
"type": "array",
- "description": "Configuration parameters of autoscaling based on custom metric.",
+ "description": "Configuration parameters of autoscaling based on a custom metric.",
"items": {
"$ref": "AutoscalingPolicyCustomMetricUtilization"
}
@@ -650,12 +655,12 @@
},
"maxNumReplicas": {
"type": "integer",
- "description": "The maximum number of replicas that the Autoscaler can scale up to. This field is required for config to be effective. Maximum number of replicas should be not lower than minimal number of replicas. Absolute limit for this value is defined in Autoscaler backend.",
+ "description": "The maximum number of instances that the autoscaler can scale up to. This is required when creating or updating an autoscaler. The maximum number of replicas should not be lower than minimal number of replicas.",
"format": "int32"
},
"minNumReplicas": {
"type": "integer",
- "description": "The minimum number of replicas that the Autoscaler can scale down to. Can't be less than 0. If not provided Autoscaler will choose default value depending on maximal number of replicas.",
+ "description": "The minimum number of replicas that the autoscaler can scale down to. This cannot be less than 0. If not provided, autoscaler will choose a default value depending on maximum number of instances allowed.",
"format": "int32"
}
}
@@ -667,7 +672,7 @@
"properties": {
"utilizationTarget": {
"type": "number",
- "description": "The target utilization that the Autoscaler should maintain. It is represented as a fraction of used cores. For example: 6 cores used in 8-core VM are represented here as 0.75. Must be a float value between (0, 1]. If not defined, the default is 0.8.",
+ "description": "The target CPU utilization that the autoscaler should maintain. Must be a float value in the range (0, 1]. If not specified, the default is 0.8.\n\nIf the CPU level is below the target utilization, the autoscaler scales down the number of instances until it reaches the minimum number of instances you specified or until the average CPU of your instances reaches the target utilization.\n\nIf the average CPU is above the target utilization, the autoscaler scales up until it reaches the maximum number of instances you specified or until the average utilization reaches the target utilization.",
"format": "double"
}
}
@@ -679,16 +684,16 @@
"properties": {
"metric": {
"type": "string",
- "description": "Identifier of the metric. It should be a Cloud Monitoring metric. The metric can not have negative values. The metric should be an utilization metric (increasing number of VMs handling requests x times should reduce average value of the metric roughly x times). For example you could use: compute.googleapis.com/instance/network/received_bytes_count."
+ "description": "The identifier of the Cloud Monitoring metric. The metric cannot have negative values and should be a utilization metric, which means that the number of virtual machines handling requests should increase or decrease proportionally to the metric. The metric must also have a label of compute.googleapis.com/resource_id with the value of the instance's unique ID, although this alone does not guarantee that the metric is valid.\n\nFor example, the following is a valid metric:\ncompute.googleapis.com/instance/network/received_bytes_count\n\n\nThe following is not a valid metric because it does not increase or decrease based on usage:\ncompute.googleapis.com/instance/cpu/reserved_cores"
},
"utilizationTarget": {
"type": "number",
- "description": "Target value of the metric which Autoscaler should maintain. Must be a positive value.",
+ "description": "Target value of the metric which autoscaler should maintain. Must be a positive value.",
"format": "double"
},
"utilizationTargetType": {
"type": "string",
- "description": "Defines type in which utilization_target is expressed.",
+ "description": "Defines how target utilization value is expressed for a Cloud Monitoring metric. Either GAUGE, DELTA_PER_SECOND, or DELTA_PER_MINUTE. If not specified, the default is GAUGE.",
"enum": [
"DELTA_PER_MINUTE",
"DELTA_PER_SECOND",
@@ -705,11 +710,11 @@
"AutoscalingPolicyLoadBalancingUtilization": {
"id": "AutoscalingPolicyLoadBalancingUtilization",
"type": "object",
- "description": "Load balancing utilization policy.",
+ "description": "Configuration parameters of autoscaling based on load balancing.",
"properties": {
"utilizationTarget": {
"type": "number",
- "description": "Fraction of backend capacity utilization (set in HTTP load balancing configuration) that Autoscaler should maintain. Must be a positive float value. If not defined, the default is 0.8. For example if your maxRatePerInstance capacity (in HTTP Load Balancing configuration) is set at 10 and you would like to keep number of instances such that each instance receives 7 QPS on average, set this to 0.7.",
+ "description": "Fraction of backend capacity utilization (set in HTTP(s) load balancing configuration) that autoscaler should maintain. Must be a positive float value. If not defined, the default is 0.8.",
"format": "double"
}
}
@@ -721,7 +726,7 @@
"properties": {
"balancingMode": {
"type": "string",
- "description": "Specifies the balancing mode for this backend. The default is UTILIZATION but available values are UTILIZATION and RATE.",
+ "description": "Specifies the balancing mode for this backend. For global HTTP(S) load balancing, the default is UTILIZATION. Valid values are UTILIZATION and RATE.",
"enum": [
"RATE",
"UTILIZATION"
@@ -746,12 +751,12 @@
},
"maxRate": {
"type": "integer",
- "description": "The max requests per second (RPS) of the group. Can be used with either balancing mode, but required if RATE mode. For RATE mode, either maxRate or maxRatePerInstance must be set.",
+ "description": "The max requests per second (RPS) of the group. Can be used with either RATE or UTILIZATION balancing modes, but required if RATE mode. For RATE mode, either maxRate or maxRatePerInstance must be set.",
"format": "int32"
},
"maxRatePerInstance": {
"type": "number",
- "description": "The max requests per second (RPS) that a single backed instance can handle. This is used to calculate the capacity of the group. Can be used in either balancing mode. For RATE mode, either maxRate or maxRatePerInstance must be set.",
+ "description": "The max requests per second (RPS) that a single backend instance can handle.This is used to calculate the capacity of the group. Can be used in either balancing mode. For RATE mode, either maxRate or maxRatePerInstance must be set.",
"format": "float"
},
"maxUtilization": {
@@ -815,10 +820,11 @@
},
"portName": {
"type": "string",
- "description": "Name of backend port. The same name should appear in the resource views referenced by this service. Required."
+ "description": "Name of backend port. The same name should appear in the instance groups referenced by this service. Required."
},
"protocol": {
"type": "string",
+ "description": "The protocol this BackendService uses to communicate with backends.\n\nPossible values are HTTP, HTTPS, HTTP2, TCP and SSL.",
"enum": [
"HTTP",
"HTTPS"
@@ -834,7 +840,7 @@
},
"timeoutSec": {
"type": "integer",
- "description": "How many seconds to wait for the backend before considering it a failed request. Default is 30 seconds. Valid range is [1, 86400].",
+ "description": "How many seconds to wait for the backend before considering it a failed request. Default is 30 seconds.",
"format": "int32"
}
}
@@ -957,14 +963,14 @@
},
"licenses": {
"type": "array",
- "description": "Any applicable publicly visible licenses.",
+ "description": "[Output Only] Any applicable publicly visible licenses.",
"items": {
"type": "string"
}
},
"name": {
"type": "string",
- "description": "Name of the resource; provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression [a-z]([-a-z0-9]*[a-z0-9])? which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.",
+ "description": "Name of the resource. Provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression [a-z]([-a-z0-9]*[a-z0-9])? which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.",
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"annotations": {
"required": [
@@ -987,11 +993,11 @@
},
"sourceImage": {
"type": "string",
- "description": "The source image used to create this disk. If the source image is deleted from the system, this field will not be set, even if an image with the same name has been re-created.\n\nWhen creating a disk, you can provide a private (custom) image using the following input, and Compute Engine will use the corresponding image from your project. For example:\n\nglobal/images/my-private-image \n\nOr you can provide an image from a publicly-available project. For example, to use a Debian image from the debian-cloud project, make sure to include the project in the URL:\n\nprojects/debian-cloud/global/images/debian-7-wheezy-vYYYYMMDD \n\nwhere vYYYYMMDD is the image version. The fully-qualified URL will also work in both cases."
+ "description": "The source image used to create this disk. If the source image is deleted from the system, this field will not be set, even if an image with the same name has been re-created.\n\nWhen creating a disk, you can provide a private (custom) image using the following input, and Compute Engine will use the corresponding image from your project. For example:\n\nglobal/images/my-private-image \n\nOr you can provide an image from a publicly-available project. For example, to use a Debian image from the debian-cloud project, make sure to include the project in the URL:\n\nprojects/debian-cloud/global/images/debian-7-wheezy-vYYYYMMDD \n\nwhere vYYYYMMDD is the image version. The fully-qualified URL will also work in both cases.\n\nYou can also specify the latest image for a private image family by replacing the image name suffix with family/family-name. For example:\n\nglobal/images/family/my-private-family \n\nOr you can specify an image family from a publicly-available project. For example, to use the latest Debian 7 from the debian-cloud project, make sure to include the project in the URL:\n\nprojects/debian-cloud/global/images/family/debian-7"
},
"sourceImageId": {
"type": "string",
- "description": "The ID value of the image used to create this disk. This value identifies the exact image that was used to create this persistent disk. For example, if you created the persistent disk from an image that was later deleted and recreated under the same name, the source image ID would identify the exact version of the image that was used."
+ "description": "[Output Only] The ID value of the image used to create this disk. This value identifies the exact image that was used to create this persistent disk. For example, if you created the persistent disk from an image that was later deleted and recreated under the same name, the source image ID would identify the exact version of the image that was used."
},
"sourceSnapshot": {
"type": "string",
@@ -1019,11 +1025,11 @@
},
"type": {
"type": "string",
- "description": "URL of the disk type resource describing which disk type to use to create the disk; provided by the client when the disk is created."
+ "description": "URL of the disk type resource describing which disk type to use to create the disk. Provide this when creating the disk."
},
"users": {
"type": "array",
- "description": "Links to the users of the disk (attached instances) in form: project/zones/zone/instances/instance",
+ "description": "[Output Only] Links to the users of the disk (attached instances) in form: project/zones/zone/instances/instance",
"items": {
"type": "string"
}
@@ -1102,7 +1108,7 @@
"properties": {
"destinationZone": {
"type": "string",
- "description": "The URL of the destination zone to move the disk to. This can be a full or partial URL. For example, the following are all valid URLs to a zone: \n- https://www.googleapis.com/compute/v1/projects/project/zones/zone \n- projects/project/zones/zone \n- zones/zone"
+ "description": "The URL of the destination zone to move the disk. This can be a full or partial URL. For example, the following are all valid URLs to a zone: \n- https://www.googleapis.com/compute/v1/projects/project/zones/zone \n- projects/project/zones/zone \n- zones/zone"
},
"targetDisk": {
"type": "string",
@@ -1113,7 +1119,7 @@
"DiskType": {
"id": "DiskType",
"type": "object",
- "description": "A disk type resource.",
+ "description": "A DiskType resource.",
"properties": {
"creationTimestamp": {
"type": "string",
@@ -1195,7 +1201,7 @@
"DiskTypeList": {
"id": "DiskTypeList",
"type": "object",
- "description": "Contains a list of disk type resources.",
+ "description": "Contains a list of disk types.",
"properties": {
"id": {
"type": "string",
@@ -1242,6 +1248,7 @@
"type": "string",
"description": "[Output Only] A warning code, if applicable. For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response.",
"enum": [
+ "CLEANUP_FAILED",
"DEPRECATED_RESOURCE_USED",
"DISK_SIZE_LARGER_THAN_IMAGE_SIZE",
"INJECTED_KERNELS_DEPRECATED",
@@ -1271,6 +1278,7 @@
"",
"",
"",
+ "",
""
]
},
@@ -1282,7 +1290,7 @@
"properties": {
"key": {
"type": "string",
- "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource, and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
+ "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
},
"value": {
"type": "string",
@@ -1318,6 +1326,7 @@
"type": "string",
"description": "[Output Only] A warning code, if applicable. For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response.",
"enum": [
+ "CLEANUP_FAILED",
"DEPRECATED_RESOURCE_USED",
"DISK_SIZE_LARGER_THAN_IMAGE_SIZE",
"INJECTED_KERNELS_DEPRECATED",
@@ -1347,6 +1356,7 @@
"",
"",
"",
+ "",
""
]
},
@@ -1358,7 +1368,7 @@
"properties": {
"key": {
"type": "string",
- "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource, and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
+ "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
},
"value": {
"type": "string",
@@ -1378,7 +1388,7 @@
"Firewall": {
"id": "Firewall",
"type": "object",
- "description": "A Firewall resource.",
+ "description": "Represents a Firewall resource.",
"properties": {
"allowed": {
"type": "array",
@@ -1388,7 +1398,7 @@
"properties": {
"IPProtocol": {
"type": "string",
- "description": "The IP protocol that is allowed for this rule. The protocol type is required when creating a firewall. This value can either be one of the following well known protocol strings (tcp, udp, icmp, esp, ah, sctp), or the IP protocol number."
+ "description": "The IP protocol that is allowed for this rule. The protocol type is required when creating a firewall rule. This value can either be one of the following well known protocol strings (tcp, udp, icmp, esp, ah, sctp), or the IP protocol number."
},
"ports": {
"type": "array",
@@ -1463,7 +1473,7 @@
"FirewallList": {
"id": "FirewallList",
"type": "object",
- "description": "Contains a list of Firewall resources.",
+ "description": "Contains a list of firewalls.",
"properties": {
"id": {
"type": "string",
@@ -1577,7 +1587,7 @@
},
"kind": {
"type": "string",
- "description": "Type of resource.",
+ "description": "[Output Only] Type of resource. Always compute#forwardingRuleAggregatedList for lists of forwarding rules.",
"default": "compute#forwardingRuleAggregatedList"
},
"nextPageToken": {
@@ -1640,6 +1650,7 @@
"type": "string",
"description": "[Output Only] A warning code, if applicable. For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response.",
"enum": [
+ "CLEANUP_FAILED",
"DEPRECATED_RESOURCE_USED",
"DISK_SIZE_LARGER_THAN_IMAGE_SIZE",
"INJECTED_KERNELS_DEPRECATED",
@@ -1669,6 +1680,7 @@
"",
"",
"",
+ "",
""
]
},
@@ -1680,7 +1692,7 @@
"properties": {
"key": {
"type": "string",
- "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource, and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
+ "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
},
"value": {
"type": "string",
@@ -1700,6 +1712,7 @@
"HealthCheckReference": {
"id": "HealthCheckReference",
"type": "object",
+ "description": "A full or valid partial URL to a health check. For example, the following are valid URLs: \n- https://www.googleapis.com/compute/beta/projects/project-id/global/httpHealthChecks/health-check \n- projects/project-id/global/httpHealthChecks/health-check \n- global/httpHealthChecks/health-check",
"properties": {
"healthCheck": {
"type": "string"
@@ -1793,7 +1806,7 @@
},
"kind": {
"type": "string",
- "description": "Type of the resource.",
+ "description": "[Output Only] Type of the resource. Always compute#httpHealthCheck for HTTP health checks.",
"default": "compute#httpHealthCheck"
},
"name": {
@@ -1808,7 +1821,7 @@
},
"requestPath": {
"type": "string",
- "description": "The request path of the HTTP health check request. The default value is \"/\"."
+ "description": "The request path of the HTTP health check request. The default value is /."
},
"selfLink": {
"type": "string",
@@ -2081,7 +2094,7 @@
"ImageList": {
"id": "ImageList",
"type": "object",
- "description": "Contains a list of Image resources.",
+ "description": "Contains a list of images.",
"properties": {
"id": {
"type": "string",
@@ -2149,7 +2162,7 @@
},
"machineType": {
"type": "string",
- "description": "Full or partial URL of the machine type resource to use for this instance, in the format: zones/zone/machineTypes/ machine-type. This is provided by the client when the instance is created. For example, the following is a valid partial url to a predefined machine type:\n\nzones/us-central1-f/machineTypes/n1-standard-1 \n\nTo create a custom machine type, provide a URL to a machine type in the following format, where CPUS is 1 or an even number up to 32 (2, 4, 6, ... 24, etc), and MEMORY is the total memory for this instance. Memory must be a multiple of 256 MB and must be supplied in MB (e.g. 5 GB of memory is 5120 MB):\n\nzones/zone/machineTypes/custom-CPUS-MEMORY \n\nFor example: zones/us-central1-f/machineTypes/custom-4-5120 \n\nFor a full list of restrictions, read the Specifications for custom machine types.",
+ "description": "Full or partial URL of the machine type resource to use for this instance, in the format: zones/zone/machineTypes/machine-type. This is provided by the client when the instance is created. For example, the following is a valid partial url to a predefined machine type:\n\nzones/us-central1-f/machineTypes/n1-standard-1 \n\nTo create a custom machine type, provide a URL to a machine type in the following format, where CPUS is 1 or an even number up to 32 (2, 4, 6, ... 24, etc), and MEMORY is the total memory for this instance. Memory must be a multiple of 256 MB and must be supplied in MB (e.g. 5 GB of memory is 5120 MB):\n\nzones/zone/machineTypes/custom-CPUS-MEMORY \n\nFor example: zones/us-central1-f/machineTypes/custom-4-5120 \n\nFor a full list of restrictions, read the Specifications for custom machine types.",
"annotations": {
"required": [
"compute.instances.insert"
@@ -2216,7 +2229,7 @@
},
"tags": {
"$ref": "Tags",
- "description": "A list of tags to appy to this instance. Tags are used to identify valid sources or targets for network firewalls and are specified by the client during instance creation. The tags can be later modified by the setTags method. Each tag within the list must comply with RFC1035."
+ "description": "A list of tags to apply to this instance. Tags are used to identify valid sources or targets for network firewalls and are specified by the client during instance creation. The tags can be later modified by the setTags method. Each tag within the list must comply with RFC1035."
},
"zone": {
"type": "string",
@@ -2237,7 +2250,7 @@
"description": "[Output Only] A map of scoped instance lists.",
"additionalProperties": {
"$ref": "InstancesScopedList",
- "description": "Name of the scope containing this set of instances."
+ "description": "[Output Only] Name of the scope containing this set of instances."
}
},
"kind": {
@@ -2301,7 +2314,7 @@
},
"network": {
"type": "string",
- "description": "[Output Only] The URL of the network to which all instances in the instance group belong."
+ "description": "The URL of the network to which all instances in the instance group belong."
},
"selfLink": {
"type": "string",
@@ -2314,7 +2327,7 @@
},
"subnetwork": {
"type": "string",
- "description": "[Output Only] The URL of the subnetwork to which all instances in the instance group belong."
+ "description": "The URL of the subnetwork to which all instances in the instance group belong."
},
"zone": {
"type": "string",
@@ -2387,7 +2400,6 @@
"InstanceGroupManager": {
"id": "InstanceGroupManager",
"type": "object",
- "description": "InstanceGroupManagers\n\nNext available tag: 20",
"properties": {
"baseInstanceName": {
"type": "string",
@@ -2657,6 +2669,7 @@
"type": "string",
"description": "[Output Only] A warning code, if applicable. For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response.",
"enum": [
+ "CLEANUP_FAILED",
"DEPRECATED_RESOURCE_USED",
"DISK_SIZE_LARGER_THAN_IMAGE_SIZE",
"INJECTED_KERNELS_DEPRECATED",
@@ -2686,6 +2699,7 @@
"",
"",
"",
+ "",
""
]
},
@@ -2697,7 +2711,7 @@
"properties": {
"key": {
"type": "string",
- "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource, and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
+ "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
},
"value": {
"type": "string",
@@ -2835,6 +2849,7 @@
"type": "string",
"description": "[Output Only] A warning code, if applicable. For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response.",
"enum": [
+ "CLEANUP_FAILED",
"DEPRECATED_RESOURCE_USED",
"DISK_SIZE_LARGER_THAN_IMAGE_SIZE",
"INJECTED_KERNELS_DEPRECATED",
@@ -2864,6 +2879,7 @@
"",
"",
"",
+ "",
""
]
},
@@ -2875,7 +2891,7 @@
"properties": {
"key": {
"type": "string",
- "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource, and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
+ "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
},
"value": {
"type": "string",
@@ -2913,7 +2929,7 @@
"InstanceList": {
"id": "InstanceList",
"type": "object",
- "description": "Contains a list of instance resources.",
+ "description": "Contains a list of instances.",
"properties": {
"id": {
"type": "string",
@@ -2921,7 +2937,7 @@
},
"items": {
"type": "array",
- "description": "[Output Only] A list of Instance resources.",
+ "description": "[Output Only] A list of instances.",
"items": {
"$ref": "Instance"
}
@@ -2947,7 +2963,7 @@
"properties": {
"destinationZone": {
"type": "string",
- "description": "The URL of the destination zone to move the instance to. This can be a full or partial URL. For example, the following are all valid URLs to a zone: \n- https://www.googleapis.com/compute/v1/projects/project/zones/zone \n- projects/project/zones/zone \n- zones/zone"
+ "description": "The URL of the destination zone to move the instance. This can be a full or partial URL. For example, the following are all valid URLs to a zone: \n- https://www.googleapis.com/compute/v1/projects/project/zones/zone \n- projects/project/zones/zone \n- zones/zone"
},
"targetInstance": {
"type": "string",
@@ -3156,6 +3172,7 @@
"type": "string",
"description": "[Output Only] A warning code, if applicable. For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response.",
"enum": [
+ "CLEANUP_FAILED",
"DEPRECATED_RESOURCE_USED",
"DISK_SIZE_LARGER_THAN_IMAGE_SIZE",
"INJECTED_KERNELS_DEPRECATED",
@@ -3185,6 +3202,7 @@
"",
"",
"",
+ "",
""
]
},
@@ -3196,7 +3214,7 @@
"properties": {
"key": {
"type": "string",
- "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource, and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
+ "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
},
"value": {
"type": "string",
@@ -3368,7 +3386,7 @@
"MachineTypeList": {
"id": "MachineTypeList",
"type": "object",
- "description": "Contains a list of Machine Type resources.",
+ "description": "Contains a list of machine types.",
"properties": {
"id": {
"type": "string",
@@ -3415,6 +3433,7 @@
"type": "string",
"description": "[Output Only] A warning code, if applicable. For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response.",
"enum": [
+ "CLEANUP_FAILED",
"DEPRECATED_RESOURCE_USED",
"DISK_SIZE_LARGER_THAN_IMAGE_SIZE",
"INJECTED_KERNELS_DEPRECATED",
@@ -3444,6 +3463,7 @@
"",
"",
"",
+ "",
""
]
},
@@ -3455,7 +3475,7 @@
"properties": {
"key": {
"type": "string",
- "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource, and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
+ "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
},
"value": {
"type": "string",
@@ -3557,7 +3577,7 @@
},
"location": {
"type": "string",
- "description": "[Output Only] Indicates the field in the request which caused the error. This property is optional."
+ "description": "[Output Only] Indicates the field in the request that caused the error. This property is optional."
},
"message": {
"type": "string",
@@ -3636,7 +3656,7 @@
"Network": {
"id": "Network",
"type": "object",
- "description": "A network resource.",
+ "description": "Represents a Network resource. Read Networks and Firewalls for more information.",
"properties": {
"IPv4Range": {
"type": "string",
@@ -3731,7 +3751,7 @@
"NetworkList": {
"id": "NetworkList",
"type": "object",
- "description": "Contains a list of Network resources.",
+ "description": "Contains a list of networks.",
"properties": {
"id": {
"type": "string",
@@ -3766,7 +3786,7 @@
"properties": {
"clientOperationId": {
"type": "string",
- "description": "[Output Only] A unique client ID generated by the server."
+ "description": "[Output Only] Reserved for future use."
},
"creationTimestamp": {
"type": "string",
@@ -3796,7 +3816,7 @@
},
"location": {
"type": "string",
- "description": "[Output Only] Indicates the field in the request which caused the error. This property is optional."
+ "description": "[Output Only] Indicates the field in the request that caused the error. This property is optional."
},
"message": {
"type": "string",
@@ -3827,7 +3847,7 @@
},
"kind": {
"type": "string",
- "description": "[Output Only] Type of the resource. Always compute#operation for Operation resources.",
+ "description": "[Output Only] Type of the resource. Always compute#operation for operation resources.",
"default": "compute#operation"
},
"name": {
@@ -3836,7 +3856,7 @@
},
"operationType": {
"type": "string",
- "description": "[Output Only] The type of operation, which can be insert, update, or delete."
+ "description": "[Output Only] The type of operation, such as insert, update, or delete, and so on."
},
"progress": {
"type": "integer",
@@ -3845,7 +3865,7 @@
},
"region": {
"type": "string",
- "description": "[Output Only] URL of the region where the operation resides. Only available when performing regional operations."
+ "description": "[Output Only] The URL of the region where the operation resides. Only available when performing regional operations."
},
"selfLink": {
"type": "string",
@@ -3880,7 +3900,7 @@
},
"targetLink": {
"type": "string",
- "description": "[Output Only] The URL of the resource that the operation is modifying."
+ "description": "[Output Only] The URL of the resource that the operation modifies."
},
"user": {
"type": "string",
@@ -3896,6 +3916,7 @@
"type": "string",
"description": "[Output Only] A warning code, if applicable. For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response.",
"enum": [
+ "CLEANUP_FAILED",
"DEPRECATED_RESOURCE_USED",
"DISK_SIZE_LARGER_THAN_IMAGE_SIZE",
"INJECTED_KERNELS_DEPRECATED",
@@ -3925,6 +3946,7 @@
"",
"",
"",
+ "",
""
]
},
@@ -3936,7 +3958,7 @@
"properties": {
"key": {
"type": "string",
- "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource, and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
+ "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
},
"value": {
"type": "string",
@@ -3954,7 +3976,7 @@
},
"zone": {
"type": "string",
- "description": "[Output Only] URL of the zone where the operation resides. Only available when performing per-zone operations."
+ "description": "[Output Only] The URL of the zone where the operation resides. Only available when performing per-zone operations."
}
}
},
@@ -4000,7 +4022,7 @@
},
"items": {
"type": "array",
- "description": "[Output Only] The Operation resources.",
+ "description": "[Output Only] A list of Operation resources.",
"items": {
"$ref": "Operation"
}
@@ -4039,6 +4061,7 @@
"type": "string",
"description": "[Output Only] A warning code, if applicable. For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response.",
"enum": [
+ "CLEANUP_FAILED",
"DEPRECATED_RESOURCE_USED",
"DISK_SIZE_LARGER_THAN_IMAGE_SIZE",
"INJECTED_KERNELS_DEPRECATED",
@@ -4068,6 +4091,7 @@
"",
"",
"",
+ "",
""
]
},
@@ -4079,7 +4103,7 @@
"properties": {
"key": {
"type": "string",
- "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource, and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
+ "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
},
"value": {
"type": "string",
@@ -4143,7 +4167,7 @@
"Project": {
"id": "Project",
"type": "object",
- "description": "A Project resource. Projects can only be created in the Google Developers Console. Unless marked otherwise, values can only be modified in the console.",
+ "description": "A Project resource. Projects can only be created in the Google Cloud Platform Console. Unless marked otherwise, values can only be modified in the console.",
"properties": {
"commonInstanceMetadata": {
"$ref": "Metadata",
@@ -4383,7 +4407,7 @@
"Route": {
"id": "Route",
"type": "object",
- "description": "The route resource. A Route is a rule that specifies how certain packets should be handled by the virtual network. Routes are associated with instances by tags and the set of Routes for a particular instance is called its routing table. For each packet leaving a instance, the system searches that instance's routing table for a single best matching Route. Routes match packets by destination IP address, preferring smaller or more specific ranges over larger ones. If there is a tie, the system selects the Route with the smallest priority value. If there is still a tie, it uses the layer three and four packet headers to select just one of the remaining matching Routes. The packet is then forwarded as specified by the nextHop field of the winning Route -- either to another instance destination, a instance gateway or a Google Compute Engien-operated gateway. Packets that do not match any Route in the sending instance's routing table are dropped.",
+ "description": "Represents a Route resource. A route specifies how certain packets should be handled by the network. Routes are associated with instances by tags and the set of routes for a particular instance is called its routing table.\n\nFor each packet leaving a instance, the system searches that instance's routing table for a single best matching route. Routes match packets by destination IP address, preferring smaller or more specific ranges over larger ones. If there is a tie, the system selects the route with the smallest priority value. If there is still a tie, it uses the layer three and four packet headers to select just one of the remaining matching routes. The packet is then forwarded as specified by the nextHop field of the winning route - either to another instance destination, a instance gateway or a Google Compute Engine-operated gateway.\n\nPackets that do not match any route in the sending instance's routing table are dropped.",
"properties": {
"creationTimestamp": {
"type": "string",
@@ -4414,7 +4438,7 @@
},
"name": {
"type": "string",
- "description": "Name of the resource; provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression [a-z]([-a-z0-9]*[a-z0-9])? which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.",
+ "description": "Name of the resource. Provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression [a-z]([-a-z0-9]*[a-z0-9])? which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.",
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"annotations": {
"required": [
@@ -4487,6 +4511,7 @@
"type": "string",
"description": "[Output Only] A warning code, if applicable. For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response.",
"enum": [
+ "CLEANUP_FAILED",
"DEPRECATED_RESOURCE_USED",
"DISK_SIZE_LARGER_THAN_IMAGE_SIZE",
"INJECTED_KERNELS_DEPRECATED",
@@ -4516,6 +4541,7 @@
"",
"",
"",
+ "",
""
]
},
@@ -4527,7 +4553,7 @@
"properties": {
"key": {
"type": "string",
- "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource, and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
+ "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
},
"value": {
"type": "string",
@@ -4548,7 +4574,7 @@
"RouteList": {
"id": "RouteList",
"type": "object",
- "description": "Contains a list of route resources.",
+ "description": "Contains a list of Route resources.",
"properties": {
"id": {
"type": "string",
@@ -4587,7 +4613,7 @@
},
"onHostMaintenance": {
"type": "string",
- "description": "Defines the maintenance behavior for this instance. For standard instances, the default behavior is MIGRATE. For preemptible instances, the default and only possible behavior is TERMINATE. For more information, see Setting maintenance behavior.",
+ "description": "Defines the maintenance behavior for this instance. For standard instances, the default behavior is MIGRATE. For preemptible instances, the default and only possible behavior is TERMINATE. For more information, see Setting Instance Scheduling Options.",
"enum": [
"MIGRATE",
"TERMINATE"
@@ -4671,7 +4697,7 @@
},
"licenses": {
"type": "array",
- "description": "Public visible licenses.",
+ "description": "[Output Only] A list of public visible licenses that apply to this snapshot. This can be because the original image had licenses attached (such as a Windows image).",
"items": {
"type": "string"
}
@@ -4695,7 +4721,7 @@
},
"status": {
"type": "string",
- "description": "[Output Only] The status of the snapshot.",
+ "description": "[Output Only] The status of the snapshot. This can be CREATING, DELETING, FAILED, READY, or UPLOADING.",
"enum": [
"CREATING",
"DELETING",
@@ -4718,7 +4744,7 @@
},
"storageBytesStatus": {
"type": "string",
- "description": "[Output Only] An indicator whether storageBytes is in a stable state or it is being adjusted as a result of shared storage reallocation.",
+ "description": "[Output Only] An indicator whether storageBytes is in a stable state or it is being adjusted as a result of shared storage reallocation. This status can either be UPDATING, meaning the size of the snapshot is being updated, or UP_TO_DATE, meaning the size of the snapshot is up-to-date.",
"enum": [
"UPDATING",
"UP_TO_DATE"
@@ -4965,6 +4991,7 @@
"type": "string",
"description": "[Output Only] A warning code, if applicable. For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response.",
"enum": [
+ "CLEANUP_FAILED",
"DEPRECATED_RESOURCE_USED",
"DISK_SIZE_LARGER_THAN_IMAGE_SIZE",
"INJECTED_KERNELS_DEPRECATED",
@@ -4994,6 +5021,7 @@
"",
"",
"",
+ "",
""
]
},
@@ -5005,7 +5033,7 @@
"properties": {
"key": {
"type": "string",
- "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource, and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
+ "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
},
"value": {
"type": "string",
@@ -5066,7 +5094,7 @@
},
"name": {
"type": "string",
- "description": "Name of the resource; provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression [a-z]([-a-z0-9]*[a-z0-9])? which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.",
+ "description": "Name of the resource. Provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression [a-z]([-a-z0-9]*[a-z0-9])? which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.",
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?"
},
"selfLink": {
@@ -5097,7 +5125,7 @@
},
"kind": {
"type": "string",
- "description": "Type of resource. Always compute#targetHttpProxyList for lists of Target HTTP proxies.",
+ "description": "Type of resource. Always compute#targetHttpProxyList for lists of target HTTP proxies.",
"default": "compute#targetHttpProxyList"
},
"nextPageToken": {
@@ -5116,7 +5144,7 @@
"properties": {
"sslCertificates": {
"type": "array",
- "description": "New set of URLs to SslCertificate resources to associate with this TargetHttpProxy. Currently exactly one ssl certificate must be specified.",
+ "description": "New set of SslCertificate resources to associate with this TargetHttpsProxy resource. Currently exactly one SslCertificate resource must be specified.",
"items": {
"type": "string"
}
@@ -5143,7 +5171,7 @@
},
"kind": {
"type": "string",
- "description": "[Output Only] Type of the resource. Always compute#targetHttpsProxy for target HTTPS proxies.",
+ "description": "[Output Only] Type of resource. Always compute#targetHttpsProxy for target HTTPS proxies.",
"default": "compute#targetHttpsProxy"
},
"name": {
@@ -5157,14 +5185,14 @@
},
"sslCertificates": {
"type": "array",
- "description": "URLs to SslCertificate resources that are used to authenticate connections between users and the load balancer. Currently exactly one SSL certificate must be specified.",
+ "description": "URLs to SslCertificate resources that are used to authenticate connections between users and the load balancer. Currently, exactly one SSL certificate must be specified.",
"items": {
"type": "string"
}
},
"urlMap": {
"type": "string",
- "description": "URL to the UrlMap resource that defines the mapping from URL to the BackendService."
+ "description": "A fully-qualified or valid partial URL to the UrlMap resource that defines the mapping from URL to the BackendService. For example, the following are all valid URLs for specifying a URL map: \n- https://www.googleapis.compute/v1/projects/project/global/urlMaps/url-map \n- projects/project/global/urlMaps/url-map \n- global/urlMaps/url-map"
}
}
},
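
As an illustration of how the `sslCertificates` and `urlMap` fields above are typically set, here is a hedged sketch against the generated Go client (`google.golang.org/api/compute/v1`), assuming Application Default Credentials; the project, proxy, certificate, and URL map names are placeholders, and the partial `global/urlMaps/url-map` form is one of the valid URLs listed in the description above:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/api/compute/v1"
)

func main() {
	ctx := context.Background()
	svc, err := compute.NewService(ctx) // uses Application Default Credentials
	if err != nil {
		log.Fatal(err)
	}

	// Exactly one certificate, as the sslCertificates description requires.
	certReq := &compute.TargetHttpsProxiesSetSslCertificatesRequest{
		SslCertificates: []string{
			"https://www.googleapis.com/compute/v1/projects/my-project/global/sslCertificates/example-cert",
		},
	}
	if _, err := svc.TargetHttpsProxies.SetSslCertificates("my-project", "example-proxy", certReq).Context(ctx).Do(); err != nil {
		log.Fatal(err)
	}

	// The urlMap field accepts any of the URL forms listed above.
	mapRef := &compute.UrlMapReference{UrlMap: "global/urlMaps/url-map"}
	if _, err := svc.TargetHttpsProxies.SetUrlMap("my-project", "example-proxy", mapRef).Context(ctx).Do(); err != nil {
		log.Fatal(err)
	}
}
```
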
@@ -5186,7 +5214,7 @@
},
"kind": {
"type": "string",
- "description": "Type of resource.",
+ "description": "Type of resource. Always compute#targetHttpsProxyList for lists of target HTTPS proxies.",
"default": "compute#targetHttpsProxyList"
},
"nextPageToken": {
@@ -5219,7 +5247,7 @@
},
"instance": {
"type": "string",
- "description": "The URL to the instance that terminates the relevant traffic."
+ "description": "A URL to the virtual machine instance that handles traffic for this target instance. When creating a target instance, you can provide the fully-qualified URL or a valid partial URL to the desired virtual machine. For example, the following are all valid URLs: \n- https://www.googleapis.com/compute/v1/projects/project/zones/zone/instances/instance \n- projects/project/zones/zone/instances/instance \n- zones/zone/instances/instance"
},
"kind": {
"type": "string",
@@ -5332,6 +5360,7 @@
"type": "string",
"description": "[Output Only] A warning code, if applicable. For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response.",
"enum": [
+ "CLEANUP_FAILED",
"DEPRECATED_RESOURCE_USED",
"DISK_SIZE_LARGER_THAN_IMAGE_SIZE",
"INJECTED_KERNELS_DEPRECATED",
@@ -5361,6 +5390,7 @@
"",
"",
"",
+ "",
""
]
},
@@ -5372,7 +5402,7 @@
"properties": {
"key": {
"type": "string",
- "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource, and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
+ "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
},
"value": {
"type": "string",
@@ -5392,7 +5422,7 @@
"TargetPool": {
"id": "TargetPool",
"type": "object",
- "description": "A TargetPool resource. This resource defines a pool of instances, associated HttpHealthCheck resources, and the fallback TargetPool.",
+ "description": "A TargetPool resource. This resource defines a pool of instances, associated HttpHealthCheck resources, and the fallback target pool.",
"properties": {
"backupPool": {
"type": "string",
@@ -5425,7 +5455,7 @@
},
"instances": {
"type": "array",
- "description": "A list of resource URLs to the member virtual machines serving this pool. They must live in zones contained in the same region as this pool.",
+ "description": "A list of resource URLs to the virtual machine instances serving this pool. They must live in zones contained in the same region as this pool.",
"items": {
"type": "string"
}
@@ -5474,7 +5504,7 @@
},
"items": {
"type": "object",
- "description": "A map of scoped target pool lists.",
+ "description": "[Output Only] A map of scoped target pool lists.",
"additionalProperties": {
"$ref": "TargetPoolsScopedList",
"description": "Name of the scope containing this set of target pools."
@@ -5482,7 +5512,7 @@
},
"kind": {
"type": "string",
- "description": "Type of resource.",
+ "description": "[Output Only] Type of resource. Always compute#targetPoolAggregatedList for aggregated lists of target pools.",
"default": "compute#targetPoolAggregatedList"
},
"nextPageToken": {
@@ -5507,7 +5537,7 @@
},
"kind": {
"type": "string",
- "description": "Type of resource.",
+ "description": "[Output Only] Type of resource. Always compute#targetPoolInstanceHealth when checking the health of an instance.",
"default": "compute#targetPoolInstanceHealth"
}
}
@@ -5530,7 +5560,7 @@
},
"kind": {
"type": "string",
- "description": "Type of resource.",
+ "description": "[Output Only] Type of resource. Always compute#targetPoolList for lists of target pools.",
"default": "compute#targetPoolList"
},
"nextPageToken": {
@@ -5549,7 +5579,7 @@
"properties": {
"healthChecks": {
"type": "array",
- "description": "Health check URLs to be added to targetPool.",
+ "description": "A list of HttpHealthCheck resources to add to the target pool.",
"items": {
"$ref": "HealthCheckReference"
}
@@ -5562,7 +5592,7 @@
"properties": {
"instances": {
"type": "array",
- "description": "URLs of the instances to be added to targetPool.",
+ "description": "A full or partial URL to an instance to add to this target pool. This can be a full or partial URL. For example, the following are valid URLs: \n- https://www.googleapis.com/compute/v1/projects/project-id/zones/zone/instances/instance-name \n- projects/project-id/zones/zone/instances/instance-name \n- zones/zone/instances/instance-name",
"items": {
"$ref": "InstanceReference"
}
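
To show the full vs. partial URL forms described above in practice, a minimal sketch of adding an instance to a target pool with the generated Go client (`google.golang.org/api/compute/v1`), assuming Application Default Credentials; the project, region, pool, zone, and instance names are placeholders:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/api/compute/v1"
)

func main() {
	ctx := context.Background()
	svc, err := compute.NewService(ctx) // uses Application Default Credentials
	if err != nil {
		log.Fatal(err)
	}

	req := &compute.TargetPoolsAddInstanceRequest{
		Instances: []*compute.InstanceReference{
			// A partial URL, as described above; the fully-qualified
			// https://www.googleapis.com/compute/v1/... form also works.
			{Instance: "zones/us-central1-f/instances/example-instance"},
		},
	}

	op, err := svc.TargetPools.AddInstance("my-project", "us-central1", "my-pool", req).Context(ctx).Do()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("operation %s is %s", op.Name, op.Status)
}
```
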
@@ -5575,7 +5605,7 @@
"properties": {
"healthChecks": {
"type": "array",
- "description": "Health check URLs to be removed from targetPool.",
+ "description": "Health check URL to be removed. This can be a full or valid partial URL. For example, the following are valid URLs: \n- https://www.googleapis.com/compute/beta/projects/project/global/httpHealthChecks/health-check \n- projects/project/global/httpHealthChecks/health-check \n- global/httpHealthChecks/health-check",
"items": {
"$ref": "HealthCheckReference"
}
@@ -5588,7 +5618,7 @@
"properties": {
"instances": {
"type": "array",
- "description": "URLs of the instances to be removed from targetPool.",
+ "description": "URLs of the instances to be removed from target pool.",
"items": {
"$ref": "InstanceReference"
}
@@ -5614,6 +5644,7 @@
"type": "string",
"description": "[Output Only] A warning code, if applicable. For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response.",
"enum": [
+ "CLEANUP_FAILED",
"DEPRECATED_RESOURCE_USED",
"DISK_SIZE_LARGER_THAN_IMAGE_SIZE",
"INJECTED_KERNELS_DEPRECATED",
@@ -5643,6 +5674,7 @@
"",
"",
"",
+ "",
""
]
},
@@ -5654,7 +5686,7 @@
"properties": {
"key": {
"type": "string",
- "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource, and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
+ "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
},
"value": {
"type": "string",
@@ -5683,6 +5715,7 @@
"TargetVpnGateway": {
"id": "TargetVpnGateway",
"type": "object",
+ "description": "Represents a Target VPN gateway resource.",
"properties": {
"creationTimestamp": {
"type": "string",
@@ -5711,7 +5744,7 @@
},
"name": {
"type": "string",
- "description": "Name of the resource; provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression [a-z]([-a-z0-9]*[a-z0-9])? which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.",
+ "description": "Name of the resource. Provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression [a-z]([-a-z0-9]*[a-z0-9])? which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.",
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"annotations": {
"required": [
@@ -5754,7 +5787,7 @@
},
"tunnels": {
"type": "array",
- "description": "[Output Only] A list of URLs to VpnTunnel resources. VpnTunnels are created using compute.vpntunnels.insert and associated to a VPN gateway.",
+ "description": "[Output Only] A list of URLs to VpnTunnel resources. VpnTunnels are created using compute.vpntunnels.insert method and associated to a VPN gateway.",
"items": {
"type": "string"
}
@@ -5774,7 +5807,7 @@
"description": "A map of scoped target vpn gateway lists.",
"additionalProperties": {
"$ref": "TargetVpnGatewaysScopedList",
- "description": "[Output Only] Name of the scope containing this set of target vpn gateways."
+ "description": "[Output Only] Name of the scope containing this set of target VPN gateways."
}
},
"kind": {
@@ -5842,6 +5875,7 @@
"type": "string",
"description": "[Output Only] A warning code, if applicable. For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response.",
"enum": [
+ "CLEANUP_FAILED",
"DEPRECATED_RESOURCE_USED",
"DISK_SIZE_LARGER_THAN_IMAGE_SIZE",
"INJECTED_KERNELS_DEPRECATED",
@@ -5871,6 +5905,7 @@
"",
"",
"",
+ "",
""
]
},
@@ -5882,7 +5917,7 @@
"properties": {
"key": {
"type": "string",
- "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource, and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
+ "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
},
"value": {
"type": "string",
@@ -6145,7 +6180,7 @@
},
"name": {
"type": "string",
- "description": "Name of the resource; provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression [a-z]([-a-z0-9]*[a-z0-9])? which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.",
+ "description": "Name of the resource. Provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression [a-z]([-a-z0-9]*[a-z0-9])? which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.",
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"annotations": {
"required": [
@@ -6177,6 +6212,7 @@
"type": "string",
"description": "[Output Only] The status of the VPN tunnel.",
"enum": [
+ "ALLOCATING_RESOURCES",
"AUTHORIZATION_ERROR",
"DEPROVISIONING",
"ESTABLISHED",
@@ -6200,12 +6236,13 @@
"",
"",
"",
+ "",
""
]
},
"targetVpnGateway": {
"type": "string",
- "description": "URL of the VPN gateway to which this VPN tunnel is associated. Provided by the client when the VPN tunnel is created.",
+ "description": "URL of the VPN gateway with which this VPN tunnel is associated. Provided by the client when the VPN tunnel is created.",
"annotations": {
"required": [
"compute.vpnTunnels.insert"
@@ -6295,6 +6332,7 @@
"type": "string",
"description": "[Output Only] A warning code, if applicable. For example, Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in the response.",
"enum": [
+ "CLEANUP_FAILED",
"DEPRECATED_RESOURCE_USED",
"DISK_SIZE_LARGER_THAN_IMAGE_SIZE",
"INJECTED_KERNELS_DEPRECATED",
@@ -6324,6 +6362,7 @@
"",
"",
"",
+ "",
""
]
},
@@ -6335,7 +6374,7 @@
"properties": {
"key": {
"type": "string",
- "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource, and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
+ "description": "[Output Only] A key that provides more detail on the warning being returned. For example, for warnings where there are no results in a list request for a particular zone, this key might be scope and the key value might be the zone name. Other examples might be a key indicating a deprecated resource and a suggested replacement, or a warning about invalid network settings (for example, if an instance attempts to perform IP forwarding but is not enabled for IP forwarding)."
},
"value": {
"type": "string",
@@ -6473,12 +6512,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -6532,7 +6571,7 @@
},
"region": {
"type": "string",
- "description": "The name of the region for this request.",
+ "description": "Name of the region for this request.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -6573,7 +6612,7 @@
},
"region": {
"type": "string",
- "description": "The name of the region for this request.",
+ "description": "Name of the region for this request.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -6608,7 +6647,7 @@
},
"region": {
"type": "string",
- "description": "The name of the region for this request.",
+ "description": "Name of the region for this request.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -6633,16 +6672,16 @@
"id": "compute.addresses.list",
"path": "{project}/regions/{region}/addresses",
"httpMethod": "GET",
- "description": "Retrieves a list of address resources contained within the specified region.",
+ "description": "Retrieves a list of addresses contained within the specified region.",
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -6663,7 +6702,7 @@
},
"region": {
"type": "string",
- "description": "The name of the region for this request.",
+ "description": "Name of the region for this request.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -6694,12 +6733,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -6735,11 +6774,11 @@
"id": "compute.autoscalers.delete",
"path": "{project}/zones/{zone}/autoscalers/{autoscaler}",
"httpMethod": "DELETE",
- "description": "Deletes the specified autoscaler resource.",
+ "description": "Deletes the specified autoscaler.",
"parameters": {
"autoscaler": {
"type": "string",
- "description": "Name of the persistent autoscaler resource to delete.",
+ "description": "Name of the autoscaler to delete.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -6753,7 +6792,7 @@
},
"zone": {
"type": "string",
- "description": "Name of the zone scoping this request.",
+ "description": "Name of the zone for this request.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -6776,11 +6815,11 @@
"id": "compute.autoscalers.get",
"path": "{project}/zones/{zone}/autoscalers/{autoscaler}",
"httpMethod": "GET",
- "description": "Returns the specified autoscaler resource.",
+ "description": "Returns the specified autoscaler resource. Get a list of available autoscalers by making a list() request.",
"parameters": {
"autoscaler": {
"type": "string",
- "description": "Name of the persistent autoscaler resource to return.",
+ "description": "Name of the autoscaler to return.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -6794,7 +6833,7 @@
},
"zone": {
"type": "string",
- "description": "Name of the zone scoping this request.",
+ "description": "Name of the zone for this request.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -6818,7 +6857,7 @@
"id": "compute.autoscalers.insert",
"path": "{project}/zones/{zone}/autoscalers",
"httpMethod": "POST",
- "description": "Creates an autoscaler resource in the specified project using the data included in the request.",
+ "description": "Creates an autoscaler in the specified project using the data included in the request.",
"parameters": {
"project": {
"type": "string",
@@ -6829,7 +6868,7 @@
},
"zone": {
"type": "string",
- "description": "Name of the zone scoping this request.",
+ "description": "Name of the zone for this request.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -6854,16 +6893,16 @@
"id": "compute.autoscalers.list",
"path": "{project}/zones/{zone}/autoscalers",
"httpMethod": "GET",
- "description": "Retrieves a list of autoscaler resources contained within the specified zone.",
+ "description": "Retrieves a list of autoscalers contained within the specified zone.",
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -6884,7 +6923,7 @@
},
"zone": {
"type": "string",
- "description": "Name of the zone scoping this request.",
+ "description": "Name of the zone for this request.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -6907,11 +6946,11 @@
"id": "compute.autoscalers.patch",
"path": "{project}/zones/{zone}/autoscalers",
"httpMethod": "PATCH",
- "description": "Updates an autoscaler resource in the specified project using the data included in the request. This method supports patch semantics.",
+ "description": "Updates an autoscaler in the specified project using the data included in the request. This method supports patch semantics.",
"parameters": {
"autoscaler": {
"type": "string",
- "description": "Name of the autoscaler resource to update.",
+ "description": "Name of the autoscaler to update.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "query"
@@ -6925,7 +6964,7 @@
},
"zone": {
"type": "string",
- "description": "Name of the zone scoping this request.",
+ "description": "Name of the zone for this request.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -6951,11 +6990,11 @@
"id": "compute.autoscalers.update",
"path": "{project}/zones/{zone}/autoscalers",
"httpMethod": "PUT",
- "description": "Updates an autoscaler resource in the specified project using the data included in the request.",
+ "description": "Updates an autoscaler in the specified project using the data included in the request.",
"parameters": {
"autoscaler": {
"type": "string",
- "description": "Name of the autoscaler resource to update.",
+ "description": "Name of the autoscaler to update.",
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "query"
},
@@ -6968,7 +7007,7 @@
},
"zone": {
"type": "string",
- "description": "Name of the zone scoping this request.",
+ "description": "Name of the zone for this request.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -7030,7 +7069,7 @@
"id": "compute.backendServices.get",
"path": "{project}/global/backendServices/{backendService}",
"httpMethod": "GET",
- "description": "Returns the specified BackendService resource.",
+ "description": "Returns the specified BackendService resource. Get a list of available backend services by making a list() request.",
"parameters": {
"backendService": {
"type": "string",
@@ -7132,12 +7171,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -7249,16 +7288,16 @@
"id": "compute.diskTypes.aggregatedList",
"path": "{project}/aggregated/diskTypes",
"httpMethod": "GET",
- "description": "Retrieves an aggregated list of disk type resources.",
+ "description": "Retrieves an aggregated list of disk types.",
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -7294,11 +7333,11 @@
"id": "compute.diskTypes.get",
"path": "{project}/zones/{zone}/diskTypes/{diskType}",
"httpMethod": "GET",
- "description": "Returns the specified disk type resource.",
+ "description": "Returns the specified disk type. Get a list of available disk types by making a list() request.",
"parameters": {
"diskType": {
"type": "string",
- "description": "Name of the disk type resource to return.",
+ "description": "Name of the disk type to return.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -7336,16 +7375,16 @@
"id": "compute.diskTypes.list",
"path": "{project}/zones/{zone}/diskTypes",
"httpMethod": "GET",
- "description": "Retrieves a list of disk type resources available to the specified project.",
+ "description": "Retrieves a list of disk types available to the specified project.",
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -7397,12 +7436,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -7523,7 +7562,7 @@
"id": "compute.disks.get",
"path": "{project}/zones/{zone}/disks/{disk}",
"httpMethod": "GET",
- "description": "Returns a specified persistent disk.",
+ "description": "Returns a specified persistent disk. Get a list of available persistent disks by making a list() request.",
"parameters": {
"disk": {
"type": "string",
@@ -7565,7 +7604,7 @@
"id": "compute.disks.insert",
"path": "{project}/zones/{zone}/disks",
"httpMethod": "POST",
- "description": "Creates a persistent disk in the specified project using the data included in the request.",
+ "description": "Creates a persistent disk in the specified project using the data in the request. You can create a disk with a sourceImage, a sourceSnapshot, or create an empty 200 GB data disk by omitting all properties. You can also create a disk that is larger than the default size by specifying the sizeGb property.",
"parameters": {
"project": {
"type": "string",
@@ -7610,12 +7649,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -7663,11 +7702,11 @@
"id": "compute.firewalls.delete",
"path": "{project}/global/firewalls/{firewall}",
"httpMethod": "DELETE",
- "description": "Deletes the specified firewall resource.",
+ "description": "Deletes the specified firewall.",
"parameters": {
"firewall": {
"type": "string",
- "description": "Name of the firewall resource to delete.",
+ "description": "Name of the firewall rule to delete.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -7696,11 +7735,11 @@
"id": "compute.firewalls.get",
"path": "{project}/global/firewalls/{firewall}",
"httpMethod": "GET",
- "description": "Returns the specified firewall resource.",
+ "description": "Returns the specified firewall.",
"parameters": {
"firewall": {
"type": "string",
- "description": "Name of the firewall resource to return.",
+ "description": "Name of the firewall rule to return.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -7730,7 +7769,7 @@
"id": "compute.firewalls.insert",
"path": "{project}/global/firewalls",
"httpMethod": "POST",
- "description": "Creates a firewall resource in the specified project using the data included in the request.",
+ "description": "Creates a firewall rule in the specified project using the data included in the request.",
"parameters": {
"project": {
"type": "string",
@@ -7758,16 +7797,16 @@
"id": "compute.firewalls.list",
"path": "{project}/global/firewalls",
"httpMethod": "GET",
- "description": "Retrieves the list of firewall resources available to the specified project.",
+ "description": "Retrieves the list of firewall rules available to the specified project.",
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -7803,11 +7842,11 @@
"id": "compute.firewalls.patch",
"path": "{project}/global/firewalls/{firewall}",
"httpMethod": "PATCH",
- "description": "Updates the specified firewall resource with the data included in the request. This method supports patch semantics.",
+ "description": "Updates the specified firewall rule with the data included in the request. This method supports patch semantics.",
"parameters": {
"firewall": {
"type": "string",
- "description": "Name of the firewall resource to update.",
+ "description": "Name of the firewall rule to update.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -7839,11 +7878,11 @@
"id": "compute.firewalls.update",
"path": "{project}/global/firewalls/{firewall}",
"httpMethod": "PUT",
- "description": "Updates the specified firewall resource with the data included in the request.",
+ "description": "Updates the specified firewall rule with the data included in the request.",
"parameters": {
"firewall": {
"type": "string",
- "description": "Name of the firewall resource to update.",
+ "description": "Name of the firewall rule to update.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -7883,12 +7922,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -8047,12 +8086,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -8177,7 +8216,7 @@
"id": "compute.globalAddresses.get",
"path": "{project}/global/addresses/{address}",
"httpMethod": "GET",
- "description": "Returns the specified address resource.",
+ "description": "Returns the specified address resource. Get a list of available addresses by making a list() request.",
"parameters": {
"address": {
"type": "string",
@@ -8239,16 +8278,16 @@
"id": "compute.globalAddresses.list",
"path": "{project}/global/addresses",
"httpMethod": "GET",
- "description": "Retrieves a list of global address resources.",
+ "description": "Retrieves a list of global addresses.",
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -8321,7 +8360,7 @@
"id": "compute.globalForwardingRules.get",
"path": "{project}/global/forwardingRules/{forwardingRule}",
"httpMethod": "GET",
- "description": "Returns the specified ForwardingRule resource.",
+ "description": "Returns the specified ForwardingRule resource. Get a list of available forwarding rules by making a list() request.",
"parameters": {
"forwardingRule": {
"type": "string",
@@ -8387,12 +8426,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -8472,12 +8511,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -8543,7 +8582,7 @@
"id": "compute.globalOperations.get",
"path": "{project}/global/operations/{operation}",
"httpMethod": "GET",
- "description": "Retrieves the specified Operations resource.",
+ "description": "Retrieves the specified Operations resource. Get a list of operations by making a list() request.",
"parameters": {
"operation": {
"type": "string",
@@ -8581,12 +8620,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -8659,7 +8698,7 @@
"id": "compute.httpHealthChecks.get",
"path": "{project}/global/httpHealthChecks/{httpHealthCheck}",
"httpMethod": "GET",
- "description": "Returns the specified HttpHealthCheck resource.",
+ "description": "Returns the specified HttpHealthCheck resource. Get a list of available HTTP health checks by making a list() request.",
"parameters": {
"httpHealthCheck": {
"type": "string",
@@ -8725,12 +8764,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -8875,7 +8914,7 @@
"id": "compute.httpsHealthChecks.get",
"path": "{project}/global/httpsHealthChecks/{httpsHealthCheck}",
"httpMethod": "GET",
- "description": "Returns the specified HttpsHealthCheck resource.",
+ "description": "Returns the specified HttpsHealthCheck resource. Get a list of available HTTPS health checks by making a list() request.",
"parameters": {
"httpsHealthCheck": {
"type": "string",
@@ -8941,12 +8980,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -9058,7 +9097,7 @@
"id": "compute.images.delete",
"path": "{project}/global/images/{image}",
"httpMethod": "DELETE",
- "description": "Deletes the specified image resource.",
+ "description": "Deletes the specified image.",
"parameters": {
"image": {
"type": "string",
@@ -9127,7 +9166,7 @@
"id": "compute.images.get",
"path": "{project}/global/images/{image}",
"httpMethod": "GET",
- "description": "Returns the specified image resource.",
+ "description": "Returns the specified image. Get a list of available images by making a list() request.",
"parameters": {
"image": {
"type": "string",
@@ -9161,7 +9200,7 @@
"id": "compute.images.insert",
"path": "{project}/global/images",
"httpMethod": "POST",
- "description": "Creates an image resource in the specified project using the data included in the request.",
+ "description": "Creates an image in the specified project using the data included in the request.",
"parameters": {
"project": {
"type": "string",
@@ -9196,12 +9235,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -9287,12 +9326,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -9409,7 +9448,7 @@
"id": "compute.instanceGroupManagers.get",
"path": "{project}/zones/{zone}/instanceGroupManagers/{instanceGroupManager}",
"httpMethod": "GET",
- "description": "Returns all of the details about the specified managed instance group.",
+ "description": "Returns all of the details about the specified managed instance group. Get a list of available managed instance groups by making a list() request.",
"parameters": {
"instanceGroupManager": {
"type": "string",
@@ -9488,12 +9527,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -9753,7 +9792,7 @@
"id": "compute.instanceGroups.addInstances",
"path": "{project}/zones/{zone}/instanceGroups/{instanceGroup}/addInstances",
"httpMethod": "POST",
- "description": "Adds a list of instances to the specified instance group. Read Adding instances for more information.",
+ "description": "Adds a list of instances to the specified instance group. All of the instances in the instance group must be in the same network/subnetwork. Read Adding instances for more information.",
"parameters": {
"instanceGroup": {
"type": "string",
@@ -9799,12 +9838,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -9879,7 +9918,7 @@
"id": "compute.instanceGroups.get",
"path": "{project}/zones/{zone}/instanceGroups/{instanceGroup}",
"httpMethod": "GET",
- "description": "Returns the specified instance group resource.",
+ "description": "Returns the specified instance group. Get a list of available instance groups by making a list() request.",
"parameters": {
"instanceGroup": {
"type": "string",
@@ -9958,12 +9997,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -10010,7 +10049,7 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"instanceGroup": {
@@ -10021,7 +10060,7 @@
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -10156,7 +10195,7 @@
"id": "compute.instanceTemplates.delete",
"path": "{project}/global/instanceTemplates/{instanceTemplate}",
"httpMethod": "DELETE",
- "description": "Deletes the specified instance template.",
+ "description": "Deletes the specified instance template. If you delete an instance template that is being referenced from another instance group, the instance group will not be able to create or recreate virtual machine instances. Deleting an instance template is permanent and cannot be undone.",
"parameters": {
"instanceTemplate": {
"type": "string",
@@ -10189,7 +10228,7 @@
"id": "compute.instanceTemplates.get",
"path": "{project}/global/instanceTemplates/{instanceTemplate}",
"httpMethod": "GET",
- "description": "Returns the specified instance template resource.",
+ "description": "Returns the specified instance template. Get a list of available instance templates by making a list() request.",
"parameters": {
"instanceTemplate": {
"type": "string",
@@ -10255,12 +10294,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -10351,16 +10390,16 @@
"id": "compute.instances.aggregatedList",
"path": "{project}/aggregated/instances",
"httpMethod": "GET",
- "description": "Retrieves aggregated list of instance resources.",
+ "description": "Retrieves aggregated list of instances.",
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -10400,7 +10439,7 @@
"parameters": {
"instance": {
"type": "string",
- "description": "Instance name.",
+ "description": "The instance name for this request.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -10585,7 +10624,7 @@
"id": "compute.instances.get",
"path": "{project}/zones/{zone}/instances/{instance}",
"httpMethod": "GET",
- "description": "Returns the specified instance resource.",
+ "description": "Returns the specified Instance resource. Get a list of available instances by making a list() request.",
"parameters": {
"instance": {
"type": "string",
@@ -10714,16 +10753,16 @@
"id": "compute.instances.list",
"path": "{project}/zones/{zone}/instances",
"httpMethod": "GET",
- "description": "Retrieves the list of instance resources contained within the specified zone.",
+ "description": "Retrieves the list of instances contained within the specified zone.",
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -11126,11 +11165,11 @@
"id": "compute.licenses.get",
"path": "{project}/global/licenses/{license}",
"httpMethod": "GET",
- "description": "Returns the specified license resource.",
+ "description": "Returns the specified License resource. Get a list of available licenses by making a list() request.",
"parameters": {
"license": {
"type": "string",
- "description": "Name of the license resource to return.",
+ "description": "Name of the License resource to return.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -11164,16 +11203,16 @@
"id": "compute.machineTypes.aggregatedList",
"path": "{project}/aggregated/machineTypes",
"httpMethod": "GET",
- "description": "Retrieves an aggregated list of machine type resources.",
+ "description": "Retrieves an aggregated list of machine types.",
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -11209,11 +11248,11 @@
"id": "compute.machineTypes.get",
"path": "{project}/zones/{zone}/machineTypes/{machineType}",
"httpMethod": "GET",
- "description": "Returns the specified machine type resource.",
+ "description": "Returns the specified machine type. Get a list of available machine types by making a list() request.",
"parameters": {
"machineType": {
"type": "string",
- "description": "Name of the machine type resource to return.",
+ "description": "Name of the machine type to return.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -11251,16 +11290,16 @@
"id": "compute.machineTypes.list",
"path": "{project}/zones/{zone}/machineTypes",
"httpMethod": "GET",
- "description": "Retrieves a list of machine type resources available to the specified project.",
+ "description": "Retrieves a list of machine types available to the specified project.",
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -11308,11 +11347,11 @@
"id": "compute.networks.delete",
"path": "{project}/global/networks/{network}",
"httpMethod": "DELETE",
- "description": "Deletes the specified network resource.",
+ "description": "Deletes the specified network.",
"parameters": {
"network": {
"type": "string",
- "description": "Name of the network resource to delete.",
+ "description": "Name of the network to delete.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -11341,11 +11380,11 @@
"id": "compute.networks.get",
"path": "{project}/global/networks/{network}",
"httpMethod": "GET",
- "description": "Returns the specified network resource.",
+ "description": "Returns the specified network. Get a list of available networks by making a list() request.",
"parameters": {
"network": {
"type": "string",
- "description": "Name of the network resource to return.",
+ "description": "Name of the network to return.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -11375,7 +11414,7 @@
"id": "compute.networks.insert",
"path": "{project}/global/networks",
"httpMethod": "POST",
- "description": "Creates a network resource in the specified project using the data included in the request.",
+ "description": "Creates a network in the specified project using the data included in the request.",
"parameters": {
"project": {
"type": "string",
@@ -11403,16 +11442,16 @@
"id": "compute.networks.list",
"path": "{project}/global/networks",
"httpMethod": "GET",
- "description": "Retrieves the list of network resources available to the specified project.",
+ "description": "Retrieves the list of networks available to the specified project.",
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -11452,7 +11491,7 @@
"id": "compute.projects.get",
"path": "{project}",
"httpMethod": "GET",
- "description": "Returns the specified project resource.",
+ "description": "Returns the specified Project resource.",
"parameters": {
"project": {
"type": "string",
@@ -11615,7 +11654,7 @@
},
"region": {
"type": "string",
- "description": "Name of the region scoping this request.",
+ "description": "Name of the region for this request.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -11653,7 +11692,7 @@
},
"region": {
"type": "string",
- "description": "Name of the region scoping this request.",
+ "description": "Name of the region for this request.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -11681,12 +11720,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -11707,7 +11746,7 @@
},
"region": {
"type": "string",
- "description": "Name of the region scoping this request.",
+ "description": "Name of the region for this request.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -11734,7 +11773,7 @@
"id": "compute.regions.get",
"path": "{project}/regions/{region}",
"httpMethod": "GET",
- "description": "Returns the specified region resource.",
+ "description": "Returns the specified Region resource. Get a list of available regions by making a list() request.",
"parameters": {
"project": {
"type": "string",
@@ -11772,12 +11811,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -11817,7 +11856,7 @@
"id": "compute.routes.delete",
"path": "{project}/global/routes/{route}",
"httpMethod": "DELETE",
- "description": "Deletes the specified route resource.",
+ "description": "Deletes the specified Route resource.",
"parameters": {
"project": {
"type": "string",
@@ -11828,7 +11867,7 @@
},
"route": {
"type": "string",
- "description": "Name of the route resource to delete.",
+ "description": "Name of the Route resource to delete.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -11850,7 +11889,7 @@
"id": "compute.routes.get",
"path": "{project}/global/routes/{route}",
"httpMethod": "GET",
- "description": "Returns the specified route resource.",
+ "description": "Returns the specified Route resource. Get a list of available routes by making a list() request.",
"parameters": {
"project": {
"type": "string",
@@ -11861,7 +11900,7 @@
},
"route": {
"type": "string",
- "description": "Name of the route resource to return.",
+ "description": "Name of the Route resource to return.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -11884,7 +11923,7 @@
"id": "compute.routes.insert",
"path": "{project}/global/routes",
"httpMethod": "POST",
- "description": "Creates a route resource in the specified project using the data included in the request.",
+ "description": "Creates a Route resource in the specified project using the data included in the request.",
"parameters": {
"project": {
"type": "string",
@@ -11912,16 +11951,16 @@
"id": "compute.routes.list",
"path": "{project}/global/routes",
"httpMethod": "GET",
- "description": "Retrieves the list of route resources available to the specified project.",
+ "description": "Retrieves the list of Route resources available to the specified project.",
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -11994,7 +12033,7 @@
"id": "compute.snapshots.get",
"path": "{project}/global/snapshots/{snapshot}",
"httpMethod": "GET",
- "description": "Returns the specified Snapshot resource.",
+ "description": "Returns the specified Snapshot resource. Get a list of available snapshots by making a list() request.",
"parameters": {
"project": {
"type": "string",
@@ -12032,12 +12071,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -12110,7 +12149,7 @@
"id": "compute.sslCertificates.get",
"path": "{project}/global/sslCertificates/{sslCertificate}",
"httpMethod": "GET",
- "description": "Returns the specified SslCertificate resource.",
+ "description": "Returns the specified SslCertificate resource. Get a list of available SSL certificates by making a list() request.",
"parameters": {
"project": {
"type": "string",
@@ -12176,12 +12215,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -12225,12 +12264,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -12307,7 +12346,7 @@
"id": "compute.subnetworks.get",
"path": "{project}/regions/{region}/subnetworks/{subnetwork}",
"httpMethod": "GET",
- "description": "Returns the specified subnetwork.",
+ "description": "Returns the specified subnetwork. Get a list of available subnetworks by making a list() request.",
"parameters": {
"project": {
"type": "string",
@@ -12389,12 +12428,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -12475,7 +12514,7 @@
"id": "compute.targetHttpProxies.get",
"path": "{project}/global/targetHttpProxies/{targetHttpProxy}",
"httpMethod": "GET",
- "description": "Returns the specified TargetHttpProxy resource.",
+ "description": "Returns the specified TargetHttpProxy resource. Get a list of available target HTTP proxies by making a list() request.",
"parameters": {
"project": {
"type": "string",
@@ -12541,12 +12580,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -12593,7 +12632,7 @@
},
"targetHttpProxy": {
"type": "string",
- "description": "Name of the TargetHttpProxy resource whose URL map is to be set.",
+ "description": "Name of the TargetHttpProxy to set a URL map for.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -12655,7 +12694,7 @@
"id": "compute.targetHttpsProxies.get",
"path": "{project}/global/targetHttpsProxies/{targetHttpsProxy}",
"httpMethod": "GET",
- "description": "Returns the specified TargetHttpsProxy resource.",
+ "description": "Returns the specified TargetHttpsProxy resource. Get a list of available target HTTPS proxies by making a list() request.",
"parameters": {
"project": {
"type": "string",
@@ -12721,12 +12760,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -12773,7 +12812,7 @@
},
"targetHttpsProxy": {
"type": "string",
- "description": "Name of the TargetHttpsProxy resource whose SSLCertificate is to be set.",
+ "description": "Name of the TargetHttpsProxy resource to set an SslCertificates resource for.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -12842,12 +12881,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -12924,7 +12963,7 @@
"id": "compute.targetInstances.get",
"path": "{project}/zones/{zone}/targetInstances/{targetInstance}",
"httpMethod": "GET",
- "description": "Returns the specified TargetInstance resource.",
+ "description": "Returns the specified TargetInstance resource. Get a list of available target instances by making a list() request.",
"parameters": {
"project": {
"type": "string",
@@ -13006,12 +13045,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -13059,10 +13098,11 @@
"id": "compute.targetPools.addHealthCheck",
"path": "{project}/regions/{region}/targetPools/{targetPool}/addHealthCheck",
"httpMethod": "POST",
- "description": "Adds health check URL to targetPool.",
+ "description": "Adds health check URLs to a target pool.",
"parameters": {
"project": {
"type": "string",
+ "description": "Project ID for this request.",
"required": true,
"pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
"location": "path"
@@ -13076,7 +13116,7 @@
},
"targetPool": {
"type": "string",
- "description": "Name of the TargetPool resource to which health_check_url is to be added.",
+ "description": "Name of the target pool to add a health check to.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -13102,10 +13142,11 @@
"id": "compute.targetPools.addInstance",
"path": "{project}/regions/{region}/targetPools/{targetPool}/addInstance",
"httpMethod": "POST",
- "description": "Adds instance URL to targetPool.",
+ "description": "Adds an instance to a target pool.",
"parameters": {
"project": {
"type": "string",
+ "description": "Project ID for this request.",
"required": true,
"pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
"location": "path"
@@ -13119,7 +13160,7 @@
},
"targetPool": {
"type": "string",
- "description": "Name of the TargetPool resource to which instance_url is to be added.",
+ "description": "Name of the TargetPool resource to add instances to.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -13149,12 +13190,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -13190,7 +13231,7 @@
"id": "compute.targetPools.delete",
"path": "{project}/regions/{region}/targetPools/{targetPool}",
"httpMethod": "DELETE",
- "description": "Deletes the specified TargetPool resource.",
+ "description": "Deletes the specified target pool.",
"parameters": {
"project": {
"type": "string",
@@ -13231,7 +13272,7 @@
"id": "compute.targetPools.get",
"path": "{project}/regions/{region}/targetPools/{targetPool}",
"httpMethod": "GET",
- "description": "Returns the specified TargetPool resource.",
+ "description": "Returns the specified target pool. Get a list of available target pools by making a list() request.",
"parameters": {
"project": {
"type": "string",
@@ -13273,10 +13314,11 @@
"id": "compute.targetPools.getHealth",
"path": "{project}/regions/{region}/targetPools/{targetPool}/getHealth",
"httpMethod": "POST",
- "description": "Gets the most recent health check results for each IP for the given instance that is referenced by the given TargetPool.",
+ "description": "Gets the most recent health check results for each IP for the instance that is referenced by the given target pool.",
"parameters": {
"project": {
"type": "string",
+ "description": "Project ID for this request.",
"required": true,
"pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
"location": "path"
@@ -13317,7 +13359,7 @@
"id": "compute.targetPools.insert",
"path": "{project}/regions/{region}/targetPools",
"httpMethod": "POST",
- "description": "Creates a TargetPool resource in the specified project and region using the data included in the request.",
+ "description": "Creates a target pool in the specified project and region using the data included in the request.",
"parameters": {
"project": {
"type": "string",
@@ -13353,16 +13395,16 @@
"id": "compute.targetPools.list",
"path": "{project}/regions/{region}/targetPools",
"httpMethod": "GET",
- "description": "Retrieves a list of TargetPool resources available to the specified project and region.",
+ "description": "Retrieves a list of target pools available to the specified project and region.",
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -13406,24 +13448,25 @@
"id": "compute.targetPools.removeHealthCheck",
"path": "{project}/regions/{region}/targetPools/{targetPool}/removeHealthCheck",
"httpMethod": "POST",
- "description": "Removes health check URL from targetPool.",
+ "description": "Removes health check URL from a target pool.",
"parameters": {
"project": {
"type": "string",
+ "description": "Project ID for this request.",
"required": true,
"pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
"location": "path"
},
"region": {
"type": "string",
- "description": "Name of the region scoping this request.",
+ "description": "Name of the region for this request.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
},
"targetPool": {
"type": "string",
- "description": "Name of the TargetPool resource to which health_check_url is to be removed.",
+ "description": "Name of the target pool to remove health checks from.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -13449,10 +13492,11 @@
"id": "compute.targetPools.removeInstance",
"path": "{project}/regions/{region}/targetPools/{targetPool}/removeInstance",
"httpMethod": "POST",
- "description": "Removes instance URL from targetPool.",
+ "description": "Removes instance URL from a target pool.",
"parameters": {
"project": {
"type": "string",
+ "description": "Project ID for this request.",
"required": true,
"pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
"location": "path"
@@ -13466,7 +13510,7 @@
},
"targetPool": {
"type": "string",
- "description": "Name of the TargetPool resource to which instance_url is to be removed.",
+ "description": "Name of the TargetPool resource to remove instances from.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -13492,11 +13536,11 @@
"id": "compute.targetPools.setBackup",
"path": "{project}/regions/{region}/targetPools/{targetPool}/setBackup",
"httpMethod": "POST",
- "description": "Changes backup pool configurations.",
+ "description": "Changes a backup target pool's configurations.",
"parameters": {
"failoverRatio": {
"type": "number",
- "description": "New failoverRatio value for the containing target pool.",
+ "description": "New failoverRatio value for the target pool.",
"format": "float",
"location": "query"
},
@@ -13516,7 +13560,7 @@
},
"targetPool": {
"type": "string",
- "description": "Name of the TargetPool resource for which the backup is to be set.",
+ "description": "Name of the TargetPool resource to set a backup pool for.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -13546,16 +13590,16 @@
"id": "compute.targetVpnGateways.aggregatedList",
"path": "{project}/aggregated/targetVpnGateways",
"httpMethod": "GET",
- "description": "Retrieves an aggregated list of target VPN gateways .",
+ "description": "Retrieves an aggregated list of target VPN gateways.",
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -13591,7 +13635,7 @@
"id": "compute.targetVpnGateways.delete",
"path": "{project}/regions/{region}/targetVpnGateways/{targetVpnGateway}",
"httpMethod": "DELETE",
- "description": "Deletes the specified TargetVpnGateway resource.",
+ "description": "Deletes the specified target VPN gateway.",
"parameters": {
"project": {
"type": "string",
@@ -13602,14 +13646,14 @@
},
"region": {
"type": "string",
- "description": "The name of the region for this request.",
+ "description": "Name of the region for this request.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
},
"targetVpnGateway": {
"type": "string",
- "description": "Name of the TargetVpnGateway resource to delete.",
+ "description": "Name of the target VPN gateway to delete.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -13632,7 +13676,7 @@
"id": "compute.targetVpnGateways.get",
"path": "{project}/regions/{region}/targetVpnGateways/{targetVpnGateway}",
"httpMethod": "GET",
- "description": "Returns the specified TargetVpnGateway resource.",
+ "description": "Returns the specified target VPN gateway. Get a list of available target VPN gateways by making a list() request.",
"parameters": {
"project": {
"type": "string",
@@ -13643,14 +13687,14 @@
},
"region": {
"type": "string",
- "description": "The name of the region for this request.",
+ "description": "Name of the region for this request.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
},
"targetVpnGateway": {
"type": "string",
- "description": "Name of the TargetVpnGateway resource to return.",
+ "description": "Name of the target VPN gateway to return.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -13674,7 +13718,7 @@
"id": "compute.targetVpnGateways.insert",
"path": "{project}/regions/{region}/targetVpnGateways",
"httpMethod": "POST",
- "description": "Creates a TargetVpnGateway resource in the specified project and region using the data included in the request.",
+ "description": "Creates a target VPN gateway in the specified project and region using the data included in the request.",
"parameters": {
"project": {
"type": "string",
@@ -13685,7 +13729,7 @@
},
"region": {
"type": "string",
- "description": "The name of the region for this request.",
+ "description": "Name of the region for this request.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -13710,16 +13754,16 @@
"id": "compute.targetVpnGateways.list",
"path": "{project}/regions/{region}/targetVpnGateways",
"httpMethod": "GET",
- "description": "Retrieves a list of TargetVpnGateway resources available to the specified project and region.",
+ "description": "Retrieves a list of target VPN gateways available to the specified project and region.",
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -13740,7 +13784,7 @@
},
"region": {
"type": "string",
- "description": "The name of the region for this request.",
+ "description": "Name of the region for this request.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -13800,7 +13844,7 @@
"id": "compute.urlMaps.get",
"path": "{project}/global/urlMaps/{urlMap}",
"httpMethod": "GET",
- "description": "Returns the specified UrlMap resource.",
+ "description": "Returns the specified UrlMap resource. Get a list of available URL maps by making a list() request.",
"parameters": {
"project": {
"type": "string",
@@ -13866,12 +13910,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -14023,12 +14067,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -14075,7 +14119,7 @@
},
"region": {
"type": "string",
- "description": "The name of the region for this request.",
+ "description": "Name of the region for this request.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -14105,7 +14149,7 @@
"id": "compute.vpnTunnels.get",
"path": "{project}/regions/{region}/vpnTunnels/{vpnTunnel}",
"httpMethod": "GET",
- "description": "Returns the specified VpnTunnel resource.",
+ "description": "Returns the specified VpnTunnel resource. Get a list of available VPN tunnels by making a list() request.",
"parameters": {
"project": {
"type": "string",
@@ -14116,7 +14160,7 @@
},
"region": {
"type": "string",
- "description": "The name of the region for this request.",
+ "description": "Name of the region for this request.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -14158,7 +14202,7 @@
},
"region": {
"type": "string",
- "description": "The name of the region for this request.",
+ "description": "Name of the region for this request.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -14187,12 +14231,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -14213,7 +14257,7 @@
},
"region": {
"type": "string",
- "description": "The name of the region for this request.",
+ "description": "Name of the region for this request.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -14258,7 +14302,7 @@
},
"zone": {
"type": "string",
- "description": "Name of the zone scoping this request.",
+ "description": "Name of the zone for this request.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -14296,7 +14340,7 @@
},
"zone": {
"type": "string",
- "description": "Name of the zone scoping this request.",
+ "description": "Name of the zone for this request.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -14324,12 +14368,12 @@
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
@@ -14350,7 +14394,7 @@
},
"zone": {
"type": "string",
- "description": "Name of the zone scoping this request.",
+ "description": "Name of the zone for request.",
"required": true,
"pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
"location": "path"
@@ -14377,7 +14421,7 @@
"id": "compute.zones.get",
"path": "{project}/zones/{zone}",
"httpMethod": "GET",
- "description": "Returns the specified zone resource.",
+ "description": "Returns the specified Zone resource. Get a list of available zones by making a list() request.",
"parameters": {
"project": {
"type": "string",
@@ -14411,16 +14455,16 @@
"id": "compute.zones.list",
"path": "{project}/zones",
"httpMethod": "GET",
- "description": "Retrieves the list of zone resources available to the specified project.",
+ "description": "Retrieves the list of Zone resources available to the specified project.",
"parameters": {
"filter": {
"type": "string",
- "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
"location": "query"
},
"maxResults": {
"type": "integer",
- "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
"default": "500",
"format": "uint32",
"minimum": "0",
diff --git a/vendor/google.golang.org/api/compute/v1/compute-gen.go b/vendor/google.golang.org/api/compute/v1/compute-gen.go
index 9e9bde1bc11a..9bd95d00f9a2 100644
--- a/vendor/google.golang.org/api/compute/v1/compute-gen.go
+++ b/vendor/google.golang.org/api/compute/v1/compute-gen.go
@@ -579,7 +579,7 @@ type Address struct {
// addresses.
Kind string `json:"kind,omitempty"`
- // Name: Name of the resource; provided by the client when the resource
+ // Name: Name of the resource. Provided by the client when the resource
// is created. The name must be 1-63 characters long, and comply with
// RFC1035. Specifically, the name must be 1-63 characters long and
// match the regular expression [a-z]([-a-z0-9]*[a-z0-9])? which means
@@ -670,13 +670,13 @@ func (s *AddressAggregatedList) MarshalJSON() ([]byte, error) {
return gensupport.MarshalJSON(raw, s.ForceSendFields)
}
-// AddressList: Contains a list of address resources.
+// AddressList: Contains a list of addresses.
type AddressList struct {
// Id: [Output Only] The unique identifier for the resource. This
// identifier is defined by the server.
Id string `json:"id,omitempty"`
- // Items: [Output Only] A list of Address resources.
+ // Items: [Output Only] A list of addresses.
Items []*Address `json:"items,omitempty"`
// Kind: [Output Only] Type of resource. Always compute#addressList for
@@ -744,6 +744,7 @@ type AddressesScopedListWarning struct {
// the response.
//
// Possible values:
+ // "CLEANUP_FAILED"
// "DEPRECATED_RESOURCE_USED"
// "DISK_SIZE_LARGER_THAN_IMAGE_SIZE"
// "INJECTED_KERNELS_DEPRECATED"
@@ -789,7 +790,7 @@ type AddressesScopedListWarningData struct {
// being returned. For example, for warnings where there are no results
// in a list request for a particular zone, this key might be scope and
// the key value might be the zone name. Other examples might be a key
- // indicating a deprecated resource, and a suggested replacement, or a
+ // indicating a deprecated resource and a suggested replacement, or a
// warning about invalid network settings (for example, if an instance
// attempts to perform IP forwarding but is not enabled for IP
// forwarding).
@@ -852,8 +853,11 @@ type AttachedDisk struct {
InitializeParams *AttachedDiskInitializeParams `json:"initializeParams,omitempty"`
// Interface: Specifies the disk interface to use for attaching this
- // disk, either SCSI or NVME. The default is SCSI. For performance
- // characteristics of SCSI over NVMe, see Local SSD performance.
+ // disk, which is either SCSI or NVME. The default is SCSI. Persistent
+ // disks must always use SCSI and the request will fail if you attempt
+ // to attach a persistent disk in any other format than SCSI. Local SSDs
+ // can use either NVME or SCSI. For performance characteristics of SCSI
+ // over NVMe, see Local SSD performance.
//
// Possible values:
// "NVME"
@@ -969,8 +973,18 @@ func (s *AttachedDiskInitializeParams) MarshalJSON() ([]byte, error) {
return gensupport.MarshalJSON(raw, s.ForceSendFields)
}
+// Autoscaler: Represents an Autoscaler resource. Autoscalers allow you
+// to automatically scale virtual machine instances in managed instance
+// groups according to an autoscaling policy that you define. For more
+// information, read Autoscaling Groups of Instances.
type Autoscaler struct {
- // AutoscalingPolicy: Autoscaling configuration.
+ // AutoscalingPolicy: The configuration parameters for the autoscaling
+ // algorithm. You can define one or more of the policies for an
+ // autoscaler: cpuUtilization, customMetricUtilizations, and
+ // loadBalancingUtilization.
+ //
+ // If none of these are specified, the default will be to autoscale
+ // based on cpuUtilization to 0.8 or 80%.
AutoscalingPolicy *AutoscalingPolicy `json:"autoscalingPolicy,omitempty"`
// CreationTimestamp: [Output Only] Creation timestamp in RFC3339 text
@@ -985,7 +999,8 @@ type Autoscaler struct {
// identifier is defined by the server.
Id uint64 `json:"id,omitempty,string"`
- // Kind: Type of the resource.
+ // Kind: [Output Only] Type of the resource. Always compute#autoscaler
+ // for autoscalers.
Kind string `json:"kind,omitempty"`
// Name: Name of the resource. Provided by the client when the resource
@@ -1000,8 +1015,8 @@ type Autoscaler struct {
// SelfLink: [Output Only] Server-defined URL for the resource.
SelfLink string `json:"selfLink,omitempty"`
- // Target: URL of Instance Group Manager or Replica Pool which will be
- // controlled by Autoscaler.
+ // Target: URL of the managed instance group that this autoscaler will
+ // scale.
Target string `json:"target,omitempty"`
// Zone: [Output Only] URL of the zone where the instance group resides.
@@ -1034,7 +1049,8 @@ type AutoscalerAggregatedList struct {
// Items: A map of scoped autoscaler lists.
Items map[string]AutoscalersScopedList `json:"items,omitempty"`
- // Kind: Type of resource.
+ // Kind: [Output Only] Type of resource. Always
+ // compute#autoscalerAggregatedList for aggregated lists of autoscalers.
Kind string `json:"kind,omitempty"`
// NextPageToken: [Output Only] This token allows you to get the next
@@ -1067,7 +1083,7 @@ func (s *AutoscalerAggregatedList) MarshalJSON() ([]byte, error) {
return gensupport.MarshalJSON(raw, s.ForceSendFields)
}
-// AutoscalerList: Contains a list of persistent autoscaler resources.
+// AutoscalerList: Contains a list of Autoscaler resources.
type AutoscalerList struct {
// Id: [Output Only] The unique identifier for the resource. This
// identifier is defined by the server.
@@ -1076,7 +1092,8 @@ type AutoscalerList struct {
// Items: A list of Autoscaler resources.
Items []*Autoscaler `json:"items,omitempty"`
- // Kind: Type of resource.
+ // Kind: [Output Only] Type of resource. Always compute#autoscalerList
+ // for lists of autoscalers.
Kind string `json:"kind,omitempty"`
// NextPageToken: [Output Only] This token allows you to get the next
@@ -1110,11 +1127,12 @@ func (s *AutoscalerList) MarshalJSON() ([]byte, error) {
}
type AutoscalersScopedList struct {
- // Autoscalers: List of autoscalers contained in this scope.
+ // Autoscalers: [Output Only] List of autoscalers contained in this
+ // scope.
Autoscalers []*Autoscaler `json:"autoscalers,omitempty"`
- // Warning: Informational warning which replaces the list of autoscalers
- // when the list is empty.
+ // Warning: [Output Only] Informational warning which replaces the list
+ // of autoscalers when the list is empty.
Warning *AutoscalersScopedListWarning `json:"warning,omitempty"`
// ForceSendFields is a list of field names (e.g. "Autoscalers") to
@@ -1132,14 +1150,15 @@ func (s *AutoscalersScopedList) MarshalJSON() ([]byte, error) {
return gensupport.MarshalJSON(raw, s.ForceSendFields)
}
-// AutoscalersScopedListWarning: Informational warning which replaces
-// the list of autoscalers when the list is empty.
+// AutoscalersScopedListWarning: [Output Only] Informational warning
+// which replaces the list of autoscalers when the list is empty.
type AutoscalersScopedListWarning struct {
// Code: [Output Only] A warning code, if applicable. For example,
// Compute Engine returns NO_RESULTS_ON_PAGE if there are no results in
// the response.
//
// Possible values:
+ // "CLEANUP_FAILED"
// "DEPRECATED_RESOURCE_USED"
// "DISK_SIZE_LARGER_THAN_IMAGE_SIZE"
// "INJECTED_KERNELS_DEPRECATED"
@@ -1185,7 +1204,7 @@ type AutoscalersScopedListWarningData struct {
// being returned. For example, for warnings where there are no results
// in a list request for a particular zone, this key might be scope and
// the key value might be the zone name. Other examples might be a key
- // indicating a deprecated resource, and a suggested replacement, or a
+ // indicating a deprecated resource and a suggested replacement, or a
// warning about invalid network settings (for example, if an instance
// attempts to perform IP forwarding but is not enabled for IP
// forwarding).
@@ -1211,38 +1230,42 @@ func (s *AutoscalersScopedListWarningData) MarshalJSON() ([]byte, error) {
// AutoscalingPolicy: Cloud Autoscaler policy.
type AutoscalingPolicy struct {
- // CoolDownPeriodSec: The number of seconds that the Autoscaler should
- // wait between two succeeding changes to the number of virtual
- // machines. You should define an interval that is at least as long as
- // the initialization time of a virtual machine and the time it may take
- // for replica pool to create the virtual machine. The default is 60
+ // CoolDownPeriodSec: The number of seconds that the autoscaler should
+ // wait before it starts collecting information from a new instance.
+ // This prevents the autoscaler from collecting information when the
+ // instance is initializing, during which the collected usage would not
+ // be reliable. The default time autoscaler waits is 60
// seconds.
+ //
+ // Virtual machine initialization times might vary because of numerous
+ // factors. We recommend that you test how long an instance may take to
+ // initialize. To do this, create an instance and time the startup
+ // process.
CoolDownPeriodSec int64 `json:"coolDownPeriodSec,omitempty"`
- // CpuUtilization: TODO(jbartosik): Add support for scaling based on
- // muliple utilization metrics (take max recommendation). Exactly one
- // utilization policy should be provided. Configuration parameters of
- // CPU based autoscaling policy.
+ // CpuUtilization: Defines the CPU utilization policy that allows the
+ // autoscaler to scale based on the average CPU utilization of a managed
+ // instance group.
CpuUtilization *AutoscalingPolicyCpuUtilization `json:"cpuUtilization,omitempty"`
// CustomMetricUtilizations: Configuration parameters of autoscaling
- // based on custom metric.
+ // based on a custom metric.
CustomMetricUtilizations []*AutoscalingPolicyCustomMetricUtilization `json:"customMetricUtilizations,omitempty"`
// LoadBalancingUtilization: Configuration parameters of autoscaling
// based on load balancer.
LoadBalancingUtilization *AutoscalingPolicyLoadBalancingUtilization `json:"loadBalancingUtilization,omitempty"`
- // MaxNumReplicas: The maximum number of replicas that the Autoscaler
- // can scale up to. This field is required for config to be effective.
- // Maximum number of replicas should be not lower than minimal number of
- // replicas. Absolute limit for this value is defined in Autoscaler
- // backend.
+ // MaxNumReplicas: The maximum number of instances that the autoscaler
+ // can scale up to. This is required when creating or updating an
+ // autoscaler. The maximum number of replicas should not be lower than
+ // minimal number of replicas.
MaxNumReplicas int64 `json:"maxNumReplicas,omitempty"`
- // MinNumReplicas: The minimum number of replicas that the Autoscaler
- // can scale down to. Can't be less than 0. If not provided Autoscaler
- // will choose default value depending on maximal number of replicas.
+ // MinNumReplicas: The minimum number of replicas that the autoscaler
+ // can scale down to. This cannot be less than 0. If not provided,
+ // autoscaler will choose a default value depending on maximum number of
+ // instances allowed.
MinNumReplicas int64 `json:"minNumReplicas,omitempty"`
// ForceSendFields is a list of field names (e.g. "CoolDownPeriodSec")
@@ -1262,10 +1285,19 @@ func (s *AutoscalingPolicy) MarshalJSON() ([]byte, error) {
// AutoscalingPolicyCpuUtilization: CPU utilization policy.
type AutoscalingPolicyCpuUtilization struct {
- // UtilizationTarget: The target utilization that the Autoscaler should
- // maintain. It is represented as a fraction of used cores. For example:
- // 6 cores used in 8-core VM are represented here as 0.75. Must be a
- // float value between (0, 1]. If not defined, the default is 0.8.
+ // UtilizationTarget: The target CPU utilization that the autoscaler
+ // should maintain. Must be a float value in the range (0, 1]. If not
+ // specified, the default is 0.8.
+ //
+ // If the CPU level is below the target utilization, the autoscaler
+ // scales down the number of instances until it reaches the minimum
+ // number of instances you specified or until the average CPU of your
+ // instances reaches the target utilization.
+ //
+ // If the average CPU is above the target utilization, the autoscaler
+ // scales up until it reaches the maximum number of instances you
+ // specified or until the average utilization reaches the target
+ // utilization.
UtilizationTarget float64 `json:"utilizationTarget,omitempty"`
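As a hedged sketch of how these policy fields fit together, the snippet below builds an `Autoscaler` that scales a hypothetical managed instance group on average CPU utilization; the names, zone, and the 0.75 target are illustrative, not taken from this change.

```go
package example

import compute "google.golang.org/api/compute/v1"

// exampleAutoscaler scales a placeholder managed instance group on average
// CPU utilization. Only cpuUtilization is set; customMetricUtilizations and
// loadBalancingUtilization are left nil, so CPU alone drives scaling.
func exampleAutoscaler() *compute.Autoscaler {
	return &compute.Autoscaler{
		Name:   "web-autoscaler",
		Target: "zones/us-central1-f/instanceGroupManagers/web-group", // placeholder
		AutoscalingPolicy: &compute.AutoscalingPolicy{
			MinNumReplicas:    2,
			MaxNumReplicas:    10,
			CoolDownPeriodSec: 90, // give new instances time to initialize
			CpuUtilization: &compute.AutoscalingPolicyCpuUtilization{
				UtilizationTarget: 0.75, // keep average CPU near 75%
			},
		},
	}
}
```

Such a value would typically be submitted with `Autoscalers.Insert(project, zone, ...)` from the same package.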
// ForceSendFields is a list of field names (e.g. "UtilizationTarget")
@@ -1286,20 +1318,34 @@ func (s *AutoscalingPolicyCpuUtilization) MarshalJSON() ([]byte, error) {
// AutoscalingPolicyCustomMetricUtilization: Custom utilization metric
// policy.
type AutoscalingPolicyCustomMetricUtilization struct {
- // Metric: Identifier of the metric. It should be a Cloud Monitoring
- // metric. The metric can not have negative values. The metric should be
- // an utilization metric (increasing number of VMs handling requests x
- // times should reduce average value of the metric roughly x times). For
- // example you could use:
- // compute.googleapis.com/instance/network/received_bytes_count.
+ // Metric: The identifier of the Cloud Monitoring metric. The metric
+ // cannot have negative values and should be a utilization metric, which
+ // means that the number of virtual machines handling requests should
+ // increase or decrease proportionally to the metric. The metric must
+ // also have a label of compute.googleapis.com/resource_id with the
+ // value of the instance's unique ID, although this alone does not
+ // guarantee that the metric is valid.
+ //
+ // For example, the following is a valid
+ // metric:
+ // compute.googleapis.com/instance/network/received_bytes_count
+ //
+ //
+ //
+ // The following is not a valid metric because it does not increase or
+ // decrease based on
+ // usage:
+ // compute.googleapis.com/instance/cpu/reserved_cores
Metric string `json:"metric,omitempty"`
- // UtilizationTarget: Target value of the metric which Autoscaler should
+ // UtilizationTarget: Target value of the metric which autoscaler should
// maintain. Must be a positive value.
UtilizationTarget float64 `json:"utilizationTarget,omitempty"`
- // UtilizationTargetType: Defines type in which utilization_target is
- // expressed.
+ // UtilizationTargetType: Defines how target utilization value is
+ // expressed for a Cloud Monitoring metric. Either GAUGE,
+ // DELTA_PER_SECOND, or DELTA_PER_MINUTE. If not specified, the default
+ // is GAUGE.
//
// Possible values:
// "DELTA_PER_MINUTE"
@@ -1322,16 +1368,13 @@ func (s *AutoscalingPolicyCustomMetricUtilization) MarshalJSON() ([]byte, error)
return gensupport.MarshalJSON(raw, s.ForceSendFields)
}
-// AutoscalingPolicyLoadBalancingUtilization: Load balancing utilization
-// policy.
+// AutoscalingPolicyLoadBalancingUtilization: Configuration parameters
+// of autoscaling based on load balancing.
type AutoscalingPolicyLoadBalancingUtilization struct {
// UtilizationTarget: Fraction of backend capacity utilization (set in
- // HTTP load balancing configuration) that Autoscaler should maintain.
- // Must be a positive float value. If not defined, the default is 0.8.
- // For example if your maxRatePerInstance capacity (in HTTP Load
- // Balancing configuration) is set at 10 and you would like to keep
- // number of instances such that each instance receives 7 QPS on
- // average, set this to 0.7.
+ // HTTP(S) load balancing configuration) that autoscaler should
+ // maintain. Must be a positive float value. If not defined, the default
+ // is 0.8.
UtilizationTarget float64 `json:"utilizationTarget,omitempty"`
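For the custom-metric and load-balancing variants described here, a comparable sketch follows; the metric name, targets, and replica bounds are illustrative only.

```go
package example

import compute "google.golang.org/api/compute/v1"

// mixedPolicy combines a per-instance Cloud Monitoring metric target with a
// target fraction of HTTP(S) load balancing serving capacity.
var mixedPolicy = &compute.AutoscalingPolicy{
	MinNumReplicas: 2,
	MaxNumReplicas: 20,
	CustomMetricUtilizations: []*compute.AutoscalingPolicyCustomMetricUtilization{
		{
			Metric:                "compute.googleapis.com/instance/network/received_bytes_count",
			UtilizationTarget:     200000, // illustrative bytes/sec per instance
			UtilizationTargetType: "DELTA_PER_SECOND",
		},
	},
	LoadBalancingUtilization: &compute.AutoscalingPolicyLoadBalancingUtilization{
		UtilizationTarget: 0.7, // keep backends near 70% of configured capacity
	},
}
```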
// ForceSendFields is a list of field names (e.g. "UtilizationTarget")
@@ -1351,8 +1394,9 @@ func (s *AutoscalingPolicyLoadBalancingUtilization) MarshalJSON() ([]byte, error
// Backend: Message containing information of one individual backend.
type Backend struct {
- // BalancingMode: Specifies the balancing mode for this backend. The
- // default is UTILIZATION but available values are UTILIZATION and RATE.
+ // BalancingMode: Specifies the balancing mode for this backend. For
+ // global HTTP(S) load balancing, the default is UTILIZATION. Valid
+ // values are UTILIZATION and RATE.
//
// Possible values:
// "RATE"
@@ -1383,12 +1427,13 @@ type Backend struct {
Group string `json:"group,omitempty"`
// MaxRate: The max requests per second (RPS) of the group. Can be used
- // with either balancing mode, but required if RATE mode. For RATE mode,
- // either maxRate or maxRatePerInstance must be set.
+ // with either RATE or UTILIZATION balancing modes, but required if RATE
+ // mode. For RATE mode, either maxRate or maxRatePerInstance must be
+ // set.
MaxRate int64 `json:"maxRate,omitempty"`
// MaxRatePerInstance: The max requests per second (RPS) that a single
- // backed instance can handle. This is used to calculate the capacity of
+ // backend instance can handle. This is used to calculate the capacity of
// the group. Can be used in either balancing mode. For RATE mode,
// either maxRate or maxRatePerInstance must be set.
MaxRatePerInstance float64 `json:"maxRatePerInstance,omitempty"`
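A short sketch contrasting the two balancing modes on a backend service; the instance group URLs and limits below are placeholders.

```go
package example

import compute "google.golang.org/api/compute/v1"

// Two backends for one backend service: the first is balanced by backend
// utilization, the second is capped by request rate per instance (RATE mode
// requires either maxRate or maxRatePerInstance).
var backends = []*compute.Backend{
	{
		Group:          "zones/us-central1-f/instanceGroups/web-a", // placeholder
		BalancingMode:  "UTILIZATION",
		MaxUtilization: 0.8,
	},
	{
		Group:              "zones/us-central1-b/instanceGroups/web-b", // placeholder
		BalancingMode:      "RATE",
		MaxRatePerInstance: 100,
	},
}
```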
@@ -1461,9 +1506,14 @@ type BackendService struct {
Port int64 `json:"port,omitempty"`
// PortName: Name of backend port. The same name should appear in the
- // resource views referenced by this service. Required.
+ // instance groups referenced by this service. Required.
PortName string `json:"portName,omitempty"`
+ // Protocol: The protocol this BackendService uses to communicate with
+ // backends.
+ //
+ // Possible values are HTTP, HTTPS, HTTP2, TCP and SSL.
+ //
// Possible values:
// "HTTP"
// "HTTPS"
@@ -1473,8 +1523,7 @@ type BackendService struct {
SelfLink string `json:"selfLink,omitempty"`
// TimeoutSec: How many seconds to wait for the backend before
- // considering it a failed request. Default is 30 seconds. Valid range
- // is [1, 86400].
+ // considering it a failed request. Default is 30 seconds.
TimeoutSec int64 `json:"timeoutSec,omitempty"`
// ServerResponse contains the HTTP response code and headers from the
@@ -1638,10 +1687,10 @@ type Disk struct {
// text format.
LastDetachTimestamp string `json:"lastDetachTimestamp,omitempty"`
- // Licenses: Any applicable publicly visible licenses.
+ // Licenses: [Output Only] Any applicable publicly visible licenses.
Licenses []string `json:"licenses,omitempty"`
- // Name: Name of the resource; provided by the client when the resource
+ // Name: Name of the resource. Provided by the client when the resource
// is created. The name must be 1-63 characters long, and comply with
// RFC1035. Specifically, the name must be 1-63 characters long and
// match the regular expression [a-z]([-a-z0-9]*[a-z0-9])? which means
@@ -1686,14 +1735,27 @@ type Disk struct {
//
// where vYYYYMMDD is the image version. The fully-qualified URL will
// also work in both cases.
+ //
+ // You can also specify the latest image for a private image family by
+ // replacing the image name suffix with family/family-name. For
+ // example:
+ //
+ // global/images/family/my-private-family
+ //
+ // Or you can specify an image family from a publicly-available project.
+ // For example, to use the latest Debian 7 from the debian-cloud
+ // project, make sure to include the project in the
+ // URL:
+ //
+ // projects/debian-cloud/global/images/family/debian-7
SourceImage string `json:"sourceImage,omitempty"`
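A hedged sketch of creating a disk from the latest image in a public image family, as described above; the project, zone, size, and disk name are placeholders.

```go
package example

import compute "google.golang.org/api/compute/v1"

// insertDisk creates a 50 GB disk from the newest image in the debian-cloud
// project's debian-7 image family, using the family URL form shown above.
func insertDisk(svc *compute.Service) error {
	disk := &compute.Disk{
		Name:        "example-disk",
		SizeGb:      50,
		SourceImage: "projects/debian-cloud/global/images/family/debian-7",
	}
	_, err := svc.Disks.Insert("my-project", "us-central1-f", disk).Do()
	return err
}
```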
- // SourceImageId: The ID value of the image used to create this disk.
- // This value identifies the exact image that was used to create this
- // persistent disk. For example, if you created the persistent disk from
- // an image that was later deleted and recreated under the same name,
- // the source image ID would identify the exact version of the image
- // that was used.
+ // SourceImageId: [Output Only] The ID value of the image used to create
+ // this disk. This value identifies the exact image that was used to
+ // create this persistent disk. For example, if you created the
+ // persistent disk from an image that was later deleted and recreated
+ // under the same name, the source image ID would identify the exact
+ // version of the image that was used.
SourceImageId string `json:"sourceImageId,omitempty"`
// SourceSnapshot: The source snapshot used to create this disk. You can
@@ -1724,11 +1786,11 @@ type Disk struct {
Status string `json:"status,omitempty"`
// Type: URL of the disk type resource describing which disk type to use
- // to create the disk; provided by the client when the disk is created.
+ // to create the disk. Provide this when creating the disk.
Type string `json:"type,omitempty"`
- // Users: Links to the users of the disk (attached instances) in form:
- // project/zones/zone/instances/instance
+ // Users: [Output Only] Links to the users of the disk (attached
+ // instances) in form: project/zones/zone/instances/instance
Users []string `json:"users,omitempty"`
// Zone: [Output Only] URL of the zone where the disk resides.
@@ -1839,7 +1901,7 @@ func (s *DiskList) MarshalJSON() ([]byte, error) {
}
type DiskMoveRequest struct {
- // DestinationZone: The URL of the destination zone to move the disk to.
+ // DestinationZone: The URL of the destination zone to move the disk.
// This can be a full or partial URL. For example, the following are all
// valid URLs to a zone:
// - https://www.googleapis.com/compute/v1/projects/project/zones/zone
@@ -1872,7 +1934,7 @@ func (s *DiskMoveRequest) MarshalJSON() ([]byte, error) {
return gensupport.MarshalJSON(raw, s.ForceSendFields)
}
-// DiskType: A disk type resource.
+// DiskType: A DiskType resource.
type DiskType struct {
// CreationTimestamp: [Output Only] Creation timestamp in RFC3339 text
// format.
@@ -1971,7 +2033,7 @@ func (s *DiskTypeAggregatedList) MarshalJSON() ([]byte, error) {
return gensupport.MarshalJSON(raw, s.ForceSendFields)
}
-// DiskTypeList: Contains a list of disk type resources.
+// DiskTypeList: Contains a list of disk types.
type DiskTypeList struct {
// Id: [Output Only] The unique identifier for the resource. This
// identifier is defined by the server.
@@ -2045,6 +2107,7 @@ type DiskTypesScopedListWarning struct {
// the response.
//
// Possible values:
+ // "CLEANUP_FAILED"
// "DEPRECATED_RESOURCE_USED"
// "DISK_SIZE_LARGER_THAN_IMAGE_SIZE"
// "INJECTED_KERNELS_DEPRECATED"
@@ -2090,7 +2153,7 @@ type DiskTypesScopedListWarningData struct {
// being returned. For example, for warnings where there are no results
// in a list request for a particular zone, this key might be scope and
// the key value might be the zone name. Other examples might be a key
- // indicating a deprecated resource, and a suggested replacement, or a
+ // indicating a deprecated resource and a suggested replacement, or a
// warning about invalid network settings (for example, if an instance
// attempts to perform IP forwarding but is not enabled for IP
// forwarding).
@@ -2145,6 +2208,7 @@ type DisksScopedListWarning struct {
// the response.
//
// Possible values:
+ // "CLEANUP_FAILED"
// "DEPRECATED_RESOURCE_USED"
// "DISK_SIZE_LARGER_THAN_IMAGE_SIZE"
// "INJECTED_KERNELS_DEPRECATED"
@@ -2190,7 +2254,7 @@ type DisksScopedListWarningData struct {
// being returned. For example, for warnings where there are no results
// in a list request for a particular zone, this key might be scope and
// the key value might be the zone name. Other examples might be a key
- // indicating a deprecated resource, and a suggested replacement, or a
+ // indicating a deprecated resource and a suggested replacement, or a
// warning about invalid network settings (for example, if an instance
// attempts to perform IP forwarding but is not enabled for IP
// forwarding).
@@ -2214,7 +2278,7 @@ func (s *DisksScopedListWarningData) MarshalJSON() ([]byte, error) {
return gensupport.MarshalJSON(raw, s.ForceSendFields)
}
-// Firewall: A Firewall resource.
+// Firewall: Represents a Firewall resource.
type Firewall struct {
// Allowed: The list of rules specified by this firewall. Each rule
// specifies a protocol and port-range tuple that describes a permitted
@@ -2308,9 +2372,9 @@ func (s *Firewall) MarshalJSON() ([]byte, error) {
type FirewallAllowed struct {
// IPProtocol: The IP protocol that is allowed for this rule. The
- // protocol type is required when creating a firewall. This value can
- // either be one of the following well known protocol strings (tcp, udp,
- // icmp, esp, ah, sctp), or the IP protocol number.
+ // protocol type is required when creating a firewall rule. This value
+ // can either be one of the following well known protocol strings (tcp,
+ // udp, icmp, esp, ah, sctp), or the IP protocol number.
IPProtocol string `json:"IPProtocol,omitempty"`
// Ports: An optional list of ports which are allowed. This field is
@@ -2336,7 +2400,7 @@ func (s *FirewallAllowed) MarshalJSON() ([]byte, error) {
return gensupport.MarshalJSON(raw, s.ForceSendFields)
}
-// FirewallList: Contains a list of Firewall resources.
+// FirewallList: Contains a list of firewalls.
type FirewallList struct {
// Id: [Output Only] The unique identifier for the resource. This
// identifier is defined by the server.
@@ -2477,7 +2541,8 @@ type ForwardingRuleAggregatedList struct {
// Items: A map of scoped forwarding rule lists.
Items map[string]ForwardingRulesScopedList `json:"items,omitempty"`
- // Kind: Type of resource.
+ // Kind: [Output Only] Type of resource. Always
+ // compute#forwardingRuleAggregatedList for lists of forwarding rules.
Kind string `json:"kind,omitempty"`
// NextPageToken: [Output Only] This token allows you to get the next
@@ -2583,6 +2648,7 @@ type ForwardingRulesScopedListWarning struct {
// the response.
//
// Possible values:
+ // "CLEANUP_FAILED"
// "DEPRECATED_RESOURCE_USED"
// "DISK_SIZE_LARGER_THAN_IMAGE_SIZE"
// "INJECTED_KERNELS_DEPRECATED"
@@ -2628,7 +2694,7 @@ type ForwardingRulesScopedListWarningData struct {
// being returned. For example, for warnings where there are no results
// in a list request for a particular zone, this key might be scope and
// the key value might be the zone name. Other examples might be a key
- // indicating a deprecated resource, and a suggested replacement, or a
+ // indicating a deprecated resource and a suggested replacement, or a
// warning about invalid network settings (for example, if an instance
// attempts to perform IP forwarding but is not enabled for IP
// forwarding).
@@ -2652,6 +2718,12 @@ func (s *ForwardingRulesScopedListWarningData) MarshalJSON() ([]byte, error) {
return gensupport.MarshalJSON(raw, s.ForceSendFields)
}
+// HealthCheckReference: A full or valid partial URL to a health check.
+// For example, the following are valid URLs:
+// -
+// https://www.googleapis.com/compute/beta/projects/project-id/global/httpHealthChecks/health-check
+// - projects/project-id/global/httpHealthChecks/health-check
+// - global/httpHealthChecks/health-check
type HealthCheckReference struct {
HealthCheck string `json:"healthCheck,omitempty"`
@@ -2763,7 +2835,8 @@ type HttpHealthCheck struct {
// identifier is defined by the server.
Id uint64 `json:"id,omitempty,string"`
- // Kind: Type of the resource.
+ // Kind: [Output Only] Type of the resource. Always
+ // compute#httpHealthCheck for HTTP health checks.
Kind string `json:"kind,omitempty"`
// Name: Name of the resource. Provided by the client when the resource
@@ -2780,7 +2853,7 @@ type HttpHealthCheck struct {
Port int64 `json:"port,omitempty"`
// RequestPath: The request path of the HTTP health check request. The
- // default value is "/".
+ // default value is /.
RequestPath string `json:"requestPath,omitempty"`
// SelfLink: [Output Only] Server-defined URL for the resource.
@@ -3114,7 +3187,7 @@ func (s *ImageRawDisk) MarshalJSON() ([]byte, error) {
return gensupport.MarshalJSON(raw, s.ForceSendFields)
}
-// ImageList: Contains a list of Image resources.
+// ImageList: Contains a list of images.
type ImageList struct {
// Id: [Output Only] The unique identifier for the resource. This
// identifier is defined by the server.
@@ -3188,10 +3261,10 @@ type Instance struct {
Kind string `json:"kind,omitempty"`
// MachineType: Full or partial URL of the machine type resource to use
- // for this instance, in the format: zones/zone/machineTypes/
- // machine-type. This is provided by the client when the instance is
- // created. For example, the following is a valid partial url to a
- // predefined machine
+ // for this instance, in the format:
+ // zones/zone/machineTypes/machine-type. This is provided by the client
+ // when the instance is created. For example, the following is a valid
+ // partial url to a predefined machine
// type:
//
// zones/us-central1-f/machineTypes/n1-standard-1
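A sketch of an instance definition using the partial machine type URL form shown above; all names are placeholders and only the minimum fields are set.

```go
package example

import compute "google.golang.org/api/compute/v1"

// minimalInstance uses partial URLs for the machine type, boot image, and
// network, leaving the API to resolve them against the request's project
// and zone.
func minimalInstance() *compute.Instance {
	return &compute.Instance{
		Name:        "example-instance",
		MachineType: "zones/us-central1-f/machineTypes/n1-standard-1",
		Disks: []*compute.AttachedDisk{{
			Boot:       true,
			AutoDelete: true,
			InitializeParams: &compute.AttachedDiskInitializeParams{
				SourceImage: "projects/debian-cloud/global/images/family/debian-7",
			},
		}},
		NetworkInterfaces: []*compute.NetworkInterface{{
			Network: "global/networks/default",
		}},
	}
}
```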
@@ -3262,7 +3335,7 @@ type Instance struct {
// of the status.
StatusMessage string `json:"statusMessage,omitempty"`
- // Tags: A list of tags to appy to this instance. Tags are used to
+ // Tags: A list of tags to apply to this instance. Tags are used to
// identify valid sources or targets for network firewalls and are
// specified by the client during instance creation. The tags can be
// later modified by the setTags method. Each tag within the list must
@@ -3371,8 +3444,8 @@ type InstanceGroup struct {
// Named ports apply to all instances in this instance group.
NamedPorts []*NamedPort `json:"namedPorts,omitempty"`
- // Network: [Output Only] The URL of the network to which all instances
- // in the instance group belong.
+ // Network: The URL of the network to which all instances in the
+ // instance group belong.
Network string `json:"network,omitempty"`
// SelfLink: [Output Only] The URL for this instance group. The server
@@ -3383,8 +3456,8 @@ type InstanceGroup struct {
// group.
Size int64 `json:"size,omitempty"`
- // Subnetwork: [Output Only] The URL of the subnetwork to which all
- // instances in the instance group belong.
+ // Subnetwork: The URL of the subnetwork to which all instances in the
+ // instance group belong.
Subnetwork string `json:"subnetwork,omitempty"`
// Zone: [Output Only] The URL of the zone where the instance group is
@@ -3498,9 +3571,6 @@ func (s *InstanceGroupList) MarshalJSON() ([]byte, error) {
return gensupport.MarshalJSON(raw, s.ForceSendFields)
}
-// InstanceGroupManager: InstanceGroupManagers
-//
-// Next available tag: 20
type InstanceGroupManager struct {
// BaseInstanceName: The base instance name to use for instances in this
// group. The value must be 1-58 characters long. Instances are named by
@@ -3849,6 +3919,7 @@ type InstanceGroupManagersScopedListWarning struct {
// the response.
//
// Possible values:
+ // "CLEANUP_FAILED"
// "DEPRECATED_RESOURCE_USED"
// "DISK_SIZE_LARGER_THAN_IMAGE_SIZE"
// "INJECTED_KERNELS_DEPRECATED"
@@ -3894,7 +3965,7 @@ type InstanceGroupManagersScopedListWarningData struct {
// being returned. For example, for warnings where there are no results
// in a list request for a particular zone, this key might be scope and
// the key value might be the zone name. Other examples might be a key
- // indicating a deprecated resource, and a suggested replacement, or a
+ // indicating a deprecated resource and a suggested replacement, or a
// warning about invalid network settings (for example, if an instance
// attempts to perform IP forwarding but is not enabled for IP
// forwarding).
@@ -4111,6 +4182,7 @@ type InstanceGroupsScopedListWarning struct {
// the response.
//
// Possible values:
+ // "CLEANUP_FAILED"
// "DEPRECATED_RESOURCE_USED"
// "DISK_SIZE_LARGER_THAN_IMAGE_SIZE"
// "INJECTED_KERNELS_DEPRECATED"
@@ -4156,7 +4228,7 @@ type InstanceGroupsScopedListWarningData struct {
// being returned. For example, for warnings where there are no results
// in a list request for a particular zone, this key might be scope and
// the key value might be the zone name. Other examples might be a key
- // indicating a deprecated resource, and a suggested replacement, or a
+ // indicating a deprecated resource and a suggested replacement, or a
// warning about invalid network settings (for example, if an instance
// attempts to perform IP forwarding but is not enabled for IP
// forwarding).
@@ -4207,13 +4279,13 @@ func (s *InstanceGroupsSetNamedPortsRequest) MarshalJSON() ([]byte, error) {
return gensupport.MarshalJSON(raw, s.ForceSendFields)
}
-// InstanceList: Contains a list of instance resources.
+// InstanceList: Contains a list of instances.
type InstanceList struct {
// Id: [Output Only] The unique identifier for the resource. This
// identifier is defined by the server.
Id string `json:"id,omitempty"`
- // Items: [Output Only] A list of Instance resources.
+ // Items: [Output Only] A list of instances.
Items []*Instance `json:"items,omitempty"`
// Kind: [Output Only] Type of resource. Always compute#instanceList for
@@ -4251,9 +4323,9 @@ func (s *InstanceList) MarshalJSON() ([]byte, error) {
}
type InstanceMoveRequest struct {
- // DestinationZone: The URL of the destination zone to move the instance
- // to. This can be a full or partial URL. For example, the following are
- // all valid URLs to a zone:
+ // DestinationZone: The URL of the destination zone to move the
+ // instance. This can be a full or partial URL. For example, the
+ // following are all valid URLs to a zone:
// - https://www.googleapis.com/compute/v1/projects/project/zones/zone
//
// - projects/project/zones/zone
@@ -4530,6 +4602,7 @@ type InstancesScopedListWarning struct {
// the response.
//
// Possible values:
+ // "CLEANUP_FAILED"
// "DEPRECATED_RESOURCE_USED"
// "DISK_SIZE_LARGER_THAN_IMAGE_SIZE"
// "INJECTED_KERNELS_DEPRECATED"
@@ -4575,7 +4648,7 @@ type InstancesScopedListWarningData struct {
// being returned. For example, for warnings where there are no results
// in a list request for a particular zone, this key might be scope and
// the key value might be the zone name. Other examples might be a key
- // indicating a deprecated resource, and a suggested replacement, or a
+ // indicating a deprecated resource and a suggested replacement, or a
// warning about invalid network settings (for example, if an instance
// attempts to perform IP forwarding but is not enabled for IP
// forwarding).
@@ -4794,7 +4867,7 @@ func (s *MachineTypeAggregatedList) MarshalJSON() ([]byte, error) {
return gensupport.MarshalJSON(raw, s.ForceSendFields)
}
-// MachineTypeList: Contains a list of Machine Type resources.
+// MachineTypeList: Contains a list of machine types.
type MachineTypeList struct {
// Id: [Output Only] The unique identifier for the resource. This
// identifier is defined by the server.
@@ -4869,6 +4942,7 @@ type MachineTypesScopedListWarning struct {
// the response.
//
// Possible values:
+ // "CLEANUP_FAILED"
// "DEPRECATED_RESOURCE_USED"
// "DISK_SIZE_LARGER_THAN_IMAGE_SIZE"
// "INJECTED_KERNELS_DEPRECATED"
@@ -4914,7 +4988,7 @@ type MachineTypesScopedListWarningData struct {
// being returned. For example, for warnings where there are no results
// in a list request for a particular zone, this key might be scope and
// the key value might be the zone name. Other examples might be a key
- // indicating a deprecated resource, and a suggested replacement, or a
+ // indicating a deprecated resource and a suggested replacement, or a
// warning about invalid network settings (for example, if an instance
// attempts to perform IP forwarding but is not enabled for IP
// forwarding).
@@ -5061,7 +5135,7 @@ type ManagedInstanceLastAttemptErrorsErrors struct {
// Code: [Output Only] The error type identifier for this error.
Code string `json:"code,omitempty"`
- // Location: [Output Only] Indicates the field in the request which
+ // Location: [Output Only] Indicates the field in the request that
// caused the error. This property is optional.
Location string `json:"location,omitempty"`
@@ -5169,7 +5243,8 @@ func (s *NamedPort) MarshalJSON() ([]byte, error) {
return gensupport.MarshalJSON(raw, s.ForceSendFields)
}
-// Network: A network resource.
+// Network: Represents a Network resource. Read Networks and Firewalls
+// for more information.
type Network struct {
// IPv4Range: The range of internal addresses that are legal on this
// network. This range is a CIDR specification, for example:
@@ -5300,7 +5375,7 @@ func (s *NetworkInterface) MarshalJSON() ([]byte, error) {
return gensupport.MarshalJSON(raw, s.ForceSendFields)
}
-// NetworkList: Contains a list of Network resources.
+// NetworkList: Contains a list of networks.
type NetworkList struct {
// Id: [Output Only] The unique identifier for the resource. This
// identifier is defined by the server.
@@ -5346,8 +5421,7 @@ func (s *NetworkList) MarshalJSON() ([]byte, error) {
// Operation: An Operation resource, used to manage asynchronous API
// requests.
type Operation struct {
- // ClientOperationId: [Output Only] A unique client ID generated by the
- // server.
+ // ClientOperationId: [Output Only] Reserved for future use.
ClientOperationId string `json:"clientOperationId,omitempty"`
// CreationTimestamp: [Output Only] Creation timestamp in RFC3339 text
@@ -5384,14 +5458,14 @@ type Operation struct {
InsertTime string `json:"insertTime,omitempty"`
// Kind: [Output Only] Type of the resource. Always compute#operation
- // for Operation resources.
+ // for operation resources.
Kind string `json:"kind,omitempty"`
// Name: [Output Only] Name of the resource.
Name string `json:"name,omitempty"`
- // OperationType: [Output Only] The type of operation, which can be
- // insert, update, or delete.
+ // OperationType: [Output Only] The type of operation, such as insert,
+ // update, or delete, and so on.
OperationType string `json:"operationType,omitempty"`
// Progress: [Output Only] An optional progress indicator that ranges
@@ -5401,8 +5475,8 @@ type Operation struct {
// increase as the operation progresses.
Progress int64 `json:"progress,omitempty"`
- // Region: [Output Only] URL of the region where the operation resides.
- // Only available when performing regional operations.
+ // Region: [Output Only] The URL of the region where the operation
+ // resides. Only available when performing regional operations.
Region string `json:"region,omitempty"`
// SelfLink: [Output Only] Server-defined URL for the resource.
@@ -5430,7 +5504,7 @@ type Operation struct {
TargetId uint64 `json:"targetId,omitempty,string"`
// TargetLink: [Output Only] The URL of the resource that the operation
- // is modifying.
+ // modifies.
TargetLink string `json:"targetLink,omitempty"`
// User: [Output Only] User who requested the operation, for example:
@@ -5441,8 +5515,8 @@ type Operation struct {
// processing of the operation, this field will be populated.
Warnings []*OperationWarnings `json:"warnings,omitempty"`
- // Zone: [Output Only] URL of the zone where the operation resides. Only
- // available when performing per-zone operations.
+ // Zone: [Output Only] The URL of the zone where the operation resides.
+ // Only available when performing per-zone operations.
Zone string `json:"zone,omitempty"`
// ServerResponse contains the HTTP response code and headers from the
@@ -5490,7 +5564,7 @@ type OperationErrorErrors struct {
// Code: [Output Only] The error type identifier for this error.
Code string `json:"code,omitempty"`
- // Location: [Output Only] Indicates the field in the request which
+ // Location: [Output Only] Indicates the field in the request that
// caused the error. This property is optional.
Location string `json:"location,omitempty"`
@@ -5518,6 +5592,7 @@ type OperationWarnings struct {
// the response.
//
// Possible values:
+ // "CLEANUP_FAILED"
// "DEPRECATED_RESOURCE_USED"
// "DISK_SIZE_LARGER_THAN_IMAGE_SIZE"
// "INJECTED_KERNELS_DEPRECATED"
@@ -5563,7 +5638,7 @@ type OperationWarningsData struct {
// being returned. For example, for warnings where there are no results
// in a list request for a particular zone, this key might be scope and
// the key value might be the zone name. Other examples might be a key
- // indicating a deprecated resource, and a suggested replacement, or a
+ // indicating a deprecated resource and a suggested replacement, or a
// warning about invalid network settings (for example, if an instance
// attempts to perform IP forwarding but is not enabled for IP
// forwarding).
@@ -5635,7 +5710,7 @@ type OperationList struct {
// identifier is defined by the server.
Id string `json:"id,omitempty"`
- // Items: [Output Only] The Operation resources.
+ // Items: [Output Only] A list of Operation resources.
Items []*Operation `json:"items,omitempty"`
// Kind: [Output Only] Type of resource. Always compute#operations for
@@ -5703,6 +5778,7 @@ type OperationsScopedListWarning struct {
// the response.
//
// Possible values:
+ // "CLEANUP_FAILED"
// "DEPRECATED_RESOURCE_USED"
// "DISK_SIZE_LARGER_THAN_IMAGE_SIZE"
// "INJECTED_KERNELS_DEPRECATED"
@@ -5748,7 +5824,7 @@ type OperationsScopedListWarningData struct {
// being returned. For example, for warnings where there are no results
// in a list request for a particular zone, this key might be scope and
// the key value might be the zone name. Other examples might be a key
- // indicating a deprecated resource, and a suggested replacement, or a
+ // indicating a deprecated resource and a suggested replacement, or a
// warning about invalid network settings (for example, if an instance
// attempts to perform IP forwarding but is not enabled for IP
// forwarding).
@@ -5841,8 +5917,8 @@ func (s *PathRule) MarshalJSON() ([]byte, error) {
}
// Project: A Project resource. Projects can only be created in the
-// Google Developers Console. Unless marked otherwise, values can only
-// be modified in the console.
+// Google Cloud Platform Console. Unless marked otherwise, values can
+// only be modified in the console.
type Project struct {
// CommonInstanceMetadata: Metadata key/value pairs available to all
// instances contained in this project. See Custom metadata for more
@@ -6081,21 +6157,24 @@ func (s *ResourceGroupReference) MarshalJSON() ([]byte, error) {
return gensupport.MarshalJSON(raw, s.ForceSendFields)
}
-// Route: The route resource. A Route is a rule that specifies how
-// certain packets should be handled by the virtual network. Routes are
-// associated with instances by tags and the set of Routes for a
-// particular instance is called its routing table. For each packet
-// leaving a instance, the system searches that instance's routing table
-// for a single best matching Route. Routes match packets by destination
-// IP address, preferring smaller or more specific ranges over larger
-// ones. If there is a tie, the system selects the Route with the
-// smallest priority value. If there is still a tie, it uses the layer
-// three and four packet headers to select just one of the remaining
-// matching Routes. The packet is then forwarded as specified by the
-// nextHop field of the winning Route -- either to another instance
-// destination, a instance gateway or a Google Compute Engien-operated
-// gateway. Packets that do not match any Route in the sending
-// instance's routing table are dropped.
+// Route: Represents a Route resource. A route specifies how certain
+// packets should be handled by the network. Routes are associated with
+// instances by tags and the set of routes for a particular instance is
+// called its routing table.
+//
+// For each packet leaving an instance, the system searches that
+// instance's routing table for a single best matching route. Routes
+// match packets by destination IP address, preferring smaller or more
+// specific ranges over larger ones. If there is a tie, the system
+// selects the route with the smallest priority value. If there is still
+// a tie, it uses the layer three and four packet headers to select just
+// one of the remaining matching routes. The packet is then forwarded as
+// specified by the nextHop field of the winning route - either to
+// another instance destination, an instance gateway or a Google Compute
+// Engine-operated gateway.
+//
+// Packets that do not match any route in the sending instance's routing
+// table are dropped.
type Route struct {
// CreationTimestamp: [Output Only] Creation timestamp in RFC3339 text
// format.
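As a sketch of the behaviour described above, a route that directs egress traffic from instances tagged `no-ip` to the default internet gateway; project and network names are placeholders.

```go
package example

import compute "google.golang.org/api/compute/v1"

// natRoute matches all destinations (0.0.0.0/0) for instances tagged "no-ip"
// and forwards them to the project's default internet gateway. Lower priority
// values win when destination ranges tie.
var natRoute = &compute.Route{
	Name:           "default-egress",
	Network:        "global/networks/default",
	DestRange:      "0.0.0.0/0",
	Priority:       800,
	Tags:           []string{"no-ip"},
	NextHopGateway: "projects/my-project/global/gateways/default-internet-gateway",
}
```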
@@ -6117,7 +6196,7 @@ type Route struct {
// Route resources.
Kind string `json:"kind,omitempty"`
- // Name: Name of the resource; provided by the client when the resource
+ // Name: Name of the resource. Provided by the client when the resource
// is created. The name must be 1-63 characters long, and comply with
// RFC1035. Specifically, the name must be 1-63 characters long and
// match the regular expression [a-z]([-a-z0-9]*[a-z0-9])? which means
@@ -6198,6 +6277,7 @@ type RouteWarnings struct {
// the response.
//
// Possible values:
+ // "CLEANUP_FAILED"
// "DEPRECATED_RESOURCE_USED"
// "DISK_SIZE_LARGER_THAN_IMAGE_SIZE"
// "INJECTED_KERNELS_DEPRECATED"
@@ -6243,7 +6323,7 @@ type RouteWarningsData struct {
// being returned. For example, for warnings where there are no results
// in a list request for a particular zone, this key might be scope and
// the key value might be the zone name. Other examples might be a key
- // indicating a deprecated resource, and a suggested replacement, or a
+ // indicating a deprecated resource and a suggested replacement, or a
// warning about invalid network settings (for example, if an instance
// attempts to perform IP forwarding but is not enabled for IP
// forwarding).
@@ -6267,7 +6347,7 @@ func (s *RouteWarningsData) MarshalJSON() ([]byte, error) {
return gensupport.MarshalJSON(raw, s.ForceSendFields)
}
-// RouteList: Contains a list of route resources.
+// RouteList: Contains a list of Route resources.
type RouteList struct {
// Id: [Output Only] Unique identifier for the resource. Defined by the
// server.
@@ -6321,7 +6401,8 @@ type Scheduling struct {
// OnHostMaintenance: Defines the maintenance behavior for this
// instance. For standard instances, the default behavior is MIGRATE.
// For preemptible instances, the default and only possible behavior is
- // TERMINATE. For more information, see Setting maintenance behavior.
+ // TERMINATE. For more information, see Setting Instance Scheduling
+ // Options.
//
// Possible values:
// "MIGRATE"
@@ -6422,7 +6503,9 @@ type Snapshot struct {
// Snapshot resources.
Kind string `json:"kind,omitempty"`
- // Licenses: Public visible licenses.
+ // Licenses: [Output Only] A list of public visible licenses that apply
+ // to this snapshot. This can be because the original image had licenses
+ // attached (such as a Windows image).
Licenses []string `json:"licenses,omitempty"`
// Name: Name of the resource; provided by the client when the resource
@@ -6447,7 +6530,8 @@ type Snapshot struct {
// disk name.
SourceDiskId string `json:"sourceDiskId,omitempty"`
- // Status: [Output Only] The status of the snapshot.
+ // Status: [Output Only] The status of the snapshot. This can be
+ // CREATING, DELETING, FAILED, READY, or UPLOADING.
//
// Possible values:
// "CREATING"
@@ -6464,7 +6548,9 @@ type Snapshot struct {
// StorageBytesStatus: [Output Only] An indicator whether storageBytes
// is in a stable state or it is being adjusted as a result of shared
- // storage reallocation.
+ // storage reallocation. This status can either be UPDATING, meaning the
+ // size of the snapshot is being updated, or UP_TO_DATE, meaning the
+ // size of the snapshot is up-to-date.
//
// Possible values:
// "UPDATING"
@@ -6817,6 +6903,7 @@ type SubnetworksScopedListWarning struct {
// the response.
//
// Possible values:
+ // "CLEANUP_FAILED"
// "DEPRECATED_RESOURCE_USED"
// "DISK_SIZE_LARGER_THAN_IMAGE_SIZE"
// "INJECTED_KERNELS_DEPRECATED"
@@ -6862,7 +6949,7 @@ type SubnetworksScopedListWarningData struct {
// being returned. For example, for warnings where there are no results
// in a list request for a particular zone, this key might be scope and
// the key value might be the zone name. Other examples might be a key
- // indicating a deprecated resource, and a suggested replacement, or a
+ // indicating a deprecated resource and a suggested replacement, or a
// warning about invalid network settings (for example, if an instance
// attempts to perform IP forwarding but is not enabled for IP
// forwarding).
@@ -6936,7 +7023,7 @@ type TargetHttpProxy struct {
// for target HTTP proxies.
Kind string `json:"kind,omitempty"`
- // Name: Name of the resource; provided by the client when the resource
+ // Name: Name of the resource. Provided by the client when the resource
// is created. The name must be 1-63 characters long, and comply with
// RFC1035. Specifically, the name must be 1-63 characters long and
// match the regular expression [a-z]([-a-z0-9]*[a-z0-9])? which means
@@ -6981,7 +7068,7 @@ type TargetHttpProxyList struct {
Items []*TargetHttpProxy `json:"items,omitempty"`
// Kind: Type of resource. Always compute#targetHttpProxyList for lists
- // of Target HTTP proxies.
+ // of target HTTP proxies.
Kind string `json:"kind,omitempty"`
// NextPageToken: [Output Only] This token allows you to get the next
@@ -7015,9 +7102,9 @@ func (s *TargetHttpProxyList) MarshalJSON() ([]byte, error) {
}
type TargetHttpsProxiesSetSslCertificatesRequest struct {
- // SslCertificates: New set of URLs to SslCertificate resources to
- // associate with this TargetHttpProxy. Currently exactly one ssl
- // certificate must be specified.
+ // SslCertificates: New set of SslCertificate resources to associate
+ // with this TargetHttpsProxy resource. Currently exactly one
+ // SslCertificate resource must be specified.
SslCertificates []string `json:"sslCertificates,omitempty"`
// ForceSendFields is a list of field names (e.g. "SslCertificates") to
@@ -7050,8 +7137,8 @@ type TargetHttpsProxy struct {
// identifier is defined by the server.
Id uint64 `json:"id,omitempty,string"`
- // Kind: [Output Only] Type of the resource. Always
- // compute#targetHttpsProxy for target HTTPS proxies.
+ // Kind: [Output Only] Type of resource. Always compute#targetHttpsProxy
+ // for target HTTPS proxies.
Kind string `json:"kind,omitempty"`
// Name: Name of the resource. Provided by the client when the resource
@@ -7068,11 +7155,16 @@ type TargetHttpsProxy struct {
// SslCertificates: URLs to SslCertificate resources that are used to
// authenticate connections between users and the load balancer.
- // Currently exactly one SSL certificate must be specified.
+ // Currently, exactly one SSL certificate must be specified.
SslCertificates []string `json:"sslCertificates,omitempty"`
- // UrlMap: URL to the UrlMap resource that defines the mapping from URL
- // to the BackendService.
+ // UrlMap: A fully-qualified or valid partial URL to the UrlMap resource
+ // that defines the mapping from URL to the BackendService. For example,
+ // the following are all valid URLs for specifying a URL map:
+ // -
+ // https://www.googleapis.com/compute/v1/projects/project/global/urlMaps/url-map
+ // - projects/project/global/urlMaps/url-map
+ // - global/urlMaps/url-map
UrlMap string `json:"urlMap,omitempty"`
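A sketch of a target HTTPS proxy that references its URL map and single SSL certificate by partial URL, per the forms listed above; resource names are placeholders.

```go
package example

import compute "google.golang.org/api/compute/v1"

// httpsProxy ties one SSL certificate and one URL map to a target HTTPS
// proxy. Exactly one certificate is supplied, matching the current limit
// noted above.
var httpsProxy = &compute.TargetHttpsProxy{
	Name:            "web-https-proxy",
	UrlMap:          "global/urlMaps/web-map",
	SslCertificates: []string{"global/sslCertificates/web-cert"},
}
```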
// ServerResponse contains the HTTP response code and headers from the
@@ -7103,7 +7195,8 @@ type TargetHttpsProxyList struct {
// Items: A list of TargetHttpsProxy resources.
Items []*TargetHttpsProxy `json:"items,omitempty"`
- // Kind: Type of resource.
+ // Kind: Type of resource. Always compute#targetHttpsProxyList for lists
+ // of target HTTPS proxies.
Kind string `json:"kind,omitempty"`
// NextPageToken: [Output Only] This token allows you to get the next
@@ -7151,8 +7244,14 @@ type TargetInstance struct {
// identifier is defined by the server.
Id uint64 `json:"id,omitempty,string"`
- // Instance: The URL to the instance that terminates the relevant
- // traffic.
+ // Instance: A URL to the virtual machine instance that handles traffic
+ // for this target instance. When creating a target instance, you can
+ // provide the fully-qualified URL or a valid partial URL to the desired
+ // virtual machine. For example, the following are all valid URLs:
+ // -
+ // https://www.googleapis.com/compute/v1/projects/project/zones/zone/instances/instance
+ // - projects/project/zones/zone/instances/instance
+ // - zones/zone/instances/instance
Instance string `json:"instance,omitempty"`
// Kind: [Output Only] The type of the resource. Always
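// A minimal usage sketch (hypothetical, not from the generated client):
// creating a target instance that points at a VM by the zone-relative
// partial-URL form listed above. Project, zone, and resource names are
// placeholders; assumes compute "google.golang.org/api/compute/v1" is imported.
func insertExampleTargetInstance(svc *compute.Service) (*compute.Operation, error) {
	ti := &compute.TargetInstance{
		Name: "example-target-instance",
		// Zone-relative partial URL, per the Instance comment above.
		Instance: "zones/us-central1-f/instances/example-instance",
	}
	return svc.TargetInstances.Insert("example-project", "us-central1-f", ti).Do()
}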
@@ -7315,6 +7414,7 @@ type TargetInstancesScopedListWarning struct {
// the response.
//
// Possible values:
+ // "CLEANUP_FAILED"
// "DEPRECATED_RESOURCE_USED"
// "DISK_SIZE_LARGER_THAN_IMAGE_SIZE"
// "INJECTED_KERNELS_DEPRECATED"
@@ -7360,7 +7460,7 @@ type TargetInstancesScopedListWarningData struct {
// being returned. For example, for warnings where there are no results
// in a list request for a particular zone, this key might be scope and
// the key value might be the zone name. Other examples might be a key
- // indicating a deprecated resource, and a suggested replacement, or a
+ // indicating a deprecated resource and a suggested replacement, or a
// warning about invalid network settings (for example, if an instance
// attempts to perform IP forwarding but is not enabled for IP
// forwarding).
@@ -7386,7 +7486,7 @@ func (s *TargetInstancesScopedListWarningData) MarshalJSON() ([]byte, error) {
// TargetPool: A TargetPool resource. This resource defines a pool of
// instances, associated HttpHealthCheck resources, and the fallback
-// TargetPool.
+// target pool.
type TargetPool struct {
// BackupPool: This field is applicable only when the containing target
// pool is serving a forwarding rule as the primary pool, and its
@@ -7441,7 +7541,7 @@ type TargetPool struct {
// identifier is defined by the server.
Id uint64 `json:"id,omitempty,string"`
- // Instances: A list of resource URLs to the member virtual machines
+ // Instances: A list of resource URLs to the virtual machine instances
// serving this pool. They must live in zones contained in the same
// region as this pool.
Instances []string `json:"instances,omitempty"`
@@ -7507,10 +7607,12 @@ type TargetPoolAggregatedList struct {
// server.
Id string `json:"id,omitempty"`
- // Items: A map of scoped target pool lists.
+ // Items: [Output Only] A map of scoped target pool lists.
Items map[string]TargetPoolsScopedList `json:"items,omitempty"`
- // Kind: Type of resource.
+ // Kind: [Output Only] Type of resource. Always
+ // compute#targetPoolAggregatedList for aggregated lists of target
+ // pools.
Kind string `json:"kind,omitempty"`
// NextPageToken: [Output Only] This token allows you to get the next
@@ -7546,7 +7648,9 @@ func (s *TargetPoolAggregatedList) MarshalJSON() ([]byte, error) {
type TargetPoolInstanceHealth struct {
HealthStatus []*HealthStatus `json:"healthStatus,omitempty"`
- // Kind: Type of resource.
+ // Kind: [Output Only] Type of resource. Always
+ // compute#targetPoolInstanceHealth when checking the health of an
+ // instance.
Kind string `json:"kind,omitempty"`
// ServerResponse contains the HTTP response code and headers from the
@@ -7577,7 +7681,8 @@ type TargetPoolList struct {
// Items: A list of TargetPool resources.
Items []*TargetPool `json:"items,omitempty"`
- // Kind: Type of resource.
+ // Kind: [Output Only] Type of resource. Always compute#targetPoolList
+ // for lists of target pools.
Kind string `json:"kind,omitempty"`
// NextPageToken: [Output Only] This token allows you to get the next
@@ -7611,7 +7716,8 @@ func (s *TargetPoolList) MarshalJSON() ([]byte, error) {
}
type TargetPoolsAddHealthCheckRequest struct {
- // HealthChecks: Health check URLs to be added to targetPool.
+ // HealthChecks: A list of HttpHealthCheck resources to add to the
+ // target pool.
HealthChecks []*HealthCheckReference `json:"healthChecks,omitempty"`
// ForceSendFields is a list of field names (e.g. "HealthChecks") to
@@ -7630,7 +7736,13 @@ func (s *TargetPoolsAddHealthCheckRequest) MarshalJSON() ([]byte, error) {
}
type TargetPoolsAddInstanceRequest struct {
- // Instances: URLs of the instances to be added to targetPool.
+ // Instances: A full or partial URL to an instance to add to this target
+ // pool. For example, the following are valid URLs:
+ // -
+ // https://www.googleapis.com/compute/v1/projects/project-id/zones/zone/instances/instance-name
+ // - projects/project-id/zones/zone/instances/instance-name
+ // - zones/zone/instances/instance-name
Instances []*InstanceReference `json:"instances,omitempty"`
// ForceSendFields is a list of field names (e.g. "Instances") to
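// A minimal usage sketch (hypothetical, not from the generated client): adding
// an instance to a target pool using one of the partial-URL forms listed
// above. All names are placeholders; assumes
// compute "google.golang.org/api/compute/v1" is imported.
func addExampleInstanceToPool(svc *compute.Service) (*compute.Operation, error) {
	req := &compute.TargetPoolsAddInstanceRequest{
		Instances: []*compute.InstanceReference{
			{Instance: "zones/us-central1-f/instances/example-instance"},
		},
	}
	return svc.TargetPools.AddInstance("example-project", "us-central1", "example-pool", req).Do()
}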
@@ -7649,7 +7761,12 @@ func (s *TargetPoolsAddInstanceRequest) MarshalJSON() ([]byte, error) {
}
type TargetPoolsRemoveHealthCheckRequest struct {
- // HealthChecks: Health check URLs to be removed from targetPool.
+ // HealthChecks: Health check URL to be removed. This can be a full or
+ // valid partial URL. For example, the following are valid URLs:
+ // -
+ // https://www.googleapis.com/compute/beta/projects/project/global/httpHealthChecks/health-check
+ // - projects/project/global/httpHealthChecks/health-check
+ // - global/httpHealthChecks/health-check
HealthChecks []*HealthCheckReference `json:"healthChecks,omitempty"`
// ForceSendFields is a list of field names (e.g. "HealthChecks") to
@@ -7668,7 +7785,7 @@ func (s *TargetPoolsRemoveHealthCheckRequest) MarshalJSON() ([]byte, error) {
}
type TargetPoolsRemoveInstanceRequest struct {
- // Instances: URLs of the instances to be removed from targetPool.
+ // Instances: URLs of the instances to be removed from the target pool.
Instances []*InstanceReference `json:"instances,omitempty"`
// ForceSendFields is a list of field names (e.g. "Instances") to
@@ -7717,6 +7834,7 @@ type TargetPoolsScopedListWarning struct {
// the response.
//
// Possible values:
+ // "CLEANUP_FAILED"
// "DEPRECATED_RESOURCE_USED"
// "DISK_SIZE_LARGER_THAN_IMAGE_SIZE"
// "INJECTED_KERNELS_DEPRECATED"
@@ -7762,7 +7880,7 @@ type TargetPoolsScopedListWarningData struct {
// being returned. For example, for warnings where there are no results
// in a list request for a particular zone, this key might be scope and
// the key value might be the zone name. Other examples might be a key
- // indicating a deprecated resource, and a suggested replacement, or a
+ // indicating a deprecated resource and a suggested replacement, or a
// warning about invalid network settings (for example, if an instance
// attempts to perform IP forwarding but is not enabled for IP
// forwarding).
@@ -7804,6 +7922,7 @@ func (s *TargetReference) MarshalJSON() ([]byte, error) {
return gensupport.MarshalJSON(raw, s.ForceSendFields)
}
+// TargetVpnGateway: Represents a Target VPN gateway resource.
type TargetVpnGateway struct {
// CreationTimestamp: [Output Only] Creation timestamp in RFC3339 text
// format.
@@ -7826,7 +7945,7 @@ type TargetVpnGateway struct {
// for target VPN gateways.
Kind string `json:"kind,omitempty"`
- // Name: Name of the resource; provided by the client when the resource
+ // Name: Name of the resource. Provided by the client when the resource
// is created. The name must be 1-63 characters long, and comply with
// RFC1035. Specifically, the name must be 1-63 characters long and
// match the regular expression [a-z]([-a-z0-9]*[a-z0-9])? which means
@@ -7856,8 +7975,8 @@ type TargetVpnGateway struct {
Status string `json:"status,omitempty"`
// Tunnels: [Output Only] A list of URLs to VpnTunnel resources.
- // VpnTunnels are created using compute.vpntunnels.insert and associated
- // to a VPN gateway.
+ // VpnTunnels are created using the compute.vpntunnels.insert method and
+ // associated with a VPN gateway.
Tunnels []string `json:"tunnels,omitempty"`
// ServerResponse contains the HTTP response code and headers from the
@@ -7996,6 +8115,7 @@ type TargetVpnGatewaysScopedListWarning struct {
// the response.
//
// Possible values:
+ // "CLEANUP_FAILED"
// "DEPRECATED_RESOURCE_USED"
// "DISK_SIZE_LARGER_THAN_IMAGE_SIZE"
// "INJECTED_KERNELS_DEPRECATED"
@@ -8041,7 +8161,7 @@ type TargetVpnGatewaysScopedListWarningData struct {
// being returned. For example, for warnings where there are no results
// in a list request for a particular zone, this key might be scope and
// the key value might be the zone name. Other examples might be a key
- // indicating a deprecated resource, and a suggested replacement, or a
+ // indicating a deprecated resource and a suggested replacement, or a
// warning about invalid network settings (for example, if an instance
// attempts to perform IP forwarding but is not enabled for IP
// forwarding).
@@ -8389,7 +8509,7 @@ type VpnTunnel struct {
// disjoint.
LocalTrafficSelector []string `json:"localTrafficSelector,omitempty"`
- // Name: Name of the resource; provided by the client when the resource
+ // Name: Name of the resource. Provided by the client when the resource
// is created. The name must be 1-63 characters long, and comply with
// RFC1035. Specifically, the name must be 1-63 characters long and
// match the regular expression [a-z]([-a-z0-9]*[a-z0-9])? which means
@@ -8417,6 +8537,7 @@ type VpnTunnel struct {
// Status: [Output Only] The status of the VPN tunnel.
//
// Possible values:
+ // "ALLOCATING_RESOURCES"
// "AUTHORIZATION_ERROR"
// "DEPROVISIONING"
// "ESTABLISHED"
@@ -8430,8 +8551,8 @@ type VpnTunnel struct {
// "WAITING_FOR_FULL_CONFIG"
Status string `json:"status,omitempty"`
- // TargetVpnGateway: URL of the VPN gateway to which this VPN tunnel is
- // associated. Provided by the client when the VPN tunnel is created.
+ // TargetVpnGateway: URL of the VPN gateway with which this VPN tunnel
+ // is associated. Provided by the client when the VPN tunnel is created.
TargetVpnGateway string `json:"targetVpnGateway,omitempty"`
// ServerResponse contains the HTTP response code and headers from the
@@ -8569,6 +8690,7 @@ type VpnTunnelsScopedListWarning struct {
// the response.
//
// Possible values:
+ // "CLEANUP_FAILED"
// "DEPRECATED_RESOURCE_USED"
// "DISK_SIZE_LARGER_THAN_IMAGE_SIZE"
// "INJECTED_KERNELS_DEPRECATED"
@@ -8614,7 +8736,7 @@ type VpnTunnelsScopedListWarningData struct {
// being returned. For example, for warnings where there are no results
// in a list request for a particular zone, this key might be scope and
// the key value might be the zone name. Other examples might be a key
- // indicating a deprecated resource, and a suggested replacement, or a
+ // indicating a deprecated resource and a suggested replacement, or a
// warning about invalid network settings (for example, if an instance
// attempts to perform IP forwarding but is not enabled for IP
// forwarding).
@@ -8806,7 +8928,9 @@ func (r *AddressesService) AggregatedList(project string) *AddressesAggregatedLi
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -8818,7 +8942,7 @@ func (r *AddressesService) AggregatedList(project string) *AddressesAggregatedLi
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *AddressesAggregatedListCall) Filter(filter string) *AddressesAggregatedListCall {
c.urlParams_.Set("filter", filter)
@@ -8826,10 +8950,10 @@ func (c *AddressesAggregatedListCall) Filter(filter string) *AddressesAggregated
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *AddressesAggregatedListCall) MaxResults(maxResults int64) *AddressesAggregatedListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
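// A minimal usage sketch (hypothetical, not from the generated client): using
// the filter and maxResults parameters documented above and following
// nextPageToken to walk every page of an aggregated address listing. The
// project name is a placeholder; assumes "fmt" and
// compute "google.golang.org/api/compute/v1" are imported.
func printNonExampleAddresses(svc *compute.Service) error {
	call := svc.Addresses.AggregatedList("example-project").
		Filter("name ne example-instance"). // RE2 regexp matched against the whole field
		MaxResults(500)
	for {
		page, err := call.Do()
		if err != nil {
			return err
		}
		for scope, scoped := range page.Items {
			for _, addr := range scoped.Addresses {
				fmt.Printf("%s: %s %s\n", scope, addr.Name, addr.Address)
			}
		}
		if page.NextPageToken == "" {
			return nil
		}
		call.PageToken(page.NextPageToken) // request the next page of results
	}
}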
@@ -8933,13 +9057,13 @@ func (c *AddressesAggregatedListCall) Do(opts ...googleapi.CallOption) (*Address
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -9109,7 +9233,7 @@ func (c *AddressesDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, erro
// "type": "string"
// },
// "region": {
- // "description": "The name of the region for this request.",
+ // "description": "Name of the region for this request.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -9258,7 +9382,7 @@ func (c *AddressesGetCall) Do(opts ...googleapi.CallOption) (*Address, error) {
// "type": "string"
// },
// "region": {
- // "description": "The name of the region for this request.",
+ // "description": "Name of the region for this request.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -9392,7 +9516,7 @@ func (c *AddressesInsertCall) Do(opts ...googleapi.CallOption) (*Operation, erro
// "type": "string"
// },
// "region": {
- // "description": "The name of the region for this request.",
+ // "description": "Name of the region for this request.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -9425,8 +9549,8 @@ type AddressesListCall struct {
ctx_ context.Context
}
-// List: Retrieves a list of address resources contained within the
-// specified region.
+// List: Retrieves a list of addresses contained within the specified
+// region.
// For details, see https://cloud.google.com/compute/docs/reference/latest/addresses/list
func (r *AddressesService) List(project string, region string) *AddressesListCall {
c := &AddressesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -9449,7 +9573,9 @@ func (r *AddressesService) List(project string, region string) *AddressesListCal
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -9461,7 +9587,7 @@ func (r *AddressesService) List(project string, region string) *AddressesListCal
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *AddressesListCall) Filter(filter string) *AddressesListCall {
c.urlParams_.Set("filter", filter)
@@ -9469,10 +9595,10 @@ func (c *AddressesListCall) Filter(filter string) *AddressesListCall {
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *AddressesListCall) MaxResults(maxResults int64) *AddressesListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -9569,7 +9695,7 @@ func (c *AddressesListCall) Do(opts ...googleapi.CallOption) (*AddressList, erro
}
return ret, nil
// {
- // "description": "Retrieves a list of address resources contained within the specified region.",
+ // "description": "Retrieves a list of addresses contained within the specified region.",
// "httpMethod": "GET",
// "id": "compute.addresses.list",
// "parameterOrder": [
@@ -9578,13 +9704,13 @@ func (c *AddressesListCall) Do(opts ...googleapi.CallOption) (*AddressList, erro
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -9604,7 +9730,7 @@ func (c *AddressesListCall) Do(opts ...googleapi.CallOption) (*AddressList, erro
// "type": "string"
// },
// "region": {
- // "description": "The name of the region for this request.",
+ // "description": "Name of the region for this request.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -9676,7 +9802,9 @@ func (r *AutoscalersService) AggregatedList(project string) *AutoscalersAggregat
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -9688,7 +9816,7 @@ func (r *AutoscalersService) AggregatedList(project string) *AutoscalersAggregat
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *AutoscalersAggregatedListCall) Filter(filter string) *AutoscalersAggregatedListCall {
c.urlParams_.Set("filter", filter)
@@ -9696,10 +9824,10 @@ func (c *AutoscalersAggregatedListCall) Filter(filter string) *AutoscalersAggreg
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *AutoscalersAggregatedListCall) MaxResults(maxResults int64) *AutoscalersAggregatedListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -9803,13 +9931,13 @@ func (c *AutoscalersAggregatedListCall) Do(opts ...googleapi.CallOption) (*Autos
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -9874,7 +10002,7 @@ type AutoscalersDeleteCall struct {
ctx_ context.Context
}
-// Delete: Deletes the specified autoscaler resource.
+// Delete: Deletes the specified autoscaler.
func (r *AutoscalersService) Delete(project string, zone string, autoscaler string) *AutoscalersDeleteCall {
c := &AutoscalersDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -9954,7 +10082,7 @@ func (c *AutoscalersDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, er
}
return ret, nil
// {
- // "description": "Deletes the specified autoscaler resource.",
+ // "description": "Deletes the specified autoscaler.",
// "httpMethod": "DELETE",
// "id": "compute.autoscalers.delete",
// "parameterOrder": [
@@ -9964,7 +10092,7 @@ func (c *AutoscalersDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, er
// ],
// "parameters": {
// "autoscaler": {
- // "description": "Name of the persistent autoscaler resource to delete.",
+ // "description": "Name of the autoscaler to delete.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -9978,7 +10106,7 @@ func (c *AutoscalersDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, er
// "type": "string"
// },
// "zone": {
- // "description": "Name of the zone scoping this request.",
+ // "description": "Name of the zone for this request.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -10009,7 +10137,8 @@ type AutoscalersGetCall struct {
ctx_ context.Context
}
-// Get: Returns the specified autoscaler resource.
+// Get: Returns the specified autoscaler resource. Get a list of
+// available autoscalers by making a list() request.
func (r *AutoscalersService) Get(project string, zone string, autoscaler string) *AutoscalersGetCall {
c := &AutoscalersGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -10102,7 +10231,7 @@ func (c *AutoscalersGetCall) Do(opts ...googleapi.CallOption) (*Autoscaler, erro
}
return ret, nil
// {
- // "description": "Returns the specified autoscaler resource.",
+ // "description": "Returns the specified autoscaler resource. Get a list of available autoscalers by making a list() request.",
// "httpMethod": "GET",
// "id": "compute.autoscalers.get",
// "parameterOrder": [
@@ -10112,7 +10241,7 @@ func (c *AutoscalersGetCall) Do(opts ...googleapi.CallOption) (*Autoscaler, erro
// ],
// "parameters": {
// "autoscaler": {
- // "description": "Name of the persistent autoscaler resource to return.",
+ // "description": "Name of the autoscaler to return.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -10126,7 +10255,7 @@ func (c *AutoscalersGetCall) Do(opts ...googleapi.CallOption) (*Autoscaler, erro
// "type": "string"
// },
// "zone": {
- // "description": "Name of the zone scoping this request.",
+ // "description": "Name of the zone for this request.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -10157,8 +10286,8 @@ type AutoscalersInsertCall struct {
ctx_ context.Context
}
-// Insert: Creates an autoscaler resource in the specified project using
-// the data included in the request.
+// Insert: Creates an autoscaler in the specified project using the data
+// included in the request.
func (r *AutoscalersService) Insert(project string, zone string, autoscaler *Autoscaler) *AutoscalersInsertCall {
c := &AutoscalersInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -10243,7 +10372,7 @@ func (c *AutoscalersInsertCall) Do(opts ...googleapi.CallOption) (*Operation, er
}
return ret, nil
// {
- // "description": "Creates an autoscaler resource in the specified project using the data included in the request.",
+ // "description": "Creates an autoscaler in the specified project using the data included in the request.",
// "httpMethod": "POST",
// "id": "compute.autoscalers.insert",
// "parameterOrder": [
@@ -10259,7 +10388,7 @@ func (c *AutoscalersInsertCall) Do(opts ...googleapi.CallOption) (*Operation, er
// "type": "string"
// },
// "zone": {
- // "description": "Name of the zone scoping this request.",
+ // "description": "Name of the zone for this request.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -10292,8 +10421,8 @@ type AutoscalersListCall struct {
ctx_ context.Context
}
-// List: Retrieves a list of autoscaler resources contained within the
-// specified zone.
+// List: Retrieves a list of autoscalers contained within the specified
+// zone.
func (r *AutoscalersService) List(project string, zone string) *AutoscalersListCall {
c := &AutoscalersListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -10315,7 +10444,9 @@ func (r *AutoscalersService) List(project string, zone string) *AutoscalersListC
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -10327,7 +10458,7 @@ func (r *AutoscalersService) List(project string, zone string) *AutoscalersListC
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *AutoscalersListCall) Filter(filter string) *AutoscalersListCall {
c.urlParams_.Set("filter", filter)
@@ -10335,10 +10466,10 @@ func (c *AutoscalersListCall) Filter(filter string) *AutoscalersListCall {
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *AutoscalersListCall) MaxResults(maxResults int64) *AutoscalersListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -10435,7 +10566,7 @@ func (c *AutoscalersListCall) Do(opts ...googleapi.CallOption) (*AutoscalerList,
}
return ret, nil
// {
- // "description": "Retrieves a list of autoscaler resources contained within the specified zone.",
+ // "description": "Retrieves a list of autoscalers contained within the specified zone.",
// "httpMethod": "GET",
// "id": "compute.autoscalers.list",
// "parameterOrder": [
@@ -10444,13 +10575,13 @@ func (c *AutoscalersListCall) Do(opts ...googleapi.CallOption) (*AutoscalerList,
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -10470,7 +10601,7 @@ func (c *AutoscalersListCall) Do(opts ...googleapi.CallOption) (*AutoscalerList,
// "type": "string"
// },
// "zone": {
- // "description": "Name of the zone scoping this request.",
+ // "description": "Name of the zone for this request.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -10522,9 +10653,8 @@ type AutoscalersPatchCall struct {
ctx_ context.Context
}
-// Patch: Updates an autoscaler resource in the specified project using
-// the data included in the request. This method supports patch
-// semantics.
+// Patch: Updates an autoscaler in the specified project using the data
+// included in the request. This method supports patch semantics.
func (r *AutoscalersService) Patch(project string, zone string, autoscaler string, autoscaler2 *Autoscaler) *AutoscalersPatchCall {
c := &AutoscalersPatchCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -10610,7 +10740,7 @@ func (c *AutoscalersPatchCall) Do(opts ...googleapi.CallOption) (*Operation, err
}
return ret, nil
// {
- // "description": "Updates an autoscaler resource in the specified project using the data included in the request. This method supports patch semantics.",
+ // "description": "Updates an autoscaler in the specified project using the data included in the request. This method supports patch semantics.",
// "httpMethod": "PATCH",
// "id": "compute.autoscalers.patch",
// "parameterOrder": [
@@ -10620,7 +10750,7 @@ func (c *AutoscalersPatchCall) Do(opts ...googleapi.CallOption) (*Operation, err
// ],
// "parameters": {
// "autoscaler": {
- // "description": "Name of the autoscaler resource to update.",
+ // "description": "Name of the autoscaler to update.",
// "location": "query",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -10634,7 +10764,7 @@ func (c *AutoscalersPatchCall) Do(opts ...googleapi.CallOption) (*Operation, err
// "type": "string"
// },
// "zone": {
- // "description": "Name of the zone scoping this request.",
+ // "description": "Name of the zone for this request.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -10667,8 +10797,8 @@ type AutoscalersUpdateCall struct {
ctx_ context.Context
}
-// Update: Updates an autoscaler resource in the specified project using
-// the data included in the request.
+// Update: Updates an autoscaler in the specified project using the data
+// included in the request.
func (r *AutoscalersService) Update(project string, zone string, autoscaler *Autoscaler) *AutoscalersUpdateCall {
c := &AutoscalersUpdateCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -10678,7 +10808,7 @@ func (r *AutoscalersService) Update(project string, zone string, autoscaler *Aut
}
// Autoscaler sets the optional parameter "autoscaler": Name of the
-// autoscaler resource to update.
+// autoscaler to update.
func (c *AutoscalersUpdateCall) Autoscaler(autoscaler string) *AutoscalersUpdateCall {
c.urlParams_.Set("autoscaler", autoscaler)
return c
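// A minimal usage sketch (hypothetical, not from the generated client):
// updating an autoscaler and naming it through the optional "autoscaler" query
// parameter shown above. All resource names are placeholders; assumes
// compute "google.golang.org/api/compute/v1" is imported.
func updateExampleAutoscaler(svc *compute.Service) (*compute.Operation, error) {
	as := &compute.Autoscaler{
		Name:   "example-autoscaler",
		Target: "zones/us-central1-f/instanceGroupManagers/example-igm",
		AutoscalingPolicy: &compute.AutoscalingPolicy{
			MinNumReplicas: 2,
			MaxNumReplicas: 10,
		},
	}
	return svc.Autoscalers.Update("example-project", "us-central1-f", as).
		Autoscaler("example-autoscaler").
		Do()
}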
@@ -10760,7 +10890,7 @@ func (c *AutoscalersUpdateCall) Do(opts ...googleapi.CallOption) (*Operation, er
}
return ret, nil
// {
- // "description": "Updates an autoscaler resource in the specified project using the data included in the request.",
+ // "description": "Updates an autoscaler in the specified project using the data included in the request.",
// "httpMethod": "PUT",
// "id": "compute.autoscalers.update",
// "parameterOrder": [
@@ -10769,7 +10899,7 @@ func (c *AutoscalersUpdateCall) Do(opts ...googleapi.CallOption) (*Operation, er
// ],
// "parameters": {
// "autoscaler": {
- // "description": "Name of the autoscaler resource to update.",
+ // "description": "Name of the autoscaler to update.",
// "location": "query",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "type": "string"
@@ -10782,7 +10912,7 @@ func (c *AutoscalersUpdateCall) Do(opts ...googleapi.CallOption) (*Operation, er
// "type": "string"
// },
// "zone": {
- // "description": "Name of the zone scoping this request.",
+ // "description": "Name of the zone for this request.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -10939,7 +11069,8 @@ type BackendServicesGetCall struct {
ctx_ context.Context
}
-// Get: Returns the specified BackendService resource.
+// Get: Returns the specified BackendService resource. Get a list of
+// available backend services by making a list() request.
// For details, see https://cloud.google.com/compute/docs/reference/latest/backendServices/get
func (r *BackendServicesService) Get(project string, backendService string) *BackendServicesGetCall {
c := &BackendServicesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -11031,7 +11162,7 @@ func (c *BackendServicesGetCall) Do(opts ...googleapi.CallOption) (*BackendServi
}
return ret, nil
// {
- // "description": "Returns the specified BackendService resource.",
+ // "description": "Returns the specified BackendService resource. Get a list of available backend services by making a list() request.",
// "httpMethod": "GET",
// "id": "compute.backendServices.get",
// "parameterOrder": [
@@ -11363,7 +11494,9 @@ func (r *BackendServicesService) List(project string) *BackendServicesListCall {
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -11375,7 +11508,7 @@ func (r *BackendServicesService) List(project string) *BackendServicesListCall {
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *BackendServicesListCall) Filter(filter string) *BackendServicesListCall {
c.urlParams_.Set("filter", filter)
@@ -11383,10 +11516,10 @@ func (c *BackendServicesListCall) Filter(filter string) *BackendServicesListCall
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *BackendServicesListCall) MaxResults(maxResults int64) *BackendServicesListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -11490,13 +11623,13 @@ func (c *BackendServicesListCall) Do(opts ...googleapi.CallOption) (*BackendServ
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -11836,7 +11969,7 @@ type DiskTypesAggregatedListCall struct {
ctx_ context.Context
}
-// AggregatedList: Retrieves an aggregated list of disk type resources.
+// AggregatedList: Retrieves an aggregated list of disk types.
// For details, see https://cloud.google.com/compute/docs/reference/latest/diskTypes/aggregatedList
func (r *DiskTypesService) AggregatedList(project string) *DiskTypesAggregatedListCall {
c := &DiskTypesAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -11858,7 +11991,9 @@ func (r *DiskTypesService) AggregatedList(project string) *DiskTypesAggregatedLi
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -11870,7 +12005,7 @@ func (r *DiskTypesService) AggregatedList(project string) *DiskTypesAggregatedLi
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *DiskTypesAggregatedListCall) Filter(filter string) *DiskTypesAggregatedListCall {
c.urlParams_.Set("filter", filter)
@@ -11878,10 +12013,10 @@ func (c *DiskTypesAggregatedListCall) Filter(filter string) *DiskTypesAggregated
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *DiskTypesAggregatedListCall) MaxResults(maxResults int64) *DiskTypesAggregatedListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -11977,7 +12112,7 @@ func (c *DiskTypesAggregatedListCall) Do(opts ...googleapi.CallOption) (*DiskTyp
}
return ret, nil
// {
- // "description": "Retrieves an aggregated list of disk type resources.",
+ // "description": "Retrieves an aggregated list of disk types.",
// "httpMethod": "GET",
// "id": "compute.diskTypes.aggregatedList",
// "parameterOrder": [
@@ -11985,13 +12120,13 @@ func (c *DiskTypesAggregatedListCall) Do(opts ...googleapi.CallOption) (*DiskTyp
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -12057,7 +12192,8 @@ type DiskTypesGetCall struct {
ctx_ context.Context
}
-// Get: Returns the specified disk type resource.
+// Get: Returns the specified disk type. Get a list of available disk
+// types by making a list() request.
// For details, see https://cloud.google.com/compute/docs/reference/latest/diskTypes/get
func (r *DiskTypesService) Get(project string, zone string, diskType string) *DiskTypesGetCall {
c := &DiskTypesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
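// Editor's note (illustrative sketch, not part of the generated client): the
// updated doc comment above pairs Get with a list() request for discovering
// the available disk types. Assuming an authenticated *compute.Service named
// svc and placeholder project/zone/type values, fetching one type looks like:
//
//	dt, err := svc.DiskTypes.Get("my-project", "us-central1-a", "pd-ssd").Do()
//	if err != nil {
//		log.Fatalf("DiskTypes.Get: %v", err)
//	}
//	fmt.Printf("%s: default size %d GB\n", dt.Name, dt.DefaultDiskSizeGb)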
@@ -12151,7 +12287,7 @@ func (c *DiskTypesGetCall) Do(opts ...googleapi.CallOption) (*DiskType, error) {
}
return ret, nil
// {
- // "description": "Returns the specified disk type resource.",
+ // "description": "Returns the specified disk type. Get a list of available disk types by making a list() request.",
// "httpMethod": "GET",
// "id": "compute.diskTypes.get",
// "parameterOrder": [
@@ -12161,7 +12297,7 @@ func (c *DiskTypesGetCall) Do(opts ...googleapi.CallOption) (*DiskType, error) {
// ],
// "parameters": {
// "diskType": {
- // "description": "Name of the disk type resource to return.",
+ // "description": "Name of the disk type to return.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -12206,8 +12342,8 @@ type DiskTypesListCall struct {
ctx_ context.Context
}
-// List: Retrieves a list of disk type resources available to the
-// specified project.
+// List: Retrieves a list of disk types available to the specified
+// project.
// For details, see https://cloud.google.com/compute/docs/reference/latest/diskTypes/list
func (r *DiskTypesService) List(project string, zone string) *DiskTypesListCall {
c := &DiskTypesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -12230,7 +12366,9 @@ func (r *DiskTypesService) List(project string, zone string) *DiskTypesListCall
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -12242,7 +12380,7 @@ func (r *DiskTypesService) List(project string, zone string) *DiskTypesListCall
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *DiskTypesListCall) Filter(filter string) *DiskTypesListCall {
c.urlParams_.Set("filter", filter)
@@ -12250,10 +12388,10 @@ func (c *DiskTypesListCall) Filter(filter string) *DiskTypesListCall {
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *DiskTypesListCall) MaxResults(maxResults int64) *DiskTypesListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
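// Editor's note (illustrative sketch, not part of the generated client): the
// Filter and MaxResults setters above are what a caller chains onto a list
// call, and the reworded doc comments describe the filter grammar and the
// nextPageToken paging contract. A minimal paging loop, assuming the
// golang.org/x/oauth2/google and google.golang.org/api/compute/v1 packages
// and placeholder project/zone values, could look like this:
//
//	package main
//
//	import (
//		"fmt"
//		"log"
//
//		"golang.org/x/net/context"
//		"golang.org/x/oauth2/google"
//		compute "google.golang.org/api/compute/v1"
//	)
//
//	func main() {
//		ctx := context.Background()
//		client, err := google.DefaultClient(ctx, compute.ComputeReadonlyScope)
//		if err != nil {
//			log.Fatal(err)
//		}
//		svc, err := compute.New(client)
//		if err != nil {
//			log.Fatal(err)
//		}
//
//		// Filter uses the field_name comparison_string literal_string form
//		// described above; MaxResults caps each page at 100 entries.
//		call := svc.DiskTypes.List("my-project", "us-central1-a").
//			Filter("name ne local-ssd").
//			MaxResults(100)
//
//		for {
//			page, err := call.Do()
//			if err != nil {
//				log.Fatal(err)
//			}
//			for _, dt := range page.Items {
//				fmt.Println(dt.Name)
//			}
//			if page.NextPageToken == "" {
//				break
//			}
//			call.PageToken(page.NextPageToken)
//		}
//	}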
@@ -12350,7 +12488,7 @@ func (c *DiskTypesListCall) Do(opts ...googleapi.CallOption) (*DiskTypeList, err
}
return ret, nil
// {
- // "description": "Retrieves a list of disk type resources available to the specified project.",
+ // "description": "Retrieves a list of disk types available to the specified project.",
// "httpMethod": "GET",
// "id": "compute.diskTypes.list",
// "parameterOrder": [
@@ -12359,13 +12497,13 @@ func (c *DiskTypesListCall) Do(opts ...googleapi.CallOption) (*DiskTypeList, err
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -12458,7 +12596,9 @@ func (r *DisksService) AggregatedList(project string) *DisksAggregatedListCall {
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -12470,7 +12610,7 @@ func (r *DisksService) AggregatedList(project string) *DisksAggregatedListCall {
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *DisksAggregatedListCall) Filter(filter string) *DisksAggregatedListCall {
c.urlParams_.Set("filter", filter)
@@ -12478,10 +12618,10 @@ func (c *DisksAggregatedListCall) Filter(filter string) *DisksAggregatedListCall
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *DisksAggregatedListCall) MaxResults(maxResults int64) *DisksAggregatedListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -12585,13 +12725,13 @@ func (c *DisksAggregatedListCall) Do(opts ...googleapi.CallOption) (*DiskAggrega
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -12941,7 +13081,8 @@ type DisksGetCall struct {
ctx_ context.Context
}
-// Get: Returns a specified persistent disk.
+// Get: Returns a specified persistent disk. Get a list of available
+// persistent disks by making a list() request.
// For details, see https://cloud.google.com/compute/docs/reference/latest/disks/get
func (r *DisksService) Get(project string, zone string, disk string) *DisksGetCall {
c := &DisksGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -13035,7 +13176,7 @@ func (c *DisksGetCall) Do(opts ...googleapi.CallOption) (*Disk, error) {
}
return ret, nil
// {
- // "description": "Returns a specified persistent disk.",
+ // "description": "Returns a specified persistent disk. Get a list of available persistent disks by making a list() request.",
// "httpMethod": "GET",
// "id": "compute.disks.get",
// "parameterOrder": [
@@ -13091,7 +13232,10 @@ type DisksInsertCall struct {
}
// Insert: Creates a persistent disk in the specified project using the
-// data included in the request.
+// data in the request. You can create a disk with a sourceImage, a
+// sourceSnapshot, or create an empty 200 GB data disk by omitting all
+// properties. You can also create a disk that is larger than the
+// default size by specifying the sizeGb property.
// For details, see https://cloud.google.com/compute/docs/reference/latest/disks/insert
func (r *DisksService) Insert(project string, zone string, disk *Disk) *DisksInsertCall {
c := &DisksInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
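// Editor's note (illustrative sketch, not part of the generated client): the
// expanded Insert doc comment above covers three cases: a disk built from a
// sourceImage, one built from a sourceSnapshot, or an empty disk sized via
// sizeGb. Assuming an authenticated *compute.Service named svc and
// placeholder project/zone/image values, creating a 50 GB disk from an image
// family might look like:
//
//	disk := &compute.Disk{
//		Name:        "example-data-disk",
//		SizeGb:      50,
//		SourceImage: "projects/debian-cloud/global/images/family/debian-8",
//	}
//	op, err := svc.Disks.Insert("my-project", "us-central1-a", disk).Do()
//	if err != nil {
//		log.Fatalf("Disks.Insert: %v", err)
//	}
//	// op is a zonal Operation; callers typically poll svc.ZoneOperations.Get
//	// until op.Status == "DONE" before using the new disk.
//	fmt.Println("insert started:", op.Name)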
@@ -13184,7 +13328,7 @@ func (c *DisksInsertCall) Do(opts ...googleapi.CallOption) (*Operation, error) {
}
return ret, nil
// {
- // "description": "Creates a persistent disk in the specified project using the data included in the request.",
+ // "description": "Creates a persistent disk in the specified project using the data in the request. You can create a disk with a sourceImage, a sourceSnapshot, or create an empty 200 GB data disk by omitting all properties. You can also create a disk that is larger than the default size by specifying the sizeGb property.",
// "httpMethod": "POST",
// "id": "compute.disks.insert",
// "parameterOrder": [
@@ -13262,7 +13406,9 @@ func (r *DisksService) List(project string, zone string) *DisksListCall {
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -13274,7 +13420,7 @@ func (r *DisksService) List(project string, zone string) *DisksListCall {
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *DisksListCall) Filter(filter string) *DisksListCall {
c.urlParams_.Set("filter", filter)
@@ -13282,10 +13428,10 @@ func (c *DisksListCall) Filter(filter string) *DisksListCall {
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *DisksListCall) MaxResults(maxResults int64) *DisksListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -13391,13 +13537,13 @@ func (c *DisksListCall) Do(opts ...googleapi.CallOption) (*DiskList, error) {
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -13468,7 +13614,7 @@ type FirewallsDeleteCall struct {
ctx_ context.Context
}
-// Delete: Deletes the specified firewall resource.
+// Delete: Deletes the specified firewall.
// For details, see https://cloud.google.com/compute/docs/reference/latest/firewalls/delete
func (r *FirewallsService) Delete(project string, firewall string) *FirewallsDeleteCall {
c := &FirewallsDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -13547,7 +13693,7 @@ func (c *FirewallsDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, erro
}
return ret, nil
// {
- // "description": "Deletes the specified firewall resource.",
+ // "description": "Deletes the specified firewall.",
// "httpMethod": "DELETE",
// "id": "compute.firewalls.delete",
// "parameterOrder": [
@@ -13556,7 +13702,7 @@ func (c *FirewallsDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, erro
// ],
// "parameters": {
// "firewall": {
- // "description": "Name of the firewall resource to delete.",
+ // "description": "Name of the firewall rule to delete.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -13593,7 +13739,7 @@ type FirewallsGetCall struct {
ctx_ context.Context
}
-// Get: Returns the specified firewall resource.
+// Get: Returns the specified firewall.
// For details, see https://cloud.google.com/compute/docs/reference/latest/firewalls/get
func (r *FirewallsService) Get(project string, firewall string) *FirewallsGetCall {
c := &FirewallsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -13685,7 +13831,7 @@ func (c *FirewallsGetCall) Do(opts ...googleapi.CallOption) (*Firewall, error) {
}
return ret, nil
// {
- // "description": "Returns the specified firewall resource.",
+ // "description": "Returns the specified firewall.",
// "httpMethod": "GET",
// "id": "compute.firewalls.get",
// "parameterOrder": [
@@ -13694,7 +13840,7 @@ func (c *FirewallsGetCall) Do(opts ...googleapi.CallOption) (*Firewall, error) {
// ],
// "parameters": {
// "firewall": {
- // "description": "Name of the firewall resource to return.",
+ // "description": "Name of the firewall rule to return.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -13731,8 +13877,8 @@ type FirewallsInsertCall struct {
ctx_ context.Context
}
-// Insert: Creates a firewall resource in the specified project using
-// the data included in the request.
+// Insert: Creates a firewall rule in the specified project using the
+// data included in the request.
// For details, see https://cloud.google.com/compute/docs/reference/latest/firewalls/insert
func (r *FirewallsService) Insert(project string, firewall *Firewall) *FirewallsInsertCall {
c := &FirewallsInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
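// Editor's note (illustrative sketch, not part of the generated client): the
// renamed doc comment above now talks about firewall rules rather than
// firewall resources. Assuming an authenticated *compute.Service named svc,
// a minimal rule allowing TCP/80 on the default network (all names here are
// placeholders) could be created like this:
//
//	fw := &compute.Firewall{
//		Name:         "allow-http",
//		Network:      "global/networks/default",
//		SourceRanges: []string{"0.0.0.0/0"},
//		Allowed: []*compute.FirewallAllowed{
//			{IPProtocol: "tcp", Ports: []string{"80"}},
//		},
//	}
//	if _, err := svc.Firewalls.Insert("my-project", fw).Do(); err != nil {
//		log.Fatalf("Firewalls.Insert: %v", err)
//	}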
@@ -13816,7 +13962,7 @@ func (c *FirewallsInsertCall) Do(opts ...googleapi.CallOption) (*Operation, erro
}
return ret, nil
// {
- // "description": "Creates a firewall resource in the specified project using the data included in the request.",
+ // "description": "Creates a firewall rule in the specified project using the data included in the request.",
// "httpMethod": "POST",
// "id": "compute.firewalls.insert",
// "parameterOrder": [
@@ -13856,8 +14002,8 @@ type FirewallsListCall struct {
ctx_ context.Context
}
-// List: Retrieves the list of firewall resources available to the
-// specified project.
+// List: Retrieves the list of firewall rules available to the specified
+// project.
// For details, see https://cloud.google.com/compute/docs/reference/latest/firewalls/list
func (r *FirewallsService) List(project string) *FirewallsListCall {
c := &FirewallsListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -13879,7 +14025,9 @@ func (r *FirewallsService) List(project string) *FirewallsListCall {
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -13891,7 +14039,7 @@ func (r *FirewallsService) List(project string) *FirewallsListCall {
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *FirewallsListCall) Filter(filter string) *FirewallsListCall {
c.urlParams_.Set("filter", filter)
@@ -13899,10 +14047,10 @@ func (c *FirewallsListCall) Filter(filter string) *FirewallsListCall {
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *FirewallsListCall) MaxResults(maxResults int64) *FirewallsListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -13998,7 +14146,7 @@ func (c *FirewallsListCall) Do(opts ...googleapi.CallOption) (*FirewallList, err
}
return ret, nil
// {
- // "description": "Retrieves the list of firewall resources available to the specified project.",
+ // "description": "Retrieves the list of firewall rules available to the specified project.",
// "httpMethod": "GET",
// "id": "compute.firewalls.list",
// "parameterOrder": [
@@ -14006,13 +14154,13 @@ func (c *FirewallsListCall) Do(opts ...googleapi.CallOption) (*FirewallList, err
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -14077,8 +14225,8 @@ type FirewallsPatchCall struct {
ctx_ context.Context
}
-// Patch: Updates the specified firewall resource with the data included
-// in the request. This method supports patch semantics.
+// Patch: Updates the specified firewall rule with the data included in
+// the request. This method supports patch semantics.
// For details, see https://cloud.google.com/compute/docs/reference/latest/firewalls/patch
func (r *FirewallsService) Patch(project string, firewall string, firewall2 *Firewall) *FirewallsPatchCall {
c := &FirewallsPatchCall{s: r.s, urlParams_: make(gensupport.URLParams)}
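// Editor's note (illustrative sketch, not part of the generated client): as
// the doc comment above says, Patch applies patch semantics, so only the
// fields set in the request body are changed, whereas Update replaces the
// whole rule. Assuming an authenticated *compute.Service named svc and an
// existing rule called allow-http (placeholder names), widening just the
// allowed ports might look like:
//
//	patch := &compute.Firewall{
//		Allowed: []*compute.FirewallAllowed{
//			{IPProtocol: "tcp", Ports: []string{"80", "443"}},
//		},
//	}
//	if _, err := svc.Firewalls.Patch("my-project", "allow-http", patch).Do(); err != nil {
//		log.Fatalf("Firewalls.Patch: %v", err)
//	}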
@@ -14164,7 +14312,7 @@ func (c *FirewallsPatchCall) Do(opts ...googleapi.CallOption) (*Operation, error
}
return ret, nil
// {
- // "description": "Updates the specified firewall resource with the data included in the request. This method supports patch semantics.",
+ // "description": "Updates the specified firewall rule with the data included in the request. This method supports patch semantics.",
// "httpMethod": "PATCH",
// "id": "compute.firewalls.patch",
// "parameterOrder": [
@@ -14173,7 +14321,7 @@ func (c *FirewallsPatchCall) Do(opts ...googleapi.CallOption) (*Operation, error
// ],
// "parameters": {
// "firewall": {
- // "description": "Name of the firewall resource to update.",
+ // "description": "Name of the firewall rule to update.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -14213,8 +14361,8 @@ type FirewallsUpdateCall struct {
ctx_ context.Context
}
-// Update: Updates the specified firewall resource with the data
-// included in the request.
+// Update: Updates the specified firewall rule with the data included in
+// the request.
// For details, see https://cloud.google.com/compute/docs/reference/latest/firewalls/update
func (r *FirewallsService) Update(project string, firewall string, firewall2 *Firewall) *FirewallsUpdateCall {
c := &FirewallsUpdateCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -14300,7 +14448,7 @@ func (c *FirewallsUpdateCall) Do(opts ...googleapi.CallOption) (*Operation, erro
}
return ret, nil
// {
- // "description": "Updates the specified firewall resource with the data included in the request.",
+ // "description": "Updates the specified firewall rule with the data included in the request.",
// "httpMethod": "PUT",
// "id": "compute.firewalls.update",
// "parameterOrder": [
@@ -14309,7 +14457,7 @@ func (c *FirewallsUpdateCall) Do(opts ...googleapi.CallOption) (*Operation, erro
// ],
// "parameters": {
// "firewall": {
- // "description": "Name of the firewall resource to update.",
+ // "description": "Name of the firewall rule to update.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -14370,7 +14518,9 @@ func (r *ForwardingRulesService) AggregatedList(project string) *ForwardingRules
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -14382,7 +14532,7 @@ func (r *ForwardingRulesService) AggregatedList(project string) *ForwardingRules
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *ForwardingRulesAggregatedListCall) Filter(filter string) *ForwardingRulesAggregatedListCall {
c.urlParams_.Set("filter", filter)
@@ -14390,10 +14540,10 @@ func (c *ForwardingRulesAggregatedListCall) Filter(filter string) *ForwardingRul
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *ForwardingRulesAggregatedListCall) MaxResults(maxResults int64) *ForwardingRulesAggregatedListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -14497,13 +14647,13 @@ func (c *ForwardingRulesAggregatedListCall) Do(opts ...googleapi.CallOption) (*F
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -15013,7 +15163,9 @@ func (r *ForwardingRulesService) List(project string, region string) *Forwarding
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -15025,7 +15177,7 @@ func (r *ForwardingRulesService) List(project string, region string) *Forwarding
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *ForwardingRulesListCall) Filter(filter string) *ForwardingRulesListCall {
c.urlParams_.Set("filter", filter)
@@ -15033,10 +15185,10 @@ func (c *ForwardingRulesListCall) Filter(filter string) *ForwardingRulesListCall
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *ForwardingRulesListCall) MaxResults(maxResults int64) *ForwardingRulesListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -15142,13 +15294,13 @@ func (c *ForwardingRulesListCall) Do(opts ...googleapi.CallOption) (*ForwardingR
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -15491,7 +15643,8 @@ type GlobalAddressesGetCall struct {
ctx_ context.Context
}
-// Get: Returns the specified address resource.
+// Get: Returns the specified address resource. Get a list of available
+// addresses by making a list() request.
// For details, see https://cloud.google.com/compute/docs/reference/latest/globalAddresses/get
func (r *GlobalAddressesService) Get(project string, address string) *GlobalAddressesGetCall {
c := &GlobalAddressesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -15583,7 +15736,7 @@ func (c *GlobalAddressesGetCall) Do(opts ...googleapi.CallOption) (*Address, err
}
return ret, nil
// {
- // "description": "Returns the specified address resource.",
+ // "description": "Returns the specified address resource. Get a list of available addresses by making a list() request.",
// "httpMethod": "GET",
// "id": "compute.globalAddresses.get",
// "parameterOrder": [
@@ -15754,7 +15907,7 @@ type GlobalAddressesListCall struct {
ctx_ context.Context
}
-// List: Retrieves a list of global address resources.
+// List: Retrieves a list of global addresses.
// For details, see https://cloud.google.com/compute/docs/reference/latest/globalAddresses/list
func (r *GlobalAddressesService) List(project string) *GlobalAddressesListCall {
c := &GlobalAddressesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -15776,7 +15929,9 @@ func (r *GlobalAddressesService) List(project string) *GlobalAddressesListCall {
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -15788,7 +15943,7 @@ func (r *GlobalAddressesService) List(project string) *GlobalAddressesListCall {
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *GlobalAddressesListCall) Filter(filter string) *GlobalAddressesListCall {
c.urlParams_.Set("filter", filter)
@@ -15796,10 +15951,10 @@ func (c *GlobalAddressesListCall) Filter(filter string) *GlobalAddressesListCall
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *GlobalAddressesListCall) MaxResults(maxResults int64) *GlobalAddressesListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -15895,7 +16050,7 @@ func (c *GlobalAddressesListCall) Do(opts ...googleapi.CallOption) (*AddressList
}
return ret, nil
// {
- // "description": "Retrieves a list of global address resources.",
+ // "description": "Retrieves a list of global addresses.",
// "httpMethod": "GET",
// "id": "compute.globalAddresses.list",
// "parameterOrder": [
@@ -15903,13 +16058,13 @@ func (c *GlobalAddressesListCall) Do(opts ...googleapi.CallOption) (*AddressList
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -16098,7 +16253,8 @@ type GlobalForwardingRulesGetCall struct {
ctx_ context.Context
}
-// Get: Returns the specified ForwardingRule resource.
+// Get: Returns the specified ForwardingRule resource. Get a list of
+// available forwarding rules by making a list() request.
// For details, see https://cloud.google.com/compute/docs/reference/latest/globalForwardingRules/get
func (r *GlobalForwardingRulesService) Get(project string, forwardingRule string) *GlobalForwardingRulesGetCall {
c := &GlobalForwardingRulesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -16190,7 +16346,7 @@ func (c *GlobalForwardingRulesGetCall) Do(opts ...googleapi.CallOption) (*Forwar
}
return ret, nil
// {
- // "description": "Returns the specified ForwardingRule resource.",
+ // "description": "Returns the specified ForwardingRule resource. Get a list of available forwarding rules by making a list() request.",
// "httpMethod": "GET",
// "id": "compute.globalForwardingRules.get",
// "parameterOrder": [
@@ -16384,7 +16540,9 @@ func (r *GlobalForwardingRulesService) List(project string) *GlobalForwardingRul
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -16396,7 +16554,7 @@ func (r *GlobalForwardingRulesService) List(project string) *GlobalForwardingRul
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *GlobalForwardingRulesListCall) Filter(filter string) *GlobalForwardingRulesListCall {
c.urlParams_.Set("filter", filter)
@@ -16404,10 +16562,10 @@ func (c *GlobalForwardingRulesListCall) Filter(filter string) *GlobalForwardingR
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *GlobalForwardingRulesListCall) MaxResults(maxResults int64) *GlobalForwardingRulesListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -16511,13 +16669,13 @@ func (c *GlobalForwardingRulesListCall) Do(opts ...googleapi.CallOption) (*Forwa
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
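
The hunks above reword the generated doc comments for `Filter`, `MaxResults`, and the list descriptions, but the usage pattern they describe is unchanged: pass a `field_name comparison_string literal_string` expression to `Filter`, cap the page size with `MaxResults`, and follow `nextPageToken` for subsequent pages. The following sketch (not part of this diff or the vendored code) shows that pattern against the generated client; the project ID is a placeholder, and the `PageToken` setter is assumed to exist on the list call as it does on other generated list calls of this client version.

```go
package main

import (
	"fmt"
	"log"

	"golang.org/x/net/context"
	"golang.org/x/oauth2/google"
	compute "google.golang.org/api/compute/v1"
)

func main() {
	ctx := context.Background()

	// Application Default Credentials; ComputeScope is defined by the generated package.
	client, err := google.DefaultClient(ctx, compute.ComputeScope)
	if err != nil {
		log.Fatal(err)
	}
	svc, err := compute.New(client)
	if err != nil {
		log.Fatal(err)
	}

	// "my-project" is a placeholder project ID.
	call := svc.GlobalForwardingRules.List("my-project").
		Filter("name ne example-instance"). // exclude resources named example-instance
		MaxResults(100)                     // per-page cap; the documented server maximum is 500

	for {
		page, err := call.Do()
		if err != nil {
			log.Fatal(err)
		}
		for _, rule := range page.Items {
			fmt.Println(rule.Name)
		}
		// Follow nextPageToken until the server stops returning one.
		if page.NextPageToken == "" {
			break
		}
		call.PageToken(page.NextPageToken)
	}
}
```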
@@ -16739,7 +16897,9 @@ func (r *GlobalOperationsService) AggregatedList(project string) *GlobalOperatio
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -16751,7 +16911,7 @@ func (r *GlobalOperationsService) AggregatedList(project string) *GlobalOperatio
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *GlobalOperationsAggregatedListCall) Filter(filter string) *GlobalOperationsAggregatedListCall {
c.urlParams_.Set("filter", filter)
@@ -16759,10 +16919,10 @@ func (c *GlobalOperationsAggregatedListCall) Filter(filter string) *GlobalOperat
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *GlobalOperationsAggregatedListCall) MaxResults(maxResults int64) *GlobalOperationsAggregatedListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -16866,13 +17026,13 @@ func (c *GlobalOperationsAggregatedListCall) Do(opts ...googleapi.CallOption) (*
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -17034,7 +17194,8 @@ type GlobalOperationsGetCall struct {
ctx_ context.Context
}
-// Get: Retrieves the specified Operations resource.
+// Get: Retrieves the specified Operations resource. Get a list of
+// operations by making a list() request.
// For details, see https://cloud.google.com/compute/docs/reference/latest/globalOperations/get
func (r *GlobalOperationsService) Get(project string, operation string) *GlobalOperationsGetCall {
c := &GlobalOperationsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -17126,7 +17287,7 @@ func (c *GlobalOperationsGetCall) Do(opts ...googleapi.CallOption) (*Operation,
}
return ret, nil
// {
- // "description": "Retrieves the specified Operations resource.",
+ // "description": "Retrieves the specified Operations resource. Get a list of operations by making a list() request.",
// "httpMethod": "GET",
// "id": "compute.globalOperations.get",
// "parameterOrder": [
@@ -17195,7 +17356,9 @@ func (r *GlobalOperationsService) List(project string) *GlobalOperationsListCall
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -17207,7 +17370,7 @@ func (r *GlobalOperationsService) List(project string) *GlobalOperationsListCall
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *GlobalOperationsListCall) Filter(filter string) *GlobalOperationsListCall {
c.urlParams_.Set("filter", filter)
@@ -17215,10 +17378,10 @@ func (c *GlobalOperationsListCall) Filter(filter string) *GlobalOperationsListCa
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *GlobalOperationsListCall) MaxResults(maxResults int64) *GlobalOperationsListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -17322,13 +17485,13 @@ func (c *GlobalOperationsListCall) Do(opts ...googleapi.CallOption) (*OperationL
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -17517,7 +17680,8 @@ type HttpHealthChecksGetCall struct {
ctx_ context.Context
}
-// Get: Returns the specified HttpHealthCheck resource.
+// Get: Returns the specified HttpHealthCheck resource. Get a list of
+// available HTTP health checks by making a list() request.
// For details, see https://cloud.google.com/compute/docs/reference/latest/httpHealthChecks/get
func (r *HttpHealthChecksService) Get(project string, httpHealthCheck string) *HttpHealthChecksGetCall {
c := &HttpHealthChecksGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -17609,7 +17773,7 @@ func (c *HttpHealthChecksGetCall) Do(opts ...googleapi.CallOption) (*HttpHealthC
}
return ret, nil
// {
- // "description": "Returns the specified HttpHealthCheck resource.",
+ // "description": "Returns the specified HttpHealthCheck resource. Get a list of available HTTP health checks by making a list() request.",
// "httpMethod": "GET",
// "id": "compute.httpHealthChecks.get",
// "parameterOrder": [
@@ -17803,7 +17967,9 @@ func (r *HttpHealthChecksService) List(project string) *HttpHealthChecksListCall
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -17815,7 +17981,7 @@ func (r *HttpHealthChecksService) List(project string) *HttpHealthChecksListCall
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *HttpHealthChecksListCall) Filter(filter string) *HttpHealthChecksListCall {
c.urlParams_.Set("filter", filter)
@@ -17823,10 +17989,10 @@ func (c *HttpHealthChecksListCall) Filter(filter string) *HttpHealthChecksListCa
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *HttpHealthChecksListCall) MaxResults(maxResults int64) *HttpHealthChecksListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -17930,13 +18096,13 @@ func (c *HttpHealthChecksListCall) Do(opts ...googleapi.CallOption) (*HttpHealth
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -18397,7 +18563,8 @@ type HttpsHealthChecksGetCall struct {
ctx_ context.Context
}
-// Get: Returns the specified HttpsHealthCheck resource.
+// Get: Returns the specified HttpsHealthCheck resource. Get a list of
+// available HTTPS health checks by making a list() request.
func (r *HttpsHealthChecksService) Get(project string, httpsHealthCheck string) *HttpsHealthChecksGetCall {
c := &HttpsHealthChecksGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -18488,7 +18655,7 @@ func (c *HttpsHealthChecksGetCall) Do(opts ...googleapi.CallOption) (*HttpsHealt
}
return ret, nil
// {
- // "description": "Returns the specified HttpsHealthCheck resource.",
+ // "description": "Returns the specified HttpsHealthCheck resource. Get a list of available HTTPS health checks by making a list() request.",
// "httpMethod": "GET",
// "id": "compute.httpsHealthChecks.get",
// "parameterOrder": [
@@ -18680,7 +18847,9 @@ func (r *HttpsHealthChecksService) List(project string) *HttpsHealthChecksListCa
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -18692,7 +18861,7 @@ func (r *HttpsHealthChecksService) List(project string) *HttpsHealthChecksListCa
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *HttpsHealthChecksListCall) Filter(filter string) *HttpsHealthChecksListCall {
c.urlParams_.Set("filter", filter)
@@ -18700,10 +18869,10 @@ func (c *HttpsHealthChecksListCall) Filter(filter string) *HttpsHealthChecksList
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *HttpsHealthChecksListCall) MaxResults(maxResults int64) *HttpsHealthChecksListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -18807,13 +18976,13 @@ func (c *HttpsHealthChecksListCall) Do(opts ...googleapi.CallOption) (*HttpsHeal
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -19148,7 +19317,7 @@ type ImagesDeleteCall struct {
ctx_ context.Context
}
-// Delete: Deletes the specified image resource.
+// Delete: Deletes the specified image.
// For details, see https://cloud.google.com/compute/docs/reference/latest/images/delete
func (r *ImagesService) Delete(project string, image string) *ImagesDeleteCall {
c := &ImagesDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -19227,7 +19396,7 @@ func (c *ImagesDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, error)
}
return ret, nil
// {
- // "description": "Deletes the specified image resource.",
+ // "description": "Deletes the specified image.",
// "httpMethod": "DELETE",
// "id": "compute.images.delete",
// "parameterOrder": [
@@ -19411,7 +19580,8 @@ type ImagesGetCall struct {
ctx_ context.Context
}
-// Get: Returns the specified image resource.
+// Get: Returns the specified image. Get a list of available images by
+// making a list() request.
// For details, see https://cloud.google.com/compute/docs/reference/latest/images/get
func (r *ImagesService) Get(project string, image string) *ImagesGetCall {
c := &ImagesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -19503,7 +19673,7 @@ func (c *ImagesGetCall) Do(opts ...googleapi.CallOption) (*Image, error) {
}
return ret, nil
// {
- // "description": "Returns the specified image resource.",
+ // "description": "Returns the specified image. Get a list of available images by making a list() request.",
// "httpMethod": "GET",
// "id": "compute.images.get",
// "parameterOrder": [
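
The reworded `Get` comments all point at the same pattern: `Get` returns one named resource, and a `list()` request is how you discover which resources exist. A minimal, illustrative sketch of that pattern for images, reusing a `svc` value built as in the earlier example; the project and image names are placeholders, not values from this diff:

```go
// Placeholders only: "my-project" and "example-image" are illustrative names.
img, err := svc.Images.Get("my-project", "example-image").Do()
if err != nil {
	log.Fatal(err)
}
fmt.Println(img.Name, img.SelfLink)

// Discover available images by making a list() request, as the updated comments suggest.
imgs, err := svc.Images.List("my-project").Do()
if err != nil {
	log.Fatal(err)
}
for _, it := range imgs.Items {
	fmt.Println(it.Name)
}
```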
@@ -19549,8 +19719,8 @@ type ImagesInsertCall struct {
ctx_ context.Context
}
-// Insert: Creates an image resource in the specified project using the
-// data included in the request.
+// Insert: Creates an image in the specified project using the data
+// included in the request.
// For details, see https://cloud.google.com/compute/docs/reference/latest/images/insert
func (r *ImagesService) Insert(project string, image *Image) *ImagesInsertCall {
c := &ImagesInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -19634,7 +19804,7 @@ func (c *ImagesInsertCall) Do(opts ...googleapi.CallOption) (*Operation, error)
}
return ret, nil
// {
- // "description": "Creates an image resource in the specified project using the data included in the request.",
+ // "description": "Creates an image in the specified project using the data included in the request.",
// "httpMethod": "POST",
// "id": "compute.images.insert",
// "parameterOrder": [
@@ -19707,7 +19877,9 @@ func (r *ImagesService) List(project string) *ImagesListCall {
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -19719,7 +19891,7 @@ func (r *ImagesService) List(project string) *ImagesListCall {
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *ImagesListCall) Filter(filter string) *ImagesListCall {
c.urlParams_.Set("filter", filter)
@@ -19727,10 +19899,10 @@ func (c *ImagesListCall) Filter(filter string) *ImagesListCall {
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *ImagesListCall) MaxResults(maxResults int64) *ImagesListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -19834,13 +20006,13 @@ func (c *ImagesListCall) Do(opts ...googleapi.CallOption) (*ImageList, error) {
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -20077,7 +20249,9 @@ func (r *InstanceGroupManagersService) AggregatedList(project string) *InstanceG
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -20089,7 +20263,7 @@ func (r *InstanceGroupManagersService) AggregatedList(project string) *InstanceG
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *InstanceGroupManagersAggregatedListCall) Filter(filter string) *InstanceGroupManagersAggregatedListCall {
c.urlParams_.Set("filter", filter)
@@ -20097,10 +20271,10 @@ func (c *InstanceGroupManagersAggregatedListCall) Filter(filter string) *Instanc
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *InstanceGroupManagersAggregatedListCall) MaxResults(maxResults int64) *InstanceGroupManagersAggregatedListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -20205,13 +20379,13 @@ func (c *InstanceGroupManagersAggregatedListCall) Do(opts ...googleapi.CallOptio
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -20563,7 +20737,8 @@ type InstanceGroupManagersGetCall struct {
}
// Get: Returns all of the details about the specified managed instance
-// group.
+// group. Get a list of available managed instance groups by making a
+// list() request.
func (r *InstanceGroupManagersService) Get(project string, zone string, instanceGroupManager string) *InstanceGroupManagersGetCall {
c := &InstanceGroupManagersGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -20656,7 +20831,7 @@ func (c *InstanceGroupManagersGetCall) Do(opts ...googleapi.CallOption) (*Instan
}
return ret, nil
// {
- // "description": "Returns all of the details about the specified managed instance group.",
+ // "description": "Returns all of the details about the specified managed instance group. Get a list of available managed instance groups by making a list() request.",
// "httpMethod": "GET",
// "id": "compute.instanceGroupManagers.get",
// "parameterOrder": [
@@ -20871,7 +21046,9 @@ func (r *InstanceGroupManagersService) List(project string, zone string) *Instan
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -20883,7 +21060,7 @@ func (r *InstanceGroupManagersService) List(project string, zone string) *Instan
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *InstanceGroupManagersListCall) Filter(filter string) *InstanceGroupManagersListCall {
c.urlParams_.Set("filter", filter)
@@ -20891,10 +21068,10 @@ func (c *InstanceGroupManagersListCall) Filter(filter string) *InstanceGroupMana
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *InstanceGroupManagersListCall) MaxResults(maxResults int64) *InstanceGroupManagersListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -21000,13 +21177,13 @@ func (c *InstanceGroupManagersListCall) Do(opts ...googleapi.CallOption) (*Insta
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -21809,7 +21986,8 @@ type InstanceGroupsAddInstancesCall struct {
}
// AddInstances: Adds a list of instances to the specified instance
-// group. Read Adding instances for more information.
+// group. All of the instances in the instance group must be in the same
+// network/subnetwork. Read Adding instances for more information.
func (r *InstanceGroupsService) AddInstances(project string, zone string, instanceGroup string, instancegroupsaddinstancesrequest *InstanceGroupsAddInstancesRequest) *InstanceGroupsAddInstancesCall {
c := &InstanceGroupsAddInstancesCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -21896,7 +22074,7 @@ func (c *InstanceGroupsAddInstancesCall) Do(opts ...googleapi.CallOption) (*Oper
}
return ret, nil
// {
- // "description": "Adds a list of instances to the specified instance group. Read Adding instances for more information.",
+ // "description": "Adds a list of instances to the specified instance group. All of the instances in the instance group must be in the same network/subnetwork. Read Adding instances for more information.",
// "httpMethod": "POST",
// "id": "compute.instanceGroups.addInstances",
// "parameterOrder": [
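
The `AddInstances` wording change adds the constraint that every instance added to the instance group must be in the same network/subnetwork. As a hedged sketch (again not part of the vendored diff) of what that call looks like with the generated client, reusing `svc` from the first example; the project, zone, group, and instance URL are placeholders, and `Instance` is assumed to take the instance's full resource URL:

```go
// Placeholders throughout; all instances referenced here must share a network/subnetwork.
req := &compute.InstanceGroupsAddInstancesRequest{
	Instances: []*compute.InstanceReference{
		{Instance: "https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1-f/instances/example-instance"},
	},
}
op, err := svc.InstanceGroups.AddInstances("my-project", "us-central1-f", "my-group", req).Do()
if err != nil {
	log.Fatal(err)
}
fmt.Println("operation:", op.Name, op.Status)
```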
@@ -21972,7 +22150,9 @@ func (r *InstanceGroupsService) AggregatedList(project string) *InstanceGroupsAg
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -21984,7 +22164,7 @@ func (r *InstanceGroupsService) AggregatedList(project string) *InstanceGroupsAg
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *InstanceGroupsAggregatedListCall) Filter(filter string) *InstanceGroupsAggregatedListCall {
c.urlParams_.Set("filter", filter)
@@ -21992,10 +22172,10 @@ func (c *InstanceGroupsAggregatedListCall) Filter(filter string) *InstanceGroups
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *InstanceGroupsAggregatedListCall) MaxResults(maxResults int64) *InstanceGroupsAggregatedListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -22099,13 +22279,13 @@ func (c *InstanceGroupsAggregatedListCall) Do(opts ...googleapi.CallOption) (*In
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -22306,7 +22486,8 @@ type InstanceGroupsGetCall struct {
ctx_ context.Context
}
-// Get: Returns the specified instance group resource.
+// Get: Returns the specified instance group. Get a list of available
+// instance groups by making a list() request.
func (r *InstanceGroupsService) Get(project string, zone string, instanceGroup string) *InstanceGroupsGetCall {
c := &InstanceGroupsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -22399,7 +22580,7 @@ func (c *InstanceGroupsGetCall) Do(opts ...googleapi.CallOption) (*InstanceGroup
}
return ret, nil
// {
- // "description": "Returns the specified instance group resource.",
+ // "description": "Returns the specified instance group. Get a list of available instance groups by making a list() request.",
// "httpMethod": "GET",
// "id": "compute.instanceGroups.get",
// "parameterOrder": [
@@ -22609,7 +22790,9 @@ func (r *InstanceGroupsService) List(project string, zone string) *InstanceGroup
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -22621,7 +22804,7 @@ func (r *InstanceGroupsService) List(project string, zone string) *InstanceGroup
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *InstanceGroupsListCall) Filter(filter string) *InstanceGroupsListCall {
c.urlParams_.Set("filter", filter)
@@ -22629,10 +22812,10 @@ func (c *InstanceGroupsListCall) Filter(filter string) *InstanceGroupsListCall {
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *InstanceGroupsListCall) MaxResults(maxResults int64) *InstanceGroupsListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
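// Usage sketch (illustrative only, not part of the generated API surface):
// the filter expression documented above can be applied to a list call
// before executing it with Do. Here svc is assumed to be a *compute.Service
// built with compute.New and an authenticated *http.Client, and the project
// and zone values are placeholders for the example; fmt and log are assumed
// to be imported by the caller.
//
//	// List instance groups whose name is not "example-instance".
//	groups, err := svc.InstanceGroups.List("my-project", "us-central1-f").
//		Filter("name ne example-instance").
//		Do()
//	if err != nil {
//		log.Fatal(err)
//	}
//	for _, g := range groups.Items {
//		fmt.Println(g.Name)
//	}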
@@ -22738,13 +22921,13 @@ func (c *InstanceGroupsListCall) Do(opts ...googleapi.CallOption) (*InstanceGrou
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -22840,7 +23023,9 @@ func (r *InstanceGroupsService) ListInstances(project string, zone string, insta
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -22852,7 +23037,7 @@ func (r *InstanceGroupsService) ListInstances(project string, zone string, insta
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *InstanceGroupsListInstancesCall) Filter(filter string) *InstanceGroupsListInstancesCall {
c.urlParams_.Set("filter", filter)
@@ -22860,10 +23045,10 @@ func (c *InstanceGroupsListInstancesCall) Filter(filter string) *InstanceGroupsL
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *InstanceGroupsListInstancesCall) MaxResults(maxResults int64) *InstanceGroupsListInstancesCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -22964,7 +23149,7 @@ func (c *InstanceGroupsListInstancesCall) Do(opts ...googleapi.CallOption) (*Ins
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
@@ -22976,7 +23161,7 @@ func (c *InstanceGroupsListInstancesCall) Do(opts ...googleapi.CallOption) (*Ins
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -23315,7 +23500,11 @@ type InstanceTemplatesDeleteCall struct {
ctx_ context.Context
}
-// Delete: Deletes the specified instance template.
+// Delete: Deletes the specified instance template. If you delete an
+// instance template that is being referenced from another instance
+// group, the instance group will not be able to create or recreate
+// virtual machine instances. Deleting an instance template is permanent
+// and cannot be undone.
// For details, see https://cloud.google.com/compute/docs/reference/latest/instanceTemplates/delete
func (r *InstanceTemplatesService) Delete(project string, instanceTemplate string) *InstanceTemplatesDeleteCall {
c := &InstanceTemplatesDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -23394,7 +23583,7 @@ func (c *InstanceTemplatesDeleteCall) Do(opts ...googleapi.CallOption) (*Operati
}
return ret, nil
// {
- // "description": "Deletes the specified instance template.",
+ // "description": "Deletes the specified instance template. If you delete an instance template that is being referenced from another instance group, the instance group will not be able to create or recreate virtual machine instances. Deleting an instance template is permanent and cannot be undone.",
// "httpMethod": "DELETE",
// "id": "compute.instanceTemplates.delete",
// "parameterOrder": [
@@ -23440,7 +23629,8 @@ type InstanceTemplatesGetCall struct {
ctx_ context.Context
}
-// Get: Returns the specified instance template resource.
+// Get: Returns the specified instance template. Get a list of available
+// instance templates by making a list() request.
// For details, see https://cloud.google.com/compute/docs/reference/latest/instanceTemplates/get
func (r *InstanceTemplatesService) Get(project string, instanceTemplate string) *InstanceTemplatesGetCall {
c := &InstanceTemplatesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -23532,7 +23722,7 @@ func (c *InstanceTemplatesGetCall) Do(opts ...googleapi.CallOption) (*InstanceTe
}
return ret, nil
// {
- // "description": "Returns the specified instance template resource.",
+ // "description": "Returns the specified instance template. Get a list of available instance templates by making a list() request.",
// "httpMethod": "GET",
// "id": "compute.instanceTemplates.get",
// "parameterOrder": [
@@ -23729,7 +23919,9 @@ func (r *InstanceTemplatesService) List(project string) *InstanceTemplatesListCa
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -23741,7 +23933,7 @@ func (r *InstanceTemplatesService) List(project string) *InstanceTemplatesListCa
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *InstanceTemplatesListCall) Filter(filter string) *InstanceTemplatesListCall {
c.urlParams_.Set("filter", filter)
@@ -23749,10 +23941,10 @@ func (c *InstanceTemplatesListCall) Filter(filter string) *InstanceTemplatesList
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *InstanceTemplatesListCall) MaxResults(maxResults int64) *InstanceTemplatesListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -23856,13 +24048,13 @@ func (c *InstanceTemplatesListCall) Do(opts ...googleapi.CallOption) (*InstanceT
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -24081,7 +24273,7 @@ type InstancesAggregatedListCall struct {
ctx_ context.Context
}
-// AggregatedList: Retrieves aggregated list of instance resources.
+// AggregatedList: Retrieves aggregated list of instances.
// For details, see https://cloud.google.com/compute/docs/reference/latest/instances/aggregatedList
func (r *InstancesService) AggregatedList(project string) *InstancesAggregatedListCall {
c := &InstancesAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -24103,7 +24295,9 @@ func (r *InstancesService) AggregatedList(project string) *InstancesAggregatedLi
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -24115,7 +24309,7 @@ func (r *InstancesService) AggregatedList(project string) *InstancesAggregatedLi
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *InstancesAggregatedListCall) Filter(filter string) *InstancesAggregatedListCall {
c.urlParams_.Set("filter", filter)
@@ -24123,10 +24317,10 @@ func (c *InstancesAggregatedListCall) Filter(filter string) *InstancesAggregated
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *InstancesAggregatedListCall) MaxResults(maxResults int64) *InstancesAggregatedListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
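// Usage sketch (illustrative only): an aggregated list groups results by
// scope, so its Items field is a map keyed by zone rather than a flat slice.
// This assumes the scoped-list types generated alongside this call
// (InstancesScopedList with an Instances slice); svc and the project name
// are placeholders for the example.
//
//	agg, err := svc.Instances.AggregatedList("my-project").Do()
//	if err != nil {
//		log.Fatal(err)
//	}
//	for scope, scoped := range agg.Items {
//		for _, inst := range scoped.Instances {
//			fmt.Println(scope, inst.Name)
//		}
//	}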
@@ -24222,7 +24416,7 @@ func (c *InstancesAggregatedListCall) Do(opts ...googleapi.CallOption) (*Instanc
}
return ret, nil
// {
- // "description": "Retrieves aggregated list of instance resources.",
+ // "description": "Retrieves aggregated list of instances.",
// "httpMethod": "GET",
// "id": "compute.instances.aggregatedList",
// "parameterOrder": [
@@ -24230,13 +24424,13 @@ func (c *InstancesAggregatedListCall) Do(opts ...googleapi.CallOption) (*Instanc
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -24400,7 +24594,7 @@ func (c *InstancesAttachDiskCall) Do(opts ...googleapi.CallOption) (*Operation,
// ],
// "parameters": {
// "instance": {
- // "description": "Instance name.",
+ // "description": "The instance name for this request.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -24880,7 +25074,8 @@ type InstancesGetCall struct {
ctx_ context.Context
}
-// Get: Returns the specified instance resource.
+// Get: Returns the specified Instance resource. Get a list of available
+// instances by making a list() request.
// For details, see https://cloud.google.com/compute/docs/reference/latest/instances/get
func (r *InstancesService) Get(project string, zone string, instance string) *InstancesGetCall {
c := &InstancesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -24974,7 +25169,7 @@ func (c *InstancesGetCall) Do(opts ...googleapi.CallOption) (*Instance, error) {
}
return ret, nil
// {
- // "description": "Returns the specified instance resource.",
+ // "description": "Returns the specified Instance resource. Get a list of available instances by making a list() request.",
// "httpMethod": "GET",
// "id": "compute.instances.get",
// "parameterOrder": [
@@ -25332,8 +25527,8 @@ type InstancesListCall struct {
ctx_ context.Context
}
-// List: Retrieves the list of instance resources contained within the
-// specified zone.
+// List: Retrieves the list of instances contained within the specified
+// zone.
// For details, see https://cloud.google.com/compute/docs/reference/latest/instances/list
func (r *InstancesService) List(project string, zone string) *InstancesListCall {
c := &InstancesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -25356,7 +25551,9 @@ func (r *InstancesService) List(project string, zone string) *InstancesListCall
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -25368,7 +25565,7 @@ func (r *InstancesService) List(project string, zone string) *InstancesListCall
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *InstancesListCall) Filter(filter string) *InstancesListCall {
c.urlParams_.Set("filter", filter)
@@ -25376,10 +25573,10 @@ func (c *InstancesListCall) Filter(filter string) *InstancesListCall {
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *InstancesListCall) MaxResults(maxResults int64) *InstancesListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
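// Usage sketch (illustrative only): MaxResults caps the page size, and the
// NextPageToken on each response can be fed back through the standard
// generated PageToken setter to walk the remaining pages. svc, the project,
// and the zone are placeholders for the example.
//
//	call := svc.Instances.List("my-project", "us-central1-f").MaxResults(100)
//	for {
//		page, err := call.Do()
//		if err != nil {
//			log.Fatal(err)
//		}
//		for _, inst := range page.Items {
//			fmt.Println(inst.Name)
//		}
//		if page.NextPageToken == "" {
//			break
//		}
//		call.PageToken(page.NextPageToken)
//	}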
@@ -25476,7 +25673,7 @@ func (c *InstancesListCall) Do(opts ...googleapi.CallOption) (*InstanceList, err
}
return ret, nil
// {
- // "description": "Retrieves the list of instance resources contained within the specified zone.",
+ // "description": "Retrieves the list of instances contained within the specified zone.",
// "httpMethod": "GET",
// "id": "compute.instances.list",
// "parameterOrder": [
@@ -25485,13 +25682,13 @@ func (c *InstancesListCall) Do(opts ...googleapi.CallOption) (*InstanceList, err
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -26715,7 +26912,8 @@ type LicensesGetCall struct {
ctx_ context.Context
}
-// Get: Returns the specified license resource.
+// Get: Returns the specified License resource. Get a list of available
+// licenses by making a list() request.
// For details, see https://cloud.google.com/compute/docs/reference/latest/licenses/get
func (r *LicensesService) Get(project string, license string) *LicensesGetCall {
c := &LicensesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -26807,7 +27005,7 @@ func (c *LicensesGetCall) Do(opts ...googleapi.CallOption) (*License, error) {
}
return ret, nil
// {
- // "description": "Returns the specified license resource.",
+ // "description": "Returns the specified License resource. Get a list of available licenses by making a list() request.",
// "httpMethod": "GET",
// "id": "compute.licenses.get",
// "parameterOrder": [
@@ -26816,7 +27014,7 @@ func (c *LicensesGetCall) Do(opts ...googleapi.CallOption) (*License, error) {
// ],
// "parameters": {
// "license": {
- // "description": "Name of the license resource to return.",
+ // "description": "Name of the License resource to return.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -26853,8 +27051,7 @@ type MachineTypesAggregatedListCall struct {
ctx_ context.Context
}
-// AggregatedList: Retrieves an aggregated list of machine type
-// resources.
+// AggregatedList: Retrieves an aggregated list of machine types.
// For details, see https://cloud.google.com/compute/docs/reference/latest/machineTypes/aggregatedList
func (r *MachineTypesService) AggregatedList(project string) *MachineTypesAggregatedListCall {
c := &MachineTypesAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -26876,7 +27073,9 @@ func (r *MachineTypesService) AggregatedList(project string) *MachineTypesAggreg
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -26888,7 +27087,7 @@ func (r *MachineTypesService) AggregatedList(project string) *MachineTypesAggreg
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *MachineTypesAggregatedListCall) Filter(filter string) *MachineTypesAggregatedListCall {
c.urlParams_.Set("filter", filter)
@@ -26896,10 +27095,10 @@ func (c *MachineTypesAggregatedListCall) Filter(filter string) *MachineTypesAggr
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *MachineTypesAggregatedListCall) MaxResults(maxResults int64) *MachineTypesAggregatedListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -26995,7 +27194,7 @@ func (c *MachineTypesAggregatedListCall) Do(opts ...googleapi.CallOption) (*Mach
}
return ret, nil
// {
- // "description": "Retrieves an aggregated list of machine type resources.",
+ // "description": "Retrieves an aggregated list of machine types.",
// "httpMethod": "GET",
// "id": "compute.machineTypes.aggregatedList",
// "parameterOrder": [
@@ -27003,13 +27202,13 @@ func (c *MachineTypesAggregatedListCall) Do(opts ...googleapi.CallOption) (*Mach
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -27075,7 +27274,8 @@ type MachineTypesGetCall struct {
ctx_ context.Context
}
-// Get: Returns the specified machine type resource.
+// Get: Returns the specified machine type. Get a list of available
+// machine types by making a list() request.
// For details, see https://cloud.google.com/compute/docs/reference/latest/machineTypes/get
func (r *MachineTypesService) Get(project string, zone string, machineType string) *MachineTypesGetCall {
c := &MachineTypesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -27169,7 +27369,7 @@ func (c *MachineTypesGetCall) Do(opts ...googleapi.CallOption) (*MachineType, er
}
return ret, nil
// {
- // "description": "Returns the specified machine type resource.",
+ // "description": "Returns the specified machine type. Get a list of available machine types by making a list() request.",
// "httpMethod": "GET",
// "id": "compute.machineTypes.get",
// "parameterOrder": [
@@ -27179,7 +27379,7 @@ func (c *MachineTypesGetCall) Do(opts ...googleapi.CallOption) (*MachineType, er
// ],
// "parameters": {
// "machineType": {
- // "description": "Name of the machine type resource to return.",
+ // "description": "Name of the machine type to return.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -27224,8 +27424,8 @@ type MachineTypesListCall struct {
ctx_ context.Context
}
-// List: Retrieves a list of machine type resources available to the
-// specified project.
+// List: Retrieves a list of machine types available to the specified
+// project.
// For details, see https://cloud.google.com/compute/docs/reference/latest/machineTypes/list
func (r *MachineTypesService) List(project string, zone string) *MachineTypesListCall {
c := &MachineTypesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -27248,7 +27448,9 @@ func (r *MachineTypesService) List(project string, zone string) *MachineTypesLis
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -27260,7 +27462,7 @@ func (r *MachineTypesService) List(project string, zone string) *MachineTypesLis
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *MachineTypesListCall) Filter(filter string) *MachineTypesListCall {
c.urlParams_.Set("filter", filter)
@@ -27268,10 +27470,10 @@ func (c *MachineTypesListCall) Filter(filter string) *MachineTypesListCall {
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *MachineTypesListCall) MaxResults(maxResults int64) *MachineTypesListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -27368,7 +27570,7 @@ func (c *MachineTypesListCall) Do(opts ...googleapi.CallOption) (*MachineTypeLis
}
return ret, nil
// {
- // "description": "Retrieves a list of machine type resources available to the specified project.",
+ // "description": "Retrieves a list of machine types available to the specified project.",
// "httpMethod": "GET",
// "id": "compute.machineTypes.list",
// "parameterOrder": [
@@ -27377,13 +27579,13 @@ func (c *MachineTypesListCall) Do(opts ...googleapi.CallOption) (*MachineTypeLis
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -27454,7 +27656,7 @@ type NetworksDeleteCall struct {
ctx_ context.Context
}
-// Delete: Deletes the specified network resource.
+// Delete: Deletes the specified network.
// For details, see https://cloud.google.com/compute/docs/reference/latest/networks/delete
func (r *NetworksService) Delete(project string, network string) *NetworksDeleteCall {
c := &NetworksDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -27533,7 +27735,7 @@ func (c *NetworksDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, error
}
return ret, nil
// {
- // "description": "Deletes the specified network resource.",
+ // "description": "Deletes the specified network.",
// "httpMethod": "DELETE",
// "id": "compute.networks.delete",
// "parameterOrder": [
@@ -27542,7 +27744,7 @@ func (c *NetworksDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, error
// ],
// "parameters": {
// "network": {
- // "description": "Name of the network resource to delete.",
+ // "description": "Name of the network to delete.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -27579,7 +27781,8 @@ type NetworksGetCall struct {
ctx_ context.Context
}
-// Get: Returns the specified network resource.
+// Get: Returns the specified network. Get a list of available networks
+// by making a list() request.
// For details, see https://cloud.google.com/compute/docs/reference/latest/networks/get
func (r *NetworksService) Get(project string, network string) *NetworksGetCall {
c := &NetworksGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -27671,7 +27874,7 @@ func (c *NetworksGetCall) Do(opts ...googleapi.CallOption) (*Network, error) {
}
return ret, nil
// {
- // "description": "Returns the specified network resource.",
+ // "description": "Returns the specified network. Get a list of available networks by making a list() request.",
// "httpMethod": "GET",
// "id": "compute.networks.get",
// "parameterOrder": [
@@ -27680,7 +27883,7 @@ func (c *NetworksGetCall) Do(opts ...googleapi.CallOption) (*Network, error) {
// ],
// "parameters": {
// "network": {
- // "description": "Name of the network resource to return.",
+ // "description": "Name of the network to return.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -27717,8 +27920,8 @@ type NetworksInsertCall struct {
ctx_ context.Context
}
-// Insert: Creates a network resource in the specified project using the
-// data included in the request.
+// Insert: Creates a network in the specified project using the data
+// included in the request.
// For details, see https://cloud.google.com/compute/docs/reference/latest/networks/insert
func (r *NetworksService) Insert(project string, network *Network) *NetworksInsertCall {
c := &NetworksInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -27802,7 +28005,7 @@ func (c *NetworksInsertCall) Do(opts ...googleapi.CallOption) (*Operation, error
}
return ret, nil
// {
- // "description": "Creates a network resource in the specified project using the data included in the request.",
+ // "description": "Creates a network in the specified project using the data included in the request.",
// "httpMethod": "POST",
// "id": "compute.networks.insert",
// "parameterOrder": [
@@ -27842,8 +28045,8 @@ type NetworksListCall struct {
ctx_ context.Context
}
-// List: Retrieves the list of network resources available to the
-// specified project.
+// List: Retrieves the list of networks available to the specified
+// project.
// For details, see https://cloud.google.com/compute/docs/reference/latest/networks/list
func (r *NetworksService) List(project string) *NetworksListCall {
c := &NetworksListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -27865,7 +28068,9 @@ func (r *NetworksService) List(project string) *NetworksListCall {
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -27877,7 +28082,7 @@ func (r *NetworksService) List(project string) *NetworksListCall {
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *NetworksListCall) Filter(filter string) *NetworksListCall {
c.urlParams_.Set("filter", filter)
@@ -27885,10 +28090,10 @@ func (c *NetworksListCall) Filter(filter string) *NetworksListCall {
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *NetworksListCall) MaxResults(maxResults int64) *NetworksListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
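The Filter and MaxResults doc comments rewritten in the hunks above describe how list calls are filtered and paginated. As a rough illustration only, the sketch below drives Networks.List from application code; it assumes Application Default Credentials are configured, and the project ID "my-project" and the "name ne default" filter are placeholders rather than anything defined in this file.

package main

import (
	"fmt"
	"log"

	"golang.org/x/oauth2"
	"golang.org/x/oauth2/google"
	compute "google.golang.org/api/compute/v1"
)

func main() {
	// Build an authenticated client (assumes Application Default Credentials).
	client, err := google.DefaultClient(oauth2.NoContext, compute.ComputeReadonlyScope)
	if err != nil {
		log.Fatal(err)
	}
	svc, err := compute.New(client)
	if err != nil {
		log.Fatal(err)
	}

	// Exclude the network named "default" and request at most 50 results per
	// page, following nextPageToken exactly as the doc comment describes.
	call := svc.Networks.List("my-project").Filter("name ne default").MaxResults(50)
	for {
		page, err := call.Do()
		if err != nil {
			log.Fatal(err)
		}
		for _, n := range page.Items {
			fmt.Println(n.Name)
		}
		if page.NextPageToken == "" {
			break
		}
		call.PageToken(page.NextPageToken)
	}
}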
@@ -27984,7 +28189,7 @@ func (c *NetworksListCall) Do(opts ...googleapi.CallOption) (*NetworkList, error
}
return ret, nil
// {
- // "description": "Retrieves the list of network resources available to the specified project.",
+ // "description": "Retrieves the list of networks available to the specified project.",
// "httpMethod": "GET",
// "id": "compute.networks.list",
// "parameterOrder": [
@@ -27992,13 +28197,13 @@ func (c *NetworksListCall) Do(opts ...googleapi.CallOption) (*NetworkList, error
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -28062,7 +28267,7 @@ type ProjectsGetCall struct {
ctx_ context.Context
}
-// Get: Returns the specified project resource.
+// Get: Returns the specified Project resource.
// For details, see https://cloud.google.com/compute/docs/reference/latest/projects/get
func (r *ProjectsService) Get(project string) *ProjectsGetCall {
c := &ProjectsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -28152,7 +28357,7 @@ func (c *ProjectsGetCall) Do(opts ...googleapi.CallOption) (*Project, error) {
}
return ret, nil
// {
- // "description": "Returns the specified project resource.",
+ // "description": "Returns the specified Project resource.",
// "httpMethod": "GET",
// "id": "compute.projects.get",
// "parameterOrder": [
@@ -28774,7 +28979,7 @@ func (c *RegionOperationsDeleteCall) Do(opts ...googleapi.CallOption) error {
// "type": "string"
// },
// "region": {
- // "description": "Name of the region scoping this request.",
+ // "description": "Name of the region for this request.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -28920,7 +29125,7 @@ func (c *RegionOperationsGetCall) Do(opts ...googleapi.CallOption) (*Operation,
// "type": "string"
// },
// "region": {
- // "description": "Name of the region scoping this request.",
+ // "description": "Name of the region for this request.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -28975,7 +29180,9 @@ func (r *RegionOperationsService) List(project string, region string) *RegionOpe
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -28987,7 +29194,7 @@ func (r *RegionOperationsService) List(project string, region string) *RegionOpe
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *RegionOperationsListCall) Filter(filter string) *RegionOperationsListCall {
c.urlParams_.Set("filter", filter)
@@ -28995,10 +29202,10 @@ func (c *RegionOperationsListCall) Filter(filter string) *RegionOperationsListCa
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *RegionOperationsListCall) MaxResults(maxResults int64) *RegionOperationsListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
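The same filter and pagination comments recur here for region operations. Assuming the svc value and imports from the sketch above, plus golang.org/x/net/context, and assuming this snapshot generates the Pages helper for list calls, iterating over unfinished operations in a region might look like this:

func listUnfinishedRegionOps(svc *compute.Service, project, region string) error {
	// Stream every operation in the region that is not yet DONE; Pages follows
	// nextPageToken internally, so no manual paging loop is needed.
	ctx := context.TODO()
	return svc.RegionOperations.List(project, region).
		Filter("status ne DONE").
		Pages(ctx, func(page *compute.OperationList) error {
			for _, op := range page.Items {
				fmt.Printf("%s: %s\n", op.Name, op.Status)
			}
			return nil
		})
}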
@@ -29104,13 +29311,13 @@ func (c *RegionOperationsListCall) Do(opts ...googleapi.CallOption) (*OperationL
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -29130,7 +29337,7 @@ func (c *RegionOperationsListCall) Do(opts ...googleapi.CallOption) (*OperationL
// "type": "string"
// },
// "region": {
- // "description": "Name of the region scoping this request.",
+ // "description": "Name of the region for this request.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -29182,7 +29389,8 @@ type RegionsGetCall struct {
ctx_ context.Context
}
-// Get: Returns the specified region resource.
+// Get: Returns the specified Region resource. Get a list of available
+// regions by making a list() request.
// For details, see https://cloud.google.com/compute/docs/reference/latest/regions/get
func (r *RegionsService) Get(project string, region string) *RegionsGetCall {
c := &RegionsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -29274,7 +29482,7 @@ func (c *RegionsGetCall) Do(opts ...googleapi.CallOption) (*Region, error) {
}
return ret, nil
// {
- // "description": "Returns the specified region resource.",
+ // "description": "Returns the specified Region resource. Get a list of available regions by making a list() request.",
// "httpMethod": "GET",
// "id": "compute.regions.get",
// "parameterOrder": [
@@ -29343,7 +29551,9 @@ func (r *RegionsService) List(project string) *RegionsListCall {
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -29355,7 +29565,7 @@ func (r *RegionsService) List(project string) *RegionsListCall {
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *RegionsListCall) Filter(filter string) *RegionsListCall {
c.urlParams_.Set("filter", filter)
@@ -29363,10 +29573,10 @@ func (c *RegionsListCall) Filter(filter string) *RegionsListCall {
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *RegionsListCall) MaxResults(maxResults int64) *RegionsListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
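The Regions hunks above change the Get doc comment to say that available regions are discovered with a list() request. A minimal sketch of that pattern, reusing the hypothetical svc from the first example:

func describeOneRegion(svc *compute.Service, project string) (*compute.Region, error) {
	// List available regions first, then fetch full details for one of them.
	list, err := svc.Regions.List(project).MaxResults(1).Do()
	if err != nil {
		return nil, err
	}
	if len(list.Items) == 0 {
		return nil, fmt.Errorf("project %q reports no regions", project)
	}
	return svc.Regions.Get(project, list.Items[0].Name).Do()
}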
@@ -29470,13 +29680,13 @@ func (c *RegionsListCall) Do(opts ...googleapi.CallOption) (*RegionList, error)
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -29540,7 +29750,7 @@ type RoutesDeleteCall struct {
ctx_ context.Context
}
-// Delete: Deletes the specified route resource.
+// Delete: Deletes the specified Route resource.
// For details, see https://cloud.google.com/compute/docs/reference/latest/routes/delete
func (r *RoutesService) Delete(project string, route string) *RoutesDeleteCall {
c := &RoutesDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -29619,7 +29829,7 @@ func (c *RoutesDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, error)
}
return ret, nil
// {
- // "description": "Deletes the specified route resource.",
+ // "description": "Deletes the specified Route resource.",
// "httpMethod": "DELETE",
// "id": "compute.routes.delete",
// "parameterOrder": [
@@ -29635,7 +29845,7 @@ func (c *RoutesDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, error)
// "type": "string"
// },
// "route": {
- // "description": "Name of the route resource to delete.",
+ // "description": "Name of the Route resource to delete.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -29665,7 +29875,8 @@ type RoutesGetCall struct {
ctx_ context.Context
}
-// Get: Returns the specified route resource.
+// Get: Returns the specified Route resource. Get a list of available
+// routes by making a list() request.
// For details, see https://cloud.google.com/compute/docs/reference/latest/routes/get
func (r *RoutesService) Get(project string, route string) *RoutesGetCall {
c := &RoutesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -29757,7 +29968,7 @@ func (c *RoutesGetCall) Do(opts ...googleapi.CallOption) (*Route, error) {
}
return ret, nil
// {
- // "description": "Returns the specified route resource.",
+ // "description": "Returns the specified Route resource. Get a list of available routes by making a list() request.",
// "httpMethod": "GET",
// "id": "compute.routes.get",
// "parameterOrder": [
@@ -29773,7 +29984,7 @@ func (c *RoutesGetCall) Do(opts ...googleapi.CallOption) (*Route, error) {
// "type": "string"
// },
// "route": {
- // "description": "Name of the route resource to return.",
+ // "description": "Name of the Route resource to return.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -29803,7 +30014,7 @@ type RoutesInsertCall struct {
ctx_ context.Context
}
-// Insert: Creates a route resource in the specified project using the
+// Insert: Creates a Route resource in the specified project using the
// data included in the request.
// For details, see https://cloud.google.com/compute/docs/reference/latest/routes/insert
func (r *RoutesService) Insert(project string, route *Route) *RoutesInsertCall {
@@ -29888,7 +30099,7 @@ func (c *RoutesInsertCall) Do(opts ...googleapi.CallOption) (*Operation, error)
}
return ret, nil
// {
- // "description": "Creates a route resource in the specified project using the data included in the request.",
+ // "description": "Creates a Route resource in the specified project using the data included in the request.",
// "httpMethod": "POST",
// "id": "compute.routes.insert",
// "parameterOrder": [
@@ -29928,7 +30139,7 @@ type RoutesListCall struct {
ctx_ context.Context
}
-// List: Retrieves the list of route resources available to the
+// List: Retrieves the list of Route resources available to the
// specified project.
// For details, see https://cloud.google.com/compute/docs/reference/latest/routes/list
func (r *RoutesService) List(project string) *RoutesListCall {
@@ -29951,7 +30162,9 @@ func (r *RoutesService) List(project string) *RoutesListCall {
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -29963,7 +30176,7 @@ func (r *RoutesService) List(project string) *RoutesListCall {
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *RoutesListCall) Filter(filter string) *RoutesListCall {
c.urlParams_.Set("filter", filter)
@@ -29971,10 +30184,10 @@ func (c *RoutesListCall) Filter(filter string) *RoutesListCall {
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *RoutesListCall) MaxResults(maxResults int64) *RoutesListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
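The Routes hunks above retitle the Insert, Get, and List doc comments around the Route resource. A sketch of creating such a resource with the same hypothetical svc; the name, destination range, and next hop below are placeholders, not values taken from this file:

func addDefaultInternetRoute(svc *compute.Service, project, networkURL string) (*compute.Operation, error) {
	route := &compute.Route{
		Name:           "example-default-route",
		Network:        networkURL, // fully qualified network URL
		DestRange:      "0.0.0.0/0",
		NextHopGateway: "projects/" + project + "/global/gateways/default-internet-gateway",
		Priority:       1000,
	}
	// Insert returns an Operation; callers typically poll it (for example via
	// GlobalOperations.Get) until it reports DONE.
	return svc.Routes.Insert(project, route).Do()
}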
@@ -30070,7 +30283,7 @@ func (c *RoutesListCall) Do(opts ...googleapi.CallOption) (*RouteList, error) {
}
return ret, nil
// {
- // "description": "Retrieves the list of route resources available to the specified project.",
+ // "description": "Retrieves the list of Route resources available to the specified project.",
// "httpMethod": "GET",
// "id": "compute.routes.list",
// "parameterOrder": [
@@ -30078,13 +30291,13 @@ func (c *RoutesListCall) Do(opts ...googleapi.CallOption) (*RouteList, error) {
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -30279,7 +30492,8 @@ type SnapshotsGetCall struct {
ctx_ context.Context
}
-// Get: Returns the specified Snapshot resource.
+// Get: Returns the specified Snapshot resource. Get a list of available
+// snapshots by making a list() request.
// For details, see https://cloud.google.com/compute/docs/reference/latest/snapshots/get
func (r *SnapshotsService) Get(project string, snapshot string) *SnapshotsGetCall {
c := &SnapshotsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -30371,7 +30585,7 @@ func (c *SnapshotsGetCall) Do(opts ...googleapi.CallOption) (*Snapshot, error) {
}
return ret, nil
// {
- // "description": "Returns the specified Snapshot resource.",
+ // "description": "Returns the specified Snapshot resource. Get a list of available snapshots by making a list() request.",
// "httpMethod": "GET",
// "id": "compute.snapshots.get",
// "parameterOrder": [
@@ -30440,7 +30654,9 @@ func (r *SnapshotsService) List(project string) *SnapshotsListCall {
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -30452,7 +30668,7 @@ func (r *SnapshotsService) List(project string) *SnapshotsListCall {
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *SnapshotsListCall) Filter(filter string) *SnapshotsListCall {
c.urlParams_.Set("filter", filter)
@@ -30460,10 +30676,10 @@ func (c *SnapshotsListCall) Filter(filter string) *SnapshotsListCall {
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *SnapshotsListCall) MaxResults(maxResults int64) *SnapshotsListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
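The filter documentation repeated above notes that string literals are RE2 patterns that must match the entire field. A small sketch against the Snapshots list call, with a made-up name prefix:

func listNightlySnapshots(svc *compute.Service, project string) error {
	// "nightly-.*" must match the whole name field, so this selects snapshots
	// whose names start with "nightly-".
	list, err := svc.Snapshots.List(project).Filter("name eq nightly-.*").Do()
	if err != nil {
		return err
	}
	for _, s := range list.Items {
		fmt.Printf("%s (%s)\n", s.Name, s.Status)
	}
	return nil
}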
@@ -30567,13 +30783,13 @@ func (c *SnapshotsListCall) Do(opts ...googleapi.CallOption) (*SnapshotList, err
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -30761,7 +30977,8 @@ type SslCertificatesGetCall struct {
ctx_ context.Context
}
-// Get: Returns the specified SslCertificate resource.
+// Get: Returns the specified SslCertificate resource. Get a list of
+// available SSL certificates by making a list() request.
func (r *SslCertificatesService) Get(project string, sslCertificate string) *SslCertificatesGetCall {
c := &SslCertificatesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -30852,7 +31069,7 @@ func (c *SslCertificatesGetCall) Do(opts ...googleapi.CallOption) (*SslCertifica
}
return ret, nil
// {
- // "description": "Returns the specified SslCertificate resource.",
+ // "description": "Returns the specified SslCertificate resource. Get a list of available SSL certificates by making a list() request.",
// "httpMethod": "GET",
// "id": "compute.sslCertificates.get",
// "parameterOrder": [
@@ -31044,7 +31261,9 @@ func (r *SslCertificatesService) List(project string) *SslCertificatesListCall {
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -31056,7 +31275,7 @@ func (r *SslCertificatesService) List(project string) *SslCertificatesListCall {
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *SslCertificatesListCall) Filter(filter string) *SslCertificatesListCall {
c.urlParams_.Set("filter", filter)
@@ -31064,10 +31283,10 @@ func (c *SslCertificatesListCall) Filter(filter string) *SslCertificatesListCall
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *SslCertificatesListCall) MaxResults(maxResults int64) *SslCertificatesListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -31171,13 +31390,13 @@ func (c *SslCertificatesListCall) Do(opts ...googleapi.CallOption) (*SslCertific
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -31262,7 +31481,9 @@ func (r *SubnetworksService) AggregatedList(project string) *SubnetworksAggregat
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -31274,7 +31495,7 @@ func (r *SubnetworksService) AggregatedList(project string) *SubnetworksAggregat
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *SubnetworksAggregatedListCall) Filter(filter string) *SubnetworksAggregatedListCall {
c.urlParams_.Set("filter", filter)
@@ -31282,10 +31503,10 @@ func (c *SubnetworksAggregatedListCall) Filter(filter string) *SubnetworksAggreg
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *SubnetworksAggregatedListCall) MaxResults(maxResults int64) *SubnetworksAggregatedListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
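For the aggregated subnetworks listing touched above, results come back grouped per scope. A sketch of walking that structure; the scoped-list field names are taken from the generated types as I understand them and should be checked against this snapshot:

func printSubnetworksByScope(svc *compute.Service, project string) error {
	agg, err := svc.Subnetworks.AggregatedList(project).Do()
	if err != nil {
		return err
	}
	// Items maps a scope such as "regions/us-central1" to the subnetworks in it.
	for scope, scoped := range agg.Items {
		for _, sn := range scoped.Subnetworks {
			fmt.Printf("%s: %s %s\n", scope, sn.Name, sn.IpCidrRange)
		}
	}
	return nil
}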
@@ -31389,13 +31610,13 @@ func (c *SubnetworksAggregatedListCall) Do(opts ...googleapi.CallOption) (*Subne
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -31595,7 +31816,8 @@ type SubnetworksGetCall struct {
ctx_ context.Context
}
-// Get: Returns the specified subnetwork.
+// Get: Returns the specified subnetwork. Get a list of available
+// subnetworks by making a list() request.
func (r *SubnetworksService) Get(project string, region string, subnetwork string) *SubnetworksGetCall {
c := &SubnetworksGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -31688,7 +31910,7 @@ func (c *SubnetworksGetCall) Do(opts ...googleapi.CallOption) (*Subnetwork, erro
}
return ret, nil
// {
- // "description": "Returns the specified subnetwork.",
+ // "description": "Returns the specified subnetwork. Get a list of available subnetworks by making a list() request.",
// "httpMethod": "GET",
// "id": "compute.subnetworks.get",
// "parameterOrder": [
@@ -31901,7 +32123,9 @@ func (r *SubnetworksService) List(project string, region string) *SubnetworksLis
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -31913,7 +32137,7 @@ func (r *SubnetworksService) List(project string, region string) *SubnetworksLis
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *SubnetworksListCall) Filter(filter string) *SubnetworksListCall {
c.urlParams_.Set("filter", filter)
@@ -31921,10 +32145,10 @@ func (c *SubnetworksListCall) Filter(filter string) *SubnetworksListCall {
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *SubnetworksListCall) MaxResults(maxResults int64) *SubnetworksListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -32030,13 +32254,13 @@ func (c *SubnetworksListCall) Do(opts ...googleapi.CallOption) (*SubnetworkList,
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -32232,7 +32456,8 @@ type TargetHttpProxiesGetCall struct {
ctx_ context.Context
}
-// Get: Returns the specified TargetHttpProxy resource.
+// Get: Returns the specified TargetHttpProxy resource. Get a list of
+// available target HTTP proxies by making a list() request.
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetHttpProxies/get
func (r *TargetHttpProxiesService) Get(project string, targetHttpProxy string) *TargetHttpProxiesGetCall {
c := &TargetHttpProxiesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -32324,7 +32549,7 @@ func (c *TargetHttpProxiesGetCall) Do(opts ...googleapi.CallOption) (*TargetHttp
}
return ret, nil
// {
- // "description": "Returns the specified TargetHttpProxy resource.",
+ // "description": "Returns the specified TargetHttpProxy resource. Get a list of available target HTTP proxies by making a list() request.",
// "httpMethod": "GET",
// "id": "compute.targetHttpProxies.get",
// "parameterOrder": [
@@ -32518,7 +32743,9 @@ func (r *TargetHttpProxiesService) List(project string) *TargetHttpProxiesListCa
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -32530,7 +32757,7 @@ func (r *TargetHttpProxiesService) List(project string) *TargetHttpProxiesListCa
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *TargetHttpProxiesListCall) Filter(filter string) *TargetHttpProxiesListCall {
c.urlParams_.Set("filter", filter)
@@ -32538,10 +32765,10 @@ func (c *TargetHttpProxiesListCall) Filter(filter string) *TargetHttpProxiesList
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *TargetHttpProxiesListCall) MaxResults(maxResults int64) *TargetHttpProxiesListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -32645,13 +32872,13 @@ func (c *TargetHttpProxiesListCall) Do(opts ...googleapi.CallOption) (*TargetHtt
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -32818,7 +33045,7 @@ func (c *TargetHttpProxiesSetUrlMapCall) Do(opts ...googleapi.CallOption) (*Oper
// "type": "string"
// },
// "targetHttpProxy": {
- // "description": "Name of the TargetHttpProxy resource whose URL map is to be set.",
+ // "description": "Name of the TargetHttpProxy to set a URL map for.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -32974,7 +33201,8 @@ type TargetHttpsProxiesGetCall struct {
ctx_ context.Context
}
-// Get: Returns the specified TargetHttpsProxy resource.
+// Get: Returns the specified TargetHttpsProxy resource. Get a list of
+// available target HTTPS proxies by making a list() request.
func (r *TargetHttpsProxiesService) Get(project string, targetHttpsProxy string) *TargetHttpsProxiesGetCall {
c := &TargetHttpsProxiesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -33065,7 +33293,7 @@ func (c *TargetHttpsProxiesGetCall) Do(opts ...googleapi.CallOption) (*TargetHtt
}
return ret, nil
// {
- // "description": "Returns the specified TargetHttpsProxy resource.",
+ // "description": "Returns the specified TargetHttpsProxy resource. Get a list of available target HTTPS proxies by making a list() request.",
// "httpMethod": "GET",
// "id": "compute.targetHttpsProxies.get",
// "parameterOrder": [
@@ -33257,7 +33485,9 @@ func (r *TargetHttpsProxiesService) List(project string) *TargetHttpsProxiesList
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -33269,7 +33499,7 @@ func (r *TargetHttpsProxiesService) List(project string) *TargetHttpsProxiesList
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *TargetHttpsProxiesListCall) Filter(filter string) *TargetHttpsProxiesListCall {
c.urlParams_.Set("filter", filter)
@@ -33277,10 +33507,10 @@ func (c *TargetHttpsProxiesListCall) Filter(filter string) *TargetHttpsProxiesLi
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *TargetHttpsProxiesListCall) MaxResults(maxResults int64) *TargetHttpsProxiesListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -33384,13 +33614,13 @@ func (c *TargetHttpsProxiesListCall) Do(opts ...googleapi.CallOption) (*TargetHt
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -33556,7 +33786,7 @@ func (c *TargetHttpsProxiesSetSslCertificatesCall) Do(opts ...googleapi.CallOpti
// "type": "string"
// },
// "targetHttpsProxy": {
- // "description": "Name of the TargetHttpsProxy resource whose SSLCertificate is to be set.",
+ // "description": "Name of the TargetHttpsProxy resource to set an SslCertificates resource for.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -33744,7 +33974,9 @@ func (r *TargetInstancesService) AggregatedList(project string) *TargetInstances
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -33756,7 +33988,7 @@ func (r *TargetInstancesService) AggregatedList(project string) *TargetInstances
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *TargetInstancesAggregatedListCall) Filter(filter string) *TargetInstancesAggregatedListCall {
c.urlParams_.Set("filter", filter)
@@ -33764,10 +33996,10 @@ func (c *TargetInstancesAggregatedListCall) Filter(filter string) *TargetInstanc
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *TargetInstancesAggregatedListCall) MaxResults(maxResults int64) *TargetInstancesAggregatedListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -33871,13 +34103,13 @@ func (c *TargetInstancesAggregatedListCall) Do(opts ...googleapi.CallOption) (*T
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -34078,7 +34310,8 @@ type TargetInstancesGetCall struct {
ctx_ context.Context
}
-// Get: Returns the specified TargetInstance resource.
+// Get: Returns the specified TargetInstance resource. Get a list of
+// available target instances by making a list() request.
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetInstances/get
func (r *TargetInstancesService) Get(project string, zone string, targetInstance string) *TargetInstancesGetCall {
c := &TargetInstancesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -34172,7 +34405,7 @@ func (c *TargetInstancesGetCall) Do(opts ...googleapi.CallOption) (*TargetInstan
}
return ret, nil
// {
- // "description": "Returns the specified TargetInstance resource.",
+ // "description": "Returns the specified TargetInstance resource. Get a list of available target instances by making a list() request.",
// "httpMethod": "GET",
// "id": "compute.targetInstances.get",
// "parameterOrder": [
@@ -34387,7 +34620,9 @@ func (r *TargetInstancesService) List(project string, zone string) *TargetInstan
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -34399,7 +34634,7 @@ func (r *TargetInstancesService) List(project string, zone string) *TargetInstan
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *TargetInstancesListCall) Filter(filter string) *TargetInstancesListCall {
c.urlParams_.Set("filter", filter)
@@ -34407,10 +34642,10 @@ func (c *TargetInstancesListCall) Filter(filter string) *TargetInstancesListCall
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *TargetInstancesListCall) MaxResults(maxResults int64) *TargetInstancesListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -34516,13 +34751,13 @@ func (c *TargetInstancesListCall) Do(opts ...googleapi.CallOption) (*TargetInsta
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -34595,7 +34830,7 @@ type TargetPoolsAddHealthCheckCall struct {
ctx_ context.Context
}
-// AddHealthCheck: Adds health check URL to targetPool.
+// AddHealthCheck: Adds health check URLs to a target pool.
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetPools/addHealthCheck
func (r *TargetPoolsService) AddHealthCheck(project string, region string, targetPool string, targetpoolsaddhealthcheckrequest *TargetPoolsAddHealthCheckRequest) *TargetPoolsAddHealthCheckCall {
c := &TargetPoolsAddHealthCheckCall{s: r.s, urlParams_: make(gensupport.URLParams)}
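As a rough illustration of the `AddHealthCheck` call whose description changes in this hunk, the sketch below attaches an existing health check to a target pool. It assumes the request and reference types carry the API's standard generated field names (`HealthChecks`, `HealthCheck`), reuses a `*compute.Service` built as in the earlier listing example, and treats all name arguments and the health check URL as placeholders.

```go
package examples

import (
	"log"

	compute "google.golang.org/api/compute/v1" // assumed import path for this generated package
)

// addHealthCheck attaches an existing health check to a target pool.
// healthCheckURL is the health check's fully qualified self-link.
func addHealthCheck(svc *compute.Service, project, region, pool, healthCheckURL string) {
	req := &compute.TargetPoolsAddHealthCheckRequest{
		// Each entry references a health check by URL, matching the
		// "Adds health check URLs to a target pool" description above.
		HealthChecks: []*compute.HealthCheckReference{
			{HealthCheck: healthCheckURL},
		},
	}
	op, err := svc.TargetPools.AddHealthCheck(project, region, pool, req).Do()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("operation %s: %s", op.Name, op.Status)
}
```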
@@ -34683,7 +34918,7 @@ func (c *TargetPoolsAddHealthCheckCall) Do(opts ...googleapi.CallOption) (*Opera
}
return ret, nil
// {
- // "description": "Adds health check URL to targetPool.",
+ // "description": "Adds health check URLs to a target pool.",
// "httpMethod": "POST",
// "id": "compute.targetPools.addHealthCheck",
// "parameterOrder": [
@@ -34693,6 +34928,7 @@ func (c *TargetPoolsAddHealthCheckCall) Do(opts ...googleapi.CallOption) (*Opera
// ],
// "parameters": {
// "project": {
+ // "description": "Project ID for this request.",
// "location": "path",
// "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
// "required": true,
@@ -34706,7 +34942,7 @@ func (c *TargetPoolsAddHealthCheckCall) Do(opts ...googleapi.CallOption) (*Opera
// "type": "string"
// },
// "targetPool": {
- // "description": "Name of the TargetPool resource to which health_check_url is to be added.",
+ // "description": "Name of the target pool to add a health check to.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -34740,7 +34976,7 @@ type TargetPoolsAddInstanceCall struct {
ctx_ context.Context
}
-// AddInstance: Adds instance URL to targetPool.
+// AddInstance: Adds an instance to a target pool.
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetPools/addInstance
func (r *TargetPoolsService) AddInstance(project string, region string, targetPool string, targetpoolsaddinstancerequest *TargetPoolsAddInstanceRequest) *TargetPoolsAddInstanceCall {
c := &TargetPoolsAddInstanceCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -34828,7 +35064,7 @@ func (c *TargetPoolsAddInstanceCall) Do(opts ...googleapi.CallOption) (*Operatio
}
return ret, nil
// {
- // "description": "Adds instance URL to targetPool.",
+ // "description": "Adds an instance to a target pool.",
// "httpMethod": "POST",
// "id": "compute.targetPools.addInstance",
// "parameterOrder": [
@@ -34838,6 +35074,7 @@ func (c *TargetPoolsAddInstanceCall) Do(opts ...googleapi.CallOption) (*Operatio
// ],
// "parameters": {
// "project": {
+ // "description": "Project ID for this request.",
// "location": "path",
// "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
// "required": true,
@@ -34851,7 +35088,7 @@ func (c *TargetPoolsAddInstanceCall) Do(opts ...googleapi.CallOption) (*Operatio
// "type": "string"
// },
// "targetPool": {
- // "description": "Name of the TargetPool resource to which instance_url is to be added.",
+ // "description": "Name of the TargetPool resource to add instances to.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -34905,7 +35142,9 @@ func (r *TargetPoolsService) AggregatedList(project string) *TargetPoolsAggregat
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -34917,7 +35156,7 @@ func (r *TargetPoolsService) AggregatedList(project string) *TargetPoolsAggregat
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *TargetPoolsAggregatedListCall) Filter(filter string) *TargetPoolsAggregatedListCall {
c.urlParams_.Set("filter", filter)
@@ -34925,10 +35164,10 @@ func (c *TargetPoolsAggregatedListCall) Filter(filter string) *TargetPoolsAggreg
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *TargetPoolsAggregatedListCall) MaxResults(maxResults int64) *TargetPoolsAggregatedListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -35032,13 +35271,13 @@ func (c *TargetPoolsAggregatedListCall) Do(opts ...googleapi.CallOption) (*Targe
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -35103,7 +35342,7 @@ type TargetPoolsDeleteCall struct {
ctx_ context.Context
}
-// Delete: Deletes the specified TargetPool resource.
+// Delete: Deletes the specified target pool.
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetPools/delete
func (r *TargetPoolsService) Delete(project string, region string, targetPool string) *TargetPoolsDeleteCall {
c := &TargetPoolsDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -35184,7 +35423,7 @@ func (c *TargetPoolsDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, er
}
return ret, nil
// {
- // "description": "Deletes the specified TargetPool resource.",
+ // "description": "Deletes the specified target pool.",
// "httpMethod": "DELETE",
// "id": "compute.targetPools.delete",
// "parameterOrder": [
@@ -35239,7 +35478,8 @@ type TargetPoolsGetCall struct {
ctx_ context.Context
}
-// Get: Returns the specified TargetPool resource.
+// Get: Returns the specified target pool. Get a list of available
+// target pools by making a list() request.
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetPools/get
func (r *TargetPoolsService) Get(project string, region string, targetPool string) *TargetPoolsGetCall {
c := &TargetPoolsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -35333,7 +35573,7 @@ func (c *TargetPoolsGetCall) Do(opts ...googleapi.CallOption) (*TargetPool, erro
}
return ret, nil
// {
- // "description": "Returns the specified TargetPool resource.",
+ // "description": "Returns the specified target pool. Get a list of available target pools by making a list() request.",
// "httpMethod": "GET",
// "id": "compute.targetPools.get",
// "parameterOrder": [
@@ -35390,7 +35630,7 @@ type TargetPoolsGetHealthCall struct {
}
// GetHealth: Gets the most recent health check results for each IP for
-// the given instance that is referenced by the given TargetPool.
+// the instance that is referenced by the given target pool.
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetPools/getHealth
func (r *TargetPoolsService) GetHealth(project string, region string, targetPool string, instancereference *InstanceReference) *TargetPoolsGetHealthCall {
c := &TargetPoolsGetHealthCall{s: r.s, urlParams_: make(gensupport.URLParams)}
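For the `GetHealth` call reworded in this hunk, a hedged sketch of reading per-instance health results follows. It assumes the standard generated field names on `InstanceReference` (`Instance`) and on the returned health objects (`HealthStatus`, `HealthState`, `IpAddress`); the instance URL and name arguments are placeholders.

```go
package examples

import (
	"log"

	compute "google.golang.org/api/compute/v1" // assumed import path for this generated package
)

// printPoolHealth reports the most recent health check results for one
// instance referenced by a target pool. instanceURL is the instance's
// fully qualified self-link.
func printPoolHealth(svc *compute.Service, project, region, pool, instanceURL string) {
	ref := &compute.InstanceReference{Instance: instanceURL}
	health, err := svc.TargetPools.GetHealth(project, region, pool, ref).Do()
	if err != nil {
		log.Fatal(err)
	}
	for _, hs := range health.HealthStatus {
		// HealthState is typically "HEALTHY" or "UNHEALTHY".
		log.Printf("%s (%s): %s", hs.Instance, hs.IpAddress, hs.HealthState)
	}
}
```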
@@ -35478,7 +35718,7 @@ func (c *TargetPoolsGetHealthCall) Do(opts ...googleapi.CallOption) (*TargetPool
}
return ret, nil
// {
- // "description": "Gets the most recent health check results for each IP for the given instance that is referenced by the given TargetPool.",
+ // "description": "Gets the most recent health check results for each IP for the instance that is referenced by the given target pool.",
// "httpMethod": "POST",
// "id": "compute.targetPools.getHealth",
// "parameterOrder": [
@@ -35488,6 +35728,7 @@ func (c *TargetPoolsGetHealthCall) Do(opts ...googleapi.CallOption) (*TargetPool
// ],
// "parameters": {
// "project": {
+ // "description": "Project ID for this request.",
// "location": "path",
// "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
// "required": true,
@@ -35535,8 +35776,8 @@ type TargetPoolsInsertCall struct {
ctx_ context.Context
}
-// Insert: Creates a TargetPool resource in the specified project and
-// region using the data included in the request.
+// Insert: Creates a target pool in the specified project and region
+// using the data included in the request.
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetPools/insert
func (r *TargetPoolsService) Insert(project string, region string, targetpool *TargetPool) *TargetPoolsInsertCall {
c := &TargetPoolsInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -35622,7 +35863,7 @@ func (c *TargetPoolsInsertCall) Do(opts ...googleapi.CallOption) (*Operation, er
}
return ret, nil
// {
- // "description": "Creates a TargetPool resource in the specified project and region using the data included in the request.",
+ // "description": "Creates a target pool in the specified project and region using the data included in the request.",
// "httpMethod": "POST",
// "id": "compute.targetPools.insert",
// "parameterOrder": [
@@ -35671,8 +35912,8 @@ type TargetPoolsListCall struct {
ctx_ context.Context
}
-// List: Retrieves a list of TargetPool resources available to the
-// specified project and region.
+// List: Retrieves a list of target pools available to the specified
+// project and region.
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetPools/list
func (r *TargetPoolsService) List(project string, region string) *TargetPoolsListCall {
c := &TargetPoolsListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -35695,7 +35936,9 @@ func (r *TargetPoolsService) List(project string, region string) *TargetPoolsLis
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -35707,7 +35950,7 @@ func (r *TargetPoolsService) List(project string, region string) *TargetPoolsLis
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *TargetPoolsListCall) Filter(filter string) *TargetPoolsListCall {
c.urlParams_.Set("filter", filter)
@@ -35715,10 +35958,10 @@ func (c *TargetPoolsListCall) Filter(filter string) *TargetPoolsListCall {
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *TargetPoolsListCall) MaxResults(maxResults int64) *TargetPoolsListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -35815,7 +36058,7 @@ func (c *TargetPoolsListCall) Do(opts ...googleapi.CallOption) (*TargetPoolList,
}
return ret, nil
// {
- // "description": "Retrieves a list of TargetPool resources available to the specified project and region.",
+ // "description": "Retrieves a list of target pools available to the specified project and region.",
// "httpMethod": "GET",
// "id": "compute.targetPools.list",
// "parameterOrder": [
@@ -35824,13 +36067,13 @@ func (c *TargetPoolsListCall) Do(opts ...googleapi.CallOption) (*TargetPoolList,
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -35903,7 +36146,7 @@ type TargetPoolsRemoveHealthCheckCall struct {
ctx_ context.Context
}
-// RemoveHealthCheck: Removes health check URL from targetPool.
+// RemoveHealthCheck: Removes health check URL from a target pool.
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetPools/removeHealthCheck
func (r *TargetPoolsService) RemoveHealthCheck(project string, region string, targetPool string, targetpoolsremovehealthcheckrequest *TargetPoolsRemoveHealthCheckRequest) *TargetPoolsRemoveHealthCheckCall {
c := &TargetPoolsRemoveHealthCheckCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -35991,7 +36234,7 @@ func (c *TargetPoolsRemoveHealthCheckCall) Do(opts ...googleapi.CallOption) (*Op
}
return ret, nil
// {
- // "description": "Removes health check URL from targetPool.",
+ // "description": "Removes health check URL from a target pool.",
// "httpMethod": "POST",
// "id": "compute.targetPools.removeHealthCheck",
// "parameterOrder": [
@@ -36001,20 +36244,21 @@ func (c *TargetPoolsRemoveHealthCheckCall) Do(opts ...googleapi.CallOption) (*Op
// ],
// "parameters": {
// "project": {
+ // "description": "Project ID for this request.",
// "location": "path",
// "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
// "required": true,
// "type": "string"
// },
// "region": {
- // "description": "Name of the region scoping this request.",
+ // "description": "Name of the region for this request.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
// "type": "string"
// },
// "targetPool": {
- // "description": "Name of the TargetPool resource to which health_check_url is to be removed.",
+ // "description": "Name of the target pool to remove health checks from.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -36048,7 +36292,7 @@ type TargetPoolsRemoveInstanceCall struct {
ctx_ context.Context
}
-// RemoveInstance: Removes instance URL from targetPool.
+// RemoveInstance: Removes instance URL from a target pool.
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetPools/removeInstance
func (r *TargetPoolsService) RemoveInstance(project string, region string, targetPool string, targetpoolsremoveinstancerequest *TargetPoolsRemoveInstanceRequest) *TargetPoolsRemoveInstanceCall {
c := &TargetPoolsRemoveInstanceCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -36136,7 +36380,7 @@ func (c *TargetPoolsRemoveInstanceCall) Do(opts ...googleapi.CallOption) (*Opera
}
return ret, nil
// {
- // "description": "Removes instance URL from targetPool.",
+ // "description": "Removes instance URL from a target pool.",
// "httpMethod": "POST",
// "id": "compute.targetPools.removeInstance",
// "parameterOrder": [
@@ -36146,6 +36390,7 @@ func (c *TargetPoolsRemoveInstanceCall) Do(opts ...googleapi.CallOption) (*Opera
// ],
// "parameters": {
// "project": {
+ // "description": "Project ID for this request.",
// "location": "path",
// "pattern": "(?:(?:[-a-z0-9]{1,63}\\.)*(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?):)?(?:[0-9]{1,19}|(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?))",
// "required": true,
@@ -36159,7 +36404,7 @@ func (c *TargetPoolsRemoveInstanceCall) Do(opts ...googleapi.CallOption) (*Opera
// "type": "string"
// },
// "targetPool": {
- // "description": "Name of the TargetPool resource to which instance_url is to be removed.",
+ // "description": "Name of the TargetPool resource to remove instances from.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -36193,7 +36438,7 @@ type TargetPoolsSetBackupCall struct {
ctx_ context.Context
}
-// SetBackup: Changes backup pool configurations.
+// SetBackup: Changes a backup target pool's configurations.
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetPools/setBackup
func (r *TargetPoolsService) SetBackup(project string, region string, targetPool string, targetreference *TargetReference) *TargetPoolsSetBackupCall {
c := &TargetPoolsSetBackupCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -36205,7 +36450,7 @@ func (r *TargetPoolsService) SetBackup(project string, region string, targetPool
}
// FailoverRatio sets the optional parameter "failoverRatio": New
-// failoverRatio value for the containing target pool.
+// failoverRatio value for the target pool.
func (c *TargetPoolsSetBackupCall) FailoverRatio(failoverRatio float64) *TargetPoolsSetBackupCall {
c.urlParams_.Set("failoverRatio", fmt.Sprint(failoverRatio))
return c
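The optional `FailoverRatio` parameter shown in this hunk is a query-parameter setter on the `SetBackup` call, so it chains onto the call before `Do()`. The sketch below is illustrative only: it assumes `TargetReference` uses the standard generated `Target` field, and the pool names, backup pool URL, and ratio value are placeholders.

```go
package examples

import (
	"log"

	compute "google.golang.org/api/compute/v1" // assumed import path for this generated package
)

// setBackupPool points a primary target pool at a backup pool and sets the
// failover ratio via the optional query parameter documented above.
// backupPoolURL is the backup pool's fully qualified self-link.
func setBackupPool(svc *compute.Service, project, region, primaryPool, backupPoolURL string) {
	ref := &compute.TargetReference{Target: backupPoolURL}
	op, err := svc.TargetPools.SetBackup(project, region, primaryPool, ref).
		FailoverRatio(0.2). // fail over when fewer than 20% of primary instances are healthy
		Do()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("operation %s: %s", op.Name, op.Status)
}
```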
@@ -36288,7 +36533,7 @@ func (c *TargetPoolsSetBackupCall) Do(opts ...googleapi.CallOption) (*Operation,
}
return ret, nil
// {
- // "description": "Changes backup pool configurations.",
+ // "description": "Changes a backup target pool's configurations.",
// "httpMethod": "POST",
// "id": "compute.targetPools.setBackup",
// "parameterOrder": [
@@ -36298,7 +36543,7 @@ func (c *TargetPoolsSetBackupCall) Do(opts ...googleapi.CallOption) (*Operation,
// ],
// "parameters": {
// "failoverRatio": {
- // "description": "New failoverRatio value for the containing target pool.",
+ // "description": "New failoverRatio value for the target pool.",
// "format": "float",
// "location": "query",
// "type": "number"
@@ -36318,7 +36563,7 @@ func (c *TargetPoolsSetBackupCall) Do(opts ...googleapi.CallOption) (*Operation,
// "type": "string"
// },
// "targetPool": {
- // "description": "Name of the TargetPool resource for which the backup is to be set.",
+ // "description": "Name of the TargetPool resource to set a backup pool for.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -36350,7 +36595,7 @@ type TargetVpnGatewaysAggregatedListCall struct {
ctx_ context.Context
}
-// AggregatedList: Retrieves an aggregated list of target VPN gateways .
+// AggregatedList: Retrieves an aggregated list of target VPN gateways.
func (r *TargetVpnGatewaysService) AggregatedList(project string) *TargetVpnGatewaysAggregatedListCall {
c := &TargetVpnGatewaysAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -36371,7 +36616,9 @@ func (r *TargetVpnGatewaysService) AggregatedList(project string) *TargetVpnGate
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -36383,7 +36630,7 @@ func (r *TargetVpnGatewaysService) AggregatedList(project string) *TargetVpnGate
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *TargetVpnGatewaysAggregatedListCall) Filter(filter string) *TargetVpnGatewaysAggregatedListCall {
c.urlParams_.Set("filter", filter)
@@ -36391,10 +36638,10 @@ func (c *TargetVpnGatewaysAggregatedListCall) Filter(filter string) *TargetVpnGa
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *TargetVpnGatewaysAggregatedListCall) MaxResults(maxResults int64) *TargetVpnGatewaysAggregatedListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -36490,7 +36737,7 @@ func (c *TargetVpnGatewaysAggregatedListCall) Do(opts ...googleapi.CallOption) (
}
return ret, nil
// {
- // "description": "Retrieves an aggregated list of target VPN gateways .",
+ // "description": "Retrieves an aggregated list of target VPN gateways.",
// "httpMethod": "GET",
// "id": "compute.targetVpnGateways.aggregatedList",
// "parameterOrder": [
@@ -36498,13 +36745,13 @@ func (c *TargetVpnGatewaysAggregatedListCall) Do(opts ...googleapi.CallOption) (
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -36569,7 +36816,7 @@ type TargetVpnGatewaysDeleteCall struct {
ctx_ context.Context
}
-// Delete: Deletes the specified TargetVpnGateway resource.
+// Delete: Deletes the specified target VPN gateway.
func (r *TargetVpnGatewaysService) Delete(project string, region string, targetVpnGateway string) *TargetVpnGatewaysDeleteCall {
c := &TargetVpnGatewaysDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -36649,7 +36896,7 @@ func (c *TargetVpnGatewaysDeleteCall) Do(opts ...googleapi.CallOption) (*Operati
}
return ret, nil
// {
- // "description": "Deletes the specified TargetVpnGateway resource.",
+ // "description": "Deletes the specified target VPN gateway.",
// "httpMethod": "DELETE",
// "id": "compute.targetVpnGateways.delete",
// "parameterOrder": [
@@ -36666,14 +36913,14 @@ func (c *TargetVpnGatewaysDeleteCall) Do(opts ...googleapi.CallOption) (*Operati
// "type": "string"
// },
// "region": {
- // "description": "The name of the region for this request.",
+ // "description": "Name of the region for this request.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
// "type": "string"
// },
// "targetVpnGateway": {
- // "description": "Name of the TargetVpnGateway resource to delete.",
+ // "description": "Name of the target VPN gateway to delete.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -36704,7 +36951,8 @@ type TargetVpnGatewaysGetCall struct {
ctx_ context.Context
}
-// Get: Returns the specified TargetVpnGateway resource.
+// Get: Returns the specified target VPN gateway. Get a list of
+// available target VPN gateways by making a list() request.
func (r *TargetVpnGatewaysService) Get(project string, region string, targetVpnGateway string) *TargetVpnGatewaysGetCall {
c := &TargetVpnGatewaysGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -36797,7 +37045,7 @@ func (c *TargetVpnGatewaysGetCall) Do(opts ...googleapi.CallOption) (*TargetVpnG
}
return ret, nil
// {
- // "description": "Returns the specified TargetVpnGateway resource.",
+ // "description": "Returns the specified target VPN gateway. Get a list of available target VPN gateways by making a list() request.",
// "httpMethod": "GET",
// "id": "compute.targetVpnGateways.get",
// "parameterOrder": [
@@ -36814,14 +37062,14 @@ func (c *TargetVpnGatewaysGetCall) Do(opts ...googleapi.CallOption) (*TargetVpnG
// "type": "string"
// },
// "region": {
- // "description": "The name of the region for this request.",
+ // "description": "Name of the region for this request.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
// "type": "string"
// },
// "targetVpnGateway": {
- // "description": "Name of the TargetVpnGateway resource to return.",
+ // "description": "Name of the target VPN gateway to return.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -36852,8 +37100,8 @@ type TargetVpnGatewaysInsertCall struct {
ctx_ context.Context
}
-// Insert: Creates a TargetVpnGateway resource in the specified project
-// and region using the data included in the request.
+// Insert: Creates a target VPN gateway in the specified project and
+// region using the data included in the request.
func (r *TargetVpnGatewaysService) Insert(project string, region string, targetvpngateway *TargetVpnGateway) *TargetVpnGatewaysInsertCall {
c := &TargetVpnGatewaysInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -36938,7 +37186,7 @@ func (c *TargetVpnGatewaysInsertCall) Do(opts ...googleapi.CallOption) (*Operati
}
return ret, nil
// {
- // "description": "Creates a TargetVpnGateway resource in the specified project and region using the data included in the request.",
+ // "description": "Creates a target VPN gateway in the specified project and region using the data included in the request.",
// "httpMethod": "POST",
// "id": "compute.targetVpnGateways.insert",
// "parameterOrder": [
@@ -36954,7 +37202,7 @@ func (c *TargetVpnGatewaysInsertCall) Do(opts ...googleapi.CallOption) (*Operati
// "type": "string"
// },
// "region": {
- // "description": "The name of the region for this request.",
+ // "description": "Name of the region for this request.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -36987,7 +37235,7 @@ type TargetVpnGatewaysListCall struct {
ctx_ context.Context
}
-// List: Retrieves a list of TargetVpnGateway resources available to the
+// List: Retrieves a list of target VPN gateways available to the
// specified project and region.
func (r *TargetVpnGatewaysService) List(project string, region string) *TargetVpnGatewaysListCall {
c := &TargetVpnGatewaysListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -37010,7 +37258,9 @@ func (r *TargetVpnGatewaysService) List(project string, region string) *TargetVp
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -37022,7 +37272,7 @@ func (r *TargetVpnGatewaysService) List(project string, region string) *TargetVp
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *TargetVpnGatewaysListCall) Filter(filter string) *TargetVpnGatewaysListCall {
c.urlParams_.Set("filter", filter)
@@ -37030,10 +37280,10 @@ func (c *TargetVpnGatewaysListCall) Filter(filter string) *TargetVpnGatewaysList
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *TargetVpnGatewaysListCall) MaxResults(maxResults int64) *TargetVpnGatewaysListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
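
The Filter and MaxResults setters above, together with the nextPageToken described in the maxResults documentation, form the generic paging pattern shared by every List call in this generated client. A minimal sketch of how they combine follows; it is not part of the generated file, and it assumes the PageToken setter plus the Items/NextPageToken fields on the returned list object (standard in these generated clients but not shown in this hunk), with credentials coming from Application Default Credentials.

package main

import (
	"fmt"
	"log"

	"golang.org/x/net/context"
	"golang.org/x/oauth2/google"
	compute "google.golang.org/api/compute/v1"
)

func main() {
	ctx := context.Background()
	// Application Default Credentials; any authenticated *http.Client works.
	client, err := google.DefaultClient(ctx, compute.ComputeReadonlyScope)
	if err != nil {
		log.Fatal(err)
	}
	svc, err := compute.New(client)
	if err != nil {
		log.Fatal(err)
	}

	pageToken := ""
	for {
		call := svc.TargetVpnGateways.List("my-project", "us-central1").
			Filter("name ne example-gateway"). // RE2 match against the entire field
			MaxResults(100)                    // must not exceed the documented maximum of 500
		if pageToken != "" {
			call = call.PageToken(pageToken) // assumed setter; see nextPageToken above
		}
		resp, err := call.Do()
		if err != nil {
			log.Fatal(err)
		}
		for _, gw := range resp.Items {
			fmt.Println(gw.Name)
		}
		if resp.NextPageToken == "" {
			break
		}
		pageToken = resp.NextPageToken
	}
}
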
@@ -37130,7 +37380,7 @@ func (c *TargetVpnGatewaysListCall) Do(opts ...googleapi.CallOption) (*TargetVpn
}
return ret, nil
// {
- // "description": "Retrieves a list of TargetVpnGateway resources available to the specified project and region.",
+ // "description": "Retrieves a list of target VPN gateways available to the specified project and region.",
// "httpMethod": "GET",
// "id": "compute.targetVpnGateways.list",
// "parameterOrder": [
@@ -37139,13 +37389,13 @@ func (c *TargetVpnGatewaysListCall) Do(opts ...googleapi.CallOption) (*TargetVpn
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -37165,7 +37415,7 @@ func (c *TargetVpnGatewaysListCall) Do(opts ...googleapi.CallOption) (*TargetVpn
// "type": "string"
// },
// "region": {
- // "description": "The name of the region for this request.",
+ // "description": "Name of the region for this request.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -37341,7 +37591,8 @@ type UrlMapsGetCall struct {
ctx_ context.Context
}
-// Get: Returns the specified UrlMap resource.
+// Get: Returns the specified UrlMap resource. Get a list of available
+// URL maps by making a list() request.
// For details, see https://cloud.google.com/compute/docs/reference/latest/urlMaps/get
func (r *UrlMapsService) Get(project string, urlMap string) *UrlMapsGetCall {
c := &UrlMapsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -37433,7 +37684,7 @@ func (c *UrlMapsGetCall) Do(opts ...googleapi.CallOption) (*UrlMap, error) {
}
return ret, nil
// {
- // "description": "Returns the specified UrlMap resource.",
+ // "description": "Returns the specified UrlMap resource. Get a list of available URL maps by making a list() request.",
// "httpMethod": "GET",
// "id": "compute.urlMaps.get",
// "parameterOrder": [
@@ -37627,7 +37878,9 @@ func (r *UrlMapsService) List(project string) *UrlMapsListCall {
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -37639,7 +37892,7 @@ func (r *UrlMapsService) List(project string) *UrlMapsListCall {
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *UrlMapsListCall) Filter(filter string) *UrlMapsListCall {
c.urlParams_.Set("filter", filter)
@@ -37647,10 +37900,10 @@ func (c *UrlMapsListCall) Filter(filter string) *UrlMapsListCall {
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *UrlMapsListCall) MaxResults(maxResults int64) *UrlMapsListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -37754,13 +38007,13 @@ func (c *UrlMapsListCall) Do(opts ...googleapi.CallOption) (*UrlMapList, error)
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -38253,7 +38506,9 @@ func (r *VpnTunnelsService) AggregatedList(project string) *VpnTunnelsAggregated
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -38265,7 +38520,7 @@ func (r *VpnTunnelsService) AggregatedList(project string) *VpnTunnelsAggregated
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *VpnTunnelsAggregatedListCall) Filter(filter string) *VpnTunnelsAggregatedListCall {
c.urlParams_.Set("filter", filter)
@@ -38273,10 +38528,10 @@ func (c *VpnTunnelsAggregatedListCall) Filter(filter string) *VpnTunnelsAggregat
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *VpnTunnelsAggregatedListCall) MaxResults(maxResults int64) *VpnTunnelsAggregatedListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -38380,13 +38635,13 @@ func (c *VpnTunnelsAggregatedListCall) Do(opts ...googleapi.CallOption) (*VpnTun
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -38548,7 +38803,7 @@ func (c *VpnTunnelsDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, err
// "type": "string"
// },
// "region": {
- // "description": "The name of the region for this request.",
+ // "description": "Name of the region for this request.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -38586,7 +38841,8 @@ type VpnTunnelsGetCall struct {
ctx_ context.Context
}
-// Get: Returns the specified VpnTunnel resource.
+// Get: Returns the specified VpnTunnel resource. Get a list of
+// available VPN tunnels by making a list() request.
func (r *VpnTunnelsService) Get(project string, region string, vpnTunnel string) *VpnTunnelsGetCall {
c := &VpnTunnelsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -38679,7 +38935,7 @@ func (c *VpnTunnelsGetCall) Do(opts ...googleapi.CallOption) (*VpnTunnel, error)
}
return ret, nil
// {
- // "description": "Returns the specified VpnTunnel resource.",
+ // "description": "Returns the specified VpnTunnel resource. Get a list of available VPN tunnels by making a list() request.",
// "httpMethod": "GET",
// "id": "compute.vpnTunnels.get",
// "parameterOrder": [
@@ -38696,7 +38952,7 @@ func (c *VpnTunnelsGetCall) Do(opts ...googleapi.CallOption) (*VpnTunnel, error)
// "type": "string"
// },
// "region": {
- // "description": "The name of the region for this request.",
+ // "description": "Name of the region for this request.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -38836,7 +39092,7 @@ func (c *VpnTunnelsInsertCall) Do(opts ...googleapi.CallOption) (*Operation, err
// "type": "string"
// },
// "region": {
- // "description": "The name of the region for this request.",
+ // "description": "Name of the region for this request.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -38892,7 +39148,9 @@ func (r *VpnTunnelsService) List(project string, region string) *VpnTunnelsListC
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -38904,7 +39162,7 @@ func (r *VpnTunnelsService) List(project string, region string) *VpnTunnelsListC
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *VpnTunnelsListCall) Filter(filter string) *VpnTunnelsListCall {
c.urlParams_.Set("filter", filter)
@@ -38912,10 +39170,10 @@ func (c *VpnTunnelsListCall) Filter(filter string) *VpnTunnelsListCall {
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *VpnTunnelsListCall) MaxResults(maxResults int64) *VpnTunnelsListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -39021,13 +39279,13 @@ func (c *VpnTunnelsListCall) Do(opts ...googleapi.CallOption) (*VpnTunnelList, e
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -39047,7 +39305,7 @@ func (c *VpnTunnelsListCall) Do(opts ...googleapi.CallOption) (*VpnTunnelList, e
// "type": "string"
// },
// "region": {
- // "description": "The name of the region for this request.",
+ // "description": "Name of the region for this request.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -39180,7 +39438,7 @@ func (c *ZoneOperationsDeleteCall) Do(opts ...googleapi.CallOption) error {
// "type": "string"
// },
// "zone": {
- // "description": "Name of the zone scoping this request.",
+ // "description": "Name of the zone for this request.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -39326,7 +39584,7 @@ func (c *ZoneOperationsGetCall) Do(opts ...googleapi.CallOption) (*Operation, er
// "type": "string"
// },
// "zone": {
- // "description": "Name of the zone scoping this request.",
+ // "description": "Name of the zone for this request.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -39381,7 +39639,9 @@ func (r *ZoneOperationsService) List(project string, zone string) *ZoneOperation
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -39393,7 +39653,7 @@ func (r *ZoneOperationsService) List(project string, zone string) *ZoneOperation
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *ZoneOperationsListCall) Filter(filter string) *ZoneOperationsListCall {
c.urlParams_.Set("filter", filter)
@@ -39401,10 +39661,10 @@ func (c *ZoneOperationsListCall) Filter(filter string) *ZoneOperationsListCall {
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *ZoneOperationsListCall) MaxResults(maxResults int64) *ZoneOperationsListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -39510,13 +39770,13 @@ func (c *ZoneOperationsListCall) Do(opts ...googleapi.CallOption) (*OperationLis
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
@@ -39536,7 +39796,7 @@ func (c *ZoneOperationsListCall) Do(opts ...googleapi.CallOption) (*OperationLis
// "type": "string"
// },
// "zone": {
- // "description": "Name of the zone scoping this request.",
+ // "description": "Name of the zone for request.",
// "location": "path",
// "pattern": "[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?",
// "required": true,
@@ -39588,7 +39848,8 @@ type ZonesGetCall struct {
ctx_ context.Context
}
-// Get: Returns the specified zone resource.
+// Get: Returns the specified Zone resource. Get a list of available
+// zones by making a list() request.
// For details, see https://cloud.google.com/compute/docs/reference/latest/zones/get
func (r *ZonesService) Get(project string, zone string) *ZonesGetCall {
c := &ZonesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -39680,7 +39941,7 @@ func (c *ZonesGetCall) Do(opts ...googleapi.CallOption) (*Zone, error) {
}
return ret, nil
// {
- // "description": "Returns the specified zone resource.",
+ // "description": "Returns the specified Zone resource. Get a list of available zones by making a list() request.",
// "httpMethod": "GET",
// "id": "compute.zones.get",
// "parameterOrder": [
@@ -39726,7 +39987,7 @@ type ZonesListCall struct {
ctx_ context.Context
}
-// List: Retrieves the list of zone resources available to the specified
+// List: Retrieves the list of Zone resources available to the specified
// project.
// For details, see https://cloud.google.com/compute/docs/reference/latest/zones/list
func (r *ZonesService) List(project string) *ZonesListCall {
@@ -39749,7 +40010,9 @@ func (r *ZonesService) List(project string) *ZonesListCall {
// as a regular expression using RE2 syntax. The literal value must
// match the entire field.
//
-// For example, filter=name ne example-instance.
+// For example, to filter for instances that do not have a name of
+// example-instance, you would use filter=name ne
+// example-instance.
//
// Compute Engine Beta API Only: If you use filtering in the Beta API,
// you can also filter on nested fields. For example, you could filter
@@ -39761,7 +40024,7 @@ func (r *ZonesService) List(project string) *ZonesListCall {
// The Beta API also supports filtering on multiple expressions by
// providing each separate expression within parentheses. For example,
// (scheduling.automaticRestart eq true) (zone eq us-central1-f).
-// Multiple expressions are treated as AND expressions meaning that
+// Multiple expressions are treated as AND expressions, meaning that
// resources must match all expressions to pass the filters.
func (c *ZonesListCall) Filter(filter string) *ZonesListCall {
c.urlParams_.Set("filter", filter)
@@ -39769,10 +40032,10 @@ func (c *ZonesListCall) Filter(filter string) *ZonesListCall {
}
// MaxResults sets the optional parameter "maxResults": The maximum
-// number of results per page that Compute Engine should return. If the
-// number of available results is larger than maxResults, Compute Engine
-// returns a nextPageToken that can be used to get the next page of
-// results in subsequent list requests.
+// number of results per page that should be returned. If the number of
+// available results is larger than maxResults, Compute Engine returns a
+// nextPageToken that can be used to get the next page of results in
+// subsequent list requests.
func (c *ZonesListCall) MaxResults(maxResults int64) *ZonesListCall {
c.urlParams_.Set("maxResults", fmt.Sprint(maxResults))
return c
@@ -39868,7 +40131,7 @@ func (c *ZonesListCall) Do(opts ...googleapi.CallOption) (*ZoneList, error) {
}
return ret, nil
// {
- // "description": "Retrieves the list of zone resources available to the specified project.",
+ // "description": "Retrieves the list of Zone resources available to the specified project.",
// "httpMethod": "GET",
// "id": "compute.zones.list",
// "parameterOrder": [
@@ -39876,13 +40139,13 @@ func (c *ZonesListCall) Do(opts ...googleapi.CallOption) (*ZoneList, error) {
// ],
// "parameters": {
// "filter": {
- // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions meaning that resources must match all expressions to pass the filters.",
+ // "description": "Sets a filter expression for filtering listed resources, in the form filter={expression}. Your {expression} must be in the format: field_name comparison_string literal_string.\n\nThe field_name is the name of the field you want to compare. Only atomic field types are supported (string, number, boolean). The comparison_string must be either eq (equals) or ne (not equals). The literal_string is the string value to filter to. The literal value must be valid for the type of field you are filtering by (string, number, boolean). For string fields, the literal value is interpreted as a regular expression using RE2 syntax. The literal value must match the entire field.\n\nFor example, to filter for instances that do not have a name of example-instance, you would use filter=name ne example-instance.\n\nCompute Engine Beta API Only: If you use filtering in the Beta API, you can also filter on nested fields. For example, you could filter on instances that have set the scheduling.automaticRestart field to true. In particular, use filtering on nested fields to take advantage of instance labels to organize and filter results based on label values.\n\nThe Beta API also supports filtering on multiple expressions by providing each separate expression within parentheses. For example, (scheduling.automaticRestart eq true) (zone eq us-central1-f). Multiple expressions are treated as AND expressions, meaning that resources must match all expressions to pass the filters.",
// "location": "query",
// "type": "string"
// },
// "maxResults": {
// "default": "500",
- // "description": "The maximum number of results per page that Compute Engine should return. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
+ // "description": "The maximum number of results per page that should be returned. If the number of available results is larger than maxResults, Compute Engine returns a nextPageToken that can be used to get the next page of results in subsequent list requests.",
// "format": "uint32",
// "location": "query",
// "maximum": "500",
diff --git a/vendor/google.golang.org/api/container/v1/container-api.json b/vendor/google.golang.org/api/container/v1/container-api.json
index 5cbdf13f8f35..95dcdfeb5cb9 100644
--- a/vendor/google.golang.org/api/container/v1/container-api.json
+++ b/vendor/google.golang.org/api/container/v1/container-api.json
@@ -1,13 +1,13 @@
{
"kind": "discovery#restDescription",
- "etag": "\"bRFOOrZKfO9LweMbPqu0kcu6De8/-5Ir9-bAl4HnPM8XDQ5ycW_gSZQ\"",
+ "etag": "\"bRFOOrZKfO9LweMbPqu0kcu6De8/OU95LyfeHZcrBJDajlkYXi0P4Wo\"",
"discoveryVersion": "v1",
"id": "container:v1",
"name": "container",
"version": "v1",
- "revision": "20150603",
+ "revision": "20160321",
"title": "Google Container Engine API",
- "description": "The Google Container Engine API is used for building and managing container based applications, powered by the open source Kubernetes technology.",
+ "description": "Builds and manages clusters that run container-based applications, powered by open source Kubernetes technology.",
"ownerDomain": "google.com",
"ownerName": "Google",
"icons": {
@@ -121,6 +121,13 @@
"items": {
"$ref": "Cluster"
}
+ },
+ "missingZones": {
+ "type": "array",
+ "description": "If any zones are listed here, the list of clusters returned may be missing those zones.",
+ "items": {
+ "type": "string"
+ }
}
}
},
@@ -139,33 +146,41 @@
},
"initialNodeCount": {
"type": "integer",
- "description": "The number of nodes to create in this cluster. You must ensure that your Compute Engine [resource quota](/compute/docs/resource-quotas) is sufficient for this number of instances. You must also have available firewall and routes quota.",
+ "description": "The number of nodes to create in this cluster. You must ensure that your Compute Engine resource quota is sufficient for this number of instances. You must also have available firewall and routes quota. For requests, this field should only be used in lieu of a \"node_pool\" object, since this configuration (along with the \"node_config\") will be used to create a \"NodePool\" object with an auto-generated name. Do not use this and a node_pool at the same time.",
"format": "int32"
},
"nodeConfig": {
"$ref": "NodeConfig",
- "description": "Parameters used in creating the cluster's nodes. See the descriptions of the child properties of `nodeConfig`. If unspecified, the defaults for all child properties are used."
+ "description": "Parameters used in creating the cluster's nodes. See `nodeConfig` for the description of its properties. For requests, this field should only be used in lieu of a \"node_pool\" object, since this configuration (along with the \"initial_node_count\") will be used to create a \"NodePool\" object with an auto-generated name. Do not use this and a node_pool at the same time. For responses, this field will be populated with the node configuration of the first node pool. If unspecified, the defaults are used."
},
"masterAuth": {
"$ref": "MasterAuth",
- "description": "The authentication information for accessing the master."
+ "description": "The authentication information for accessing the master endpoint."
},
"loggingService": {
"type": "string",
- "description": "The logging service that the cluster should write logs to. Currently available options: * \"logging.googleapis.com\" - the Google Cloud Logging service * \"none\" - no logs will be exported from the cluster * \"\" - default value; the default is \"logging.googleapis.com\""
+ "description": "The logging service the cluster should use to write logs. Currently available options: * `logging.googleapis.com` - the Google Cloud Logging service. * `none` - no logs will be exported from the cluster. * if left as an empty string,`logging.googleapis.com` will be used."
},
"monitoringService": {
"type": "string",
- "description": "The monitoring service that the cluster should write metrics to. Currently available options: * \"monitoring.googleapis.com\" - the Google Cloud Monitoring service * \"none\" - no metrics will be exported from the cluster * \"\" - default value; the default is \"monitoring.googleapis.com\""
+ "description": "The monitoring service the cluster should use to write metrics. Currently available options: * `monitoring.googleapis.com` - the Google Cloud Monitoring service. * `none` - no metrics will be exported from the cluster. * if left as an empty string, `monitoring.googleapis.com` will be used."
},
"network": {
"type": "string",
- "description": "The name of the Google Compute Engine [network](/compute/docs/networking#networks_1) to which the cluster is connected. If left unspecified, the \"default\" network will be used."
+ "description": "The name of the Google Compute Engine [network](/compute/docs/networks-and-firewalls#networks) to which the cluster is connected. If left unspecified, the `default` network will be used."
},
"clusterIpv4Cidr": {
"type": "string",
"description": "The IP address range of the container pods in this cluster, in [CIDR](http://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation (e.g. `10.96.0.0/14`). Leave blank to have one automatically chosen or specify a `/14` block in `10.0.0.0/8`."
},
+ "addonsConfig": {
+ "$ref": "AddonsConfig",
+ "description": "Configurations for the various addons available to run in the cluster."
+ },
+ "subnetwork": {
+ "type": "string",
+ "description": "The name of the Google Compute Engine [subnetwork](/compute/docs/subnetworks) to which the cluster is connected."
+ },
"selfLink": {
"type": "string",
"description": "[Output only] Server-defined URL for the resource."
@@ -176,11 +191,11 @@
},
"endpoint": {
"type": "string",
- "description": "[Output only] The IP address of this cluster's Kubernetes master endpoint. The endpoint can be accessed from the internet at `https://username:password@endpoint/`. See the `masterAuth` property of this resource for username and password information."
+ "description": "[Output only] The IP address of this cluster's master endpoint. The endpoint can be accessed from the internet at `https://username:password@endpoint/`. See the `masterAuth` property of this resource for username and password information."
},
"initialClusterVersion": {
"type": "string",
- "description": "[Output only] The software version of Kubernetes master and kubelets used in the cluster when it was first created. The version can be upgraded over time."
+ "description": "[Output only] The software version of the master endpoint and kubelets used in the cluster when it was first created. The version can be upgraded over time."
},
"currentMasterVersion": {
"type": "string",
@@ -188,7 +203,7 @@
},
"currentNodeVersion": {
"type": "string",
- "description": "[Output only] The current version of the node software components. If they are currently at different versions because they're in the process of being upgraded, this reflects the minimum version of any of them."
+ "description": "[Output only] The current version of the node software components. If they are currently at multiple versions because they're in the process of being upgraded, this reflects the minimum version of all nodes."
},
"createTime": {
"type": "string",
@@ -212,12 +227,12 @@
},
"nodeIpv4CidrSize": {
"type": "integer",
- "description": "[Output only] The size of the address space on each node for hosting containers. This is provisioned from within the container_ipv4_cidr range.",
+ "description": "[Output only] The size of the address space on each node for hosting containers. This is provisioned from within the `container_ipv4_cidr` range.",
"format": "int32"
},
"servicesIpv4Cidr": {
"type": "string",
- "description": "[Output only] The IP address range of the Kubernetes services in this cluster, in [CIDR](http://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation (e.g. `1.2.3.4/29`). Service addresses are typically put in the last /16 from the container CIDR."
+ "description": "[Output only] The IP address range of the Kubernetes services in this cluster, in [CIDR](http://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation (e.g. `1.2.3.4/29`). Service addresses are typically put in the last `/16` from the container CIDR."
},
"instanceGroupUrls": {
"type": "array",
@@ -225,13 +240,18 @@
"items": {
"type": "string"
}
+ },
+ "currentNodeCount": {
+ "type": "integer",
+ "description": "[Output only] The number of nodes currently in the cluster.",
+ "format": "int32"
}
}
},
"NodeConfig": {
"id": "NodeConfig",
"type": "object",
- "description": "Per-node parameters.",
+ "description": "Parameters that describe the nodes in a cluster.",
"properties": {
"machineType": {
"type": "string",
@@ -244,10 +264,17 @@
},
"oauthScopes": {
"type": "array",
- "description": "The set of Google API scopes to be made available on all of the node VMs under the \"default\" service account. The following scopes are recommended, but not required, and by default are not included: * `https://www.googleapis.com/auth/compute` is required for mounting persistent storage on your nodes. * `https://www.googleapis.com/auth/devstorage.read_only` is required for communicating with *gcr.io*. If unspecified, no scopes are added.",
+ "description": "The set of Google API scopes to be made available on all of the node VMs under the \"default\" service account. The following scopes are recommended, but not required, and by default are not included: * `https://www.googleapis.com/auth/compute` is required for mounting persistent storage on your nodes. * `https://www.googleapis.com/auth/devstorage.read_only` is required for communicating with **gcr.io** (the [Google Container Registry](/container-registry/)). If unspecified, no scopes are added, unless Cloud Logging or Cloud Monitoring are enabled, in which case their required scopes will be added.",
"items": {
"type": "string"
}
+ },
+ "metadata": {
+ "type": "object",
+ "description": "The metadata key/value pairs assigned to instances in the cluster. Keys must conform to the regexp [a-zA-Z0-9-_]+ and be less than 128 bytes in length. These are reflected as part of a URL in the metadata server. Additionally, to avoid ambiguity, keys must not conflict with any other metadata keys for the project or be one of the four reserved keys: \"instance-template\", \"kube-env\", \"startup-script\", and \"user-data\" Values are free-form strings, and only have meaning as interpreted by the image running in the instance. The only restriction placed on them is that each value's size must be less than or equal to 32 KB. The total size of all keys and values must be less than 512 KB.",
+ "additionalProperties": {
+ "type": "string"
+ }
}
}
},
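
Read together, the Cluster and NodeConfig schemas above describe the request body for clusters.create when initialNodeCount and nodeConfig are supplied in lieu of a node_pool. The sketch below shows one way such a request might be built with the Go client generated from this discovery document; the CreateClusterRequest wrapper, the cluster's name field, and the Projects.Zones.Clusters.Create method are not visible in this hunk and are assumed here.

package main

import (
	"log"

	"golang.org/x/net/context"
	"golang.org/x/oauth2/google"
	container "google.golang.org/api/container/v1"
)

func main() {
	ctx := context.Background()
	client, err := google.DefaultClient(ctx, container.CloudPlatformScope)
	if err != nil {
		log.Fatal(err)
	}
	svc, err := container.New(client)
	if err != nil {
		log.Fatal(err)
	}

	req := &container.CreateClusterRequest{
		Cluster: &container.Cluster{
			Name:             "example-cluster", // assumed field, not shown in this hunk
			InitialNodeCount: 3,                 // used in lieu of an explicit node_pool
			NodeConfig: &container.NodeConfig{
				MachineType: "n1-standard-1",
				OauthScopes: []string{
					"https://www.googleapis.com/auth/compute",
					"https://www.googleapis.com/auth/devstorage.read_only",
				},
				// Keys must match [a-zA-Z0-9-_]+ and avoid the reserved keys
				// listed in the schema; values are free-form strings.
				Metadata: map[string]string{"team": "infra"},
			},
			LoggingService:    "logging.googleapis.com",
			MonitoringService: "monitoring.googleapis.com",
			Network:           "default",
		},
	}
	op, err := svc.Projects.Zones.Clusters.Create("my-project", "us-central1-a", req).Do()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("operation %s is %s", op.Name, op.Status)
}
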
@@ -258,23 +285,60 @@
"properties": {
"username": {
"type": "string",
- "description": "The username to use for HTTP basic authentication when accessing the Kubernetes master endpoint."
+ "description": "The username to use for HTTP basic authentication to the master endpoint."
},
"password": {
"type": "string",
- "description": "The password to use for HTTP basic authentication when accessing the Kubernetes master endpoint. Because the master endpoint is open to the internet, you should create a strong password."
+ "description": "The password to use for HTTP basic authentication to the master endpoint. Because the master endpoint is open to the Internet, you should create a strong password."
},
"clusterCaCertificate": {
"type": "string",
- "description": "[Output only] Base64 encoded public certificate that is the root of trust for the cluster."
+ "description": "[Output only] Base64-encoded public certificate that is the root of trust for the cluster."
},
"clientCertificate": {
"type": "string",
- "description": "[Output only] Base64 encoded public certificate used by clients to authenticate to the cluster endpoint."
+ "description": "[Output only] Base64-encoded public certificate used by clients to authenticate to the cluster endpoint."
},
"clientKey": {
"type": "string",
- "description": "[Output only] Base64 encoded private key used by clients to authenticate to the cluster endpoint."
+ "description": "[Output only] Base64-encoded private key used by clients to authenticate to the cluster endpoint."
+ }
+ }
+ },
+ "AddonsConfig": {
+ "id": "AddonsConfig",
+ "type": "object",
+ "description": "Configuration for the addons that can be automatically spun up in the cluster, enabling additional functionality.",
+ "properties": {
+ "httpLoadBalancing": {
+ "$ref": "HttpLoadBalancing",
+ "description": "Configuration for the HTTP (L7) load balancing controller addon, which makes it easy to set up HTTP load balancers for services in a cluster."
+ },
+ "horizontalPodAutoscaling": {
+ "$ref": "HorizontalPodAutoscaling",
+ "description": "Configuration for the horizontal pod autoscaling feature, which increases or decreases the number of replica pods a replication controller has based on the resource usage of the existing pods."
+ }
+ }
+ },
+ "HttpLoadBalancing": {
+ "id": "HttpLoadBalancing",
+ "type": "object",
+ "description": "Configuration options for the HTTP (L7) load balancing controller addon, which makes it easy to set up HTTP load balancers for services in a cluster.",
+ "properties": {
+ "disabled": {
+ "type": "boolean",
+ "description": "Whether the HTTP Load Balancing controller is enabled in the cluster. When enabled, it runs a small pod in the cluster that manages the load balancers."
+ }
+ }
+ },
+ "HorizontalPodAutoscaling": {
+ "id": "HorizontalPodAutoscaling",
+ "type": "object",
+ "description": "Configuration options for the horizontal pod autoscaling feature, which increases or decreases the number of replica pods a replication controller has based on the resource usage of the existing pods.",
+ "properties": {
+ "disabled": {
+ "type": "boolean",
+ "description": "Whether the Horizontal Pod Autoscaling feature is enabled in the cluster. When enabled, it ensures that a Heapster pod is running in the cluster, which is also used by the Cloud Monitoring service."
}
}
},
@@ -292,7 +356,7 @@
"Operation": {
"id": "Operation",
"type": "object",
- "description": "Defines the operation resource. All fields are output only.",
+ "description": "This operation resource represents operations that may have happened or are happening on the cluster. All fields are output only.",
"properties": {
"name": {
"type": "string",
@@ -311,7 +375,10 @@
"DELETE_CLUSTER",
"UPGRADE_MASTER",
"UPGRADE_NODES",
- "REPAIR_CLUSTER"
+ "REPAIR_CLUSTER",
+ "UPDATE_CLUSTER",
+ "CREATE_NODE_POOL",
+ "DELETE_NODE_POOL"
]
},
"status": {
@@ -324,6 +391,10 @@
"DONE"
]
},
+ "detail": {
+ "type": "string",
+ "description": "Detailed operation progress, if available."
+ },
"statusMessage": {
"type": "string",
"description": "If an error has occurred, a textual description of the error."
@@ -341,7 +412,7 @@
"UpdateClusterRequest": {
"id": "UpdateClusterRequest",
"type": "object",
- "description": "UpdateClusterRequest updates a cluster.",
+ "description": "UpdateClusterRequest updates the settings of a cluster.",
"properties": {
"update": {
"$ref": "ClusterUpdate",
@@ -352,11 +423,23 @@
"ClusterUpdate": {
"id": "ClusterUpdate",
"type": "object",
- "description": "ClusterUpdate describes an update to the cluster.",
+ "description": "ClusterUpdate describes an update to the cluster. Exactly one update can be applied to a cluster with each request, so at most one field can be provided.",
"properties": {
"desiredNodeVersion": {
"type": "string",
- "description": "The Kubernetes version to change the nodes to (typically an upgrade). Use \"-\" to upgrade to the latest version supported by the server."
+ "description": "The Kubernetes version to change the nodes to (typically an upgrade). Use `-` to upgrade to the latest version supported by the server."
+ },
+ "desiredMonitoringService": {
+ "type": "string",
+ "description": "The monitoring service the cluster should use to write metrics. Currently available options: * \"monitoring.googleapis.com\" - the Google Cloud Monitoring service * \"none\" - no metrics will be exported from the cluster"
+ },
+ "desiredAddonsConfig": {
+ "$ref": "AddonsConfig",
+ "description": "Configurations for the various addons available to run in the cluster."
+ },
+ "desiredMasterVersion": {
+ "type": "string",
+ "description": "The Kubernetes version to change the master to. The only valid value is the latest supported version. Use \"-\" to have the server automatically select the latest version."
}
}
},
@@ -371,17 +454,24 @@
"items": {
"$ref": "Operation"
}
+ },
+ "missingZones": {
+ "type": "array",
+ "description": "If any zones are listed here, the list of operations returned may be missing the operations from those zones.",
+ "items": {
+ "type": "string"
+ }
}
}
},
"ServerConfig": {
"id": "ServerConfig",
"type": "object",
- "description": "Container Engine Server configuration.",
+ "description": "Container Engine service configuration.",
"properties": {
"defaultClusterVersion": {
"type": "string",
- "description": "What version this server deploys by default."
+ "description": "Version of Kubernetes the service deploys by default."
},
"validNodeVersions": {
"type": "array",
@@ -406,13 +496,13 @@
"parameters": {
"projectId": {
"type": "string",
- "description": "The Google Developers Console [project ID or project number](https://developers.google.com/console/help/new/#projectnumber).",
+ "description": "The Google Developers Console [project ID or project number](https://support.google.com/cloud/answer/6158840).",
"required": true,
"location": "path"
},
"zone": {
"type": "string",
- "description": "The name of the Google Compute Engine [zone](/compute/docs/zones#available) to return operations for, or \"-\" for all zones.",
+ "description": "The name of the Google Compute Engine [zone](/compute/docs/zones#available) to return operations for.",
"required": true,
"location": "path"
}
@@ -440,7 +530,7 @@
"parameters": {
"projectId": {
"type": "string",
- "description": "The Google Developers Console [project ID or project number](https://developers.google.com/console/help/new/#projectnumber).",
+ "description": "The Google Developers Console [project ID or project number](https://support.google.com/cloud/answer/6158840).",
"required": true,
"location": "path"
},
@@ -466,11 +556,11 @@
"id": "container.projects.zones.clusters.get",
"path": "v1/projects/{projectId}/zones/{zone}/clusters/{clusterId}",
"httpMethod": "GET",
- "description": "Gets a specific cluster.",
+ "description": "Gets the details of a specific cluster.",
"parameters": {
"projectId": {
"type": "string",
- "description": "The Google Developers Console [project ID or project number](https://developers.google.com/console/help/new/#projectnumber).",
+ "description": "The Google Developers Console [project ID or project number](https://support.google.com/cloud/answer/6158840).",
"required": true,
"location": "path"
},
@@ -503,11 +593,11 @@
"id": "container.projects.zones.clusters.create",
"path": "v1/projects/{projectId}/zones/{zone}/clusters",
"httpMethod": "POST",
- "description": "Creates a cluster, consisting of the specified number and type of Google Compute Engine instances, plus a Kubernetes master endpoint. By default, the cluster is created in the project's [default network](/compute/docs/networking#networks_1). One firewall is added for the cluster. After cluster creation, the cluster creates routes for each node to allow the containers on that node to communicate with all other instances in the cluster. Finally, an entry is added to the project's global metadata indicating which CIDR range is being used by the cluster.",
+ "description": "Creates a cluster, consisting of the specified number and type of Google Compute Engine instances. By default, the cluster is created in the project's [default network](/compute/docs/networks-and-firewalls#networks). One firewall is added for the cluster. After cluster creation, the cluster creates routes for each node to allow the containers on that node to communicate with all other instances in the cluster. Finally, an entry is added to the project's global metadata indicating which CIDR range is being used by the cluster.",
"parameters": {
"projectId": {
"type": "string",
- "description": "The Google Developers Console [project ID or project number](https://developers.google.com/console/help/new/#projectnumber).",
+ "description": "The Google Developers Console [project ID or project number](https://support.google.com/cloud/answer/6158840).",
"required": true,
"location": "path"
},
@@ -536,11 +626,11 @@
"id": "container.projects.zones.clusters.update",
"path": "v1/projects/{projectId}/zones/{zone}/clusters/{clusterId}",
"httpMethod": "PUT",
- "description": "Update settings of a specific cluster.",
+ "description": "Updates the settings of a specific cluster.",
"parameters": {
"projectId": {
"type": "string",
- "description": "The Google Developers Console [project ID or project number](https://developers.google.com/console/help/new/#projectnumber).",
+ "description": "The Google Developers Console [project ID or project number](https://support.google.com/cloud/answer/6158840).",
"required": true,
"location": "path"
},
@@ -576,11 +666,11 @@
"id": "container.projects.zones.clusters.delete",
"path": "v1/projects/{projectId}/zones/{zone}/clusters/{clusterId}",
"httpMethod": "DELETE",
- "description": "Deletes the cluster, including the Kubernetes endpoint and all worker nodes. Firewalls and routes that were configured during cluster creation are also deleted.",
+ "description": "Deletes the cluster, including the Kubernetes endpoint and all worker nodes. Firewalls and routes that were configured during cluster creation are also deleted. Other Google Compute Engine resources that might be in use by the cluster (e.g. load balancer resources) will not be deleted if they weren't present at the initial create time.",
"parameters": {
"projectId": {
"type": "string",
- "description": "The Google Developers Console [project ID or project number](https://developers.google.com/console/help/new/#projectnumber).",
+ "description": "The Google Developers Console [project ID or project number](https://support.google.com/cloud/answer/6158840).",
"required": true,
"location": "path"
},
@@ -621,13 +711,13 @@
"parameters": {
"projectId": {
"type": "string",
- "description": "The Google Developers Console [project ID or project number](https://developers.google.com/console/help/new/#projectnumber).",
+ "description": "The Google Developers Console [project ID or project number](https://support.google.com/cloud/answer/6158840).",
"required": true,
"location": "path"
},
"zone": {
"type": "string",
- "description": "The name of the Google Compute Engine [zone](/compute/docs/zones#available) to return operations for, or \"-\" for all zones.",
+ "description": "The name of the Google Compute Engine [zone](/compute/docs/zones#available) to return operations for, or `-` for all zones.",
"required": true,
"location": "path"
}
@@ -651,7 +741,7 @@
"parameters": {
"projectId": {
"type": "string",
- "description": "The Google Developers Console [project ID or project number](https://developers.google.com/console/help/new/#projectnumber).",
+ "description": "The Google Developers Console [project ID or project number](https://support.google.com/cloud/answer/6158840).",
"required": true,
"location": "path"
},
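
The schema changes above add `AddonsConfig` (with `HttpLoadBalancing` and `HorizontalPodAutoscaling`) and extend `ClusterUpdate`, which accepts at most one field per request. Below is a minimal sketch of driving these from the vendored Go client; the credential setup and the project/zone/cluster names are placeholders, not part of this patch.

```go
package main

import (
	"fmt"
	"log"

	"golang.org/x/net/context"
	"golang.org/x/oauth2/google"
	container "google.golang.org/api/container/v1"
)

func main() {
	ctx := context.Background()

	// Assumed: Application Default Credentials are available in the environment.
	client, err := google.DefaultClient(ctx, container.CloudPlatformScope)
	if err != nil {
		log.Fatal(err)
	}
	svc, err := container.New(client)
	if err != nil {
		log.Fatal(err)
	}

	// ClusterUpdate accepts exactly one field per request; this call only
	// changes the addons configuration (disabling the HTTP load balancer addon).
	// A master upgrade (DesiredMasterVersion) would need a separate request.
	req := &container.UpdateClusterRequest{
		Update: &container.ClusterUpdate{
			DesiredAddonsConfig: &container.AddonsConfig{
				HttpLoadBalancing: &container.HttpLoadBalancing{Disabled: true},
			},
		},
	}

	// "my-project", "us-central1-a" and "my-cluster" are placeholder names.
	op, err := svc.Projects.Zones.Clusters.Update("my-project", "us-central1-a", "my-cluster", req).Do()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("started %s operation %s\n", op.OperationType, op.Name)
}
```
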
diff --git a/vendor/google.golang.org/api/container/v1/container-gen.go b/vendor/google.golang.org/api/container/v1/container-gen.go
index 27e0b70dd4d1..80e1806f9513 100644
--- a/vendor/google.golang.org/api/container/v1/container-gen.go
+++ b/vendor/google.golang.org/api/container/v1/container-gen.go
@@ -120,8 +120,42 @@ type ProjectsZonesOperationsService struct {
s *Service
}
+// AddonsConfig: Configuration for the addons that can be automatically
+// spun up in the cluster, enabling additional functionality.
+type AddonsConfig struct {
+ // HorizontalPodAutoscaling: Configuration for the horizontal pod
+ // autoscaling feature, which increases or decreases the number of
+ // replica pods a replication controller has based on the resource usage
+ // of the existing pods.
+ HorizontalPodAutoscaling *HorizontalPodAutoscaling `json:"horizontalPodAutoscaling,omitempty"`
+
+ // HttpLoadBalancing: Configuration for the HTTP (L7) load balancing
+ // controller addon, which makes it easy to set up HTTP load balancers
+ // for services in a cluster.
+ HttpLoadBalancing *HttpLoadBalancing `json:"httpLoadBalancing,omitempty"`
+
+ // ForceSendFields is a list of field names (e.g.
+ // "HorizontalPodAutoscaling") to unconditionally include in API
+ // requests. By default, fields with empty values are omitted from API
+ // requests. However, any non-pointer, non-interface field appearing in
+ // ForceSendFields will be sent to the server regardless of whether the
+ // field is empty or not. This may be used to include empty fields in
+ // Patch requests.
+ ForceSendFields []string `json:"-"`
+}
+
+func (s *AddonsConfig) MarshalJSON() ([]byte, error) {
+ type noMethod AddonsConfig
+ raw := noMethod(*s)
+ return gensupport.MarshalJSON(raw, s.ForceSendFields)
+}
+
// Cluster: A Google Container Engine cluster.
type Cluster struct {
+ // AddonsConfig: Configurations for the various addons available to run
+ // in the cluster.
+ AddonsConfig *AddonsConfig `json:"addonsConfig,omitempty"`
+
// ClusterIpv4Cidr: The IP address range of the container pods in this
// cluster, in
// [CIDR](http://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing)
@@ -137,51 +171,60 @@ type Cluster struct {
// the master endpoint.
CurrentMasterVersion string `json:"currentMasterVersion,omitempty"`
+ // CurrentNodeCount: [Output only] The number of nodes currently in the
+ // cluster.
+ CurrentNodeCount int64 `json:"currentNodeCount,omitempty"`
+
// CurrentNodeVersion: [Output only] The current version of the node
- // software components. If they are currently at different versions
+ // software components. If they are currently at multiple versions
// because they're in the process of being upgraded, this reflects the
- // minimum version of any of them.
+ // minimum version of all nodes.
CurrentNodeVersion string `json:"currentNodeVersion,omitempty"`
// Description: An optional description of this cluster.
Description string `json:"description,omitempty"`
- // Endpoint: [Output only] The IP address of this cluster's Kubernetes
- // master endpoint. The endpoint can be accessed from the internet at
+ // Endpoint: [Output only] The IP address of this cluster's master
+ // endpoint. The endpoint can be accessed from the internet at
// `https://username:password@endpoint/`. See the `masterAuth` property
// of this resource for username and password information.
Endpoint string `json:"endpoint,omitempty"`
- // InitialClusterVersion: [Output only] The software version of
- // Kubernetes master and kubelets used in the cluster when it was first
+ // InitialClusterVersion: [Output only] The software version of the
+ // master endpoint and kubelets used in the cluster when it was first
// created. The version can be upgraded over time.
InitialClusterVersion string `json:"initialClusterVersion,omitempty"`
// InitialNodeCount: The number of nodes to create in this cluster. You
- // must ensure that your Compute Engine [resource
- // quota](/compute/docs/resource-quotas) is sufficient for this number
- // of instances. You must also have available firewall and routes quota.
+ // must ensure that your Compute Engine resource quota is sufficient for
+ // this number of instances. You must also have available firewall and
+ // routes quota. For requests, this field should only be used in lieu of
+ // a "node_pool" object, since this configuration (along with the
+ // "node_config") will be used to create a "NodePool" object with an
+ // auto-generated name. Do not use this and a node_pool at the same
+ // time.
InitialNodeCount int64 `json:"initialNodeCount,omitempty"`
// InstanceGroupUrls: [Output only] The resource URLs of [instance
// groups](/compute/docs/instance-groups/) associated with this cluster.
InstanceGroupUrls []string `json:"instanceGroupUrls,omitempty"`
- // LoggingService: The logging service that the cluster should write
- // logs to. Currently available options: * "logging.googleapis.com" -
- // the Google Cloud Logging service * "none" - no logs will be exported
- // from the cluster * "" - default value; the default is
- // "logging.googleapis.com"
+ // LoggingService: The logging service the cluster should use to write
+ // logs. Currently available options: * `logging.googleapis.com` - the
+ // Google Cloud Logging service. * `none` - no logs will be exported
+ // from the cluster. * if left as an empty
+ // string, `logging.googleapis.com` will be used.
LoggingService string `json:"loggingService,omitempty"`
- // MasterAuth: The authentication information for accessing the master.
+ // MasterAuth: The authentication information for accessing the master
+ // endpoint.
MasterAuth *MasterAuth `json:"masterAuth,omitempty"`
- // MonitoringService: The monitoring service that the cluster should
- // write metrics to. Currently available options: *
- // "monitoring.googleapis.com" - the Google Cloud Monitoring service *
- // "none" - no metrics will be exported from the cluster * "" - default
- // value; the default is "monitoring.googleapis.com"
+ // MonitoringService: The monitoring service the cluster should use to
+ // write metrics. Currently available options: *
+ // `monitoring.googleapis.com` - the Google Cloud Monitoring service. *
+ // `none` - no metrics will be exported from the cluster. * if left as
+ // an empty string, `monitoring.googleapis.com` will be used.
MonitoringService string `json:"monitoringService,omitempty"`
// Name: The name of this cluster. The name must be unique within this
@@ -191,19 +234,24 @@ type Cluster struct {
Name string `json:"name,omitempty"`
// Network: The name of the Google Compute Engine
- // [network](/compute/docs/networking#networks_1) to which the cluster
- // is connected. If left unspecified, the "default" network will be
- // used.
+ // [network](/compute/docs/networks-and-firewalls#networks) to which the
+ // cluster is connected. If left unspecified, the `default` network will
+ // be used.
Network string `json:"network,omitempty"`
- // NodeConfig: Parameters used in creating the cluster's nodes. See the
- // descriptions of the child properties of `nodeConfig`. If unspecified,
- // the defaults for all child properties are used.
+ // NodeConfig: Parameters used in creating the cluster's nodes. See
+ // `nodeConfig` for the description of its properties. For requests,
+ // this field should only be used in lieu of a "node_pool" object, since
+ // this configuration (along with the "initial_node_count") will be used
+ // to create a "NodePool" object with an auto-generated name. Do not use
+ // this and a node_pool at the same time. For responses, this field will
+ // be populated with the node configuration of the first node pool. If
+ // unspecified, the defaults are used.
NodeConfig *NodeConfig `json:"nodeConfig,omitempty"`
// NodeIpv4CidrSize: [Output only] The size of the address space on each
// node for hosting containers. This is provisioned from within the
- // container_ipv4_cidr range.
+ // `container_ipv4_cidr` range.
NodeIpv4CidrSize int64 `json:"nodeIpv4CidrSize,omitempty"`
// SelfLink: [Output only] Server-defined URL for the resource.
@@ -213,7 +261,7 @@ type Cluster struct {
// Kubernetes services in this cluster, in
// [CIDR](http://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing)
// notation (e.g. `1.2.3.4/29`). Service addresses are typically put in
- // the last /16 from the container CIDR.
+ // the last `/16` from the container CIDR.
ServicesIpv4Cidr string `json:"servicesIpv4Cidr,omitempty"`
// Status: [Output only] The current status of this cluster.
@@ -231,6 +279,11 @@ type Cluster struct {
// status of this cluster, if available.
StatusMessage string `json:"statusMessage,omitempty"`
+ // Subnetwork: The name of the Google Compute Engine
+ // [subnetwork](/compute/docs/subnetworks) to which the cluster is
+ // connected.
+ Subnetwork string `json:"subnetwork,omitempty"`
+
// Zone: [Output only] The name of the Google Compute Engine
// [zone](/compute/docs/zones#available) in which the cluster resides.
Zone string `json:"zone,omitempty"`
@@ -239,7 +292,7 @@ type Cluster struct {
// server.
googleapi.ServerResponse `json:"-"`
- // ForceSendFields is a list of field names (e.g. "ClusterIpv4Cidr") to
+ // ForceSendFields is a list of field names (e.g. "AddonsConfig") to
// unconditionally include in API requests. By default, fields with
// empty values are omitted from API requests. However, any non-pointer,
// non-interface field appearing in ForceSendFields will be sent to the
@@ -255,13 +308,30 @@ func (s *Cluster) MarshalJSON() ([]byte, error) {
}
// ClusterUpdate: ClusterUpdate describes an update to the cluster.
+// Exactly one update can be applied to a cluster with each request, so
+// at most one field can be provided.
type ClusterUpdate struct {
+ // DesiredAddonsConfig: Configurations for the various addons available
+ // to run in the cluster.
+ DesiredAddonsConfig *AddonsConfig `json:"desiredAddonsConfig,omitempty"`
+
+ // DesiredMasterVersion: The Kubernetes version to change the master to.
+ // The only valid value is the latest supported version. Use "-" to have
+ // the server automatically select the latest version.
+ DesiredMasterVersion string `json:"desiredMasterVersion,omitempty"`
+
+ // DesiredMonitoringService: The monitoring service the cluster should
+ // use to write metrics. Currently available options: *
+ // "monitoring.googleapis.com" - the Google Cloud Monitoring service *
+ // "none" - no metrics will be exported from the cluster
+ DesiredMonitoringService string `json:"desiredMonitoringService,omitempty"`
+
// DesiredNodeVersion: The Kubernetes version to change the nodes to
- // (typically an upgrade). Use "-" to upgrade to the latest version
+ // (typically an upgrade). Use `-` to upgrade to the latest version
// supported by the server.
DesiredNodeVersion string `json:"desiredNodeVersion,omitempty"`
- // ForceSendFields is a list of field names (e.g. "DesiredNodeVersion")
+ // ForceSendFields is a list of field names (e.g. "DesiredAddonsConfig")
// to unconditionally include in API requests. By default, fields with
// empty values are omitted from API requests. However, any non-pointer,
// non-interface field appearing in ForceSendFields will be sent to the
@@ -297,6 +367,56 @@ func (s *CreateClusterRequest) MarshalJSON() ([]byte, error) {
return gensupport.MarshalJSON(raw, s.ForceSendFields)
}
+// HorizontalPodAutoscaling: Configuration options for the horizontal
+// pod autoscaling feature, which increases or decreases the number of
+// replica pods a replication controller has based on the resource usage
+// of the existing pods.
+type HorizontalPodAutoscaling struct {
+ // Disabled: Whether the Horizontal Pod Autoscaling feature is enabled
+ // in the cluster. When enabled, it ensures that a Heapster pod is
+ // running in the cluster, which is also used by the Cloud Monitoring
+ // service.
+ Disabled bool `json:"disabled,omitempty"`
+
+ // ForceSendFields is a list of field names (e.g. "Disabled") to
+ // unconditionally include in API requests. By default, fields with
+ // empty values are omitted from API requests. However, any non-pointer,
+ // non-interface field appearing in ForceSendFields will be sent to the
+ // server regardless of whether the field is empty or not. This may be
+ // used to include empty fields in Patch requests.
+ ForceSendFields []string `json:"-"`
+}
+
+func (s *HorizontalPodAutoscaling) MarshalJSON() ([]byte, error) {
+ type noMethod HorizontalPodAutoscaling
+ raw := noMethod(*s)
+ return gensupport.MarshalJSON(raw, s.ForceSendFields)
+}
+
+// HttpLoadBalancing: Configuration options for the HTTP (L7) load
+// balancing controller addon, which makes it easy to set up HTTP load
+// balancers for services in a cluster.
+type HttpLoadBalancing struct {
+ // Disabled: Whether the HTTP Load Balancing controller is enabled in
+ // the cluster. When enabled, it runs a small pod in the cluster that
+ // manages the load balancers.
+ Disabled bool `json:"disabled,omitempty"`
+
+ // ForceSendFields is a list of field names (e.g. "Disabled") to
+ // unconditionally include in API requests. By default, fields with
+ // empty values are omitted from API requests. However, any non-pointer,
+ // non-interface field appearing in ForceSendFields will be sent to the
+ // server regardless of whether the field is empty or not. This may be
+ // used to include empty fields in Patch requests.
+ ForceSendFields []string `json:"-"`
+}
+
+func (s *HttpLoadBalancing) MarshalJSON() ([]byte, error) {
+ type noMethod HttpLoadBalancing
+ raw := noMethod(*s)
+ return gensupport.MarshalJSON(raw, s.ForceSendFields)
+}
+
// ListClustersResponse: ListClustersResponse is the result of
// ListClustersRequest.
type ListClustersResponse struct {
@@ -304,6 +424,10 @@ type ListClustersResponse struct {
// across all ones.
Clusters []*Cluster `json:"clusters,omitempty"`
+ // MissingZones: If any zones are listed here, the list of clusters
+ // returned may be missing those zones.
+ MissingZones []string `json:"missingZones,omitempty"`
+
// ServerResponse contains the HTTP response code and headers from the
// server.
googleapi.ServerResponse `json:"-"`
@@ -326,6 +450,10 @@ func (s *ListClustersResponse) MarshalJSON() ([]byte, error) {
// ListOperationsResponse: ListOperationsResponse is the result of
// ListOperationsRequest.
type ListOperationsResponse struct {
+ // MissingZones: If any zones are listed here, the list of operations
+ // returned may be missing the operations from those zones.
+ MissingZones []string `json:"missingZones,omitempty"`
+
// Operations: A list of operations in the project in the specified
// zone.
Operations []*Operation `json:"operations,omitempty"`
@@ -334,7 +462,7 @@ type ListOperationsResponse struct {
// server.
googleapi.ServerResponse `json:"-"`
- // ForceSendFields is a list of field names (e.g. "Operations") to
+ // ForceSendFields is a list of field names (e.g. "MissingZones") to
// unconditionally include in API requests. By default, fields with
// empty values are omitted from API requests. However, any non-pointer,
// non-interface field appearing in ForceSendFields will be sent to the
@@ -353,25 +481,25 @@ func (s *ListOperationsResponse) MarshalJSON() ([]byte, error) {
// endpoint. Authentication can be done using HTTP basic auth or using
// client certificates.
type MasterAuth struct {
- // ClientCertificate: [Output only] Base64 encoded public certificate
+ // ClientCertificate: [Output only] Base64-encoded public certificate
// used by clients to authenticate to the cluster endpoint.
ClientCertificate string `json:"clientCertificate,omitempty"`
- // ClientKey: [Output only] Base64 encoded private key used by clients
+ // ClientKey: [Output only] Base64-encoded private key used by clients
// to authenticate to the cluster endpoint.
ClientKey string `json:"clientKey,omitempty"`
- // ClusterCaCertificate: [Output only] Base64 encoded public certificate
+ // ClusterCaCertificate: [Output only] Base64-encoded public certificate
// that is the root of trust for the cluster.
ClusterCaCertificate string `json:"clusterCaCertificate,omitempty"`
- // Password: The password to use for HTTP basic authentication when
- // accessing the Kubernetes master endpoint. Because the master endpoint
- // is open to the internet, you should create a strong password.
+ // Password: The password to use for HTTP basic authentication to the
+ // master endpoint. Because the master endpoint is open to the Internet,
+ // you should create a strong password.
Password string `json:"password,omitempty"`
- // Username: The username to use for HTTP basic authentication when
- // accessing the Kubernetes master endpoint.
+ // Username: The username to use for HTTP basic authentication to the
+ // master endpoint.
Username string `json:"username,omitempty"`
// ForceSendFields is a list of field names (e.g. "ClientCertificate")
@@ -389,7 +517,7 @@ func (s *MasterAuth) MarshalJSON() ([]byte, error) {
return gensupport.MarshalJSON(raw, s.ForceSendFields)
}
-// NodeConfig: Per-node parameters.
+// NodeConfig: Parameters that describe the nodes in a cluster.
type NodeConfig struct {
// DiskSizeGb: Size of the disk attached to each node, specified in GB.
// The smallest allowed disk size is 10GB. If unspecified, the default
@@ -401,13 +529,29 @@ type NodeConfig struct {
// unspecified, the default machine type is `n1-standard-1`.
MachineType string `json:"machineType,omitempty"`
+ // Metadata: The metadata key/value pairs assigned to instances in the
+ // cluster. Keys must conform to the regexp [a-zA-Z0-9-_]+ and be less
+ // than 128 bytes in length. These are reflected as part of a URL in the
+ // metadata server. Additionally, to avoid ambiguity, keys must not
+ // conflict with any other metadata keys for the project or be one of
+ // the four reserved keys: "instance-template", "kube-env",
+ // "startup-script", and "user-data" Values are free-form strings, and
+ // only have meaning as interpreted by the image running in the
+ // instance. The only restriction placed on them is that each value's
+ // size must be less than or equal to 32 KB. The total size of all keys
+ // and values must be less than 512 KB.
+ Metadata map[string]string `json:"metadata,omitempty"`
+
// OauthScopes: The set of Google API scopes to be made available on all
// of the node VMs under the "default" service account. The following
// scopes are recommended, but not required, and by default are not
// included: * `https://www.googleapis.com/auth/compute` is required for
// mounting persistent storage on your nodes. *
// `https://www.googleapis.com/auth/devstorage.read_only` is required
- // for communicating with *gcr.io*. If unspecified, no scopes are added.
+ // for communicating with **gcr.io** (the [Google Container
+ // Registry](/container-registry/)). If unspecified, no scopes are
+ // added, unless Cloud Logging or Cloud Monitoring are enabled, in which
+ // case their required scopes will be added.
OauthScopes []string `json:"oauthScopes,omitempty"`
// ForceSendFields is a list of field names (e.g. "DiskSizeGb") to
@@ -425,9 +569,13 @@ func (s *NodeConfig) MarshalJSON() ([]byte, error) {
return gensupport.MarshalJSON(raw, s.ForceSendFields)
}
-// Operation: Defines the operation resource. All fields are output
+// Operation: This operation resource represents operations that may
+// have happened or are happening on the cluster. All fields are output
// only.
type Operation struct {
+ // Detail: Detailed operation progress, if available.
+ Detail string `json:"detail,omitempty"`
+
// Name: The server-assigned ID for the operation.
Name string `json:"name,omitempty"`
@@ -440,6 +588,9 @@ type Operation struct {
// "UPGRADE_MASTER"
// "UPGRADE_NODES"
// "REPAIR_CLUSTER"
+ // "UPDATE_CLUSTER"
+ // "CREATE_NODE_POOL"
+ // "DELETE_NODE_POOL"
OperationType string `json:"operationType,omitempty"`
// SelfLink: Server-defined URL for the resource.
@@ -470,7 +621,7 @@ type Operation struct {
// server.
googleapi.ServerResponse `json:"-"`
- // ForceSendFields is a list of field names (e.g. "Name") to
+ // ForceSendFields is a list of field names (e.g. "Detail") to
// unconditionally include in API requests. By default, fields with
// empty values are omitted from API requests. However, any non-pointer,
// non-interface field appearing in ForceSendFields will be sent to the
@@ -485,9 +636,10 @@ func (s *Operation) MarshalJSON() ([]byte, error) {
return gensupport.MarshalJSON(raw, s.ForceSendFields)
}
-// ServerConfig: Container Engine Server configuration.
+// ServerConfig: Container Engine service configuration.
type ServerConfig struct {
- // DefaultClusterVersion: What version this server deploys by default.
+ // DefaultClusterVersion: Version of Kubernetes the service deploys by
+ // default.
DefaultClusterVersion string `json:"defaultClusterVersion,omitempty"`
// ValidNodeVersions: List of valid node upgrade target versions.
@@ -513,7 +665,8 @@ func (s *ServerConfig) MarshalJSON() ([]byte, error) {
return gensupport.MarshalJSON(raw, s.ForceSendFields)
}
-// UpdateClusterRequest: UpdateClusterRequest updates a cluster.
+// UpdateClusterRequest: UpdateClusterRequest updates the settings of a
+// cluster.
type UpdateClusterRequest struct {
// Update: A description of the update.
Update *ClusterUpdate `json:"update,omitempty"`
@@ -645,13 +798,13 @@ func (c *ProjectsZonesGetServerconfigCall) Do(opts ...googleapi.CallOption) (*Se
// ],
// "parameters": {
// "projectId": {
- // "description": "The Google Developers Console [project ID or project number](https://developers.google.com/console/help/new/#projectnumber).",
+ // "description": "The Google Developers Console [project ID or project number](https://support.google.com/cloud/answer/6158840).",
// "location": "path",
// "required": true,
// "type": "string"
// },
// "zone": {
- // "description": "The name of the Google Compute Engine [zone](/compute/docs/zones#available) to return operations for, or \"-\" for all zones.",
+ // "description": "The name of the Google Compute Engine [zone](/compute/docs/zones#available) to return operations for.",
// "location": "path",
// "required": true,
// "type": "string"
@@ -680,9 +833,9 @@ type ProjectsZonesClustersCreateCall struct {
}
// Create: Creates a cluster, consisting of the specified number and
-// type of Google Compute Engine instances, plus a Kubernetes master
-// endpoint. By default, the cluster is created in the project's
-// [default network](/compute/docs/networking#networks_1). One firewall
+// type of Google Compute Engine instances. By default, the cluster is
+// created in the project's [default
+// network](/compute/docs/networks-and-firewalls#networks). One firewall
// is added for the cluster. After cluster creation, the cluster creates
// routes for each node to allow the containers on that node to
// communicate with all other instances in the cluster. Finally, an
@@ -772,7 +925,7 @@ func (c *ProjectsZonesClustersCreateCall) Do(opts ...googleapi.CallOption) (*Ope
}
return ret, nil
// {
- // "description": "Creates a cluster, consisting of the specified number and type of Google Compute Engine instances, plus a Kubernetes master endpoint. By default, the cluster is created in the project's [default network](/compute/docs/networking#networks_1). One firewall is added for the cluster. After cluster creation, the cluster creates routes for each node to allow the containers on that node to communicate with all other instances in the cluster. Finally, an entry is added to the project's global metadata indicating which CIDR range is being used by the cluster.",
+ // "description": "Creates a cluster, consisting of the specified number and type of Google Compute Engine instances. By default, the cluster is created in the project's [default network](/compute/docs/networks-and-firewalls#networks). One firewall is added for the cluster. After cluster creation, the cluster creates routes for each node to allow the containers on that node to communicate with all other instances in the cluster. Finally, an entry is added to the project's global metadata indicating which CIDR range is being used by the cluster.",
// "httpMethod": "POST",
// "id": "container.projects.zones.clusters.create",
// "parameterOrder": [
@@ -781,7 +934,7 @@ func (c *ProjectsZonesClustersCreateCall) Do(opts ...googleapi.CallOption) (*Ope
// ],
// "parameters": {
// "projectId": {
- // "description": "The Google Developers Console [project ID or project number](https://developers.google.com/console/help/new/#projectnumber).",
+ // "description": "The Google Developers Console [project ID or project number](https://support.google.com/cloud/answer/6158840).",
// "location": "path",
// "required": true,
// "type": "string"
@@ -820,7 +973,10 @@ type ProjectsZonesClustersDeleteCall struct {
// Delete: Deletes the cluster, including the Kubernetes endpoint and
// all worker nodes. Firewalls and routes that were configured during
-// cluster creation are also deleted.
+// cluster creation are also deleted. Other Google Compute Engine
+// resources that might be in use by the cluster (e.g. load balancer
+// resources) will not be deleted if they weren't present at the initial
+// create time.
func (r *ProjectsZonesClustersService) Delete(projectId string, zone string, clusterId string) *ProjectsZonesClustersDeleteCall {
c := &ProjectsZonesClustersDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.projectId = projectId
@@ -900,7 +1056,7 @@ func (c *ProjectsZonesClustersDeleteCall) Do(opts ...googleapi.CallOption) (*Ope
}
return ret, nil
// {
- // "description": "Deletes the cluster, including the Kubernetes endpoint and all worker nodes. Firewalls and routes that were configured during cluster creation are also deleted.",
+ // "description": "Deletes the cluster, including the Kubernetes endpoint and all worker nodes. Firewalls and routes that were configured during cluster creation are also deleted. Other Google Compute Engine resources that might be in use by the cluster (e.g. load balancer resources) will not be deleted if they weren't present at the initial create time.",
// "httpMethod": "DELETE",
// "id": "container.projects.zones.clusters.delete",
// "parameterOrder": [
@@ -916,7 +1072,7 @@ func (c *ProjectsZonesClustersDeleteCall) Do(opts ...googleapi.CallOption) (*Ope
// "type": "string"
// },
// "projectId": {
- // "description": "The Google Developers Console [project ID or project number](https://developers.google.com/console/help/new/#projectnumber).",
+ // "description": "The Google Developers Console [project ID or project number](https://support.google.com/cloud/answer/6158840).",
// "location": "path",
// "required": true,
// "type": "string"
@@ -951,7 +1107,7 @@ type ProjectsZonesClustersGetCall struct {
ctx_ context.Context
}
-// Get: Gets a specific cluster.
+// Get: Gets the details of a specific cluster.
func (r *ProjectsZonesClustersService) Get(projectId string, zone string, clusterId string) *ProjectsZonesClustersGetCall {
c := &ProjectsZonesClustersGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.projectId = projectId
@@ -1044,7 +1200,7 @@ func (c *ProjectsZonesClustersGetCall) Do(opts ...googleapi.CallOption) (*Cluste
}
return ret, nil
// {
- // "description": "Gets a specific cluster.",
+ // "description": "Gets the details of a specific cluster.",
// "httpMethod": "GET",
// "id": "container.projects.zones.clusters.get",
// "parameterOrder": [
@@ -1060,7 +1216,7 @@ func (c *ProjectsZonesClustersGetCall) Do(opts ...googleapi.CallOption) (*Cluste
// "type": "string"
// },
// "projectId": {
- // "description": "The Google Developers Console [project ID or project number](https://developers.google.com/console/help/new/#projectnumber).",
+ // "description": "The Google Developers Console [project ID or project number](https://support.google.com/cloud/answer/6158840).",
// "location": "path",
// "required": true,
// "type": "string"
@@ -1195,7 +1351,7 @@ func (c *ProjectsZonesClustersListCall) Do(opts ...googleapi.CallOption) (*ListC
// ],
// "parameters": {
// "projectId": {
- // "description": "The Google Developers Console [project ID or project number](https://developers.google.com/console/help/new/#projectnumber).",
+ // "description": "The Google Developers Console [project ID or project number](https://support.google.com/cloud/answer/6158840).",
// "location": "path",
// "required": true,
// "type": "string"
@@ -1230,7 +1386,7 @@ type ProjectsZonesClustersUpdateCall struct {
ctx_ context.Context
}
-// Update: Update settings of a specific cluster.
+// Update: Updates the settings of a specific cluster.
func (r *ProjectsZonesClustersService) Update(projectId string, zone string, clusterId string, updateclusterrequest *UpdateClusterRequest) *ProjectsZonesClustersUpdateCall {
c := &ProjectsZonesClustersUpdateCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.projectId = projectId
@@ -1317,7 +1473,7 @@ func (c *ProjectsZonesClustersUpdateCall) Do(opts ...googleapi.CallOption) (*Ope
}
return ret, nil
// {
- // "description": "Update settings of a specific cluster.",
+ // "description": "Updates the settings of a specific cluster.",
// "httpMethod": "PUT",
// "id": "container.projects.zones.clusters.update",
// "parameterOrder": [
@@ -1333,7 +1489,7 @@ func (c *ProjectsZonesClustersUpdateCall) Do(opts ...googleapi.CallOption) (*Ope
// "type": "string"
// },
// "projectId": {
- // "description": "The Google Developers Console [project ID or project number](https://developers.google.com/console/help/new/#projectnumber).",
+ // "description": "The Google Developers Console [project ID or project number](https://support.google.com/cloud/answer/6158840).",
// "location": "path",
// "required": true,
// "type": "string"
@@ -1480,7 +1636,7 @@ func (c *ProjectsZonesOperationsGetCall) Do(opts ...googleapi.CallOption) (*Oper
// "type": "string"
// },
// "projectId": {
- // "description": "The Google Developers Console [project ID or project number](https://developers.google.com/console/help/new/#projectnumber).",
+ // "description": "The Google Developers Console [project ID or project number](https://support.google.com/cloud/answer/6158840).",
// "location": "path",
// "required": true,
// "type": "string"
@@ -1615,13 +1771,13 @@ func (c *ProjectsZonesOperationsListCall) Do(opts ...googleapi.CallOption) (*Lis
// ],
// "parameters": {
// "projectId": {
- // "description": "The Google Developers Console [project ID or project number](https://developers.google.com/console/help/new/#projectnumber).",
+ // "description": "The Google Developers Console [project ID or project number](https://support.google.com/cloud/answer/6158840).",
// "location": "path",
// "required": true,
// "type": "string"
// },
// "zone": {
- // "description": "The name of the Google Compute Engine [zone](/compute/docs/zones#available) to return operations for, or \"-\" for all zones.",
+ // "description": "The name of the Google Compute Engine [zone](/compute/docs/zones#available) to return operations for, or `-` for all zones.",
// "location": "path",
// "required": true,
// "type": "string"
diff --git a/vendor/google.golang.org/api/dns/v1/dns-api.json b/vendor/google.golang.org/api/dns/v1/dns-api.json
index a365334552fe..6c0a65b5cd6e 100644
--- a/vendor/google.golang.org/api/dns/v1/dns-api.json
+++ b/vendor/google.golang.org/api/dns/v1/dns-api.json
@@ -1,11 +1,11 @@
{
"kind": "discovery#restDescription",
- "etag": "\"ye6orv2F-1npMW3u9suM3a7C5Bo/zoueaaoAZQGFJohVmI5skQDTvqg\"",
+ "etag": "\"bRFOOrZKfO9LweMbPqu0kcu6De8/YvTvEjSR_sXxd8XvUihYx8e9Xjo\"",
"discoveryVersion": "v1",
"id": "dns:v1",
"name": "dns",
"version": "v1",
- "revision": "20150729",
+ "revision": "20160209",
"title": "Google Cloud DNS API",
"description": "The Google Cloud DNS API provides services for configuring and serving authoritative DNS records.",
"ownerDomain": "google.com",
@@ -115,7 +115,7 @@
},
"startTime": {
"type": "string",
- "description": "The time that this operation was started by the server. This is in RFC3339 text format."
+ "description": "The time that this operation was started by the server (output only). This is in RFC3339 text format."
},
"status": {
"type": "string",
diff --git a/vendor/google.golang.org/api/dns/v1/dns-gen.go b/vendor/google.golang.org/api/dns/v1/dns-gen.go
index 0d74b6f3de57..d9d98592d5c0 100644
--- a/vendor/google.golang.org/api/dns/v1/dns-gen.go
+++ b/vendor/google.golang.org/api/dns/v1/dns-gen.go
@@ -146,8 +146,8 @@ type Change struct {
// string "dns#change".
Kind string `json:"kind,omitempty"`
- // StartTime: The time that this operation was started by the server.
- // This is in RFC3339 text format.
+ // StartTime: The time that this operation was started by the server
+ // (output only). This is in RFC3339 text format.
StartTime string `json:"startTime,omitempty"`
// Status: Status of the operation (output only).
diff --git a/vendor/google.golang.org/api/gensupport/backoff.go b/vendor/google.golang.org/api/gensupport/backoff.go
index 0cc3e406b6cc..1356140472a6 100644
--- a/vendor/google.golang.org/api/gensupport/backoff.go
+++ b/vendor/google.golang.org/api/gensupport/backoff.go
@@ -4,31 +4,43 @@
package gensupport
-import "time"
+import (
+ "math/rand"
+ "time"
+)
type BackoffStrategy interface {
- // Pause returns the duration of the next pause before a retry should be attempted.
- Pause() time.Duration
+ // Pause returns the duration of the next pause and true if the operation should be
+ // retried, or false if no further retries should be attempted.
+ Pause() (time.Duration, bool)
// Reset restores the strategy to its initial state.
Reset()
}
+// ExponentialBackoff performs exponential backoff as per https://en.wikipedia.org/wiki/Exponential_backoff.
+// The initial pause time is given by Base.
+// Once the total pause time exceeds Max, Pause will indicate no further retries.
type ExponentialBackoff struct {
- BasePause time.Duration
- nextPause time.Duration
+ Base time.Duration
+ Max time.Duration
+ total time.Duration
+ n uint
}
-func (eb *ExponentialBackoff) Pause() time.Duration {
- if eb.nextPause == 0 {
- eb.Reset()
+func (eb *ExponentialBackoff) Pause() (time.Duration, bool) {
+ if eb.total > eb.Max {
+ return 0, false
}
- d := eb.nextPause
- eb.nextPause *= 2
- return d
+ // The next pause is selected from randomly from [0, 2^n * Base).
+ d := time.Duration(rand.Int63n((1 << eb.n) * int64(eb.Base)))
+ eb.total += d
+ eb.n++
+ return d, true
}
func (eb *ExponentialBackoff) Reset() {
- eb.nextPause = eb.BasePause
+ eb.n = 0
+ eb.total = 0
}
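
The reworked `BackoffStrategy` contract returns a second value from `Pause` indicating whether another retry should be attempted at all. A standalone sketch of a caller honoring that contract follows; the `doRequest` helper is hypothetical and exists only for illustration.

```go
package main

import (
	"errors"
	"log"
	"time"

	"google.golang.org/api/gensupport"
)

// doRequest is a hypothetical fallible operation used only for illustration.
func doRequest(attempt int) error {
	if attempt < 3 {
		return errors.New("transient failure")
	}
	return nil
}

func main() {
	b := &gensupport.ExponentialBackoff{
		Base: 250 * time.Millisecond, // pauses are drawn from [0, 2^n * Base)
		Max:  16 * time.Second,       // once total pause time exceeds Max, Pause reports false
	}

	for attempt := 0; ; attempt++ {
		if err := doRequest(attempt); err == nil {
			log.Println("succeeded after", attempt, "failed attempts")
			return
		}
		pause, retry := b.Pause()
		if !retry {
			log.Fatal("giving up: retry budget exhausted")
		}
		time.Sleep(pause)
	}
}
```
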
diff --git a/vendor/google.golang.org/api/gensupport/media.go b/vendor/google.golang.org/api/gensupport/media.go
index 685e08d5afc2..817f46f5d274 100644
--- a/vendor/google.golang.org/api/gensupport/media.go
+++ b/vendor/google.golang.org/api/gensupport/media.go
@@ -165,7 +165,9 @@ func CombineBodyMedia(body io.Reader, bodyContentType string, media io.Reader, m
func typeHeader(contentType string) textproto.MIMEHeader {
h := make(textproto.MIMEHeader)
- h.Set("Content-Type", contentType)
+ if contentType != "" {
+ h.Set("Content-Type", contentType)
+ }
return h
}
diff --git a/vendor/google.golang.org/api/gensupport/resumable.go b/vendor/google.golang.org/api/gensupport/resumable.go
index adefc8897416..b3e774aa497a 100644
--- a/vendor/google.golang.org/api/gensupport/resumable.go
+++ b/vendor/google.golang.org/api/gensupport/resumable.go
@@ -16,14 +16,16 @@ import (
)
const (
- // statusResumeIncomplete is the code returned by the Google uploader when the transfer is not yet complete.
+ // statusResumeIncomplete is the code returned by the Google uploader
+ // when the transfer is not yet complete.
statusResumeIncomplete = 308
-)
-// DefaultBackoffStrategy returns a default strategy to use for retrying failed upload requests.
-func DefaultBackoffStrategy() BackoffStrategy {
- return &ExponentialBackoff{BasePause: time.Second}
-}
+ // statusTooManyRequests is returned by the storage API if the
+ // per-project limits have been temporarily exceeded. The request
+ // should be retried.
+ // https://cloud.google.com/storage/docs/json_api/v1/status-codes#standardcodes
+ statusTooManyRequests = 429
+)
// ResumableUpload is used by the generated APIs to provide resumable uploads.
// It is not used by developers directly.
@@ -130,7 +132,8 @@ func contextDone(ctx context.Context) bool {
}
// Upload starts the process of a resumable upload with a cancellable context.
-// It retries indefinitely (using exponential backoff) until cancelled.
+// It retries using the provided backoff strategy until cancelled or the
+// strategy indicates to stop retrying.
// It is called from the auto-generated API code and is not visible to the user.
// rx is private to the auto-generated API code.
// Exactly one of resp or err will be nil. If resp is non-nil, the caller must call resp.Body.Close.
@@ -153,6 +156,33 @@ func (rx *ResumableUpload) Upload(ctx context.Context) (resp *http.Response, err
}
resp, err = rx.transferChunk(ctx)
+
+ var status int
+ if resp != nil {
+ status = resp.StatusCode
+ }
+
+ // Check if we should retry the request.
+ if shouldRetry(status, err) {
+ var retry bool
+ pause, retry = backoff.Pause()
+ if retry {
+ if resp != nil && resp.Body != nil {
+ resp.Body.Close()
+ }
+ continue
+ }
+ }
+
+ // If the chunk was uploaded successfully, but there's still
+ // more to go, upload the next chunk without any delay.
+ if status == statusResumeIncomplete {
+ pause = 0
+ backoff.Reset()
+ resp.Body.Close()
+ continue
+ }
+
// It's possible for err and resp to both be non-nil here, but we expose a simpler
// contract to our callers: exactly one of resp and err will be non-nil. This means
// that any response body must be closed here before returning a non-nil error.
@@ -162,16 +192,7 @@ func (rx *ResumableUpload) Upload(ctx context.Context) (resp *http.Response, err
}
return nil, err
}
- if resp.StatusCode == http.StatusCreated || resp.StatusCode == http.StatusOK {
- return resp, nil
- }
- resp.Body.Close()
- if resp.StatusCode == statusResumeIncomplete {
- pause = 0
- backoff.Reset()
- } else {
- pause = backoff.Pause()
- }
+ return resp, nil
}
}
diff --git a/vendor/google.golang.org/api/gensupport/retry.go b/vendor/google.golang.org/api/gensupport/retry.go
new file mode 100644
index 000000000000..7f83d1da99fa
--- /dev/null
+++ b/vendor/google.golang.org/api/gensupport/retry.go
@@ -0,0 +1,77 @@
+package gensupport
+
+import (
+ "io"
+ "net"
+ "net/http"
+ "time"
+
+ "golang.org/x/net/context"
+)
+
+// Retry invokes the given function, retrying it multiple times if the connection failed or
+// the HTTP status response indicates the request should be attempted again. ctx may be nil.
+func Retry(ctx context.Context, f func() (*http.Response, error), backoff BackoffStrategy) (*http.Response, error) {
+ for {
+ resp, err := f()
+
+ var status int
+ if resp != nil {
+ status = resp.StatusCode
+ }
+
+ // Return if we shouldn't retry.
+ pause, retry := backoff.Pause()
+ if !shouldRetry(status, err) || !retry {
+ return resp, err
+ }
+
+ // Ensure the response body is closed, if any.
+ if resp != nil && resp.Body != nil {
+ resp.Body.Close()
+ }
+
+ // Pause, but still listen to ctx.Done if context is not nil.
+ var done <-chan struct{}
+ if ctx != nil {
+ done = ctx.Done()
+ }
+ select {
+ case <-done:
+ return nil, ctx.Err()
+ case <-time.After(pause):
+ }
+ }
+}
+
+// DefaultBackoffStrategy returns a default strategy to use for retrying failed upload requests.
+func DefaultBackoffStrategy() BackoffStrategy {
+ return &ExponentialBackoff{
+ Base: 250 * time.Millisecond,
+ Max: 16 * time.Second,
+ }
+}
+
+// shouldRetry returns true if the HTTP response / error indicates that the
+// request should be attempted again.
+func shouldRetry(status int, err error) bool {
+ // Retry for 5xx response codes.
+ if 500 <= status && status < 600 {
+ return true
+ }
+
+ // Retry on statusTooManyRequests.
+ if status == statusTooManyRequests {
+ return true
+ }
+
+ // Retry on unexpected EOFs and temporary network errors.
+ if err == io.ErrUnexpectedEOF {
+ return true
+ }
+ if err, ok := err.(net.Error); ok {
+ return err.Temporary()
+ }
+
+ return false
+}
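
`Retry` wraps an arbitrary request function with the `shouldRetry`/`BackoffStrategy` logic above. A hedged usage sketch, assuming the default strategy and using a placeholder URL:

```go
package main

import (
	"log"
	"net/http"

	"golang.org/x/net/context"
	"google.golang.org/api/gensupport"
)

func main() {
	ctx := context.Background()

	// Retry re-issues the request on 5xx, 429, unexpected EOFs and temporary
	// network errors, pausing per DefaultBackoffStrategy (250ms base, 16s cap).
	resp, err := gensupport.Retry(ctx, func() (*http.Response, error) {
		// Placeholder endpoint for illustration only.
		return http.Get("https://www.googleapis.com/discovery/v1/apis")
	}, gensupport.DefaultBackoffStrategy())
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("status:", resp.Status)
}
```
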
diff --git a/vendor/google.golang.org/api/googleapi/googleapi.go b/vendor/google.golang.org/api/googleapi/googleapi.go
index 8796e3e097b7..03e9acdd80ff 100644
--- a/vendor/google.golang.org/api/googleapi/googleapi.go
+++ b/vendor/google.golang.org/api/googleapi/googleapi.go
@@ -220,9 +220,13 @@ type contentTypeOption string
func (ct contentTypeOption) setOptions(o *MediaOptions) {
o.ContentType = string(ct)
+ if o.ContentType == "" {
+ o.ForceEmptyContentType = true
+ }
}
-// ContentType returns a MediaOption which sets the content type of data to be uploaded.
+// ContentType returns a MediaOption which sets the Content-Type header for media uploads.
+// If ctype is empty, the Content-Type header will be omitted.
func ContentType(ctype string) MediaOption {
return contentTypeOption(ctype)
}
@@ -248,8 +252,10 @@ func ChunkSize(size int) MediaOption {
// MediaOptions stores options for customizing media upload. It is not used by developers directly.
type MediaOptions struct {
- ContentType string
- ChunkSize int
+ ContentType string
+ ForceEmptyContentType bool
+
+ ChunkSize int
}
// ProcessMediaOptions stores options from opts in a MediaOptions.
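
With the `ContentType` change, an explicitly empty ctype now records `ForceEmptyContentType` instead of silently falling back to a default. A small sketch of the behavior via `ProcessMediaOptions` (a call developers do not normally make themselves):

```go
package main

import (
	"fmt"

	"google.golang.org/api/googleapi"
)

func main() {
	// An empty Content-Type no longer means "use the default": it is recorded
	// via ForceEmptyContentType so the header is omitted from the upload.
	opts := googleapi.ProcessMediaOptions([]googleapi.MediaOption{
		googleapi.ContentType(""),
	})
	fmt.Println(opts.ContentType, opts.ForceEmptyContentType) // "" true
}
```
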
diff --git a/vendor/google.golang.org/api/googleapi/internal/uritemplates/uritemplates.go b/vendor/google.golang.org/api/googleapi/internal/uritemplates/uritemplates.go
index 8a84813fe52e..7c103ba1386d 100644
--- a/vendor/google.golang.org/api/googleapi/internal/uritemplates/uritemplates.go
+++ b/vendor/google.golang.org/api/googleapi/internal/uritemplates/uritemplates.go
@@ -2,26 +2,15 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Package uritemplates is a level 4 implementation of RFC 6570 (URI
+// Package uritemplates is a level 3 implementation of RFC 6570 (URI
// Template, http://tools.ietf.org/html/rfc6570).
-//
-// To use uritemplates, parse a template string and expand it with a value
-// map:
-//
-// template, _ := uritemplates.Parse("https://api.github.com/repos{/user,repo}")
-// values := make(map[string]interface{})
-// values["user"] = "jtacoma"
-// values["repo"] = "uritemplates"
-// expanded, _ := template.ExpandString(values)
-// fmt.Printf(expanded)
-//
+// uritemplates does not support composite values (in Go: slices or maps)
+// and so does not qualify as a level 4 implementation.
package uritemplates
import (
"bytes"
"errors"
- "fmt"
- "reflect"
"regexp"
"strconv"
"strings"
@@ -45,52 +34,47 @@ func pctEncode(src []byte) []byte {
return dst
}
-func escape(s string, allowReserved bool) (escaped string) {
+func escape(s string, allowReserved bool) string {
if allowReserved {
- escaped = string(reserved.ReplaceAllFunc([]byte(s), pctEncode))
- } else {
- escaped = string(unreserved.ReplaceAllFunc([]byte(s), pctEncode))
+ return string(reserved.ReplaceAllFunc([]byte(s), pctEncode))
}
- return escaped
+ return string(unreserved.ReplaceAllFunc([]byte(s), pctEncode))
}
-// A UriTemplate is a parsed representation of a URI template.
-type UriTemplate struct {
+// A uriTemplate is a parsed representation of a URI template.
+type uriTemplate struct {
raw string
parts []templatePart
}
-// Parse parses a URI template string into a UriTemplate object.
-func Parse(rawtemplate string) (template *UriTemplate, err error) {
- template = new(UriTemplate)
- template.raw = rawtemplate
- split := strings.Split(rawtemplate, "{")
- template.parts = make([]templatePart, len(split)*2-1)
+// parse parses a URI template string into a uriTemplate object.
+func parse(rawTemplate string) (*uriTemplate, error) {
+ split := strings.Split(rawTemplate, "{")
+ parts := make([]templatePart, len(split)*2-1)
for i, s := range split {
if i == 0 {
if strings.Contains(s, "}") {
- err = errors.New("unexpected }")
- break
- }
- template.parts[i].raw = s
- } else {
- subsplit := strings.Split(s, "}")
- if len(subsplit) != 2 {
- err = errors.New("malformed template")
- break
+ return nil, errors.New("unexpected }")
}
- expression := subsplit[0]
- template.parts[i*2-1], err = parseExpression(expression)
- if err != nil {
- break
- }
- template.parts[i*2].raw = subsplit[1]
+ parts[i].raw = s
+ continue
}
+ subsplit := strings.Split(s, "}")
+ if len(subsplit) != 2 {
+ return nil, errors.New("malformed template")
+ }
+ expression := subsplit[0]
+ var err error
+ parts[i*2-1], err = parseExpression(expression)
+ if err != nil {
+ return nil, err
+ }
+ parts[i*2].raw = subsplit[1]
}
- if err != nil {
- template = nil
- }
- return template, err
+ return &uriTemplate{
+ raw: rawTemplate,
+ parts: parts,
+ }, nil
}
type templatePart struct {
@@ -160,6 +144,8 @@ func parseExpression(expression string) (result templatePart, err error) {
}
func parseTerm(term string) (result templateTerm, err error) {
+ // TODO(djd): Remove "*" suffix parsing once we check that no APIs have
+ // mistakenly used that attribute.
if strings.HasSuffix(term, "*") {
result.explode = true
term = term[:len(term)-1]
@@ -185,175 +171,50 @@ func parseTerm(term string) (result templateTerm, err error) {
}
// Expand expands a URI template with a set of values to produce a string.
-func (self *UriTemplate) Expand(value interface{}) (string, error) {
- values, ismap := value.(map[string]interface{})
- if !ismap {
- if m, ismap := struct2map(value); !ismap {
- return "", errors.New("expected map[string]interface{}, struct, or pointer to struct.")
- } else {
- return self.Expand(m)
- }
- }
+func (t *uriTemplate) Expand(values map[string]string) string {
var buf bytes.Buffer
- for _, p := range self.parts {
- err := p.expand(&buf, values)
- if err != nil {
- return "", err
- }
+ for _, p := range t.parts {
+ p.expand(&buf, values)
}
- return buf.String(), nil
+ return buf.String()
}
-func (self *templatePart) expand(buf *bytes.Buffer, values map[string]interface{}) error {
- if len(self.raw) > 0 {
- buf.WriteString(self.raw)
- return nil
+func (tp *templatePart) expand(buf *bytes.Buffer, values map[string]string) {
+ if len(tp.raw) > 0 {
+ buf.WriteString(tp.raw)
+ return
}
- var zeroLen = buf.Len()
- buf.WriteString(self.first)
- var firstLen = buf.Len()
- for _, term := range self.terms {
+ var first = true
+ for _, term := range tp.terms {
value, exists := values[term.name]
if !exists {
continue
}
- if buf.Len() != firstLen {
- buf.WriteString(self.sep)
- }
- switch v := value.(type) {
- case string:
- self.expandString(buf, term, v)
- case []interface{}:
- self.expandArray(buf, term, v)
- case map[string]interface{}:
- if term.truncate > 0 {
- return errors.New("cannot truncate a map expansion")
- }
- self.expandMap(buf, term, v)
- default:
- if m, ismap := struct2map(value); ismap {
- if term.truncate > 0 {
- return errors.New("cannot truncate a map expansion")
- }
- self.expandMap(buf, term, m)
- } else {
- str := fmt.Sprintf("%v", value)
- self.expandString(buf, term, str)
- }
+ if first {
+ buf.WriteString(tp.first)
+ first = false
+ } else {
+ buf.WriteString(tp.sep)
}
+ tp.expandString(buf, term, value)
}
- if buf.Len() == firstLen {
- original := buf.Bytes()[:zeroLen]
- buf.Reset()
- buf.Write(original)
- }
- return nil
}
-func (self *templatePart) expandName(buf *bytes.Buffer, name string, empty bool) {
- if self.named {
+func (tp *templatePart) expandName(buf *bytes.Buffer, name string, empty bool) {
+ if tp.named {
buf.WriteString(name)
if empty {
- buf.WriteString(self.ifemp)
+ buf.WriteString(tp.ifemp)
} else {
buf.WriteString("=")
}
}
}
-func (self *templatePart) expandString(buf *bytes.Buffer, t templateTerm, s string) {
+func (tp *templatePart) expandString(buf *bytes.Buffer, t templateTerm, s string) {
if len(s) > t.truncate && t.truncate > 0 {
s = s[:t.truncate]
}
- self.expandName(buf, t.name, len(s) == 0)
- buf.WriteString(escape(s, self.allowReserved))
-}
-
-func (self *templatePart) expandArray(buf *bytes.Buffer, t templateTerm, a []interface{}) {
- if len(a) == 0 {
- return
- } else if !t.explode {
- self.expandName(buf, t.name, false)
- }
- for i, value := range a {
- if t.explode && i > 0 {
- buf.WriteString(self.sep)
- } else if i > 0 {
- buf.WriteString(",")
- }
- var s string
- switch v := value.(type) {
- case string:
- s = v
- default:
- s = fmt.Sprintf("%v", v)
- }
- if len(s) > t.truncate && t.truncate > 0 {
- s = s[:t.truncate]
- }
- if self.named && t.explode {
- self.expandName(buf, t.name, len(s) == 0)
- }
- buf.WriteString(escape(s, self.allowReserved))
- }
-}
-
-func (self *templatePart) expandMap(buf *bytes.Buffer, t templateTerm, m map[string]interface{}) {
- if len(m) == 0 {
- return
- }
- if !t.explode {
- self.expandName(buf, t.name, len(m) == 0)
- }
- var firstLen = buf.Len()
- for k, value := range m {
- if firstLen != buf.Len() {
- if t.explode {
- buf.WriteString(self.sep)
- } else {
- buf.WriteString(",")
- }
- }
- var s string
- switch v := value.(type) {
- case string:
- s = v
- default:
- s = fmt.Sprintf("%v", v)
- }
- if t.explode {
- buf.WriteString(escape(k, self.allowReserved))
- buf.WriteRune('=')
- buf.WriteString(escape(s, self.allowReserved))
- } else {
- buf.WriteString(escape(k, self.allowReserved))
- buf.WriteRune(',')
- buf.WriteString(escape(s, self.allowReserved))
- }
- }
-}
-
-func struct2map(v interface{}) (map[string]interface{}, bool) {
- value := reflect.ValueOf(v)
- switch value.Type().Kind() {
- case reflect.Ptr:
- return struct2map(value.Elem().Interface())
- case reflect.Struct:
- m := make(map[string]interface{})
- for i := 0; i < value.NumField(); i++ {
- tag := value.Type().Field(i).Tag
- var name string
- if strings.Contains(string(tag), ":") {
- name = tag.Get("uri")
- } else {
- name = strings.TrimSpace(string(tag))
- }
- if len(name) == 0 {
- name = value.Type().Field(i).Name
- }
- m[name] = value.Field(i).Interface()
- }
- return m, true
- }
- return nil, false
+ tp.expandName(buf, t.name, len(s) == 0)
+ buf.WriteString(escape(s, tp.allowReserved))
}
diff --git a/vendor/google.golang.org/api/googleapi/internal/uritemplates/utils.go b/vendor/google.golang.org/api/googleapi/internal/uritemplates/utils.go
index 399ef4623698..eff260a6925f 100644
--- a/vendor/google.golang.org/api/googleapi/internal/uritemplates/utils.go
+++ b/vendor/google.golang.org/api/googleapi/internal/uritemplates/utils.go
@@ -1,13 +1,13 @@
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
package uritemplates
-func Expand(path string, expansions map[string]string) (string, error) {
- template, err := Parse(path)
+func Expand(path string, values map[string]string) (string, error) {
+ template, err := parse(path)
if err != nil {
return "", err
}
- values := make(map[string]interface{})
- for k, v := range expansions {
- values[k] = v
- }
- return template.Expand(values)
+ return template.Expand(values), nil
}
diff --git a/vendor/google.golang.org/api/sqladmin/v1beta4/sqladmin-api.json b/vendor/google.golang.org/api/sqladmin/v1beta4/sqladmin-api.json
index 6c371985f58e..c6672c2f106e 100644
--- a/vendor/google.golang.org/api/sqladmin/v1beta4/sqladmin-api.json
+++ b/vendor/google.golang.org/api/sqladmin/v1beta4/sqladmin-api.json
@@ -1,12 +1,12 @@
{
"kind": "discovery#restDescription",
- "etag": "\"ye6orv2F-1npMW3u9suM3a7C5Bo/CCmkz9wJbxqIZMDlSyPUsI6BFWQ\"",
+ "etag": "\"bRFOOrZKfO9LweMbPqu0kcu6De8/OLx7eYKI1NQCi-ys96oQ7ZJUHE8\"",
"discoveryVersion": "v1",
"id": "sqladmin:v1beta4",
"name": "sqladmin",
"canonicalName": "SQL Admin",
"version": "v1beta4",
- "revision": "20151117",
+ "revision": "20160222",
"title": "Cloud SQL Administration API",
"description": "API for Cloud SQL database instance management.",
"ownerDomain": "google.com",
@@ -321,14 +321,18 @@
"type": "object",
"description": "A Cloud SQL instance resource.",
"properties": {
+ "backendType": {
+ "type": "string",
+ "description": "FIRST_GEN: Basic Cloud SQL instance that runs in a Google-managed container.\nSECOND_GEN: A newer Cloud SQL backend that runs in a Compute Engine VM.\nEXTERNAL: A MySQL server that is not managed by Google."
+ },
"currentDiskSize": {
"type": "string",
- "description": "The current disk usage of the instance in bytes.",
+ "description": "The current disk usage of the instance in bytes. This property has been deprecated. Users should use the \"cloudsql.googleapis.com/database/disk/bytes_used\" metric in Cloud Monitoring API instead. Please see https://groups.google.com/d/msg/google-cloud-sql-announce/I_7-F9EBhT0/BtvFtdFeAgAJ for details.",
"format": "int64"
},
"databaseVersion": {
"type": "string",
- "description": "The database engine type and version. Can be MYSQL_5_5 or MYSQL_5_6. Defaults to MYSQL_5_5. The databaseVersion can not be changed after instance creation."
+ "description": "The database engine type and version. Can be MYSQL_5_5 or MYSQL_5_6. Defaults to MYSQL_5_6. The databaseVersion can not be changed after instance creation."
},
"etag": {
"type": "string",
@@ -336,13 +340,15 @@
},
"failoverReplica": {
"type": "object",
- "description": "The name and status of the failover replica. Only applies to Second Generation instances.",
+ "description": "The name and status of the failover replica. This property is applicable only to Second Generation instances.",
"properties": {
"available": {
- "type": "boolean"
+ "type": "boolean",
+          "description": "The availability status of the failover replica. A false status indicates that the failover replica is out of sync. The master can only failover to the failover replica when the status is true."
},
"name": {
- "type": "string"
+ "type": "string",
+ "description": "The name of the failover replica."
}
}
},
@@ -359,7 +365,7 @@
},
"ipv6Address": {
"type": "string",
- "description": "The IPv6 address assigned to the instance."
+ "description": "The IPv6 address assigned to the instance. This property is applicable only to First Generation instances."
},
"kind": {
"type": "string",
@@ -394,7 +400,7 @@
},
"region": {
"type": "string",
- "description": "The geographical region. Can be us-central, asia-east1 or europe-west1. Defaults to us-central. The region can not be changed after instance creation."
+ "description": "The geographical region. Can be us-central (FIRST_GEN instances only), us-central1 (SECOND_GEN instances only), asia-east1 or europe-west1. Defaults to us-central or us-central1 depending on the instance type (First Generation or Second Generation). The region can not be changed after instance creation."
},
"replicaConfiguration": {
"$ref": "ReplicaConfiguration",
@@ -417,7 +423,7 @@
},
"serviceAccountEmailAddress": {
"type": "string",
- "description": "The service account email address assigned to the instance."
+ "description": "The service account email address assigned to the instance. This property is applicable only to Second Generation instances."
},
"settings": {
"$ref": "Settings",
@@ -432,6 +438,13 @@
"state": {
"type": "string",
"description": "The current serving state of the Cloud SQL instance. This can be one of the following.\nRUNNABLE: The instance is running, or is ready to run when accessed.\nSUSPENDED: The instance is not available, for example due to problems with billing.\nPENDING_CREATE: The instance is being created.\nMAINTENANCE: The instance is down for maintenance.\nFAILED: The instance creation failed.\nUNKNOWN_STATE: The state of the instance is unknown."
+ },
+ "suspensionReason": {
+ "type": "array",
+ "description": "If the instance state is SUSPENDED, the reason for the suspension.",
+ "items": {
+ "type": "string"
+ }
}
}
},
@@ -1050,11 +1063,11 @@
"properties": {
"activationPolicy": {
"type": "string",
- "description": "The activation policy for this instance. This specifies when the instance should be activated and is applicable only when the instance state is RUNNABLE. This can be one of the following.\nALWAYS: The instance should always be active.\nNEVER: The instance should never be activated.\nON_DEMAND: The instance is activated upon receiving requests."
+ "description": "The activation policy for this instance. This specifies when the instance should be activated and is applicable only when the instance state is RUNNABLE. This can be one of the following.\nALWAYS: The instance should always be active.\nNEVER: The instance should never be activated.\nON_DEMAND: The instance is activated upon receiving requests; only applicable to First Generation instances."
},
"authorizedGaeApplications": {
"type": "array",
- "description": "The App Engine app IDs that can access this instance.",
+ "description": "The App Engine app IDs that can access this instance. This property is only applicable to First Generation instances.",
"items": {
"type": "string"
}
@@ -1065,16 +1078,16 @@
},
"crashSafeReplicationEnabled": {
"type": "boolean",
- "description": "Configuration specific to read replica instances. Indicates whether database flags for crash-safe replication are enabled."
+ "description": "Configuration specific to read replica instances. Indicates whether database flags for crash-safe replication are enabled. This property is only applicable to First Generation instances."
},
"dataDiskSizeGb": {
"type": "string",
- "description": "The size of data disk, in GB. Only supported for 2nd Generation instances. The data disk size minimum is 10GB.",
+ "description": "The size of data disk, in GB. The data disk size minimum is 10GB. This property is only applicable to Second Generation instances.",
"format": "int64"
},
"dataDiskType": {
"type": "string",
- "description": "The type of data disk. Only supported for 2nd Generation instances. The default type is SSD."
+ "description": "The type of data disk. Only supported for Second Generation instances. The default type is PD_SSD. This property is only applicable to Second Generation instances."
},
"databaseFlags": {
"type": "array",
@@ -1089,7 +1102,7 @@
},
"ipConfiguration": {
"$ref": "IpConfiguration",
- "description": "The settings for IP Management. This allows to enable or disable the instance IP and manage which external networks can connect to the instance."
+ "description": "The settings for IP Management. This allows to enable or disable the instance IP and manage which external networks can connect to the instance. The IPv4 address cannot be disabled for Second Generation instances."
},
"kind": {
"type": "string",
@@ -1098,19 +1111,19 @@
},
"locationPreference": {
"$ref": "LocationPreference",
- "description": "The location preference settings. This allows the instance to be located as near as possible to either an App Engine app or GCE zone for better performance."
+ "description": "The location preference settings. This allows the instance to be located as near as possible to either an App Engine app or GCE zone for better performance. App Engine co-location is only applicable to First Generation instances."
},
"maintenanceWindow": {
"$ref": "MaintenanceWindow",
- "description": "The maintenance window for this instance. This specifies when the instance may be restarted for maintenance purposes."
+ "description": "The maintenance window for this instance. This specifies when the instance may be restarted for maintenance purposes. This property is only applicable to Second Generation instances."
},
"pricingPlan": {
"type": "string",
- "description": "The pricing plan for this instance. This can be either PER_USE or PACKAGE."
+ "description": "The pricing plan for this instance. This can be either PER_USE or PACKAGE. Only PER_USE is supported for Second Generation instances."
},
"replicationType": {
"type": "string",
- "description": "The type of replication this instance uses. This can be either ASYNCHRONOUS or SYNCHRONOUS."
+ "description": "The type of replication this instance uses. This can be either ASYNCHRONOUS or SYNCHRONOUS. This property is only applicable to First Generation instances."
},
"settingsVersion": {
"type": "string",
@@ -1319,7 +1332,7 @@
},
"host": {
"type": "string",
- "description": "The host name from which the user can connect. For insert operations, host defaults to an empty string. For update operations, host is specified as part of the request URL. The host name is not mutable with this API."
+ "description": "The host name from which the user can connect. For insert operations, host defaults to an empty string. For update operations, host is specified as part of the request URL. The host name cannot be updated after insertion."
},
"instance": {
"type": "string",
@@ -1499,7 +1512,7 @@
"id": "sql.databases.delete",
"path": "projects/{project}/instances/{instance}/databases/{database}",
"httpMethod": "DELETE",
- "description": "Deletes a resource containing information about a database inside a Cloud SQL instance.",
+ "description": "Deletes a database from a Cloud SQL instance.",
"parameters": {
"database": {
"type": "string",
@@ -1743,7 +1756,7 @@
"id": "sql.instances.clone",
"path": "projects/{project}/instances/{instance}/clone",
"httpMethod": "POST",
- "description": "Creates a Cloud SQL instance as a clone of the source instance.",
+ "description": "Creates a Cloud SQL instance as a clone of the source instance. The API is not ready for Second Generation instances yet.",
"parameters": {
"instance": {
"type": "string",
diff --git a/vendor/google.golang.org/api/sqladmin/v1beta4/sqladmin-gen.go b/vendor/google.golang.org/api/sqladmin/v1beta4/sqladmin-gen.go
index cfe634ce37a8..85a7e7a6d51f 100644
--- a/vendor/google.golang.org/api/sqladmin/v1beta4/sqladmin-gen.go
+++ b/vendor/google.golang.org/api/sqladmin/v1beta4/sqladmin-gen.go
@@ -459,19 +459,31 @@ func (s *DatabaseFlags) MarshalJSON() ([]byte, error) {
// DatabaseInstance: A Cloud SQL instance resource.
type DatabaseInstance struct {
+ // BackendType: FIRST_GEN: Basic Cloud SQL instance that runs in a
+ // Google-managed container.
+ // SECOND_GEN: A newer Cloud SQL backend that runs in a Compute Engine
+ // VM.
+ // EXTERNAL: A MySQL server that is not managed by Google.
+ BackendType string `json:"backendType,omitempty"`
+
// CurrentDiskSize: The current disk usage of the instance in bytes.
+ // This property has been deprecated. Users should use the
+ // "cloudsql.googleapis.com/database/disk/bytes_used" metric in Cloud
+ // Monitoring API instead. Please see
+ // https://groups.google.com/d/msg/google-cloud-sql-announce/I_7-F9EBhT0/BtvFtdFeAgAJ for
+ // details.
CurrentDiskSize int64 `json:"currentDiskSize,omitempty,string"`
// DatabaseVersion: The database engine type and version. Can be
- // MYSQL_5_5 or MYSQL_5_6. Defaults to MYSQL_5_5. The databaseVersion
+ // MYSQL_5_5 or MYSQL_5_6. Defaults to MYSQL_5_6. The databaseVersion
// can not be changed after instance creation.
DatabaseVersion string `json:"databaseVersion,omitempty"`
// Etag: HTTP 1.1 Entity tag for the resource.
Etag string `json:"etag,omitempty"`
- // FailoverReplica: The name and status of the failover replica. Only
- // applies to Second Generation instances.
+ // FailoverReplica: The name and status of the failover replica. This
+ // property is applicable only to Second Generation instances.
FailoverReplica *DatabaseInstanceFailoverReplica `json:"failoverReplica,omitempty"`
// InstanceType: The instance type. This can be one of the
@@ -487,7 +499,8 @@ type DatabaseInstance struct {
// IpAddresses: The assigned IP addresses for the instance.
IpAddresses []*IpMapping `json:"ipAddresses,omitempty"`
- // Ipv6Address: The IPv6 address assigned to the instance.
+ // Ipv6Address: The IPv6 address assigned to the instance. This property
+ // is applicable only to First Generation instances.
Ipv6Address string `json:"ipv6Address,omitempty"`
// Kind: This is always sql#instance.
@@ -512,9 +525,11 @@ type DatabaseInstance struct {
// instance. The Google apps domain is prefixed if applicable.
Project string `json:"project,omitempty"`
- // Region: The geographical region. Can be us-central, asia-east1 or
- // europe-west1. Defaults to us-central. The region can not be changed
- // after instance creation.
+ // Region: The geographical region. Can be us-central (FIRST_GEN
+ // instances only), us-central1 (SECOND_GEN instances only), asia-east1
+ // or europe-west1. Defaults to us-central or us-central1 depending on
+ // the instance type (First Generation or Second Generation). The region
+ // can not be changed after instance creation.
Region string `json:"region,omitempty"`
// ReplicaConfiguration: Configuration specific to read-replicas
@@ -531,7 +546,8 @@ type DatabaseInstance struct {
ServerCaCert *SslCert `json:"serverCaCert,omitempty"`
// ServiceAccountEmailAddress: The service account email address
- // assigned to the instance.
+ // assigned to the instance. This property is applicable only to Second
+ // Generation instances.
ServiceAccountEmailAddress string `json:"serviceAccountEmailAddress,omitempty"`
// Settings: The user settings.
@@ -549,11 +565,15 @@ type DatabaseInstance struct {
// UNKNOWN_STATE: The state of the instance is unknown.
State string `json:"state,omitempty"`
+ // SuspensionReason: If the instance state is SUSPENDED, the reason for
+ // the suspension.
+ SuspensionReason []string `json:"suspensionReason,omitempty"`
+
// ServerResponse contains the HTTP response code and headers from the
// server.
googleapi.ServerResponse `json:"-"`
- // ForceSendFields is a list of field names (e.g. "CurrentDiskSize") to
+ // ForceSendFields is a list of field names (e.g. "BackendType") to
// unconditionally include in API requests. By default, fields with
// empty values are omitted from API requests. However, any non-pointer,
// non-interface field appearing in ForceSendFields will be sent to the
@@ -569,10 +589,15 @@ func (s *DatabaseInstance) MarshalJSON() ([]byte, error) {
}
// DatabaseInstanceFailoverReplica: The name and status of the failover
-// replica. Only applies to Second Generation instances.
+// replica. This property is applicable only to Second Generation
+// instances.
type DatabaseInstanceFailoverReplica struct {
+ // Available: The availability status of the failover replica. A false
+ // status indicates that the failover replica is out of sync. The master
+	// can only failover to the failover replica when the status is true.
Available bool `json:"available,omitempty"`
+ // Name: The name of the failover replica.
Name string `json:"name,omitempty"`
// ForceSendFields is a list of field names (e.g. "Available") to
@@ -1443,11 +1468,13 @@ type Settings struct {
// following.
// ALWAYS: The instance should always be active.
// NEVER: The instance should never be activated.
- // ON_DEMAND: The instance is activated upon receiving requests.
+ // ON_DEMAND: The instance is activated upon receiving requests; only
+ // applicable to First Generation instances.
ActivationPolicy string `json:"activationPolicy,omitempty"`
// AuthorizedGaeApplications: The App Engine app IDs that can access
- // this instance.
+ // this instance. This property is only applicable to First Generation
+ // instances.
AuthorizedGaeApplications []string `json:"authorizedGaeApplications,omitempty"`
// BackupConfiguration: The daily backup configuration for the instance.
@@ -1455,15 +1482,18 @@ type Settings struct {
// CrashSafeReplicationEnabled: Configuration specific to read replica
// instances. Indicates whether database flags for crash-safe
- // replication are enabled.
+ // replication are enabled. This property is only applicable to First
+ // Generation instances.
CrashSafeReplicationEnabled bool `json:"crashSafeReplicationEnabled,omitempty"`
- // DataDiskSizeGb: The size of data disk, in GB. Only supported for 2nd
- // Generation instances. The data disk size minimum is 10GB.
+ // DataDiskSizeGb: The size of data disk, in GB. The data disk size
+ // minimum is 10GB. This property is only applicable to Second
+ // Generation instances.
DataDiskSizeGb int64 `json:"dataDiskSizeGb,omitempty,string"`
- // DataDiskType: The type of data disk. Only supported for 2nd
- // Generation instances. The default type is SSD.
+ // DataDiskType: The type of data disk. Only supported for Second
+ // Generation instances. The default type is PD_SSD. This property is
+ // only applicable to Second Generation instances.
DataDiskType string `json:"dataDiskType,omitempty"`
// DatabaseFlags: The database flags passed to the instance at startup.
@@ -1475,7 +1505,8 @@ type Settings struct {
// IpConfiguration: The settings for IP Management. This allows to
// enable or disable the instance IP and manage which external networks
- // can connect to the instance.
+ // can connect to the instance. The IPv4 address cannot be disabled for
+ // Second Generation instances.
IpConfiguration *IpConfiguration `json:"ipConfiguration,omitempty"`
// Kind: This is always sql#settings.
@@ -1483,20 +1514,24 @@ type Settings struct {
// LocationPreference: The location preference settings. This allows the
// instance to be located as near as possible to either an App Engine
- // app or GCE zone for better performance.
+ // app or GCE zone for better performance. App Engine co-location is
+ // only applicable to First Generation instances.
LocationPreference *LocationPreference `json:"locationPreference,omitempty"`
// MaintenanceWindow: The maintenance window for this instance. This
// specifies when the instance may be restarted for maintenance
- // purposes.
+ // purposes. This property is only applicable to Second Generation
+ // instances.
MaintenanceWindow *MaintenanceWindow `json:"maintenanceWindow,omitempty"`
// PricingPlan: The pricing plan for this instance. This can be either
- // PER_USE or PACKAGE.
+ // PER_USE or PACKAGE. Only PER_USE is supported for Second Generation
+ // instances.
PricingPlan string `json:"pricingPlan,omitempty"`
// ReplicationType: The type of replication this instance uses. This can
- // be either ASYNCHRONOUS or SYNCHRONOUS.
+ // be either ASYNCHRONOUS or SYNCHRONOUS. This property is only
+ // applicable to First Generation instances.
ReplicationType string `json:"replicationType,omitempty"`
// SettingsVersion: The version of instance settings. This is a required
@@ -1770,8 +1805,8 @@ type User struct {
// Host: The host name from which the user can connect. For insert
// operations, host defaults to an empty string. For update operations,
- // host is specified as part of the request URL. The host name is not
- // mutable with this API.
+ // host is specified as part of the request URL. The host name cannot be
+ // updated after insertion.
Host string `json:"host,omitempty"`
// Instance: The name of the Cloud SQL instance. This does not include
@@ -2314,8 +2349,7 @@ type DatabasesDeleteCall struct {
ctx_ context.Context
}
-// Delete: Deletes a resource containing information about a database
-// inside a Cloud SQL instance.
+// Delete: Deletes a database from a Cloud SQL instance.
func (r *DatabasesService) Delete(project string, instance string, database string) *DatabasesDeleteCall {
c := &DatabasesDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -2395,7 +2429,7 @@ func (c *DatabasesDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, erro
}
return ret, nil
// {
- // "description": "Deletes a resource containing information about a database inside a Cloud SQL instance.",
+ // "description": "Deletes a database from a Cloud SQL instance.",
// "httpMethod": "DELETE",
// "id": "sql.databases.delete",
// "parameterOrder": [
@@ -3258,7 +3292,7 @@ type InstancesCloneCall struct {
}
// Clone: Creates a Cloud SQL instance as a clone of the source
-// instance.
+// instance. The API is not ready for Second Generation instances yet.
func (r *InstancesService) Clone(project string, instance string, instancesclonerequest *InstancesCloneRequest) *InstancesCloneCall {
c := &InstancesCloneCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -3343,7 +3377,7 @@ func (c *InstancesCloneCall) Do(opts ...googleapi.CallOption) (*Operation, error
}
return ret, nil
// {
- // "description": "Creates a Cloud SQL instance as a clone of the source instance.",
+ // "description": "Creates a Cloud SQL instance as a clone of the source instance. The API is not ready for Second Generation instances yet.",
// "httpMethod": "POST",
// "id": "sql.instances.clone",
// "parameterOrder": [
diff --git a/vendor/google.golang.org/api/storage/v1/storage-api.json b/vendor/google.golang.org/api/storage/v1/storage-api.json
index 4f4e9ef41633..3768b46877ad 100644
--- a/vendor/google.golang.org/api/storage/v1/storage-api.json
+++ b/vendor/google.golang.org/api/storage/v1/storage-api.json
@@ -1,13 +1,13 @@
{
"kind": "discovery#restDescription",
- "etag": "\"bRFOOrZKfO9LweMbPqu0kcu6De8/obxe7w0FjJgGPQFNs6hMClVbJfI\"",
+ "etag": "\"bRFOOrZKfO9LweMbPqu0kcu6De8/KVPQfwGxQTBtH0g1kuij0C9i4uc\"",
"discoveryVersion": "v1",
"id": "storage:v1",
"name": "storage",
"version": "v1",
- "revision": "20160121",
+ "revision": "20160304",
"title": "Cloud Storage JSON API",
- "description": "Lets you store and retrieve potentially-large, immutable data objects.",
+ "description": "Stores and retrieves potentially large, immutable data objects.",
"ownerDomain": "google.com",
"ownerName": "Google",
"icons": {
diff --git a/vendor/google.golang.org/api/storage/v1/storage-gen.go b/vendor/google.golang.org/api/storage/v1/storage-gen.go
index 89ab8b7baae8..a299044395b2 100644
--- a/vendor/google.golang.org/api/storage/v1/storage-gen.go
+++ b/vendor/google.golang.org/api/storage/v1/storage-gen.go
@@ -5993,18 +5993,29 @@ func (c *ObjectsInsertCall) Projection(projection string) *ObjectsInsertCall {
return c
}
-// Media specifies the media to upload in a single chunk. At most one of
-// Media and ResumableMedia may be set.
+// Media specifies the media to upload in one or more chunks. The chunk
+// size may be controlled by supplying a MediaOption generated by
+// googleapi.ChunkSize. The chunk size defaults to
+// googleapi.DefaultUploadChunkSize. The Content-Type header used in the
+// upload request will be determined by sniffing the contents of r,
+// unless a MediaOption generated by googleapi.ContentType is
+// supplied.
+// At most one of Media and ResumableMedia may be set.
func (c *ObjectsInsertCall) Media(r io.Reader, options ...googleapi.MediaOption) *ObjectsInsertCall {
opts := googleapi.ProcessMediaOptions(options)
chunkSize := opts.ChunkSize
- r, c.mediaType_ = gensupport.DetermineContentType(r, opts.ContentType)
+ if !opts.ForceEmptyContentType {
+ r, c.mediaType_ = gensupport.DetermineContentType(r, opts.ContentType)
+ }
c.media_, c.resumableBuffer_ = gensupport.PrepareUpload(r, chunkSize)
return c
}
// ResumableMedia specifies the media to upload in chunks and can be
-// canceled with ctx. ResumableMedia is deprecated in favour of Media.
+// canceled with ctx.
+//
+// Deprecated: use Media instead.
+//
// At most one of Media and ResumableMedia may be set. mediaType
// identifies the MIME media type of the upload, such as "image/png". If
// mediaType is "", it will be auto-detected. The provided ctx will
@@ -6074,7 +6085,7 @@ func (c *ObjectsInsertCall) doRequest(alt string) (*http.Response, error) {
googleapi.Expand(req.URL, map[string]string{
"bucket": c.bucket,
})
- if c.resumableBuffer_ != nil {
+ if c.resumableBuffer_ != nil && c.mediaType_ != "" {
req.Header.Set("X-Upload-Content-Type", c.mediaType_)
}
req.Header.Set("Content-Type", ctype)
@@ -6094,7 +6105,9 @@ func (c *ObjectsInsertCall) doRequest(alt string) (*http.Response, error) {
// was returned.
func (c *ObjectsInsertCall) Do(opts ...googleapi.CallOption) (*Object, error) {
gensupport.SetOptions(c.urlParams_, opts...)
- res, err := c.doRequest("json")
+ res, err := gensupport.Retry(c.ctx_, func() (*http.Response, error) {
+ return c.doRequest("json")
+ }, gensupport.DefaultBackoffStrategy())
if res != nil && res.StatusCode == http.StatusNotModified {
if res.Body != nil {
res.Body.Close()
@@ -6134,6 +6147,9 @@ func (c *ObjectsInsertCall) Do(opts ...googleapi.CallOption) (*Object, error) {
return nil, err
}
defer res.Body.Close()
+ if err := googleapi.CheckResponse(res); err != nil {
+ return nil, err
+ }
}
ret := &Object{
ServerResponse: googleapi.ServerResponse{
diff --git a/website/.bundle/config b/website/.bundle/config
index df11c7518e0c..2fbf0ffd7101 100644
--- a/website/.bundle/config
+++ b/website/.bundle/config
@@ -1,2 +1 @@
----
-BUNDLE_DISABLE_SHARED_GEMS: '1'
+--- {}
diff --git a/website/Gemfile.lock b/website/Gemfile.lock
index f6cb5aa72698..a4113925deb5 100644
--- a/website/Gemfile.lock
+++ b/website/Gemfile.lock
@@ -1,6 +1,6 @@
GIT
remote: https://github.com/hashicorp/middleman-hashicorp
- revision: 4de731a2eb809f0ccbaddf18043b267806123465
+ revision: adc9159aeb1be03513925527326d5f25266f9732
specs:
middleman-hashicorp (0.2.0)
bootstrap-sass (~> 3.3)
@@ -27,7 +27,7 @@ GEM
minitest (~> 5.1)
thread_safe (~> 0.3, >= 0.3.4)
tzinfo (~> 1.1)
- autoprefixer-rails (6.3.4)
+ autoprefixer-rails (6.3.6)
execjs
bootstrap-sass (3.3.6)
autoprefixer-rails (>= 5.2.1)
@@ -152,7 +152,7 @@ GEM
redcarpet (3.3.4)
ref (2.0.0)
rouge (1.10.1)
- sass (3.4.21)
+ sass (3.4.22)
sprockets (2.12.4)
hike (~> 1.2)
multi_json (~> 1.0)
@@ -187,6 +187,3 @@ PLATFORMS
DEPENDENCIES
middleman-hashicorp!
-
-BUNDLED WITH
- 1.11.2
diff --git a/website/config.rb b/website/config.rb
index c2d3c7a9274c..a56358f33891 100644
--- a/website/config.rb
+++ b/website/config.rb
@@ -2,6 +2,6 @@
activate :hashicorp do |h|
h.name = "terraform"
- h.version = "0.6.14"
+ h.version = "0.6.15"
h.github_slug = "hashicorp/terraform"
end
diff --git a/website/packer.json b/website/packer.json
index b230c7e51075..5732112c3fa9 100644
--- a/website/packer.json
+++ b/website/packer.json
@@ -27,8 +27,10 @@
"FASTLY_API_KEY={{ user `fastly_api_key` }}"
],
"inline": [
- "apt-get update",
- "apt-get install -y build-essential curl git libffi-dev s3cmd wget",
+ "apt-get -qq update",
+ "apt-get -yqq install build-essential curl git libffi-dev wget",
+ "apt-get -yqq install python-pip",
+ "pip install s3cmd",
"cd /app",
"bundle check || bundle install --jobs 7",
diff --git a/website/scripts/deploy.sh b/website/scripts/deploy.sh
index 06d84265de56..9b67c543792a 100755
--- a/website/scripts/deploy.sh
+++ b/website/scripts/deploy.sh
@@ -64,15 +64,29 @@ if [ -z "$NO_UPLOAD" ]; then
# The s3cmd guessed mime type for text files is often wrong. This is
# problematic for some assets, so force their mime types to be correct.
+ echo "Overriding javascript mime-types..."
s3cmd \
--mime-type="application/javascript" \
- modify "s3://hc-sites/$PROJECT/latest/**/*.js"
+ --exclude "*" \
+ --include "*.js" \
+ --recursive \
+ modify "s3://hc-sites/$PROJECT/latest/"
+
+ echo "Overriding css mime-types..."
s3cmd \
--mime-type="text/css" \
- modify "s3://hc-sites/$PROJECT/latest/**/*.css"
+ --exclude "*" \
+ --include "*.css" \
+ --recursive \
+ modify "s3://hc-sites/$PROJECT/latest/"
+
+ echo "Overriding svg mime-types..."
s3cmd \
--mime-type="image/svg+xml" \
- modify "s3://hc-sites/$PROJECT/latest/**/*.svg"
+ --exclude "*" \
+ --include "*.svg" \
+ --recursive \
+ modify "s3://hc-sites/$PROJECT/latest/"
fi
# Perform a soft-purge of the surrogate key.
diff --git a/website/source/assets/stylesheets/_announcement-bnr.scss b/website/source/assets/stylesheets/_announcement-bnr.scss
new file mode 100755
index 000000000000..b1cb8c6e0064
--- /dev/null
+++ b/website/source/assets/stylesheets/_announcement-bnr.scss
@@ -0,0 +1,142 @@
+//
+// announcement bnr
+// --------------------------------------------------
+
+$enterprise-bnr-font-weight: 300;
+$enterprise-bnr-consul-color: #B52A55;
+$enterprise-color-dark-white: #A9B1B5;
+
+body{
+  // when _announcment-bnr.erb (i.e. the Consul Enterprise Announcement) is being used in the layout we need to push down content to accommodate it
+ // add this class to body
+ &.-displaying-bnr{
+ #header{
+ > .container{
+ padding-top: 8px;
+ -webkit-transform: translateY(32px);
+ -ms-transform: translateY(32px);
+ transform: translateY(32px);
+ }
+ }
+
+ #jumbotron {
+ .container{
+ .jumbo-logo-wrap{
+ margin-top: 160px;
+ }
+ }
+ }
+
+ &.page-sub{
+ #header{
+ > .container{
+ padding-bottom: 32px;
+ }
+ }
+ }
+ }
+}
+
+
+#announcement-bnr {
+ height: 40px;
+ flex-shrink: 0;
+ background-color: #000;
+
+ &.-absolute{
+ position: absolute;
+ top: 0;
+ left: 0;
+ width: 100%;
+ z-index: 9999;
+ }
+
+ a,p{
+ font-size: 14px;
+ color: $enterprise-color-dark-white;
+ font-family: $header-font-family;
+ font-weight: $enterprise-bnr-font-weight;
+ font-size: 13px;
+ line-height: 40px;
+ margin-bottom: 0;
+ }
+
+ .link-highlight{
+ display: inline-block;
+ margin-left: 3px;
+ color: lighten($purple, 10%);
+ font-weight: 400;
+ -webkit-transform: translateY(1px);
+ -ms-transform: translateY(1px);
+ transform: translateY(1px);
+ }
+
+ .enterprise-logo{
+ position: relative;
+ top: 4px;
+
+ &:hover{
+ text-decoration: none;
+
+ svg{
+ rect{
+ fill: $enterprise-color-dark-white;
+ }
+ }
+ }
+
+ svg{
+ width: 156px;
+ height: 18px;
+ fill: $white;
+ margin-right: 4px;
+ margin-left: 3px;
+
+ rect{
+ @include transition(all .1s ease-in);
+ }
+ }
+ }
+}
+
+.hcaret{
+ display: inline-block;
+ -moz-transform: translate(0, -1px) rotate(135deg);
+ -webkit-transform: translate(0, -1px) rotate(135deg);
+ transform: translate(0, -1px) rotate(135deg);
+ width: 7px;
+ height: 7px;
+ border-top: 1px solid lighten($purple, 10%);
+ border-left: 1px solid lighten($purple, 10%);
+ @include transition(all .1s ease-in);
+}
+
+@media (max-width: 768px) {
+ #announcement-bnr {
+ .tagline{
+ display: none;
+ }
+ }
+}
+
+@media (max-width: 320px) {
+ #announcement-bnr {
+ a,p{
+ font-size: 12px;
+ }
+
+ .link-highlight{
+ display: inline-block;
+ margin-left: 1px;
+ }
+
+ .enterprise-logo svg{
+ width: 128px;
+ margin-left: 2px;
+ }
+
+ .hcaret{
+ display: none;
+ }
+ }
+}
diff --git a/website/source/assets/stylesheets/_docs.scss b/website/source/assets/stylesheets/_docs.scss
index 2fdeba66de33..37c7f1816335 100755
--- a/website/source/assets/stylesheets/_docs.scss
+++ b/website/source/assets/stylesheets/_docs.scss
@@ -14,6 +14,7 @@ body.layout-azurerm,
body.layout-clc,
body.layout-cloudflare,
body.layout-cloudstack,
+body.layout-cobbler,
body.layout-consul,
body.layout-datadog,
body.layout-digitalocean,
@@ -26,6 +27,7 @@ body.layout-fastly,
body.layout-google,
body.layout-heroku,
body.layout-influxdb,
+body.layout-librato,
body.layout-mailgun,
body.layout-mysql,
body.layout-openstack,
@@ -34,6 +36,7 @@ body.layout-postgresql,
body.layout-powerdns,
body.layout-rundeck,
body.layout-statuscake,
+body.layout-softlayer,
body.layout-template,
body.layout-tls,
body.layout-ultradns,
diff --git a/website/source/assets/stylesheets/application.scss b/website/source/assets/stylesheets/application.scss
index 3776f905661a..27dd8558462d 100755
--- a/website/source/assets/stylesheets/application.scss
+++ b/website/source/assets/stylesheets/application.scss
@@ -22,6 +22,7 @@
@import 'hashicorp-shared/_hashicorp-sidebar';
// Components
+@import '_announcement-bnr';
@import '_header';
@import '_footer';
@import '_jumbotron';
diff --git a/website/source/docs/commands/fmt.html.markdown b/website/source/docs/commands/fmt.html.markdown
index bb48ae9576aa..96e2be19eb9a 100644
--- a/website/source/docs/commands/fmt.html.markdown
+++ b/website/source/docs/commands/fmt.html.markdown
@@ -25,4 +25,4 @@ The command-line flags are all optional. The list of available flags are:
* `-list=true` - List files whose formatting differs (disabled if using STDIN)
* `-write=true` - Write result to source file instead of STDOUT (disabled if
using STDIN)
-* `-diff=false` - Display diffs instead of rewriting files
+* `-diff=false` - Display diffs of formatting changes
diff --git a/website/source/docs/configuration/variables.html.md b/website/source/docs/configuration/variables.html.md
index 1062b32bb38b..ea454c1dcbc9 100644
--- a/website/source/docs/configuration/variables.html.md
+++ b/website/source/docs/configuration/variables.html.md
@@ -153,7 +153,13 @@ $ TF_VAR_image=foo terraform apply
## Variable Files
Variables can be collected in files and passed all at once using the
-`-var-file=foo` flag.
+`-var-file=foo.tfvars` flag. The format for variables in `.tfvars`
+files is:
+```
+foo = "bar"
+xyz = "abc"
+
+```
The flag can be used multiple times per command invocation:
@@ -165,22 +171,18 @@ terraform apply -var-file=foo.tfvars -var-file=bar.tfvars
variable file (reading left to right) will be the definition used. Put more
simply, the last time a variable is defined is the one which will be used.
-##Example:
+### Precedence example:
Both these files have the variable `baz` defined:
_foo.tfvars_
```
-variable "baz" {
- default = "foo"
-}
+baz = "foo"
```
_bar.tfvars_
```
-variable "baz" {
- default = "bar"
-}
+baz = "bar"
```
When they are passed in the following order:
diff --git a/website/source/docs/providers/aws/index.html.markdown b/website/source/docs/providers/aws/index.html.markdown
index 7bc328dad4cd..949c67f623ce 100644
--- a/website/source/docs/providers/aws/index.html.markdown
+++ b/website/source/docs/providers/aws/index.html.markdown
@@ -39,7 +39,7 @@ explained below:
- Static credentials
- Environment variables
- Shared credentials file
-
+- EC2 Role
### Static credentials ###
@@ -96,6 +96,21 @@ provider "aws" {
}
```
+### EC2 Role
+
+If you're running Terraform from an EC2 instance with an IAM Instance Profile
+that uses an IAM Role, Terraform will simply ask
+[the metadata API](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#instance-metadata-security-credentials)
+endpoint for credentials.
+
+This is the preferred approach when running in EC2, as it avoids
+hardcoding credentials. Instead, credentials are leased on-the-fly by Terraform,
+which reduces the chance of leakage.
+
+You can provide a custom metadata API endpoint via the `AWS_METADATA_ENDPOINT`
+variable, which expects the endpoint URL including the version, and defaults to
+`http://169.254.169.254:80/latest`.
+
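+For illustration only (the region value below is just a placeholder), a minimal
+provider block that relies solely on the EC2 Role for credentials looks like:
+
+```
+provider "aws" {
+  # No access_key or secret_key here: credentials are fetched on-the-fly
+  # from the instance metadata API via the attached IAM Instance Profile.
+  region = "us-east-1"
+}
+```
+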
## Argument Reference
The following arguments are supported in the `provider` block:
@@ -156,4 +171,24 @@ Nested `endpoints` block supports the followings:
* `elb` - (Optional) Use this to override the default endpoint
URL constructed from the `region`. It's typically used to connect to
- custom elb endpoints.
\ No newline at end of file
+ custom elb endpoints.
+
+## Getting the Account ID
+
+If you use either `allowed_account_ids` or `forbidden_account_ids`,
+Terraform uses several approaches to get the actual account ID
+in order to compare it with allowed/forbidden ones.
+
+Approaches differ per auth provider (see the configuration sketch after this list):
+
+ * EC2 instance w/ IAM Instance Profile - [Metadata API](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html)
+ is always used
+ * All other providers (ENV vars, shared creds file, ...)
+ will try two approaches in the following order
+ * `iam:GetUser` - typically useful for IAM Users. It also means
+ that each user needs to be privileged to call `iam:GetUser` for themselves.
+ * `iam:ListRoles` - this is specifically useful for IdP-federated profiles
+    which cannot use `iam:GetUser`. It also means that each federated user
+    needs to be _assuming_ an IAM role which allows `iam:ListRoles`.
+    Unfortunately, there is currently no cleaner way to get the account ID
+    out of the API when using a federated account.
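+
+For illustration only (the account ID below is a placeholder), the check
+described above is triggered by configuration such as:
+
+```
+provider "aws" {
+  region = "us-east-1"
+
+  # Terraform resolves the effective account ID (via the metadata API or
+  # iam:GetUser / iam:ListRoles as described above) and compares it
+  # against this list before making any changes.
+  allowed_account_ids = ["123456789012"]
+}
+```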
diff --git a/website/source/docs/providers/aws/r/api_gateway_account.html.markdown b/website/source/docs/providers/aws/r/api_gateway_account.html.markdown
new file mode 100644
index 000000000000..23afc8dfb7c7
--- /dev/null
+++ b/website/source/docs/providers/aws/r/api_gateway_account.html.markdown
@@ -0,0 +1,84 @@
+---
+layout: "aws"
+page_title: "AWS: aws_api_gateway_account"
+sidebar_current: "docs-aws-resource-api-gateway-account"
+description: |-
+  Provides settings for an API Gateway Account.
+---
+
+# aws\_api\_gateway\_account
+
+Provides settings for an API Gateway Account. Settings are applied region-wide per `provider` block.
+
+-> **Note:** As there is no API method for deleting account settings or resetting them to defaults, destroying this resource will keep your account settings intact.
+
+## Example Usage
+
+```
+resource "aws_api_gateway_account" "demo" {
+ cloudwatch_role_arn = "${aws_iam_role.cloudwatch.arn}"
+}
+
+resource "aws_iam_role" "cloudwatch" {
+ name = "api_gateway_cloudwatch_global"
+ assume_role_policy = < **NOTE:** When using `ELB` as the health_check_type, `health_check_grace_period` is required.
-
## Waiting for Capacity
A newly-created ASG is initially empty and begins to scale to `min_size` (or
diff --git a/website/source/docs/providers/aws/r/cloudfront_distribution.html.markdown b/website/source/docs/providers/aws/r/cloudfront_distribution.html.markdown
new file mode 100644
index 000000000000..8bfa53c7d8e4
--- /dev/null
+++ b/website/source/docs/providers/aws/r/cloudfront_distribution.html.markdown
@@ -0,0 +1,356 @@
+---
+layout: "aws"
+page_title: "AWS: cloudfront_distribution"
+sidebar_current: "docs-aws-resource-cloudfront-distribution"
+description: |-
+ Provides a CloudFront web distribution resource.
+---
+
+# aws\_cloudfront\_distribution
+
+Creates an Amazon CloudFront web distribution.
+
+For information about CloudFront distributions, see the
+[Amazon CloudFront Developer Guide][1]. For specific information about creating
+CloudFront web distributions, see the [POST Distribution][2] page in the Amazon
+CloudFront API Reference.
+
+~> **NOTE:** CloudFront distributions take about 15 minutes to reach a deployed state
+after creation or modification. During this time, deletes to resources will be
+blocked. If you need to delete a distribution that is enabled and you do not
+want to wait, you need to use the `retain_on_delete` flag.
+
+## Example Usage
+
+The following example creates a CloudFront distribution with an S3 origin.
+
+```
+resource "aws_cloudfront_distribution" "s3_distribution" {
+ origin {
+ domain_name = "mybucket.s3.amazonaws.com"
+ origin_id = "myS3Origin"
+
+ s3_origin_config {
+ origin_access_identity = "origin-access-identity/cloudfront/ABCDEFG1234567"
+ }
+ }
+
+ enabled = true
+ comment = "Some comment"
+ default_root_object = "index.html"
+
+ logging_config {
+ include_cookies = false
+ bucket = "mylogs.s3.amazonaws.com"
+ prefix = "myprefix"
+ }
+
+ aliases = ["mysite.example.com", "yoursite.example.com"]
+
+ default_cache_behavior {
+ allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
+ cached_methods = ["GET", "HEAD"]
+ target_origin_id = "myS3Origin"
+
+ forwarded_values {
+ query_string = false
+
+ cookies {
+ forward = "none"
+ }
+ }
+
+ viewer_protocol_policy = "allow-all"
+ min_ttl = 0
+ default_ttl = 3600
+ max_ttl = 86400
+ }
+
+ price_class = "PriceClass_200"
+
+ restrictions {
+ geo_restriction {
+ restriction_type = "whitelist"
+ locations = ["US", "CA", "GB", "DE"]
+ }
+ }
+
+ viewer_certificate {
+ cloudfront_default_certificate = true
+ }
+}
+```
+
+## Argument Reference
+
+The CloudFront distribution argument layout is a complex structure composed
+of several sub-resources - these resources are laid out below.
+
+### Top-Level Arguments
+
+ * `aliases` (Optional) - Extra CNAMEs (alternate domain names), if any, for
+ this distribution.
+
+ * `cache_behavior` (Optional) - A [cache behavior](#cache-behavior-arguments)
+ resource for this distribution (multiples allowed).
+
+ * `comment` (Optional) - Any comments you want to include about the
+ distribution.
+
+ * `custom_error_response` (Optional) - One or more [custom error
+ response](#custom-error-response-arguments) elements (multiples allowed).
+
+ * `default_cache_behavior` (Required) - The [default cache
+ behavior](#default-cache-behavior-arguments) for this distribution (maximum
+ one).
+
+ * `default_root_object` (Optional) - The object that you want CloudFront to
+ return (for example, index.html) when an end user requests the root URL.
+
+ * `enabled` (Required) - Whether the distribution is enabled to accept end
+ user requests for content.
+
+ * `logging_config` (Optional) - The [logging
+ configuration](#logging-config-arguments) that controls how logs are written
+ to your distribution (maximum one).
+
+ * `origin` (Required) - One or more [origins](#origin-arguments) for this
+ distribution (multiples allowed).
+
+ * `price_class` (Optional) - The price class for this distribution. One of
+ `PriceClass_All`, `PriceClass_200`, `PriceClass_100`
+
+ * `restrictions` (Required) - The [restriction
+ configuration](#restrictions-arguments) for this distribution (maximum one).
+
+ * `viewer_certificate` (Required) - The [SSL
+ configuration](#viewer-certificate-arguments) for this distribution (maximum
+ one).
+
+ * `web_acl_id` (Optional) - If you're using AWS WAF to filter CloudFront
+ requests, the Id of the AWS WAF web ACL that is associated with the
+ distribution.
+
+ * `retain_on_delete` (Optional) - Disables the distribution instead of
+ deleting it when destroying the resource through Terraform. If this is set,
+ the distribution needs to be deleted manually afterwards. Default: `false`.
+
+#### Cache Behavior Arguments
+
+ * `allowed_methods` (Required) - Controls which HTTP methods CloudFront
+ processes and forwards to your Amazon S3 bucket or your custom origin.
+
+ * `cached_methods` (Required) - Controls whether CloudFront caches the
+ response to requests using the specified HTTP methods.
+
+ * `compress` (Optional) - Whether you want CloudFront to automatically
+ compress content for web requests that include `Accept-Encoding: gzip` in
+ the request header (default: `false`).
+
+ * `default_ttl` (Required) - The default amount of time (in seconds) that an
+ object is in a CloudFront cache before CloudFront forwards another request
+   in the absence of a `Cache-Control max-age` or `Expires` header.
+
+ * `forwarded_values` (Required) - The [forwarded values
+ configuration](#forwarded-values-arguments) that specifies how CloudFront
+ handles query strings, cookies and headers (maximum one).
+
+ * `max_ttl` (Required) - The maximum amount of time (in seconds) that an
+ object is in a CloudFront cache before CloudFront forwards another request
+ to your origin to determine whether the object has been updated. Only
+ effective in the presence of `Cache-Control max-age`, `Cache-Control
+ s-maxage`, and `Expires` headers.
+
+ * `min_ttl` (Required) - The minimum amount of time that you want objects to
+ stay in CloudFront caches before CloudFront queries your origin to see
+ whether the object has been updated.
+
+ * `path_pattern` (Required) - The pattern (for example, `images/*.jpg`) that
+ specifies which requests you want this cache behavior to apply to.
+
+ * `smooth_streaming` (Optional) - Indicates whether you want to distribute
+ media files in Microsoft Smooth Streaming format using the origin that is
+ associated with this cache behavior.
+
+ * `target_origin_id` (Required) - The value of ID for the origin that you want
+ CloudFront to route requests to when a request matches the path pattern
+ either for a cache behavior or for the default cache behavior.
+
+ * `trusted_signers` (Optional) - The AWS accounts, if any, that you want to
+ allow to create signed URLs for private content.
+
+ * `viewer_protocol_policy` (Required) - Use this element to specify the
+ protocol that users can use to access the files in the origin specified by
+ TargetOriginId when a request matches the path pattern in PathPattern. One
+ of `allow-all`, `https-only`, or `redirect-to-https`.
+
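+As a sketch only (the path pattern, origin ID and TTL values are illustrative),
+an additional `cache_behavior` scoped to a path pattern might look like:
+
+```
+cache_behavior {
+  path_pattern     = "images/*.jpg"
+  allowed_methods  = ["GET", "HEAD"]
+  cached_methods   = ["GET", "HEAD"]
+  target_origin_id = "myS3Origin"
+
+  forwarded_values {
+    query_string = false
+
+    cookies {
+      forward = "none"
+    }
+  }
+
+  viewer_protocol_policy = "redirect-to-https"
+  min_ttl                = 0
+  default_ttl            = 3600
+  max_ttl                = 86400
+}
+```
+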
+##### Forwarded Values Arguments
+
+ * `cookies` (Optional) - The [forwarded values cookies](#cookies-arguments)
+ that specifies how CloudFront handles cookies (maximum one).
+
+ * `headers` (Optional) - Specifies the Headers, if any, that you want
+ CloudFront to vary upon for this cache behavior. Specify `*` to include all
+ headers.
+
+ * `query_string` (Required) - Indicates whether you want CloudFront to forward
+ query strings to the origin that is associated with this cache behavior.
+
+##### Cookies Arguments
+
+ * `forward` (Required) - Specifies whether you want CloudFront to forward
+ cookies to the origin that is associated with this cache behavior. You can
+ specify `all`, `none` or `whitelist`.
+
+ * `whitelisted_names` (Optional) - If you have specified `whitelist` for
+ `forward`, the whitelisted cookies that you want CloudFront to forward to
+ your origin.
+
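+For example (the cookie name below is purely illustrative), forwarding only a
+single session cookie might be expressed as:
+
+```
+forwarded_values {
+  query_string = false
+
+  cookies {
+    forward           = "whitelist"
+    whitelisted_names = ["session-id"]
+  }
+}
+```
+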
+#### Custom Error Response Arguments
+
+ * `error_caching_min_ttl` (Optional) - The minimum amount of time you want
+ HTTP error codes to stay in CloudFront caches before CloudFront queries your
+ origin to see whether the object has been updated.
+
+ * `error_code` (Required) - The 4xx or 5xx HTTP status code that you want to
+ customize.
+
+ * `response_code` (Optional) - The HTTP status code that you want CloudFront
+ to return with the custom error page to the viewer.
+
+ * `response_page_path` (Optional) - The path of the custom error page (for
+ example, `/custom_404.html`).
+
+#### Default Cache Behavior Arguments
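+A brief sketch (the status codes, TTL and path are example values only):
+
+```
+custom_error_response {
+  error_code            = 404
+  response_code         = 200
+  response_page_path    = "/custom_404.html"
+  error_caching_min_ttl = 30
+}
+```
+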
+
+The arguments for `default_cache_behavior` are the same as for
+[`cache_behavior`](#cache-behavior-arguments), except that the `path_pattern`
+argument is not required.
+
+#### Logging Config Arguments
+
+ * `bucket` (Required) - The Amazon S3 bucket to store the access logs in, for
+ example, `myawslogbucket.s3.amazonaws.com`.
+
+ * `include_cookies` (Optional) - Specifies whether you want CloudFront to
+ include cookies in access logs (default: `false`).
+
+ * `prefix` (Optional) - An optional string that you want CloudFront to prefix
+ to the access log filenames for this distribution, for example, `myprefix/`.
+
+#### Origin Arguments
+
+ * `custom_origin_config` - The [CloudFront custom
+ origin](#custom-origin-config-arguments) configuration information. If an S3
+ origin is required, use `s3_origin_config` instead.
+
+ * `domain_name` (Required) - The DNS domain name of either the S3 bucket, or
+ web site of your custom origin.
+
+ * `custom_header` (Optional) - One or more sub-resources with `name` and
+ `value` parameters that specify header data that will be sent to the origin
+ (multiples allowed).
+
+ * `origin_id` (Required) - A unique identifier for the origin.
+
+ * `origin_path` (Optional) - An optional element that causes CloudFront to
+ request your content from a directory in your Amazon S3 bucket or your
+ custom origin.
+
+ * `s3_origin_config` - The [CloudFront S3 origin](#s3-origin-config-arguments)
+ configuration information. If a custom origin is required, use
+ `custom_origin_config` instead.
+
+##### Custom Origin Config Arguments
+
+ * `http_port` (Required) - The HTTP port the custom origin listens on.
+
+ * `https_port` (Required) - The HTTPS port the custom origin listens on.
+
+ * `origin_protocol_policy` (Required) - The origin protocol policy to apply to
+ your origin. One of `http-only`, `https-only`, or `match-viewer`.
+
+ * `origin_ssl_protocols` (Required) - The SSL/TLS protocols that you want
+ CloudFront to use when communicating with your origin over HTTPS. A list of
+ one or more of `SSLv3`, `TLSv1`, `TLSv1.1`, and `TLSv1.2`.
+
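+As a sketch only (the domain name, origin ID and ports are placeholders), an
+`origin` using `custom_origin_config` might look like:
+
+```
+origin {
+  domain_name = "www.example.com"
+  origin_id   = "myCustomOrigin"
+
+  custom_origin_config {
+    http_port              = 80
+    https_port             = 443
+    origin_protocol_policy = "match-viewer"
+    origin_ssl_protocols   = ["TLSv1", "TLSv1.1", "TLSv1.2"]
+  }
+}
+```
+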
+##### S3 Origin Config Arguments
+
+* `origin_access_identity` (Optional) - The [CloudFront origin access
+ identity][5] to associate with the origin.
+
+#### Restrictions Arguments
+
+The `restrictions` sub-resource takes another single sub-resource named
+`geo_restriction` (see the example for usage).
+
+The arguments of `geo_restriction` are:
+
+ * `locations` (Optional) - The [ISO 3166-1-alpha-2 codes][4] for which you
+ want CloudFront either to distribute your content (`whitelist`) or not
+ distribute your content (`blacklist`).
+
+ * `restriction_type` (Required) - The method that you want to use to restrict
+ distribution of your content by country: `none`, `whitelist`, or
+ `blacklist`.
+
+#### Viewer Certificate Arguments
+
+ * `acm_certificate_arn` - The ARN of the [AWS Certificate Manager][6]
+ certificate that you wish to use with this distribution. Specify this,
+ `cloudfront_default_certificate`, or `iam_certificate_id`.
+
+ * `cloudfront_default_certificate` - `true` if you want viewers to use HTTPS
+ to request your objects and you're using the CloudFront domain name for your
+ distribution. Specify this, `acm_certificate_arn`, or `iam_certificate_id`.
+
+ * `iam_certificate_id` - The IAM certificate identifier of the custom viewer
+ certificate for this distribution if you are using a custom domain. Specify
+ this, `acm_certificate_arn`, or `cloudfront_default_certificate`.
+
+ * `minimum_protocol_version` - The minimum version of the SSL protocol that
+ you want CloudFront to use for HTTPS connections. One of `SSLv3` or `TLSv1`.
+ Default: `SSLv3`. **NOTE**: If you are using a custom certificate (specified
+ with `acm_certificate_arn` or `iam_certificate_id`), and have specified
+ `sni-only` in `ssl_support_method`, `TLSv1` must be specified.
+
+ * `ssl_support_method` - Specifies how you want CloudFront to serve HTTPS
+ requests. One of `vip` or `sni-only`. Required if you specify
+ `acm_certificate_arn` or `iam_certificate_id`. **NOTE:** `vip` causes
+ CloudFront to use a dedicated IP address and may incur extra charges.
+
+## Attribute Reference
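+For instance (the certificate ARN below is a made-up placeholder), a
+custom-domain distribution using an ACM certificate could declare:
+
+```
+viewer_certificate {
+  acm_certificate_arn      = "arn:aws:acm:us-east-1:123456789012:certificate/example"
+  ssl_support_method       = "sni-only"
+  minimum_protocol_version = "TLSv1"
+}
+```
+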
+
+The following attributes are exported:
+
+ * `id` - The identifier for the distribution. For example: `EDFDVBD632BHDS5`.
+
+ * `caller_reference` - Internal value used by CloudFront to allow future
+ updates to the distribution configuration.
+
+ * `status` - The current status of the distribution. `Deployed` if the
+ distribution's information is fully propagated throughout the Amazon
+ CloudFront system.
+
+ * `active_trusted_signers` - The key pair IDs that CloudFront is aware of for
+ each trusted signer, if the distribution is set up to serve private content
+ with signed URLs.
+
+ * `domain_name` - The domain name corresponding to the distribution. For
+ example: `d604721fxaaqy9.cloudfront.net`.
+
+ * `last_modified_time` - The date and time the distribution was last modified.
+
+ * `in_progress_validation_batches` - The number of invalidation batches
+ currently in progress.
+
+ * `etag` - The current version of the distribution's information. For example:
+ `E2QWRUHAPOMQZL`.
+
+
+[1]: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html
+[2]: http://docs.aws.amazon.com/AmazonCloudFront/latest/APIReference/CreateDistribution.html
+[3]: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
+[4]: http://www.iso.org/iso/country_codes/iso_3166_code_lists/country_names_and_code_elements.htm
+[5]: /docs/providers/aws/r/cloudfront_origin_access_identity.html
+[6]: https://aws.amazon.com/certificate-manager/
diff --git a/website/source/docs/providers/aws/r/cloudfront_origin_access_identity.html.markdown b/website/source/docs/providers/aws/r/cloudfront_origin_access_identity.html.markdown
new file mode 100644
index 000000000000..df74f9d89828
--- /dev/null
+++ b/website/source/docs/providers/aws/r/cloudfront_origin_access_identity.html.markdown
@@ -0,0 +1,58 @@
+---
+layout: "aws"
+page_title: "AWS: cloudfront_origin_access_identity"
+sidebar_current: "docs-aws-resource-cloudfront-origin-access-identity"
+description: |-
+ Provides a CloudFront origin access identity.
+---
+
+# aws\_cloudfront\_origin\_access\_identity
+
+Creates an Amazon CloudFront origin access identity.
+
+For information about CloudFront distributions, see the
+[Amazon CloudFront Developer Guide][1]. For more information on generating
+origin access identities, see
+[Using an Origin Access Identity to Restrict Access to Your Amazon S3 Content][2].
+
+## Example Usage
+
+The following example creates a CloudFront origin access identity.
+
+```
+resource "aws_cloudfront_origin_access_identity" "origin_access_identity" {
+ comment = "Some comment"
+}
+```
+
+## Argument Reference
+
+* `comment` - (Optional) A comment for the origin access identity.
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `id` - The identifier for the origin access identity. For example: `EDFDVBD632BHDS5`.
+* `caller_reference` - Internal value used by CloudFront to allow future updates to the origin access identity.
+* `cloudfront_access_identity_path` - A shortcut to the full path for the origin access identity to use in CloudFront, see below.
+* `etag` - The current version of the origin access identity's information. For example: E2QWRUHAPOMQZL.
+* `s3_canonical_user_id` - The Amazon S3 canonical user ID for the origin access identity, which you use when giving the origin access identity read permission to an object in Amazon S3.
+
+## Using With CloudFront
+
+Normally, when referencing an origin access identity in CloudFront, you need to
+prefix the ID with the `origin-access-identity/cloudfront/` special path.
+The `cloudfront_access_identity_path` attribute provides this full path directly,
+so the prefix does not have to be assembled by hand. The snippet below
+demonstrates its use with the `s3_origin_config` block of the
+[`aws_cloudfront_distribution`][3] resource:
+
+```
+s3_origin_config {
+ origin_access_identity = "${aws_cloudfront_origin_access_identity.origin_access_identity.cloudfront_access_identity_path}"
+}
+```
+
+[1]: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html
+[2]: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
+[3]: /docs/providers/aws/r/cloudfront_distribution.html
diff --git a/website/source/docs/providers/aws/r/cloudwatch_log_subscription_filter.html.markdown b/website/source/docs/providers/aws/r/cloudwatch_log_subscription_filter.html.markdown
new file mode 100644
index 000000000000..9bc1b2d95fd6
--- /dev/null
+++ b/website/source/docs/providers/aws/r/cloudwatch_log_subscription_filter.html.markdown
@@ -0,0 +1,39 @@
+---
+layout: "aws"
+page_title: "AWS: aws_cloudwatch_log_subscription_filter"
+sidebar_current: "docs-aws-resource-cloudwatch-log-subscription-filter"
+description: |-
+ Provides a CloudWatch Logs subscription filter.
+---
+
+# aws\_cloudwatch\_log\_subscription\_filter
+
+Provides a CloudWatch Logs subscription filter resource.
+
+## Example Usage
+
+```
+resource "aws_cloudwatch_log_subscription_filter" "test_lambdafunction_logfilter" {
+ name = "test_lambdafunction_logfilter"
+ role_arn = "${aws_iam_role.iam_for_lambda.arn}"
+ log_group_name = "/aws/lambda/example_lambda_name"
+ filter_pattern = "logtype test"
+ destination_arn = "${aws_kinesis_stream.test_logstream.arn}"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) A name for the subscription filter.
+* `destination_arn` - (Required) The ARN of the destination to deliver matching log events to. Currently only a Kinesis stream or a logical destination is supported.
+* `filter_pattern` - (Required) A valid CloudWatch Logs filter pattern for subscribing to a filtered stream of log events.
+* `log_group_name` - (Required) The name of the log group to associate the subscription filter with.
+* `role_arn` - (Optional) The ARN of an IAM role that grants Amazon CloudWatch Logs permissions to deliver ingested log events to the destination stream.
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `arn` - The Amazon Resource Name (ARN) specifying the log subscription filter.
diff --git a/website/source/docs/providers/aws/r/codedeploy_deployment_group.html.markdown b/website/source/docs/providers/aws/r/codedeploy_deployment_group.html.markdown
index 3b295aeafd7e..246607ec4cb9 100644
--- a/website/source/docs/providers/aws/r/codedeploy_deployment_group.html.markdown
+++ b/website/source/docs/providers/aws/r/codedeploy_deployment_group.html.markdown
@@ -29,14 +29,14 @@ resource "aws_iam_role_policy" "foo_policy" {
"Action": [
"autoscaling:CompleteLifecycleAction",
"autoscaling:DeleteLifecycleHook",
- "autoscaling:DescribeAutoScalingGroups",
+ "autoscaling:DescribeAutoScalingGroups",
"autoscaling:DescribeLifecycleHooks",
"autoscaling:PutLifecycleHook",
"autoscaling:RecordLifecycleActionHeartbeat",
"ec2:DescribeInstances",
"ec2:DescribeInstanceStatus",
"tag:GetTags",
- "tag:GetResources"
+ "tag:GetResources"
],
"Resource": "*"
}
@@ -70,11 +70,18 @@ resource "aws_codedeploy_deployment_group" "foo" {
app_name = "${aws_codedeploy_app.foo_app.name}"
deployment_group_name = "bar"
service_role_arn = "${aws_iam_role.foo_role.arn}"
+
ec2_tag_filter {
key = "filterkey"
type = "KEY_AND_VALUE"
value = "filtervalue"
}
+
+ trigger_configuration {
+ trigger_events = ["DeploymentFailure"]
+ trigger_name = "foo-trigger"
+ trigger_target_arn = "foo-topic-arn"
+ }
}
```
@@ -89,6 +96,7 @@ The following arguments are supported:
* `deployment_config_name` - (Optional) The name of the group's deployment config. The default is "CodeDeployDefault.OneAtATime".
* `ec2_tag_filter` - (Optional) Tag filters associated with the group. See the AWS docs for details.
* `on_premises_instance_tag_filter` - (Optional) On premise tag filters associated with the group. See the AWS docs for details.
+* `trigger_configuration` - (Optional) A Trigger Configuration block. Trigger Configurations are documented below.
Both ec2_tag_filter and on_premises_tag_filter blocks support the following:
@@ -96,6 +104,12 @@ Both ec2_tag_filter and on_premises_tag_filter blocks support the following:
* `type` - (Optional) The type of the tag filter, either KEY_ONLY, VALUE_ONLY, or KEY_AND_VALUE.
* `value` - (Optional) The value of the tag filter.
+Add triggers to a Deployment Group to receive notifications about events related to deployments or instances in the group. Notifications are sent to subscribers of the SNS topic associated with the trigger. CodeDeploy must have permission to publish to the topic from this deployment group. Trigger Configurations support the following:
+
+ * `trigger_events` - (Required) The event type or types for which notifications are triggered. The following values are supported: `DeploymentStart`, `DeploymentSuccess`, `DeploymentFailure`, `DeploymentStop`, `InstanceStart`, `InstanceSuccess`, `InstanceFailure`.
+ * `trigger_name` - (Required) The name of the notification trigger.
+ * `trigger_target_arn` - (Required) The ARN of the SNS topic through which notifications are sent.
+
## Attributes Reference
The following attributes are exported:
diff --git a/website/source/docs/providers/aws/r/default_network_acl.html.markdown b/website/source/docs/providers/aws/r/default_network_acl.html.markdown
new file mode 100644
index 000000000000..e50a786b79bb
--- /dev/null
+++ b/website/source/docs/providers/aws/r/default_network_acl.html.markdown
@@ -0,0 +1,176 @@
+---
+layout: "aws"
+page_title: "AWS: aws_default_network_acl"
+sidebar_current: "docs-aws-resource-default-network-acl"
+description: |-
+ Manage the default Network ACL resource.
+---
+
+# aws\_default\_network\_acl
+
+Provides a resource to manage the default AWS Network ACL. VPC Only.
+
+Each VPC created in AWS comes with a Default Network ACL that can be managed, but not
+destroyed. **This is an advanced resource**, and has special caveats to be aware
+of when using it. Please read this document in its entirety before using this
+resource.
+
+The `aws_default_network_acl` behaves differently from normal resources, in that
+Terraform does not _create_ this resource, but instead attempts to "adopt" it
+into management. We can do this because each VPC created has a Default Network
+ACL that cannot be destroyed, and is created with a known set of default rules.
+
+When Terraform first adopts the Default Network ACL, it **immediately removes all
+rules in the ACL**. It then proceeds to create any rules specified in the
+configuration. This step is required so that only the rules specified in the
+configuration are created.
+
+For more information about Network ACLs, see the AWS Documentation on
+[Network ACLs][aws-network-acls].
+
+## Basic Example Usage, with default rules
+
+The following config gives the Default Network ACL the same rules that AWS
+includes, but pulls the resource under management by Terraform. This means that
+any ACL rules added or changed will be detected as drift.
+
+```
+resource "aws_vpc" "mainvpc" {
+ cidr_block = "10.1.0.0/16"
+}
+
+resource "aws_default_network_acl" "default" {
+ default_network_acl_id = "${aws_vpc.mainvpc.default_network_acl_id}"
+
+ ingress {
+ protocol = -1
+ rule_no = 100
+ action = "allow"
+ cidr_block = "0.0.0.0/0"
+ from_port = 0
+ to_port = 0
+ }
+
+ egress {
+ protocol = -1
+ rule_no = 100
+ action = "allow"
+ cidr_block = "0.0.0.0/0"
+ from_port = 0
+ to_port = 0
+ }
+}
+```
+
+## Example config to deny all Egress traffic, allowing Ingress
+
+The following denies all Egress traffic by omitting any `egress` rules, while
+including the default `ingress` rule to allow all traffic.
+
+```
+resource "aws_vpc" "mainvpc" {
+ cidr_block = "10.1.0.0/16"
+}
+
+resource "aws_default_network_acl" "default" {
+ default_network_acl_id = "${aws_vpc.mainvpc.default_network_acl_id}"
+
+ ingress {
+ protocol = -1
+ rule_no = 100
+ action = "allow"
+ cidr_block = "0.0.0.0/0"
+ from_port = 0
+ to_port = 0
+ }
+
+}
+```
+
+## Example config to deny all traffic to any Subnet in the Default Network ACL
+
+This config denies all traffic in the Default ACL. This can be useful if you
+want a locked down default to force all resources in the VPC to assign a
+non-default ACL.
+
+```
+resource "aws_vpc" "mainvpc" {
+ cidr_block = "10.1.0.0/16"
+}
+
+resource "aws_default_network_acl" "default" {
+ default_network_acl_id = "${aws_vpc.mainvpc.default_network_acl_id}"
+ # no rules defined, deny all traffic in this ACL
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `default_network_acl_id` - (Required) The Network ACL ID to manage. This
+attribute is exported from `aws_vpc`, or manually found via the AWS Console.
+* `subnet_ids` - (Optional) A list of Subnet IDs to apply the ACL to. See the
+notes below on managing Subnets in the Default Network ACL.
+* `ingress` - (Optional) Specifies an ingress rule. Parameters defined below.
+* `egress` - (Optional) Specifies an egress rule. Parameters defined below.
+* `tags` - (Optional) A mapping of tags to assign to the resource.
+
+Both `egress` and `ingress` support the following keys:
+
+* `from_port` - (Required) The from port to match.
+* `to_port` - (Required) The to port to match.
+* `rule_no` - (Required) The rule number. Used for ordering.
+* `action` - (Required) The action to take.
+* `protocol` - (Required) The protocol to match. If using the -1 'all'
+protocol, you must specify a from and to port of 0.
+* `cidr_block` - (Optional) The CIDR block to match. This must be a
+valid network mask.
+* `icmp_type` - (Optional) The ICMP type to be used. Default 0.
+* `icmp_code` - (Optional) The ICMP code to be used. Default 0.
+
+~> Note: For more information on ICMP types and codes, see here: http://www.nthelp.com/icmp.html
+
+### Managing Subnets in the Default Network ACL
+
+Within a VPC, all Subnets must be associated with a Network ACL. In order to
+"delete" the association between a Subnet and a non-default Network ACL, the
+association is destroyed by replacing it with an association between the Subnet
+and the Default ACL instead.
+
+When managing the Default Network ACL, you cannot "remove" Subnets.
+Instead, they must be reassigned to another Network ACL, or the Subnet itself must be
+destroyed. Because of these requirements, removing the `subnet_ids` attribute from the
+configuration of an `aws_default_network_acl` resource may result in a recurring
+plan, until the Subnets are reassigned to another Network ACL or are destroyed.
+
+Because Subnets are by default associated with the Default Network ACL, any
+non-explicit association will show up as a plan to remove the Subnet. For
+example: if you have a custom `aws_network_acl` with two Subnets attached and
+you remove the `aws_network_acl` resource, then after that resource is
+destroyed, future plans will show a diff on the managed `aws_default_network_acl`,
+as those two Subnets have been orphaned by the now-destroyed Network ACL and thus
+adopted by the Default Network ACL. In order to avoid a recurring plan, they
+will need to be reassigned, destroyed, or added to the `subnet_ids` attribute of
+the `aws_default_network_acl` entry, as shown in the sketch below.
+
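+For example, a minimal sketch that adopts two hypothetical Subnets
+(`aws_subnet.one` and `aws_subnet.two`) into the Default Network ACL:
+
+```
+resource "aws_default_network_acl" "default" {
+  default_network_acl_id = "${aws_vpc.mainvpc.default_network_acl_id}"
+
+  # Explicitly listing the orphaned Subnets avoids a recurring plan.
+  subnet_ids = [
+    "${aws_subnet.one.id}",
+    "${aws_subnet.two.id}",
+  ]
+}
+```
+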
+### Removing `aws_default_network_acl` from your configuration
+
+Each AWS VPC comes with a Default Network ACL that cannot be deleted. The `aws_default_network_acl`
+allows you to manage this Network ACL, but Terraform cannot destroy it. Removing
+this resource from your configuration will remove it from your statefile and
+management, **but will not destroy the Network ACL.** All Subnet associations
+and ingress or egress rules will be left as they are at the time of removal. You
+can resume managing them via the AWS Console.
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `id` - The ID of the Default Network ACL
+* `vpc_id` - The ID of the associated VPC
+* `ingress` - Set of ingress rules
+* `egress` - Set of egress rules
+* `subnet_ids` - IDs of associated Subnets
+
+[aws-network-acls]: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html
diff --git a/website/source/docs/providers/aws/r/ebs_volume.html.md b/website/source/docs/providers/aws/r/ebs_volume.html.md
index 78d902b3e0be..b8050df14852 100644
--- a/website/source/docs/providers/aws/r/ebs_volume.html.md
+++ b/website/source/docs/providers/aws/r/ebs_volume.html.md
@@ -22,6 +22,8 @@ resource "aws_ebs_volume" "example" {
}
```
+~> **NOTE**: One of `size` or `snapshot_id` is required when specifying an EBS volume.
+
## Argument Reference
The following arguments are supported:
diff --git a/website/source/docs/providers/aws/r/elastic_beanstalk_configuration_template.html.markdown b/website/source/docs/providers/aws/r/elastic_beanstalk_configuration_template.html.markdown
index a493f58ff4aa..4f2fcc993e1d 100644
--- a/website/source/docs/providers/aws/r/elastic_beanstalk_configuration_template.html.markdown
+++ b/website/source/docs/providers/aws/r/elastic_beanstalk_configuration_template.html.markdown
@@ -43,7 +43,6 @@ The following arguments are supported:
off of. Example stacks can be found in the [Amazon API documentation][1]
-
## Option Settings
The `setting` field supports the following format:
diff --git a/website/source/docs/providers/aws/r/elastic_beanstalk_environment.html.markdown b/website/source/docs/providers/aws/r/elastic_beanstalk_environment.html.markdown
index afe8f510c93c..2c7523870d16 100644
--- a/website/source/docs/providers/aws/r/elastic_beanstalk_environment.html.markdown
+++ b/website/source/docs/providers/aws/r/elastic_beanstalk_environment.html.markdown
@@ -60,9 +60,11 @@ this time the Elastic Beanstalk API does not provide a programatic way of
changing these tags after initial application
-
## Option Settings
+Some options can be stack-specific; check the [AWS Docs](http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html)
+for supported options and examples.
+
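+For example, a minimal `setting` sketch using the general `aws:autoscaling:asg`
+namespace (adjust the namespace and option name to your solution stack):
+
+```
+setting {
+  namespace = "aws:autoscaling:asg"
+  name      = "MinSize"
+  value     = "1"
+}
+```
+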
The `setting` and `all_settings` mappings support the following format:
* `namespace` - (Optional) unique namespace identifying the option's
@@ -78,9 +80,9 @@ The following attributes are exported:
* `description` - Description of the Elastic Beanstalk Environment.
* `tier` - The environment tier specified.
* `application` – The Elastic Beanstalk Application specified for this environment.
-* `setting` – Settings specifically set for this Environment.
-* `all_settings` – List of all option settings configured in the Environment. These
- are a combination of default settings and their overrides from `settings` in
+* `setting` – Settings specifically set for this Environment.
+* `all_settings` – List of all option settings configured in the Environment. These
+ are a combination of default settings and their overrides from `setting` in
the configuration.
* `cname` - Fully qualified DNS name for the Environment.
* `autoscaling_groups` - The autoscaling groups used by this environment.
diff --git a/website/source/docs/providers/aws/r/elb.html.markdown b/website/source/docs/providers/aws/r/elb.html.markdown
index 5c644fe90539..b8efdee528f9 100644
--- a/website/source/docs/providers/aws/r/elb.html.markdown
+++ b/website/source/docs/providers/aws/r/elb.html.markdown
@@ -96,7 +96,7 @@ Listeners support the following:
* `lb_port` - (Required) The port to listen on for the load balancer
* `lb_protocol` - (Required) The protocol to listen on. Valid values are `HTTP`,
`HTTPS`, `TCP`, or `SSL`
-* `ssl_certificate_id` - (Optional) The id of an SSL certificate you have
+* `ssl_certificate_id` - (Optional) The ARN of an SSL certificate you have
uploaded to AWS IAM. **Only valid when `lb_protocol` is either HTTPS or SSL**
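+
+Because `ssl_certificate_id` expects an ARN, a certificate managed by a
+hypothetical `aws_iam_server_certificate.example` resource could be referenced
+through its exported `arn` attribute, for example:
+
+```
+listener {
+  instance_port      = 8000
+  instance_protocol  = "http"
+  lb_port            = 443
+  lb_protocol        = "https"
+  ssl_certificate_id = "${aws_iam_server_certificate.example.arn}"
+}
+```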
Health Check supports the following:
diff --git a/website/source/docs/providers/aws/r/iam_server_certificate.html.markdown b/website/source/docs/providers/aws/r/iam_server_certificate.html.markdown
index 8fee7bf029ff..25e2d56e90ea 100644
--- a/website/source/docs/providers/aws/r/iam_server_certificate.html.markdown
+++ b/website/source/docs/providers/aws/r/iam_server_certificate.html.markdown
@@ -91,7 +91,7 @@ resource "aws_elb" "ourapp" {
The following arguments are supported:
* `name` - (Optional) The name of the Server Certificate. Do not include the
- path in this value.If omitted, Terraform will assign a random, unique name.
+ path in this value. If omitted, Terraform will assign a random, unique name.
* `name_prefix` - (Optional) Creates a unique name beginning with the specified
prefix. Conflicts with `name`.
* `certificate_body` – (Required) The contents of the public key certificate in
diff --git a/website/source/docs/providers/aws/r/instance.html.markdown b/website/source/docs/providers/aws/r/instance.html.markdown
index d67b49ff3abe..c93e66c53af6 100644
--- a/website/source/docs/providers/aws/r/instance.html.markdown
+++ b/website/source/docs/providers/aws/r/instance.html.markdown
@@ -69,7 +69,6 @@ instances. See [Shutdown Behavior](https://docs.aws.amazon.com/AWSEC2/latest/Use
"Instance Store") volumes on the instance. See [Block Devices](#block-devices) below for details.
-
## Block devices
Each of the `*_block_device` attributes controls a portion of the AWS
diff --git a/website/source/docs/providers/aws/r/lambda_function.html.markdown b/website/source/docs/providers/aws/r/lambda_function.html.markdown
index 41857cda7fb6..ef95b7831836 100644
--- a/website/source/docs/providers/aws/r/lambda_function.html.markdown
+++ b/website/source/docs/providers/aws/r/lambda_function.html.markdown
@@ -77,5 +77,5 @@ resource "aws_lambda_function" "test_lambda" {
[3]: https://docs.aws.amazon.com/lambda/latest/dg/walkthrough-custom-events-create-test-function.html
[4]: https://docs.aws.amazon.com/lambda/latest/dg/intro-permission-model.html
[5]: https://docs.aws.amazon.com/lambda/latest/dg/limits.html
-[6]: https://docs.aws.amazon.com/lambda/latest/dg/API_CreateFunction.html#API_CreateFunction_RequestBody
+[6]: https://docs.aws.amazon.com/lambda/latest/dg/API_CreateFunction.html#SSS-CreateFunction-request-Runtime
[7]: http://docs.aws.amazon.com/lambda/latest/dg/vpc.html
diff --git a/website/source/docs/providers/aws/r/launch_configuration.html.markdown b/website/source/docs/providers/aws/r/launch_configuration.html.markdown
index d32a1c80611d..dfe85aa1822b 100644
--- a/website/source/docs/providers/aws/r/launch_configuration.html.markdown
+++ b/website/source/docs/providers/aws/r/launch_configuration.html.markdown
@@ -109,7 +109,6 @@ The following arguments are supported:
`"default"` or `"dedicated"`, see [AWS's Create Launch Configuration](http://docs.aws.amazon.com/AutoScaling/latest/APIReference/API_CreateLaunchConfiguration.html)
for more details
-
## Block devices
Each of the `*_block_device` attributes controls a portion of the AWS
diff --git a/website/source/docs/providers/aws/r/network_acl_rule.html.markdown b/website/source/docs/providers/aws/r/network_acl_rule.html.markdown
index e5766756fe15..c5e7a327b2fc 100644
--- a/website/source/docs/providers/aws/r/network_acl_rule.html.markdown
+++ b/website/source/docs/providers/aws/r/network_acl_rule.html.markdown
@@ -43,6 +43,8 @@ The following arguments are supported:
* `icmp_type` - (Optional) ICMP protocol: The ICMP type. Required if specifying ICMP for the protocol. e.g. -1
* `icmp_code` - (Optional) ICMP protocol: The ICMP code. Required if specifying ICMP for the protocol. e.g. -1
+~> **NOTE:** If the value of `protocol` is `-1` or `all`, the `from_port` and `to_port` values will be ignored and the rule will apply to all ports.
+
~> Note: For more information on ICMP types and codes, see here: http://www.nthelp.com/icmp.html
## Attributes Reference
diff --git a/website/source/docs/providers/aws/r/opsworks_application.html.markdown b/website/source/docs/providers/aws/r/opsworks_application.html.markdown
new file mode 100644
index 000000000000..d4795c29aab8
--- /dev/null
+++ b/website/source/docs/providers/aws/r/opsworks_application.html.markdown
@@ -0,0 +1,94 @@
+---
+layout: "aws"
+page_title: "AWS: aws_opsworks_aplication"
+sidebar_current: "docs-aws-resource-opsworks-application"
+description: |-
+ Provides an OpsWorks application resource.
+---
+
+# aws\_opsworks\_application
+
+Provides an OpsWorks application resource.
+
+## Example Usage
+
+```
+resource "aws_opsworks_application" "foo-app" {
+ name = "foobar application"
+ short_name = "foobar"
+ stack_id = "${aws_opsworks_stack.stack.id}"
+ type = "rails"
+ description = "This is a Rails application"
+ domains = [
+ "example.com",
+ "sub.example.com"
+ ]
+ environment = {
+ key = "key"
+ value = "value"
+ secure = false
+ }
+ app_source = {
+ type = "git"
+ revision = "master"
+ url = "https://github.com/example.git"
+ }
+ enable_ssl = true
+ ssl_configuration = {
+ private_key = "${file("./foobar.key")}"
+ certificate = "${file("./foobar.crt")}"
+ }
+ document_root = "public"
+ auto_bundle_on_deploy = true
+ rails_env = "staging"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) A human-readable name for the application.
+* `short_name` - (Required) A short, machine-readable name for the application. This can only be defined on resource creation and is ignored on resource update.
+* `stack_id` - (Required) The id of the stack the application will belong to.
+* `type` - (Required) Opsworks application type. One of `aws-flow-ruby`, `java`, `rails`, `php`, `nodejs`, `static` or `other`.
+* `description` - (Optional) A description of the app.
+* `environment` - (Optional) Object to define environment variables. Object is described below.
+* `enable_ssl` - (Optional) Whether to enable SSL for the app. This must be set in order to let `ssl_configuration.private_key`, `ssl_configuration.certificate` and `ssl_configuration.chain` take effect.
+* `ssl_configuration` - (Optional) The SSL configuration of the app. Object is described below.
+* `app_source` - (Optional) SCM configuration of the app as described below.
+* `data_source_arn` - (Optional) The data source's ARN.
+* `data_source_type` - (Optional) The data source's type, one of `AutoSelectOpsworksMysqlInstance`, `OpsworksMysqlInstance`, or `RdsDbInstance`.
+* `data_source_database_name` - (Optional) The database name.
+* `domains` - (Optional) A list of virtual host aliases.
+* `document_root` - (Optional) Subfolder for the document root for applications of type `rails`.
+* `auto_bundle_on_deploy` - (Optional) Run bundle install when deploying applications of type `rails`.
+* `rails_env` - (Required if `type` = `rails`) The name of the Rails environment for applications of type `rails`.
+* `aws_flow_ruby_settings` - (Optional) Specify activity and workflow workers for your app using the aws-flow gem.
+
+An `app_source` block supports the following arguments (can only be defined once per resource):
+
+* `type` - (Required) The type of source to use. For example, "archive".
+* `url` - (Required) The URL where the app resource can be found.
+* `username` - (Optional) Username to use when authenticating to the source.
+* `password` - (Optional) Password to use when authenticating to the source.
+* `ssh_key` - (Optional) SSH key to use when authenticating to the source.
+* `revision` - (Optional) For sources that are version-aware, the revision to use.
+
+An `environment` block supports the following arguments:
+
+* `key` - (Required) Variable name.
+* `value` - (Required) Variable value.
+* `secure` - (Optional) Set visibility of the variable value to `true` or `false`.
+
+A `ssl_configuration` block supports the following arguments (can only be defined once per resource):
+
+* `private_key` - (Required) The private key; the contents of the certificate's domain.key file.
+* `certificate` - (Required) The contents of the certificate's domain.crt file.
+* `chain` - (Optional) Can be used to specify an intermediate certificate authority key or client authentication.
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `id` - The id of the application.
diff --git a/website/source/docs/providers/aws/r/opsworks_custom_layer.html.markdown b/website/source/docs/providers/aws/r/opsworks_custom_layer.html.markdown
index 7f04202d4c79..b43ce8a2dd0a 100644
--- a/website/source/docs/providers/aws/r/opsworks_custom_layer.html.markdown
+++ b/website/source/docs/providers/aws/r/opsworks_custom_layer.html.markdown
@@ -39,6 +39,7 @@ The following arguments are supported:
* `system_packages` - (Optional) Names of a set of system packages to install on the layer's instances.
* `use_ebs_optimized_instances` - (Optional) Whether to use EBS-optimized instances.
* `ebs_volume` - (Optional) `ebs_volume` blocks, as described below, will each create an EBS volume and connect it to the layer's instances.
+* `custom_json` - (Optional) Custom JSON attributes to apply to the layer.
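+
+A short sketch of `custom_json` (the attribute names are hypothetical and only
+meaningful to your own Chef recipes):
+
+```
+custom_json = <<EOT
+{
+  "mysite": {
+    "workers": 4
+  }
+}
+EOT
+```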
The following extra optional arguments, all lists of Chef recipe names, allow
custom Chef recipes to be applied to layer instances at the five different
diff --git a/website/source/docs/providers/aws/r/opsworks_ganglia_layer.html.markdown b/website/source/docs/providers/aws/r/opsworks_ganglia_layer.html.markdown
index 29c8fc68e261..3425eb196e3d 100644
--- a/website/source/docs/providers/aws/r/opsworks_ganglia_layer.html.markdown
+++ b/website/source/docs/providers/aws/r/opsworks_ganglia_layer.html.markdown
@@ -40,6 +40,7 @@ The following arguments are supported:
* `username` - (Optiona) The username to use for Ganglia. Defaults to "opsworks".
* `use_ebs_optimized_instances` - (Optional) Whether to use EBS-optimized instances.
* `ebs_volume` - (Optional) `ebs_volume` blocks, as described below, will each create an EBS volume and connect it to the layer's instances.
+* `custom_json` - (Optional) Custom JSON attributes to apply to the layer.
The following extra optional arguments, all lists of Chef recipe names, allow
custom Chef recipes to be applied to layer instances at the five different
diff --git a/website/source/docs/providers/aws/r/opsworks_haproxy_layer.html.markdown b/website/source/docs/providers/aws/r/opsworks_haproxy_layer.html.markdown
index 68b54a646f0c..baeff6172861 100644
--- a/website/source/docs/providers/aws/r/opsworks_haproxy_layer.html.markdown
+++ b/website/source/docs/providers/aws/r/opsworks_haproxy_layer.html.markdown
@@ -43,6 +43,7 @@ The following arguments are supported:
* `system_packages` - (Optional) Names of a set of system packages to install on the layer's instances.
* `use_ebs_optimized_instances` - (Optional) Whether to use EBS-optimized instances.
* `ebs_volume` - (Optional) `ebs_volume` blocks, as described below, will each create an EBS volume and connect it to the layer's instances.
+* `custom_json` - (Optional) Custom JSON attributes to apply to the layer.
The following extra optional arguments, all lists of Chef recipe names, allow
custom Chef recipes to be applied to layer instances at the five different
diff --git a/website/source/docs/providers/aws/r/opsworks_instance.html.markdown b/website/source/docs/providers/aws/r/opsworks_instance.html.markdown
new file mode 100644
index 000000000000..cfb14bd13d92
--- /dev/null
+++ b/website/source/docs/providers/aws/r/opsworks_instance.html.markdown
@@ -0,0 +1,135 @@
+---
+layout: "aws"
+page_title: "AWS: aws_opsworks_instance"
+sidebar_current: "docs-aws-resource-opsworks-instance"
+description: |-
+ Provides an OpsWorks instance resource.
+---
+
+# aws\_opsworks\_instance
+
+Provides an OpsWorks instance resource.
+
+## Example Usage
+
+```
+resource "aws_opsworks_instance" "my-instance" {
+ stack_id = "${aws_opsworks_stack.my-stack.id}"
+
+ layer_ids = [
+ "${aws_opsworks_custom_layer.my-layer.id}",
+ ]
+
+ instance_type = "t2.micro"
+ os = "Amazon Linux 2015.09"
+ state = "stopped"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `instance_type` - (Required) The type of instance to start
+* `stack_id` - (Required) The id of the stack the instance will belong to.
+* `layer_ids` - (Required) The ids of the layers the instance will belong to.
+* `state` - (Optional) The desired state of the instance. Can be either `"running"` or `"stopped"`.
+* `install_updates_on_boot` - (Optional) Controls whether to install OS and package updates when the instance boots. Defaults to `true`.
+* `auto_scaling_type` - (Optional) Creates load-based or time-based instances. If set, can be either: `"load"` or `"timer"`.
+* `availability_zone` - (Optional) Name of the availability zone where instances will be created
+ by default.
+* `ebs_optimized` - (Optional) If true, the launched EC2 instance will be EBS-optimized.
+* `hostname` - (Optional) The instance's host name.
+* `architecture` - (Optional) Machine architecture for created instances. Can be either `"x86_64"` (the default) or `"i386"`
+* `ami_id` - (Optional) The AMI to use for the instance. If an AMI is specified, `os` must be `"Custom"`.
+* `os` - (Optional) Name of operating system that will be installed.
+* `root_device_type` - (Optional) Name of the type of root device instances will have by default. Can be either `"ebs"` or `"instance-store"`
+* `ssh_key_name` - (Optional) Name of the SSH keypair that instances will have by default.
+* `agent_version` - (Optional) The AWS OpsWorks agent to install. Defaults to `"INHERIT"`.
+* `subnet_id` - (Optional) Subnet ID to attach to
+* `virtualization_type` - (Optional) Keyword to choose what virtualization mode created instances
+ will use. Can be either `"paravirtual"` or `"hvm"`.
+* `root_block_device` - (Optional) Customize details about the root block
+ device of the instance. See [Block Devices](#block-devices) below for details.
+* `ebs_block_device` - (Optional) Additional EBS block devices to attach to the
+ instance. See [Block Devices](#block-devices) below for details.
+* `ephemeral_block_device` - (Optional) Customize Ephemeral (also known as
+ "Instance Store") volumes on the instance. See [Block Devices](#block-devices) below for details.
+
+
+## Block devices
+
+Each of the `*_block_device` attributes controls a portion of the AWS
+Instance's "Block Device Mapping". It's a good idea to familiarize yourself with [AWS's Block Device
+Mapping docs](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html)
+to understand the implications of using these attributes.
+
+The `root_block_device` mapping supports the following:
+
+* `volume_type` - (Optional) The type of volume. Can be `"standard"`, `"gp2"`,
+ or `"io1"`. (Default: `"standard"`).
+* `volume_size` - (Optional) The size of the volume in gigabytes.
+* `iops` - (Optional) The amount of provisioned
+ [IOPS](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-io-characteristics.html).
+ This must be set with a `volume_type` of `"io1"`.
+* `delete_on_termination` - (Optional) Whether the volume should be destroyed
+ on instance termination (Default: `true`).
+
+Modifying any of the `root_block_device` settings requires resource
+replacement.
+
+Each `ebs_block_device` supports the following:
+
+* `device_name` - The name of the device to mount.
+* `snapshot_id` - (Optional) The Snapshot ID to mount.
+* `volume_type` - (Optional) The type of volume. Can be `"standard"`, `"gp2"`,
+ or `"io1"`. (Default: `"standard"`).
+* `volume_size` - (Optional) The size of the volume in gigabytes.
+* `iops` - (Optional) The amount of provisioned
+ [IOPS](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-io-characteristics.html).
+ This must be set with a `volume_type` of `"io1"`.
+* `delete_on_termination` - (Optional) Whether the volume should be destroyed
+ on instance termination (Default: `true`).
+* `encrypted` - (Optional) Enables [EBS
+ encryption](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html)
+ on the volume (Default: `false`). Cannot be used with `snapshot_id`.
+
+Modifying any `ebs_block_device` currently requires resource replacement.
+
+Each `ephemeral_block_device` supports the following:
+
+* `device_name` - The name of the block device to mount on the instance.
+* `virtual_name` - The [Instance Store Device
+ Name](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#InstanceStoreDeviceNames)
+ (e.g. `"ephemeral0"`)
+
+Each AWS Instance type has a different set of Instance Store block devices
+available for attachment. AWS [publishes a
+list](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#StorageOnInstanceTypes)
+of which ephemeral devices are available on each type. The devices are always
+identified by the `virtual_name` in the format `"ephemeral{0..N}"`.
+
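+As a sketch, an additional EBS volume could be attached to the instance from the
+example above (the device name and volume size are illustrative only):
+
+```
+resource "aws_opsworks_instance" "my-instance" {
+  stack_id      = "${aws_opsworks_stack.my-stack.id}"
+  layer_ids     = ["${aws_opsworks_custom_layer.my-layer.id}"]
+  instance_type = "t2.micro"
+
+  ebs_block_device {
+    device_name = "/dev/sdb"
+    volume_type = "gp2"
+    volume_size = 20
+  }
+}
+```
+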
+~> **NOTE:** Currently, changes to `*_block_device` configuration of _existing_
+resources cannot be automatically detected by Terraform. After making updates
+to block device configuration, resource recreation can be manually triggered by
+using the [`taint` command](/docs/commands/taint.html).
+
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `id` - The id of the OpsWorks instance.
+* `agent_version` - The AWS OpsWorks agent version.
+* `availability_zone` - The availability zone of the instance.
+* `ssh_key_name` - The key name of the instance
+* `public_dns` - The public DNS name assigned to the instance. For EC2-VPC, this
+ is only available if you've enabled DNS hostnames for your VPC
+* `public_ip` - The public IP address assigned to the instance, if applicable.
+* `private_dns` - The private DNS name assigned to the instance. Can only be
+ used inside the Amazon EC2, and only available if you've enabled DNS hostnames
+ for your VPC
+* `private_ip` - The private IP address assigned to the instance
+* `subnet_id` - The VPC subnet ID.
+* `security_group_ids` - The associated security groups.
+
diff --git a/website/source/docs/providers/aws/r/opsworks_java_app_layer.html.markdown b/website/source/docs/providers/aws/r/opsworks_java_app_layer.html.markdown
index 0463fbba76aa..7d4b3eb232c9 100644
--- a/website/source/docs/providers/aws/r/opsworks_java_app_layer.html.markdown
+++ b/website/source/docs/providers/aws/r/opsworks_java_app_layer.html.markdown
@@ -41,6 +41,7 @@ The following arguments are supported:
* `system_packages` - (Optional) Names of a set of system packages to install on the layer's instances.
* `use_ebs_optimized_instances` - (Optional) Whether to use EBS-optimized instances.
* `ebs_volume` - (Optional) `ebs_volume` blocks, as described below, will each create an EBS volume and connect it to the layer's instances.
+* `custom_json` - (Optional) Custom JSON attributes to apply to the layer.
The following extra optional arguments, all lists of Chef recipe names, allow
custom Chef recipes to be applied to layer instances at the five different
diff --git a/website/source/docs/providers/aws/r/opsworks_memcached_layer.html.markdown b/website/source/docs/providers/aws/r/opsworks_memcached_layer.html.markdown
index 31d4728063e3..fcbcf16f8b3a 100644
--- a/website/source/docs/providers/aws/r/opsworks_memcached_layer.html.markdown
+++ b/website/source/docs/providers/aws/r/opsworks_memcached_layer.html.markdown
@@ -37,6 +37,7 @@ The following arguments are supported:
* `system_packages` - (Optional) Names of a set of system packages to install on the layer's instances.
* `use_ebs_optimized_instances` - (Optional) Whether to use EBS-optimized instances.
* `ebs_volume` - (Optional) `ebs_volume` blocks, as described below, will each create an EBS volume and connect it to the layer's instances.
+* `custom_json` - (Optional) Custom JSON attributes to apply to the layer.
The following extra optional arguments, all lists of Chef recipe names, allow
custom Chef recipes to be applied to layer instances at the five different
diff --git a/website/source/docs/providers/aws/r/opsworks_mysql_layer.html.markdown b/website/source/docs/providers/aws/r/opsworks_mysql_layer.html.markdown
index 0cc11b73f5d8..85033868646e 100644
--- a/website/source/docs/providers/aws/r/opsworks_mysql_layer.html.markdown
+++ b/website/source/docs/providers/aws/r/opsworks_mysql_layer.html.markdown
@@ -38,6 +38,7 @@ The following arguments are supported:
* `system_packages` - (Optional) Names of a set of system packages to install on the layer's instances.
* `use_ebs_optimized_instances` - (Optional) Whether to use EBS-optimized instances.
* `ebs_volume` - (Optional) `ebs_volume` blocks, as described below, will each create an EBS volume and connect it to the layer's instances.
+* `custom_json` - (Optional) Custom JSON attributes to apply to the layer.
The following extra optional arguments, all lists of Chef recipe names, allow
custom Chef recipes to be applied to layer instances at the five different
diff --git a/website/source/docs/providers/aws/r/opsworks_nodejs_app_layer.html.markdown b/website/source/docs/providers/aws/r/opsworks_nodejs_app_layer.html.markdown
index ea0fdeb9b401..e9ab9d597e3c 100644
--- a/website/source/docs/providers/aws/r/opsworks_nodejs_app_layer.html.markdown
+++ b/website/source/docs/providers/aws/r/opsworks_nodejs_app_layer.html.markdown
@@ -37,6 +37,7 @@ The following arguments are supported:
* `system_packages` - (Optional) Names of a set of system packages to install on the layer's instances.
* `use_ebs_optimized_instances` - (Optional) Whether to use EBS-optimized instances.
* `ebs_volume` - (Optional) `ebs_volume` blocks, as described below, will each create an EBS volume and connect it to the layer's instances.
+* `custom_json` - (Optional) Custom JSON attributes to apply to the layer.
The following extra optional arguments, all lists of Chef recipe names, allow
custom Chef recipes to be applied to layer instances at the five different
diff --git a/website/source/docs/providers/aws/r/opsworks_php_app_layer.html.markdown b/website/source/docs/providers/aws/r/opsworks_php_app_layer.html.markdown
index 7d5d8ab8f751..8335c4154d87 100644
--- a/website/source/docs/providers/aws/r/opsworks_php_app_layer.html.markdown
+++ b/website/source/docs/providers/aws/r/opsworks_php_app_layer.html.markdown
@@ -36,6 +36,7 @@ The following arguments are supported:
* `system_packages` - (Optional) Names of a set of system packages to install on the layer's instances.
* `use_ebs_optimized_instances` - (Optional) Whether to use EBS-optimized instances.
* `ebs_volume` - (Optional) `ebs_volume` blocks, as described below, will each create an EBS volume and connect it to the layer's instances.
+* `custom_json` - (Optional) Custom JSON attributes to apply to the layer.
The following extra optional arguments, all lists of Chef recipe names, allow
custom Chef recipes to be applied to layer instances at the five different
diff --git a/website/source/docs/providers/aws/r/opsworks_rails_app_layer.html.markdown b/website/source/docs/providers/aws/r/opsworks_rails_app_layer.html.markdown
index 27ea7a979c7c..3d2c10fb5df5 100644
--- a/website/source/docs/providers/aws/r/opsworks_rails_app_layer.html.markdown
+++ b/website/source/docs/providers/aws/r/opsworks_rails_app_layer.html.markdown
@@ -42,6 +42,7 @@ The following arguments are supported:
* `system_packages` - (Optional) Names of a set of system packages to install on the layer's instances.
* `use_ebs_optimized_instances` - (Optional) Whether to use EBS-optimized instances.
* `ebs_volume` - (Optional) `ebs_volume` blocks, as described below, will each create an EBS volume and connect it to the layer's instances.
+* `custom_json` - (Optional) Custom JSON attributes to apply to the layer.
The following extra optional arguments, all lists of Chef recipe names, allow
custom Chef recipes to be applied to layer instances at the five different
diff --git a/website/source/docs/providers/aws/r/opsworks_stack.html.markdown b/website/source/docs/providers/aws/r/opsworks_stack.html.markdown
index 84ee99ad4c0e..784c032f229d 100644
--- a/website/source/docs/providers/aws/r/opsworks_stack.html.markdown
+++ b/website/source/docs/providers/aws/r/opsworks_stack.html.markdown
@@ -59,6 +59,7 @@ The following arguments are supported:
* `use_opsworks_security_groups` - (Optional) Boolean value controlling whether the standard OpsWorks
security groups apply to created instances.
* `vpc_id` - (Optional) The id of the VPC that this stack belongs to.
+* `custom_json` - (Optional) Custom JSON attributes to apply to the entire stack.
The `custom_cookbooks_source` block supports the following arguments:
diff --git a/website/source/docs/providers/aws/r/s3_bucket.html.markdown b/website/source/docs/providers/aws/r/s3_bucket.html.markdown
index 7149ff989ca7..7eeb119d7fa2 100644
--- a/website/source/docs/providers/aws/r/s3_bucket.html.markdown
+++ b/website/source/docs/providers/aws/r/s3_bucket.html.markdown
@@ -97,6 +97,66 @@ resource "aws_s3_bucket" "b" {
}
```
+### Using object lifecycle
+
+```
+resource "aws_s3_bucket" "bucket" {
+ bucket = "my-bucket"
+ acl = "private"
+
+ lifecycle_rule {
+ id = "log"
+ prefix = "log/"
+ enabled = true
+
+ transition {
+ days = 30
+ storage_class = "STANDARD_IA"
+ }
+ transition {
+ days = 60
+ storage_class = "GLACIER"
+ }
+ expiration {
+ days = 90
+ }
+ }
+ lifecycle_rule {
+ id = "log"
+ prefix = "tmp/"
+ enabled = true
+
+ expiration {
+ date = "2016-01-12"
+ }
+ }
+}
+
+resource "aws_s3_bucket" "versioning_bucket" {
+ bucket = "my-versioning-bucket"
+ acl = "private"
+ versioning {
+ enabled = false
+ }
+ lifecycle_rule {
+ prefix = "config/"
+ enabled = true
+
+ noncurrent_version_transition {
+ days = 30
+ storage_class = "STANDARD_IA"
+ }
+ noncurrent_version_transition {
+ days = 60
+ storage_class = "GLACIER"
+ }
+ noncurrent_version_expiration {
+ days = 90
+ }
+ }
+}
+```
+
## Argument Reference
The following arguments are supported:
@@ -111,6 +171,7 @@ The following arguments are supported:
* `cors_rule` - (Optional) A rule of [Cross-Origin Resource Sharing](https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html) (documented below).
* `versioning` - (Optional) A state of [versioning](https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html) (documented below)
* `logging` - (Optional) A settings of [bucket logging](https://docs.aws.amazon.com/AmazonS3/latest/UG/ManagingBucketLogging.html) (documented below).
+* `lifecycle_rule` - (Optional) A configuration of [object lifecycle management](http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html) (documented below).
The `website` object supports the following:
@@ -137,6 +198,40 @@ The `logging` object supports the following:
* `target_bucket` - (Required) The name of the bucket that will receive the log objects.
* `target_prefix` - (Optional) To specify a key prefix for log objects.
+The `lifecycle_rule` object supports the following:
+
+* `id` - (Optional) Unique identifier for the rule.
+* `prefix` - (Required) Object key prefix identifying one or more objects to which the rule applies.
+* `enabled` - (Required) Specifies lifecycle rule status.
+* `abort_incomplete_multipart_upload_days` (Optional) Specifies the number of days after initiating a multipart upload when the multipart upload must be completed.
+* `expiration` - (Optional) Specifies when objects expire (documented below).
+* `transition` - (Optional) Specifies when objects transition to another storage class (documented below).
+* `noncurrent_version_expiration` - (Optional) Specifies when noncurrent object versions expire (documented below).
+* `noncurrent_version_transition` - (Optional) Specifies when noncurrent object versions transition to another storage class (documented below).
+
+At least one of `expiration`, `transition`, `noncurrent_version_expiration`, `noncurrent_version_transition` must be specified.
+
+The `expiration` object supports the following
+
+* `date` (Optional) Specifies the date after which you want the corresponding action to take effect.
+* `days` (Optional) Specifies the number of days after object creation when the specific rule action takes effect.
+* `expired_object_delete_marker` (Optional) On a versioned bucket (versioning-enabled or versioning-suspended bucket), you can add this element in the lifecycle configuration to direct Amazon S3 to delete expired object delete markers.
+
+The `transition` object supports the following
+
+* `date` (Optional) Specifies the date after which you want the corresponding action to take effect.
+* `days` (Optional) Specifies the number of days after object creation when the specific rule action takes effect.
+* `storage_class` (Required) Specifies the Amazon S3 storage class to which you want the object to transition. Can be `STANDARD_IA` or `GLACIER`.
+
+The `noncurrent_version_expiration` object supports the following
+
+* `days` (Required) Specifies the number of days after which noncurrent object versions expire.
+
+The `noncurrent_version_transition` object supports the following
+
+* `days` (Required) Specifies the number of days after which noncurrent object versions transition.
+* `storage_class` (Required) Specifies the Amazon S3 storage class to which you want noncurrent object versions to transition. Can be `STANDARD_IA` or `GLACIER`.
+
## Attributes Reference
The following attributes are exported:
diff --git a/website/source/docs/providers/aws/r/s3_bucket_object.html.markdown b/website/source/docs/providers/aws/r/s3_bucket_object.html.markdown
index 89d9797b6c72..c037aa9c832c 100644
--- a/website/source/docs/providers/aws/r/s3_bucket_object.html.markdown
+++ b/website/source/docs/providers/aws/r/s3_bucket_object.html.markdown
@@ -23,6 +23,27 @@ resource "aws_s3_bucket_object" "object" {
}
```
+### Encrypting with KMS Key
+
+```
+resource "aws_kms_key" "examplekms" {
+ description = "KMS key 1"
+ deletion_window_in_days = 7
+}
+
+resource "aws_s3_bucket" "examplebucket" {
+ bucket = "examplebuckettftest"
+ acl = "private"
+}
+
+resource "aws_s3_bucket_object" "examplebucket_object" {
+ key = "someobject"
+ bucket = "${aws_s3_bucket.examplebucket.bucket}"
+ source = "index.html"
+ kms_key_id = "${aws_kms_key.examplekms.arn}"
+}
+```
+
## Argument Reference
The following arguments are supported:
@@ -31,13 +52,17 @@ The following arguments are supported:
* `key` - (Required) The name of the object once it is in the bucket.
* `source` - (Required) The path to the source file being uploaded to the bucket.
* `content` - (Required unless `source` given) The literal content being uploaded to the bucket.
-* `cache_control` - (Optional) Specifies caching behavior along the request/reply chain Read [w3c cache_control](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9) for futher details.
+* `cache_control` - (Optional) Specifies caching behavior along the request/reply chain Read [w3c cache_control](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9) for further details.
* `content_disposition` - (Optional) Specifies presentational information for the object. Read [wc3 content_disposition](http://www.w3.org/Protocols/rfc2616/rfc2616-sec19.html#sec19.5.1) for further information.
* `content_encoding` - (Optional) Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field. Read [w3c content encoding](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.11) for further information.
* `content_language` - (Optional) The language the content is in e.g. en-US or en-GB.
* `content_type` - (Optional) A standard MIME type describing the format of the object data, e.g. application/octet-stream. All Valid MIME Types are valid for this input.
-* `etag` - (Optional) Used to trigger updates. The only meaningful value is `${md5(file("path/to/file"))}`
-* `kms_key_id` - (Optional) Specifies the AWS KMS key ID to use for object encryption.
+* `etag` - (Optional) Used to trigger updates. The only meaningful value is `${md5(file("path/to/file"))}`.
+This attribute is not compatible with `kms_key_id`.
+* `kms_key_id` - (Optional) Specifies the AWS KMS Key ID to use for object encryption.
+This value is a fully qualified **ARN** of the KMS Key. If using `aws_kms_key`,
+use the exported `arn` attribute:
+ `kms_key_id = "${aws_kms_key.foo.arn}"`
Either `source` or `content` must be provided to specify the bucket content.
These two arguments are mutually-exclusive.
diff --git a/website/source/docs/providers/azurerm/r/virtual_machine.html.markdown b/website/source/docs/providers/azurerm/r/virtual_machine.html.markdown
index 9f45a2005c53..d2b6233bfcf6 100644
--- a/website/source/docs/providers/azurerm/r/virtual_machine.html.markdown
+++ b/website/source/docs/providers/azurerm/r/virtual_machine.html.markdown
@@ -175,7 +175,7 @@ For more information on the different example configurations, please check out t
`os_profile_linux_config` supports the following:
* `disable_password_authentication` - (Required) Specifies whether password authentication should be disabled.
-* `ssh_keys` - (Optional) Specifies a collection of `key_path` and `key_data` to be placed on the virtual machine.
+* `ssh_keys` - (Optional) Specifies a collection of `path` and `key_data` to be placed on the virtual machine.
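+
+A minimal sketch of `ssh_keys` (the `azureuser` account name and the local key
+file path are placeholders):
+
+```
+os_profile_linux_config {
+  disable_password_authentication = true
+
+  ssh_keys {
+    path     = "/home/azureuser/.ssh/authorized_keys"
+    key_data = "${file("~/.ssh/id_rsa.pub")}"
+  }
+}
+```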
`os_profile_secrets` supports the following:
@@ -191,4 +191,4 @@ For more information on the different example configurations, please check out t
The following attributes are exported:
-* `id` - The virtual machine ID.
\ No newline at end of file
+* `id` - The virtual machine ID.
diff --git a/website/source/docs/providers/cloudflare/r/record.html.markdown b/website/source/docs/providers/cloudflare/r/record.html.markdown
index d1f705c41d26..c3ca31448a05 100644
--- a/website/source/docs/providers/cloudflare/r/record.html.markdown
+++ b/website/source/docs/providers/cloudflare/r/record.html.markdown
@@ -33,6 +33,7 @@ The following arguments are supported:
* `type` - (Required) The type of the record
* `ttl` - (Optional) The TTL of the record
* `priority` - (Optional) The priority of the record
+* `proxied` - (Optional) Whether the record gets CloudFlare's origin protection.
## Attributes Reference
@@ -45,4 +46,5 @@ The following attributes are exported:
* `ttl` - The TTL of the record
* `priority` - The priority of the record
* `hostname` - The FQDN of the record
+* `proxied` - Whether the record gets CloudFlare's origin protection.
diff --git a/website/source/docs/providers/cloudstack/r/egress_firewall.html.markdown b/website/source/docs/providers/cloudstack/r/egress_firewall.html.markdown
index acec778d9847..4abd541ae73e 100644
--- a/website/source/docs/providers/cloudstack/r/egress_firewall.html.markdown
+++ b/website/source/docs/providers/cloudstack/r/egress_firewall.html.markdown
@@ -14,7 +14,7 @@ Creates egress firewall rules for a given network.
```
resource "cloudstack_egress_firewall" "default" {
- network = "test-network"
+ network_id = "6eb22f91-7454-4107-89f4-36afcdf33021"
rule {
cidr_list = ["10.0.0.0/8"]
@@ -28,8 +28,11 @@ resource "cloudstack_egress_firewall" "default" {
The following arguments are supported:
-* `network` - (Required) The network for which to create the egress firewall
- rules. Changing this forces a new resource to be created.
+* `network_id` - (Required) The network ID for which to create the egress
+ firewall rules. Changing this forces a new resource to be created.
+
+* `network` - (Required, Deprecated) The network for which to create the egress
+ firewall rules. Changing this forces a new resource to be created.
* `managed` - (Optional) USE WITH CAUTION! If enabled all the egress firewall
rules for this network will be managed by this resource. This means it will
diff --git a/website/source/docs/providers/cloudstack/r/firewall.html.markdown b/website/source/docs/providers/cloudstack/r/firewall.html.markdown
index 4120306f53b5..f5e174aeb23e 100644
--- a/website/source/docs/providers/cloudstack/r/firewall.html.markdown
+++ b/website/source/docs/providers/cloudstack/r/firewall.html.markdown
@@ -14,7 +14,7 @@ Creates firewall rules for a given IP address.
```
resource "cloudstack_firewall" "default" {
- ip_address = "192.168.0.1"
+ ip_address_id = "30b21801-d4b3-4174-852b-0c0f30bdbbfb"
rule {
cidr_list = ["10.0.0.0/8"]
@@ -28,8 +28,8 @@ resource "cloudstack_firewall" "default" {
The following arguments are supported:
-* `ip_address` - (Required) The IP address or ID for which to create the firewall
- rules. Changing this forces a new resource to be created.
+* `ip_address_id` - (Required) The IP address ID for which to create the
+ firewall rules. Changing this forces a new resource to be created.
* `ipaddress` - (Required, Deprecated) The IP address or ID for which to create
the firewall rules. Changing this forces a new resource to be created.
diff --git a/website/source/docs/providers/cloudstack/r/instance.html.markdown b/website/source/docs/providers/cloudstack/r/instance.html.markdown
index 40bbc6d82761..bc8f6a2d54cb 100644
--- a/website/source/docs/providers/cloudstack/r/instance.html.markdown
+++ b/website/source/docs/providers/cloudstack/r/instance.html.markdown
@@ -17,7 +17,7 @@ disk offering, and template.
resource "cloudstack_instance" "web" {
name = "server-1"
service_offering= "small"
- network = "network-1"
+ network_id = "6eb22f91-7454-4107-89f4-36afcdf33021"
template = "CentOS 6.5"
zone = "zone-1"
}
@@ -31,12 +31,17 @@ The following arguments are supported:
* `display_name` - (Optional) The display name of the instance.
+* `group` - (Optional) The group name of the instance.
+
* `service_offering` - (Required) The name or ID of the service offering used
for this instance.
-* `network` - (Optional) The name or ID of the network to connect this instance
+* `network_id` - (Optional) The ID of the network to connect this instance
to. Changing this forces a new resource to be created.
+* `network` - (Optional, Deprecated) The name or ID of the network to connect
+ this instance to. Changing this forces a new resource to be created.
+
* `ip_address` - (Optional) The IP address to assign to this instance. Changing
this forces a new resource to be created.
diff --git a/website/source/docs/providers/cloudstack/r/ipaddress.html.markdown b/website/source/docs/providers/cloudstack/r/ipaddress.html.markdown
index 45315a0f7821..eeed95f9b65c 100644
--- a/website/source/docs/providers/cloudstack/r/ipaddress.html.markdown
+++ b/website/source/docs/providers/cloudstack/r/ipaddress.html.markdown
@@ -14,7 +14,7 @@ Acquires and associates a public IP.
```
resource "cloudstack_ipaddress" "default" {
- network = "test-network"
+ network_id = "6eb22f91-7454-4107-89f4-36afcdf33021"
}
```
@@ -22,16 +22,24 @@ resource "cloudstack_ipaddress" "default" {
The following arguments are supported:
-* `network` - (Optional) The name or ID of the network for which an IP address should
+* `network_id` - (Optional) The ID of the network for which an IP address should
be acquired and associated. Changing this forces a new resource to be created.
-* `vpc` - (Optional) The name or ID of the VPC for which an IP address should
- be acquired and associated. Changing this forces a new resource to be created.
+* `network` - (Optional, Deprecated) The name or ID of the network for which an IP
+ address should be acquired and associated. Changing this forces a new resource
+ to be created.
+
+* `vpc_id` - (Optional) The ID of the VPC for which an IP address should be
+ acquired and associated. Changing this forces a new resource to be created.
+
+* `vpc` - (Optional, Deprecated) The name or ID of the VPC for which an IP address
+ should be acquired and associated. Changing this forces a new resource to be
+ created.
* `project` - (Optional) The name or ID of the project to deploy this
instance to. Changing this forces a new resource to be created.
-*NOTE: Either `network` or `vpc` should have a value!*
+*NOTE: Either `network_id` or `vpc_id` should have a value!*
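+
+For the VPC case, the ID can come from a managed VPC resource, for example (a
+sketch, assuming a `cloudstack_vpc.default` resource is defined elsewhere in
+the configuration):
+
+```
+resource "cloudstack_ipaddress" "vpc" {
+  vpc_id = "${cloudstack_vpc.default.id}"
+}
+```
+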
## Attributes Reference
diff --git a/website/source/docs/providers/cloudstack/r/loadbalancer_rule.html.markdown b/website/source/docs/providers/cloudstack/r/loadbalancer_rule.html.markdown
index eb374096bc6f..65a252a2d9fb 100644
--- a/website/source/docs/providers/cloudstack/r/loadbalancer_rule.html.markdown
+++ b/website/source/docs/providers/cloudstack/r/loadbalancer_rule.html.markdown
@@ -16,11 +16,11 @@ Creates a loadbalancer rule.
resource "cloudstack_loadbalancer_rule" "default" {
name = "loadbalancer-rule-1"
description = "Loadbalancer rule 1"
- ip_address = "192.168.0.1"
+ ip_address_id = "30b21801-d4b3-4174-852b-0c0f30bdbbfb"
algorithm = "roundrobin"
private_port = 80
public_port = 80
- members = ["server-1", "server-2"]
+ member_ids = ["f8141e2f-4e7e-4c63-9362-986c908b7ea7"]
}
```
@@ -33,16 +33,20 @@ The following arguments are supported:
* `description` - (Optional) The description of the load balancer rule.
-* `ip_address` - (Required) Public ip address from where the network traffic
- will be load balanced from. Changing this forces a new resource to be
- created.
+* `ip_address_id` - (Required) Public IP address ID from where the network
+ traffic will be load balanced from. Changing this forces a new resource
+ to be created.
-* `ipaddress` - (Required, Deprecated) Public ip address from where the
+* `ipaddress` - (Required, Deprecated) Public IP address from where the
network traffic will be load balanced from. Changing this forces a new
resource to be created.
-* `network` - (Optional) The guest network this rule will be created for.
- Required when public IP address is not associated with any Guest network
+* `network_id` - (Optional) The network ID this rule will be created for.
+ Required when public IP address is not associated with any network yet
+ (VPC case).
+
+* `network` - (Optional, Deprecated) The network this rule will be created
+ for. Required when public IP address is not associated with any network
yet (VPC case).
* `algorithm` - (Required) Load balancer rule algorithm (source, roundrobin,
@@ -56,8 +60,11 @@ The following arguments are supported:
will be load balanced from. Changing this forces a new resource to be
created.
-* `members` - (Required) List of instances to assign to the load balancer rule.
- Changing this forces a new resource to be created.
+* `member_ids` - (Required) List of instance IDs to assign to the load balancer
+ rule. Changing this forces a new resource to be created.
+
+* `members` - (Required, Deprecated) List of instances to assign to the load
+ balancer rule. Changing this forces a new resource to be created.
## Attributes Reference
diff --git a/website/source/docs/providers/cloudstack/r/network.html.markdown b/website/source/docs/providers/cloudstack/r/network.html.markdown
index cf7b1ae67bd9..3f00f2ee584f 100644
--- a/website/source/docs/providers/cloudstack/r/network.html.markdown
+++ b/website/source/docs/providers/cloudstack/r/network.html.markdown
@@ -50,11 +50,17 @@ The following arguments are supported:
required by the Network Offering if specifyVlan=true is set. Only the ROOT
admin can set this value.
-* `vpc` - (Optional) The name or ID of the VPC to create this network for. Changing
+* `vpc_id` - (Optional) The ID of the VPC to create this network for. Changing
this forces a new resource to be created.
-* `aclid` - (Optional) The ID of a network ACL that should be attached to the
- network. Changing this forces a new resource to be created.
+* `vpc` - (Optional, Deprecated) The name or ID of the VPC to create this network
+ for. Changing this forces a new resource to be created.
+
+* `acl_id` - (Optional) The network ACL ID that should be attached to the network.
+ Changing this forces a new resource to be created.
+
+* `aclid` - (Optional, Deprecated) The ID of a network ACL that should be attached
+ to the network. Changing this forces a new resource to be created.
* `project` - (Optional) The name or ID of the project to deploy this
instance to. Changing this forces a new resource to be created.
diff --git a/website/source/docs/providers/cloudstack/r/network_acl.html.markdown b/website/source/docs/providers/cloudstack/r/network_acl.html.markdown
index 0001cbffb755..c8d5e433ab79 100644
--- a/website/source/docs/providers/cloudstack/r/network_acl.html.markdown
+++ b/website/source/docs/providers/cloudstack/r/network_acl.html.markdown
@@ -15,7 +15,7 @@ Creates a Network ACL for the given VPC.
```
resource "cloudstack_network_acl" "default" {
name = "test-acl"
- vpc = "vpc-1"
+ vpc_id = "76f6e8dc-07e3-4971-b2a2-8831b0cc4cb4"
}
```
@@ -25,10 +25,15 @@ The following arguments are supported:
* `name` - (Required) The name of the ACL. Changing this forces a new resource
to be created.
+
* `description` - (Optional) The description of the ACL. Changing this forces a
new resource to be created.
-* `vpc` - (Required) The name or ID of the VPC to create this ACL for. Changing
- this forces a new resource to be created.
+
+* `vpc_id` - (Required) The ID of the VPC to create this ACL for. Changing this
+ forces a new resource to be created.
+
+* `vpc` - (Required, Deprecated) The name or ID of the VPC to create this ACL
+ for. Changing this forces a new resource to be created.
## Attributes Reference
diff --git a/website/source/docs/providers/cloudstack/r/network_acl_rule.html.markdown b/website/source/docs/providers/cloudstack/r/network_acl_rule.html.markdown
index 267eca346558..4b0ebaa9dfaf 100644
--- a/website/source/docs/providers/cloudstack/r/network_acl_rule.html.markdown
+++ b/website/source/docs/providers/cloudstack/r/network_acl_rule.html.markdown
@@ -14,7 +14,7 @@ Creates network ACL rules for a given network ACL.
```
resource "cloudstack_network_acl_rule" "default" {
- aclid = "f3843ce0-334c-4586-bbd3-0c2e2bc946c6"
+ acl_id = "f3843ce0-334c-4586-bbd3-0c2e2bc946c6"
rule {
action = "allow"
@@ -30,9 +30,12 @@ resource "cloudstack_network_acl_rule" "default" {
The following arguments are supported:
-* `aclid` - (Required) The network ACL ID for which to create the rules.
+* `acl_id` - (Required) The network ACL ID for which to create the rules.
Changing this forces a new resource to be created.
+* `aclid` - (Required, Deprecated) The network ACL ID for which to create
+ the rules. Changing this forces a new resource to be created.
+
* `managed` - (Optional) USE WITH CAUTION! If enabled all the firewall rules for
this network ACL will be managed by this resource. This means it will delete
all firewall rules that are not in your config! (defaults false)
diff --git a/website/source/docs/providers/cloudstack/r/nic.html.markdown b/website/source/docs/providers/cloudstack/r/nic.html.markdown
index 38aacd87d48c..597b40f6cd60 100644
--- a/website/source/docs/providers/cloudstack/r/nic.html.markdown
+++ b/website/source/docs/providers/cloudstack/r/nic.html.markdown
@@ -16,9 +16,9 @@ Basic usage:
```
resource "cloudstack_nic" "test" {
- network = "network-2"
+ network_id = "6eb22f91-7454-4107-89f4-36afcdf33021"
ip_address = "192.168.1.1"
- virtual_machine = "server-1"
+ virtual_machine_id = "f8141e2f-4e7e-4c63-9362-986c908b7ea7"
}
```
@@ -26,17 +26,24 @@ resource "cloudstack_nic" "test" {
The following arguments are supported:
-* `network` - (Required) The name or ID of the network to plug the NIC into. Changing
+* `network_id` - (Required) The ID of the network to plug the NIC into. Changing
this forces a new resource to be created.
+* `network` - (Required, Deprecated) The name or ID of the network to plug the
+ NIC into. Changing this forces a new resource to be created.
+
* `ip_address` - (Optional) The IP address to assign to the NIC. Changing this
forces a new resource to be created.
-* `ipaddress` - (Optional, Deprecated) The IP address to assign to the NIC. Changing
- this forces a new resource to be created.
+* `ipaddress` - (Optional, Deprecated) The IP address to assign to the NIC.
+ Changing this forces a new resource to be created.
+
+* `virtual_machine_id` - (Required) The ID of the virtual machine to which to
+ attach the NIC. Changing this forces a new resource to be created.
-* `virtual_machine` - (Required) The name or ID of the virtual machine to which
- to attach the NIC. Changing this forces a new resource to be created.
+* `virtual_machine` - (Required, Deprecated) The name or ID of the virtual
+ machine to which to attach the NIC. Changing this forces a new resource to
+ be created.
## Attributes Reference
diff --git a/website/source/docs/providers/cloudstack/r/port_forward.html.markdown b/website/source/docs/providers/cloudstack/r/port_forward.html.markdown
index 41e3b0b39f72..19e7d4ab6de5 100644
--- a/website/source/docs/providers/cloudstack/r/port_forward.html.markdown
+++ b/website/source/docs/providers/cloudstack/r/port_forward.html.markdown
@@ -14,13 +14,13 @@ Creates port forwards.
```
resource "cloudstack_port_forward" "default" {
- ip_address = "192.168.0.1"
+ ip_address_id = "30b21801-d4b3-4174-852b-0c0f30bdbbfb"
forward {
protocol = "tcp"
private_port = 80
public_port = 8080
- virtual_machine = "server-1"
+ virtual_machine_id = "f8141e2f-4e7e-4c63-9362-986c908b7ea7"
}
}
```
@@ -29,8 +29,8 @@ resource "cloudstack_port_forward" "default" {
The following arguments are supported:
-* `ip_address` - (Required) The IP address for which to create the port forwards.
- Changing this forces a new resource to be created.
+* `ip_address_id` - (Required) The IP address ID for which to create the port
+ forwards. Changing this forces a new resource to be created.
* `ipaddress` - (Required, Deprecated) The IP address for which to create the port
forwards. Changing this forces a new resource to be created.
@@ -51,7 +51,10 @@ The `forward` block supports:
* `public_port` - (Required) The public port to forward from.
-* `virtual_machine` - (Required) The name or ID of the virtual machine to forward to.
+* `virtual_machine_id` - (Required) The ID of the virtual machine to forward to.
+
+* `virtual_machine` - (Required, Deprecated) The name or ID of the virtual
+ machine to forward to.
## Attributes Reference
diff --git a/website/source/docs/providers/cloudstack/r/secondary_ipaddress.html.markdown b/website/source/docs/providers/cloudstack/r/secondary_ipaddress.html.markdown
index 6907796f5603..85d27b01c323 100644
--- a/website/source/docs/providers/cloudstack/r/secondary_ipaddress.html.markdown
+++ b/website/source/docs/providers/cloudstack/r/secondary_ipaddress.html.markdown
@@ -14,7 +14,7 @@ Assigns a secondary IP to a NIC.
```
resource "cloudstack_secondary_ipaddress" "default" {
- virtual_machine = "server-1"
+ virtual_machine_id = "f8141e2f-4e7e-4c63-9362-986c908b7ea7"
}
```
@@ -23,20 +23,28 @@ resource "cloudstack_secondary_ipaddress" "default" {
The following arguments are supported:
* `ip_address` - (Optional) The IP address to attach to the NIC. If not supplied
- an IP address will be selected randomly. Changing this forces a new resource
- to be created.
+ an IP address will be selected randomly. Changing this forces a new resource
+ to be created.
* `ipaddress` - (Optional, Deprecated) The IP address to attach to the NIC. If
not supplied an IP address will be selected randomly. Changing this forces
a new resource to be created.
-* `nicid` - (Optional) The ID of the NIC to which you want to attach the
- secondary IP address. Changing this forces a new resource to be
- created (defaults to the ID of the primary NIC)
+* `nic_id` - (Optional) The NIC ID to which you want to attach the secondary IP
+ address. Changing this forces a new resource to be created (defaults to the
+ ID of the primary NIC)
-* `virtual_machine` - (Required) The name or ID of the virtual machine to which
- you want to attach the secondary IP address. Changing this forces a new
- resource to be created.
+* `nicid` - (Optional, Deprecated) The ID of the NIC to which you want to attach
+ the secondary IP address. Changing this forces a new resource to be created
+ (defaults to the ID of the primary NIC)
+
+* `virtual_machine_id` - (Required) The ID of the virtual machine to which you
+ want to attach the secondary IP address. Changing this forces a new resource
+ to be created.
+
+* `virtual_machine` - (Required, Deprecated) The name or ID of the virtual
+ machine to which you want to attach the secondary IP address. Changing this
+ forces a new resource to be created.
## Attributes Reference
diff --git a/website/source/docs/providers/cloudstack/r/static_nat.html.markdown b/website/source/docs/providers/cloudstack/r/static_nat.html.markdown
index f899309a092c..2f7caf1ab5c3 100644
--- a/website/source/docs/providers/cloudstack/r/static_nat.html.markdown
+++ b/website/source/docs/providers/cloudstack/r/static_nat.html.markdown
@@ -14,8 +14,8 @@ Enables static NAT for a given IP address
```
resource "cloudstack_static_nat" "default" {
- ipaddress = "192.168.0.1"
- virtual_machine = "server-1"
+ ip_address_id = "f8141e2f-4e7e-4c63-9362-986c908b7ea7"
+ virtual_machine_id = "6ca2a163-bc68-429c-adc8-ab4a620b1bb3"
}
```
@@ -23,18 +23,16 @@ resource "cloudstack_static_nat" "default" {
The following arguments are supported:
-* `ipaddress` - (Required) The name or ID of the public IP address for which
- static NAT will be enabled. Changing this forces a new resource to be
- created.
+* `ip_address_id` - (Required) The public IP address ID for which static
+ NAT will be enabled. Changing this forces a new resource to be created.
-* `network` - (Optional) The name or ID of the network of the VM the static
- NAT will be enabled for. Required when public IP address is not
- associated with any guest network yet (VPC case). Changing this forces
- a new resource to be created.
+* `network_id` - (Optional) The network ID of the VM the static NAT will be
+ enabled for. Required when public IP address is not associated with any
+ guest network yet (VPC case). Changing this forces a new resource to be
+ created.
-* `virtual_machine` - (Required) The name or ID of the virtual machine to
- enable the static NAT feature for. Changing this forces a new resource
- to be created.
+* `virtual_machine_id` - (Required) The virtual machine ID to enable the
+ static NAT feature for. Changing this forces a new resource to be created.
* `vm_guest_ip` - (Optional) The virtual machine IP address for the port
forwarding rule (useful when the virtual machine has a secondary NIC).
diff --git a/website/source/docs/providers/cloudstack/r/template.html.markdown b/website/source/docs/providers/cloudstack/r/template.html.markdown
index b31c24db026e..99525395d05a 100644
--- a/website/source/docs/providers/cloudstack/r/template.html.markdown
+++ b/website/source/docs/providers/cloudstack/r/template.html.markdown
@@ -31,8 +31,8 @@ The following arguments are supported:
* `display_text` - (Optional) The display name of the template.
-* `format` - (Required) The format of the template. Valid values are "QCOW2",
- "RAW", and "VHD".
+* `format` - (Required) The format of the template. Valid values are `QCOW2`,
+ `RAW`, and `VHD`.
* `hypervisor` - (Required) The target hypervisor for the template. Changing
this forces a new resource to be created.
@@ -43,11 +43,14 @@ The following arguments are supported:
* `url` - (Required) The URL of where the template is hosted. Changing this
forces a new resource to be created.
+* `project` - (Optional) The name or ID of the project to create this template for.
+ Changing this forces a new resource to be created.
+
* `zone` - (Required) The name or ID of the zone where this template will be created.
Changing this forces a new resource to be created.
* `is_dynamically_scalable` - (Optional) Set to indicate if the template contains
- tools to support dynamic scaling of VM cpu/memory.
+ tools to support dynamic scaling of VM cpu/memory (defaults false)
* `is_extractable` - (Optional) Set to indicate if the template is extractable
(defaults false)
diff --git a/website/source/docs/providers/cloudstack/r/vpn_connection.html.markdown b/website/source/docs/providers/cloudstack/r/vpn_connection.html.markdown
index 3ecf17cbca65..355fcdb086c8 100644
--- a/website/source/docs/providers/cloudstack/r/vpn_connection.html.markdown
+++ b/website/source/docs/providers/cloudstack/r/vpn_connection.html.markdown
@@ -16,8 +16,8 @@ Basic usage:
```
resource "cloudstack_vpn_connection" "default" {
- customergatewayid = "xxx"
- vpngatewayid = "xxx"
+ customer_gateway_id = "8dab9381-ae73-48b8-9a3d-c460933ef5f7"
+ vpn_gateway_id = "a7900060-f8a8-44eb-be15-ea54cf499703"
}
```
@@ -25,10 +25,16 @@ resource "cloudstack_vpn_connection" "default" {
The following arguments are supported:
-* `customergatewayid` - (Required) The Customer Gateway ID to connect.
+* `customer_gateway_id` - (Required) The Customer Gateway ID to connect.
Changing this forces a new resource to be created.
-* `vpngatewayid` - (Required) The VPN Gateway ID to connect.
+* `customergatewayid` - (Required, Deprecated) The Customer Gateway ID
+ to connect. Changing this forces a new resource to be created.
+
+* `vpn_gateway_id` - (Required) The VPN Gateway ID to connect. Changing
+ this forces a new resource to be created.
+
+* `vpngatewayid` - (Required, Deprecated) The VPN Gateway ID to connect.
Changing this forces a new resource to be created.
## Attributes Reference
diff --git a/website/source/docs/providers/cloudstack/r/vpn_gateway.html.markdown b/website/source/docs/providers/cloudstack/r/vpn_gateway.html.markdown
index 5bf9cf389cc4..1c74bf1a14d7 100644
--- a/website/source/docs/providers/cloudstack/r/vpn_gateway.html.markdown
+++ b/website/source/docs/providers/cloudstack/r/vpn_gateway.html.markdown
@@ -16,7 +16,7 @@ Basic usage:
```
resource "cloudstack_vpn_gateway" "default" {
- vpc = "test-vpc"
+ vpc_id = "f8141e2f-4e7e-4c63-9362-986c908b7ea7"
}
```
@@ -24,9 +24,12 @@ resource "cloudstack_vpn_gateway" "default" {
The following arguments are supported:
-* `vpc` - (Required) The name or ID of the VPC for which to create the VPN Gateway.
+* `vpc_id` - (Required) The ID of the VPC for which to create the VPN Gateway.
Changing this forces a new resource to be created.
+* `vpc` - (Required, Deprecated) The name or ID of the VPC for which to create
+ the VPN Gateway. Changing this forces a new resource to be created.
+
## Attributes Reference
The following attributes are exported:
diff --git a/website/source/docs/providers/cobbler/index.html.markdown b/website/source/docs/providers/cobbler/index.html.markdown
new file mode 100644
index 000000000000..b4193aa6ae29
--- /dev/null
+++ b/website/source/docs/providers/cobbler/index.html.markdown
@@ -0,0 +1,45 @@
+---
+layout: "cobbler"
+page_title: "Provider: Cobbler"
+sidebar_current: "docs-cobbler-index"
+description: |-
+ The Cobbler provider is used to interact with a locally installed,
+ Cobbler service.
+---
+
+# Cobbler Provider
+
+The Cobbler provider is used to interact with a locally installed
+[Cobbler](http://cobbler.github.io) service. The provider needs
+to be configured with the proper credentials before it can be used.
+
+Use the navigation to the left to read about the available resources.
+
+## Example Usage
+
+```
+# Configure the Cobbler provider
+provider "cobbler" {
+ username = "${var.cobbler_username}"
+ password = "${var.cobbler_password}"
+ url = "${var.cobbler_url}"
+}
+
+# Create a Cobbler Distro
+resource "cobbler_distro" "ubuntu-1404-x86_64" {
+ ...
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `username` - (Required) The username to the Cobbler service. This can
+ also be specified with the `COBBLER_USERNAME` shell environment variable.
+
+* `password` - (Required) The password to the Cobbler service. This can
+ also be specified with the `COBBLER_PASSWORD` shell environment variable.
+
+* `url` - (Required) The URL of the Cobbler service. This can
+ also be specified with the `COBBLER_URL` shell environment variable.
diff --git a/website/source/docs/providers/cobbler/r/distro.html.markdown b/website/source/docs/providers/cobbler/r/distro.html.markdown
new file mode 100644
index 000000000000..faf1663783cc
--- /dev/null
+++ b/website/source/docs/providers/cobbler/r/distro.html.markdown
@@ -0,0 +1,84 @@
+---
+layout: "cobbler"
+page_title: "Cobbler: cobbler_distro"
+sidebar_current: "docs-cobbler-resource-distro"
+description: |-
+ Manages a distribution within Cobbler.
+---
+
+# cobbler\_distro
+
+Manages a distribution within Cobbler.
+
+## Example Usage
+
+```
+resource "cobbler_distro" "ubuntu-1404-x86_64" {
+ name = "foo"
+ breed = "ubuntu"
+ os_version = "trusty"
+ arch = "x86_64"
+ kernel = "/var/www/cobbler/ks_mirror/Ubuntu-14.04/install/netboot/ubuntu-installer/amd64/linux"
+ initrd = "/var/www/cobbler/ks_mirror/Ubuntu-14.04/install/netboot/ubuntu-installer/amd64/initrd.gz"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `arch` - (Required) The architecture of the distro. Valid options
+ are: i386, x86_64, ia64, ppc, ppc64, s390, arm.
+
+* `breed` - (Required) The "breed" of distribution. Valid options
+ are: redhat, fedora, centos, scientific linux, suse, debian, and
+ ubuntu. These choices may vary depending on the version of Cobbler
+ in use.
+
+* `boot_files` - (Optional) Files copied into tftpboot beyond the
+ kernel/initrd.
+
+* `comment` - (Optional) Free form text description.
+
+* `fetchable_files` - (Optional) Templates for tftp or wget.
+
+* `kernel` - (Required) Absolute path to kernel on filesystem. This
+ must already exist prior to creating the distro.
+
+* `kernel_options` - (Optional) Kernel options to use with the
+ kernel.
+
+* `kernel_options_post` - (Optional) Post install Kernel options to
+ use with the kernel after installation.
+
+* `initrd` - (Required) Absolute path to initrd on filesystem. This
+ must already exist prior to creating the distro.
+
+* `mgmt_classes` - (Optional) Management classes for external config
+ management.
+
+* `name` - (Required) A name for the distro.
+
+* `os_version` - (Required) The version of the distro you are
+ creating. This varies with the version of Cobbler you are using.
+ An updated signature list may need to be obtained in order to
+ support a newer version. Example: `trusty`.
+
+* `owners` - (Optional) Owners list for authz_ownership.
+
+* `redhat_management_key` - (Optional) Red Hat Management key.
+
+* `redhat_management_server` - (Optional) Red Hat Management server.
+
+* `template_files` - (Optional) File mappings for built-in config
+ management.
+
+## Attributes Reference
+
+All of the above Optional attributes are also exported.
+
+## Notes
+
+The path to the `kernel` and `initrd` files must exist before
+creating a Distro. Usually this involves running `cobbler import ...`
+prior to creating the Distro.
diff --git a/website/source/docs/providers/cobbler/r/kickstart_file.html.markdown b/website/source/docs/providers/cobbler/r/kickstart_file.html.markdown
new file mode 100644
index 000000000000..8ae230f72c38
--- /dev/null
+++ b/website/source/docs/providers/cobbler/r/kickstart_file.html.markdown
@@ -0,0 +1,29 @@
+---
+layout: "cobbler"
+page_title: "Cobbler: cobbler_kickstart_file"
+sidebar_current: "docs-cobbler-resource-kickstart_file"
+description: |-
+ Manages a Kickstart File within Cobbler.
+---
+
+# cobbler\_kickstart\_file
+
+Manages a Kickstart File within Cobbler.
+
+## Example Usage
+
+```
+resource "cobbler_kickstart_file" "my_kickstart" {
+ name = "/var/lib/cobbler/kickstarts/my_kickstart.ks"
+ body = ""
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `body` - (Required) The body of the kickstart file.
+
+* `name` - (Required) The name of the kickstart file. This must be
+ the full path, including `/var/lib/cobbler/kickstarts`.
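+
+Since kickstart bodies are usually long, it can be convenient to read the body
+from a local file with the `file()` interpolation function (the paths below are
+assumptions for illustration):
+
+```
+resource "cobbler_kickstart_file" "my_kickstart" {
+  name = "/var/lib/cobbler/kickstarts/my_kickstart.ks"
+  body = "${file("kickstarts/my_kickstart.ks")}"
+}
+```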
diff --git a/website/source/docs/providers/cobbler/r/profile.html.markdown b/website/source/docs/providers/cobbler/r/profile.html.markdown
new file mode 100644
index 000000000000..7334b3e030c6
--- /dev/null
+++ b/website/source/docs/providers/cobbler/r/profile.html.markdown
@@ -0,0 +1,92 @@
+---
+layout: "cobbler"
+page_title: "Cobbler: cobbler_profile"
+sidebar_current: "docs-cobbler-resource-profile"
+description: |-
+ Manages a Profile within Cobbler.
+---
+
+# cobbler\_profile
+
+Manages a Profile within Cobbler.
+
+## Example Usage
+
+```
+resource "cobbler_profile" "my_profile" {
+ name = "my_profile"
+ distro = "ubuntu-1404-x86_64"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `boot_files` - (Optional) Files copied into tftpboot beyond the
+ kernel/initrd.
+
+* `comment` - (Optional) Free form text description.
+
+* `distro` - (Optional) Parent distribution.
+
+* `enable_gpxe` - (Optional) Use gPXE instead of PXELINUX for
+ advanced booting options.
+
+* `enable_menu` - (Optional) Enable a boot menu.
+
+* `fetchable_files` - (Optional) Templates for tftp or wget.
+
+* `kernel_options` - (Optional) Kernel options for the profile.
+
+* `kernel_options_post` - (Optional) Post install kernel options.
+
+* `kickstart` - (Optional) The kickstart file to use.
+
+* `ks_meta` - (Optional) Kickstart metadata.
+
+* `mgmt_classes` - (Optional) For external configuration management.
+
+* `mgmt_parameters` - (Optional) Parameters which will be handed to
+ your management application (Must be a valid YAML dictionary).
+
+* `name_servers_search` - (Optional) Name server search settings.
+
+* `name_servers` - (Optional) Name servers.
+
+* `name` - (Required) The name of the profile.
+
+* `owners` - (Optional) Owners list for authz_ownership.
+
+* `proxy` - (Optional) Proxy URL.
+
+* `redhat_management_key` - (Optional) Red Hat Management Key.
+
+* `redhat_management_server` - (Optional) RedHat Management Server.
+
+* `repos` - (Optional) Repos to auto-assign to this profile.
+
+* `template_files` - (Optional) File mappings for built-in config
+ management.
+
+* `template_remote_kickstarts` - (Optional) remote kickstart
+ templates.
+
+* `virt_auto_boot` - (Optional) Auto boot virtual machines.
+
+* `virt_bridge` - (Optional) The bridge for virtual machines.
+
+* `virt_cpus` - (Optional) The number of virtual CPUs.
+
+* `virt_file_size` - (Optional) The virtual machine file size.
+
+* `virt_path` - (Optional) The virtual machine path.
+
+* `virt_ram` - (Optional) The amount of RAM for the virtual machine.
+
+* `virt_type` - (Optional) The type of virtual machine. Valid options
+ are: xenpv, xenfv, qemu, kvm, vmware, openvz.
+
+## Attributes Reference
+
+All of the above Optional attributes are also exported.
diff --git a/website/source/docs/providers/cobbler/r/snippet.html.markdown b/website/source/docs/providers/cobbler/r/snippet.html.markdown
new file mode 100644
index 000000000000..f239e7fd6c11
--- /dev/null
+++ b/website/source/docs/providers/cobbler/r/snippet.html.markdown
@@ -0,0 +1,29 @@
+---
+layout: "cobbler"
+page_title: "Cobbler: cobbler_snippet"
+sidebar_current: "docs-cobbler-resource-snippet"
+description: |-
+ Manages a Snippet within Cobbler.
+---
+
+# cobbler\_snippet
+
+Manages a Snippet within Cobbler.
+
+## Example Usage
+
+```
+resource "cobbler_snippet" "my_snippet" {
+ name = "/var/lib/cobbler/snippets/my_snippet"
+ body = ""
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `body` - (Required) The body of the snippet.
+
+* `name` - (Required) The name of the snippet. This must be the full
+ path, including `/var/lib/cobbler/snippets`.
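+
+Snippet bodies can also be written inline using a heredoc, for example (a
+sketch with an assumed snippet):
+
+```
+resource "cobbler_snippet" "motd" {
+  name = "/var/lib/cobbler/snippets/motd"
+  body = <<EOF
+echo "Provisioned by Cobbler" > /etc/motd
+EOF
+}
+```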
diff --git a/website/source/docs/providers/cobbler/r/system.html.markdown b/website/source/docs/providers/cobbler/r/system.html.markdown
new file mode 100644
index 000000000000..29b3c368d2dc
--- /dev/null
+++ b/website/source/docs/providers/cobbler/r/system.html.markdown
@@ -0,0 +1,189 @@
+---
+layout: "cobbler"
+page_title: "Cobbler: cobbler_system"
+sidebar_current: "docs-cobbler-resource-system"
+description: |-
+ Manages a System within Cobbler.
+---
+
+# cobbler\_system
+
+Manages a System within Cobbler.
+
+## Example Usage
+
+```
+resource "cobbler_system" "my_system" {
+ name = "my_system"
+ profile = "${cobbler_profile.my_profile.name}"
+ name_servers = ["8.8.8.8", "8.8.4.4"]
+ comment = "I'm a system"
+
+ interface {
+ name = "eth0"
+ mac_address = "aa:bb:cc:dd:ee:ff"
+ static = true
+ ip_address = "1.2.3.4"
+ netmask = "255.255.255.0"
+ }
+
+ interface {
+ name = "eth1"
+ mac_address = "aa:bb:cc:dd:ee:fa"
+ static = true
+ ip_address = "1.2.3.5"
+ netmask = "255.255.255.0"
+ }
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `boot_files` - (Optional) TFTP boot files copied into tftpboot.
+
+* `comment` - (Optional) Free form text description
+
+* `enable_gpxe` - (Optional) Use gPXE instead of PXELINUX.
+
+* `fetchable_files` - (Optional) Templates for tftp or wget.
+
+* `gateway` - (Optional) Network gateway.
+
+* `hostname` - (Optional) Hostname of the system.
+
+* `image` - (Optional) Parent image (if no profile is used).
+
+* `interface` - (Optional) One or more `interface` blocks, as documented below.
+
+* `ipv6_default_device` - (Optional) IPv6 default device.
+
+* `kernel_options` - (Optional) Kernel options.
+ ex: selinux=permissive.
+
+* `kernel_options_post` - (Optional) Kernel options (post install).
+
+* `kickstart` - (Optional) Path to kickstart template.
+
+* `ks_meta` - (Optional) Kickstart metadata.
+
+* `ldap_enabled` - (Optional) Configure LDAP at next config update.
+
+* `ldap_type` - (Optional) LDAP management type.
+
+* `mgmt_classes` - (Optional) Management classes for external config
+ management.
+
+* `mgmt_parameters` - (Optional) Parameters which will be handed to
+ your management application. Must be a valid YAML dictionary.
+
+* `monit_enabled` - (Optional) Configure monit on this machine at
+ next config update.
+
+* `name_servers_search` - (Optional) Name servers search path.
+
+* `name_servers` - (Optional) Name servers.
+
+* `name` - (Required) The name of the system.
+
+* `netboot_enabled` - (Optional) (re)Install this machine at next
+ boot.
+
+* `owners` - (Optional) Owners list for authz_ownership.
+
+* `power_address` - (Optional) Power management address.
+
+* `power_id` - (Optional) Usually a plug number or blade name if
+ power type requires it.
+
+* `power_pass` - (Optional) Power management password.
+
+* `power_type` - (Optional) Power management type.
+
+* `power_user` - (Optional) Power management user.
+
+* `profile` - (Required) Parent profile.
+
+* `proxy` - (Optional) Proxy URL.
+
+* `redhat_management_key` - (Optional) Red Hat management key.
+
+* `redhat_management_server` - (Optional) Red Hat management server.
+
+* `status` - (Optional) System status (development, testing,
+ acceptance, production).
+
+* `template_files` - (Optional) File mappings for built-in
+ configuration management.
+
+* `template_remote_kickstarts` - (Optional) template remote
+ kickstarts.
+
+* `virt_auto_boot` - (Optional) Auto boot the VM.
+
+* `virt_cpus` - (Optional) Number of virtual CPUs in the VM.
+
+* `virt_disk_driver` - (Optional) The on-disk format for the
+ virtualization disk.
+
+* `virt_file_size` - (Optional) Virt file size.
+
+* `virt_path` - (Optional) Path to the VM.
+
+* `virt_pxe_boot` - (Optional) Use PXE to build this VM?
+
+* `virt_ram` - (Optional) The amount of RAM for the VM.
+
+* `virt_type` - (Optional) Virtualization technology to use: xenpv,
+ xenfv, qemu, kvm, vmware, openvz.
+
+The `interface` block supports:
+
+* `name` - (Required) The device name of the interface. ex: eth0.
+
+* `cnames` - (Optional) Canonical name records.
+
+* `dhcp_tag` - (Optional) DHCP tag.
+
+* `dns_name` - (Optional) DNS name.
+
+* `bonding_opts` - (Optional) Options for bonded interfaces.
+
+* `bridge_opts` - (Optional) Options for bridge interfaces.
+
+* `gateway` - (Optional) Per-interface gateway.
+
+* `interface_type` - (Optional) The type of interface: na, master,
+ slave, bond, bond_slave, bridge, bridge_slave, bonded_bridge_slave.
+
+* `interface_master` - (Optional) The master interface when slave.
+
+* `ip_address` - (Optional) The IP address of the interface.
+
+* `ipv6_address` - (Optional) The IPv6 address of the interface.
+
+* `ipv6_mtu` - (Optional) The MTU of the IPv6 address.
+
+* `ipv6_static_routes` - (Optional) Static routes for the IPv6
+ interface.
+
+* `ipv6_default_gateway` - (Optional) The default gateway for the
+ IPv6 address / interface.
+
+* `mac_address` - (Optional) The MAC address of the interface.
+
+* `management` - (Optional) Whether this interface is a management
+ interface.
+
+* `netmask` - (Optional) The IPv4 netmask of the interface.
+
+* `static` - (Optional) Whether the interface should be static or
+ DHCP.
+
+* `static_routes` - (Optional) Static routes for the interface.
+
+* `virt_bridge` - (Optional) The virtual bridge to attach to.
+
+## Attributes Reference
+
+All optional attributes listed above are also exported.
diff --git a/website/source/docs/providers/datadog/r/monitor.html.markdown b/website/source/docs/providers/datadog/r/monitor.html.markdown
index 6d5a8fbe0a12..e14a36fc0761 100644
--- a/website/source/docs/providers/datadog/r/monitor.html.markdown
+++ b/website/source/docs/providers/datadog/r/monitor.html.markdown
@@ -61,7 +61,7 @@ The following arguments are supported:
* `warning`
* `critical`
* `notify_no_data` (Optional) A boolean indicating whether this monitor will notify when data stops reporting. Defaults
- to false.
+ to true.
* `no_data_timeframe` (Optional) The number of minutes before a monitor will notify when data stops reporting. Must be at
least 2x the monitor timeframe for metric alerts or 2 minutes for service checks. Default: 2x timeframe for
metric alerts, 2 minutes for service checks.
diff --git a/website/source/docs/providers/docker/r/image.html.markdown b/website/source/docs/providers/docker/r/image.html.markdown
index 584f373af9a9..d6c0dcf56508 100644
--- a/website/source/docs/providers/docker/r/image.html.markdown
+++ b/website/source/docs/providers/docker/r/image.html.markdown
@@ -33,6 +33,9 @@ The following arguments are supported:
always be updated on the host to the latest. If this is false, as long as an
image is downloaded with the correct tag, it won't be redownloaded if
there is a newer image.
+* `keep_locally` - (Optional, boolean) If true, the Docker image won't be
+ deleted on destroy. If false, the image will be removed from the local
+ Docker image store on destroy.
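+
+For example, to keep the image cached on the host after the resource is
+destroyed (a minimal sketch; the image name is only illustrative):
+
+```
+resource "docker_image" "ubuntu" {
+  name         = "ubuntu:latest"
+  keep_locally = true
+}
+```
+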
## Attributes Reference
diff --git a/website/source/docs/providers/fastly/r/service_v1.html.markdown b/website/source/docs/providers/fastly/r/service_v1.html.markdown
index 8e84234f0616..38e047f16089 100644
--- a/website/source/docs/providers/fastly/r/service_v1.html.markdown
+++ b/website/source/docs/providers/fastly/r/service_v1.html.markdown
@@ -40,7 +40,7 @@ resource "fastly_service_v1" "demo" {
```
-Basic usage with an Amazon S3 Website:
+Basic usage with an Amazon S3 Website, removing the `x-amz-request-id` header:
```
resource "fastly_service_v1" "demo" {
@@ -57,6 +57,19 @@ resource "fastly_service_v1" "demo" {
port = 80
}
+ header {
+ destination = "http.x-amz-request-id"
+ type = "cache"
+ action = "delete"
+ name = "remove x-amz-request-id"
+ }
+
+ gzip {
+ name = "file extensions and content types"
+ extensions = ["css", "js"]
+ content_types = ["text/html", "text/css"]
+ }
+
default_host = "${aws_s3_bucket.website.name}.s3-website-us-west-2.amazonaws.com"
force_destroy = true
@@ -76,7 +89,7 @@ resource "aws_s3_bucket" "website" {
**Note:** For an AWS S3 Bucket, the Backend address is
`.s3-website-.amazonaws.com`. The `default_host` attribute
should be set to `.s3-website-.amazonaws.com`. See the
-Fastly documentation on [Amazon S3][fastly-s3]
+Fastly documentation on [Amazon S3][fastly-s3].
## Argument Reference
@@ -84,13 +97,19 @@ The following arguments are supported:
* `name` - (Required) The unique name for the Service to create
* `domain` - (Required) A set of Domain names to serve as entry points for your
-Service. Defined below.
+Service. Defined below
* `backend` - (Required) A set of Backends to service requests from your Domains.
-Defined below.
+Defined below
+* `gzip` - (Required) A set of gzip rules to control automatic gzipping of
+content. Defined below
+* `header` - (Optional) A set of Headers to manipulate for each request. Defined
+below
* `default_host` - (Optional) The default hostname
* `default_ttl` - (Optional) The default Time-to-live (TTL) for requests
* `force_destroy` - (Optional) Services that are active cannot be destroyed. In
order to destroy the Service, set `force_destroy` to `true`. Default `false`.
+* `s3logging` - (Optional) A set of S3 Buckets to send streaming logs to.
+Defined below
The `domain` block supports:
@@ -114,8 +133,61 @@ Default `1000`
Default `200`
* `port` - (Optional) The port number Backend responds on. Default `80`
* `ssl_check_cert` - (Optional) Be strict on checking SSL certs. Default `true`
-* `weight` - (Optional) How long to wait for the first bytes in milliseconds.
-Default `100`
+* `weight` - (Optional) The [portion of traffic](https://docs.fastly.com/guides/performance-tuning/load-balancing-configuration.html#how-weight-affects-load-balancing) to send to this Backend. Each Backend receives `weight / total` of the traffic. Default `100`
+
+The `gzip` block supports:
+
+* `name` - (Required) A unique name
+* `content_types` - (Optional) content-type for each type of content you wish to
+have dynamically gzipped. Ex: `["text/html", "text/css"]`
+* `extensions` - (Optional) File extensions for each file type to dynamically
+gzip. Ex: `["css", "js"]`
+
+
+The `Header` block supports adding, removing, or modifying Request and Response
+headers. See Fastly's documentation on
+[Adding or modifying headers on HTTP requests and responses](https://docs.fastly.com/guides/basic-configuration/adding-or-modifying-headers-on-http-requests-and-responses#field-description-table) for more detailed information on any
+of the properties below.
+
+* `name` - (Required) A unique name to refer to this header attribute
+* `action` - (Required) The Header manipulation action to take; must be one of
+`set`, `append`, `delete`, `regex`, or `regex_repeat`
+* `type` - (Required) The Request type to apply the selected Action on
+* `destination` - (Required) The name of the header that is going to be affected
+by the Action
+* `ignore_if_set` - (Optional) Do not add the header if it is already present.
+(Only applies to `set` action.). Default `false`
+* `source` - (Optional) Variable to be used as a source for the header content
+(Does not apply to `delete` action.)
+* `regex` - (Optional) Regular expression to use (Only applies to `regex` and `regex_repeat` actions.)
+* `substitution` - (Optional) Value to substitute in place of regular expression. (Only applies to `regex` and `regex_repeat`.)
+* `priority` - (Optional) Lower priorities execute first. (Default: `100`.)
+
+The `s3logging` block supports:
+
+* `name` - (Required) A unique name to identify this S3 Logging Bucket
+* `bucket_name` - (Optional) The name of the S3 bucket in which to store the logs
+* `s3_access_key` - (Required) AWS Access Key of an account with the required
+permissions to post logs. It is **strongly** recommended you create a separate
+IAM user with permissions to only operate on this Bucket. This key will
+not be encrypted. You can provide this key via an environment variable, `FASTLY_S3_ACCESS_KEY`
+* `s3_secret_key` - (Required) AWS Secret Key of an account with the required
+permissions to post logs. It is **strongly** recommended you create a separate
+IAM user with permissions to only operate on this Bucket. This secret will
+not be encrypted. You can provide this secret via an environment variable, `FASTLY_S3_SECRET_KEY`
+* `path` - (Optional) Path to store the files. Must end with a trailing slash.
+If this field is left empty, the files will be saved in the bucket's root path.
+* `domain` - (Optional) If you created the S3 bucket outside of `us-east-1`,
+then specify the corresponding bucket endpoint. Ex: `s3-us-west-2.amazonaws.com`
+* `period` - (Optional) How frequently the logs should be transferred, in
+seconds. Default `3600`
+* `gzip_level` - (Optional) Level of GZIP compression, from `0-9`. `0` is no
+compression. `1` is fastest and least compressed, `9` is slowest and most
+compressed. Default `0`
+* `format` - (Optional) Apache-style string or VCL variables to use for log formatting. Default
+Apache Common Log format (`%h %l %u %t %r %>s`)
+* `timestamp_format` - (Optional) `strftime` specified timestamp formatting (default `%Y-%m-%dT%H:%M:%S.000`).
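+
+A complete `s3logging` block might look like the following (the bucket name
+and credential variables are placeholders):
+
+```
+s3logging {
+  name          = "s3-website-logs"
+  bucket_name   = "my-fastly-logs"
+  s3_access_key = "${var.fastly_s3_access_key}"
+  s3_secret_key = "${var.fastly_s3_secret_key}"
+  path          = "/logs/"
+  period        = 3600
+}
+```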
+
## Attributes Reference
@@ -126,6 +198,8 @@ The following attributes are exported:
* `active_version` - The currently active version of your Fastly Service
* `domain` – Set of Domains. See above for details
* `backend` – Set of Backends. See above for details
+* `header` – Set of Headers. See above for details
+* `s3logging` – Set of S3 Logging configurations. See above for details
* `default_host` – Default host specified
* `default_ttl` - Default TTL
* `force_destroy` - Force the destruction of the Service on delete
@@ -133,4 +207,3 @@ The following attributes are exported:
[fastly-s3]: https://docs.fastly.com/guides/integrations/amazon-s3
[fastly-cname]: https://docs.fastly.com/guides/basic-setup/adding-cname-records
-
diff --git a/website/source/docs/providers/google/index.html.markdown b/website/source/docs/providers/google/index.html.markdown
index 641e2b419093..936cc26121b4 100644
--- a/website/source/docs/providers/google/index.html.markdown
+++ b/website/source/docs/providers/google/index.html.markdown
@@ -39,17 +39,28 @@ The following keys can be used to configure the provider.
retrieving this file are below. Credentials may be blank if you are running
Terraform from a GCE instance with a properly-configured [Compute Engine
Service Account](https://cloud.google.com/compute/docs/authentication). This
- can also be specified with the `GOOGLE_CREDENTIALS` or `GOOGLE_CLOUD_KEYFILE_JSON`
- shell environment variable, containing the contents of the credentials file.
+ can also be specified using any of the following environment variables
+ (listed in order of precedence):
+
+ * `GOOGLE_CREDENTIALS`
+ * `GOOGLE_CLOUD_KEYFILE_JSON`
+ * `GCLOUD_KEYFILE_JSON`
+
+* `project` - (Required) The ID of the project to apply any resources to. This
+ can be specified using any of the following environment variables (listed in
+ order of precedence):
+
+ * `GOOGLE_PROJECT`
+ * `GCLOUD_PROJECT`
+ * `CLOUDSDK_CORE_PROJECT`
* `region` - (Required) The region to operate under. This can also be specified
- with the `GOOGLE_REGION` shell environment variable.
+ using any of the following environment variables (listed in order of
+ precedence):
-* `project` - (Optional) The ID of the project to apply resources in. This
- can also be specified with the `GOOGLE_PROJECT` shell environment variable.
- If unspecified, users will need to specify the `project` attribute for
- all resources. If specified, resources which do not depend on a project will
- ignore this value.
+ * `GOOGLE_REGION`
+ * `GCLOUD_REGION`
+ * `CLOUDSDK_COMPUTE_REGION`
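+
+Putting these together, an explicit provider configuration might look like the
+following sketch (the key file, project, and region values are placeholders):
+
+```
+provider "google" {
+  credentials = "${file("account.json")}"
+  project     = "my-project-id"
+  region      = "us-central1"
+}
+```
+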
The following keys are supported for backwards compatibility, and may be
removed in a future version:
diff --git a/website/source/docs/providers/google/r/container_cluster.html.markdown b/website/source/docs/providers/google/r/container_cluster.html.markdown
index c3fc3a6a388c..693ed2ac865d 100644
--- a/website/source/docs/providers/google/r/container_cluster.html.markdown
+++ b/website/source/docs/providers/google/r/container_cluster.html.markdown
@@ -50,6 +50,7 @@ resource "google_container_cluster" "primary" {
* `zone` - (Required) The zone that all resources should be created in.
- - -
+* `addons_config` - (Optional) The configuration for addons supported by Google Container Engine
* `cluster_ipv4_cidr` - (Optional) The IP address range of the container pods in
this cluster. Default is an automatically assigned CIDR.
@@ -78,6 +79,8 @@ resource "google_container_cluster" "primary" {
* `project` - (Optional) The project in which the resource belongs. If it
is not provided, the provider project is used.
+* `subnetwork` - (Optional) The name of the Google Compute Engine subnetwork in which the cluster's instances are launched
+
**Master Auth** supports the following arguments:
* `password` - The password to use for HTTP basic authentication when accessing
@@ -103,6 +106,23 @@ resource "google_container_cluster" "primary" {
* `https://www.googleapis.com/auth/logging.write` (if `logging_service` points to Google)
* `https://www.googleapis.com/auth/monitoring` (if `monitoring_service` points to Google)
+**Addons Config** supports the following addons:
+
+* `http_load_balancing` - (Optional) The status of the HTTP Load Balancing addon. It is enabled by default; set `disabled = true` to disable.
+* `horizontal_pod_autoscaling` - (Optional) The status of the Horizontal Pod Autoscaling addon. It is enabled by default; set `disabled = true` to disable.
+
+This example `addons_config` disables both addons:
+```
+addons_config {
+ http_load_balancing {
+ disabled = true
+ }
+ horizontal_pod_autoscaling {
+ disabled = true
+ }
+}
+```
+
## Attributes Reference
In addition to the arguments listed above, the following computed attributes are
diff --git a/website/source/docs/providers/librato/index.html.markdown b/website/source/docs/providers/librato/index.html.markdown
new file mode 100644
index 000000000000..4874f4d27cc6
--- /dev/null
+++ b/website/source/docs/providers/librato/index.html.markdown
@@ -0,0 +1,39 @@
+---
+layout: "librato"
+page_title: "Provider: Librato"
+sidebar_current: "docs-librato-index"
+description: |-
+ The Librato provider is used to interact with the resources supported by Librato. The provider needs to be configured with the proper credentials before it can be used.
+---
+
+# Librato Provider
+
+The Librato provider is used to interact with the
+resources supported by Librato. The provider needs to be configured
+with the proper credentials before it can be used.
+
+Use the navigation to the left to read about the available resources.
+
+## Example Usage
+
+```
+# Configure the Librato provider
+provider "librato" {
+ email = "ops@company.com"
+ token = "${var.librato_token}"
+}
+
+# Create a new space
+resource "librato_space" "default" {
+ ...
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `token` - (Required) Librato API token. It must be provided, but it can also
+ be sourced from the `LIBRATO_TOKEN` environment variable.
+* `email` - (Required) Librato email address. It must be provided, but it can
+ also be sourced from the `LIBRATO_EMAIL` environment variable.
diff --git a/website/source/docs/providers/librato/r/space.html.markdown b/website/source/docs/providers/librato/r/space.html.markdown
new file mode 100644
index 000000000000..44317be0d329
--- /dev/null
+++ b/website/source/docs/providers/librato/r/space.html.markdown
@@ -0,0 +1,34 @@
+---
+layout: "librato"
+page_title: "Librato: librato_space"
+sidebar_current: "docs-librato-resource-space"
+description: |-
+ Provides a Librato Space resource. This can be used to create and manage spaces on Librato.
+---
+
+# librato\_space
+
+Provides a Librato Space resource. This can be used to
+create and manage spaces on Librato.
+
+## Example Usage
+
+```
+# Create a new Librato space
+resource "librato_space" "default" {
+ name = "My New Space"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) The name of the space.
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `id` - The ID of the space.
+* `name` - The name of the space.
diff --git a/website/source/docs/providers/librato/r/space_chart.html.markdown b/website/source/docs/providers/librato/r/space_chart.html.markdown
new file mode 100644
index 000000000000..ad9b20fa9707
--- /dev/null
+++ b/website/source/docs/providers/librato/r/space_chart.html.markdown
@@ -0,0 +1,110 @@
+---
+layout: "librato"
+page_title: "Librato: librato_space_chart"
+sidebar_current: "docs-librato-resource-space-chart"
+description: |-
+ Provides a Librato Space Chart resource. This can be used to create and manage charts in Librato Spaces.
+---
+
+# librato\_space\_chart
+
+Provides a Librato Space Chart resource. This can be used to
+create and manage charts in Librato Spaces.
+
+## Example Usage
+
+```
+# Create a new Librato space
+resource "librato_space" "my_space" {
+ name = "My New Space"
+}
+
+# Create a new chart
+resource "librato_space_chart" "server_temperature" {
+ name = "Server Temperature"
+ space_id = "${librato_space.my_space.id}"
+
+ stream {
+ metric = "server_temp"
+ source = "app1"
+ }
+
+ stream {
+ metric = "environmental_temp"
+ source = "*"
+ group_function = "breakout"
+ summary_function = "average"
+ }
+
+ stream {
+ metric = "server_temp"
+ source = "%"
+ group_function = "average"
+ }
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `space_id` - (Required) The ID of the space this chart should be in.
+* `name` - (Required) The title of the chart when it is displayed.
+* `type` - (Optional) Indicates the type of chart. Must be one of line or
+ stacked (defaults to line).
+* `min` - (Optional) The minimum display value of the chart's Y-axis.
+* `max` - (Optional) The maximum display value of the chart's Y-axis.
+* `label` - (Optional) The Y-axis label.
+* `related_space` - (Optional) The ID of another space to which this chart is
+ related.
+* `stream` - (Optional) Nested block describing a metric to use for data in the
+ chart. The structure of this block is described below.
+
+The `stream` block supports:
+
+* `metric` - (Required) The name of the metric. May not be specified if
+ `composite` is specified.
+* `source` - (Required) The name of a source, or `*` to include all sources.
+ This field will also accept specific wildcard entries. For example
+ us-west-\*-app will match us-west-21-app but not us-west-12-db. Use % to
+ specify a dynamic source that will be provided after the instrument or
+ dashboard has loaded, or in the URL. May not be specified if `composite` is
+ specified.
+* `group_function` - (Required) How to process the results when multiple sources
+ will be returned. Value must be one of average, sum, breakout. If average or
+ sum, a single line will be drawn representing the average or sum
+ (respectively) of all sources. If the group_function is breakout, a separate
+ line will be drawn for each source. If this property is not supplied, the
+ behavior will default to average. May not be specified if `composite` is
+ specified.
+* `composite` - (Required) A composite metric query string to execute when this
+ stream is displayed. May not be specified if `metric`, `source` or
+ `group_function` is specified.
+* `summary_function` - (Optional) When visualizing complex measurements or a
+ rolled-up measurement, this allows you to choose which statistic to use.
+ Defaults to "average". Valid options are: "max", "min", "average", "sum" or
+ "count".
+* `name` - (Optional) A display name to use for the stream when generating the
+ tooltip.
+* `color` - (Optional) Sets a color to use when rendering the stream. Must be a
+ seven character string that represents the hex code of the color e.g.
+ "#52D74C".
+* `units_short` - (Optional) Unit value string to use as the tooltip label.
+* `units_long` - (Optional) String value to set as the Y-axis label. All
+ streams that share the same units_long value will be plotted on the same
+ Y-axis.
+* `min` - (Optional) Theoretical minimum Y-axis value.
+* `max` - (Optional) Theoretical maximum Y-axis value.
+* `transform_function` - (Optional) Linear formula to run on each measurement
+ prior to visualization.
+* `period` - (Optional) An integer value of seconds that defines the period this
+ stream reports at. This aids in the display of the stream and allows the
+ period to be used in stream display transforms.
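+
+A `stream` can also be driven by a composite query instead of a metric/source
+pair, for example (the query itself is only an assumption for illustration):
+
+```
+stream {
+  composite = "s(\"server_temp\", \"*\")"
+}
+```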
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `id` - The ID of the chart.
+* `space_id` - The ID of the space this chart should be in.
+* `title` - The title of the chart when it is displayed.
diff --git a/website/source/docs/providers/openstack/index.html.markdown b/website/source/docs/providers/openstack/index.html.markdown
index 52f94e46abc6..2800bfaf5e73 100644
--- a/website/source/docs/providers/openstack/index.html.markdown
+++ b/website/source/docs/providers/openstack/index.html.markdown
@@ -46,7 +46,18 @@ The following arguments are supported:
* `password` - (Optional; Required if not using `api_key`) If omitted, the
`OS_PASSWORD` environment variable is used.
-* `api_key` - (Optional; Required if not using `password`)
+* `token` - (Optional; Required if not using `user_name` and `password`)
+ A token is an expiring, temporary means of access issued via the
+ Keystone service. By specifying a token, you do not have to
+ specify a username/password combination, since the token was
+ already created by a username/password out of band of Terraform.
+ If omitted, the `OS_AUTH_TOKEN` environment variable is used.
+
+* `api_key` - (Optional; Required if not using `password`) An API Key
+ is issued by a cloud provider as an alternative to a password. Unless
+ your cloud provider has documentation referencing an API Key,
+ you can safely ignore this argument. If omitted, the `OS_API_KEY`
+ environment variable is used.
* `domain_id` - (Optional) If omitted, the `OS_DOMAIN_ID` environment
variable is used.
@@ -62,10 +73,55 @@ The following arguments are supported:
* `insecure` - (Optional) Explicitly allow the provider to perform
"insecure" SSL requests. If omitted, default value is `false`
+* `cacert_file` - (Optional) Specify a custom CA certificate when communicating
+ over SSL. If omitted, the `OS_CACERT` environment variable is used.
+
* `endpoint_type` - (Optional) Specify which type of endpoint to use from the
service catalog. It can be set using the OS_ENDPOINT_TYPE environment
variable. If not set, public endpoints are used.
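+
+For example, authenticating with a pre-issued token instead of a
+username/password pair might look like this sketch (all values are
+placeholders):
+
+```
+provider "openstack" {
+  auth_url    = "https://identity.example.com:5000/v2.0"
+  tenant_name = "my-tenant"
+  token       = "${var.openstack_token}"
+}
+```
+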
+## Rackspace Compatibility
+
+Using this OpenStack provider with Rackspace is not supported and not
+guaranteed to work; however, users have reported success with the
+following notes in mind:
+
+* Interacting with instances has been seen to work. Interacting with
+all other resources is either untested or known to not work.
+
+* Use your _password_ instead of your Rackspace API KEY.
+
+* Explicitly define the public and private networks in your
+instances as shown below:
+
+```
+resource "openstack_compute_instance_v2" "my_instance" {
+ name = "my_instance"
+ region = "DFW"
+ image_id = "fabe045f-43f8-4991-9e6c-5cabd617538c"
+ flavor_id = "general1-4"
+ key_pair = "provisioning_key"
+
+ network {
+ uuid = "00000000-0000-0000-0000-000000000000"
+ name = "public"
+ }
+
+ network {
+ uuid = "11111111-1111-1111-1111-111111111111"
+ name = "private"
+ }
+}
+```
+
+If you try using this provider with Rackspace and run into bugs, you
+are welcome to open a bug report / issue on GitHub, but please keep
+in mind that this is unsupported and it may not be possible to fix
+the reported bug.
+
+If you have successfully used this provider with Rackspace and can
+add any additional comments, please let us know.
+
## Testing and Development
In order to run the Acceptance Tests for development, the following environment
diff --git a/website/source/docs/providers/openstack/r/compute_instance_v2.html.markdown b/website/source/docs/providers/openstack/r/compute_instance_v2.html.markdown
index 02886c59f863..8baf17a54f0c 100644
--- a/website/source/docs/providers/openstack/r/compute_instance_v2.html.markdown
+++ b/website/source/docs/providers/openstack/r/compute_instance_v2.html.markdown
@@ -12,16 +12,185 @@ Manages a V2 VM instance resource within OpenStack.
## Example Usage
+### Basic Instance
+
```
-resource "openstack_compute_instance_v2" "test-server" {
- name = "tf-test"
+resource "openstack_compute_instance_v2" "basic" {
+ name = "basic"
image_id = "ad091b52-742f-469e-8f3c-fd81cadf0743"
flavor_id = "3"
+ key_pair = "my_key_pair_name"
+ security_groups = ["default"]
+
metadata {
this = "that"
}
+
+ network {
+ name = "my_network"
+ }
+}
+```
+
+### Instance With Attached Volume
+
+```
+resource "openstack_blockstorage_volume_v1" "myvol" {
+ name = "myvol"
+ size = 1
+}
+
+resource "openstack_compute_instance_v2" "volume-attached" {
+ name = "volume-attached"
+ image_id = "ad091b52-742f-469e-8f3c-fd81cadf0743"
+ flavor_id = "3"
+ key_pair = "my_key_pair_name"
+ security_groups = ["default"]
+
+ network {
+ name = "my_network"
+ }
+
+ volume {
+ volume_id = "${openstack_blockstorage_volume_v1.myvol.id}"
+ }
+}
+```
+
+### Boot From Volume
+
+```
+resource "openstack_compute_instance_v2" "boot-from-volume" {
+ name = "boot-from-volume"
+ flavor_id = "3"
+ key_pair = "my_key_pair_name"
+ security_groups = ["default"]
+
+ block_device {
+ uuid = ""
+ source_type = "image"
+ volume_size = 5
+ boot_index = 0
+ destination_type = "volume"
+ delete_on_termination = true
+ }
+
+ network {
+ name = "my_network"
+ }
+}
+```
+
+### Boot From an Existing Volume
+
+```
+resource "openstack_blockstorage_volume_v1" "myvol" {
+ name = "myvol"
+ size = 5
+ image_id = ""
+}
+
+resource "openstack_compute_instance_v2" "boot-from-volume" {
+ name = "bootfromvolume"
+ flavor_id = "3"
+ key_pair = "my_key_pair_name"
+ security_groups = ["default"]
+
+ block_device {
+ uuid = "${openstack_blockstorage_volume_v1.myvol.id}"
+ source_type = "volume"
+ boot_index = 0
+ destination_type = "volume"
+ delete_on_termination = true
+ }
+
+ network {
+ name = "my_network"
+ }
+}
+```
+
+### Instance With Multiple Networks
+
+```
+resource "openstack_compute_floatingip_v2" "myip" {
+ pool = "my_pool"
+}
+
+resource "openstack_compute_instance_v2" "multi-net" {
+ name = "multi-net"
+ image_id = "ad091b52-742f-469e-8f3c-fd81cadf0743"
+ flavor_id = "3"
+ key_pair = "my_key_pair_name"
+ security_groups = ["default"]
+
+ network {
+ name = "my_first_network"
+ }
+
+ network {
+ name = "my_second_network"
+ floating_ip = "${openstack_compute_floatingip_v2.myip.address}"
+ # Terraform will use this network for provisioning
+ access_network = true
+ }
+}
+```
+
+### Instance With Personality
+
+```
+resource "openstack_compute_instance_v2" "personality" {
+ name = "personality"
+ image_id = "ad091b52-742f-469e-8f3c-fd81cadf0743"
+ flavor_id = "3"
key_pair = "my_key_pair_name"
- security_groups = ["test-group-1"]
+ security_groups = ["default"]
+
+ personality {
+    file = "/path/to/file/on/instance.txt"
+ content = "contents of file"
+ }
+
+ network {
+ name = "my_network"
+ }
+}
+```
+
+### Instance with Multiple Ephemeral Disks
+
+```
+resource "openstack_compute_instance_v2" "multi-eph" {
+ name = "multi_eph"
+ image_id = "ad091b52-742f-469e-8f3c-fd81cadf0743"
+ flavor_id = "3"
+ key_pair = "my_key_pair_name"
+ security_groups = ["default"]
+
+ block_device {
+ boot_index = 0
+ delete_on_termination = true
+ destination_type = "local"
+ source_type = "image"
+ uuid = ""
+ }
+
+ block_device {
+ boot_index = -1
+ delete_on_termination = true
+ destination_type = "local"
+ source_type = "blank"
+ volume_size = 1
+ }
+
+ block_device {
+ boot_index = -1
+ delete_on_termination = true
+ destination_type = "local"
+ source_type = "blank"
+ volume_size = 1
+ }
}
```
@@ -36,12 +205,12 @@ The following arguments are supported:
* `name` - (Required) A unique name for the resource.
* `image_id` - (Optional; Required if `image_name` is empty and not booting
- from a volume) The image ID of the desired image for the server. Changing
- this creates a new server.
+ from a volume. Do not specify if booting from a volume.) The image ID of
+ the desired image for the server. Changing this creates a new server.
* `image_name` - (Optional; Required if `image_id` is empty and not booting
- from a volume) The name of the desired image for the server. Changing this
- creates a new server.
+ from a volume. Do not specify if booting from a volume.) The name of the
+ desired image for the server. Changing this creates a new server.
* `flavor_id` - (Optional; Required if `flavor_name` is empty) The flavor ID of
the desired flavor for the server. Changing this resizes the existing server.
diff --git a/website/source/docs/providers/openstack/r/networking_router_route_v2.html.markdown b/website/source/docs/providers/openstack/r/networking_router_route_v2.html.markdown
new file mode 100644
index 000000000000..7713a52c148a
--- /dev/null
+++ b/website/source/docs/providers/openstack/r/networking_router_route_v2.html.markdown
@@ -0,0 +1,76 @@
+---
+layout: "openstack"
+page_title: "OpenStack: openstack_networking_router_route_v2"
+sidebar_current: "docs-openstack-resource-networking-router-route-v2"
+description: |-
+  Creates a routing entry on an OpenStack V2 router.
+---
+
+# openstack\_networking\_router_route_v2
+
+Creates a routing entry on an OpenStack V2 router.
+
+## Example Usage
+
+```
+resource "openstack_networking_router_v2" "router_1" {
+ name = "router_1"
+ admin_state_up = "true"
+}
+
+resource "openstack_networking_network_v2" "network_1" {
+ name = "network_1"
+ admin_state_up = "true"
+}
+
+resource "openstack_networking_subnet_v2" "subnet_1" {
+ network_id = "${openstack_networking_network_v2.network_1.id}"
+ cidr = "192.168.199.0/24"
+ ip_version = 4
+}
+
+resource "openstack_networking_router_interface_v2" "int_1" {
+ router_id = "${openstack_networking_router_v2.router_1.id}"
+ subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}"
+}
+
+resource "openstack_networking_router_route_v2" "router_route_1" {
+ depends_on = ["openstack_networking_router_interface_v2.int_1"]
+ router_id = "${openstack_networking_router_v2.router_1.id}"
+ destination_cidr = "10.0.1.0/24"
+ next_hop = "192.168.199.254"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `region` - (Required) The region in which to obtain the V2 networking client.
+ A networking client is needed to configure a routing entry on a router. If omitted, the
+ `OS_REGION_NAME` environment variable is used. Changing this creates a new
+ routing entry.
+
+* `router_id` - (Required) ID of the router this routing entry belongs to. Changing
+ this creates a new routing entry.
+
+* `destination_cidr` - (Required) CIDR block to match on the packet’s destination IP. Changing
+ this creates a new routing entry.
+
+* `next_hop` - (Required) IP address of the next hop gateway. Changing
+ this creates a new routing entry.
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `region` - See Argument Reference above.
+* `router_id` - See Argument Reference above.
+* `destination_cidr` - See Argument Reference above.
+* `next_hop` - See Argument Reference above.
+
+## Notes
+
+The `next_hop` IP address must be directly reachable from the router at the ``openstack_networking_router_route_v2``
+resource creation time. You can ensure that by explicitly specifying a dependency on the ``openstack_networking_router_interface_v2``
+resource that connects the next hop to the router, as in the example above.
diff --git a/website/source/docs/providers/openstack/r/networking_router_v2.html.markdown b/website/source/docs/providers/openstack/r/networking_router_v2.html.markdown
index 04a261a38cf2..5540adb62c40 100644
--- a/website/source/docs/providers/openstack/r/networking_router_v2.html.markdown
+++ b/website/source/docs/providers/openstack/r/networking_router_v2.html.markdown
@@ -48,6 +48,8 @@ The following arguments are supported:
* `tenant_id` - (Optional) The owner of the floating IP. Required if admin wants
to create a router for another tenant. Changing this creates a new router.
+* `value_specs` - (Optional) Map of additional driver-specific options.
+
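+For example, a driver-specific option could be passed like this (the key shown
+is purely hypothetical; consult your Neutron plugin's documentation for the
+options it actually supports):
+
+```
+resource "openstack_networking_router_v2" "router_1" {
+  name           = "router_1"
+  admin_state_up = "true"
+
+  value_specs {
+    # Hypothetical driver-specific key/value pair
+    distributed = "false"
+  }
+}
+```
+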
## Attributes Reference
The following attributes are exported:
@@ -57,3 +59,4 @@ The following attributes are exported:
* `admin_state_up` - See Argument Reference above.
* `external_gateway` - See Argument Reference above.
* `tenant_id` - See Argument Reference above.
+* `value_specs` - See Argument Reference above.
diff --git a/website/source/docs/providers/openstack/r/networking_secgroup_rule_v2.html.markdown b/website/source/docs/providers/openstack/r/networking_secgroup_rule_v2.html.markdown
new file mode 100644
index 000000000000..e80ac6cf1f17
--- /dev/null
+++ b/website/source/docs/providers/openstack/r/networking_secgroup_rule_v2.html.markdown
@@ -0,0 +1,89 @@
+---
+layout: "openstack"
+page_title: "OpenStack: openstack_networking_secgroup_rule_v2"
+sidebar_current: "docs-openstack-resource-networking-secgroup-rule-v2"
+description: |-
+ Manages a V2 Neutron security group rule resource within OpenStack.
+---
+
+# openstack\_networking\_secgroup\_rule_v2
+
+Manages a V2 neutron security group rule resource within OpenStack.
+Unlike Nova security groups, neutron separates the group from the rules
+and also allows an admin to target a specific tenant_id.
+
+## Example Usage
+
+```
+resource "openstack_networking_secgroup_v2" "secgroup_1" {
+ name = "secgroup_1"
+ description = "My neutron security group"
+}
+
+resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_1" {
+ direction = "ingress"
+ ethertype = "IPv4"
+ protocol = "tcp"
+ port_range_min = 22
+ port_range_max = 22
+ remote_ip_prefix = "0.0.0.0/0"
+ security_group_id = "${openstack_networking_secgroup_v2.secgroup_1.id}"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `region` - (Required) The region in which to obtain the V2 networking client.
+  A networking client is needed to create a security group rule. If omitted, the
+ `OS_REGION_NAME` environment variable is used. Changing this creates a new
+ security group rule.
+
+* `direction` - (Required) The direction of the rule, valid values are __ingress__
+ or __egress__. Changing this creates a new security group rule.
+
+* `ethertype` - (Required) The layer 3 protocol type, valid values are __IPv4__
+ or __IPv6__. Changing this creates a new security group rule.
+
+* `protocol` - (Optional) The layer 4 protocol type, valid values are __tcp__,
+ __udp__ or __icmp__. This is required if you want to specify a port range.
+ Changing this creates a new security group rule.
+
+* `port_range_min` - (Optional) The lower bound of the allowed port range; the
+  value must be an integer between 1 and 65535. Changing this creates a new
+  security group rule.
+
+* `port_range_max` - (Optional) The upper bound of the allowed port range; the
+  value must be an integer between 1 and 65535. Changing this creates a new
+  security group rule.
+
+* `remote_ip_prefix` - (Optional) The remote CIDR; the value needs to be a valid
+  CIDR (e.g. 192.168.0.0/16). Changing this creates a new security group rule.
+
+* `remote_group_id` - (Optional) The remote group ID; the value needs to be an
+  OpenStack ID of a security group in the same tenant. Changing this creates
+  a new security group rule.
+
+* `security_group_id` - (Required) The security group ID the rule should belong
+  to; the value needs to be an OpenStack ID of a security group in the same
+  tenant. Changing this creates a new security group rule.
+
+* `tenant_id` - (Optional) The owner of the security group. Required if admin
+  wants to create a security group rule for another tenant. Changing this creates
+  a new security group rule.
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `region` - See Argument Reference above.
+* `direction` - See Argument Reference above.
+* `ethertype` - See Argument Reference above.
+* `protocol` - See Argument Reference above.
+* `port_range_min` - See Argument Reference above.
+* `port_range_max` - See Argument Reference above.
+* `remote_ip_prefix` - See Argument Reference above.
+* `remote_group_id` - See Argument Reference above.
+* `security_group_id` - See Argument Reference above.
+* `tenant_id` - See Argument Reference above.
diff --git a/website/source/docs/providers/openstack/r/networking_secgroup_v2.html.markdown b/website/source/docs/providers/openstack/r/networking_secgroup_v2.html.markdown
new file mode 100644
index 000000000000..ca49e4b66b23
--- /dev/null
+++ b/website/source/docs/providers/openstack/r/networking_secgroup_v2.html.markdown
@@ -0,0 +1,50 @@
+---
+layout: "openstack"
+page_title: "OpenStack: openstack_networking_secgroup_v2"
+sidebar_current: "docs-openstack-resource-networking-secgroup-v2"
+description: |-
+ Manages a V2 Neutron security group resource within OpenStack.
+---
+
+# openstack\_networking\_secgroup_v2
+
+Manages a V2 neutron security group resource within OpenStack.
+Unlike Nova security groups, neutron separates the group from the rules
+and also allows an admin to target a specific tenant_id.
+
+## Example Usage
+
+```
+resource "openstack_networking_secgroup_v2" "secgroup_1" {
+ name = "secgroup_1"
+ description = "My neutron security group"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `region` - (Required) The region in which to obtain the V2 networking client.
+  A networking client is needed to create a security group. If omitted, the
+ `OS_REGION_NAME` environment variable is used. Changing this creates a new
+ security group.
+
+* `name` - (Required) A unique name for the security group. Changing this
+ creates a new security group.
+
+* `description` - (Optional) A description of the security group. Changing this
+ creates a new security group.
+
+* `tenant_id` - (Optional) The owner of the security group. Required if admin
+  wants to create a security group for another tenant. Changing this creates a new
+ security group.
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `region` - See Argument Reference above.
+* `name` - See Argument Reference above.
+* `description` - See Argument Reference above.
+* `tenant_id` - See Argument Reference above.
diff --git a/website/source/docs/providers/postgresql/index.html.markdown b/website/source/docs/providers/postgresql/index.html.markdown
index 36761b626a36..87d0ba87fa5c 100644
--- a/website/source/docs/providers/postgresql/index.html.markdown
+++ b/website/source/docs/providers/postgresql/index.html.markdown
@@ -20,6 +20,7 @@ provider "postgresql" {
port = 5432
username = "postgres_user"
password = "postgres_password"
+ ssl_mode = "require"
}
```
@@ -58,6 +59,9 @@ resource "postgresql_database" "my_db2" {
The following arguments are supported:
* `host` - (Required) The address for the postgresql server connection.
-* `port` - (Optional) The port for the postgresql server connection. (Default 5432)
+* `port` - (Optional) The port for the postgresql server connection. The default is `5432`.
* `username` - (Required) Username for the server connection.
-* `password` - (Optional) Password for the server connection.
\ No newline at end of file
+* `password` - (Optional) Password for the server connection.
+* `ssl_mode` - (Optional) Set the priority for an SSL connection to the server.
+ The default is `prefer`; the full set of options and their implications
+ can be seen [in the libpq SSL guide](http://www.postgresql.org/docs/9.4/static/libpq-ssl.html#LIBPQ-SSL-PROTECTION).
diff --git a/website/source/docs/providers/softlayer/index.html.markdown b/website/source/docs/providers/softlayer/index.html.markdown
new file mode 100644
index 000000000000..efb5254a8d8f
--- /dev/null
+++ b/website/source/docs/providers/softlayer/index.html.markdown
@@ -0,0 +1,84 @@
+---
+layout: "softlayer"
+page_title: "Provider: SoftLayer"
+sidebar_current: "docs-softlayer-index"
+description: |-
+  The SoftLayer provider is used to manage SoftLayer resources.
+---
+
+# SoftLayer Provider
+
+The SoftLayer provider is used to manage SoftLayer resources.
+
+Use the navigation to the left to read about the available resources.
+
+
+Note: The SoftLayer provider is new as of Terraform 0.X.
+It is ready to be used but many features are still being added. If there
+is a SoftLayer feature missing, please report it in the GitHub repo.
+
+
+## Example Usage
+
+Here is an example that will set up the following:
++ An SSH key resource.
++ A virtual server resource that uses an existing SSH key.
++ A virtual server resource using an existing SSH key and a Terraform managed SSH key (created as "test_key_1" in the example below).
+
+(Create this as `sl.tf` and run terraform commands from this directory):
+
+```hcl
+provider "softlayer" {
+ username = ""
+ api_key = ""
+}
+
+# This will create a new SSH key that will show up under the
+# Devices>Manage>SSH Keys in the SoftLayer console.
+resource "softlayer_ssh_key" "test_key_1" {
+ name = "test_key_1"
+ public_key = "${file(\"~/.ssh/id_rsa_test_key_1.pub\")}"
+ # Windows Example:
+ # public_key = "${file(\"C:\ssh\keys\path\id_rsa_test_key_1.pub\")}"
+}
+
+# Virtual Server created with existing SSH Key already in SoftLayer
+# inventory and not created using this Terraform template.
+resource "softlayer_virtual_guest" "my_server_1" {
+ name = "my_server_1"
+ domain = "example.com"
+ ssh_keys = ["123456"]
+ image = "DEBIAN_7_64"
+ region = "ams01"
+ public_network_speed = 10
+ cpu = 1
+ ram = 1024
+}
+
+# Virtual Server created with a mix of previously existing and
+# Terraform created/managed resources.
+resource "softlayer_virtual_guest" "my_server_2" {
+ name = "my_server_2"
+ domain = "example.com"
+ ssh_keys = ["123456", "${softlayer_ssh_key.test_key_1.id}"]
+ image = "CENTOS_6_64"
+ region = "ams01"
+ public_network_speed = 10
+ cpu = 1
+ ram = 1024
+}
+```
+
+You'll need to provide your SoftLayer username and API key,
+so that Terraform can connect. If you don't want to put
+credentials in your configuration file, you can leave them
+out:
+
+```
+provider "softlayer" {}
+```
+
+...and instead set these environment variables:
+
+- **SOFTLAYER_USERNAME**: Your SoftLayer username
+- **SOFTLAYER_API_KEY**: Your API key
diff --git a/website/source/docs/providers/softlayer/r/ssh_key.html.markdown b/website/source/docs/providers/softlayer/r/ssh_key.html.markdown
new file mode 100644
index 000000000000..3906620c3eb6
--- /dev/null
+++ b/website/source/docs/providers/softlayer/r/ssh_key.html.markdown
@@ -0,0 +1,39 @@
+---
+layout: "softlayer"
+page_title: "SoftLayer: ssh_key"
+sidebar_current: "docs-softlayer-resource-ssh-key"
+description: |-
+ Manages SoftLayer SSH Keys.
+---
+
+# softlayer\_ssh\_key
+
+Provides SSH keys. This allows SSH keys to be created, updated and deleted.
+For additional details please refer to [API documentation](http://sldn.softlayer.com/reference/datatypes/SoftLayer_Security_Ssh_Key).
+
+## Example Usage
+
+```
+resource "softlayer_ssh_key" "test_ssh_key" {
+ name = "test_ssh_key_name"
+ notes = "test_ssh_key_notes"
+ public_key = "ssh-rsa "
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) A descriptive name used to identify an SSH key.
+* `public_key` - (Required) The public SSH key.
+* `notes` - (Optional) A small note about an SSH key to use at your discretion.
+
+Fields `name` and `notes` are editable.
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `id` - The ID of the new SSH key.
+* `fingerprint` - A sequence of bytes used to authenticate or look up a longer SSH key.
diff --git a/website/source/docs/providers/softlayer/r/virtual_guest.html.markdown b/website/source/docs/providers/softlayer/r/virtual_guest.html.markdown
new file mode 100644
index 000000000000..54d8f0edb783
--- /dev/null
+++ b/website/source/docs/providers/softlayer/r/virtual_guest.html.markdown
@@ -0,0 +1,134 @@
+---
+layout: "softlayer"
+page_title: "SoftLayer: virtual_guest"
+sidebar_current: "docs-softlayer-resource-virtual-guest"
+description: |-
+ Manages SoftLayer Virtual Guests.
+---
+
+# softlayer\_virtual\_guest
+
+Provides a virtual guest resource. This allows virtual guests to be created, updated
+and deleted. For additional details please refer to [API documentation](http://sldn.softlayer.com/reference/services/SoftLayer_Virtual_Guest).
+
+## Example Usage
+
+```
+# Create a new virtual guest using image "Debian"
+resource "softlayer_virtual_guest" "twc_terraform_sample" {
+ name = "twc-terraform-sample-name"
+ domain = "bar.example.com"
+ image = "DEBIAN_7_64"
+ region = "ams01"
+ public_network_speed = 10
+ hourly_billing = true
+ private_network_only = false
+ cpu = 1
+ ram = 1024
+ disks = [25, 10, 20]
+ user_data = "{\"value\":\"newvalue\"}"
+ dedicated_acct_host_only = true
+ local_disk = false
+ frontend_vlan_id = 1085155
+ backend_vlan_id = 1085157
+}
+```
+
+```
+# Create a new virtual guest using block device template
+resource "softlayer_virtual_guest" "terraform-sample-BDTGroup" {
+ name = "terraform-sample-blockDeviceTemplateGroup"
+ domain = "bar.example.com"
+ region = "ams01"
+ public_network_speed = 10
+ hourly_billing = false
+ cpu = 1
+ ram = 1024
+ local_disk = false
+ block_device_template_group_gid = "****-****-****-****-****"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` | *string*
+ * Hostname for the computing instance.
+ * **Required**
+* `domain` | *string*
+ * Domain for the computing instance.
+ * **Required**
+* `cpu` | *int*
+ * The number of CPU cores to allocate.
+ * **Required**
+* `ram` | *int*
+ * The amount of memory to allocate in megabytes.
+ * **Required**
+* `region` | *string*
+ * Specifies which datacenter the instance is to be provisioned in.
+ * **Required**
+* `hourly_billing` | *boolean*
+ * Specifies the billing type for the instance. When true the computing instance will be billed on hourly usage, otherwise it will be billed on a monthly basis.
+ * **Required**
+* `local_disk` | *boolean*
+ * Specifies the disk type for the instance. When true the disks for the computing instance will be provisioned on the host which it runs, otherwise SAN disks will be provisioned.
+ * **Required**
+* `dedicated_acct_host_only` | *boolean*
+ * Specifies whether or not the instance must only run on hosts with instances from the same account
+ * *Default*: nil
+ * *Optional*
+* `image` | *string*
+ * An identifier for the operating system to provision the computing instance with.
+ * **Conditionally required** - Disallowed when blockDeviceTemplateGroup.globalIdentifier is provided, as the template will specify the operating system.
+* `block_device_template_group_gid` | *string*
+ * A global identifier for the template to be used to provision the computing instance.
+ * **Conditionally required** - Disallowed when operatingSystemReferenceCode is provided, as the template will specify the operating system.
+* `public_network_speed` | *int*
+ * Specifies the connection speed for the instance's network components.
+ * *Default*: 10
+ * *Optional*
+* `private_network_only` | *boolean*
+ * Specifies whether or not the instance only has access to the private network. When true this flag specifies that a compute instance is to only have access to the private network.
+ * *Default*: False
+ * *Optional*
+* `frontend_vlan_id` | *int*
+ * Specifies the network vlan which is to be used for the frontend interface of the computing instance.
+ * *Default*: nil
+ * *Optional*
+* `backend_vlan_id` | *int*
+ * Specifies the network vlan which is to be used for the backend interface of the computing instance.
+ * *Default*: nil
+ * *Optional*
+* `disks` | *array*
+ * Block device and disk image settings for the computing instance
+ * *Optional*
+  * *Default*: The smallest available capacity for the primary disk will be used. If an image template is specified the disk capacity will be provided by the template.
+* `user_data` | *string*
+ * Arbitrary data to be made available to the computing instance.
+ * *Default*: nil
+ * *Optional*
+* `ssh_keys` | *array*
+ * SSH keys to install on the computing instance upon provisioning.
+ * *Default*: nil
+ * *Optional*
+* `ipv4_address` | *string*
+ * Uses editObject call, template data [defined here](https://sldn.softlayer.com/reference/datatypes/SoftLayer_Virtual_Guest).
+ * *Default*: nil
+ * *Optional*
+* `ipv4_address_private` | *string*
+ * Uses editObject call, template data [defined here](https://sldn.softlayer.com/reference/datatypes/SoftLayer_Virtual_Guest).
+ * *Default*: nil
+ * *Optional*
+* `post_install_script_uri` | *string*
+ * As defined in the [SoftLayer_Virtual_Guest_SupplementalCreateObjectOptions](https://sldn.softlayer.com/reference/datatypes/SoftLayer_Virtual_Guest_SupplementalCreateObjectOptions).
+ * *Default*: nil
+ * *Optional*
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `id` - The ID of the virtual guest.
+
+
diff --git a/website/source/docs/providers/template/r/cloudinit_config.html.markdown b/website/source/docs/providers/template/r/cloudinit_config.html.markdown
index d2d03d378879..69dc722b361a 100644
--- a/website/source/docs/providers/template/r/cloudinit_config.html.markdown
+++ b/website/source/docs/providers/template/r/cloudinit_config.html.markdown
@@ -58,9 +58,9 @@ resource "aws_instance" "web" {
The following arguments are supported:
-* `gzip` - (Optional) Specify whether or not to gzip the rendered output.
+* `gzip` - (Optional) Specify whether or not to gzip the rendered output. Defaults to `true`.
-* `base64_encode` - (Optional) Base64 encoding of the rendered output.
+* `base64_encode` - (Optional) Base64 encoding of the rendered output. Defaults to `true`.
 * `part` - (Required) May be specified multiple times; each block creates a fragment of the rendered cloud-init config file. The order of the parts in the configuration is maintained in the rendered template.
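+
+As a brief sketch (the resource name and part content are illustrative), both
+options can be set explicitly alongside one or more `part` blocks:
+
+```
+resource "template_cloudinit_config" "example" {
+  gzip          = true
+  base64_encode = true
+
+  part {
+    content_type = "text/x-shellscript"
+    content      = "echo 'hello world'"
+  }
+}
+```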
diff --git a/website/source/docs/providers/triton/index.html.markdown b/website/source/docs/providers/triton/index.html.markdown
index 8434e6ec2612..d58e980f1aaf 100644
--- a/website/source/docs/providers/triton/index.html.markdown
+++ b/website/source/docs/providers/triton/index.html.markdown
@@ -17,7 +17,7 @@ Use the navigation to the left to read about the available resources.
```
provider "triton" {
account = "AccountName"
- key_material = "~/.ssh/id_rsa"
+ key_material = "${file("~/.ssh/id_rsa")}"
key_id = "25:d4:a9:fe:ef:e6:c0:bf:b4:4b:4b:d4:a8:8f:01:0f"
# If using a private installation of Triton, specify the URL
@@ -30,6 +30,6 @@ provider "triton" {
The following arguments are supported in the `provider` block:
* `account` - (Required) This is the name of the Triton account. It can also be provided via the `SDC_ACCOUNT` environment variable.
-* `key_material` - (Required) This is the path to the private key of an SSH key associated with the Triton account to be used.
+* `key_material` - (Required) This is the private key of an SSH key associated with the Triton account to be used.
* `key_id` - (Required) This is the fingerprint of the public key matching the key specified in `key_path`. It can be obtained via the command `ssh-keygen -l -E md5 -f /path/to/key`
* `url` - (Optional) This is the URL to the Triton API endpoint. It is required if using a private installation of Triton. The default is to use the Joyent public cloud.
diff --git a/website/source/docs/providers/vsphere/index.html.markdown b/website/source/docs/providers/vsphere/index.html.markdown
index 2138b5084f0b..b016a682fa90 100644
--- a/website/source/docs/providers/vsphere/index.html.markdown
+++ b/website/source/docs/providers/vsphere/index.html.markdown
@@ -35,6 +35,13 @@ resource "vsphere_folder" "frontend" {
path = "frontend"
}
+# Create a file
+resource "vsphere_file" "ubuntu_disk" {
+ datastore = "local"
+ source_file = "/home/ubuntu/my_disks/custom_ubuntu.vmdk"
+ destination_file = "/my_path/disks/custom_ubuntu.vmdk"
+}
+
# Create a virtual machine within the folder
resource "vsphere_virtual_machine" "web" {
name = "terraform-web"
@@ -69,15 +76,80 @@ The following arguments are used to configure the VMware vSphere Provider:
value is `false`. Can also be specified with the `VSPHERE_ALLOW_UNVERIFIED_SSL`
environment variable.
+## Required Privileges
+
+In order to use the Terraform provider as a non-privileged user, a Role within
+vCenter must be assigned the following privileges:
+
+* Datastore
+ - Allocate space
+ - Browse datastore
+ - Low level file operations
+ - Remove file
+ - Update virtual machine files
+ - Update virtual machine metadata
+
+* Folder (all)
+ - Create folder
+ - Delete folder
+ - Move folder
+ - Rename folder
+
+* Network
+ - Assign network
+
+* Resource
+ - Apply recommendation
+ - Assign virtual machine to resource pool
+
+* Virtual Machine
+ - Configuration (all) - for now
+ - Guest Operations (all) - for now
+ - Interaction (all)
+ - Inventory (all)
+ - Provisioning (all)
+
+These settings were tested with [vSphere
+6.0](https://pubs.vmware.com/vsphere-60/index.jsp?topic=%2Fcom.vmware.vsphere.security.doc%2FGUID-18071E9A-EED1-4968-8D51-E0B4F526FDA3.html)
+and [vSphere
+5.5](https://pubs.vmware.com/vsphere-55/index.jsp?topic=%2Fcom.vmware.vsphere.security.doc%2FGUID-18071E9A-EED1-4968-8D51-E0B4F526FDA3.html).
+For additional information on roles and permissions, please refer to official
+VMware documentation.
+
+## Virtual Machine Customization
+
+Guest Operating Systems can be configured using
+[customizations](https://pubs.vmware.com/vsphere-50/index.jsp#com.vmware.vsphere.vm_admin.doc_50/GUID-80F3F5B5-F795-45F1-B0FA-3709978113D5.html),
+in order to set properties such as domain and hostname. This mechanism
+is not compatible with all operating systems, however. A list of compatible
+operating systems can be found
+[here](http://partnerweb.vmware.com/programs/guestOS/guest-os-customization-matrix.pdf).
+
+If customization is attempted on an operating system which is not supported, Terraform will
+create the virtual machine, but fail with the following error message:
+
+```
+Customization of the guest operating system 'debian6_64Guest' is not
+supported in this configuration. Microsoft Vista (TM) and Linux guests with
+Logical Volume Manager are supported only for recent ESX host and VMware Tools
+versions. Refer to vCenter documentation for supported configurations.
+```
+
+In order to skip the customization step for unsupported operating systems, use
+the `skip_customization` argument on the virtual machine resource.
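+
+A brief sketch of how this might look (the name, template, and network label
+are illustrative):
+
+```
+resource "vsphere_virtual_machine" "unsupported_guest" {
+  name               = "custom-linux-vm"
+  vcpu               = 2
+  memory             = 4096
+  skip_customization = true
+
+  network_interface {
+    label = "VM Network"
+  }
+
+  disk {
+    template = "Templates/other3xLinux64"
+  }
+}
+```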
+
## Acceptance Tests
The VMware vSphere provider's acceptance tests require the above provider
configuration fields to be set using the documented environment variables.
-In addition, the following environment variables are used in tests, and must be set to valid values for your VMware vSphere environment:
+In addition, the following environment variables are used in tests, and must be
+set to valid values for your VMware vSphere environment:
- * VSPHERE\_NETWORK\_GATEWAY
- * VSPHERE\_NETWORK\_IP\_ADDRESS
+ * VSPHERE\_IPV4\_GATEWAY
+ * VSPHERE\_IPV4\_ADDRESS
+ * VSPHERE\_IPV6\_GATEWAY
+ * VSPHERE\_IPV6\_ADDRESS
* VSPHERE\_NETWORK\_LABEL
* VSPHERE\_NETWORK\_LABEL\_DHCP
* VSPHERE\_TEMPLATE
@@ -89,6 +161,12 @@ The following environment variables depend on your vSphere environment:
* VSPHERE\_RESOURCE\_POOL
* VSPHERE\_DATASTORE
+The following additional environment variables are needed for running the
+"Mount ISO as CDROM media" acceptance tests.
+
+ * VSPHERE\_CDROM\_DATASTORE
+ * VSPHERE\_CDROM\_PATH
+
These are used to set and verify attributes on the `vsphere_virtual_machine`
resource in tests.
@@ -98,3 +176,5 @@ Once all these variables are in place, the tests can be run like this:
```
make testacc TEST=./builtin/providers/vsphere
```
+
+
diff --git a/website/source/docs/providers/vsphere/r/file.html.markdown b/website/source/docs/providers/vsphere/r/file.html.markdown
new file mode 100644
index 000000000000..023c4321e447
--- /dev/null
+++ b/website/source/docs/providers/vsphere/r/file.html.markdown
@@ -0,0 +1,30 @@
+---
+layout: "vsphere"
+page_title: "VMware vSphere: vsphere_file"
+sidebar_current: "docs-vsphere-resource-file"
+description: |-
+  Provides a VMware vSphere virtual machine file resource. This can be used to upload files (e.g. vmdk disks) from the Terraform host machine to a remote vSphere datastore.
+---
+
+# vsphere\_file
+
+Provides a VMware vSphere virtual machine file resource. This can be used to upload files (e.g. vmdk disks) from the Terraform host machine to a remote vSphere datastore.
+
+## Example Usage
+
+```
+resource "vsphere_file" "ubuntu_disk" {
+ datastore = "local"
+ source_file = "/home/ubuntu/my_disks/custom_ubuntu.vmdk"
+ destination_file = "/my_path/disks/custom_ubuntu.vmdk"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `source_file` - (Required) The path to the file on the Terraform host that will be uploaded to vSphere.
+* `destination_file` - (Required) The path on vSphere where the file should be uploaded.
+* `datacenter` - (Optional) The name of a Datacenter in which the file will be created/uploaded.
+* `datastore` - (Required) The name of the Datastore in which to create/upload the file.
diff --git a/website/source/docs/providers/vsphere/r/virtual_machine.html.markdown b/website/source/docs/providers/vsphere/r/virtual_machine.html.markdown
index 9812a0aed870..4268d6486271 100644
--- a/website/source/docs/providers/vsphere/r/virtual_machine.html.markdown
+++ b/website/source/docs/providers/vsphere/r/virtual_machine.html.markdown
@@ -29,6 +29,34 @@ resource "vsphere_virtual_machine" "web" {
}
```
+## Example Usage VMware Cluster
+
+```
+resource "vsphere_virtual_machine" "lb" {
+ name = "lb01"
+ folder = "Loadbalancers"
+ vcpu = 2
+ memory = 4096
+ domain = "MYDOMAIN"
+ datacenter = "EAST"
+ cluster = "Production Cluster"
+ resource_pool = "Production Cluster/Resources/Production Servers"
+
+ gateway = "10.20.30.254"
+
+ network_interface {
+ label = "10_20_30_VMNet"
+ ipv4_address = "10.20.30.40"
+ ipv4_prefix_length = "24"
+ }
+
+ disk {
+ datastore = "EAST/VMFS01-EAST"
+ template = "Templates/Centos7"
+ }
+}
+```
+
## Argument Reference
The following arguments are supported:
@@ -36,39 +64,68 @@ The following arguments are supported:
* `name` - (Required) The virtual machine name
* `vcpu` - (Required) The number of virtual CPUs to allocate to the virtual machine
* `memory` - (Required) The amount of RAM (in MB) to allocate to the virtual machine
+* `memory_reservation` - (Optional) The amount of RAM (in MB) to reserve as physical memory for the virtual machine; defaults to 0 (no reservation)
* `datacenter` - (Optional) The name of a Datacenter in which to launch the virtual machine
* `cluster` - (Optional) Name of a Cluster in which to launch the virtual machine
-* `resource_pool` (Optional) The name of a Resource Pool in which to launch the virtual machine
-* `gateway` - (Optional) Gateway IP address to use for all network interfaces
+* `resource_pool` - (Optional) The name of a Resource Pool in which to launch the virtual machine. Requires full path (see cluster example).
+* `gateway` - __Deprecated, please use `network_interface.ipv4_gateway` instead__.
* `domain` - (Optional) A FQDN for the virtual machine; defaults to "vsphere.local"
-* `time_zone` - (Optional) The [time zone](https://www.vmware.com/support/developer/vc-sdk/visdk41pubs/ApiReference/timezone.html) to set on the virtual machine. Defaults to "Etc/UTC"
+* `time_zone` - (Optional) The [Linux](https://www.vmware.com/support/developer/vc-sdk/visdk41pubs/ApiReference/timezone.html) or [Windows](https://msdn.microsoft.com/en-us/library/ms912391.aspx) time zone to set on the virtual machine. Defaults to "Etc/UTC"
* `dns_suffixes` - (Optional) List of name resolution suffixes for the virtual network adapter
* `dns_servers` - (Optional) List of DNS servers for the virtual network adapter; defaults to 8.8.8.8, 8.8.4.4
* `network_interface` - (Required) Configures virtual network interfaces; see [Network Interfaces](#network-interfaces) below for details.
* `disk` - (Required) Configures virtual disks; see [Disks](#disks) below for details
+* `cdrom` - (Optional) Configures a CDROM device and mounts an image as its media; see [CDROM](#cdrom) below for more details.
* `boot_delay` - (Optional) Time in seconds to wait for machine network to be ready.
+* `windows_opt_config` - (Optional) Extra options for clones of Windows machines.
+* `linked_clone` - (Optional) Specifies if the new machine is a [linked clone](https://www.vmware.com/support/ws5/doc/ws_clone_overview.html#wp1036396) of another machine or not.
* `custom_configuration_parameters` - (Optional) Map of values that is set as virtual machine custom configurations.
+* `skip_customization` - (Optional) Skip virtual machine customization (useful if the OS is not in VMware's guest OS support matrix, e.g. "other3xLinux64Guest").
The `network_interface` block supports:
* `label` - (Required) Label to assign to this network interface
-* `ipv4_address` - (Optional) Static IP to assign to this network interface. Interface will use DHCP if this is left blank. Currently only IPv4 IP addresses are supported.
-* `ipv4_prefix_length` - (Optional) prefix length to use when statically assigning an IP.
+* `ipv4_address` - (Optional) Static IPv4 to assign to this network interface. Interface will use DHCP if this is left blank.
+* `ipv4_prefix_length` - (Optional) prefix length to use when statically assigning an IPv4 address.
+* `ipv4_gateway` - (Optional) IPv4 gateway IP address to use.
+* `ipv6_address` - (Optional) Static IPv6 to assign to this network interface. Interface will use DHCPv6 if this is left blank.
+* `ipv6_prefix_length` - (Optional) prefix length to use when statically assigning an IPv6.
+* `ipv6_gateway` - (Optional) IPv6 gateway IP address to use.
The following arguments are maintained for backwards compatibility and may be
removed in a future version:
-* `ip_address` - __Deprecated, please use `ipv4_address` instead_.
-* `subnet_mask` - __Deprecated, please use `ipv4_prefix_length` instead_.
+* `ip_address` - __Deprecated, please use `ipv4_address` instead__.
+* `subnet_mask` - __Deprecated, please use `ipv4_prefix_length` instead__.
+
+The `windows_opt_config` block supports:
+* `product_key` - (Optional) Serial number for new installation of Windows. This serial number is ignored if the original guest operating system was installed using a volume-licensed CD.
+* `admin_password` - (Optional) The password for the new `administrator` account. Omit for passwordless admin (using `""` does not work).
+* `domain` - (Optional) Domain that the new machine will be placed into. If `domain`, `domain_user`, and `domain_user_password` are not all set, all three will be ignored.
+* `domain_user` - (Optional) User that is a member of the specified domain.
+* `domain_user_password` - (Optional) Password for domain user, in plain text.
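+
+A brief sketch of a Windows clone using this block (all values are
+illustrative placeholders):
+
+```
+resource "vsphere_virtual_machine" "windows_vm" {
+  name   = "windows-vm"
+  vcpu   = 2
+  memory = 4096
+
+  windows_opt_config {
+    product_key          = "XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"
+    admin_password       = "ChangeMe123!"
+    domain               = "example.local"
+    domain_user          = "join-user"
+    domain_user_password = "ChangeMe123!"
+  }
+
+  network_interface {
+    label = "VM Network"
+  }
+
+  disk {
+    template = "Templates/Windows2012R2"
+  }
+}
+```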
+
+
+## Disks
The `disk` block supports:
-* `template` - (Required if size not provided) Template for this disk.
+* `template` - (Required if size and vmdk are not provided) Template for this disk.
* `datastore` - (Optional) Datastore for this disk
-* `size` - (Required if template not provided) Size of this disk (in GB).
+* `size` - (Required if template and vmdk are not provided) Size of this disk (in GB).
* `iops` - (Optional) Number of virtual iops to allocate for this disk.
* `type` - (Optional) 'eager_zeroed' (the default), or 'thin' are supported options.
+* `vmdk` - (Required if template and size not provided) Path to a vmdk in a vSphere datastore.
+* `bootable` - (Optional) Set to 'true' if a vmdk was given and it should attempt to boot after creation.
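+
+For example, attaching an existing bootable vmdk instead of cloning from a
+template might look like this (the datastore and vmdk path are illustrative):
+
+```
+resource "vsphere_virtual_machine" "from_vmdk" {
+  name   = "vmdk-vm"
+  vcpu   = 2
+  memory = 4096
+
+  network_interface {
+    label = "VM Network"
+  }
+
+  disk {
+    datastore = "datastore1"
+    vmdk      = "/disks/existing_boot_disk.vmdk"
+    bootable  = true
+  }
+}
+```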
+
+
+## CDROM
+
+The `cdrom` block supports:
+
+* `datastore` - (Required) The name of the datastore where the disk image is stored.
+* `path` - (Required) The absolute path to the image within the datastore.
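+
+For example (the datastore and ISO path are illustrative):
+
+```
+resource "vsphere_virtual_machine" "with_cdrom" {
+  name   = "cdrom-vm"
+  vcpu   = 2
+  memory = 4096
+
+  network_interface {
+    label = "VM Network"
+  }
+
+  disk {
+    template = "Templates/Centos7"
+  }
+
+  cdrom {
+    datastore = "iso_datastore"
+    path      = "isos/custom-install.iso"
+  }
+}
+```
+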
## Attributes Reference
diff --git a/website/source/docs/provisioners/connection.html.markdown b/website/source/docs/provisioners/connection.html.markdown
index 52f7be7589a4..103487faf4d9 100644
--- a/website/source/docs/provisioners/connection.html.markdown
+++ b/website/source/docs/provisioners/connection.html.markdown
@@ -64,7 +64,7 @@ provisioner "file" {
* `timeout` - The timeout to wait for the connection to become available. This defaults
to 5 minutes. Should be provided as a string like "30s" or "5m".
-* `script_path` - The path used to copy scripts to meant for remote execution.
+* `script_path` - The path used to copy scripts meant for remote execution.
**Additional arguments only supported by the "ssh" connection type:**
diff --git a/website/source/docs/state/remote/s3.html.md b/website/source/docs/state/remote/s3.html.md
index d7ccb9848bca..84ca213239de 100644
--- a/website/source/docs/state/remote/s3.html.md
+++ b/website/source/docs/state/remote/s3.html.md
@@ -14,6 +14,10 @@ Stores the state as a given key in a given bucket on [Amazon S3](https://aws.ama
make them included in cleartext inside the persisted state.
Use of environment variables or config file is recommended.
+~> **Warning!** It is highly recommended to enable
+[Bucket Versioning](http://docs.aws.amazon.com/AmazonS3/latest/UG/enable-bucket-versioning.html)
+on the S3 bucket to allow for state recovery in the case of accidental deletions and human error.
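+
+If the bucket itself is managed with Terraform, versioning can be enabled
+directly in the configuration, for example (the bucket name is illustrative):
+
+```
+resource "aws_s3_bucket" "terraform_state" {
+  bucket = "my-terraform-state"
+
+  versioning {
+    enabled = true
+  }
+}
+```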
+
## Example Usage
```
diff --git a/website/source/downloads.html.erb b/website/source/downloads.html.erb
index 4331d081e1aa..bf89b1441386 100644
--- a/website/source/downloads.html.erb
+++ b/website/source/downloads.html.erb
@@ -25,8 +25,8 @@ description: |-
verify the checksums signature file
- which has been signed using HashiCorp's GPG key.
- You can also download older versions of Terraform from the releases service.
+ which has been signed using HashiCorp's GPG key.
+ You can also download older versions of Terraform from the releases service.
Checkout the v<%= latest_version %> CHANGELOG for information on the latest release.
@@ -53,7 +53,7 @@ description: |-
diff --git a/website/source/intro/getting-started/remote.html.markdown b/website/source/intro/getting-started/remote.html.markdown
index ed5b1b803ed8..9ad3e9279f6a 100644
--- a/website/source/intro/getting-started/remote.html.markdown
+++ b/website/source/intro/getting-started/remote.html.markdown
@@ -12,8 +12,8 @@ from a local machine. This is great for testing and development,
however in production environments it is more responsible to run
Terraform remotely and store a master Terraform state remotely.
-[Atlas](https://atlas.hashicorp.com/?utm_source=oss&utm_medium=getting-started&utm_campaign=terraform)
-is HashiCorp's solution for Terraform remote runs and
+[Atlas](https://atlas.hashicorp.com/?utm_source=oss&utm_medium=getting-started&utm_campaign=terraform)
+is HashiCorp's solution for remote Terraform runs and
infrastructure version control. Running Terraform
in Atlas allows teams to easily version, audit, and collaborate
on infrastructure changes. Each proposed change generates
@@ -29,9 +29,9 @@ from long-running Terraform processes.
You can learn how to use Terraform remotely with our [interactive tutorial](https://atlas.hashicorp.com/tutorial/terraform/?utm_source=oss&utm_medium=getting-started&utm_campaign=terraform)
or you can follow the outlined steps below.
-First, If you don't have an Atlas account, you can [create an account here](https://atlas.hashicorp.com/account/new?utm_source=oss&utm_medium=getting-started&utm_campaign=terraform).
+First, if you don't have an Atlas account, you can [create an account here](https://atlas.hashicorp.com/account/new?utm_source=oss&utm_medium=getting-started&utm_campaign=terraform).
-In order for the Terraform CLI to gain access to your Atlas account you're going to need to generate an access key. From the main menu, select your username in the top right corner to access your profile. Under `Personal`, click on the `Tokens` tab and hit generate.
+In order for the Terraform CLI to gain access to your Atlas account you're going to need to generate an access key. From the main menu, select your username in the top right corner to access your profile. Under `Personal`, click on the `Tokens` tab and hit generate.
For the purposes of this tutorial you can use this token by exporting it to your local shell session:
@@ -41,11 +41,11 @@ $ export ATLAS_TOKEN=ATLAS_ACCESS_TOKEN
Replace `ATLAS_ACCESS_TOKEN` with the token generated earlier
Then configure [Terraform remote state storage](/docs/commands/remote.html) with the command:
-
-```
+
+```
$ terraform remote config -backend-config="name=ATLAS_USERNAME/getting-started"
```
-
+
Replace `ATLAS_USERNAME` with your Atlas username.
Before you [push](/docs/commands/push.html) your Terraform configuration to Atlas you'll need to start a local version control system with at least one commit. Here is an example using `git`.
@@ -70,7 +70,7 @@ infrastructure changes.
Running Terraform in Atlas creates a complete history of
infrastructure changes, a sort of version control
for infrastructure. Similar to application version control
-systems such as Git or Subversion, this makes changes to
+systems such as Git or Subversion, this makes changes to
infrastructure an auditable, repeatable,
and collaborative process. With so much relying on the
stability of your infrastructure, version control is a
diff --git a/website/source/layouts/_announcement-bnr.erb b/website/source/layouts/_announcement-bnr.erb
new file mode 100644
index 000000000000..4773605a682c
--- /dev/null
+++ b/website/source/layouts/_announcement-bnr.erb
@@ -0,0 +1,18 @@
+
diff --git a/website/source/layouts/_footer.erb b/website/source/layouts/_footer.erb
index 009489759984..54a8caf68ce3 100644
--- a/website/source/layouts/_footer.erb
+++ b/website/source/layouts/_footer.erb
@@ -45,7 +45,8 @@
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');
- ga('create', 'UA-53231375-1', 'auto');
+ ga('create', 'UA-53231375-1', 'terraform.io');
+ ga('require', 'linkid');
ga('send', 'pageview');
diff --git a/website/source/layouts/_header.erb b/website/source/layouts/_header.erb
index f6a3533533bb..8d98f7095d3f 100644
--- a/website/source/layouts/_header.erb
+++ b/website/source/layouts/_header.erb
@@ -1,4 +1,5 @@