diff --git a/CHANGELOG.md b/CHANGELOG.md index e5000e02896a..0cf4f2c46e0c 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,4 +1,72 @@ -## 0.4.0 (unreleased) +## 0.5.0 (unreleased) + +IMPROVEMENTS: + + * core: Improve error message on diff mismatch [GH-1501] + +BUG FIXES: + + * core: math on arbitrary variables works if first operand isn't a + numeric primitive. [GH-1381] + * core: avoid unnecessary cycles by pruning tainted destroys from + graph if there are no tainted resources [GH-1475] + * core: fix issue where destroy nodes weren't pruned in specific + edge cases around matching prefixes, which could cause cycles [GH-1527] + * core: fix issue causing diff mismatch errors in certain scenarios during + resource replacement [GH-1515] + * command: remote states with uppercase types work [GH-1356] + * provider/aws: launch configuration ID set after create success [GH-1518] + * provider/openstack: region config is not required [GH-1441] + +## 0.4.2 (April 10, 2015) + +BUG FIXES: + + * core: refresh won't remove outputs from state file [GH-1369] + * core: clarify "unknown variable" error [GH-1480] + * core: properly merge parent provider configs when asking for input + * provider/aws: fix panic possibility if RDS DB name is empty [GH-1460] + * provider/aws: fix issue detecting credentials for some resources [GH-1470] + * provider/google: fix issue causing unresolvable diffs when using legacy + `network` field on `google_compute_instance` [GH-1458] + +## 0.4.1 (April 9, 2015) + +IMPROVEMENTS: + + * provider/aws: Route 53 records can now update `ttl` and `records` attributes + without destroying/creating the record [GH-1396] + * provider/aws: Support changing additional attributes of RDS databases + without forcing a new resource [GH-1382] + +BUG FIXES: + + * core: module paths in ".terraform" are consistent across different + systems so copying your ".terraform" folder works. 
[GH-1418] + * core: don't validate providers too early when nested in a module [GH-1380] + * core: fix race condition in `count.index` interpolation [GH-1454] + * command/push: don't ask for input if terraform.tfvars is present + * command/remote-config: remove spurious error "nil" when initializing + remote state on a new configuration. [GH-1392] + * provider/aws: Fix issue with Route 53 and pre-existing Hosted Zones [GH-1415] + * provider/aws: Fix refresh issue in Route 53 hosted zone [GH-1384] + * provider/aws: Fix issue when changing map-public-ip in Subnets #1234 + * provider/aws: Fix issue finding db subnets [GH-1377] + * provider/aws: Fix issues with `*_block_device` attributes on instances and + launch configs creating unresolvable diffs when certain optional + parameters were omitted from the config [GH-1445] + * provider/aws: Fix issue with `aws_launch_configuration` causing an + unnecessary diff for pre-0.4 environments [GH-1371] + * provider/aws: Fix several related issues with `aws_launch_configuration` + causing unresolvable diffs [GH-1444] + * provider/aws: Fix issue preventing launch configurations from being valid + in EC2 Classic [GH-1412] + * provider/aws: Fix issue in updating Route 53 records on refresh/read. [GH-1430] + * provider/docker: Don't ask for `cert_path` input on every run [GH-1432] + * provider/google: Fix issue causing unresolvable diff on instances with + `network_interface` [GH-1427] + +## 0.4.0 (April 2, 2015) BACKWARDS INCOMPATIBILITIES: @@ -6,20 +74,40 @@ BACKWARDS INCOMPATIBILITIES: the `remote` command: `terraform remote push` and `terraform remote pull`. The old `remote` functionality is now at `terraform remote config`. This consolidates all remote state management under one command. + * Period-prefixed configuration files are now ignored. This might break + existing Terraform configurations if you had period-prefixed files. 
+ * The `block_device` attribute of `aws_instance` has been removed in favor + of three more specific attributes to specify block device mappings: + `root_block_device`, `ebs_block_device`, and `ephemeral_block_device`. + Configurations using the old attribute will generate a validation error + indicating that they must be updated to use the new fields [GH-1045]. FEATURES: * **New provider: `dme` (DNSMadeEasy)** [GH-855] + * **New provider: `docker` (Docker)** - Manage container lifecycle + using the standard Docker API. [GH-855] + * **New provider: `openstack` (OpenStack)** - Interact with the many resources + provided by OpenStack. [GH-924] + * **New feature: `terraform_remote_state` resource** - Reference remote + states from other Terraform runs to use Terraform outputs as inputs + into another Terraform run. * **New command: `taint`** - Manually mark a resource as tainted, causing a destroy and recreate on the next plan/apply. + * **New resource: `aws_vpn_gateway`** [GH-1137] + * **New resource: `aws_elastic_network_interfaces`** [GH-1149] * **Self-variables** can be used to reference the current resource's attributes within a provisioner. Ex. `${self.private_ip_address}` [GH-1033] - * **Continous state** saving during `terraform apply`. The state file is - continously updated as apply is running, meaning that the state is + * **Continuous state** saving during `terraform apply`. The state file is + continuously updated as apply is running, meaning that the state is less likely to become corrupt in a catastrophic case: terraform panic or system killing Terraform. * **Math operations** in interpolations. You can now do things like `${count.index+1}`. [GH-1068] + * **New AWS SDK:** Move to `aws-sdk-go` (hashicorp/aws-sdk-go), + a fork of the official `awslabs` repo. We forked for stability while + `awslabs` refactored the library, and will move back to the officially + supported version in the next release. 
IMPROVEMENTS: @@ -31,10 +119,26 @@ IMPROVEMENTS: * **New config function: `split`** - Split a value based on a delimiter. This is useful for faking lists as parameters to modules. * **New resource: `digitalocean_ssh_key`** [GH-1074] + * config: Expand `~` with homedir in `file()` paths [GH-1338] * core: The serial of the state is only updated if there is an actual change. This will lower the amount of state changing on things like refresh. * core: Autoload `terraform.tfvars.json` as well as `terraform.tfvars` [GH-1030] + * core: `.tf` files that start with a period are now ignored. [GH-1227] + * command/remote-config: After enabling remote state, a `pull` is + automatically done initially. + * providers/google: Add `size` option to disk blocks for instances. [GH-1284] + * providers/aws: Improve support for tagging resources. + * providers/aws: Add a short syntax for Route 53 Record names, e.g. + `www` instead of `www.example.com`. + * providers/aws: Improve dependency violation error handling, when deleting + Internet Gateways or Auto Scaling groups [GH-1325]. + * provider/aws: Add non-destructive updates to AWS RDS. You can now upgrade + `engine_version`, `parameter_group_name`, and `multi_az` without forcing + a new database to be created. [GH-1341] + * providers/aws: Full support for block device mappings on instances and + launch configurations [GH-1045, GH-1364] + * provisioners/remote-exec: SSH agent support. [GH-1208] BUG FIXES: @@ -47,12 +151,31 @@ BUG FIXES: a computed attribute was used as part of a set parameter. [GH-1073] * core: Fix edge case where state containing both "resource" and "resource.0" would ignore the latter completely. [GH-1086] + * core: Modules with a source of a relative file path moving up + directories work properly, i.e. "../a" [GH-1232] * providers/aws: manually deleted VPC removes it from the state * providers/aws: `source_dest_check` regression fixed (now works). 
[GH-1020] - * providers/aws: Longer wait times for DB instances + * providers/aws: Longer wait times for DB instances. + * providers/aws: Longer wait times for route53 records (30 mins). [GH-1164] + * providers/aws: Fix support for TXT records in Route 53. [GH-1213] + * providers/aws: Fix support for wildcard records in Route 53. [GH-1222] + * providers/aws: Fix issue with ignoring the 'self' attribute of a + Security Group rule. [GH-1223] + * providers/aws: Fix issue with `sql_mode` in RDS parameter group always + causing an update. [GH-1225] + * providers/aws: Fix dependency violation with subnets and security groups + [GH-1252] + * providers/aws: Fix issue with refreshing `db_subnet_groups` causing an error + instead of updating state [GH-1254] + * providers/aws: Prevent empty string from being used as default + `health_check_type` [GH-1052] + * providers/aws: Add tags on AWS IG creation, not just on update [GH-1176] * providers/digitalocean: Waits until droplet is ready to be destroyed [GH-1057] * providers/digitalocean: More lenient about 404's while waiting [GH-1062] + * providers/digitalocean: FQDN for domain records in CNAME, MX, NS, etc. + Also fixes invalid updates in plans. [GH-863] * providers/google: Network data in state was not being stored. [GH-1095] + * providers/heroku: Fix panic when config vars block was empty. [GH-1211] PLUGIN CHANGES: @@ -79,7 +202,7 @@ IMPROVEMENTS: * provider/aws: The `aws_db_instance` resource no longer requires both `final_snapshot_identifier` and `skip_final_snapshot`; the presence or absence of the former now implies the latter. [GH-874] - * provider/aws: Avoid unecessary update of `aws_subnet` when + * provider/aws: Avoid unnecessary update of `aws_subnet` when `map_public_ip_on_launch` is not specified in config. 
[GH-898] * provider/aws: Add `apply_method` to `aws_db_parameter_group` [GH-897] * provider/aws: Add `storage_type` to `aws_db_instance` [GH-896] @@ -112,7 +235,7 @@ BUG FIXES: * command/apply: Fix regression where user variables weren't asked [GH-736] * helper/hashcode: Update `hash.String()` to always return a positive index. Fixes issue where specific strings would convert to a negative index - and be ommited when creating Route53 records. [GH-967] + and be omitted when creating Route53 records. [GH-967] * provider/aws: Automatically suffix the Route53 zone name on record names. [GH-312] * provider/aws: Instance should ignore root EBS devices. [GH-877] * provider/aws: Fix `aws_db_instance` to not recreate each time. [GH-874] @@ -518,3 +641,4 @@ BUG FIXES: * Initial release + diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 87f5ca66d8bb..f5554557f5e1 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -53,8 +53,8 @@ If you have never worked with Go before, you will have to complete the following steps in order to be able to compile and test Terraform (or use the Vagrantfile in this repo to stand up a dev VM). -1. Install Go. Make sure the Go version is at least Go 1.2. Terraform will not work with anything less than - Go 1.2. On a Mac, you can `brew install go` to install Go 1.2. +1. Install Go. Make sure the Go version is at least Go 1.4. Terraform will not work with anything less than + Go 1.4. On a Mac, you can `brew install go` to install Go 1.4. 2. Set and export the `GOPATH` environment variable and update your `PATH`. For example, you can add to your `.bash_profile`. 
diff --git a/builtin/bins/provider-docker/main.go b/builtin/bins/provider-docker/main.go new file mode 100644 index 000000000000..a54af4c02f8a --- /dev/null +++ b/builtin/bins/provider-docker/main.go @@ -0,0 +1,12 @@ +package main + +import ( + "github.com/hashicorp/terraform/builtin/providers/docker" + "github.com/hashicorp/terraform/plugin" +) + +func main() { + plugin.Serve(&plugin.ServeOpts{ + ProviderFunc: docker.Provider, + }) +} diff --git a/builtin/bins/provider-docker/main_test.go b/builtin/bins/provider-docker/main_test.go new file mode 100644 index 000000000000..06ab7d0f9a35 --- /dev/null +++ b/builtin/bins/provider-docker/main_test.go @@ -0,0 +1 @@ +package main diff --git a/builtin/bins/provider-openstack/main.go b/builtin/bins/provider-openstack/main.go new file mode 100644 index 000000000000..f897f1c5573b --- /dev/null +++ b/builtin/bins/provider-openstack/main.go @@ -0,0 +1,12 @@ +package main + +import ( + "github.com/hashicorp/terraform/builtin/providers/openstack" + "github.com/hashicorp/terraform/plugin" +) + +func main() { + plugin.Serve(&plugin.ServeOpts{ + ProviderFunc: openstack.Provider, + }) +} diff --git a/builtin/bins/provider-terraform/main.go b/builtin/bins/provider-terraform/main.go new file mode 100644 index 000000000000..21f4da5d2627 --- /dev/null +++ b/builtin/bins/provider-terraform/main.go @@ -0,0 +1,12 @@ +package main + +import ( + "github.com/hashicorp/terraform/builtin/providers/terraform" + "github.com/hashicorp/terraform/plugin" +) + +func main() { + plugin.Serve(&plugin.ServeOpts{ + ProviderFunc: terraform.Provider, + }) +} diff --git a/builtin/bins/provider-terraform/main_test.go b/builtin/bins/provider-terraform/main_test.go new file mode 100644 index 000000000000..06ab7d0f9a35 --- /dev/null +++ b/builtin/bins/provider-terraform/main_test.go @@ -0,0 +1 @@ +package main diff --git a/builtin/providers/aws/autoscaling_tags.go b/builtin/providers/aws/autoscaling_tags.go new file mode 100644 index 000000000000..342caae5453a 
--- /dev/null +++ b/builtin/providers/aws/autoscaling_tags.go @@ -0,0 +1,170 @@ +package aws + +import ( + "bytes" + "fmt" + "log" + + "github.com/hashicorp/aws-sdk-go/aws" + "github.com/hashicorp/aws-sdk-go/gen/autoscaling" + "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/schema" +) + +// autoscalingTagsSchema returns the schema to use for tags. +func autoscalingTagsSchema() *schema.Schema { + return &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "key": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "value": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "propagate_at_launch": &schema.Schema{ + Type: schema.TypeBool, + Required: true, + }, + }, + }, + Set: autoscalingTagsToHash, + } +} + +func autoscalingTagsToHash(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + buf.WriteString(fmt.Sprintf("%s-", m["key"].(string))) + buf.WriteString(fmt.Sprintf("%s-", m["value"].(string))) + buf.WriteString(fmt.Sprintf("%t-", m["propagate_at_launch"].(bool))) + + return hashcode.String(buf.String()) +} + +// setAutoscalingTags is a helper to set the tags for a resource. 
It expects the +// tags field to be named "tag" +func setAutoscalingTags(conn *autoscaling.AutoScaling, d *schema.ResourceData) error { + if d.HasChange("tag") { + oraw, nraw := d.GetChange("tag") + o := setToMapByKey(oraw.(*schema.Set), "key") + n := setToMapByKey(nraw.(*schema.Set), "key") + + resourceID := d.Get("name").(string) + c, r := diffAutoscalingTags( + autoscalingTagsFromMap(o, resourceID), + autoscalingTagsFromMap(n, resourceID), + resourceID) + create := autoscaling.CreateOrUpdateTagsType{ + Tags: c, + } + remove := autoscaling.DeleteTagsType{ + Tags: r, + } + + // Set tags + if len(r) > 0 { + log.Printf("[DEBUG] Removing autoscaling tags: %#v", r) + if err := conn.DeleteTags(&remove); err != nil { + return err + } + } + if len(c) > 0 { + log.Printf("[DEBUG] Creating autoscaling tags: %#v", c) + if err := conn.CreateOrUpdateTags(&create); err != nil { + return err + } + } + } + + return nil +} + +// diffAutoscalingTags takes our tags locally and the ones remotely and returns +// the set of tags that must be created, and the set of tags that must +// be destroyed. +func diffAutoscalingTags(oldTags, newTags []autoscaling.Tag, resourceID string) ([]autoscaling.Tag, []autoscaling.Tag) { + // First, we're creating everything we have + create := make(map[string]interface{}) + for _, t := range newTags { + tag := map[string]interface{}{ + "value": *t.Value, + "propagate_at_launch": *t.PropagateAtLaunch, + } + create[*t.Key] = tag + } + + // Build the list of what to remove + var remove []autoscaling.Tag + for _, t := range oldTags { + old, ok := create[*t.Key].(map[string]interface{}) + + if !ok || old["value"] != *t.Value || old["propagate_at_launch"] != *t.PropagateAtLaunch { + // Delete it! + remove = append(remove, t) + } + } + + return autoscalingTagsFromMap(create, resourceID), remove +} + +// autoscalingTagsFromMap returns the tags for the given map of data. 
+func autoscalingTagsFromMap(m map[string]interface{}, resourceID string) []autoscaling.Tag { + result := make([]autoscaling.Tag, 0, len(m)) + for k, v := range m { + attr := v.(map[string]interface{}) + result = append(result, autoscaling.Tag{ + Key: aws.String(k), + Value: aws.String(attr["value"].(string)), + PropagateAtLaunch: aws.Boolean(attr["propagate_at_launch"].(bool)), + ResourceID: aws.String(resourceID), + ResourceType: aws.String("auto-scaling-group"), + }) + } + + return result +} + +// autoscalingTagsToMap turns the list of tags into a map. +func autoscalingTagsToMap(ts []autoscaling.Tag) map[string]interface{} { + tags := make(map[string]interface{}) + for _, t := range ts { + tag := map[string]interface{}{ + "value": *t.Value, + "propagate_at_launch": *t.PropagateAtLaunch, + } + tags[*t.Key] = tag + } + + return tags +} + +// autoscalingTagDescriptionsToMap turns the list of tags into a map. +func autoscalingTagDescriptionsToMap(ts []autoscaling.TagDescription) map[string]map[string]interface{} { + tags := make(map[string]map[string]interface{}) + for _, t := range ts { + tag := map[string]interface{}{ + "value": *t.Value, + "propagate_at_launch": *t.PropagateAtLaunch, + } + tags[*t.Key] = tag + } + + return tags +} + +func setToMapByKey(s *schema.Set, key string) map[string]interface{} { + result := make(map[string]interface{}) + for _, rawData := range s.List() { + data := rawData.(map[string]interface{}) + result[data[key].(string)] = data + } + + return result +} diff --git a/builtin/providers/aws/autoscaling_tags_test.go b/builtin/providers/aws/autoscaling_tags_test.go new file mode 100644 index 000000000000..7d61e3b18a24 --- /dev/null +++ b/builtin/providers/aws/autoscaling_tags_test.go @@ -0,0 +1,122 @@ +package aws + +import ( + "fmt" + "reflect" + "testing" + + "github.com/hashicorp/aws-sdk-go/gen/autoscaling" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func 
TestDiffAutoscalingTags(t *testing.T) { + cases := []struct { + Old, New map[string]interface{} + Create, Remove map[string]interface{} + }{ + // Basic add/remove + { + Old: map[string]interface{}{ + "Name": map[string]interface{}{ + "value": "bar", + "propagate_at_launch": true, + }, + }, + New: map[string]interface{}{ + "DifferentTag": map[string]interface{}{ + "value": "baz", + "propagate_at_launch": true, + }, + }, + Create: map[string]interface{}{ + "DifferentTag": map[string]interface{}{ + "value": "baz", + "propagate_at_launch": true, + }, + }, + Remove: map[string]interface{}{ + "Name": map[string]interface{}{ + "value": "bar", + "propagate_at_launch": true, + }, + }, + }, + + // Modify + { + Old: map[string]interface{}{ + "Name": map[string]interface{}{ + "value": "bar", + "propagate_at_launch": true, + }, + }, + New: map[string]interface{}{ + "Name": map[string]interface{}{ + "value": "baz", + "propagate_at_launch": false, + }, + }, + Create: map[string]interface{}{ + "Name": map[string]interface{}{ + "value": "baz", + "propagate_at_launch": false, + }, + }, + Remove: map[string]interface{}{ + "Name": map[string]interface{}{ + "value": "bar", + "propagate_at_launch": true, + }, + }, + }, + } + + var resourceID = "sample" + + for i, tc := range cases { + awsTagsOld := autoscalingTagsFromMap(tc.Old, resourceID) + awsTagsNew := autoscalingTagsFromMap(tc.New, resourceID) + + c, r := diffAutoscalingTags(awsTagsOld, awsTagsNew, resourceID) + + cm := autoscalingTagsToMap(c) + rm := autoscalingTagsToMap(r) + if !reflect.DeepEqual(cm, tc.Create) { + t.Fatalf("%d: bad create: \n%#v\n%#v", i, cm, tc.Create) + } + if !reflect.DeepEqual(rm, tc.Remove) { + t.Fatalf("%d: bad remove: \n%#v\n%#v", i, rm, tc.Remove) + } + } +} + +// testAccCheckTags can be used to check the tags on a resource. 
+func testAccCheckAutoscalingTags( + ts *[]autoscaling.TagDescription, key string, expected map[string]interface{}) resource.TestCheckFunc { + return func(s *terraform.State) error { + m := autoscalingTagDescriptionsToMap(*ts) + v, ok := m[key] + if !ok { + return fmt.Errorf("Missing tag: %s", key) + } + + if v["value"] != expected["value"].(string) || + v["propagate_at_launch"] != expected["propagate_at_launch"].(bool) { + return fmt.Errorf("%s: bad value: %s", key, v) + } + + return nil + } +} + +func testAccCheckAutoscalingTagNotExists(ts *[]autoscaling.TagDescription, key string) resource.TestCheckFunc { + return func(s *terraform.State) error { + m := autoscalingTagDescriptionsToMap(*ts) + if _, ok := m[key]; ok { + return fmt.Errorf("Tag exists when it should not: %s", key) + } + + return nil + } +} diff --git a/builtin/providers/aws/config.go b/builtin/providers/aws/config.go index 8bc9adab5634..abcbc412cb4c 100644 --- a/builtin/providers/aws/config.go +++ b/builtin/providers/aws/config.go @@ -3,21 +3,20 @@ package aws import ( "fmt" "log" - "strings" - "unicode" "github.com/hashicorp/terraform/helper/multierror" - "github.com/mitchellh/goamz/aws" - "github.com/mitchellh/goamz/ec2" - awsGo "github.com/hashicorp/aws-sdk-go/aws" + "github.com/hashicorp/aws-sdk-go/aws" "github.com/hashicorp/aws-sdk-go/gen/autoscaling" + "github.com/hashicorp/aws-sdk-go/gen/ec2" "github.com/hashicorp/aws-sdk-go/gen/elb" + "github.com/hashicorp/aws-sdk-go/gen/iam" "github.com/hashicorp/aws-sdk-go/gen/rds" "github.com/hashicorp/aws-sdk-go/gen/route53" "github.com/hashicorp/aws-sdk-go/gen/s3" - awsEC2 "github.com/hashicorp/aws-sdk-go/gen/ec2" + awsSDK "github.com/awslabs/aws-sdk-go/aws" + awsEC2 "github.com/awslabs/aws-sdk-go/service/ec2" ) type Config struct { @@ -29,13 +28,14 @@ type Config struct { type AWSClient struct { ec2conn *ec2.EC2 - awsEC2conn *awsEC2.EC2 elbconn *elb.ELB autoscalingconn *autoscaling.AutoScaling s3conn *s3.S3 r53conn *route53.Route53 region string 
rdsconn *rds.RDS + iamconn *iam.IAM + ec2SDKconn *awsEC2.EC2 } // Client configures and returns a fully initailized AWSClient @@ -45,14 +45,9 @@ func (c *Config) Client() (interface{}, error) { // Get the auth and region. This can fail if keys/regions were not // specified and we're attempting to use the environment. var errs []error - log.Println("[INFO] Building AWS auth structure") - auth, err := c.AWSAuth() - if err != nil { - errs = append(errs, err) - } log.Println("[INFO] Building AWS region structure") - region, err := c.AWSRegion() + err := c.ValidateRegion() if err != nil { errs = append(errs, err) } @@ -62,10 +57,9 @@ func (c *Config) Client() (interface{}, error) { // bucket storage in S3 client.region = c.Region - creds := awsGo.Creds(c.AccessKey, c.SecretKey, c.Token) + log.Println("[INFO] Building AWS auth structure") + creds := aws.DetectCreds(c.AccessKey, c.SecretKey, c.Token) - log.Println("[INFO] Initializing EC2 connection") - client.ec2conn = ec2.New(auth, region) log.Println("[INFO] Initializing ELB connection") client.elbconn = elb.New(creds, c.Region, nil) log.Println("[INFO] Initializing AutoScaling connection") @@ -80,8 +74,15 @@ func (c *Config) Client() (interface{}, error) { // See http://docs.aws.amazon.com/general/latest/gr/sigv4_changes.html log.Println("[INFO] Initializing Route53 connection") client.r53conn = route53.New(creds, "us-east-1", nil) - log.Println("[INFO] Initializing AWS-GO EC2 Connection") - client.awsEC2conn = awsEC2.New(creds, c.Region, nil) + log.Println("[INFO] Initializing EC2 Connection") + client.ec2conn = ec2.New(creds, c.Region, nil) + client.iamconn = iam.New(creds, c.Region, nil) + + sdkCreds := awsSDK.DetectCreds(c.AccessKey, c.SecretKey, c.Token) + client.ec2SDKconn = awsEC2.New(&awsSDK.Config{ + Credentials: sdkCreds, + Region: c.Region, + }) } if len(errs) > 0 { @@ -91,54 +92,17 @@ func (c *Config) Client() (interface{}, error) { return &client, nil } -// AWSAuth returns a valid aws.Auth object for 
access to AWS services, or -// an error if the authentication couldn't be resolved. -// -// TODO(mitchellh): Test in some way. -func (c *Config) AWSAuth() (aws.Auth, error) { - auth, err := aws.GetAuth(c.AccessKey, c.SecretKey) - if err == nil { - // Store the accesskey and secret that we got... - c.AccessKey = auth.AccessKey - c.SecretKey = auth.SecretKey - c.Token = auth.Token - } - - return auth, err -} - // IsValidRegion returns true if the configured region is a valid AWS // region and false if it's not -func (c *Config) IsValidRegion() bool { +func (c *Config) ValidateRegion() error { var regions = [11]string{"us-east-1", "us-west-2", "us-west-1", "eu-west-1", "eu-central-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1", "sa-east-1", "cn-north-1", "us-gov-west-1"} for _, valid := range regions { if c.Region == valid { - return true + return nil } } - return false -} - -// AWSRegion returns the configured region. -// -// TODO(mitchellh): Test in some way. -func (c *Config) AWSRegion() (aws.Region, error) { - if c.Region != "" { - if c.IsValidRegion() { - return aws.Regions[c.Region], nil - } else { - return aws.Region{}, fmt.Errorf("Not a valid region: %s", c.Region) - } - } - - md, err := aws.GetMetaData("placement/availability-zone") - if err != nil { - return aws.Region{}, err - } - - region := strings.TrimRightFunc(string(md), unicode.IsLetter) - return aws.Regions[region], nil + return fmt.Errorf("Not a valid region: %s", c.Region) } diff --git a/builtin/providers/aws/network_acl_entry.go b/builtin/providers/aws/network_acl_entry.go index 8ce88d81a276..e9f62ee127ed 100644 --- a/builtin/providers/aws/network_acl_entry.go +++ b/builtin/providers/aws/network_acl_entry.go @@ -2,11 +2,14 @@ package aws import ( "fmt" - "github.com/mitchellh/goamz/ec2" + "strconv" + + "github.com/awslabs/aws-sdk-go/aws" + "github.com/awslabs/aws-sdk-go/service/ec2" ) -func expandNetworkAclEntries(configured []interface{}, entryType string) ([]ec2.NetworkAclEntry, error) 
{ - entries := make([]ec2.NetworkAclEntry, 0, len(configured)) +func expandNetworkAclEntries(configured []interface{}, entryType string) ([]*ec2.NetworkACLEntry, error) { + entries := make([]*ec2.NetworkACLEntry, 0, len(configured)) for _, eRaw := range configured { data := eRaw.(map[string]interface{}) protocol := data["protocol"].(string) @@ -15,37 +18,36 @@ func expandNetworkAclEntries(configured []interface{}, entryType string) ([]ec2. return nil, fmt.Errorf("Invalid Protocol %s for rule %#v", protocol, data) } p := extractProtocolInteger(data["protocol"].(string)) - e := ec2.NetworkAclEntry{ - Protocol: p, - PortRange: ec2.PortRange{ - From: data["from_port"].(int), - To: data["to_port"].(int), + e := &ec2.NetworkACLEntry{ + Protocol: aws.String(strconv.Itoa(p)), + PortRange: &ec2.PortRange{ + From: aws.Long(int64(data["from_port"].(int))), + To: aws.Long(int64(data["to_port"].(int))), }, - Egress: (entryType == "egress"), - RuleAction: data["action"].(string), - RuleNumber: data["rule_no"].(int), - CidrBlock: data["cidr_block"].(string), + Egress: aws.Boolean((entryType == "egress")), + RuleAction: aws.String(data["action"].(string)), + RuleNumber: aws.Long(int64(data["rule_no"].(int))), + CIDRBlock: aws.String(data["cidr_block"].(string)), } entries = append(entries, e) } - return entries, nil - } -func flattenNetworkAclEntries(list []ec2.NetworkAclEntry) []map[string]interface{} { +func flattenNetworkAclEntries(list []*ec2.NetworkACLEntry) []map[string]interface{} { entries := make([]map[string]interface{}, 0, len(list)) for _, entry := range list { entries = append(entries, map[string]interface{}{ - "from_port": entry.PortRange.From, - "to_port": entry.PortRange.To, - "action": entry.RuleAction, - "rule_no": entry.RuleNumber, - "protocol": extractProtocolString(entry.Protocol), - "cidr_block": entry.CidrBlock, + "from_port": *entry.PortRange.From, + "to_port": *entry.PortRange.To, + "action": *entry.RuleAction, + "rule_no": *entry.RuleNumber, + "protocol": 
*entry.Protocol, + "cidr_block": *entry.CIDRBlock, }) } + return entries } diff --git a/builtin/providers/aws/network_acl_entry_test.go b/builtin/providers/aws/network_acl_entry_test.go index a2d60abb8089..75de66d96f73 100644 --- a/builtin/providers/aws/network_acl_entry_test.go +++ b/builtin/providers/aws/network_acl_entry_test.go @@ -4,10 +4,11 @@ import ( "reflect" "testing" - "github.com/mitchellh/goamz/ec2" + "github.com/awslabs/aws-sdk-go/aws" + "github.com/awslabs/aws-sdk-go/service/ec2" ) -func Test_expandNetworkAclEntry(t *testing.T) { +func Test_expandNetworkACLEntry(t *testing.T) { input := []interface{}{ map[string]interface{}{ "protocol": "tcp", @@ -28,30 +29,28 @@ func Test_expandNetworkAclEntry(t *testing.T) { } expanded, _ := expandNetworkAclEntries(input, "egress") - expected := []ec2.NetworkAclEntry{ - ec2.NetworkAclEntry{ - Protocol: 6, - PortRange: ec2.PortRange{ - From: 22, - To: 22, + expected := []*ec2.NetworkACLEntry{ + &ec2.NetworkACLEntry{ + Protocol: aws.String("6"), + PortRange: &ec2.PortRange{ + From: aws.Long(22), + To: aws.Long(22), }, - RuleAction: "deny", - RuleNumber: 1, - CidrBlock: "0.0.0.0/0", - Egress: true, - IcmpCode: ec2.IcmpCode{Code: 0, Type: 0}, + RuleAction: aws.String("deny"), + RuleNumber: aws.Long(1), + CIDRBlock: aws.String("0.0.0.0/0"), + Egress: aws.Boolean(true), }, - ec2.NetworkAclEntry{ - Protocol: 6, - PortRange: ec2.PortRange{ - From: 443, - To: 443, + &ec2.NetworkACLEntry{ + Protocol: aws.String("6"), + PortRange: &ec2.PortRange{ + From: aws.Long(443), + To: aws.Long(443), }, - RuleAction: "deny", - RuleNumber: 2, - CidrBlock: "0.0.0.0/0", - Egress: true, - IcmpCode: ec2.IcmpCode{Code: 0, Type: 0}, + RuleAction: aws.String("deny"), + RuleNumber: aws.Long(2), + CIDRBlock: aws.String("0.0.0.0/0"), + Egress: aws.Boolean(true), }, } @@ -64,28 +63,28 @@ func Test_expandNetworkAclEntry(t *testing.T) { } -func Test_flattenNetworkAclEntry(t *testing.T) { +func Test_flattenNetworkACLEntry(t *testing.T) { - apiInput := 
[]ec2.NetworkAclEntry{ - ec2.NetworkAclEntry{ - Protocol: 6, - PortRange: ec2.PortRange{ - From: 22, - To: 22, + apiInput := []*ec2.NetworkACLEntry{ + &ec2.NetworkACLEntry{ + Protocol: aws.String("tcp"), + PortRange: &ec2.PortRange{ + From: aws.Long(22), + To: aws.Long(22), }, - RuleAction: "deny", - RuleNumber: 1, - CidrBlock: "0.0.0.0/0", + RuleAction: aws.String("deny"), + RuleNumber: aws.Long(1), + CIDRBlock: aws.String("0.0.0.0/0"), }, - ec2.NetworkAclEntry{ - Protocol: 6, - PortRange: ec2.PortRange{ - From: 443, - To: 443, + &ec2.NetworkACLEntry{ + Protocol: aws.String("tcp"), + PortRange: &ec2.PortRange{ + From: aws.Long(443), + To: aws.Long(443), }, - RuleAction: "deny", - RuleNumber: 2, - CidrBlock: "0.0.0.0/0", + RuleAction: aws.String("deny"), + RuleNumber: aws.Long(2), + CIDRBlock: aws.String("0.0.0.0/0"), }, } flattened := flattenNetworkAclEntries(apiInput) @@ -93,26 +92,26 @@ func Test_flattenNetworkAclEntry(t *testing.T) { expected := []map[string]interface{}{ map[string]interface{}{ "protocol": "tcp", - "from_port": 22, - "to_port": 22, + "from_port": int64(22), + "to_port": int64(22), "cidr_block": "0.0.0.0/0", "action": "deny", - "rule_no": 1, + "rule_no": int64(1), }, map[string]interface{}{ "protocol": "tcp", - "from_port": 443, - "to_port": 443, + "from_port": int64(443), + "to_port": int64(443), "cidr_block": "0.0.0.0/0", "action": "deny", - "rule_no": 2, + "rule_no": int64(2), }, } if !reflect.DeepEqual(flattened, expected) { t.Fatalf( "Got:\n\n%#v\n\nExpected:\n\n%#v\n", - flattened[0], + flattened, expected) } diff --git a/builtin/providers/aws/provider.go b/builtin/providers/aws/provider.go index 0ab2919fd857..50596512e364 100644 --- a/builtin/providers/aws/provider.go +++ b/builtin/providers/aws/provider.go @@ -58,6 +58,7 @@ func Provider() terraform.ResourceProvider { "aws_launch_configuration": resourceAwsLaunchConfiguration(), "aws_main_route_table_association": resourceAwsMainRouteTableAssociation(), "aws_network_acl": 
resourceAwsNetworkAcl(), + "aws_network_interface": resourceAwsNetworkInterface(), "aws_route53_record": resourceAwsRoute53Record(), "aws_route53_zone": resourceAwsRoute53Zone(), "aws_route_table": resourceAwsRouteTable(), @@ -67,6 +68,7 @@ func Provider() terraform.ResourceProvider { "aws_subnet": resourceAwsSubnet(), "aws_vpc": resourceAwsVpc(), "aws_vpc_peering_connection": resourceAwsVpcPeeringConnection(), + "aws_vpn_gateway": resourceAwsVpnGateway(), }, ConfigureFunc: providerConfigure, diff --git a/builtin/providers/aws/resource_aws_autoscaling_group.go b/builtin/providers/aws/resource_aws_autoscaling_group.go index efabb163856f..60b22ff265a9 100644 --- a/builtin/providers/aws/resource_aws_autoscaling_group.go +++ b/builtin/providers/aws/resource_aws_autoscaling_group.go @@ -118,6 +118,8 @@ func resourceAwsAutoscalingGroup() *schema.Resource { return hashcode.String(v.(string)) }, }, + + "tag": autoscalingTagsSchema(), }, } } @@ -133,11 +135,16 @@ func resourceAwsAutoscalingGroupCreate(d *schema.ResourceData, meta interface{}) autoScalingGroupOpts.AvailabilityZones = expandStringList( d.Get("availability_zones").(*schema.Set).List()) + if v, ok := d.GetOk("tag"); ok { + autoScalingGroupOpts.Tags = autoscalingTagsFromMap( + setToMapByKey(v.(*schema.Set), "key"), d.Get("name").(string)) + } + if v, ok := d.GetOk("default_cooldown"); ok { autoScalingGroupOpts.DefaultCooldown = aws.Integer(v.(int)) } - if v, ok := d.GetOk("health_check"); ok && v.(string) != "" { + if v, ok := d.GetOk("health_check_type"); ok && v.(string) != "" { autoScalingGroupOpts.HealthCheckType = aws.String(v.(string)) } @@ -186,15 +193,16 @@ func resourceAwsAutoscalingGroupRead(d *schema.ResourceData, meta interface{}) e } d.Set("availability_zones", g.AvailabilityZones) - d.Set("default_cooldown", *g.DefaultCooldown) - d.Set("desired_capacity", *g.DesiredCapacity) - d.Set("health_check_grace_period", *g.HealthCheckGracePeriod) - d.Set("health_check_type", *g.HealthCheckType) - 
d.Set("launch_configuration", *g.LaunchConfigurationName) + d.Set("default_cooldown", g.DefaultCooldown) + d.Set("desired_capacity", g.DesiredCapacity) + d.Set("health_check_grace_period", g.HealthCheckGracePeriod) + d.Set("health_check_type", g.HealthCheckType) + d.Set("launch_configuration", g.LaunchConfigurationName) d.Set("load_balancers", g.LoadBalancerNames) - d.Set("min_size", *g.MinSize) - d.Set("max_size", *g.MaxSize) - d.Set("name", *g.AutoScalingGroupName) + d.Set("min_size", g.MinSize) + d.Set("max_size", g.MaxSize) + d.Set("name", g.AutoScalingGroupName) + d.Set("tag", g.Tags) d.Set("vpc_zone_identifier", strings.Split(*g.VPCZoneIdentifier, ",")) d.Set("termination_policies", g.TerminationPolicies) @@ -224,6 +232,12 @@ func resourceAwsAutoscalingGroupUpdate(d *schema.ResourceData, meta interface{}) opts.MaxSize = aws.Integer(d.Get("max_size").(int)) } + if err := setAutoscalingTags(autoscalingconn, d); err != nil { + return err + } else { + d.SetPartial("tag") + } + log.Printf("[DEBUG] AutoScaling Group update configuration: %#v", opts) err := autoscalingconn.UpdateAutoScalingGroup(&opts) if err != nil { @@ -273,7 +287,12 @@ func resourceAwsAutoscalingGroupDelete(d *schema.ResourceData, meta interface{}) return err } - return nil + return resource.Retry(5*time.Minute, func() error { + if g, _ = getAwsAutoscalingGroup(d, meta); g != nil { + return fmt.Errorf("Auto Scaling Group still exists") + } + return nil + }) } func getAwsAutoscalingGroup( diff --git a/builtin/providers/aws/resource_aws_autoscaling_group_test.go b/builtin/providers/aws/resource_aws_autoscaling_group_test.go index d940bb40d653..09a4d73a6cf8 100644 --- a/builtin/providers/aws/resource_aws_autoscaling_group_test.go +++ b/builtin/providers/aws/resource_aws_autoscaling_group_test.go @@ -2,6 +2,7 @@ package aws import ( "fmt" + "reflect" "testing" "github.com/hashicorp/aws-sdk-go/aws" @@ -53,6 +54,44 @@ func TestAccAWSAutoScalingGroup_basic(t *testing.T) { resource.TestCheckResourceAttr( 
"aws_autoscaling_group.bar", "desired_capacity", "5"), testLaunchConfigurationName("aws_autoscaling_group.bar", &lc), + testAccCheckAutoscalingTags(&group.Tags, "Bar", map[string]interface{}{ + "value": "bar-foo", + "propagate_at_launch": true, + }), + ), + }, + }, + }) +} + +func TestAccAWSAutoScalingGroup_tags(t *testing.T) { + var group autoscaling.AutoScalingGroup + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSAutoScalingGroupConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAutoScalingGroupExists("aws_autoscaling_group.bar", &group), + testAccCheckAutoscalingTags(&group.Tags, "Foo", map[string]interface{}{ + "value": "foo-bar", + "propagate_at_launch": true, + }), + ), + }, + + resource.TestStep{ + Config: testAccAWSAutoScalingGroupConfigUpdate, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAutoScalingGroupExists("aws_autoscaling_group.bar", &group), + testAccCheckAutoscalingTagNotExists(&group.Tags, "Foo"), + testAccCheckAutoscalingTags(&group.Tags, "Bar", map[string]interface{}{ + "value": "bar-foo", + "propagate_at_launch": true, + }), ), }, }, @@ -130,7 +169,7 @@ func testAccCheckAWSAutoScalingGroupAttributes(group *autoscaling.AutoScalingGro } if *group.HealthCheckType != "ELB" { - return fmt.Errorf("Bad health_check_type: %s", *group.HealthCheckType) + return fmt.Errorf("Bad health_check_type,\nexpected: %s\ngot: %s", "ELB", *group.HealthCheckType) } if *group.HealthCheckGracePeriod != 300 { @@ -145,6 +184,21 @@ func testAccCheckAWSAutoScalingGroupAttributes(group *autoscaling.AutoScalingGro return fmt.Errorf("Bad launch configuration name: %s", *group.LaunchConfigurationName) } + t := autoscaling.TagDescription{ + Key: aws.String("Foo"), + Value: aws.String("foo-bar"), + PropagateAtLaunch: aws.Boolean(true), + ResourceType: 
aws.String("auto-scaling-group"), + ResourceID: group.AutoScalingGroupName, + } + + if !reflect.DeepEqual(group.Tags[0], t) { + return fmt.Errorf( + "Got:\n\n%#v\n\nExpected:\n\n%#v\n", + group.Tags[0], + t) + } + return nil } } @@ -226,6 +280,12 @@ resource "aws_autoscaling_group" "bar" { termination_policies = ["OldestInstance"] launch_configuration = "${aws_launch_configuration.foobar.name}" + + tag { + key = "Foo" + value = "foo-bar" + propagate_at_launch = true + } } ` @@ -253,6 +313,12 @@ resource "aws_autoscaling_group" "bar" { force_delete = true launch_configuration = "${aws_launch_configuration.new.name}" + + tag { + key = "Bar" + value = "bar-foo" + propagate_at_launch = true + } } ` diff --git a/builtin/providers/aws/resource_aws_db_instance.go b/builtin/providers/aws/resource_aws_db_instance.go index e99744a0f31b..267f51aef0b8 100644 --- a/builtin/providers/aws/resource_aws_db_instance.go +++ b/builtin/providers/aws/resource_aws_db_instance.go @@ -6,6 +6,7 @@ import ( "time" "github.com/hashicorp/aws-sdk-go/aws" + "github.com/hashicorp/aws-sdk-go/gen/iam" "github.com/hashicorp/aws-sdk-go/gen/rds" "github.com/hashicorp/terraform/helper/hashcode" @@ -17,6 +18,7 @@ func resourceAwsDbInstance() *schema.Resource { return &schema.Resource{ Create: resourceAwsDbInstanceCreate, Read: resourceAwsDbInstanceRead, + Update: resourceAwsDbInstanceUpdate, Delete: resourceAwsDbInstanceDelete, Schema: map[string]*schema.Schema{ @@ -35,7 +37,6 @@ func resourceAwsDbInstance() *schema.Resource { "password": &schema.Schema{ Type: schema.TypeString, Required: true, - ForceNew: true, }, "engine": &schema.Schema{ @@ -47,7 +48,6 @@ func resourceAwsDbInstance() *schema.Resource { "engine_version": &schema.Schema{ Type: schema.TypeString, Required: true, - ForceNew: true, }, "storage_encrypted": &schema.Schema{ @@ -59,14 +59,12 @@ func resourceAwsDbInstance() *schema.Resource { "allocated_storage": &schema.Schema{ Type: schema.TypeInt, Required: true, - ForceNew: true, }, 
"storage_type": &schema.Schema{ Type: schema.TypeString, Optional: true, Computed: true, - ForceNew: true, }, "identifier": &schema.Schema{ @@ -78,7 +76,6 @@ func resourceAwsDbInstance() *schema.Resource { "instance_class": &schema.Schema{ Type: schema.TypeString, Required: true, - ForceNew: true, }, "availability_zone": &schema.Schema{ @@ -91,7 +88,6 @@ func resourceAwsDbInstance() *schema.Resource { "backup_retention_period": &schema.Schema{ Type: schema.TypeInt, Optional: true, - ForceNew: true, Default: 1, }, @@ -99,27 +95,23 @@ func resourceAwsDbInstance() *schema.Resource { Type: schema.TypeString, Optional: true, Computed: true, - ForceNew: true, }, "iops": &schema.Schema{ Type: schema.TypeInt, Optional: true, - ForceNew: true, }, "maintenance_window": &schema.Schema{ Type: schema.TypeString, Optional: true, Computed: true, - ForceNew: true, }, "multi_az": &schema.Schema{ Type: schema.TypeBool, Optional: true, Computed: true, - ForceNew: true, }, "port": &schema.Schema{ @@ -138,6 +130,7 @@ func resourceAwsDbInstance() *schema.Resource { "vpc_security_group_ids": &schema.Schema{ Type: schema.TypeSet, Optional: true, + Computed: true, Elem: &schema.Schema{Type: schema.TypeString}, Set: func(v interface{}) int { return hashcode.String(v.(string)) @@ -162,13 +155,13 @@ func resourceAwsDbInstance() *schema.Resource { Type: schema.TypeString, Optional: true, ForceNew: true, + Computed: true, }, "parameter_group_name": &schema.Schema{ Type: schema.TypeString, Optional: true, Computed: true, - ForceNew: true, }, "address": &schema.Schema{ @@ -185,12 +178,24 @@ func resourceAwsDbInstance() *schema.Resource { Type: schema.TypeString, Computed: true, }, + + // apply_immediately is used to determine when the update modifications + // take place. 
+ // See http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Modifying.html + "apply_immediately": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Computed: true, + }, + + "tags": tagsSchema(), }, } } func resourceAwsDbInstanceCreate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).rdsconn + tags := tagsFromMapRDS(d.Get("tags").(map[string]interface{})) opts := rds.CreateDBInstanceMessage{ AllocatedStorage: aws.Integer(d.Get("allocated_storage").(int)), DBInstanceClass: aws.String(d.Get("instance_class").(string)), @@ -201,6 +206,7 @@ func resourceAwsDbInstanceCreate(d *schema.ResourceData, meta interface{}) error Engine: aws.String(d.Get("engine").(string)), EngineVersion: aws.String(d.Get("engine_version").(string)), StorageEncrypted: aws.Boolean(d.Get("storage_encrypted").(bool)), + Tags: tags, } if attr, ok := d.GetOk("storage_type"); ok { @@ -304,29 +310,65 @@ func resourceAwsDbInstanceRead(d *schema.ResourceData, meta interface{}) error { return nil } - d.Set("name", *v.DBName) - d.Set("username", *v.MasterUsername) - d.Set("engine", *v.Engine) - d.Set("engine_version", *v.EngineVersion) - d.Set("allocated_storage", *v.AllocatedStorage) - d.Set("storage_type", *v.StorageType) - d.Set("instance_class", *v.DBInstanceClass) - d.Set("availability_zone", *v.AvailabilityZone) - d.Set("backup_retention_period", *v.BackupRetentionPeriod) - d.Set("backup_window", *v.PreferredBackupWindow) - d.Set("maintenance_window", *v.PreferredMaintenanceWindow) - d.Set("multi_az", *v.MultiAZ) - d.Set("port", *v.Endpoint.Port) - d.Set("db_subnet_group_name", *v.DBSubnetGroup.DBSubnetGroupName) + d.Set("name", v.DBName) + d.Set("username", v.MasterUsername) + d.Set("engine", v.Engine) + d.Set("engine_version", v.EngineVersion) + d.Set("allocated_storage", v.AllocatedStorage) + d.Set("storage_type", v.StorageType) + d.Set("instance_class", v.DBInstanceClass) + d.Set("availability_zone", v.AvailabilityZone) + 
d.Set("backup_retention_period", v.BackupRetentionPeriod)
+	d.Set("backup_window", v.PreferredBackupWindow)
+	d.Set("maintenance_window", v.PreferredMaintenanceWindow)
+	d.Set("multi_az", v.MultiAZ)
+	if v.DBSubnetGroup != nil {
+		d.Set("db_subnet_group_name", v.DBSubnetGroup.DBSubnetGroupName)
+	}
 
 	if len(v.DBParameterGroups) > 0 {
-		d.Set("parameter_group_name", *v.DBParameterGroups[0].DBParameterGroupName)
+		d.Set("parameter_group_name", v.DBParameterGroups[0].DBParameterGroupName)
 	}
 
-	d.Set("address", *v.Endpoint.Address)
-	d.Set("endpoint", fmt.Sprintf("%s:%d", *v.Endpoint.Address, *v.Endpoint.Port))
-	d.Set("status", *v.DBInstanceStatus)
-	d.Set("storage_encrypted", *v.StorageEncrypted)
+	if v.Endpoint != nil {
+		d.Set("port", v.Endpoint.Port)
+		d.Set("address", v.Endpoint.Address)
+
+		if v.Endpoint.Address != nil && v.Endpoint.Port != nil {
+			d.Set("endpoint",
+				fmt.Sprintf("%s:%d", *v.Endpoint.Address, *v.Endpoint.Port))
+		}
+	}
+
+	d.Set("status", v.DBInstanceStatus)
+	d.Set("storage_encrypted", v.StorageEncrypted)
+
+	// Retrieve and set tags for the resource
+	conn := meta.(*AWSClient).rdsconn
+	arn, err := buildRDSARN(d, meta)
+	if err != nil {
+		name := ""
+		if v.DBName != nil && *v.DBName != "" {
+			name = *v.DBName
+		}
+
+		log.Printf("[DEBUG] Error building ARN for DB Instance, not setting Tags for DB %s", name)
+	} else {
+		resp, err := conn.ListTagsForResource(&rds.ListTagsForResourceMessage{
+			ResourceName: aws.String(arn),
+		})
+
+		if err != nil {
+			log.Printf("[DEBUG] Error retrieving tags for ARN: %s", arn)
+		}
+
+		var dt []rds.Tag
+		if resp != nil && len(resp.TagList) > 0 {
+			dt = resp.TagList
+		}
+		d.Set("tags", tagsToMapRDS(dt))
+	}
 
 	// Create an empty schema.Set to hold all vpc security group ids
 	ids := &schema.Set{
@@ -390,6 +432,99 @@ func resourceAwsDbInstanceDelete(d *schema.ResourceData, meta interface{}) error
 	return nil
 }
 
+func resourceAwsDbInstanceUpdate(d *schema.ResourceData, meta interface{}) error {
+	conn := meta.(*AWSClient).rdsconn
+
+	
d.Partial(true) + + req := &rds.ModifyDBInstanceMessage{ + ApplyImmediately: aws.Boolean(d.Get("apply_immediately").(bool)), + DBInstanceIdentifier: aws.String(d.Id()), + } + d.SetPartial("apply_immediately") + + if d.HasChange("allocated_storage") { + d.SetPartial("allocated_storage") + req.AllocatedStorage = aws.Integer(d.Get("allocated_storage").(int)) + } + if d.HasChange("backup_retention_period") { + d.SetPartial("backup_retention_period") + req.BackupRetentionPeriod = aws.Integer(d.Get("backup_retention_period").(int)) + } + if d.HasChange("instance_class") { + d.SetPartial("instance_class") + req.DBInstanceClass = aws.String(d.Get("instance_class").(string)) + } + if d.HasChange("parameter_group_name") { + d.SetPartial("parameter_group_name") + req.DBParameterGroupName = aws.String(d.Get("parameter_group_name").(string)) + } + if d.HasChange("engine_version") { + d.SetPartial("engine_version") + req.EngineVersion = aws.String(d.Get("engine_version").(string)) + } + if d.HasChange("iops") { + d.SetPartial("iops") + req.IOPS = aws.Integer(d.Get("iops").(int)) + } + if d.HasChange("backup_window") { + d.SetPartial("backup_window") + req.PreferredBackupWindow = aws.String(d.Get("backup_window").(string)) + } + if d.HasChange("maintenance_window") { + d.SetPartial("maintenance_window") + req.PreferredMaintenanceWindow = aws.String(d.Get("maintenance_window").(string)) + } + if d.HasChange("password") { + d.SetPartial("password") + req.MasterUserPassword = aws.String(d.Get("password").(string)) + } + if d.HasChange("multi_az") { + d.SetPartial("multi_az") + req.MultiAZ = aws.Boolean(d.Get("multi_az").(bool)) + } + if d.HasChange("storage_type") { + d.SetPartial("storage_type") + req.StorageType = aws.String(d.Get("storage_type").(string)) + } + + if d.HasChange("vpc_security_group_ids") { + if attr := d.Get("vpc_security_group_ids").(*schema.Set); attr.Len() > 0 { + var s []string + for _, v := range attr.List() { + s = append(s, v.(string)) + } + 
req.VPCSecurityGroupIDs = s
+		}
+	}
+
+	if d.HasChange("security_group_names") {
+		if attr := d.Get("security_group_names").(*schema.Set); attr.Len() > 0 {
+			var s []string
+			for _, v := range attr.List() {
+				s = append(s, v.(string))
+			}
+			req.DBSecurityGroups = s
+		}
+	}
+
+	log.Printf("[DEBUG] DB Instance Modification request: %#v", req)
+	_, err := conn.ModifyDBInstance(req)
+	if err != nil {
+		return fmt.Errorf("Error modifying DB Instance %s: %s", d.Id(), err)
+	}
+
+	if arn, err := buildRDSARN(d, meta); err == nil {
+		if err := setTagsRDS(conn, d, arn); err != nil {
+			return err
+		} else {
+			d.SetPartial("tags")
+		}
+	}
+	d.Partial(false)
+	return resourceAwsDbInstanceRead(d, meta)
+}
+
 func resourceAwsBbInstanceRetrieve(
 	d *schema.ResourceData, meta interface{}) (*rds.DBInstance, error) {
 	conn := meta.(*AWSClient).rdsconn
@@ -439,3 +574,16 @@ func resourceAwsDbInstanceStateRefreshFunc(
 		return v, *v.DBInstanceStatus, nil
 	}
 }
+
+func buildRDSARN(d *schema.ResourceData, meta interface{}) (string, error) {
+	iamconn := meta.(*AWSClient).iamconn
+	region := meta.(*AWSClient).region
+	// A zero-value GetUserRequest{} defers to the currently logged in user
+	resp, err := iamconn.GetUser(&iam.GetUserRequest{})
+	if err != nil {
+		return "", err
+	}
+	user := resp.User
+	arn := fmt.Sprintf("arn:aws:rds:%s:%s:db:%s", region, *user.UserID, d.Id())
+	return arn, nil
+}
diff --git a/builtin/providers/aws/resource_aws_db_instance_test.go b/builtin/providers/aws/resource_aws_db_instance_test.go
index 3141990e664b..ba86d005ad99 100644
--- a/builtin/providers/aws/resource_aws_db_instance_test.go
+++ b/builtin/providers/aws/resource_aws_db_instance_test.go
@@ -2,7 +2,9 @@ package aws
 
 import (
 	"fmt"
+	"math/rand"
 	"testing"
+	"time"
 
 	"github.com/hashicorp/terraform/helper/resource"
 	"github.com/hashicorp/terraform/terraform"
@@ -24,8 +26,6 @@ func TestAccAWSDBInstance(t *testing.T) {
 			Check: resource.ComposeTestCheckFunc(
 				testAccCheckAWSDBInstanceExists("aws_db_instance.bar", 
&v),
 				testAccCheckAWSDBInstanceAttributes(&v),
-				resource.TestCheckResourceAttr(
-					"aws_db_instance.bar", "identifier", "foobarbaz-test-terraform"),
 				resource.TestCheckResourceAttr(
 					"aws_db_instance.bar", "allocated_storage", "10"),
 				resource.TestCheckResourceAttr(
@@ -133,9 +133,12 @@ func testAccCheckAWSDBInstanceExists(n string, v *rds.DBInstance) resource.TestC
 	}
 }
 
-const testAccAWSDBInstanceConfig = `
+// Database names cannot collide, and deletion takes so long that making the
+// name a bit random lets us kill a test that's stuck waiting on a delete
+// without blocking the next run.
+var testAccAWSDBInstanceConfig = fmt.Sprintf(`
 resource "aws_db_instance" "bar" {
-	identifier = "foobarbaz-test-terraform"
+	identifier = "foobarbaz-test-terraform-%d"
 
 	allocated_storage = 10
 	engine = "mysql"
@@ -148,5 +151,4 @@ resource "aws_db_instance" "bar" {
 	backup_retention_period = 0
 
 	parameter_group_name = "default.mysql5.6"
-}
-`
+}`, rand.New(rand.NewSource(time.Now().UnixNano())).Int())
diff --git a/builtin/providers/aws/resource_aws_db_parameter_group.go b/builtin/providers/aws/resource_aws_db_parameter_group.go
index cf40d3b26ed4..68c5b52e6c6d 100644
--- a/builtin/providers/aws/resource_aws_db_parameter_group.go
+++ b/builtin/providers/aws/resource_aws_db_parameter_group.go
@@ -4,6 +4,7 @@ import (
 	"bytes"
 	"fmt"
 	"log"
+	"strings"
 	"time"
 
 	"github.com/hashicorp/terraform/helper/hashcode"
@@ -152,7 +153,7 @@ func resourceAwsDbParameterGroupUpdate(d *schema.ResourceData, meta interface{})
 	os := o.(*schema.Set)
 	ns := n.(*schema.Set)
 
-	// Expand the "parameter" set to goamz compat []rds.Parameter
+	// Expand the "parameter" set to aws-sdk-go compat []rds.Parameter
 	parameters, err := expandParameters(ns.Difference(os).List())
 	if err != nil {
 		return err
@@ -220,7 +221,8 @@ func resourceAwsDbParameterHash(v interface{}) int {
 	var buf bytes.Buffer
 	m := v.(map[string]interface{})
 	buf.WriteString(fmt.Sprintf("%s-", m["name"].(string)))
-	
buf.WriteString(fmt.Sprintf("%s-", m["value"].(string)))
+	// Store the value as a lower-case string, to match how we store them in flattenParameters
+	buf.WriteString(fmt.Sprintf("%s-", strings.ToLower(m["value"].(string))))
 	return hashcode.String(buf.String())
 }
diff --git a/builtin/providers/aws/resource_aws_db_subnet_group.go b/builtin/providers/aws/resource_aws_db_subnet_group.go
index d204c5f96e8a..1c1b49a710aa 100644
--- a/builtin/providers/aws/resource_aws_db_subnet_group.go
+++ b/builtin/providers/aws/resource_aws_db_subnet_group.go
@@ -3,6 +3,7 @@ package aws
 import (
 	"fmt"
 	"log"
+	"strings"
 	"time"
 
 	"github.com/hashicorp/aws-sdk-go/aws"
@@ -79,15 +80,31 @@ func resourceAwsDbSubnetGroupRead(d *schema.ResourceData, meta interface{}) erro
 	describeResp, err := rdsconn.DescribeDBSubnetGroups(&describeOpts)
 	if err != nil {
+		if ec2err, ok := err.(aws.APIError); ok && ec2err.Code == "DBSubnetGroupNotFoundFault" {
+			// Update state to indicate the db subnet no longer exists.
+			d.SetId("")
+			return nil
+		}
 		return err
 	}
 
-	if len(describeResp.DBSubnetGroups) != 1 ||
-		*describeResp.DBSubnetGroups[0].DBSubnetGroupName != d.Id() {
+	if len(describeResp.DBSubnetGroups) == 0 {
 		return fmt.Errorf("Unable to find DB Subnet Group: %#v", describeResp.DBSubnetGroups)
 	}
 
-	subnetGroup := describeResp.DBSubnetGroups[0]
+	var subnetGroup rds.DBSubnetGroup
+	for _, s := range describeResp.DBSubnetGroups {
+		// AWS lower-cases the name provided, so we compare lower-cased versions
+		// of the names. We lower-case both our name and theirs in the check,
+		// in case that behavior changes someday.
+		if strings.ToLower(d.Id()) == strings.ToLower(*s.DBSubnetGroupName) {
+			subnetGroup = s
+		}
+	}
+
+	if subnetGroup.DBSubnetGroupName == nil {
+		return fmt.Errorf("Unable to find DB Subnet Group: %#v", describeResp.DBSubnetGroups)
+	}
 
 	d.Set("name", *subnetGroup.DBSubnetGroupName)
 	d.Set("description", *subnetGroup.DBSubnetGroupDescription)
diff --git a/builtin/providers/aws/resource_aws_db_subnet_group_test.go b/builtin/providers/aws/resource_aws_db_subnet_group_test.go
index 2bee3a3ff0df..dd4b2d58f6ed 100644
--- a/builtin/providers/aws/resource_aws_db_subnet_group_test.go
+++ b/builtin/providers/aws/resource_aws_db_subnet_group_test.go
@@ -103,16 +103,22 @@ resource "aws_subnet" "foo" {
 	cidr_block = "10.1.1.0/24"
 	availability_zone = "us-west-2a"
 	vpc_id = "${aws_vpc.foo.id}"
+	tags {
+		Name = "tf-dbsubnet-test-1"
+	}
 }
 
 resource "aws_subnet" "bar" {
 	cidr_block = "10.1.2.0/24"
 	availability_zone = "us-west-2b"
 	vpc_id = "${aws_vpc.foo.id}"
+	tags {
+		Name = "tf-dbsubnet-test-2"
+	}
 }
 
 resource "aws_db_subnet_group" "foo" {
-	name = "foo"
+	name = "FOO"
 	description = "foo description"
 	subnet_ids = ["${aws_subnet.foo.id}", "${aws_subnet.bar.id}"]
 }
diff --git a/builtin/providers/aws/resource_aws_eip.go b/builtin/providers/aws/resource_aws_eip.go
index 103f9bc5afee..de92983e2621 100644
--- a/builtin/providers/aws/resource_aws_eip.go
+++ b/builtin/providers/aws/resource_aws_eip.go
@@ -6,8 +6,8 @@ import (
 	"strings"
 	"time"
 
-	"github.com/hashicorp/aws-sdk-go/aws"
-	"github.com/hashicorp/aws-sdk-go/gen/ec2"
+	"github.com/awslabs/aws-sdk-go/aws"
+	"github.com/awslabs/aws-sdk-go/service/ec2"
 	"github.com/hashicorp/terraform/helper/resource"
 	"github.com/hashicorp/terraform/helper/schema"
 )
@@ -60,7 +60,7 @@ func resourceAwsEip() *schema.Resource {
 }
 
 func resourceAwsEipCreate(d *schema.ResourceData, meta interface{}) error {
-	ec2conn := meta.(*AWSClient).awsEC2conn
+	ec2conn := meta.(*AWSClient).ec2SDKconn
 
 	// By default, we're not in a VPC
 	domainOpt := 
"" @@ -68,7 +68,7 @@ func resourceAwsEipCreate(d *schema.ResourceData, meta interface{}) error { domainOpt = "vpc" } - allocOpts := &ec2.AllocateAddressRequest{ + allocOpts := &ec2.AllocateAddressInput{ Domain: aws.String(domainOpt), } @@ -97,24 +97,24 @@ func resourceAwsEipCreate(d *schema.ResourceData, meta interface{}) error { } func resourceAwsEipRead(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).awsEC2conn + ec2conn := meta.(*AWSClient).ec2SDKconn domain := resourceAwsEipDomain(d) id := d.Id() - assocIds := []string{} - publicIps := []string{} + assocIds := []*string{} + publicIps := []*string{} if domain == "vpc" { - assocIds = []string{id} + assocIds = []*string{aws.String(id)} } else { - publicIps = []string{id} + publicIps = []*string{aws.String(id)} } log.Printf( "[DEBUG] EIP describe configuration: %#v, %#v (domain: %s)", assocIds, publicIps, domain) - req := &ec2.DescribeAddressesRequest{ + req := &ec2.DescribeAddressesInput{ AllocationIDs: assocIds, PublicIPs: publicIps, } @@ -148,7 +148,7 @@ func resourceAwsEipRead(d *schema.ResourceData, meta interface{}) error { } func resourceAwsEipUpdate(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).awsEC2conn + ec2conn := meta.(*AWSClient).ec2SDKconn domain := resourceAwsEipDomain(d) @@ -156,14 +156,14 @@ func resourceAwsEipUpdate(d *schema.ResourceData, meta interface{}) error { if v, ok := d.GetOk("instance"); ok { instanceId := v.(string) - assocOpts := &ec2.AssociateAddressRequest{ + assocOpts := &ec2.AssociateAddressInput{ InstanceID: aws.String(instanceId), PublicIP: aws.String(d.Id()), } // more unique ID conditionals if domain == "vpc" { - assocOpts = &ec2.AssociateAddressRequest{ + assocOpts = &ec2.AssociateAddressInput{ InstanceID: aws.String(instanceId), AllocationID: aws.String(d.Id()), PublicIP: aws.String(""), @@ -181,7 +181,7 @@ func resourceAwsEipUpdate(d *schema.ResourceData, meta interface{}) error { } func 
resourceAwsEipDelete(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).awsEC2conn + ec2conn := meta.(*AWSClient).ec2SDKconn if err := resourceAwsEipRead(d, meta); err != nil { return err @@ -197,11 +197,11 @@ func resourceAwsEipDelete(d *schema.ResourceData, meta interface{}) error { var err error switch resourceAwsEipDomain(d) { case "vpc": - err = ec2conn.DisassociateAddress(&ec2.DisassociateAddressRequest{ + _, err = ec2conn.DisassociateAddress(&ec2.DisassociateAddressInput{ AssociationID: aws.String(d.Get("association_id").(string)), }) case "standard": - err = ec2conn.DisassociateAddress(&ec2.DisassociateAddressRequest{ + _, err = ec2conn.DisassociateAddress(&ec2.DisassociateAddressInput{ PublicIP: aws.String(d.Get("public_ip").(string)), }) } @@ -218,12 +218,12 @@ func resourceAwsEipDelete(d *schema.ResourceData, meta interface{}) error { log.Printf( "[DEBUG] EIP release (destroy) address allocation: %v", d.Id()) - err = ec2conn.ReleaseAddress(&ec2.ReleaseAddressRequest{ + _, err = ec2conn.ReleaseAddress(&ec2.ReleaseAddressInput{ AllocationID: aws.String(d.Id()), }) case "standard": log.Printf("[DEBUG] EIP release (destroy) address: %v", d.Id()) - err = ec2conn.ReleaseAddress(&ec2.ReleaseAddressRequest{ + _, err = ec2conn.ReleaseAddress(&ec2.ReleaseAddressInput{ PublicIP: aws.String(d.Id()), }) } diff --git a/builtin/providers/aws/resource_aws_eip_test.go b/builtin/providers/aws/resource_aws_eip_test.go index 79e88b8f3ba2..5120a9648b04 100644 --- a/builtin/providers/aws/resource_aws_eip_test.go +++ b/builtin/providers/aws/resource_aws_eip_test.go @@ -5,8 +5,8 @@ import ( "strings" "testing" - "github.com/hashicorp/aws-sdk-go/aws" - "github.com/hashicorp/aws-sdk-go/gen/ec2" + "github.com/awslabs/aws-sdk-go/aws" + "github.com/awslabs/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) @@ -58,16 +58,16 @@ func TestAccAWSEIP_instance(t *testing.T) { } func 
testAccCheckAWSEIPDestroy(s *terraform.State) error { - conn := testAccProvider.Meta().(*AWSClient).awsEC2conn + conn := testAccProvider.Meta().(*AWSClient).ec2SDKconn for _, rs := range s.RootModule().Resources { if rs.Type != "aws_eip" { continue } - req := &ec2.DescribeAddressesRequest{ - AllocationIDs: []string{}, - PublicIPs: []string{rs.Primary.ID}, + req := &ec2.DescribeAddressesInput{ + AllocationIDs: []*string{}, + PublicIPs: []*string{aws.String(rs.Primary.ID)}, } describe, err := conn.DescribeAddresses(req) @@ -113,12 +113,12 @@ func testAccCheckAWSEIPExists(n string, res *ec2.Address) resource.TestCheckFunc return fmt.Errorf("No EIP ID is set") } - conn := testAccProvider.Meta().(*AWSClient).awsEC2conn + conn := testAccProvider.Meta().(*AWSClient).ec2SDKconn if strings.Contains(rs.Primary.ID, "eipalloc") { - req := &ec2.DescribeAddressesRequest{ - AllocationIDs: []string{rs.Primary.ID}, - PublicIPs: []string{}, + req := &ec2.DescribeAddressesInput{ + AllocationIDs: []*string{aws.String(rs.Primary.ID)}, + PublicIPs: []*string{}, } describe, err := conn.DescribeAddresses(req) if err != nil { @@ -129,12 +129,12 @@ func testAccCheckAWSEIPExists(n string, res *ec2.Address) resource.TestCheckFunc *describe.Addresses[0].AllocationID != rs.Primary.ID { return fmt.Errorf("EIP not found") } - *res = describe.Addresses[0] + *res = *describe.Addresses[0] } else { - req := &ec2.DescribeAddressesRequest{ - AllocationIDs: []string{}, - PublicIPs: []string{rs.Primary.ID}, + req := &ec2.DescribeAddressesInput{ + AllocationIDs: []*string{}, + PublicIPs: []*string{aws.String(rs.Primary.ID)}, } describe, err := conn.DescribeAddresses(req) if err != nil { @@ -145,7 +145,7 @@ func testAccCheckAWSEIPExists(n string, res *ec2.Address) resource.TestCheckFunc *describe.Addresses[0].PublicIP != rs.Primary.ID { return fmt.Errorf("EIP not found") } - *res = describe.Addresses[0] + *res = *describe.Addresses[0] } return nil diff --git a/builtin/providers/aws/resource_aws_elb.go 
b/builtin/providers/aws/resource_aws_elb.go
index e5ed9f3cfc4c..b15fe1afa564 100644
--- a/builtin/providers/aws/resource_aws_elb.go
+++ b/builtin/providers/aws/resource_aws_elb.go
@@ -154,6 +154,8 @@ func resourceAwsElb() *schema.Resource {
 				Type:     schema.TypeString,
 				Computed: true,
 			},
+
+			"tags": tagsSchema(),
 		},
 	}
 }
@@ -161,17 +163,18 @@ func resourceAwsElb() *schema.Resource {
 func resourceAwsElbCreate(d *schema.ResourceData, meta interface{}) error {
 	elbconn := meta.(*AWSClient).elbconn
 
-	// Expand the "listener" set to goamz compat []elb.Listener
+	// Expand the "listener" set to aws-sdk-go compat []elb.Listener
 	listeners, err := expandListeners(d.Get("listener").(*schema.Set).List())
 	if err != nil {
 		return err
 	}
+	tags := tagsFromMapELB(d.Get("tags").(map[string]interface{}))
 
 	// Provision the elb
 	elbOpts := &elb.CreateAccessPointInput{
 		LoadBalancerName: aws.String(d.Get("name").(string)),
 		Listeners:        listeners,
+		Tags:             tags,
 	}
 
 	if scheme, ok := d.GetOk("internal"); ok && scheme.(bool) {
@@ -208,6 +211,8 @@ func resourceAwsElbCreate(d *schema.ResourceData, meta interface{}) error {
 	d.SetPartial("security_groups")
 	d.SetPartial("subnets")
 
+	d.Set("tags", tagsToMapELB(tags))
+
 	if d.HasChange("health_check") {
 		vs := d.Get("health_check").(*schema.Set).List()
 		if len(vs) > 0 {
@@ -267,6 +272,15 @@ func resourceAwsElbRead(d *schema.ResourceData, meta interface{}) error {
 	d.Set("security_groups", lb.SecurityGroups)
 	d.Set("subnets", lb.Subnets)
 
+	resp, err := elbconn.DescribeTags(&elb.DescribeTagsInput{
+		LoadBalancerNames: []string{*lb.LoadBalancerName},
+	})
+	if err != nil {
+		return fmt.Errorf("Error retrieving ELB tags: %s", err)
+	}
+
+	var et []elb.Tag
+	if len(resp.TagDescriptions) > 0 {
+		et = resp.TagDescriptions[0].Tags
+	}
+	d.Set("tags", tagsToMapELB(et))
 
 	// There's only one health check, so save that to state as we
 	// currently can
 	if *lb.HealthCheck.Target != "" {
@@ -357,6 +371,11 @@ func resourceAwsElbUpdate(d *schema.ResourceData, meta interface{}) error {
 		}
 	}
 
+	if err := setTagsELB(elbconn, d); err != nil {
+		return err
+	} else {
+ d.SetPartial("tags") + } d.Partial(false) return resourceAwsElbRead(d, meta) diff --git a/builtin/providers/aws/resource_aws_elb_test.go b/builtin/providers/aws/resource_aws_elb_test.go index 037a9557dbfe..2fbe7ace8d6b 100644 --- a/builtin/providers/aws/resource_aws_elb_test.go +++ b/builtin/providers/aws/resource_aws_elb_test.go @@ -53,6 +53,61 @@ func TestAccAWSELB_basic(t *testing.T) { }) } +func TestAccAWSELB_tags(t *testing.T) { + var conf elb.LoadBalancerDescription + var td elb.TagDescription + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSELBDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSELBConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSELBExists("aws_elb.bar", &conf), + testAccCheckAWSELBAttributes(&conf), + resource.TestCheckResourceAttr( + "aws_elb.bar", "name", "foobar-terraform-test"), + testAccLoadTags(&conf, &td), + testAccCheckELBTags(&td.Tags, "bar", "baz"), + ), + }, + + resource.TestStep{ + Config: testAccAWSELBConfig_TagUpdate, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSELBExists("aws_elb.bar", &conf), + testAccCheckAWSELBAttributes(&conf), + resource.TestCheckResourceAttr( + "aws_elb.bar", "name", "foobar-terraform-test"), + testAccLoadTags(&conf, &td), + testAccCheckELBTags(&td.Tags, "foo", "bar"), + testAccCheckELBTags(&td.Tags, "new", "type"), + ), + }, + }, + }) +} + +func testAccLoadTags(conf *elb.LoadBalancerDescription, td *elb.TagDescription) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).elbconn + + describe, err := conn.DescribeTags(&elb.DescribeTagsInput{ + LoadBalancerNames: []string{*conf.LoadBalancerName}, + }) + + if err != nil { + return err + } + if len(describe.TagDescriptions) > 0 { + *td = describe.TagDescriptions[0] + } + return nil + } +} + func TestAccAWSELB_InstanceAttaching(t *testing.T) { var 
conf elb.LoadBalancerDescription @@ -288,6 +343,31 @@ resource "aws_elb" "bar" { lb_protocol = "http" } + tags { + bar = "baz" + } + + cross_zone_load_balancing = true +} +` + +const testAccAWSELBConfig_TagUpdate = ` +resource "aws_elb" "bar" { + name = "foobar-terraform-test" + availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"] + + listener { + instance_port = 8000 + instance_protocol = "http" + lb_port = 80 + lb_protocol = "http" + } + + tags { + foo = "bar" + new = "type" + } + cross_zone_load_balancing = true } ` diff --git a/builtin/providers/aws/resource_aws_instance.go b/builtin/providers/aws/resource_aws_instance.go index f78e0bec126c..85a35e41aa6f 100644 --- a/builtin/providers/aws/resource_aws_instance.go +++ b/builtin/providers/aws/resource_aws_instance.go @@ -3,17 +3,18 @@ package aws import ( "bytes" "crypto/sha1" + "encoding/base64" "encoding/hex" "fmt" "log" - "strconv" "strings" "time" + "github.com/hashicorp/aws-sdk-go/aws" + "github.com/hashicorp/aws-sdk-go/gen/ec2" "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" - "github.com/mitchellh/goamz/ec2" ) func resourceAwsInstance() *schema.Resource { @@ -23,6 +24,9 @@ func resourceAwsInstance() *schema.Resource { Update: resourceAwsInstanceUpdate, Delete: resourceAwsInstanceDelete, + SchemaVersion: 1, + MigrateState: resourceAwsInstanceMigrateState, + Schema: map[string]*schema.Schema{ "ami": &schema.Schema{ Type: schema.TypeString, @@ -136,40 +140,56 @@ func resourceAwsInstance() *schema.Resource { ForceNew: true, Optional: true, }, + "tenancy": &schema.Schema{ Type: schema.TypeString, Optional: true, Computed: true, ForceNew: true, }, + "tags": tagsSchema(), "block_device": &schema.Schema{ + Type: schema.TypeMap, + Optional: true, + Removed: "Split out into three sub-types; see Changelog and Docs", + }, + + "ebs_block_device": &schema.Schema{ Type: schema.TypeSet, Optional: true, Computed: 
true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ + "delete_on_termination": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: true, + ForceNew: true, + }, + "device_name": &schema.Schema{ Type: schema.TypeString, Required: true, ForceNew: true, }, - "virtual_name": &schema.Schema{ - Type: schema.TypeString, + "encrypted": &schema.Schema{ + Type: schema.TypeBool, Optional: true, + Computed: true, ForceNew: true, }, - "snapshot_id": &schema.Schema{ - Type: schema.TypeString, + "iops": &schema.Schema{ + Type: schema.TypeInt, Optional: true, Computed: true, ForceNew: true, }, - "volume_type": &schema.Schema{ + "snapshot_id": &schema.Schema{ Type: schema.TypeString, Optional: true, Computed: true, @@ -183,37 +203,56 @@ func resourceAwsInstance() *schema.Resource { ForceNew: true, }, - "delete_on_termination": &schema.Schema{ - Type: schema.TypeBool, + "volume_type": &schema.Schema{ + Type: schema.TypeString, Optional: true, - Default: true, + Computed: true, ForceNew: true, }, + }, + }, + Set: func(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + buf.WriteString(fmt.Sprintf("%s-", m["device_name"].(string))) + buf.WriteString(fmt.Sprintf("%s-", m["snapshot_id"].(string))) + return hashcode.String(buf.String()) + }, + }, - "encrypted": &schema.Schema{ - Type: schema.TypeBool, - Optional: true, - Computed: true, - ForceNew: true, + "ephemeral_block_device": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Computed: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "device_name": &schema.Schema{ + Type: schema.TypeString, + Required: true, }, - "iops": &schema.Schema{ - Type: schema.TypeInt, - Optional: true, - Computed: true, - ForceNew: true, + "virtual_name": &schema.Schema{ + Type: schema.TypeString, + Required: true, }, }, }, - Set: resourceAwsInstanceBlockDevicesHash, + Set: func(v interface{}) int { + var buf bytes.Buffer + m := 
v.(map[string]interface{}) + buf.WriteString(fmt.Sprintf("%s-", m["device_name"].(string))) + buf.WriteString(fmt.Sprintf("%s-", m["virtual_name"].(string))) + return hashcode.String(buf.String()) + }, }, "root_block_device": &schema.Schema{ - // TODO: This is a list because we don't support singleton - // sub-resources today. We'll enforce that the list only ever has + // TODO: This is a set because we don't support singleton + // sub-resources today. We'll enforce that the set only ever has // length zero or one below. When TF gains support for // sub-resources this can be converted. - Type: schema.TypeList, + Type: schema.TypeSet, Optional: true, Computed: true, Elem: &schema.Resource{ @@ -228,11 +267,11 @@ func resourceAwsInstance() *schema.Resource { ForceNew: true, }, - "device_name": &schema.Schema{ - Type: schema.TypeString, + "iops": &schema.Schema{ + Type: schema.TypeInt, Optional: true, + Computed: true, ForceNew: true, - Default: "/dev/sda1", }, "volume_size": &schema.Schema{ @@ -248,15 +287,12 @@ func resourceAwsInstance() *schema.Resource { Computed: true, ForceNew: true, }, - - "iops": &schema.Schema{ - Type: schema.TypeInt, - Optional: true, - Computed: true, - ForceNew: true, - }, }, }, + Set: func(v interface{}) int { + // there can be only one root device; no need to hash anything + return 0 + }, }, }, } @@ -268,97 +304,194 @@ func resourceAwsInstanceCreate(d *schema.ResourceData, meta interface{}) error { // Figure out user data userData := "" if v := d.Get("user_data"); v != nil { - userData = v.(string) + userData = base64.StdEncoding.EncodeToString([]byte(v.(string))) } - associatePublicIPAddress := false - if v := d.Get("associate_public_ip_address"); v != nil { - associatePublicIPAddress = v.(bool) + // check for non-default Subnet, and cast it to a String + var hasSubnet bool + subnet, hasSubnet := d.GetOk("subnet_id") + subnetID := subnet.(string) + + placement := &ec2.Placement{ + AvailabilityZone: 
aws.String(d.Get("availability_zone").(string)), + } + + if hasSubnet { + // Tenancy is only valid inside a VPC + // See http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_Placement.html + if v := d.Get("tenancy").(string); v != "" { + placement.Tenancy = aws.String(v) + } + } + + iam := &ec2.IAMInstanceProfileSpecification{ + Name: aws.String(d.Get("iam_instance_profile").(string)), } // Build the creation struct - runOpts := &ec2.RunInstances{ - ImageId: d.Get("ami").(string), - AvailZone: d.Get("availability_zone").(string), - InstanceType: d.Get("instance_type").(string), - KeyName: d.Get("key_name").(string), - SubnetId: d.Get("subnet_id").(string), - PrivateIPAddress: d.Get("private_ip").(string), - AssociatePublicIpAddress: associatePublicIPAddress, - UserData: []byte(userData), - EbsOptimized: d.Get("ebs_optimized").(bool), - IamInstanceProfile: d.Get("iam_instance_profile").(string), - Tenancy: d.Get("tenancy").(string), + runOpts := &ec2.RunInstancesRequest{ + ImageID: aws.String(d.Get("ami").(string)), + Placement: placement, + InstanceType: aws.String(d.Get("instance_type").(string)), + MaxCount: aws.Integer(1), + MinCount: aws.Integer(1), + UserData: aws.String(userData), + EBSOptimized: aws.Boolean(d.Get("ebs_optimized").(bool)), + IAMInstanceProfile: iam, } + associatePublicIPAddress := false + if v := d.Get("associate_public_ip_address"); v != nil { + associatePublicIPAddress = v.(bool) + } + + var groups []string if v := d.Get("security_groups"); v != nil { - if runOpts.SubnetId != "" { + // Security group names. + // For a nondefault VPC, you must use security group IDs instead. + // See http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_RunInstances.html + if hasSubnet { log.Printf("[WARN] Deprecated. Attempting to use 'security_groups' within a VPC instance. 
Use 'vpc_security_group_ids' instead.") } - for _, v := range v.(*schema.Set).List() { str := v.(string) + groups = append(groups, str) + } + } - var g ec2.SecurityGroup - // Deprecated, stop using the subnet ID here - if runOpts.SubnetId != "" { - g.Id = str - } else { - g.Name = str - } + if hasSubnet && associatePublicIPAddress { + // If we have a non-default VPC / Subnet specified, we can flag + // AssociatePublicIpAddress to get a Public IP assigned. By default these are not provided. + // You cannot specify both SubnetId and the NetworkInterface.0.* parameters though, otherwise + // you get: Network interfaces and an instance-level subnet ID may not be specified on the same request + // You also need to attach Security Groups to the NetworkInterface instead of the instance, + // to avoid: Network interfaces and an instance-level security groups may not be specified on + // the same request + ni := ec2.InstanceNetworkInterfaceSpecification{ + AssociatePublicIPAddress: aws.Boolean(associatePublicIPAddress), + DeviceIndex: aws.Integer(0), + SubnetID: aws.String(subnetID), + } - runOpts.SecurityGroups = append(runOpts.SecurityGroups, g) + if v, ok := d.GetOk("private_ip"); ok { + ni.PrivateIPAddress = aws.String(v.(string)) } - } - if v := d.Get("vpc_security_group_ids"); v != nil { - for _, v := range v.(*schema.Set).List() { - str := v.(string) + if v := d.Get("vpc_security_group_ids"); v != nil { + for _, v := range v.(*schema.Set).List() { + ni.Groups = append(ni.Groups, v.(string)) + } + } - var g ec2.SecurityGroup - g.Id = str + runOpts.NetworkInterfaces = []ec2.InstanceNetworkInterfaceSpecification{ni} + } else { + if subnetID != "" { + runOpts.SubnetID = aws.String(subnetID) + } - runOpts.SecurityGroups = append(runOpts.SecurityGroups, g) + if v, ok := d.GetOk("private_ip"); ok { + runOpts.PrivateIPAddress = aws.String(v.(string)) } + if runOpts.SubnetID != nil && + *runOpts.SubnetID != "" { + runOpts.SecurityGroupIDs = groups + } else { + 
runOpts.SecurityGroups = groups + } + + if v := d.Get("vpc_security_group_ids"); v != nil { + for _, v := range v.(*schema.Set).List() { + runOpts.SecurityGroupIDs = append(runOpts.SecurityGroupIDs, v.(string)) + } + } + } + + if v, ok := d.GetOk("key_name"); ok { + runOpts.KeyName = aws.String(v.(string)) } - blockDevices := make([]interface{}, 0) + blockDevices := make([]ec2.BlockDeviceMapping, 0) + + if v, ok := d.GetOk("ebs_block_device"); ok { + vL := v.(*schema.Set).List() + for _, v := range vL { + bd := v.(map[string]interface{}) + ebs := &ec2.EBSBlockDevice{ + DeleteOnTermination: aws.Boolean(bd["delete_on_termination"].(bool)), + } + + if v, ok := bd["snapshot_id"].(string); ok && v != "" { + ebs.SnapshotID = aws.String(v) + } + + if v, ok := bd["volume_size"].(int); ok && v != 0 { + ebs.VolumeSize = aws.Integer(v) + } + + if v, ok := bd["volume_type"].(string); ok && v != "" { + ebs.VolumeType = aws.String(v) + } + + if v, ok := bd["iops"].(int); ok && v > 0 { + ebs.IOPS = aws.Integer(v) + } - if v := d.Get("block_device"); v != nil { - blockDevices = append(blockDevices, v.(*schema.Set).List()...) + blockDevices = append(blockDevices, ec2.BlockDeviceMapping{ + DeviceName: aws.String(bd["device_name"].(string)), + EBS: ebs, + }) + } } - if v := d.Get("root_block_device"); v != nil { - rootBlockDevices := v.([]interface{}) - if len(rootBlockDevices) > 1 { - return fmt.Errorf("Cannot specify more than one root_block_device.") + if v, ok := d.GetOk("ephemeral_block_device"); ok { + vL := v.(*schema.Set).List() + for _, v := range vL { + bd := v.(map[string]interface{}) + blockDevices = append(blockDevices, ec2.BlockDeviceMapping{ + DeviceName: aws.String(bd["device_name"].(string)), + VirtualName: aws.String(bd["virtual_name"].(string)), + }) } - blockDevices = append(blockDevices, rootBlockDevices...) 
} - if len(blockDevices) > 0 { - runOpts.BlockDevices = make([]ec2.BlockDeviceMapping, len(blockDevices)) - for i, v := range blockDevices { + if v, ok := d.GetOk("root_block_device"); ok { + vL := v.(*schema.Set).List() + if len(vL) > 1 { + return fmt.Errorf("Cannot specify more than one root_block_device.") + } + for _, v := range vL { bd := v.(map[string]interface{}) - runOpts.BlockDevices[i].DeviceName = bd["device_name"].(string) - runOpts.BlockDevices[i].VolumeType = bd["volume_type"].(string) - runOpts.BlockDevices[i].VolumeSize = int64(bd["volume_size"].(int)) - runOpts.BlockDevices[i].DeleteOnTermination = bd["delete_on_termination"].(bool) - if v, ok := bd["virtual_name"].(string); ok { - runOpts.BlockDevices[i].VirtualName = v + ebs := &ec2.EBSBlockDevice{ + DeleteOnTermination: aws.Boolean(bd["delete_on_termination"].(bool)), } - if v, ok := bd["snapshot_id"].(string); ok { - runOpts.BlockDevices[i].SnapshotId = v + + if v, ok := bd["volume_size"].(int); ok && v != 0 { + ebs.VolumeSize = aws.Integer(v) } - if v, ok := bd["encrypted"].(bool); ok { - runOpts.BlockDevices[i].Encrypted = v + + if v, ok := bd["volume_type"].(string); ok && v != "" { + ebs.VolumeType = aws.String(v) } - if v, ok := bd["iops"].(int); ok { - runOpts.BlockDevices[i].IOPS = int64(v) + + if v, ok := bd["iops"].(int); ok && v > 0 { + ebs.IOPS = aws.Integer(v) + } + + if dn, err := fetchRootDeviceName(d.Get("ami").(string), ec2conn); err == nil { + blockDevices = append(blockDevices, ec2.BlockDeviceMapping{ + DeviceName: dn, + EBS: ebs, + }) + } else { + return err } } } + if len(blockDevices) > 0 { + runOpts.BlockDeviceMappings = blockDevices + } + // Create the instance log.Printf("[DEBUG] Run configuration: %#v", runOpts) runResp, err := ec2conn.RunInstances(runOpts) @@ -367,21 +500,21 @@ func resourceAwsInstanceCreate(d *schema.ResourceData, meta interface{}) error { } instance := &runResp.Instances[0] - log.Printf("[INFO] Instance ID: %s", instance.InstanceId) + 
log.Printf("[INFO] Instance ID: %s", *instance.InstanceID) // Store the resulting ID so we can look this up later - d.SetId(instance.InstanceId) + d.SetId(*instance.InstanceID) // Wait for the instance to become running so we can get some attributes // that aren't available until later. log.Printf( "[DEBUG] Waiting for instance (%s) to become running", - instance.InstanceId) + *instance.InstanceID) stateConf := &resource.StateChangeConf{ Pending: []string{"pending"}, Target: "running", - Refresh: InstanceStateRefreshFunc(ec2conn, instance.InstanceId), + Refresh: InstanceStateRefreshFunc(ec2conn, *instance.InstanceID), Timeout: 10 * time.Minute, Delay: 10 * time.Second, MinTimeout: 3 * time.Second, @@ -391,16 +524,18 @@ func resourceAwsInstanceCreate(d *schema.ResourceData, meta interface{}) error { if err != nil { return fmt.Errorf( "Error waiting for instance (%s) to become ready: %s", - instance.InstanceId, err) + *instance.InstanceID, err) } instance = instanceRaw.(*ec2.Instance) // Initialize the connection info - d.SetConnInfo(map[string]string{ - "type": "ssh", - "host": instance.PublicIpAddress, - }) + if instance.PublicIPAddress != nil { + d.SetConnInfo(map[string]string{ + "type": "ssh", + "host": *instance.PublicIPAddress, + }) + } // Set our attributes if err := resourceAwsInstanceRead(d, meta); err != nil { @@ -414,11 +549,13 @@ func resourceAwsInstanceCreate(d *schema.ResourceData, meta interface{}) error { func resourceAwsInstanceRead(d *schema.ResourceData, meta interface{}) error { ec2conn := meta.(*AWSClient).ec2conn - resp, err := ec2conn.Instances([]string{d.Id()}, ec2.NewFilter()) + resp, err := ec2conn.DescribeInstances(&ec2.DescribeInstancesRequest{ + InstanceIDs: []string{d.Id()}, + }) if err != nil { // If the instance was not found, return nil so that we can show // that the instance is gone. 
- if ec2err, ok := err.(*ec2.Error); ok && ec2err.Code == "InvalidInstanceID.NotFound" { + if ec2err, ok := err.(aws.APIError); ok && ec2err.Code == "InvalidInstanceID.NotFound" { d.SetId("") return nil } @@ -436,29 +573,37 @@ func resourceAwsInstanceRead(d *schema.ResourceData, meta interface{}) error { instance := &resp.Reservations[0].Instances[0] // If the instance is terminated, then it is gone - if instance.State.Name == "terminated" { + if *instance.State.Name == "terminated" { d.SetId("") return nil } - d.Set("availability_zone", instance.AvailZone) + if instance.Placement != nil { + d.Set("availability_zone", instance.Placement.AvailabilityZone) + } + if instance.Placement.Tenancy != nil { + d.Set("tenancy", instance.Placement.Tenancy) + } + d.Set("key_name", instance.KeyName) - d.Set("public_dns", instance.DNSName) - d.Set("public_ip", instance.PublicIpAddress) + d.Set("public_dns", instance.PublicDNSName) + d.Set("public_ip", instance.PublicIPAddress) d.Set("private_dns", instance.PrivateDNSName) - d.Set("private_ip", instance.PrivateIpAddress) - d.Set("subnet_id", instance.SubnetId) - d.Set("ebs_optimized", instance.EbsOptimized) + d.Set("private_ip", instance.PrivateIPAddress) + if len(instance.NetworkInterfaces) > 0 { + d.Set("subnet_id", instance.NetworkInterfaces[0].SubnetID) + } else { + d.Set("subnet_id", instance.SubnetID) + } + d.Set("ebs_optimized", instance.EBSOptimized) d.Set("tags", tagsToMap(instance.Tags)) - d.Set("tenancy", instance.Tenancy) // Determine whether we're referring to security groups with // IDs or names. We use a heuristic to figure this out. By default, // we use IDs if we're in a VPC. However, if we previously had an // all-name list of security groups, we use names. Or, if we had any // IDs, we use IDs. 
- useID := instance.SubnetId != "" - // Deprecated: vpc security groups should be defined in vpc_security_group_ids + useID := instance.SubnetID != nil && *instance.SubnetID != "" if v := d.Get("security_groups"); v != nil { match := false for _, v := range v.(*schema.Set).List() { @@ -472,83 +617,47 @@ func resourceAwsInstanceRead(d *schema.ResourceData, meta interface{}) error { } // Build up the security groups - sgs := make([]string, len(instance.SecurityGroups)) - + sgs := make([]string, 0, len(instance.SecurityGroups)) if useID { - for i, sg := range instance.SecurityGroups { - sgs[i] = sg.Id + for _, sg := range instance.SecurityGroups { + sgs = append(sgs, *sg.GroupID) } - // Keep some backward compatibility. The user is warned on creation. - if d.Get("security_groups") != nil { - d.Set("security_groups", sgs) - } else { - d.Set("vpc_security_group_ids", sgs) + log.Printf("[DEBUG] Setting Security Group IDs: %#v", sgs) + if err := d.Set("vpc_security_group_ids", sgs); err != nil { + return err } } else { - for i, sg := range instance.SecurityGroups { - sgs[i] = sg.Name + for _, sg := range instance.SecurityGroups { + sgs = append(sgs, *sg.GroupName) } - d.Set("security_groups", sgs) - } - - blockDevices := make(map[string]ec2.BlockDevice) - for _, bd := range instance.BlockDevices { - blockDevices[bd.VolumeId] = bd - } - - volIDs := make([]string, 0, len(blockDevices)) - for volID := range blockDevices { - volIDs = append(volIDs, volID) - } - - volResp, err := ec2conn.Volumes(volIDs, ec2.NewFilter()) - if err != nil { - return err - } - - nonRootBlockDevices := make([]map[string]interface{}, 0) - rootBlockDevice := make([]interface{}, 0, 1) - for _, vol := range volResp.Volumes { - volSize, err := strconv.Atoi(vol.Size) - if err != nil { + log.Printf("[DEBUG] Setting Security Group Names: %#v", sgs) + if err := d.Set("security_groups", sgs); err != nil { return err } + } - blockDevice := make(map[string]interface{}) - blockDevice["device_name"] = 
blockDevices[vol.VolumeId].DeviceName - blockDevice["volume_type"] = vol.VolumeType - blockDevice["volume_size"] = volSize - blockDevice["delete_on_termination"] = - blockDevices[vol.VolumeId].DeleteOnTermination - - // If this is the root device, save it. We stop here since we - // can't put invalid keys into this map. - if blockDevice["device_name"] == instance.RootDeviceName { - rootBlockDevice = []interface{}{blockDevice} - continue - } - - blockDevice["snapshot_id"] = vol.SnapshotId - blockDevice["encrypted"] = vol.Encrypted - blockDevice["iops"] = vol.IOPS - nonRootBlockDevices = append(nonRootBlockDevices, blockDevice) + if err := readBlockDevices(d, instance, ec2conn); err != nil { + return err } - d.Set("block_device", nonRootBlockDevices) - d.Set("root_block_device", rootBlockDevice) return nil } func resourceAwsInstanceUpdate(d *schema.ResourceData, meta interface{}) error { ec2conn := meta.(*AWSClient).ec2conn - opts := new(ec2.ModifyInstance) - opts.SetSourceDestCheck = true - opts.SourceDestCheck = d.Get("source_dest_check").(bool) - - log.Printf("[INFO] Modifying instance %s: %#v", d.Id(), opts) - if _, err := ec2conn.ModifyInstance(d.Id(), opts); err != nil { - return err + // SourceDestCheck can only be set on VPC instances + if d.Get("subnet_id").(string) != "" { + log.Printf("[INFO] Modifying instance %s", d.Id()) + err := ec2conn.ModifyInstanceAttribute(&ec2.ModifyInstanceAttributeRequest{ + InstanceID: aws.String(d.Id()), + SourceDestCheck: &ec2.AttributeBooleanValue{ + Value: aws.Boolean(d.Get("source_dest_check").(bool)), + }, + }) + if err != nil { + return err + } } // TODO(mitchellh): wait for the attributes we modified to @@ -567,7 +676,10 @@ func resourceAwsInstanceDelete(d *schema.ResourceData, meta interface{}) error { ec2conn := meta.(*AWSClient).ec2conn log.Printf("[INFO] Terminating instance: %s", d.Id()) - if _, err := ec2conn.TerminateInstances([]string{d.Id()}); err != nil { + req := &ec2.TerminateInstancesRequest{ + InstanceIDs: 
[]string{d.Id()}, + } + if _, err := ec2conn.TerminateInstances(req); err != nil { return fmt.Errorf("Error terminating instance: %s", err) } @@ -599,9 +711,11 @@ func resourceAwsInstanceDelete(d *schema.ResourceData, meta interface{}) error { // an EC2 instance. func InstanceStateRefreshFunc(conn *ec2.EC2, instanceID string) resource.StateRefreshFunc { return func() (interface{}, string, error) { - resp, err := conn.Instances([]string{instanceID}, ec2.NewFilter()) + resp, err := conn.DescribeInstances(&ec2.DescribeInstancesRequest{ + InstanceIDs: []string{instanceID}, + }) if err != nil { - if ec2err, ok := err.(*ec2.Error); ok && ec2err.Code == "InvalidInstanceID.NotFound" { + if ec2err, ok := err.(aws.APIError); ok && ec2err.Code == "InvalidInstanceID.NotFound" { // Set this to nil as if we didn't find anything. resp = nil } else { @@ -617,15 +731,115 @@ func InstanceStateRefreshFunc(conn *ec2.EC2, instanceID string) resource.StateRe } i := &resp.Reservations[0].Instances[0] - return i, i.State.Name, nil + return i, *i.State.Name, nil + } +} + +func readBlockDevices(d *schema.ResourceData, instance *ec2.Instance, ec2conn *ec2.EC2) error { + ibds, err := readBlockDevicesFromInstance(instance, ec2conn) + if err != nil { + return err + } + + if err := d.Set("ebs_block_device", ibds["ebs"]); err != nil { + return err + } + if ibds["root"] != nil { + if err := d.Set("root_block_device", []interface{}{ibds["root"]}); err != nil { + return err + } } + + return nil } -func resourceAwsInstanceBlockDevicesHash(v interface{}) int { - var buf bytes.Buffer - m := v.(map[string]interface{}) - buf.WriteString(fmt.Sprintf("%s-", m["device_name"].(string))) - buf.WriteString(fmt.Sprintf("%s-", m["virtual_name"].(string))) - buf.WriteString(fmt.Sprintf("%t-", m["delete_on_termination"].(bool))) - return hashcode.String(buf.String()) +func readBlockDevicesFromInstance(instance *ec2.Instance, ec2conn *ec2.EC2) (map[string]interface{}, error) { + blockDevices := 
make(map[string]interface{}) + blockDevices["ebs"] = make([]map[string]interface{}, 0) + blockDevices["root"] = nil + + instanceBlockDevices := make(map[string]ec2.InstanceBlockDeviceMapping) + for _, bd := range instance.BlockDeviceMappings { + if bd.EBS != nil { + instanceBlockDevices[*(bd.EBS.VolumeID)] = bd + } + } + + if len(instanceBlockDevices) == 0 { + return nil, nil + } + + volIDs := make([]string, 0, len(instanceBlockDevices)) + for volID := range instanceBlockDevices { + volIDs = append(volIDs, volID) + } + + // Need to call DescribeVolumes to get volume_size and volume_type for each + // EBS block device + volResp, err := ec2conn.DescribeVolumes(&ec2.DescribeVolumesRequest{ + VolumeIDs: volIDs, + }) + if err != nil { + return nil, err + } + + for _, vol := range volResp.Volumes { + instanceBd := instanceBlockDevices[*vol.VolumeID] + bd := make(map[string]interface{}) + + if instanceBd.EBS != nil && instanceBd.EBS.DeleteOnTermination != nil { + bd["delete_on_termination"] = *instanceBd.EBS.DeleteOnTermination + } + if vol.Size != nil { + bd["volume_size"] = *vol.Size + } + if vol.VolumeType != nil { + bd["volume_type"] = *vol.VolumeType + } + if vol.IOPS != nil { + bd["iops"] = *vol.IOPS + } + + if blockDeviceIsRoot(instanceBd, instance) { + blockDevices["root"] = bd + } else { + if instanceBd.DeviceName != nil { + bd["device_name"] = *instanceBd.DeviceName + } + if vol.Encrypted != nil { + bd["encrypted"] = *vol.Encrypted + } + if vol.SnapshotID != nil { + bd["snapshot_id"] = *vol.SnapshotID + } + + blockDevices["ebs"] = append(blockDevices["ebs"].([]map[string]interface{}), bd) + } + } + + return blockDevices, nil +} + +func blockDeviceIsRoot(bd ec2.InstanceBlockDeviceMapping, instance *ec2.Instance) bool { + return (bd.DeviceName != nil && + instance.RootDeviceName != nil && + *bd.DeviceName == *instance.RootDeviceName) +} + +func fetchRootDeviceName(ami string, conn *ec2.EC2) (aws.StringValue, error) { + if ami == "" { + return nil, 
fmt.Errorf("Cannot fetch root device name for blank AMI ID.") + } + + log.Printf("[DEBUG] Describing AMI %q to get root block device name", ami) + req := &ec2.DescribeImagesRequest{ImageIDs: []string{ami}} + if res, err := conn.DescribeImages(req); err == nil { + if len(res.Images) == 1 { + return res.Images[0].RootDeviceName, nil + } else { + return nil, fmt.Errorf("Expected 1 AMI for ID: %s, got: %#v", ami, res.Images) + } + } else { + return nil, err + } } diff --git a/builtin/providers/aws/resource_aws_instance_migrate.go b/builtin/providers/aws/resource_aws_instance_migrate.go new file mode 100644 index 000000000000..5d7075f7593e --- /dev/null +++ b/builtin/providers/aws/resource_aws_instance_migrate.go @@ -0,0 +1,113 @@ +package aws + +import ( + "fmt" + "log" + "strconv" + "strings" + + "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/terraform" +) + +func resourceAwsInstanceMigrateState( + v int, is *terraform.InstanceState, meta interface{}) (*terraform.InstanceState, error) { + switch v { + case 0: + log.Println("[INFO] Found AWS Instance State v0; migrating to v1") + return migrateStateV0toV1(is) + default: + return is, fmt.Errorf("Unexpected schema version: %d", v) + } + + return is, nil +} + +func migrateStateV0toV1(is *terraform.InstanceState) (*terraform.InstanceState, error) { + if is.Empty() { + log.Println("[DEBUG] Empty InstanceState; nothing to migrate.") + return is, nil + } + + log.Printf("[DEBUG] Attributes before migration: %#v", is.Attributes) + + // Delete old count + delete(is.Attributes, "block_device.#") + + oldBds, err := readV0BlockDevices(is) + if err != nil { + return is, err + } + // seed count fields for new types + is.Attributes["ebs_block_device.#"] = "0" + is.Attributes["ephemeral_block_device.#"] = "0" + // depending on if state was v0.3.7 or an earlier version, it might have + // root_block_device defined already + if _, ok := is.Attributes["root_block_device.#"]; !ok { + 
is.Attributes["root_block_device.#"] = "0" + } + for _, oldBd := range oldBds { + if err := writeV1BlockDevice(is, oldBd); err != nil { + return is, err + } + } + log.Printf("[DEBUG] Attributes after migration: %#v", is.Attributes) + return is, nil +} + +func readV0BlockDevices(is *terraform.InstanceState) (map[string]map[string]string, error) { + oldBds := make(map[string]map[string]string) + for k, v := range is.Attributes { + if !strings.HasPrefix(k, "block_device.") { + continue + } + path := strings.Split(k, ".") + if len(path) != 3 { + return oldBds, fmt.Errorf("Found unexpected block_device field: %#v", k) + } + hashcode, attribute := path[1], path[2] + oldBd, ok := oldBds[hashcode] + if !ok { + oldBd = make(map[string]string) + oldBds[hashcode] = oldBd + } + oldBd[attribute] = v + delete(is.Attributes, k) + } + return oldBds, nil +} + +func writeV1BlockDevice( + is *terraform.InstanceState, oldBd map[string]string) error { + code := hashcode.String(oldBd["device_name"]) + bdType := "ebs_block_device" + if vn, ok := oldBd["virtual_name"]; ok && strings.HasPrefix(vn, "ephemeral") { + bdType = "ephemeral_block_device" + } else if dn, ok := oldBd["device_name"]; ok && dn == "/dev/sda1" { + bdType = "root_block_device" + } + + switch bdType { + case "ebs_block_device": + delete(oldBd, "virtual_name") + case "root_block_device": + delete(oldBd, "virtual_name") + delete(oldBd, "encrypted") + delete(oldBd, "snapshot_id") + case "ephemeral_block_device": + delete(oldBd, "delete_on_termination") + delete(oldBd, "encrypted") + delete(oldBd, "iops") + delete(oldBd, "volume_size") + delete(oldBd, "volume_type") + } + for attr, val := range oldBd { + attrKey := fmt.Sprintf("%s.%d.%s", bdType, code, attr) + is.Attributes[attrKey] = val + } + + countAttr := fmt.Sprintf("%s.#", bdType) + count, _ := strconv.Atoi(is.Attributes[countAttr]) + is.Attributes[countAttr] = strconv.Itoa(count + 1) + return nil +} diff --git 
a/builtin/providers/aws/resource_aws_instance_migrate_test.go b/builtin/providers/aws/resource_aws_instance_migrate_test.go new file mode 100644 index 000000000000..d392943315e2 --- /dev/null +++ b/builtin/providers/aws/resource_aws_instance_migrate_test.go @@ -0,0 +1,159 @@ +package aws + +import ( + "testing" + + "github.com/hashicorp/terraform/terraform" +) + +func TestAWSInstanceMigrateState(t *testing.T) { + cases := map[string]struct { + StateVersion int + Attributes map[string]string + Expected map[string]string + Meta interface{} + }{ + "v0.3.6 and earlier": { + StateVersion: 0, + Attributes: map[string]string{ + // EBS + "block_device.#": "2", + "block_device.3851383343.delete_on_termination": "true", + "block_device.3851383343.device_name": "/dev/sdx", + "block_device.3851383343.encrypted": "false", + "block_device.3851383343.snapshot_id": "", + "block_device.3851383343.virtual_name": "", + "block_device.3851383343.volume_size": "5", + "block_device.3851383343.volume_type": "standard", + // Ephemeral + "block_device.3101711606.delete_on_termination": "false", + "block_device.3101711606.device_name": "/dev/sdy", + "block_device.3101711606.encrypted": "false", + "block_device.3101711606.snapshot_id": "", + "block_device.3101711606.virtual_name": "ephemeral0", + "block_device.3101711606.volume_size": "", + "block_device.3101711606.volume_type": "", + // Root + "block_device.56575650.delete_on_termination": "true", + "block_device.56575650.device_name": "/dev/sda1", + "block_device.56575650.encrypted": "false", + "block_device.56575650.snapshot_id": "", + "block_device.56575650.volume_size": "10", + "block_device.56575650.volume_type": "standard", + }, + Expected: map[string]string{ + "ebs_block_device.#": "1", + "ebs_block_device.3851383343.delete_on_termination": "true", + "ebs_block_device.3851383343.device_name": "/dev/sdx", + "ebs_block_device.3851383343.encrypted": "false", + "ebs_block_device.3851383343.snapshot_id": "", + 
"ebs_block_device.3851383343.volume_size": "5", + "ebs_block_device.3851383343.volume_type": "standard", + "ephemeral_block_device.#": "1", + "ephemeral_block_device.2458403513.device_name": "/dev/sdy", + "ephemeral_block_device.2458403513.virtual_name": "ephemeral0", + "root_block_device.#": "1", + "root_block_device.3018388612.delete_on_termination": "true", + "root_block_device.3018388612.device_name": "/dev/sda1", + "root_block_device.3018388612.snapshot_id": "", + "root_block_device.3018388612.volume_size": "10", + "root_block_device.3018388612.volume_type": "standard", + }, + }, + "v0.3.7": { + StateVersion: 0, + Attributes: map[string]string{ + // EBS + "block_device.#": "2", + "block_device.3851383343.delete_on_termination": "true", + "block_device.3851383343.device_name": "/dev/sdx", + "block_device.3851383343.encrypted": "false", + "block_device.3851383343.snapshot_id": "", + "block_device.3851383343.virtual_name": "", + "block_device.3851383343.volume_size": "5", + "block_device.3851383343.volume_type": "standard", + "block_device.3851383343.iops": "", + // Ephemeral + "block_device.3101711606.delete_on_termination": "false", + "block_device.3101711606.device_name": "/dev/sdy", + "block_device.3101711606.encrypted": "false", + "block_device.3101711606.snapshot_id": "", + "block_device.3101711606.virtual_name": "ephemeral0", + "block_device.3101711606.volume_size": "", + "block_device.3101711606.volume_type": "", + "block_device.3101711606.iops": "", + // Root + "root_block_device.#": "1", + "root_block_device.3018388612.delete_on_termination": "true", + "root_block_device.3018388612.device_name": "/dev/sda1", + "root_block_device.3018388612.snapshot_id": "", + "root_block_device.3018388612.volume_size": "10", + "root_block_device.3018388612.volume_type": "io1", + "root_block_device.3018388612.iops": "1000", + }, + Expected: map[string]string{ + "ebs_block_device.#": "1", + "ebs_block_device.3851383343.delete_on_termination": "true", + 
"ebs_block_device.3851383343.device_name": "/dev/sdx", + "ebs_block_device.3851383343.encrypted": "false", + "ebs_block_device.3851383343.snapshot_id": "", + "ebs_block_device.3851383343.volume_size": "5", + "ebs_block_device.3851383343.volume_type": "standard", + "ephemeral_block_device.#": "1", + "ephemeral_block_device.2458403513.device_name": "/dev/sdy", + "ephemeral_block_device.2458403513.virtual_name": "ephemeral0", + "root_block_device.#": "1", + "root_block_device.3018388612.delete_on_termination": "true", + "root_block_device.3018388612.device_name": "/dev/sda1", + "root_block_device.3018388612.snapshot_id": "", + "root_block_device.3018388612.volume_size": "10", + "root_block_device.3018388612.volume_type": "io1", + "root_block_device.3018388612.iops": "1000", + }, + }, + } + + for tn, tc := range cases { + is := &terraform.InstanceState{ + ID: "i-abc123", + Attributes: tc.Attributes, + } + is, err := resourceAwsInstanceMigrateState( + tc.StateVersion, is, tc.Meta) + + if err != nil { + t.Fatalf("bad: %s, err: %#v", tn, err) + } + + for k, v := range tc.Expected { + if is.Attributes[k] != v { + t.Fatalf( + "bad: %s\n\n expected: %#v -> %#v\n got: %#v -> %#v\n in: %#v", + tn, k, v, k, is.Attributes[k], is.Attributes) + } + } + } +} + +func TestAWSInstanceMigrateState_empty(t *testing.T) { + var is *terraform.InstanceState + var meta interface{} + + // should handle nil + is, err := resourceAwsInstanceMigrateState(0, is, meta) + + if err != nil { + t.Fatalf("err: %#v", err) + } + if is != nil { + t.Fatalf("expected nil instancestate, got: %#v", is) + } + + // should handle non-nil but empty + is = &terraform.InstanceState{} + is, err = resourceAwsInstanceMigrateState(0, is, meta) + + if err != nil { + t.Fatalf("err: %#v", err) + } +} diff --git a/builtin/providers/aws/resource_aws_instance_test.go b/builtin/providers/aws/resource_aws_instance_test.go index e25d23542a2c..2dd863c10df4 100644 --- a/builtin/providers/aws/resource_aws_instance_test.go +++ 
b/builtin/providers/aws/resource_aws_instance_test.go @@ -5,24 +5,26 @@ import ( "reflect" "testing" + "github.com/hashicorp/aws-sdk-go/aws" + "github.com/hashicorp/aws-sdk-go/gen/ec2" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" "github.com/hashicorp/terraform/terraform" - "github.com/mitchellh/goamz/ec2" ) func TestAccAWSInstance_normal(t *testing.T) { var v ec2.Instance + var vol *ec2.Volume testCheck := func(*terraform.State) error { - if v.AvailZone != "us-west-2a" { - return fmt.Errorf("bad availability zone: %#v", v.AvailZone) + if *v.Placement.AvailabilityZone != "us-west-2a" { + return fmt.Errorf("bad availability zone: %#v", *v.Placement.AvailabilityZone) } if len(v.SecurityGroups) == 0 { return fmt.Errorf("no security groups: %#v", v.SecurityGroups) } - if v.SecurityGroups[0].Name != "tf_test_foo" { + if *v.SecurityGroups[0].GroupName != "tf_test_foo" { return fmt.Errorf("no security groups: %#v", v.SecurityGroups) } @@ -34,6 +36,21 @@ func TestAccAWSInstance_normal(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckInstanceDestroy, Steps: []resource.TestStep{ + // Create a volume to cover #1249 + resource.TestStep{ + // Need a resource in this config so the provisioner will be available + Config: testAccInstanceConfig_pre, + Check: func(*terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).ec2conn + var err error + vol, err = conn.CreateVolume(&ec2.CreateVolumeRequest{ + AvailabilityZone: aws.String("us-west-2a"), + Size: aws.Integer(5), + }) + return err + }, + }, + resource.TestStep{ Config: testAccInstanceConfig, Check: resource.ComposeTestCheckFunc( @@ -43,7 +60,9 @@ func TestAccAWSInstance_normal(t *testing.T) { resource.TestCheckResourceAttr( "aws_instance.foo", "user_data", - "0beec7b5ea3f0fdbc95d0dd47f3c5bc275da8a33"), + "3dc39dda39be1205215e776bad998da361a5955d"), + resource.TestCheckResourceAttr( + "aws_instance.foo", "ebs_block_device.#", "0"), ), }, @@ 
-59,9 +78,20 @@ func TestAccAWSInstance_normal(t *testing.T) { resource.TestCheckResourceAttr( "aws_instance.foo", "user_data", - "0beec7b5ea3f0fdbc95d0dd47f3c5bc275da8a33"), + "3dc39dda39be1205215e776bad998da361a5955d"), + resource.TestCheckResourceAttr( + "aws_instance.foo", "ebs_block_device.#", "0"), ), }, + + // Clean up volume created above + resource.TestStep{ + Config: testAccInstanceConfig, + Check: func(*terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).ec2conn + return conn.DeleteVolume(&ec2.DeleteVolumeRequest{VolumeID: vol.VolumeID}) + }, + }, }, }) } @@ -73,9 +103,9 @@ func TestAccAWSInstance_blockDevices(t *testing.T) { return func(*terraform.State) error { // Map out the block devices by name, which should be unique. - blockDevices := make(map[string]ec2.BlockDevice) - for _, blockDevice := range v.BlockDevices { - blockDevices[blockDevice.DeviceName] = blockDevice + blockDevices := make(map[string]ec2.InstanceBlockDeviceMapping) + for _, blockDevice := range v.BlockDeviceMappings { + blockDevices[*blockDevice.DeviceName] = blockDevice } // Check if the root block device exists. @@ -109,32 +139,32 @@ func TestAccAWSInstance_blockDevices(t *testing.T) { "aws_instance.foo", &v), resource.TestCheckResourceAttr( "aws_instance.foo", "root_block_device.#", "1"), - resource.TestCheckResourceAttr( - "aws_instance.foo", "root_block_device.0.device_name", "/dev/sda1"), resource.TestCheckResourceAttr( "aws_instance.foo", "root_block_device.0.volume_size", "11"), - // this one is important because it's the only root_block_device - // attribute that comes back from the API. 
so checking it verifies - // that we set state properly resource.TestCheckResourceAttr( "aws_instance.foo", "root_block_device.0.volume_type", "gp2"), resource.TestCheckResourceAttr( - "aws_instance.foo", "block_device.#", "2"), + "aws_instance.foo", "ebs_block_device.#", "2"), + resource.TestCheckResourceAttr( + "aws_instance.foo", "ebs_block_device.2576023345.device_name", "/dev/sdb"), + resource.TestCheckResourceAttr( + "aws_instance.foo", "ebs_block_device.2576023345.volume_size", "9"), resource.TestCheckResourceAttr( - "aws_instance.foo", "block_device.172787947.device_name", "/dev/sdb"), + "aws_instance.foo", "ebs_block_device.2576023345.volume_type", "standard"), resource.TestCheckResourceAttr( - "aws_instance.foo", "block_device.172787947.volume_size", "9"), + "aws_instance.foo", "ebs_block_device.2554893574.device_name", "/dev/sdc"), resource.TestCheckResourceAttr( - "aws_instance.foo", "block_device.172787947.iops", "0"), - // Check provisioned SSD device + "aws_instance.foo", "ebs_block_device.2554893574.volume_size", "10"), resource.TestCheckResourceAttr( - "aws_instance.foo", "block_device.3336996981.volume_type", "io1"), + "aws_instance.foo", "ebs_block_device.2554893574.volume_type", "io1"), resource.TestCheckResourceAttr( - "aws_instance.foo", "block_device.3336996981.device_name", "/dev/sdc"), + "aws_instance.foo", "ebs_block_device.2554893574.iops", "100"), resource.TestCheckResourceAttr( - "aws_instance.foo", "block_device.3336996981.volume_size", "10"), + "aws_instance.foo", "ephemeral_block_device.#", "1"), resource.TestCheckResourceAttr( - "aws_instance.foo", "block_device.3336996981.iops", "100"), + "aws_instance.foo", "ephemeral_block_device.1692014856.device_name", "/dev/sde"), + resource.TestCheckResourceAttr( + "aws_instance.foo", "ephemeral_block_device.1692014856.virtual_name", "ephemeral0"), testCheck(), ), }, @@ -147,8 +177,8 @@ func TestAccAWSInstance_sourceDestCheck(t *testing.T) { testCheck := func(enabled bool) 
resource.TestCheckFunc { return func(*terraform.State) error { - if v.SourceDestCheck != enabled { - return fmt.Errorf("bad source_dest_check: %#v", v.SourceDestCheck) + if *v.SourceDestCheck != enabled { + return fmt.Errorf("bad source_dest_check: %#v", *v.SourceDestCheck) } return nil @@ -206,7 +236,26 @@ func TestAccAWSInstance_vpc(t *testing.T) { }) } -func TestAccInstance_tags(t *testing.T) { +func TestAccAWSInstance_NetworkInstanceSecurityGroups(t *testing.T) { + var v ec2.Instance + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckInstanceDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccInstanceNetworkInstanceSecurityGroups, + Check: resource.ComposeTestCheckFunc( + testAccCheckInstanceExists( + "aws_instance.foo_instance", &v), + ), + }, + }, + }) +} + +func TestAccAWSInstance_tags(t *testing.T) { var v ec2.Instance resource.Test(t, resource.TestCase{ @@ -236,13 +285,13 @@ func TestAccInstance_tags(t *testing.T) { }) } -func TestAccInstance_privateIP(t *testing.T) { +func TestAccAWSInstance_privateIP(t *testing.T) { var v ec2.Instance testCheckPrivateIP := func() resource.TestCheckFunc { return func(*terraform.State) error { - if v.PrivateIpAddress != "10.1.1.42" { - return fmt.Errorf("bad private IP: %s", v.PrivateIpAddress) + if *v.PrivateIPAddress != "10.1.1.42" { + return fmt.Errorf("bad private IP: %s", *v.PrivateIPAddress) } return nil @@ -265,13 +314,13 @@ func TestAccInstance_privateIP(t *testing.T) { }) } -func TestAccInstance_associatePublicIPAndPrivateIP(t *testing.T) { +func TestAccAWSInstance_associatePublicIPAndPrivateIP(t *testing.T) { var v ec2.Instance testCheckPrivateIP := func() resource.TestCheckFunc { return func(*terraform.State) error { - if v.PrivateIpAddress != "10.1.1.42" { - return fmt.Errorf("bad private IP: %s", v.PrivateIpAddress) + if *v.PrivateIPAddress != "10.1.1.42" { + return fmt.Errorf("bad private IP: 
%s", *v.PrivateIPAddress) } return nil @@ -303,8 +352,9 @@ func testAccCheckInstanceDestroy(s *terraform.State) error { } // Try to find the resource - resp, err := conn.Instances( - []string{rs.Primary.ID}, ec2.NewFilter()) + resp, err := conn.DescribeInstances(&ec2.DescribeInstancesRequest{ + InstanceIDs: []string{rs.Primary.ID}, + }) if err == nil { if len(resp.Reservations) > 0 { return fmt.Errorf("still exist.") @@ -314,7 +364,7 @@ func testAccCheckInstanceDestroy(s *terraform.State) error { } // Verify the error is what we want - ec2err, ok := err.(*ec2.Error) + ec2err, ok := err.(aws.APIError) if !ok { return err } @@ -338,8 +388,9 @@ func testAccCheckInstanceExists(n string, i *ec2.Instance) resource.TestCheckFun } conn := testAccProvider.Meta().(*AWSClient).ec2conn - resp, err := conn.Instances( - []string{rs.Primary.ID}, ec2.NewFilter()) + resp, err := conn.DescribeInstances(&ec2.DescribeInstancesRequest{ + InstanceIDs: []string{rs.Primary.ID}, + }) if err != nil { return err } @@ -369,6 +420,20 @@ func TestInstanceTenancySchema(t *testing.T) { } } +const testAccInstanceConfig_pre = ` +resource "aws_security_group" "tf_test_foo" { + name = "tf_test_foo" + description = "foo" + + ingress { + protocol = "icmp" + from_port = -1 + to_port = -1 + cidr_blocks = ["0.0.0.0/0"] + } +} +` + const testAccInstanceConfig = ` resource "aws_security_group" "tf_test_foo" { name = "tf_test_foo" @@ -389,7 +454,7 @@ resource "aws_instance" "foo" { instance_type = "m1.small" security_groups = ["${aws_security_group.tf_test_foo.name}"] - user_data = "foo" + user_data = "foo:-with-character's" } ` @@ -398,21 +463,25 @@ resource "aws_instance" "foo" { # us-west-2 ami = "ami-55a7ea65" instance_type = "m1.small" + root_block_device { - device_name = "/dev/sda1" volume_type = "gp2" volume_size = 11 } - block_device { + ebs_block_device { device_name = "/dev/sdb" volume_size = 9 } - block_device { + ebs_block_device { device_name = "/dev/sdc" volume_size = 10 volume_type = "io1" 
iops = 100 } + ephemeral_block_device { + device_name = "/dev/sde" + virtual_name = "ephemeral0" + } } ` @@ -530,3 +599,49 @@ resource "aws_instance" "foo" { private_ip = "10.1.1.42" } ` + +const testAccInstanceNetworkInstanceSecurityGroups = ` +resource "aws_internet_gateway" "gw" { + vpc_id = "${aws_vpc.foo.id}" +} + +resource "aws_vpc" "foo" { + cidr_block = "10.1.0.0/16" + tags { + Name = "tf-network-test" + } +} + +resource "aws_security_group" "tf_test_foo" { + name = "tf_test_foo" + description = "foo" + vpc_id="${aws_vpc.foo.id}" + + ingress { + protocol = "icmp" + from_port = -1 + to_port = -1 + cidr_blocks = ["0.0.0.0/0"] + } +} + +resource "aws_subnet" "foo" { + cidr_block = "10.1.1.0/24" + vpc_id = "${aws_vpc.foo.id}" +} + +resource "aws_instance" "foo_instance" { + ami = "ami-21f78e11" + instance_type = "t1.micro" + security_groups = ["${aws_security_group.tf_test_foo.id}"] + subnet_id = "${aws_subnet.foo.id}" + associate_public_ip_address = true + depends_on = ["aws_internet_gateway.gw"] +} + +resource "aws_eip" "foo_eip" { + instance = "${aws_instance.foo_instance.id}" + vpc = true + depends_on = ["aws_internet_gateway.gw"] +} +` diff --git a/builtin/providers/aws/resource_aws_internet_gateway.go b/builtin/providers/aws/resource_aws_internet_gateway.go index 08f77a5c6414..3aeec2ebd464 100644 --- a/builtin/providers/aws/resource_aws_internet_gateway.go +++ b/builtin/providers/aws/resource_aws_internet_gateway.go @@ -5,8 +5,8 @@ import ( "log" "time" - "github.com/hashicorp/aws-sdk-go/aws" - "github.com/hashicorp/aws-sdk-go/gen/ec2" + "github.com/awslabs/aws-sdk-go/aws" + "github.com/awslabs/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" ) @@ -29,28 +29,33 @@ func resourceAwsInternetGateway() *schema.Resource { } func resourceAwsInternetGatewayCreate(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).awsEC2conn + conn := meta.(*AWSClient).ec2SDKconn 
// Create the gateway log.Printf("[DEBUG] Creating internet gateway") - resp, err := ec2conn.CreateInternetGateway(nil) + resp, err := conn.CreateInternetGateway(nil) if err != nil { return fmt.Errorf("Error creating internet gateway: %s", err) } // Get the ID and store it - ig := resp.InternetGateway + ig := *resp.InternetGateway d.SetId(*ig.InternetGatewayID) log.Printf("[INFO] InternetGateway ID: %s", d.Id()) + err = setTagsSDK(conn, d) + if err != nil { + return err + } + // Attach the new gateway to the correct vpc return resourceAwsInternetGatewayAttach(d, meta) } func resourceAwsInternetGatewayRead(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).awsEC2conn + conn := meta.(*AWSClient).ec2SDKconn - igRaw, _, err := IGStateRefreshFunc(ec2conn, d.Id())() + igRaw, _, err := IGStateRefreshFunc(conn, d.Id())() if err != nil { return err } @@ -86,9 +91,9 @@ func resourceAwsInternetGatewayUpdate(d *schema.ResourceData, meta interface{}) } } - ec2conn := meta.(*AWSClient).awsEC2conn + conn := meta.(*AWSClient).ec2SDKconn - if err := setTagsSDK(ec2conn, d); err != nil { + if err := setTagsSDK(conn, d); err != nil { return err } @@ -98,7 +103,7 @@ func resourceAwsInternetGatewayUpdate(d *schema.ResourceData, meta interface{}) } func resourceAwsInternetGatewayDelete(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).awsEC2conn + conn := meta.(*AWSClient).ec2SDKconn // Detach if it is attached if err := resourceAwsInternetGatewayDetach(d, meta); err != nil { @@ -108,7 +113,7 @@ func resourceAwsInternetGatewayDelete(d *schema.ResourceData, meta interface{}) log.Printf("[INFO] Deleting Internet Gateway: %s", d.Id()) return resource.Retry(5*time.Minute, func() error { - err := ec2conn.DeleteInternetGateway(&ec2.DeleteInternetGatewayRequest{ + _, err := conn.DeleteInternetGateway(&ec2.DeleteInternetGatewayInput{ InternetGatewayID: aws.String(d.Id()), }) if err == nil { @@ -132,7 +137,7 @@ func 
resourceAwsInternetGatewayDelete(d *schema.ResourceData, meta interface{}) } func resourceAwsInternetGatewayAttach(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).awsEC2conn + conn := meta.(*AWSClient).ec2SDKconn if d.Get("vpc_id").(string) == "" { log.Printf( @@ -146,7 +151,7 @@ func resourceAwsInternetGatewayAttach(d *schema.ResourceData, meta interface{}) d.Id(), d.Get("vpc_id").(string)) - err := ec2conn.AttachInternetGateway(&ec2.AttachInternetGatewayRequest{ + _, err := conn.AttachInternetGateway(&ec2.AttachInternetGatewayInput{ InternetGatewayID: aws.String(d.Id()), VPCID: aws.String(d.Get("vpc_id").(string)), }) @@ -164,7 +169,7 @@ func resourceAwsInternetGatewayAttach(d *schema.ResourceData, meta interface{}) stateConf := &resource.StateChangeConf{ Pending: []string{"detached", "attaching"}, Target: "available", - Refresh: IGAttachStateRefreshFunc(ec2conn, d.Id(), "available"), + Refresh: IGAttachStateRefreshFunc(conn, d.Id(), "available"), Timeout: 1 * time.Minute, } if _, err := stateConf.WaitForState(); err != nil { @@ -177,7 +182,7 @@ func resourceAwsInternetGatewayAttach(d *schema.ResourceData, meta interface{}) } func resourceAwsInternetGatewayDetach(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).awsEC2conn + conn := meta.(*AWSClient).ec2SDKconn // Get the old VPC ID to detach from vpcID, _ := d.GetChange("vpc_id") @@ -194,39 +199,14 @@ func resourceAwsInternetGatewayDetach(d *schema.ResourceData, meta interface{}) d.Id(), vpcID.(string)) - wait := true - err := ec2conn.DetachInternetGateway(&ec2.DetachInternetGatewayRequest{ - InternetGatewayID: aws.String(d.Id()), - VPCID: aws.String(vpcID.(string)), - }) - if err != nil { - ec2err, ok := err.(aws.APIError) - if ok { - if ec2err.Code == "InvalidInternetGatewayID.NotFound" { - err = nil - wait = false - } else if ec2err.Code == "Gateway.NotAttached" { - err = nil - wait = false - } - } - - if err != nil { - return err - } - } - - if 
!wait { - return nil - } - // Wait for it to be fully detached before continuing log.Printf("[DEBUG] Waiting for internet gateway (%s) to detach", d.Id()) stateConf := &resource.StateChangeConf{ - Pending: []string{"attached", "detaching", "available"}, + Pending: []string{"detaching"}, Target: "detached", - Refresh: IGAttachStateRefreshFunc(ec2conn, d.Id(), "detached"), - Timeout: 1 * time.Minute, + Refresh: detachIGStateRefreshFunc(conn, d.Id(), vpcID.(string)), + Timeout: 2 * time.Minute, + Delay: 10 * time.Second, } if _, err := stateConf.WaitForState(); err != nil { return fmt.Errorf( @@ -237,12 +217,38 @@ func resourceAwsInternetGatewayDetach(d *schema.ResourceData, meta interface{}) return nil } +// detachIGStateRefreshFunc returns a resource.StateRefreshFunc that attempts to +// detach an internet gateway and reports the state of the detachment. +func detachIGStateRefreshFunc(conn *ec2.EC2, instanceID, vpcID string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + _, err := conn.DetachInternetGateway(&ec2.DetachInternetGatewayInput{ + InternetGatewayID: aws.String(instanceID), + VPCID: aws.String(vpcID), + }) + if err != nil { + ec2err, ok := err.(aws.APIError) + if ok { + if ec2err.Code == "InvalidInternetGatewayID.NotFound" { + return nil, "Not Found", err + } else if ec2err.Code == "Gateway.NotAttached" { + return "detached", "detached", nil + } else if ec2err.Code == "DependencyViolation" { + return nil, "detaching", nil + } + } + } + // DetachInternetGateway only returns an error, so if it's nil, assume we're + // detached + return "detached", "detached", nil + } +} + // IGStateRefreshFunc returns a resource.StateRefreshFunc that is used to watch // an internet gateway. 
-func IGStateRefreshFunc(ec2conn *ec2.EC2, id string) resource.StateRefreshFunc { +func IGStateRefreshFunc(conn *ec2.EC2, id string) resource.StateRefreshFunc { return func() (interface{}, string, error) { - resp, err := ec2conn.DescribeInternetGateways(&ec2.DescribeInternetGatewaysRequest{ - InternetGatewayIDs: []string{id}, + resp, err := conn.DescribeInternetGateways(&ec2.DescribeInternetGatewaysInput{ + InternetGatewayIDs: []*string{aws.String(id)}, }) if err != nil { ec2err, ok := err.(aws.APIError) @@ -260,22 +266,22 @@ func IGStateRefreshFunc(ec2conn *ec2.EC2, id string) resource.StateRefreshFunc { return nil, "", nil } - ig := &resp.InternetGateways[0] + ig := resp.InternetGateways[0] return ig, "available", nil } } // IGAttachStateRefreshFunc returns a resource.StateRefreshFunc that is used // watch the state of an internet gateway's attachment. -func IGAttachStateRefreshFunc(ec2conn *ec2.EC2, id string, expected string) resource.StateRefreshFunc { +func IGAttachStateRefreshFunc(conn *ec2.EC2, id string, expected string) resource.StateRefreshFunc { var start time.Time return func() (interface{}, string, error) { if start.IsZero() { start = time.Now() } - resp, err := ec2conn.DescribeInternetGateways(&ec2.DescribeInternetGatewaysRequest{ - InternetGatewayIDs: []string{id}, + resp, err := conn.DescribeInternetGateways(&ec2.DescribeInternetGatewaysInput{ + InternetGatewayIDs: []*string{aws.String(id)}, }) if err != nil { ec2err, ok := err.(aws.APIError) @@ -293,11 +299,7 @@ func IGAttachStateRefreshFunc(ec2conn *ec2.EC2, id string, expected string) reso return nil, "", nil } - ig := &resp.InternetGateways[0] - - if time.Now().Sub(start) > 10*time.Second { - return ig, expected, nil - } + ig := resp.InternetGateways[0] if len(ig.Attachments) == 0 { // No attachments, we're detached diff --git a/builtin/providers/aws/resource_aws_internet_gateway_test.go b/builtin/providers/aws/resource_aws_internet_gateway_test.go index a990342f9cbe..63192554bf6a 100644 --- 
a/builtin/providers/aws/resource_aws_internet_gateway_test.go +++ b/builtin/providers/aws/resource_aws_internet_gateway_test.go @@ -4,13 +4,13 @@ import ( "fmt" "testing" - "github.com/hashicorp/aws-sdk-go/aws" - "github.com/hashicorp/aws-sdk-go/gen/ec2" + "github.com/awslabs/aws-sdk-go/aws" + "github.com/awslabs/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) -func TestAccAWSInternetGateway(t *testing.T) { +func TestAccAWSInternetGateway_basic(t *testing.T) { var v, v2 ec2.InternetGateway testNotEqual := func(*terraform.State) error { @@ -86,7 +86,7 @@ func TestAccAWSInternetGateway_delete(t *testing.T) { }) } -func TestAccInternetGateway_tags(t *testing.T) { +func TestAccAWSInternetGateway_tags(t *testing.T) { var v ec2.InternetGateway resource.Test(t, resource.TestCase{ @@ -98,6 +98,7 @@ func TestAccInternetGateway_tags(t *testing.T) { Config: testAccCheckInternetGatewayConfigTags, Check: resource.ComposeTestCheckFunc( testAccCheckInternetGatewayExists("aws_internet_gateway.foo", &v), + testAccCheckTagsSDK(&v.Tags, "foo", "bar"), ), }, @@ -114,7 +115,7 @@ func TestAccInternetGateway_tags(t *testing.T) { } func testAccCheckInternetGatewayDestroy(s *terraform.State) error { - ec2conn := testAccProvider.Meta().(*AWSClient).awsEC2conn + conn := testAccProvider.Meta().(*AWSClient).ec2SDKconn for _, rs := range s.RootModule().Resources { if rs.Type != "aws_internet_gateway" { @@ -122,8 +123,8 @@ func testAccCheckInternetGatewayDestroy(s *terraform.State) error { } // Try to find the resource - resp, err := ec2conn.DescribeInternetGateways(&ec2.DescribeInternetGatewaysRequest{ - InternetGatewayIDs: []string{rs.Primary.ID}, + resp, err := conn.DescribeInternetGateways(&ec2.DescribeInternetGatewaysInput{ + InternetGatewayIDs: []*string{aws.String(rs.Primary.ID)}, }) if err == nil { if len(resp.InternetGateways) > 0 { @@ -157,9 +158,9 @@ func testAccCheckInternetGatewayExists(n string, ig 
*ec2.InternetGateway) resour return fmt.Errorf("No ID is set") } - ec2conn := testAccProvider.Meta().(*AWSClient).awsEC2conn - resp, err := ec2conn.DescribeInternetGateways(&ec2.DescribeInternetGatewaysRequest{ - InternetGatewayIDs: []string{rs.Primary.ID}, + conn := testAccProvider.Meta().(*AWSClient).ec2SDKconn + resp, err := conn.DescribeInternetGateways(&ec2.DescribeInternetGatewaysInput{ + InternetGatewayIDs: []*string{aws.String(rs.Primary.ID)}, }) if err != nil { return err @@ -168,7 +169,7 @@ func testAccCheckInternetGatewayExists(n string, ig *ec2.InternetGateway) resour return fmt.Errorf("InternetGateway not found") } - *ig = resp.InternetGateways[0] + *ig = *resp.InternetGateways[0] return nil } diff --git a/builtin/providers/aws/resource_aws_key_pair.go b/builtin/providers/aws/resource_aws_key_pair.go index 573a935670ef..13de149009ee 100644 --- a/builtin/providers/aws/resource_aws_key_pair.go +++ b/builtin/providers/aws/resource_aws_key_pair.go @@ -1,13 +1,12 @@ package aws import ( - "encoding/base64" "fmt" "github.com/hashicorp/terraform/helper/schema" - "github.com/hashicorp/aws-sdk-go/aws" - "github.com/hashicorp/aws-sdk-go/gen/ec2" + "github.com/awslabs/aws-sdk-go/aws" + "github.com/awslabs/aws-sdk-go/service/ec2" ) func resourceAwsKeyPair() *schema.Resource { @@ -37,15 +36,15 @@ func resourceAwsKeyPair() *schema.Resource { } func resourceAwsKeyPairCreate(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).awsEC2conn + conn := meta.(*AWSClient).ec2SDKconn keyName := d.Get("key_name").(string) publicKey := d.Get("public_key").(string) - req := &ec2.ImportKeyPairRequest{ + req := &ec2.ImportKeyPairInput{ KeyName: aws.String(keyName), - PublicKeyMaterial: []byte(base64.StdEncoding.EncodeToString([]byte(publicKey))), + PublicKeyMaterial: []byte(publicKey), } - resp, err := ec2conn.ImportKeyPair(req) + resp, err := conn.ImportKeyPair(req) if err != nil { return fmt.Errorf("Error import KeyPair: %s", err) } @@ -55,12 +54,11 
@@ func resourceAwsKeyPairCreate(d *schema.ResourceData, meta interface{}) error { } func resourceAwsKeyPairRead(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).awsEC2conn - - req := &ec2.DescribeKeyPairsRequest{ - KeyNames: []string{d.Id()}, + conn := meta.(*AWSClient).ec2SDKconn + req := &ec2.DescribeKeyPairsInput{ + KeyNames: []*string{aws.String(d.Id())}, } - resp, err := ec2conn.DescribeKeyPairs(req) + resp, err := conn.DescribeKeyPairs(req) if err != nil { return fmt.Errorf("Error retrieving KeyPair: %s", err) } @@ -77,9 +75,9 @@ func resourceAwsKeyPairRead(d *schema.ResourceData, meta interface{}) error { } func resourceAwsKeyPairDelete(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).awsEC2conn + conn := meta.(*AWSClient).ec2SDKconn - err := ec2conn.DeleteKeyPair(&ec2.DeleteKeyPairRequest{ + _, err := conn.DeleteKeyPair(&ec2.DeleteKeyPairInput{ KeyName: aws.String(d.Id()), }) return err diff --git a/builtin/providers/aws/resource_aws_key_pair_test.go b/builtin/providers/aws/resource_aws_key_pair_test.go index b601d479a1f0..851bca36e5fa 100644 --- a/builtin/providers/aws/resource_aws_key_pair_test.go +++ b/builtin/providers/aws/resource_aws_key_pair_test.go @@ -4,8 +4,8 @@ import ( "fmt" "testing" - "github.com/hashicorp/aws-sdk-go/aws" - "github.com/hashicorp/aws-sdk-go/gen/ec2" + "github.com/awslabs/aws-sdk-go/aws" + "github.com/awslabs/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) @@ -30,7 +30,7 @@ func TestAccAWSKeyPair_normal(t *testing.T) { } func testAccCheckAWSKeyPairDestroy(s *terraform.State) error { - ec2conn := testAccProvider.Meta().(*AWSClient).awsEC2conn + ec2SDKconn := testAccProvider.Meta().(*AWSClient).ec2SDKconn for _, rs := range s.RootModule().Resources { if rs.Type != "aws_key_pair" { @@ -38,8 +38,8 @@ func testAccCheckAWSKeyPairDestroy(s *terraform.State) error { } // Try to find key pair - resp, err 
:= ec2conn.DescribeKeyPairs(&ec2.DescribeKeyPairsRequest{ - KeyNames: []string{rs.Primary.ID}, + resp, err := ec2SDKconn.DescribeKeyPairs(&ec2.DescribeKeyPairsInput{ + KeyNames: []*string{aws.String(rs.Primary.ID)}, }) if err == nil { if len(resp.KeyPairs) > 0 { @@ -81,10 +81,10 @@ func testAccCheckAWSKeyPairExists(n string, res *ec2.KeyPairInfo) resource.TestC return fmt.Errorf("No KeyPair name is set") } - ec2conn := testAccProvider.Meta().(*AWSClient).awsEC2conn + ec2SDKconn := testAccProvider.Meta().(*AWSClient).ec2SDKconn - resp, err := ec2conn.DescribeKeyPairs(&ec2.DescribeKeyPairsRequest{ - KeyNames: []string{rs.Primary.ID}, + resp, err := ec2SDKconn.DescribeKeyPairs(&ec2.DescribeKeyPairsInput{ + KeyNames: []*string{aws.String(rs.Primary.ID)}, }) if err != nil { return err @@ -94,7 +94,7 @@ func testAccCheckAWSKeyPairExists(n string, res *ec2.KeyPairInfo) resource.TestC return fmt.Errorf("KeyPair not found") } - *res = resp.KeyPairs[0] + *res = *resp.KeyPairs[0] return nil } diff --git a/builtin/providers/aws/resource_aws_launch_configuration.go b/builtin/providers/aws/resource_aws_launch_configuration.go index e6b2f37425f8..a7e45ae275f7 100644 --- a/builtin/providers/aws/resource_aws_launch_configuration.go +++ b/builtin/providers/aws/resource_aws_launch_configuration.go @@ -1,6 +1,7 @@ package aws import ( + "bytes" "crypto/sha1" "encoding/base64" "encoding/hex" @@ -10,6 +11,7 @@ import ( "github.com/hashicorp/aws-sdk-go/aws" "github.com/hashicorp/aws-sdk-go/gen/autoscaling" + "github.com/hashicorp/aws-sdk-go/gen/ec2" "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" @@ -24,7 +26,8 @@ func resourceAwsLaunchConfiguration() *schema.Resource { Schema: map[string]*schema.Schema{ "name": &schema.Schema{ Type: schema.TypeString, - Required: true, + Optional: true, + Computed: true, ForceNew: true, }, @@ -81,6 +84,7 @@ func resourceAwsLaunchConfiguration() 
*schema.Resource { "associate_public_ip_address": &schema.Schema{ Type: schema.TypeBool, Optional: true, + ForceNew: true, Default: false, }, @@ -89,27 +93,182 @@ func resourceAwsLaunchConfiguration() *schema.Resource { Optional: true, ForceNew: true, }, + + "ebs_optimized": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + Computed: true, + }, + + "placement_tenancy": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "ebs_block_device": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "delete_on_termination": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: true, + ForceNew: true, + }, + + "device_name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "iops": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Computed: true, + ForceNew: true, + }, + + "snapshot_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + + "volume_size": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Computed: true, + ForceNew: true, + }, + + "volume_type": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + }, + }, + Set: func(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + buf.WriteString(fmt.Sprintf("%s-", m["device_name"].(string))) + buf.WriteString(fmt.Sprintf("%s-", m["snapshot_id"].(string))) + return hashcode.String(buf.String()) + }, + }, + + "ephemeral_block_device": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "device_name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "virtual_name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + }, + }, + Set: func(v 
interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + buf.WriteString(fmt.Sprintf("%s-", m["device_name"].(string))) + buf.WriteString(fmt.Sprintf("%s-", m["virtual_name"].(string))) + return hashcode.String(buf.String()) + }, + }, + + "root_block_device": &schema.Schema{ + // TODO: This is a set because we don't support singleton + // sub-resources today. We'll enforce that the set only ever has + // length zero or one below. When TF gains support for + // sub-resources this can be converted. + Type: schema.TypeSet, + Optional: true, + Computed: true, + Elem: &schema.Resource{ + // "You can only modify the volume size, volume type, and Delete on + // Termination flag on the block device mapping entry for the root + // device volume." - bit.ly/ec2bdmap + Schema: map[string]*schema.Schema{ + "delete_on_termination": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: true, + ForceNew: true, + }, + + "iops": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Computed: true, + ForceNew: true, + }, + + "volume_size": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Computed: true, + ForceNew: true, + }, + + "volume_type": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + }, + }, + Set: func(v interface{}) int { + // there can be only one root device; no need to hash anything + return 0 + }, + }, }, } } func resourceAwsLaunchConfigurationCreate(d *schema.ResourceData, meta interface{}) error { autoscalingconn := meta.(*AWSClient).autoscalingconn + ec2conn := meta.(*AWSClient).ec2conn - var createLaunchConfigurationOpts autoscaling.CreateLaunchConfigurationType - createLaunchConfigurationOpts.LaunchConfigurationName = aws.String(d.Get("name").(string)) - createLaunchConfigurationOpts.ImageID = aws.String(d.Get("image_id").(string)) - createLaunchConfigurationOpts.InstanceType = aws.String(d.Get("instance_type").(string)) + createLaunchConfigurationOpts := 
autoscaling.CreateLaunchConfigurationType{ + LaunchConfigurationName: aws.String(d.Get("name").(string)), + ImageID: aws.String(d.Get("image_id").(string)), + InstanceType: aws.String(d.Get("instance_type").(string)), + EBSOptimized: aws.Boolean(d.Get("ebs_optimized").(bool)), + } if v, ok := d.GetOk("user_data"); ok { - createLaunchConfigurationOpts.UserData = aws.String(base64.StdEncoding.EncodeToString([]byte(v.(string)))) - } - if v, ok := d.GetOk("associate_public_ip_address"); ok { - createLaunchConfigurationOpts.AssociatePublicIPAddress = aws.Boolean(v.(bool)) + userData := base64.StdEncoding.EncodeToString([]byte(v.(string))) + createLaunchConfigurationOpts.UserData = aws.String(userData) } + if v, ok := d.GetOk("iam_instance_profile"); ok { createLaunchConfigurationOpts.IAMInstanceProfile = aws.String(v.(string)) } + + if v, ok := d.GetOk("placement_tenancy"); ok { + createLaunchConfigurationOpts.PlacementTenancy = aws.String(v.(string)) + } + + if v, ok := d.GetOk("associate_public_ip_address"); ok { + createLaunchConfigurationOpts.AssociatePublicIPAddress = aws.Boolean(v.(bool)) + } + if v, ok := d.GetOk("key_name"); ok { createLaunchConfigurationOpts.KeyName = aws.String(v.(string)) } @@ -119,16 +278,111 @@ func resourceAwsLaunchConfigurationCreate(d *schema.ResourceData, meta interface if v, ok := d.GetOk("security_groups"); ok { createLaunchConfigurationOpts.SecurityGroups = expandStringList( - v.(*schema.Set).List()) + v.(*schema.Set).List(), + ) + } + + var blockDevices []autoscaling.BlockDeviceMapping + + if v, ok := d.GetOk("ebs_block_device"); ok { + vL := v.(*schema.Set).List() + for _, v := range vL { + bd := v.(map[string]interface{}) + ebs := &autoscaling.EBS{ + DeleteOnTermination: aws.Boolean(bd["delete_on_termination"].(bool)), + } + + if v, ok := bd["snapshot_id"].(string); ok && v != "" { + ebs.SnapshotID = aws.String(v) + } + + if v, ok := bd["volume_size"].(int); ok && v != 0 { + ebs.VolumeSize = aws.Integer(v) + } + + if v, ok := 
bd["volume_type"].(string); ok && v != "" { + ebs.VolumeType = aws.String(v) + } + + if v, ok := bd["iops"].(int); ok && v > 0 { + ebs.IOPS = aws.Integer(v) + } + + blockDevices = append(blockDevices, autoscaling.BlockDeviceMapping{ + DeviceName: aws.String(bd["device_name"].(string)), + EBS: ebs, + }) + } + } + + if v, ok := d.GetOk("ephemeral_block_device"); ok { + vL := v.(*schema.Set).List() + for _, v := range vL { + bd := v.(map[string]interface{}) + blockDevices = append(blockDevices, autoscaling.BlockDeviceMapping{ + DeviceName: aws.String(bd["device_name"].(string)), + VirtualName: aws.String(bd["virtual_name"].(string)), + }) + } + } + + if v, ok := d.GetOk("root_block_device"); ok { + vL := v.(*schema.Set).List() + if len(vL) > 1 { + return fmt.Errorf("Cannot specify more than one root_block_device.") + } + for _, v := range vL { + bd := v.(map[string]interface{}) + ebs := &autoscaling.EBS{ + DeleteOnTermination: aws.Boolean(bd["delete_on_termination"].(bool)), + } + + if v, ok := bd["volume_size"].(int); ok && v != 0 { + ebs.VolumeSize = aws.Integer(v) + } + + if v, ok := bd["volume_type"].(string); ok && v != "" { + ebs.VolumeType = aws.String(v) + } + + if v, ok := bd["iops"].(int); ok && v > 0 { + ebs.IOPS = aws.Integer(v) + } + + if dn, err := fetchRootDeviceName(d.Get("image_id").(string), ec2conn); err == nil { + blockDevices = append(blockDevices, autoscaling.BlockDeviceMapping{ + DeviceName: dn, + EBS: ebs, + }) + } else { + return err + } + } + } + + if len(blockDevices) > 0 { + createLaunchConfigurationOpts.BlockDeviceMappings = blockDevices + } + + var id string + if v, ok := d.GetOk("name"); ok { + id = v.(string) + } else { + hash := sha1.Sum([]byte(fmt.Sprintf("%#v", createLaunchConfigurationOpts))) + configName := fmt.Sprintf("terraform-%s", base64.URLEncoding.EncodeToString(hash[:])) + log.Printf("[DEBUG] Computed Launch config name: %s", configName) + id = configName } + createLaunchConfigurationOpts.LaunchConfigurationName = 
aws.String(id) - log.Printf("[DEBUG] autoscaling create launch configuration: %#v", createLaunchConfigurationOpts) + log.Printf( + "[DEBUG] autoscaling create launch configuration: %#v", createLaunchConfigurationOpts) err := autoscalingconn.CreateLaunchConfiguration(&createLaunchConfigurationOpts) if err != nil { return fmt.Errorf("Error creating launch configuration: %s", err) } - d.SetId(d.Get("name").(string)) + d.SetId(id) log.Printf("[INFO] launch configuration ID: %s", d.Id()) // We put a Retry here since sometimes eventual consistency bites @@ -140,6 +394,7 @@ func resourceAwsLaunchConfigurationCreate(d *schema.ResourceData, meta interface func resourceAwsLaunchConfigurationRead(d *schema.ResourceData, meta interface{}) error { autoscalingconn := meta.(*AWSClient).autoscalingconn + ec2conn := meta.(*AWSClient).ec2conn describeOpts := autoscaling.LaunchConfigurationNamesType{ LaunchConfigurationNames: []string{d.Id()}, @@ -164,28 +419,20 @@ func resourceAwsLaunchConfigurationRead(d *schema.ResourceData, meta interface{} lc := describConfs.LaunchConfigurations[0] - d.Set("key_name", *lc.KeyName) - d.Set("image_id", *lc.ImageID) - d.Set("instance_type", *lc.InstanceType) - d.Set("name", *lc.LaunchConfigurationName) + d.Set("key_name", lc.KeyName) + d.Set("image_id", lc.ImageID) + d.Set("instance_type", lc.InstanceType) + d.Set("name", lc.LaunchConfigurationName) - if lc.IAMInstanceProfile != nil { - d.Set("iam_instance_profile", *lc.IAMInstanceProfile) - } else { - d.Set("iam_instance_profile", nil) - } + d.Set("iam_instance_profile", lc.IAMInstanceProfile) + d.Set("ebs_optimized", lc.EBSOptimized) + d.Set("spot_price", lc.SpotPrice) + d.Set("security_groups", lc.SecurityGroups) - if lc.SpotPrice != nil { - d.Set("spot_price", *lc.SpotPrice) - } else { - d.Set("spot_price", nil) + if err := readLCBlockDevices(d, &lc, ec2conn); err != nil { + return err } - if lc.SecurityGroups != nil { - d.Set("security_groups", lc.SecurityGroups) - } else { - 
d.Set("security_groups", nil) - } return nil } @@ -206,3 +453,73 @@ func resourceAwsLaunchConfigurationDelete(d *schema.ResourceData, meta interface return nil } + +func readLCBlockDevices(d *schema.ResourceData, lc *autoscaling.LaunchConfiguration, ec2conn *ec2.EC2) error { + ibds, err := readBlockDevicesFromLaunchConfiguration(d, lc, ec2conn) + if err != nil { + return err + } + + if err := d.Set("ebs_block_device", ibds["ebs"]); err != nil { + return err + } + if err := d.Set("ephemeral_block_device", ibds["ephemeral"]); err != nil { + return err + } + if ibds["root"] != nil { + if err := d.Set("root_block_device", []interface{}{ibds["root"]}); err != nil { + return err + } + } else { + d.Set("root_block_device", []interface{}{}) + } + + return nil +} + +func readBlockDevicesFromLaunchConfiguration(d *schema.ResourceData, lc *autoscaling.LaunchConfiguration, ec2conn *ec2.EC2) ( + map[string]interface{}, error) { + blockDevices := make(map[string]interface{}) + blockDevices["ebs"] = make([]map[string]interface{}, 0) + blockDevices["ephemeral"] = make([]map[string]interface{}, 0) + blockDevices["root"] = nil + if len(lc.BlockDeviceMappings) == 0 { + return nil, nil + } + rootDeviceName, err := fetchRootDeviceName(d.Get("image_id").(string), ec2conn) + if err != nil { + return nil, err + } + for _, bdm := range lc.BlockDeviceMappings { + bd := make(map[string]interface{}) + if bdm.EBS != nil && bdm.EBS.DeleteOnTermination != nil { + bd["delete_on_termination"] = *bdm.EBS.DeleteOnTermination + } + if bdm.EBS != nil && bdm.EBS.VolumeSize != nil { + bd["volume_size"] = *bdm.EBS.VolumeSize + } + if bdm.EBS != nil && bdm.EBS.VolumeType != nil { + bd["volume_type"] = *bdm.EBS.VolumeType + } + if bdm.EBS != nil && bdm.EBS.IOPS != nil { + bd["iops"] = *bdm.EBS.IOPS + } + if bdm.DeviceName != nil && *bdm.DeviceName == *rootDeviceName { + blockDevices["root"] = bd + } else { + if bdm.DeviceName != nil { + bd["device_name"] = *bdm.DeviceName + } + if bdm.VirtualName != nil { 
+ bd["virtual_name"] = *bdm.VirtualName + blockDevices["ephemeral"] = append(blockDevices["ephemeral"].([]map[string]interface{}), bd) + } else { + if bdm.EBS != nil && bdm.EBS.SnapshotID != nil { + bd["snapshot_id"] = *bdm.EBS.SnapshotID + } + blockDevices["ebs"] = append(blockDevices["ebs"].([]map[string]interface{}), bd) + } + } + } + return blockDevices, nil +} diff --git a/builtin/providers/aws/resource_aws_launch_configuration_test.go b/builtin/providers/aws/resource_aws_launch_configuration_test.go index 500d3ca07b43..f300ad258de3 100644 --- a/builtin/providers/aws/resource_aws_launch_configuration_test.go +++ b/builtin/providers/aws/resource_aws_launch_configuration_test.go @@ -2,7 +2,10 @@ package aws import ( "fmt" + "math/rand" + "strings" "testing" + "time" "github.com/hashicorp/aws-sdk-go/aws" "github.com/hashicorp/aws-sdk-go/gen/autoscaling" @@ -10,7 +13,7 @@ import ( "github.com/hashicorp/terraform/terraform" ) -func TestAccAWSLaunchConfiguration(t *testing.T) { +func TestAccAWSLaunchConfiguration_withBlockDevices(t *testing.T) { var conf autoscaling.LaunchConfiguration resource.Test(t, resource.TestCase{ @@ -26,21 +29,29 @@ func TestAccAWSLaunchConfiguration(t *testing.T) { resource.TestCheckResourceAttr( "aws_launch_configuration.bar", "image_id", "ami-21f78e11"), resource.TestCheckResourceAttr( - "aws_launch_configuration.bar", "name", "foobar-terraform-test"), - resource.TestCheckResourceAttr( - "aws_launch_configuration.bar", "instance_type", "t1.micro"), + "aws_launch_configuration.bar", "instance_type", "m1.small"), resource.TestCheckResourceAttr( "aws_launch_configuration.bar", "associate_public_ip_address", "true"), resource.TestCheckResourceAttr( "aws_launch_configuration.bar", "spot_price", ""), ), }, + }, + }) +} +func TestAccAWSLaunchConfiguration_withSpotPrice(t *testing.T) { + var conf autoscaling.LaunchConfiguration + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + 
CheckDestroy: testAccCheckAWSLaunchConfigurationDestroy, + Steps: []resource.TestStep{ resource.TestStep{ - Config: TestAccAWSLaunchConfigurationWithSpotPriceConfig, + Config: testAccAWSLaunchConfigurationWithSpotPriceConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAWSLaunchConfigurationExists("aws_launch_configuration.bar", &conf), - testAccCheckAWSLaunchConfigurationAttributes(&conf), resource.TestCheckResourceAttr( "aws_launch_configuration.bar", "spot_price", "0.01"), ), @@ -49,6 +60,44 @@ func TestAccAWSLaunchConfiguration(t *testing.T) { }) } +func TestAccAWSLaunchConfiguration_withGeneratedName(t *testing.T) { + var conf autoscaling.LaunchConfiguration + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSLaunchConfigurationDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSLaunchConfigurationNoNameConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSLaunchConfigurationExists("aws_launch_configuration.bar", &conf), + testAccCheckAWSLaunchConfigurationGeneratedNamePrefix( + "aws_launch_configuration.bar", "terraform-"), + ), + }, + }, + }) +} + +func testAccCheckAWSLaunchConfigurationGeneratedNamePrefix( + resource, prefix string) resource.TestCheckFunc { + return func(s *terraform.State) error { + r, ok := s.RootModule().Resources[resource] + if !ok { + return fmt.Errorf("Resource not found") + } + name, ok := r.Primary.Attributes["name"] + if !ok { + return fmt.Errorf("Name attr not found: %#v", r.Primary.Attributes) + } + if !strings.HasPrefix(name, prefix) { + return fmt.Errorf("Name: %q, does not have prefix: %q", name, prefix) + } + return nil + } +} + func testAccCheckAWSLaunchConfigurationDestroy(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).autoscalingconn @@ -88,14 +137,40 @@ func testAccCheckAWSLaunchConfigurationAttributes(conf *autoscaling.LaunchConfig return fmt.Errorf("Bad 
image_id: %s", *conf.ImageID)
 	}
 
-	if *conf.LaunchConfigurationName != "foobar-terraform-test" {
+	if !strings.HasPrefix(*conf.LaunchConfigurationName, "terraform-") {
 		return fmt.Errorf("Bad name: %s", *conf.LaunchConfigurationName)
 	}
 
-	if *conf.InstanceType != "t1.micro" {
+	if *conf.InstanceType != "m1.small" {
 		return fmt.Errorf("Bad instance_type: %s", *conf.InstanceType)
 	}
 
+	// Map out the block devices by name, which should be unique.
+	blockDevices := make(map[string]autoscaling.BlockDeviceMapping)
+	for _, blockDevice := range conf.BlockDeviceMappings {
+		blockDevices[*blockDevice.DeviceName] = blockDevice
+	}
+
+	// Check if the root block device exists.
+	if _, ok := blockDevices["/dev/sda1"]; !ok {
+		return fmt.Errorf("block device doesn't exist: /dev/sda1")
+	}
+
+	// Check if the secondary block device exists.
+	if _, ok := blockDevices["/dev/sdb"]; !ok {
+		return fmt.Errorf("block device doesn't exist: /dev/sdb")
+	}
+
+	// Check if the third block device exists.
+	if _, ok := blockDevices["/dev/sdc"]; !ok {
+		return fmt.Errorf("block device doesn't exist: /dev/sdc")
+	}
+
 		return nil
 	}
 }
@@ -133,23 +208,49 @@ func testAccCheckAWSLaunchConfigurationExists(n string, res *autoscaling.LaunchC
 	}
 }
 
-const testAccAWSLaunchConfigurationConfig = `
+var testAccAWSLaunchConfigurationConfig = fmt.Sprintf(`
 resource "aws_launch_configuration" "bar" {
-  name = "foobar-terraform-test"
+  name = "terraform-test-%d"
   image_id = "ami-21f78e11"
-  instance_type = "t1.micro"
+  instance_type = "m1.small"
   user_data = "foobar-user-data"
   associate_public_ip_address = true
+
+  root_block_device {
+    volume_type = "gp2"
+    volume_size = 11
+  }
+  ebs_block_device {
+    device_name = "/dev/sdb"
+    volume_size = 9
+  }
+  ebs_block_device {
+    device_name = "/dev/sdc"
+    volume_size = 10
+    volume_type = "io1"
+    iops = 100
+  }
+  ephemeral_block_device {
+    device_name = "/dev/sde"
+    virtual_name = "ephemeral0"
+  }
 }
-`
+`, rand.New(rand.NewSource(time.Now().UnixNano())).Int())
 
-const TestAccAWSLaunchConfigurationWithSpotPriceConfig = `
+var testAccAWSLaunchConfigurationWithSpotPriceConfig = fmt.Sprintf(`
 resource "aws_launch_configuration" "bar" {
-  name = "foobar-terraform-test"
+  name = "terraform-test-%d"
   image_id = "ami-21f78e11"
   instance_type = "t1.micro"
-  user_data = "foobar-user-data"
-  associate_public_ip_address = true
   spot_price = "0.01"
 }
+`, rand.New(rand.NewSource(time.Now().UnixNano())).Int())
+
+const testAccAWSLaunchConfigurationNoNameConfig = `
+resource "aws_launch_configuration" "bar" {
+  image_id = "ami-21f78e11"
+  instance_type = "t1.micro"
+  user_data = "foobar-user-data-change"
+  associate_public_ip_address = false
+}
 `
diff --git a/builtin/providers/aws/resource_aws_main_route_table_association.go b/builtin/providers/aws/resource_aws_main_route_table_association.go
index f656f3760af3..23fbd5f0e073 100644
--- a/builtin/providers/aws/resource_aws_main_route_table_association.go
+++ 
b/builtin/providers/aws/resource_aws_main_route_table_association.go @@ -4,8 +4,9 @@ import ( "fmt" "log" + "github.com/awslabs/aws-sdk-go/aws" + "github.com/awslabs/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/schema" - "github.com/mitchellh/goamz/ec2" ) func resourceAwsMainRouteTableAssociation() *schema.Resource { @@ -39,43 +40,43 @@ func resourceAwsMainRouteTableAssociation() *schema.Resource { } func resourceAwsMainRouteTableAssociationCreate(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).ec2conn + conn := meta.(*AWSClient).ec2SDKconn vpcId := d.Get("vpc_id").(string) routeTableId := d.Get("route_table_id").(string) log.Printf("[INFO] Creating main route table association: %s => %s", vpcId, routeTableId) - mainAssociation, err := findMainRouteTableAssociation(ec2conn, vpcId) + mainAssociation, err := findMainRouteTableAssociation(conn, vpcId) if err != nil { return err } - resp, err := ec2conn.ReassociateRouteTable( - mainAssociation.AssociationId, - routeTableId, - ) + resp, err := conn.ReplaceRouteTableAssociation(&ec2.ReplaceRouteTableAssociationInput{ + AssociationID: mainAssociation.RouteTableAssociationID, + RouteTableID: aws.String(routeTableId), + }) if err != nil { return err } - d.Set("original_route_table_id", mainAssociation.RouteTableId) - d.SetId(resp.AssociationId) + d.Set("original_route_table_id", mainAssociation.RouteTableID) + d.SetId(*resp.NewAssociationID) log.Printf("[INFO] New main route table association ID: %s", d.Id()) return nil } func resourceAwsMainRouteTableAssociationRead(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).ec2conn + conn := meta.(*AWSClient).ec2SDKconn mainAssociation, err := findMainRouteTableAssociation( - ec2conn, + conn, d.Get("vpc_id").(string)) if err != nil { return err } - if mainAssociation.AssociationId != d.Id() { + if *mainAssociation.RouteTableAssociationID != d.Id() { // It seems it doesn't exist anymore, so clear the 
ID d.SetId("") } @@ -87,25 +88,28 @@ func resourceAwsMainRouteTableAssociationRead(d *schema.ResourceData, meta inter // original_route_table_id - this needs to stay recorded as the AWS-created // table from VPC creation. func resourceAwsMainRouteTableAssociationUpdate(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).ec2conn + conn := meta.(*AWSClient).ec2SDKconn vpcId := d.Get("vpc_id").(string) routeTableId := d.Get("route_table_id").(string) log.Printf("[INFO] Updating main route table association: %s => %s", vpcId, routeTableId) - resp, err := ec2conn.ReassociateRouteTable(d.Id(), routeTableId) + resp, err := conn.ReplaceRouteTableAssociation(&ec2.ReplaceRouteTableAssociationInput{ + AssociationID: aws.String(d.Id()), + RouteTableID: aws.String(routeTableId), + }) if err != nil { return err } - d.SetId(resp.AssociationId) + d.SetId(*resp.NewAssociationID) log.Printf("[INFO] New main route table association ID: %s", d.Id()) return nil } func resourceAwsMainRouteTableAssociationDelete(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).ec2conn + conn := meta.(*AWSClient).ec2SDKconn vpcId := d.Get("vpc_id").(string) originalRouteTableId := d.Get("original_route_table_id").(string) @@ -113,35 +117,45 @@ func resourceAwsMainRouteTableAssociationDelete(d *schema.ResourceData, meta int vpcId, originalRouteTableId) - resp, err := ec2conn.ReassociateRouteTable(d.Id(), originalRouteTableId) + resp, err := conn.ReplaceRouteTableAssociation(&ec2.ReplaceRouteTableAssociationInput{ + AssociationID: aws.String(d.Id()), + RouteTableID: aws.String(originalRouteTableId), + }) if err != nil { return err } - log.Printf("[INFO] Resulting Association ID: %s", resp.AssociationId) + log.Printf("[INFO] Resulting Association ID: %s", *resp.NewAssociationID) return nil } -func findMainRouteTableAssociation(ec2conn *ec2.EC2, vpcId string) (*ec2.RouteTableAssociation, error) { - mainRouteTable, err := findMainRouteTable(ec2conn, 
vpcId) +func findMainRouteTableAssociation(conn *ec2.EC2, vpcId string) (*ec2.RouteTableAssociation, error) { + mainRouteTable, err := findMainRouteTable(conn, vpcId) if err != nil { return nil, err } for _, a := range mainRouteTable.Associations { - if a.Main { - return &a, nil + if *a.Main { + return a, nil } } return nil, fmt.Errorf("Could not find main routing table association for VPC: %s", vpcId) } -func findMainRouteTable(ec2conn *ec2.EC2, vpcId string) (*ec2.RouteTable, error) { - filter := ec2.NewFilter() - filter.Add("association.main", "true") - filter.Add("vpc-id", vpcId) - routeResp, err := ec2conn.DescribeRouteTables(nil, filter) +func findMainRouteTable(conn *ec2.EC2, vpcId string) (*ec2.RouteTable, error) { + mainFilter := &ec2.Filter{ + Name: aws.String("association.main"), + Values: []*string{aws.String("true")}, + } + vpcFilter := &ec2.Filter{ + Name: aws.String("vpc-id"), + Values: []*string{aws.String(vpcId)}, + } + routeResp, err := conn.DescribeRouteTables(&ec2.DescribeRouteTablesInput{ + Filters: []*ec2.Filter{mainFilter, vpcFilter}, + }) if err != nil { return nil, err } else if len(routeResp.RouteTables) != 1 { @@ -151,5 +165,5 @@ func findMainRouteTable(ec2conn *ec2.EC2, vpcId string) (*ec2.RouteTable, error) len(routeResp.RouteTables)) } - return &routeResp.RouteTables[0], nil + return routeResp.RouteTables[0], nil } diff --git a/builtin/providers/aws/resource_aws_main_route_table_association_test.go b/builtin/providers/aws/resource_aws_main_route_table_association_test.go index 937014cae9c8..35afeb513904 100644 --- a/builtin/providers/aws/resource_aws_main_route_table_association_test.go +++ b/builtin/providers/aws/resource_aws_main_route_table_association_test.go @@ -65,15 +65,15 @@ func testAccCheckMainRouteTableAssociation( return fmt.Errorf("Not found: %s", vpcResource) } - conn := testAccProvider.Meta().(*AWSClient).ec2conn + conn := testAccProvider.Meta().(*AWSClient).ec2SDKconn mainAssociation, err := 
findMainRouteTableAssociation(conn, vpc.Primary.ID) if err != nil { return err } - if mainAssociation.AssociationId != rs.Primary.ID { + if *mainAssociation.RouteTableAssociationID != rs.Primary.ID { return fmt.Errorf("Found wrong main association: %s", - mainAssociation.AssociationId) + *mainAssociation.RouteTableAssociationID) } return nil diff --git a/builtin/providers/aws/resource_aws_network_acl.go b/builtin/providers/aws/resource_aws_network_acl.go index efafd7ffe5b7..5e38061447c0 100644 --- a/builtin/providers/aws/resource_aws_network_acl.go +++ b/builtin/providers/aws/resource_aws_network_acl.go @@ -6,10 +6,11 @@ import ( "log" "time" + "github.com/awslabs/aws-sdk-go/aws" + "github.com/awslabs/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" - "github.com/mitchellh/goamz/ec2" ) func resourceAwsNetworkAcl() *schema.Resource { @@ -108,32 +109,34 @@ func resourceAwsNetworkAcl() *schema.Resource { func resourceAwsNetworkAclCreate(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).ec2conn + conn := meta.(*AWSClient).ec2SDKconn // Create the Network Acl - createOpts := &ec2.CreateNetworkAcl{ - VpcId: d.Get("vpc_id").(string), + createOpts := &ec2.CreateNetworkACLInput{ + VPCID: aws.String(d.Get("vpc_id").(string)), } log.Printf("[DEBUG] Network Acl create config: %#v", createOpts) - resp, err := ec2conn.CreateNetworkAcl(createOpts) + resp, err := conn.CreateNetworkACL(createOpts) if err != nil { return fmt.Errorf("Error creating network acl: %s", err) } // Get the ID and store it - networkAcl := &resp.NetworkAcl - d.SetId(networkAcl.NetworkAclId) - log.Printf("[INFO] Network Acl ID: %s", networkAcl.NetworkAclId) + networkAcl := resp.NetworkACL + d.SetId(*networkAcl.NetworkACLID) + log.Printf("[INFO] Network Acl ID: %s", *networkAcl.NetworkACLID) // Update rules and subnet association once acl is created return 
resourceAwsNetworkAclUpdate(d, meta) } func resourceAwsNetworkAclRead(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).ec2conn + conn := meta.(*AWSClient).ec2SDKconn - resp, err := ec2conn.NetworkAcls([]string{d.Id()}, ec2.NewFilter()) + resp, err := conn.DescribeNetworkACLs(&ec2.DescribeNetworkACLsInput{ + NetworkACLIDs: []*string{aws.String(d.Id())}, + }) if err != nil { return err @@ -142,40 +145,40 @@ func resourceAwsNetworkAclRead(d *schema.ResourceData, meta interface{}) error { return nil } - networkAcl := &resp.NetworkAcls[0] - var ingressEntries []ec2.NetworkAclEntry - var egressEntries []ec2.NetworkAclEntry + networkAcl := resp.NetworkACLs[0] + var ingressEntries []*ec2.NetworkACLEntry + var egressEntries []*ec2.NetworkACLEntry // separate the ingress and egress rules - for _, e := range networkAcl.EntrySet { - if e.Egress == true { + for _, e := range networkAcl.Entries { + if *e.Egress == true { egressEntries = append(egressEntries, e) } else { ingressEntries = append(ingressEntries, e) } } - d.Set("vpc_id", networkAcl.VpcId) + d.Set("vpc_id", networkAcl.VPCID) d.Set("ingress", ingressEntries) d.Set("egress", egressEntries) - d.Set("tags", tagsToMap(networkAcl.Tags)) + d.Set("tags", tagsToMapSDK(networkAcl.Tags)) return nil } func resourceAwsNetworkAclUpdate(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).ec2conn + conn := meta.(*AWSClient).ec2SDKconn d.Partial(true) if d.HasChange("ingress") { - err := updateNetworkAclEntries(d, "ingress", ec2conn) + err := updateNetworkAclEntries(d, "ingress", conn) if err != nil { return err } } if d.HasChange("egress") { - err := updateNetworkAclEntries(d, "egress", ec2conn) + err := updateNetworkAclEntries(d, "egress", conn) if err != nil { return err } @@ -186,17 +189,20 @@ func resourceAwsNetworkAclUpdate(d *schema.ResourceData, meta interface{}) error //associate new subnet with the acl. 
_, n := d.GetChange("subnet_id") newSubnet := n.(string) - association, err := findNetworkAclAssociation(newSubnet, ec2conn) + association, err := findNetworkAclAssociation(newSubnet, conn) if err != nil { return fmt.Errorf("Failed to update acl %s with subnet %s: %s", d.Id(), newSubnet, err) } - _, err = ec2conn.ReplaceNetworkAclAssociation(association.NetworkAclAssociationId, d.Id()) + _, err = conn.ReplaceNetworkACLAssociation(&ec2.ReplaceNetworkACLAssociationInput{ + AssociationID: association.NetworkACLAssociationID, + NetworkACLID: aws.String(d.Id()), + }) if err != nil { return err } } - if err := setTags(ec2conn, d); err != nil { + if err := setTagsSDK(conn, d); err != nil { return err } else { d.SetPartial("tags") @@ -206,7 +212,7 @@ func resourceAwsNetworkAclUpdate(d *schema.ResourceData, meta interface{}) error return resourceAwsNetworkAclRead(d, meta) } -func updateNetworkAclEntries(d *schema.ResourceData, entryType string, ec2conn *ec2.EC2) error { +func updateNetworkAclEntries(d *schema.ResourceData, entryType string, conn *ec2.EC2) error { o, n := d.GetChange(entryType) @@ -226,7 +232,11 @@ func updateNetworkAclEntries(d *schema.ResourceData, entryType string, ec2conn * } for _, remove := range toBeDeleted { // Delete old Acl - _, err := ec2conn.DeleteNetworkAclEntry(d.Id(), remove.RuleNumber, remove.Egress) + _, err := conn.DeleteNetworkACLEntry(&ec2.DeleteNetworkACLEntryInput{ + NetworkACLID: aws.String(d.Id()), + RuleNumber: remove.RuleNumber, + Egress: remove.Egress, + }) if err != nil { return fmt.Errorf("Error deleting %s entry: %s", entryType, err) } @@ -238,7 +248,15 @@ func updateNetworkAclEntries(d *schema.ResourceData, entryType string, ec2conn * } for _, add := range toBeCreated { // Add new Acl entry - _, err := ec2conn.CreateNetworkAclEntry(d.Id(), &add) + _, err := conn.CreateNetworkACLEntry(&ec2.CreateNetworkACLEntryInput{ + NetworkACLID: aws.String(d.Id()), + CIDRBlock: add.CIDRBlock, + Egress: add.Egress, + PortRange: add.PortRange, 
+			Protocol:     add.Protocol,
+			RuleAction:   add.RuleAction,
+			RuleNumber:   add.RuleNumber,
+		})
 		if err != nil {
 			return fmt.Errorf("Error creating %s entry: %s", entryType, err)
 		}
@@ -247,27 +265,33 @@
 }
 
 func resourceAwsNetworkAclDelete(d *schema.ResourceData, meta interface{}) error {
-	ec2conn := meta.(*AWSClient).ec2conn
+	conn := meta.(*AWSClient).ec2SDKconn
 
 	log.Printf("[INFO] Deleting Network Acl: %s", d.Id())
 
 	return resource.Retry(5*time.Minute, func() error {
-		if _, err := ec2conn.DeleteNetworkAcl(d.Id()); err != nil {
-			ec2err := err.(*ec2.Error)
+		_, err := conn.DeleteNetworkACL(&ec2.DeleteNetworkACLInput{
+			NetworkACLID: aws.String(d.Id()),
+		})
+		if err != nil {
+			// Use the comma-ok form so an unexpected error type can't panic;
+			// quit the retry loop immediately in that case.
+			ec2err, ok := err.(aws.APIError)
+			if !ok {
+				return resource.RetryError{Err: err}
+			}
 			switch ec2err.Code {
 			case "InvalidNetworkAclID.NotFound":
 				return nil
 			case "DependencyViolation":
 				// In case of dependency violation, we remove the association between subnet and network acl.
 				// This means the subnet is attached to default acl of vpc. 
- association, err := findNetworkAclAssociation(d.Get("subnet_id").(string), ec2conn) + association, err := findNetworkAclAssociation(d.Get("subnet_id").(string), conn) if err != nil { return fmt.Errorf("Dependency violation: Cannot delete acl %s: %s", d.Id(), err) } - defaultAcl, err := getDefaultNetworkAcl(d.Get("vpc_id").(string), ec2conn) + defaultAcl, err := getDefaultNetworkAcl(d.Get("vpc_id").(string), conn) if err != nil { return fmt.Errorf("Dependency violation: Cannot delete acl %s: %s", d.Id(), err) } - _, err = ec2conn.ReplaceNetworkAclAssociation(association.NetworkAclAssociationId, defaultAcl.NetworkAclId) + _, err = conn.ReplaceNetworkACLAssociation(&ec2.ReplaceNetworkACLAssociationInput{ + AssociationID: association.NetworkACLAssociationID, + NetworkACLID: defaultAcl.NetworkACLID, + }) return resource.RetryError{Err: err} default: // Any other error, we want to quit the retry loop immediately @@ -296,31 +320,44 @@ func resourceAwsNetworkAclEntryHash(v interface{}) int { return hashcode.String(buf.String()) } -func getDefaultNetworkAcl(vpc_id string, ec2conn *ec2.EC2) (defaultAcl *ec2.NetworkAcl, err error) { - filter := ec2.NewFilter() - filter.Add("default", "true") - filter.Add("vpc-id", vpc_id) - - resp, err := ec2conn.NetworkAcls([]string{}, filter) +func getDefaultNetworkAcl(vpc_id string, conn *ec2.EC2) (defaultAcl *ec2.NetworkACL, err error) { + resp, err := conn.DescribeNetworkACLs(&ec2.DescribeNetworkACLsInput{ + NetworkACLIDs: []*string{}, + Filters: []*ec2.Filter{ + &ec2.Filter{ + Name: aws.String("default"), + Values: []*string{aws.String("true")}, + }, + &ec2.Filter{ + Name: aws.String("vpc-id"), + Values: []*string{aws.String(vpc_id)}, + }, + }, + }) if err != nil { return nil, err } - return &resp.NetworkAcls[0], nil + return resp.NetworkACLs[0], nil } -func findNetworkAclAssociation(subnetId string, ec2conn *ec2.EC2) (networkAclAssociation *ec2.NetworkAclAssociation, err error) { - filter := ec2.NewFilter() - 
filter.Add("association.subnet-id", subnetId) - - resp, err := ec2conn.NetworkAcls([]string{}, filter) +func findNetworkAclAssociation(subnetId string, conn *ec2.EC2) (networkAclAssociation *ec2.NetworkACLAssociation, err error) { + resp, err := conn.DescribeNetworkACLs(&ec2.DescribeNetworkACLsInput{ + NetworkACLIDs: []*string{}, + Filters: []*ec2.Filter{ + &ec2.Filter{ + Name: aws.String("association.subnet-id"), + Values: []*string{aws.String(subnetId)}, + }, + }, + }) if err != nil { return nil, err } - for _, association := range resp.NetworkAcls[0].AssociationSet { - if association.SubnetId == subnetId { - return &association, nil + for _, association := range resp.NetworkACLs[0].Associations { + if *association.SubnetID == subnetId { + return association, nil } } return nil, fmt.Errorf("could not find association for subnet %s ", subnetId) diff --git a/builtin/providers/aws/resource_aws_network_acl_test.go b/builtin/providers/aws/resource_aws_network_acl_test.go index 939e8633e08b..c20c30d309ce 100644 --- a/builtin/providers/aws/resource_aws_network_acl_test.go +++ b/builtin/providers/aws/resource_aws_network_acl_test.go @@ -4,15 +4,16 @@ import ( "fmt" "testing" + "github.com/hashicorp/aws-sdk-go/aws" + "github.com/hashicorp/aws-sdk-go/gen/ec2" "github.com/hashicorp/terraform/terraform" - "github.com/mitchellh/goamz/ec2" // "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/resource" // "github.com/hashicorp/terraform/helper/schema" ) -func TestAccAWSNetworkAclsWithEgressAndIngressRules(t *testing.T) { - var networkAcl ec2.NetworkAcl +func TestAccAWSNetworkAcl_EgressAndIngressRules(t *testing.T) { + var networkAcl ec2.NetworkACL resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -24,37 +25,37 @@ func TestAccAWSNetworkAclsWithEgressAndIngressRules(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckAWSNetworkAclExists("aws_network_acl.bar", &networkAcl), 
resource.TestCheckResourceAttr( - "aws_network_acl.bar", "ingress.580214135.protocol", "tcp"), + "aws_network_acl.bar", "ingress.3409203205.protocol", "tcp"), resource.TestCheckResourceAttr( - "aws_network_acl.bar", "ingress.580214135.rule_no", "1"), + "aws_network_acl.bar", "ingress.3409203205.rule_no", "1"), resource.TestCheckResourceAttr( - "aws_network_acl.bar", "ingress.580214135.from_port", "80"), + "aws_network_acl.bar", "ingress.3409203205.from_port", "80"), resource.TestCheckResourceAttr( - "aws_network_acl.bar", "ingress.580214135.to_port", "80"), + "aws_network_acl.bar", "ingress.3409203205.to_port", "80"), resource.TestCheckResourceAttr( - "aws_network_acl.bar", "ingress.580214135.action", "allow"), + "aws_network_acl.bar", "ingress.3409203205.action", "allow"), resource.TestCheckResourceAttr( - "aws_network_acl.bar", "ingress.580214135.cidr_block", "10.3.10.3/18"), + "aws_network_acl.bar", "ingress.3409203205.cidr_block", "10.3.10.3/18"), resource.TestCheckResourceAttr( - "aws_network_acl.bar", "egress.1730430240.protocol", "tcp"), + "aws_network_acl.bar", "egress.2579689292.protocol", "tcp"), resource.TestCheckResourceAttr( - "aws_network_acl.bar", "egress.1730430240.rule_no", "2"), + "aws_network_acl.bar", "egress.2579689292.rule_no", "2"), resource.TestCheckResourceAttr( - "aws_network_acl.bar", "egress.1730430240.from_port", "443"), + "aws_network_acl.bar", "egress.2579689292.from_port", "443"), resource.TestCheckResourceAttr( - "aws_network_acl.bar", "egress.1730430240.to_port", "443"), + "aws_network_acl.bar", "egress.2579689292.to_port", "443"), resource.TestCheckResourceAttr( - "aws_network_acl.bar", "egress.1730430240.cidr_block", "10.3.2.3/18"), + "aws_network_acl.bar", "egress.2579689292.cidr_block", "10.3.2.3/18"), resource.TestCheckResourceAttr( - "aws_network_acl.bar", "egress.1730430240.action", "allow"), + "aws_network_acl.bar", "egress.2579689292.action", "allow"), ), }, }, }) } -func TestAccAWSNetworkAclsOnlyIngressRules(t *testing.T) 
{ - var networkAcl ec2.NetworkAcl +func TestAccAWSNetworkAcl_OnlyIngressRules(t *testing.T) { + var networkAcl ec2.NetworkACL resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -67,25 +68,25 @@ func TestAccAWSNetworkAclsOnlyIngressRules(t *testing.T) { testAccCheckAWSNetworkAclExists("aws_network_acl.foos", &networkAcl), // testAccCheckSubnetAssociation("aws_network_acl.foos", "aws_subnet.blob"), resource.TestCheckResourceAttr( - "aws_network_acl.foos", "ingress.3697634361.protocol", "tcp"), + "aws_network_acl.foos", "ingress.2750166237.protocol", "tcp"), resource.TestCheckResourceAttr( - "aws_network_acl.foos", "ingress.3697634361.rule_no", "1"), + "aws_network_acl.foos", "ingress.2750166237.rule_no", "2"), resource.TestCheckResourceAttr( - "aws_network_acl.foos", "ingress.3697634361.from_port", "0"), + "aws_network_acl.foos", "ingress.2750166237.from_port", "443"), resource.TestCheckResourceAttr( - "aws_network_acl.foos", "ingress.3697634361.to_port", "22"), + "aws_network_acl.foos", "ingress.2750166237.to_port", "443"), resource.TestCheckResourceAttr( - "aws_network_acl.foos", "ingress.3697634361.action", "deny"), + "aws_network_acl.foos", "ingress.2750166237.action", "deny"), resource.TestCheckResourceAttr( - "aws_network_acl.foos", "ingress.3697634361.cidr_block", "10.2.2.3/18"), + "aws_network_acl.foos", "ingress.2750166237.cidr_block", "10.2.2.3/18"), ), }, }, }) } -func TestAccAWSNetworkAclsOnlyIngressRulesChange(t *testing.T) { - var networkAcl ec2.NetworkAcl +func TestAccAWSNetworkAcl_OnlyIngressRulesChange(t *testing.T) { + var networkAcl ec2.NetworkACL resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -98,21 +99,21 @@ func TestAccAWSNetworkAclsOnlyIngressRulesChange(t *testing.T) { testAccCheckAWSNetworkAclExists("aws_network_acl.foos", &networkAcl), testIngressRuleLength(&networkAcl, 2), resource.TestCheckResourceAttr( - "aws_network_acl.foos", "ingress.3697634361.protocol", "tcp"), + 
"aws_network_acl.foos", "ingress.37211640.protocol", "tcp"), resource.TestCheckResourceAttr( - "aws_network_acl.foos", "ingress.3697634361.rule_no", "1"), + "aws_network_acl.foos", "ingress.37211640.rule_no", "1"), resource.TestCheckResourceAttr( - "aws_network_acl.foos", "ingress.3697634361.from_port", "0"), + "aws_network_acl.foos", "ingress.37211640.from_port", "0"), resource.TestCheckResourceAttr( - "aws_network_acl.foos", "ingress.3697634361.to_port", "22"), + "aws_network_acl.foos", "ingress.37211640.to_port", "22"), resource.TestCheckResourceAttr( - "aws_network_acl.foos", "ingress.3697634361.action", "deny"), + "aws_network_acl.foos", "ingress.37211640.action", "deny"), resource.TestCheckResourceAttr( - "aws_network_acl.foos", "ingress.3697634361.cidr_block", "10.2.2.3/18"), + "aws_network_acl.foos", "ingress.37211640.cidr_block", "10.2.2.3/18"), resource.TestCheckResourceAttr( - "aws_network_acl.foos", "ingress.2438803013.from_port", "443"), + "aws_network_acl.foos", "ingress.2750166237.from_port", "443"), resource.TestCheckResourceAttr( - "aws_network_acl.foos", "ingress.2438803013.rule_no", "2"), + "aws_network_acl.foos", "ingress.2750166237.rule_no", "2"), ), }, resource.TestStep{ @@ -121,25 +122,25 @@ func TestAccAWSNetworkAclsOnlyIngressRulesChange(t *testing.T) { testAccCheckAWSNetworkAclExists("aws_network_acl.foos", &networkAcl), testIngressRuleLength(&networkAcl, 1), resource.TestCheckResourceAttr( - "aws_network_acl.foos", "ingress.3697634361.protocol", "tcp"), + "aws_network_acl.foos", "ingress.37211640.protocol", "tcp"), resource.TestCheckResourceAttr( - "aws_network_acl.foos", "ingress.3697634361.rule_no", "1"), + "aws_network_acl.foos", "ingress.37211640.rule_no", "1"), resource.TestCheckResourceAttr( - "aws_network_acl.foos", "ingress.3697634361.from_port", "0"), + "aws_network_acl.foos", "ingress.37211640.from_port", "0"), resource.TestCheckResourceAttr( - "aws_network_acl.foos", "ingress.3697634361.to_port", "22"), + 
"aws_network_acl.foos", "ingress.37211640.to_port", "22"), resource.TestCheckResourceAttr( - "aws_network_acl.foos", "ingress.3697634361.action", "deny"), + "aws_network_acl.foos", "ingress.37211640.action", "deny"), resource.TestCheckResourceAttr( - "aws_network_acl.foos", "ingress.3697634361.cidr_block", "10.2.2.3/18"), + "aws_network_acl.foos", "ingress.37211640.cidr_block", "10.2.2.3/18"), ), }, }, }) } -func TestAccAWSNetworkAclsOnlyEgressRules(t *testing.T) { - var networkAcl ec2.NetworkAcl +func TestAccAWSNetworkAcl_OnlyEgressRules(t *testing.T) { + var networkAcl ec2.NetworkACL resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -157,7 +158,7 @@ func TestAccAWSNetworkAclsOnlyEgressRules(t *testing.T) { }) } -func TestAccNetworkAcl_SubnetChange(t *testing.T) { +func TestAccAWSNetworkAcl_SubnetChange(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -191,16 +192,18 @@ func testAccCheckAWSNetworkAclDestroy(s *terraform.State) error { } // Retrieve the network acl - resp, err := conn.NetworkAcls([]string{rs.Primary.ID}, ec2.NewFilter()) + resp, err := conn.DescribeNetworkACLs(&ec2.DescribeNetworkACLsRequest{ + NetworkACLIDs: []string{rs.Primary.ID}, + }) if err == nil { - if len(resp.NetworkAcls) > 0 && resp.NetworkAcls[0].NetworkAclId == rs.Primary.ID { + if len(resp.NetworkACLs) > 0 && *resp.NetworkACLs[0].NetworkACLID == rs.Primary.ID { return fmt.Errorf("Network Acl (%s) still exists.", rs.Primary.ID) } return nil } - ec2err, ok := err.(*ec2.Error) + ec2err, ok := err.(aws.APIError) if !ok { return err } @@ -213,7 +216,7 @@ func testAccCheckAWSNetworkAclDestroy(s *terraform.State) error { return nil } -func testAccCheckAWSNetworkAclExists(n string, networkAcl *ec2.NetworkAcl) resource.TestCheckFunc { +func testAccCheckAWSNetworkAclExists(n string, networkAcl *ec2.NetworkACL) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { 
@@ -225,13 +228,15 @@ func testAccCheckAWSNetworkAclExists(n string, networkAcl *ec2.NetworkAcl) resou } conn := testAccProvider.Meta().(*AWSClient).ec2conn - resp, err := conn.NetworkAcls([]string{rs.Primary.ID}, nil) + resp, err := conn.DescribeNetworkACLs(&ec2.DescribeNetworkACLsRequest{ + NetworkACLIDs: []string{rs.Primary.ID}, + }) if err != nil { return err } - if len(resp.NetworkAcls) > 0 && resp.NetworkAcls[0].NetworkAclId == rs.Primary.ID { - *networkAcl = resp.NetworkAcls[0] + if len(resp.NetworkACLs) > 0 && *resp.NetworkACLs[0].NetworkACLID == rs.Primary.ID { + *networkAcl = resp.NetworkACLs[0] return nil } @@ -239,11 +244,11 @@ func testAccCheckAWSNetworkAclExists(n string, networkAcl *ec2.NetworkAcl) resou } } -func testIngressRuleLength(networkAcl *ec2.NetworkAcl, length int) resource.TestCheckFunc { +func testIngressRuleLength(networkAcl *ec2.NetworkACL, length int) resource.TestCheckFunc { return func(s *terraform.State) error { - var ingressEntries []ec2.NetworkAclEntry - for _, e := range networkAcl.EntrySet { - if e.Egress == false { + var ingressEntries []ec2.NetworkACLEntry + for _, e := range networkAcl.Entries { + if *e.Egress == false { ingressEntries = append(ingressEntries, e) } } @@ -262,20 +267,25 @@ func testAccCheckSubnetIsAssociatedWithAcl(acl string, sub string) resource.Test subnet := s.RootModule().Resources[sub] conn := testAccProvider.Meta().(*AWSClient).ec2conn - filter := ec2.NewFilter() - filter.Add("association.subnet-id", subnet.Primary.ID) - resp, err := conn.NetworkAcls([]string{networkAcl.Primary.ID}, filter) - + resp, err := conn.DescribeNetworkACLs(&ec2.DescribeNetworkACLsRequest{ + NetworkACLIDs: []string{networkAcl.Primary.ID}, + Filters: []ec2.Filter{ + ec2.Filter{ + Name: aws.String("association.subnet-id"), + Values: []string{subnet.Primary.ID}, + }, + }, + }) if err != nil { return err } - if len(resp.NetworkAcls) > 0 { + if len(resp.NetworkACLs) > 0 { return nil } - r, _ := conn.NetworkAcls([]string{}, 
ec2.NewFilter()) - fmt.Printf("\n\nall acls\n %#v\n\n", r.NetworkAcls) - conn.NetworkAcls([]string{}, filter) + // r, _ := conn.NetworkACLs([]string{}, ec2.NewFilter()) + // fmt.Printf("\n\nall acls\n %#v\n\n", r.NetworkAcls) + // conn.NetworkAcls([]string{}, filter) return fmt.Errorf("Network Acl %s is not associated with subnet %s", acl, sub) } @@ -287,14 +297,20 @@ func testAccCheckSubnetIsNotAssociatedWithAcl(acl string, subnet string) resourc subnet := s.RootModule().Resources[subnet] conn := testAccProvider.Meta().(*AWSClient).ec2conn - filter := ec2.NewFilter() - filter.Add("association.subnet-id", subnet.Primary.ID) - resp, err := conn.NetworkAcls([]string{networkAcl.Primary.ID}, filter) + resp, err := conn.DescribeNetworkACLs(&ec2.DescribeNetworkACLsRequest{ + NetworkACLIDs: []string{networkAcl.Primary.ID}, + Filters: []ec2.Filter{ + ec2.Filter{ + Name: aws.String("association.subnet-id"), + Values: []string{subnet.Primary.ID}, + }, + }, + }) if err != nil { return err } - if len(resp.NetworkAcls) > 0 { + if len(resp.NetworkACLs) > 0 { return fmt.Errorf("Network Acl %s is still associated with subnet %s", acl, subnet) } return nil diff --git a/builtin/providers/aws/resource_aws_network_interface.go b/builtin/providers/aws/resource_aws_network_interface.go new file mode 100644 index 000000000000..aebe635913ef --- /dev/null +++ b/builtin/providers/aws/resource_aws_network_interface.go @@ -0,0 +1,270 @@ +package aws + +import ( + "bytes" + "fmt" + "log" + "strconv" + "time" + + "github.com/awslabs/aws-sdk-go/aws" + "github.com/awslabs/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsNetworkInterface() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsNetworkInterfaceCreate, + Read: resourceAwsNetworkInterfaceRead, + Update: resourceAwsNetworkInterfaceUpdate, + Delete: 
resourceAwsNetworkInterfaceDelete, + + Schema: map[string]*schema.Schema{ + + "subnet_id": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "private_ips": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: func(v interface{}) int { + return hashcode.String(v.(string)) + }, + }, + + "security_groups": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: func(v interface{}) int { + return hashcode.String(v.(string)) + }, + }, + + "attachment": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "instance": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + "device_index": &schema.Schema{ + Type: schema.TypeInt, + Required: true, + }, + "attachment_id": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + }, + }, + Set: resourceAwsEniAttachmentHash, + }, + + "tags": tagsSchema(), + }, + } +} + +func resourceAwsNetworkInterfaceCreate(d *schema.ResourceData, meta interface{}) error { + + conn := meta.(*AWSClient).ec2SDKconn + + request := &ec2.CreateNetworkInterfaceInput{ + Groups: expandStringListSDK(d.Get("security_groups").(*schema.Set).List()), + SubnetID: aws.String(d.Get("subnet_id").(string)), + PrivateIPAddresses: expandPrivateIPAddessesSDK(d.Get("private_ips").(*schema.Set).List()), + } + + log.Printf("[DEBUG] Creating network interface") + resp, err := conn.CreateNetworkInterface(request) + if err != nil { + return fmt.Errorf("Error creating ENI: %s", err) + } + + d.SetId(*resp.NetworkInterface.NetworkInterfaceID) + log.Printf("[INFO] ENI ID: %s", d.Id()) + return resourceAwsNetworkInterfaceUpdate(d, meta) +} + +func resourceAwsNetworkInterfaceRead(d *schema.ResourceData, meta interface{}) error { + + conn := meta.(*AWSClient).ec2SDKconn + 
describe_network_interfaces_request := &ec2.DescribeNetworkInterfacesInput{ + NetworkInterfaceIDs: []*string{aws.String(d.Id())}, + } + describeResp, err := conn.DescribeNetworkInterfaces(describe_network_interfaces_request) + + if err != nil { + if ec2err, ok := err.(aws.APIError); ok && ec2err.Code == "InvalidNetworkInterfaceID.NotFound" { + // The ENI is gone now, so just remove it from the state + d.SetId("") + return nil + } + + return fmt.Errorf("Error retrieving ENI: %s", err) + } + if len(describeResp.NetworkInterfaces) != 1 { + return fmt.Errorf("Unable to find ENI: %#v", describeResp.NetworkInterfaces) + } + + eni := describeResp.NetworkInterfaces[0] + d.Set("subnet_id", eni.SubnetID) + d.Set("private_ips", flattenNetworkInterfacesPrivateIPAddessesSDK(eni.PrivateIPAddresses)) + d.Set("security_groups", flattenGroupIdentifiersSDK(eni.Groups)) + + // Tags + d.Set("tags", tagsToMapSDK(eni.TagSet)) + + if eni.Attachment != nil { + attachment := []map[string]interface{}{flattenAttachmentSDK(eni.Attachment)} + d.Set("attachment", attachment) + } else { + d.Set("attachment", nil) + } + + return nil +} + +func networkInterfaceAttachmentRefreshFunc(conn *ec2.EC2, id string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + + describe_network_interfaces_request := &ec2.DescribeNetworkInterfacesInput{ + NetworkInterfaceIDs: []*string{aws.String(id)}, + } + describeResp, err := conn.DescribeNetworkInterfaces(describe_network_interfaces_request) + + if err != nil { + log.Printf("[ERROR] Could not find network interface %s. 
%s", id, err) + return nil, "", err + } + + eni := describeResp.NetworkInterfaces[0] + hasAttachment := strconv.FormatBool(eni.Attachment != nil) + log.Printf("[DEBUG] ENI %s has attachment state %s", id, hasAttachment) + return eni, hasAttachment, nil + } +} + +func resourceAwsNetworkInterfaceDetach(oa *schema.Set, meta interface{}, eniId string) error { + // if there was an old attachment, remove it + if oa != nil && len(oa.List()) > 0 { + old_attachment := oa.List()[0].(map[string]interface{}) + detach_request := &ec2.DetachNetworkInterfaceInput{ + AttachmentID: aws.String(old_attachment["attachment_id"].(string)), + Force: aws.Boolean(true), + } + conn := meta.(*AWSClient).ec2SDKconn + _, detach_err := conn.DetachNetworkInterface(detach_request) + if detach_err != nil { + return fmt.Errorf("Error detaching ENI: %s", detach_err) + } + + log.Printf("[DEBUG] Waiting for ENI (%s) to become detached", eniId) + stateConf := &resource.StateChangeConf{ + Pending: []string{"true"}, + Target: "false", + Refresh: networkInterfaceAttachmentRefreshFunc(conn, eniId), + Timeout: 10 * time.Minute, + } + if _, err := stateConf.WaitForState(); err != nil { + return fmt.Errorf( + "Error waiting for ENI (%s) to become detached: %s", eniId, err) + } + } + + return nil +} + +func resourceAwsNetworkInterfaceUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2SDKconn + d.Partial(true) + + if d.HasChange("attachment") { + oa, na := d.GetChange("attachment") + + detach_err := resourceAwsNetworkInterfaceDetach(oa.(*schema.Set), meta, d.Id()) + if detach_err != nil { + return detach_err + } + + // if there is a new attachment, attach it + if na != nil && len(na.(*schema.Set).List()) > 0 { + new_attachment := na.(*schema.Set).List()[0].(map[string]interface{}) + di := new_attachment["device_index"].(int) + attach_request := &ec2.AttachNetworkInterfaceInput{ + DeviceIndex: aws.Long(int64(di)), + InstanceID:
aws.String(new_attachment["instance"].(string)), + NetworkInterfaceID: aws.String(d.Id()), + } + _, attach_err := conn.AttachNetworkInterface(attach_request) + if attach_err != nil { + return fmt.Errorf("Error attaching ENI: %s", attach_err) + } + } + + d.SetPartial("attachment") + } + + if d.HasChange("security_groups") { + request := &ec2.ModifyNetworkInterfaceAttributeInput{ + NetworkInterfaceID: aws.String(d.Id()), + Groups: expandStringListSDK(d.Get("security_groups").(*schema.Set).List()), + } + + _, err := conn.ModifyNetworkInterfaceAttribute(request) + if err != nil { + return fmt.Errorf("Failure updating ENI: %s", err) + } + + d.SetPartial("security_groups") + } + + if err := setTagsSDK(conn, d); err != nil { + return err + } else { + d.SetPartial("tags") + } + + d.Partial(false) + + return resourceAwsNetworkInterfaceRead(d, meta) +} + +func resourceAwsNetworkInterfaceDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2SDKconn + + log.Printf("[INFO] Deleting ENI: %s", d.Id()) + + detach_err := resourceAwsNetworkInterfaceDetach(d.Get("attachment").(*schema.Set), meta, d.Id()) + if detach_err != nil { + return detach_err + } + + deleteEniOpts := ec2.DeleteNetworkInterfaceInput{ + NetworkInterfaceID: aws.String(d.Id()), + } + if _, err := conn.DeleteNetworkInterface(&deleteEniOpts); err != nil { + return fmt.Errorf("Error deleting ENI: %s", err) + } + + return nil +} + +func resourceAwsEniAttachmentHash(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + buf.WriteString(fmt.Sprintf("%s-", m["instance"].(string))) + buf.WriteString(fmt.Sprintf("%d-", m["device_index"].(int))) + return hashcode.String(buf.String()) +} diff --git a/builtin/providers/aws/resource_aws_network_interface_test.go b/builtin/providers/aws/resource_aws_network_interface_test.go new file mode 100644 index 000000000000..0f486df0b354 --- /dev/null +++ b/builtin/providers/aws/resource_aws_network_interface_test.go @@ -0,0 
+1,251 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/awslabs/aws-sdk-go/aws" + "github.com/awslabs/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSENI_basic(t *testing.T) { + var conf ec2.NetworkInterface + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSENIDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSENIConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSENIExists("aws_network_interface.bar", &conf), + testAccCheckAWSENIAttributes(&conf), + resource.TestCheckResourceAttr( + "aws_network_interface.bar", "private_ips.#", "1"), + resource.TestCheckResourceAttr( + "aws_network_interface.bar", "tags.Name", "bar_interface"), + ), + }, + }, + }) +} + +func TestAccAWSENI_attached(t *testing.T) { + var conf ec2.NetworkInterface + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSENIDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccAWSENIConfigWithAttachment, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSENIExists("aws_network_interface.bar", &conf), + testAccCheckAWSENIAttributesWithAttachment(&conf), + resource.TestCheckResourceAttr( + "aws_network_interface.bar", "private_ips.#", "1"), + resource.TestCheckResourceAttr( + "aws_network_interface.bar", "tags.Name", "bar_interface"), + ), + }, + }, + }) +} + +func testAccCheckAWSENIExists(n string, res *ec2.NetworkInterface) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ENI ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).ec2SDKconn + 
describe_network_interfaces_request := &ec2.DescribeNetworkInterfacesInput{ + NetworkInterfaceIDs: []*string{aws.String(rs.Primary.ID)}, + } + describeResp, err := conn.DescribeNetworkInterfaces(describe_network_interfaces_request) + + if err != nil { + return err + } + + if len(describeResp.NetworkInterfaces) != 1 || + *describeResp.NetworkInterfaces[0].NetworkInterfaceID != rs.Primary.ID { + return fmt.Errorf("ENI not found") + } + + *res = *describeResp.NetworkInterfaces[0] + + return nil + } +} + +func testAccCheckAWSENIAttributes(conf *ec2.NetworkInterface) resource.TestCheckFunc { + return func(s *terraform.State) error { + + if conf.Attachment != nil { + return fmt.Errorf("expected attachment to be nil") + } + + if *conf.AvailabilityZone != "us-west-2a" { + return fmt.Errorf("expected availability_zone to be us-west-2a, but was %s", *conf.AvailabilityZone) + } + + if len(conf.Groups) != 1 && *conf.Groups[0].GroupName != "foo" { + return fmt.Errorf("expected security group to be foo, but was %#v", conf.Groups) + } + + if *conf.PrivateIPAddress != "172.16.10.100" { + return fmt.Errorf("expected private ip to be 172.16.10.100, but was %s", *conf.PrivateIPAddress) + } + + if len(conf.TagSet) == 0 { + return fmt.Errorf("expected tags") + } + + return nil + } +} + +func testAccCheckAWSENIAttributesWithAttachment(conf *ec2.NetworkInterface) resource.TestCheckFunc { + return func(s *terraform.State) error { + + if conf.Attachment == nil { + return fmt.Errorf("expected attachment to be set, but was nil") + } + + if *conf.Attachment.DeviceIndex != 1 { + return fmt.Errorf("expected attachment device index to be 1, but was %d", *conf.Attachment.DeviceIndex) + } + + if *conf.AvailabilityZone != "us-west-2a" { + return fmt.Errorf("expected availability_zone to be us-west-2a, but was %s", *conf.AvailabilityZone) + } + + if len(conf.Groups) != 1 && *conf.Groups[0].GroupName != "foo" { + return fmt.Errorf("expected security group to be foo, but was %#v", conf.Groups) + } + + 
if *conf.PrivateIPAddress != "172.16.10.100" { + return fmt.Errorf("expected private ip to be 172.16.10.100, but was %s", *conf.PrivateIPAddress) + } + + return nil + } +} + +func testAccCheckAWSENIDestroy(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_network_interface" { + continue + } + + conn := testAccProvider.Meta().(*AWSClient).ec2SDKconn + describe_network_interfaces_request := &ec2.DescribeNetworkInterfacesInput{ + NetworkInterfaceIDs: []*string{aws.String(rs.Primary.ID)}, + } + _, err := conn.DescribeNetworkInterfaces(describe_network_interfaces_request) + + if err != nil { + if ec2err, ok := err.(aws.APIError); ok && ec2err.Code == "InvalidNetworkInterfaceID.NotFound" { + return nil + } + + return err + } + } + + return nil +} + +const testAccAWSENIConfig = ` +resource "aws_vpc" "foo" { + cidr_block = "172.16.0.0/16" +} + +resource "aws_subnet" "foo" { + vpc_id = "${aws_vpc.foo.id}" + cidr_block = "172.16.10.0/24" + availability_zone = "us-west-2a" +} + +resource "aws_security_group" "foo" { + vpc_id = "${aws_vpc.foo.id}" + description = "foo" + name = "foo" +} + +resource "aws_network_interface" "bar" { + subnet_id = "${aws_subnet.foo.id}" + private_ips = ["172.16.10.100"] + security_groups = ["${aws_security_group.foo.id}"] + tags { + Name = "bar_interface" + } +} +` + +const testAccAWSENIConfigWithAttachment = ` +resource "aws_vpc" "foo" { + cidr_block = "172.16.0.0/16" + tags { + Name = "tf-eni-test" + } +} + +resource "aws_subnet" "foo" { + vpc_id = "${aws_vpc.foo.id}" + cidr_block = "172.16.10.0/24" + availability_zone = "us-west-2a" + tags { + Name = "tf-eni-test" + } +} + +resource "aws_subnet" "bar" { + vpc_id = "${aws_vpc.foo.id}" + cidr_block = "172.16.11.0/24" + availability_zone = "us-west-2a" + tags { + Name = "tf-eni-test" + } +} + +resource "aws_security_group" "foo" { + vpc_id = "${aws_vpc.foo.id}" + description = "foo" + name = "foo" +} + +resource "aws_instance" "foo" { + ami = 
"ami-c5eabbf5" + instance_type = "t2.micro" + subnet_id = "${aws_subnet.bar.id}" + associate_public_ip_address = false + private_ip = "172.16.11.50" + tags { + Name = "tf-eni-test" + } +} + +resource "aws_network_interface" "bar" { + subnet_id = "${aws_subnet.foo.id}" + private_ips = ["172.16.10.100"] + security_groups = ["${aws_security_group.foo.id}"] + attachment { + instance = "${aws_instance.foo.id}" + device_index = 1 + } + tags { + Name = "bar_interface" + } +} +` diff --git a/builtin/providers/aws/resource_aws_route53_record.go b/builtin/providers/aws/resource_aws_route53_record.go index 96c7608fea00..24421849f0ec 100644 --- a/builtin/providers/aws/resource_aws_route53_record.go +++ b/builtin/providers/aws/resource_aws_route53_record.go @@ -18,6 +18,7 @@ func resourceAwsRoute53Record() *schema.Resource { return &schema.Resource{ Create: resourceAwsRoute53RecordCreate, Read: resourceAwsRoute53RecordRead, + Update: resourceAwsRoute53RecordUpdate, Delete: resourceAwsRoute53RecordDelete, Schema: map[string]*schema.Schema{ @@ -42,14 +43,12 @@ func resourceAwsRoute53Record() *schema.Resource { "ttl": &schema.Schema{ Type: schema.TypeInt, Required: true, - ForceNew: true, }, "records": &schema.Schema{ Type: schema.TypeSet, Elem: &schema.Schema{Type: schema.TypeString}, Required: true, - ForceNew: true, Set: func(v interface{}) int { return hashcode.String(v.(string)) }, @@ -58,26 +57,30 @@ func resourceAwsRoute53Record() *schema.Resource { } } +func resourceAwsRoute53RecordUpdate(d *schema.ResourceData, meta interface{}) error { + // Route 53 supports CREATE, DELETE, and UPSERT actions. We use UPSERT, and + // AWS dynamically determines if a record should be created or updated. + // Amazon Route 53 can update an existing resource record set only when all + // of the following values match: Name, Type + // (and SetIdentifier, which we don't use yet). 
+ // See http://docs.aws.amazon.com/Route53/latest/APIReference/API_ChangeResourceRecordSets_Requests.html#change-rrsets-request-action + // + // Because we use UPSERT, for resource update here we simply fall through to + // our resource create function. + return resourceAwsRoute53RecordCreate(d, meta) +} + func resourceAwsRoute53RecordCreate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).r53conn - zone := d.Get("zone_id").(string) + zone := cleanZoneID(d.Get("zone_id").(string)) zoneRecord, err := conn.GetHostedZone(&route53.GetHostedZoneRequest{ID: aws.String(zone)}) if err != nil { return err } - // Check if the current record name contains the zone suffix. - // If it does not, add the zone name to form a fully qualified name - // and keep AWS happy. - recordName := d.Get("name").(string) - zoneName := strings.Trim(*zoneRecord.HostedZone.Name, ".") - if !strings.HasSuffix(recordName, zoneName) { - d.Set("name", strings.Join([]string{recordName, zoneName}, ".")) - } - // Get the record - rec, err := resourceAwsRoute53RecordBuildSet(d) + rec, err := resourceAwsRoute53RecordBuildSet(d, *zoneRecord.HostedZone.Name) if err != nil { return err } @@ -101,7 +104,7 @@ func resourceAwsRoute53RecordCreate(d *schema.ResourceData, meta interface{}) er } log.Printf("[DEBUG] Creating resource records for zone: %s, name: %s", - zone, d.Get("name").(string)) + zone, *rec.Name) wait := resource.StateChangeConf{ Pending: []string{"rejected"}, @@ -111,10 +114,12 @@ func resourceAwsRoute53RecordCreate(d *schema.ResourceData, meta interface{}) er Refresh: func() (interface{}, string, error) { resp, err := conn.ChangeResourceRecordSets(req) if err != nil { - if strings.Contains(err.Error(), "PriorRequestNotComplete") { - // There is some pending operation, so just retry - // in a bit.
- return nil, "rejected", nil + if r53err, ok := err.(aws.APIError); ok { + if r53err.Code == "PriorRequestNotComplete" { + // There is some pending operation, so just retry + // in a bit. + return nil, "rejected", nil + } } return nil, "failure", err @@ -138,7 +143,7 @@ func resourceAwsRoute53RecordCreate(d *schema.ResourceData, meta interface{}) er Delay: 30 * time.Second, Pending: []string{"PENDING"}, Target: "INSYNC", - Timeout: 10 * time.Minute, + Timeout: 30 * time.Minute, MinTimeout: 5 * time.Second, Refresh: func() (result interface{}, state string, err error) { changeRequest := &route53.GetChangeRequest{ @@ -158,10 +163,18 @@ func resourceAwsRoute53RecordCreate(d *schema.ResourceData, meta interface{}) er func resourceAwsRoute53RecordRead(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).r53conn - zone := d.Get("zone_id").(string) + zone := cleanZoneID(d.Get("zone_id").(string)) + + // get expanded name + zoneRecord, err := conn.GetHostedZone(&route53.GetHostedZoneRequest{ID: aws.String(zone)}) + if err != nil { + return err + } + en := expandRecordName(d.Get("name").(string), *zoneRecord.HostedZone.Name) + lopts := &route53.ListResourceRecordSetsRequest{ HostedZoneID: aws.String(cleanZoneID(zone)), - StartRecordName: aws.String(d.Get("name").(string)), + StartRecordName: aws.String(en), StartRecordType: aws.String(d.Get("type").(string)), } @@ -173,7 +186,8 @@ func resourceAwsRoute53RecordRead(d *schema.ResourceData, meta interface{}) erro // Scan for a matching record found := false for _, record := range resp.ResourceRecordSets { - if FQDN(*record.Name) != FQDN(*lopts.StartRecordName) { + name := cleanRecordName(*record.Name) + if FQDN(name) != FQDN(*lopts.StartRecordName) { continue } if strings.ToUpper(*record.Type) != strings.ToUpper(*lopts.StartRecordType) { @@ -182,7 +196,10 @@ func resourceAwsRoute53RecordRead(d *schema.ResourceData, meta interface{}) erro found = true - d.Set("records", record.ResourceRecords) + err := 
d.Set("records", flattenResourceRecords(record.ResourceRecords)) + if err != nil { + return fmt.Errorf("[DEBUG] Error setting records for: %s, error: %#v", en, err) + } d.Set("ttl", record.TTL) break @@ -198,12 +215,15 @@ func resourceAwsRoute53RecordRead(d *schema.ResourceData, meta interface{}) erro func resourceAwsRoute53RecordDelete(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).r53conn - zone := d.Get("zone_id").(string) + zone := cleanZoneID(d.Get("zone_id").(string)) log.Printf("[DEBUG] Deleting resource records for zone: %s, name: %s", zone, d.Get("name").(string)) - + zoneRecord, err := conn.GetHostedZone(&route53.GetHostedZoneRequest{ID: aws.String(zone)}) + if err != nil { + return err + } // Get the records - rec, err := resourceAwsRoute53RecordBuildSet(d) + rec, err := resourceAwsRoute53RecordBuildSet(d, *zoneRecord.HostedZone.Name) if err != nil { return err } @@ -232,15 +252,17 @@ func resourceAwsRoute53RecordDelete(d *schema.ResourceData, meta interface{}) er Refresh: func() (interface{}, string, error) { _, err := conn.ChangeResourceRecordSets(req) if err != nil { - if strings.Contains(err.Error(), "PriorRequestNotComplete") { - // There is some pending operation, so just retry - // in a bit. - return 42, "rejected", nil - } - - if strings.Contains(err.Error(), "InvalidChangeBatch") { - // This means that the record is already gone. - return 42, "accepted", nil + if r53err, ok := err.(aws.APIError); ok { + if r53err.Code == "PriorRequestNotComplete" { + // There is some pending operation, so just retry + // in a bit. + return 42, "rejected", nil + } + + if r53err.Code == "InvalidChangeBatch" { + // This means that the record is already gone. 
+ return 42, "accepted", nil + } } return 42, "failure", err @@ -257,16 +279,20 @@ func resourceAwsRoute53RecordDelete(d *schema.ResourceData, meta interface{}) er return nil } -func resourceAwsRoute53RecordBuildSet(d *schema.ResourceData) (*route53.ResourceRecordSet, error) { +func resourceAwsRoute53RecordBuildSet(d *schema.ResourceData, zoneName string) (*route53.ResourceRecordSet, error) { recs := d.Get("records").(*schema.Set).List() - records := make([]route53.ResourceRecord, 0, len(recs)) - for _, r := range recs { - records = append(records, route53.ResourceRecord{Value: aws.String(r.(string))}) - } + records := expandResourceRecords(recs, d.Get("type").(string)) + // get expanded name + en := expandRecordName(d.Get("name").(string), zoneName) + + // Create the RecordSet request with the fully expanded name, e.g. + // sub.domain.com. Route 53 requires a fully qualified domain name, but does + // not require the trailing ".", which it will add itself, so we don't call + // FQDN here. rec := &route53.ResourceRecordSet{ - Name: aws.String(d.Get("name").(string)), + Name: aws.String(en), Type: aws.String(d.Get("type").(string)), TTL: aws.Long(int64(d.Get("ttl").(int))), ResourceRecords: records, @@ -282,3 +308,27 @@ func FQDN(name string) string { return name + "." } } + +// Route 53 stores the "*" wildcard indicator as ASCII 42 and returns the +// octal equivalent, "\\052". Here we look for that, and convert back to "*" +// as needed. +func cleanRecordName(name string) string { + str := name + if strings.HasPrefix(name, "\\052") { + str = strings.Replace(name, "\\052", "*", 1) + log.Printf("[DEBUG] Replacing octal \\052 for * in: %s", name) + } + return str +} + +// Check if the current record name contains the zone suffix. +// If it does not, add the zone name to form a fully qualified name +// and keep AWS happy.
+func expandRecordName(name, zone string) string { + rn := strings.TrimSuffix(name, ".") + zone = strings.TrimSuffix(zone, ".") + if !strings.HasSuffix(rn, zone) { + rn = strings.Join([]string{name, zone}, ".") + } + return rn +} diff --git a/builtin/providers/aws/resource_aws_route53_record_test.go b/builtin/providers/aws/resource_aws_route53_record_test.go index 08325c783f40..16eda813927e 100644 --- a/builtin/providers/aws/resource_aws_route53_record_test.go +++ b/builtin/providers/aws/resource_aws_route53_record_test.go @@ -9,9 +9,47 @@ import ( "github.com/hashicorp/terraform/terraform" "github.com/hashicorp/aws-sdk-go/aws" - awsr53 "github.com/hashicorp/aws-sdk-go/gen/route53" + route53 "github.com/hashicorp/aws-sdk-go/gen/route53" ) +func TestCleanRecordName(t *testing.T) { + cases := []struct { + Input, Output string + }{ + {"www.nonexample.com", "www.nonexample.com"}, + {"\\052.nonexample.com", "*.nonexample.com"}, + {"nonexample.com", "nonexample.com"}, + } + + for _, tc := range cases { + actual := cleanRecordName(tc.Input) + if actual != tc.Output { + t.Fatalf("input: %s\noutput: %s", tc.Input, actual) + } + } +} + +func TestExpandRecordName(t *testing.T) { + cases := []struct { + Input, Output string + }{ + {"www", "www.nonexample.com"}, + {"dev.www", "dev.www.nonexample.com"}, + {"*", "*.nonexample.com"}, + {"nonexample.com", "nonexample.com"}, + {"test.nonexample.com", "test.nonexample.com"}, + {"test.nonexample.com.", "test.nonexample.com"}, + } + + zone_name := "nonexample.com" + for _, tc := range cases { + actual := expandRecordName(tc.Input, zone_name) + if actual != tc.Output { + t.Fatalf("input: %s\noutput: %s", tc.Input, actual) + } + } +} + func TestAccRoute53Record(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -28,6 +66,22 @@ func TestAccRoute53Record(t *testing.T) { }) } +func TestAccRoute53Record_txtSupport(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { 
testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckRoute53RecordDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccRoute53RecordConfigTXT, + Check: resource.ComposeTestCheckFunc( + testAccCheckRoute53RecordExists("aws_route53_record.default"), + ), + }, + }, + }) +} + func TestAccRoute53Record_generatesSuffix(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -44,6 +98,30 @@ func TestAccRoute53Record_generatesSuffix(t *testing.T) { }) } +func TestAccRoute53Record_wildcard(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckRoute53RecordDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccRoute53WildCardRecordConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckRoute53RecordExists("aws_route53_record.wildcard"), + ), + }, + + // Cause a change, which will trigger a refresh + resource.TestStep{ + Config: testAccRoute53WildCardRecordConfigUpdate, + Check: resource.ComposeTestCheckFunc( + testAccCheckRoute53RecordExists("aws_route53_record.wildcard"), + ), + }, + }, + }) +} + func testAccCheckRoute53RecordDestroy(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).r53conn for _, rs := range s.RootModule().Resources { @@ -56,7 +134,7 @@ func testAccCheckRoute53RecordDestroy(s *terraform.State) error { name := parts[1] rType := parts[2] - lopts := &awsr53.ListResourceRecordSetsRequest{ + lopts := &route53.ListResourceRecordSetsRequest{ HostedZoneID: aws.String(cleanZoneID(zone)), StartRecordName: aws.String(name), StartRecordType: aws.String(rType), @@ -94,9 +172,11 @@ func testAccCheckRoute53RecordExists(n string) resource.TestCheckFunc { name := parts[1] rType := parts[2] - lopts := &awsr53.ListResourceRecordSetsRequest{ + en := expandRecordName(name, "notexample.com") + + lopts := 
&route53.ListResourceRecordSetsRequest{ HostedZoneID: aws.String(cleanZoneID(zone)), - StartRecordName: aws.String(name), + StartRecordName: aws.String(en), StartRecordType: aws.String(rType), } @@ -107,11 +187,13 @@ if len(resp.ResourceRecordSets) == 0 { return fmt.Errorf("Record does not exist") } - rec := resp.ResourceRecordSets[0] - if FQDN(*rec.Name) == FQDN(name) && *rec.Type == rType { - return nil + for _, rec := range resp.ResourceRecordSets { + recName := cleanRecordName(*rec.Name) + if FQDN(recName) == FQDN(en) && *rec.Type == rType { + return nil + } } - return fmt.Errorf("Record does not exist: %#v", rec) + return fmt.Errorf("Record does not exist: %#v", rs.Primary.ID) } } @@ -142,3 +224,60 @@ resource "aws_route53_record" "default" { records = ["127.0.0.1", "127.0.0.27"] } ` + +const testAccRoute53WildCardRecordConfig = ` +resource "aws_route53_zone" "main" { + name = "notexample.com" +} + +resource "aws_route53_record" "default" { + zone_id = "${aws_route53_zone.main.zone_id}" + name = "subdomain" + type = "A" + ttl = "30" + records = ["127.0.0.1", "127.0.0.27"] +} + +resource "aws_route53_record" "wildcard" { + zone_id = "${aws_route53_zone.main.zone_id}" + name = "*.notexample.com" + type = "A" + ttl = "30" + records = ["127.0.0.1"] +} +` + +const testAccRoute53WildCardRecordConfigUpdate = ` +resource "aws_route53_zone" "main" { + name = "notexample.com" +} + +resource "aws_route53_record" "default" { + zone_id = "${aws_route53_zone.main.zone_id}" + name = "subdomain" + type = "A" + ttl = "30" + records = ["127.0.0.1", "127.0.0.27"] +} + +resource "aws_route53_record" "wildcard" { + zone_id = "${aws_route53_zone.main.zone_id}" + name = "*.notexample.com" + type = "A" + ttl = "60" + records = ["127.0.0.1"] +} +` +const testAccRoute53RecordConfigTXT = ` +resource "aws_route53_zone" "main" { + name = "notexample.com" +} + +resource 
"aws_route53_record" "default" { + zone_id = "/hostedzone/${aws_route53_zone.main.zone_id}" + name = "subdomain" + type = "TXT" + ttl = "30" + records = ["lalalala"] +} +` diff --git a/builtin/providers/aws/resource_aws_route53_zone.go b/builtin/providers/aws/resource_aws_route53_zone.go index 6d9914b7f088..a16a711b72c6 100644 --- a/builtin/providers/aws/resource_aws_route53_zone.go +++ b/builtin/providers/aws/resource_aws_route53_zone.go @@ -16,6 +16,7 @@ func resourceAwsRoute53Zone() *schema.Resource { return &schema.Resource{ Create: resourceAwsRoute53ZoneCreate, Read: resourceAwsRoute53ZoneRead, + Update: resourceAwsRoute53ZoneUpdate, Delete: resourceAwsRoute53ZoneDelete, Schema: map[string]*schema.Schema{ @@ -29,6 +30,8 @@ func resourceAwsRoute53Zone() *schema.Resource { Type: schema.TypeString, Computed: true, }, + + "tags": tagsSchema(), }, } } @@ -72,7 +75,7 @@ func resourceAwsRoute53ZoneCreate(d *schema.ResourceData, meta interface{}) erro if err != nil { return err } - return nil + return resourceAwsRoute53ZoneUpdate(d, meta) } func resourceAwsRoute53ZoneRead(d *schema.ResourceData, meta interface{}) error { @@ -80,16 +83,48 @@ func resourceAwsRoute53ZoneRead(d *schema.ResourceData, meta interface{}) error _, err := r53.GetHostedZone(&route53.GetHostedZoneRequest{ID: aws.String(d.Id())}) if err != nil { // Handle a deleted zone - if strings.Contains(err.Error(), "404") { + if r53err, ok := err.(aws.APIError); ok && r53err.Code == "NoSuchHostedZone" { d.SetId("") return nil } return err } + // get tags + req := &route53.ListTagsForResourceRequest{ + ResourceID: aws.String(d.Id()), + ResourceType: aws.String("hostedzone"), + } + + resp, err := r53.ListTagsForResource(req) + if err != nil { + return err + } + + var tags []route53.Tag + if resp.ResourceTagSet != nil { + tags = resp.ResourceTagSet.Tags + } + + if err := d.Set("tags", tagsToMapR53(tags)); err != nil { + return err + } + return nil } +func resourceAwsRoute53ZoneUpdate(d *schema.ResourceData, 
meta interface{}) error { + conn := meta.(*AWSClient).r53conn + + if err := setTagsR53(conn, d); err != nil { + return err + } else { + d.SetPartial("tags") + } + + return resourceAwsRoute53ZoneRead(d, meta) +} + func resourceAwsRoute53ZoneDelete(d *schema.ResourceData, meta interface{}) error { r53 := meta.(*AWSClient).r53conn diff --git a/builtin/providers/aws/resource_aws_route53_zone_test.go b/builtin/providers/aws/resource_aws_route53_zone_test.go index fa78634cf79b..0669f88b1cde 100644 --- a/builtin/providers/aws/resource_aws_route53_zone_test.go +++ b/builtin/providers/aws/resource_aws_route53_zone_test.go @@ -63,6 +63,9 @@ func TestCleanChangeID(t *testing.T) { } func TestAccRoute53Zone(t *testing.T) { + var zone route53.HostedZone + var td route53.ResourceTagSet + resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, @@ -71,7 +74,9 @@ func TestAccRoute53Zone(t *testing.T) { resource.TestStep{ Config: testAccRoute53ZoneConfig, Check: resource.ComposeTestCheckFunc( - testAccCheckRoute53ZoneExists("aws_route53_zone.main"), + testAccCheckRoute53ZoneExists("aws_route53_zone.main", &zone), + testAccLoadTagsR53(&zone, &td), + testAccCheckTagsR53(&td.Tags, "foo", "bar"), ), }, }, @@ -93,7 +98,7 @@ func testAccCheckRoute53ZoneDestroy(s *terraform.State) error { return nil } -func testAccCheckRoute53ZoneExists(n string) resource.TestCheckFunc { +func testAccCheckRoute53ZoneExists(n string, zone *route53.HostedZone) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -105,10 +110,34 @@ func testAccCheckRoute53ZoneExists(n string) resource.TestCheckFunc { } conn := testAccProvider.Meta().(*AWSClient).r53conn - _, err := conn.GetHostedZone(&route53.GetHostedZoneRequest{ID: aws.String(rs.Primary.ID)}) + resp, err := conn.GetHostedZone(&route53.GetHostedZoneRequest{ID: aws.String(rs.Primary.ID)}) if err != nil { return fmt.Errorf("Hosted zone err: %v", err) 
} + *zone = *resp.HostedZone + return nil + } +} + +func testAccLoadTagsR53(zone *route53.HostedZone, td *route53.ResourceTagSet) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).r53conn + + zoneID := cleanZoneID(*zone.ID) + req := &route53.ListTagsForResourceRequest{ + ResourceID: aws.String(zoneID), + ResourceType: aws.String("hostedzone"), + } + + resp, err := conn.ListTagsForResource(req) + if err != nil { + return err + } + + if resp.ResourceTagSet != nil { + *td = *resp.ResourceTagSet + } + return nil } } @@ -116,5 +145,10 @@ func testAccCheckRoute53ZoneExists(n string) resource.TestCheckFunc { const testAccRoute53ZoneConfig = ` resource "aws_route53_zone" "main" { name = "hashicorp.com" + + tags { + foo = "bar" + Name = "tf-route53-tag-test" + } } ` diff --git a/builtin/providers/aws/resource_aws_route_table.go b/builtin/providers/aws/resource_aws_route_table.go index 9d01218b008f..87b4786ab873 100644 --- a/builtin/providers/aws/resource_aws_route_table.go +++ b/builtin/providers/aws/resource_aws_route_table.go @@ -6,10 +6,11 @@ import ( "log" "time" + "github.com/awslabs/aws-sdk-go/aws" + "github.com/awslabs/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" - "github.com/mitchellh/goamz/ec2" ) func resourceAwsRouteTable() *schema.Resource { @@ -61,22 +62,22 @@ func resourceAwsRouteTable() *schema.Resource { } func resourceAwsRouteTableCreate(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).ec2conn + conn := meta.(*AWSClient).ec2SDKconn // Create the routing table - createOpts := &ec2.CreateRouteTable{ - VpcId: d.Get("vpc_id").(string), + createOpts := &ec2.CreateRouteTableInput{ + VPCID: aws.String(d.Get("vpc_id").(string)), } log.Printf("[DEBUG] RouteTable create config: %#v", createOpts) - resp, err := ec2conn.CreateRouteTable(createOpts) + resp, 
err := conn.CreateRouteTable(createOpts) if err != nil { return fmt.Errorf("Error creating route table: %s", err) } // Get the ID and store it - rt := &resp.RouteTable - d.SetId(rt.RouteTableId) + rt := resp.RouteTable + d.SetId(*rt.RouteTableID) log.Printf("[INFO] Route Table ID: %s", d.Id()) // Wait for the route table to become available @@ -86,7 +87,7 @@ func resourceAwsRouteTableCreate(d *schema.ResourceData, meta interface{}) error stateConf := &resource.StateChangeConf{ Pending: []string{"pending"}, Target: "ready", - Refresh: resourceAwsRouteTableStateRefreshFunc(ec2conn, d.Id()), + Refresh: resourceAwsRouteTableStateRefreshFunc(conn, d.Id()), Timeout: 1 * time.Minute, } if _, err := stateConf.WaitForState(); err != nil { @@ -99,51 +100,60 @@ func resourceAwsRouteTableCreate(d *schema.ResourceData, meta interface{}) error } func resourceAwsRouteTableRead(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).ec2conn + conn := meta.(*AWSClient).ec2SDKconn - rtRaw, _, err := resourceAwsRouteTableStateRefreshFunc(ec2conn, d.Id())() + rtRaw, _, err := resourceAwsRouteTableStateRefreshFunc(conn, d.Id())() if err != nil { return err } if rtRaw == nil { + d.SetId("") return nil } rt := rtRaw.(*ec2.RouteTable) - d.Set("vpc_id", rt.VpcId) + d.Set("vpc_id", rt.VPCID) // Create an empty schema.Set to hold all routes route := &schema.Set{F: resourceAwsRouteTableHash} // Loop through the routes and add them to the set for _, r := range rt.Routes { - if r.GatewayId == "local" { + if r.GatewayID != nil && *r.GatewayID == "local" { continue } - if r.Origin == "EnableVgwRoutePropagation" { + if r.Origin != nil && *r.Origin == "EnableVgwRoutePropagation" { continue } m := make(map[string]interface{}) - m["cidr_block"] = r.DestinationCidrBlock - m["gateway_id"] = r.GatewayId - m["instance_id"] = r.InstanceId - m["vpc_peering_connection_id"] = r.VpcPeeringConnectionId + if r.DestinationCIDRBlock != nil { + m["cidr_block"] = *r.DestinationCIDRBlock + } 
+ if r.GatewayID != nil { + m["gateway_id"] = *r.GatewayID + } + if r.InstanceID != nil { + m["instance_id"] = *r.InstanceID + } + if r.VPCPeeringConnectionID != nil { + m["vpc_peering_connection_id"] = *r.VPCPeeringConnectionID + } route.Add(m) } d.Set("route", route) // Tags - d.Set("tags", tagsToMap(rt.Tags)) + d.Set("tags", tagsToMapSDK(rt.Tags)) return nil } func resourceAwsRouteTableUpdate(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).ec2conn + conn := meta.(*AWSClient).ec2SDKconn // Check if the route set as a whole has changed if d.HasChange("route") { @@ -159,8 +169,10 @@ func resourceAwsRouteTableUpdate(d *schema.ResourceData, meta interface{}) error log.Printf( "[INFO] Deleting route from %s: %s", d.Id(), m["cidr_block"].(string)) - _, err := ec2conn.DeleteRoute( - d.Id(), m["cidr_block"].(string)) + _, err := conn.DeleteRoute(&ec2.DeleteRouteInput{ + RouteTableID: aws.String(d.Id()), + DestinationCIDRBlock: aws.String(m["cidr_block"].(string)), + }) if err != nil { return err } @@ -174,17 +186,16 @@ func resourceAwsRouteTableUpdate(d *schema.ResourceData, meta interface{}) error for _, route := range nrs.List() { m := route.(map[string]interface{}) - opts := ec2.CreateRoute{ - RouteTableId: d.Id(), - DestinationCidrBlock: m["cidr_block"].(string), - GatewayId: m["gateway_id"].(string), - InstanceId: m["instance_id"].(string), - VpcPeeringConnectionId: m["vpc_peering_connection_id"].(string), + opts := ec2.CreateRouteInput{ + RouteTableID: aws.String(d.Id()), + DestinationCIDRBlock: aws.String(m["cidr_block"].(string)), + GatewayID: aws.String(m["gateway_id"].(string)), + InstanceID: aws.String(m["instance_id"].(string)), + VPCPeeringConnectionID: aws.String(m["vpc_peering_connection_id"].(string)), } log.Printf("[INFO] Creating route for %s: %#v", d.Id(), opts) - _, err := ec2conn.CreateRoute(&opts) - if err != nil { + if _, err := conn.CreateRoute(&opts); err != nil { return err } @@ -193,7 +204,7 @@ func 
resourceAwsRouteTableUpdate(d *schema.ResourceData, meta interface{}) error } } - if err := setTags(ec2conn, d); err != nil { + if err := setTagsSDK(conn, d); err != nil { return err } else { d.SetPartial("tags") @@ -203,11 +214,11 @@ func resourceAwsRouteTableUpdate(d *schema.ResourceData, meta interface{}) error } func resourceAwsRouteTableDelete(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).ec2conn + conn := meta.(*AWSClient).ec2SDKconn // First request the routing table since we'll have to disassociate // all the subnets first. - rtRaw, _, err := resourceAwsRouteTableStateRefreshFunc(ec2conn, d.Id())() + rtRaw, _, err := resourceAwsRouteTableStateRefreshFunc(conn, d.Id())() if err != nil { return err } @@ -218,16 +229,22 @@ func resourceAwsRouteTableDelete(d *schema.ResourceData, meta interface{}) error // Do all the disassociations for _, a := range rt.Associations { - log.Printf("[INFO] Disassociating association: %s", a.AssociationId) - if _, err := ec2conn.DisassociateRouteTable(a.AssociationId); err != nil { + log.Printf("[INFO] Disassociating association: %s", *a.RouteTableAssociationID) + _, err := conn.DisassociateRouteTable(&ec2.DisassociateRouteTableInput{ + AssociationID: a.RouteTableAssociationID, + }) + if err != nil { return err } } // Delete the route table log.Printf("[INFO] Deleting Route Table: %s", d.Id()) - if _, err := ec2conn.DeleteRouteTable(d.Id()); err != nil { - ec2err, ok := err.(*ec2.Error) + _, err = conn.DeleteRouteTable(&ec2.DeleteRouteTableInput{ + RouteTableID: aws.String(d.Id()), + }) + if err != nil { + ec2err, ok := err.(aws.APIError) if ok && ec2err.Code == "InvalidRouteTableID.NotFound" { return nil } @@ -243,7 +260,7 @@ func resourceAwsRouteTableDelete(d *schema.ResourceData, meta interface{}) error stateConf := &resource.StateChangeConf{ Pending: []string{"ready"}, Target: "", - Refresh: resourceAwsRouteTableStateRefreshFunc(ec2conn, d.Id()), + Refresh: 
resourceAwsRouteTableStateRefreshFunc(conn, d.Id()), Timeout: 1 * time.Minute, } if _, err := stateConf.WaitForState(); err != nil { @@ -279,9 +296,11 @@ func resourceAwsRouteTableHash(v interface{}) int { // a RouteTable. func resourceAwsRouteTableStateRefreshFunc(conn *ec2.EC2, id string) resource.StateRefreshFunc { return func() (interface{}, string, error) { - resp, err := conn.DescribeRouteTables([]string{id}, ec2.NewFilter()) + resp, err := conn.DescribeRouteTables(&ec2.DescribeRouteTablesInput{ + RouteTableIDs: []*string{aws.String(id)}, + }) if err != nil { - if ec2err, ok := err.(*ec2.Error); ok && ec2err.Code == "InvalidRouteTableID.NotFound" { + if ec2err, ok := err.(aws.APIError); ok && ec2err.Code == "InvalidRouteTableID.NotFound" { resp = nil } else { log.Printf("Error on RouteTableStateRefresh: %s", err) @@ -295,7 +314,7 @@ func resourceAwsRouteTableStateRefreshFunc(conn *ec2.EC2, id string) resource.St return nil, "", nil } - rt := &resp.RouteTables[0] + rt := resp.RouteTables[0] return rt, "ready", nil } } diff --git a/builtin/providers/aws/resource_aws_route_table_association.go b/builtin/providers/aws/resource_aws_route_table_association.go index 84683600869d..4b62a59a2345 100644 --- a/builtin/providers/aws/resource_aws_route_table_association.go +++ b/builtin/providers/aws/resource_aws_route_table_association.go @@ -4,8 +4,9 @@ import ( "fmt" "log" + "github.com/awslabs/aws-sdk-go/aws" + "github.com/awslabs/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/schema" - "github.com/mitchellh/goamz/ec2" ) func resourceAwsRouteTableAssociation() *schema.Resource { @@ -31,34 +32,35 @@ func resourceAwsRouteTableAssociation() *schema.Resource { } func resourceAwsRouteTableAssociationCreate(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).ec2conn + conn := meta.(*AWSClient).ec2SDKconn log.Printf( "[INFO] Creating route table association: %s => %s", d.Get("subnet_id").(string), d.Get("route_table_id").(string)) 
- resp, err := ec2conn.AssociateRouteTable( - d.Get("route_table_id").(string), - d.Get("subnet_id").(string)) + resp, err := conn.AssociateRouteTable(&ec2.AssociateRouteTableInput{ + RouteTableID: aws.String(d.Get("route_table_id").(string)), + SubnetID: aws.String(d.Get("subnet_id").(string)), + }) if err != nil { return err } // Set the ID and return - d.SetId(resp.AssociationId) + d.SetId(*resp.AssociationID) log.Printf("[INFO] Association ID: %s", d.Id()) return nil } func resourceAwsRouteTableAssociationRead(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).ec2conn + conn := meta.(*AWSClient).ec2SDKconn // Get the routing table that this association belongs to rtRaw, _, err := resourceAwsRouteTableStateRefreshFunc( - ec2conn, d.Get("route_table_id").(string))() + conn, d.Get("route_table_id").(string))() if err != nil { return err } @@ -70,9 +72,9 @@ func resourceAwsRouteTableAssociationRead(d *schema.ResourceData, meta interface // Inspect that the association exists found := false for _, a := range rt.Associations { - if a.AssociationId == d.Id() { + if *a.RouteTableAssociationID == d.Id() { found = true - d.Set("subnet_id", a.SubnetId) + d.Set("subnet_id", *a.SubnetID) break } } @@ -86,19 +88,21 @@ func resourceAwsRouteTableAssociationRead(d *schema.ResourceData, meta interface } func resourceAwsRouteTableAssociationUpdate(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).ec2conn + conn := meta.(*AWSClient).ec2SDKconn log.Printf( "[INFO] Creating route table association: %s => %s", d.Get("subnet_id").(string), d.Get("route_table_id").(string)) - resp, err := ec2conn.ReassociateRouteTable( - d.Id(), - d.Get("route_table_id").(string)) + req := &ec2.ReplaceRouteTableAssociationInput{ + AssociationID: aws.String(d.Id()), + RouteTableID: aws.String(d.Get("route_table_id").(string)), + } + resp, err := conn.ReplaceRouteTableAssociation(req) if err != nil { - ec2err, ok := err.(*ec2.Error) + ec2err, 
ok := err.(aws.APIError) if ok && ec2err.Code == "InvalidAssociationID.NotFound" { // Not found, so just create a new one return resourceAwsRouteTableAssociationCreate(d, meta) @@ -108,18 +112,21 @@ func resourceAwsRouteTableAssociationUpdate(d *schema.ResourceData, meta interfa } // Update the ID - d.SetId(resp.AssociationId) + d.SetId(*resp.NewAssociationID) log.Printf("[INFO] Association ID: %s", d.Id()) return nil } func resourceAwsRouteTableAssociationDelete(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).ec2conn + conn := meta.(*AWSClient).ec2SDKconn log.Printf("[INFO] Deleting route table association: %s", d.Id()) - if _, err := ec2conn.DisassociateRouteTable(d.Id()); err != nil { - ec2err, ok := err.(*ec2.Error) + _, err := conn.DisassociateRouteTable(&ec2.DisassociateRouteTableInput{ + AssociationID: aws.String(d.Id()), + }) + if err != nil { + ec2err, ok := err.(aws.APIError) if ok && ec2err.Code == "InvalidAssociationID.NotFound" { return nil } diff --git a/builtin/providers/aws/resource_aws_route_table_association_test.go b/builtin/providers/aws/resource_aws_route_table_association_test.go index 079fb41f80c7..b3a77ac4f6b2 100644 --- a/builtin/providers/aws/resource_aws_route_table_association_test.go +++ b/builtin/providers/aws/resource_aws_route_table_association_test.go @@ -4,9 +4,10 @@ import ( "fmt" "testing" + "github.com/awslabs/aws-sdk-go/aws" + "github.com/awslabs/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" - "github.com/mitchellh/goamz/ec2" ) func TestAccAWSRouteTableAssociation(t *testing.T) { @@ -37,7 +38,7 @@ func TestAccAWSRouteTableAssociation(t *testing.T) { } func testAccCheckRouteTableAssociationDestroy(s *terraform.State) error { - conn := testAccProvider.Meta().(*AWSClient).ec2conn + conn := testAccProvider.Meta().(*AWSClient).ec2SDKconn for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route_table_association" { @@ 
-45,11 +46,12 @@ func testAccCheckRouteTableAssociationDestroy(s *terraform.State) error { } // Try to find the resource - resp, err := conn.DescribeRouteTables( - []string{rs.Primary.Attributes["route_table_Id"]}, ec2.NewFilter()) + resp, err := conn.DescribeRouteTables(&ec2.DescribeRouteTablesInput{ + RouteTableIDs: []*string{aws.String(rs.Primary.Attributes["route_table_id"])}, + }) if err != nil { // Verify the error is what we want - ec2err, ok := err.(*ec2.Error) + ec2err, ok := err.(aws.APIError) if !ok { return err } @@ -62,7 +64,7 @@ func testAccCheckRouteTableAssociationDestroy(s *terraform.State) error { rt := resp.RouteTables[0] if len(rt.Associations) > 0 { return fmt.Errorf( - "route table %s has associations", rt.RouteTableId) + "route table %s has associations", *rt.RouteTableID) } } @@ -81,9 +83,10 @@ func testAccCheckRouteTableAssociationExists(n string, v *ec2.RouteTable) resour return fmt.Errorf("No ID is set") } - conn := testAccProvider.Meta().(*AWSClient).ec2conn - resp, err := conn.DescribeRouteTables( - []string{rs.Primary.Attributes["route_table_id"]}, ec2.NewFilter()) + conn := testAccProvider.Meta().(*AWSClient).ec2SDKconn + resp, err := conn.DescribeRouteTables(&ec2.DescribeRouteTablesInput{ + RouteTableIDs: []*string{aws.String(rs.Primary.Attributes["route_table_id"])}, + }) if err != nil { return err } @@ -91,7 +94,7 @@ func testAccCheckRouteTableAssociationExists(n string, v *ec2.RouteTable) resour return fmt.Errorf("RouteTable not found") } - *v = resp.RouteTables[0] + *v = *resp.RouteTables[0] if len(v.Associations) == 0 { return fmt.Errorf("no associations") diff --git a/builtin/providers/aws/resource_aws_route_table_test.go b/builtin/providers/aws/resource_aws_route_table_test.go index 2f4dfab2e566..79a012122b17 100644 --- a/builtin/providers/aws/resource_aws_route_table_test.go +++ b/builtin/providers/aws/resource_aws_route_table_test.go @@ -4,9 +4,10 @@ import ( "fmt" "testing" + "github.com/awslabs/aws-sdk-go/aws" + 
"github.com/awslabs/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" - "github.com/mitchellh/goamz/ec2" ) func TestAccAWSRouteTable_normal(t *testing.T) { @@ -17,9 +18,9 @@ func TestAccAWSRouteTable_normal(t *testing.T) { return fmt.Errorf("bad routes: %#v", v.Routes) } - routes := make(map[string]ec2.Route) + routes := make(map[string]*ec2.Route) for _, r := range v.Routes { - routes[r.DestinationCidrBlock] = r + routes[*r.DestinationCIDRBlock] = r } if _, ok := routes["10.1.0.0/16"]; !ok { @@ -37,9 +38,9 @@ func TestAccAWSRouteTable_normal(t *testing.T) { return fmt.Errorf("bad routes: %#v", v.Routes) } - routes := make(map[string]ec2.Route) + routes := make(map[string]*ec2.Route) for _, r := range v.Routes { - routes[r.DestinationCidrBlock] = r + routes[*r.DestinationCIDRBlock] = r } if _, ok := routes["10.1.0.0/16"]; !ok { @@ -89,9 +90,9 @@ func TestAccAWSRouteTable_instance(t *testing.T) { return fmt.Errorf("bad routes: %#v", v.Routes) } - routes := make(map[string]ec2.Route) + routes := make(map[string]*ec2.Route) for _, r := range v.Routes { - routes[r.DestinationCidrBlock] = r + routes[*r.DestinationCIDRBlock] = r } if _, ok := routes["10.1.0.0/16"]; !ok { @@ -133,7 +134,7 @@ func TestAccAWSRouteTable_tags(t *testing.T) { Config: testAccRouteTableConfigTags, Check: resource.ComposeTestCheckFunc( testAccCheckRouteTableExists("aws_route_table.foo", &route_table), - testAccCheckTags(&route_table.Tags, "foo", "bar"), + testAccCheckTagsSDK(&route_table.Tags, "foo", "bar"), ), }, @@ -141,8 +142,8 @@ func TestAccAWSRouteTable_tags(t *testing.T) { Config: testAccRouteTableConfigTagsUpdate, Check: resource.ComposeTestCheckFunc( testAccCheckRouteTableExists("aws_route_table.foo", &route_table), - testAccCheckTags(&route_table.Tags, "foo", ""), - testAccCheckTags(&route_table.Tags, "bar", "baz"), + testAccCheckTagsSDK(&route_table.Tags, "foo", ""), + testAccCheckTagsSDK(&route_table.Tags, "bar", 
"baz"), ), }, }, @@ -150,7 +151,7 @@ func TestAccAWSRouteTable_tags(t *testing.T) { } func testAccCheckRouteTableDestroy(s *terraform.State) error { - conn := testAccProvider.Meta().(*AWSClient).ec2conn + conn := testAccProvider.Meta().(*AWSClient).ec2SDKconn for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route_table" { @@ -158,8 +159,9 @@ func testAccCheckRouteTableDestroy(s *terraform.State) error { } // Try to find the resource - resp, err := conn.DescribeRouteTables( - []string{rs.Primary.ID}, ec2.NewFilter()) + resp, err := conn.DescribeRouteTables(&ec2.DescribeRouteTablesInput{ + RouteTableIDs: []*string{aws.String(rs.Primary.ID)}, + }) if err == nil { if len(resp.RouteTables) > 0 { return fmt.Errorf("still exist.") @@ -169,7 +171,7 @@ func testAccCheckRouteTableDestroy(s *terraform.State) error { } // Verify the error is what we want - ec2err, ok := err.(*ec2.Error) + ec2err, ok := err.(aws.APIError) if !ok { return err } @@ -192,9 +194,10 @@ func testAccCheckRouteTableExists(n string, v *ec2.RouteTable) resource.TestChec return fmt.Errorf("No ID is set") } - conn := testAccProvider.Meta().(*AWSClient).ec2conn - resp, err := conn.DescribeRouteTables( - []string{rs.Primary.ID}, ec2.NewFilter()) + conn := testAccProvider.Meta().(*AWSClient).ec2SDKconn + resp, err := conn.DescribeRouteTables(&ec2.DescribeRouteTablesInput{ + RouteTableIDs: []*string{aws.String(rs.Primary.ID)}, + }) if err != nil { return err } @@ -202,13 +205,16 @@ func testAccCheckRouteTableExists(n string, v *ec2.RouteTable) resource.TestChec return fmt.Errorf("RouteTable not found") } - *v = resp.RouteTables[0] + *v = *resp.RouteTables[0] return nil } } -func TestAccAWSRouteTable_vpcPeering(t *testing.T) { +// TODO: re-enable this test. 
+// VPC Peering connections are prefixed with pcx +// Right now there is no VPC Peering resource +func _TestAccAWSRouteTable_vpcPeering(t *testing.T) { var v ec2.RouteTable testCheck := func(*terraform.State) error { @@ -216,9 +222,9 @@ func TestAccAWSRouteTable_vpcPeering(t *testing.T) { return fmt.Errorf("bad routes: %#v", v.Routes) } - routes := make(map[string]ec2.Route) + routes := make(map[string]*ec2.Route) for _, r := range v.Routes { - routes[r.DestinationCidrBlock] = r + routes[*r.DestinationCIDRBlock] = r } if _, ok := routes["10.1.0.0/16"]; !ok { @@ -345,6 +351,9 @@ resource "aws_route_table" "foo" { } ` +// TODO: re-enable this test. +// VPC Peering connections are prefixed with pcx +// Right now there is no VPC Peering resource const testAccRouteTableVpcPeeringConfig = ` resource "aws_vpc" "foo" { cidr_block = "10.1.0.0/16" @@ -359,7 +368,7 @@ resource "aws_route_table" "foo" { route { cidr_block = "10.2.0.0/16" - vpc_peering_connection_id = "vpc-12345" + vpc_peering_connection_id = "pcx-12345" } } ` diff --git a/builtin/providers/aws/resource_aws_s3_bucket.go b/builtin/providers/aws/resource_aws_s3_bucket.go index d832190b0c4b..a33f9b35f333 100644 --- a/builtin/providers/aws/resource_aws_s3_bucket.go +++ b/builtin/providers/aws/resource_aws_s3_bucket.go @@ -14,6 +14,7 @@ func resourceAwsS3Bucket() *schema.Resource { return &schema.Resource{ Create: resourceAwsS3BucketCreate, Read: resourceAwsS3BucketRead, + Update: resourceAwsS3BucketUpdate, Delete: resourceAwsS3BucketDelete, Schema: map[string]*schema.Schema{ @@ -29,6 +30,8 @@ func resourceAwsS3Bucket() *schema.Resource { Optional: true, ForceNew: true, }, + + "tags": tagsSchema(), }, } } @@ -64,7 +67,15 @@ func resourceAwsS3BucketCreate(d *schema.ResourceData, meta interface{}) error { // Assign the bucket name as the resource ID d.SetId(bucket) - return nil + return resourceAwsS3BucketUpdate(d, meta) +} + +func resourceAwsS3BucketUpdate(d *schema.ResourceData, meta interface{}) error { + s3conn := 
meta.(*AWSClient).s3conn + if err := setTagsS3(s3conn, d); err != nil { + return err + } + return resourceAwsS3BucketRead(d, meta) } func resourceAwsS3BucketRead(d *schema.ResourceData, meta interface{}) error { @@ -76,6 +87,16 @@ func resourceAwsS3BucketRead(d *schema.ResourceData, meta interface{}) error { if err != nil { return err } + + tagSet, err := getTagSetS3(s3conn, d.Id()) + if err != nil { + return err + } + + if err := d.Set("tags", tagsToMapS3(tagSet)); err != nil { + return err + } + return nil } diff --git a/builtin/providers/aws/resource_aws_security_group.go b/builtin/providers/aws/resource_aws_security_group.go index 451f1816f5d2..6621ea8e7fec 100644 --- a/builtin/providers/aws/resource_aws_security_group.go +++ b/builtin/providers/aws/resource_aws_security_group.go @@ -7,10 +7,11 @@ import ( "sort" "time" + "github.com/awslabs/aws-sdk-go/aws" + "github.com/awslabs/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" - "github.com/mitchellh/goamz/ec2" ) func resourceAwsSecurityGroup() *schema.Resource { @@ -141,28 +142,28 @@ func resourceAwsSecurityGroup() *schema.Resource { } func resourceAwsSecurityGroupCreate(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).ec2conn + conn := meta.(*AWSClient).ec2SDKconn - securityGroupOpts := ec2.SecurityGroup{ - Name: d.Get("name").(string), + securityGroupOpts := &ec2.CreateSecurityGroupInput{ + GroupName: aws.String(d.Get("name").(string)), } if v := d.Get("vpc_id"); v != nil { - securityGroupOpts.VpcId = v.(string) + securityGroupOpts.VPCID = aws.String(v.(string)) } if v := d.Get("description"); v != nil { - securityGroupOpts.Description = v.(string) + securityGroupOpts.Description = aws.String(v.(string)) } log.Printf( "[DEBUG] Security Group create configuration: %#v", securityGroupOpts) - createResp, err := ec2conn.CreateSecurityGroup(securityGroupOpts) + 
createResp, err := conn.CreateSecurityGroup(securityGroupOpts) if err != nil { return fmt.Errorf("Error creating Security Group: %s", err) } - d.SetId(createResp.Id) + d.SetId(*createResp.GroupID) log.Printf("[INFO] Security Group ID: %s", d.Id()) @@ -173,7 +174,7 @@ func resourceAwsSecurityGroupCreate(d *schema.ResourceData, meta interface{}) er stateConf := &resource.StateChangeConf{ Pending: []string{""}, Target: "exists", - Refresh: SGStateRefreshFunc(ec2conn, d.Id()), + Refresh: SGStateRefreshFunc(conn, d.Id()), Timeout: 1 * time.Minute, } if _, err := stateConf.WaitForState(); err != nil { @@ -186,9 +187,9 @@ func resourceAwsSecurityGroupCreate(d *schema.ResourceData, meta interface{}) er } func resourceAwsSecurityGroupRead(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).ec2conn + conn := meta.(*AWSClient).ec2SDKconn - sgRaw, _, err := SGStateRefreshFunc(ec2conn, d.Id())() + sgRaw, _, err := SGStateRefreshFunc(conn, d.Id())() if err != nil { return err } @@ -197,26 +198,25 @@ func resourceAwsSecurityGroupRead(d *schema.ResourceData, meta interface{}) erro return nil } - sg := sgRaw.(*ec2.SecurityGroupInfo) + sg := sgRaw.(*ec2.SecurityGroup) - ingressRules := resourceAwsSecurityGroupIPPermGather(d, sg.IPPerms) - egressRules := resourceAwsSecurityGroupIPPermGather(d, sg.IPPermsEgress) + ingressRules := resourceAwsSecurityGroupIPPermGather(d, sg.IPPermissions) + egressRules := resourceAwsSecurityGroupIPPermGather(d, sg.IPPermissionsEgress) d.Set("description", sg.Description) - d.Set("name", sg.Name) - d.Set("vpc_id", sg.VpcId) - d.Set("owner_id", sg.OwnerId) + d.Set("name", sg.GroupName) + d.Set("vpc_id", sg.VPCID) + d.Set("owner_id", sg.OwnerID) d.Set("ingress", ingressRules) d.Set("egress", egressRules) - d.Set("tags", tagsToMap(sg.Tags)) - + d.Set("tags", tagsToMapSDK(sg.Tags)) return nil } func resourceAwsSecurityGroupUpdate(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).ec2conn + conn := 
meta.(*AWSClient).ec2SDKconn - sgRaw, _, err := SGStateRefreshFunc(ec2conn, d.Id())() + sgRaw, _, err := SGStateRefreshFunc(conn, d.Id())() if err != nil { return err } @@ -224,7 +224,8 @@ func resourceAwsSecurityGroupUpdate(d *schema.ResourceData, meta interface{}) er d.SetId("") return nil } - group := sgRaw.(*ec2.SecurityGroupInfo).SecurityGroup + + group := sgRaw.(*ec2.SecurityGroup) err = resourceAwsSecurityGroupUpdateRules(d, "ingress", meta, group) if err != nil { @@ -238,7 +239,7 @@ func resourceAwsSecurityGroupUpdate(d *schema.ResourceData, meta interface{}) er } } - if err := setTags(ec2conn, d); err != nil { + if err := setTagsSDK(conn, d); err != nil { return err } @@ -248,14 +249,16 @@ func resourceAwsSecurityGroupUpdate(d *schema.ResourceData, meta interface{}) er } func resourceAwsSecurityGroupDelete(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).ec2conn + conn := meta.(*AWSClient).ec2SDKconn log.Printf("[DEBUG] Security Group destroy: %v", d.Id()) return resource.Retry(5*time.Minute, func() error { - _, err := ec2conn.DeleteSecurityGroup(ec2.SecurityGroup{Id: d.Id()}) + _, err := conn.DeleteSecurityGroup(&ec2.DeleteSecurityGroupInput{ + GroupID: aws.String(d.Id()), + }) if err != nil { - ec2err, ok := err.(*ec2.Error) + ec2err, ok := err.(aws.APIError) if !ok { return err } @@ -282,6 +285,7 @@ func resourceAwsSecurityGroupRuleHash(v interface{}) int { buf.WriteString(fmt.Sprintf("%d-", m["from_port"].(int))) buf.WriteString(fmt.Sprintf("%d-", m["to_port"].(int))) buf.WriteString(fmt.Sprintf("%s-", m["protocol"].(string))) + buf.WriteString(fmt.Sprintf("%t-", m["self"].(bool))) // We need to make sure to sort the strings below so that we always // generate the same hash code no matter what is in the set. 
@@ -313,34 +317,45 @@ func resourceAwsSecurityGroupRuleHash(v interface{}) int {
 	return hashcode.String(buf.String())
 }
 
-func resourceAwsSecurityGroupIPPermGather(d *schema.ResourceData, permissions []ec2.IPPerm) []map[string]interface{} {
+func resourceAwsSecurityGroupIPPermGather(d *schema.ResourceData, permissions []*ec2.IPPermission) []map[string]interface{} {
 	ruleMap := make(map[string]map[string]interface{})
 	for _, perm := range permissions {
-		k := fmt.Sprintf("%s-%d-%d", perm.Protocol, perm.FromPort, perm.ToPort)
+		// Dereference the port pointers before use: formatting a *int64 with
+		// %d would print the pointer address, not the port number.
+		var fromPort, toPort int64
+		if v := perm.FromPort; v != nil {
+			fromPort = *v
+		}
+		if v := perm.ToPort; v != nil {
+			toPort = *v
+		}
+
+		k := fmt.Sprintf("%s-%d-%d", *perm.IPProtocol, fromPort, toPort)
 		m, ok := ruleMap[k]
 		if !ok {
 			m = make(map[string]interface{})
 			ruleMap[k] = m
 		}
 
-		m["from_port"] = perm.FromPort
-		m["to_port"] = perm.ToPort
-		m["protocol"] = perm.Protocol
+		m["from_port"] = fromPort
+		m["to_port"] = toPort
+		m["protocol"] = *perm.IPProtocol
 
-		if len(perm.SourceIPs) > 0 {
+		if len(perm.IPRanges) > 0 {
 			raw, ok := m["cidr_blocks"]
 			if !ok {
-				raw = make([]string, 0, len(perm.SourceIPs))
+				raw = make([]string, 0, len(perm.IPRanges))
 			}
 
 			list := raw.([]string)
-			list = append(list, perm.SourceIPs...)
+			for _, ip := range perm.IPRanges {
+				list = append(list, *ip.CIDRIP)
+			}
+
 			m["cidr_blocks"] = list
 		}
 
 		var groups []string
-		if len(perm.SourceGroups) > 0 {
-			groups = flattenSecurityGroups(perm.SourceGroups)
+		if len(perm.UserIDGroupPairs) > 0 {
+			groups = flattenSecurityGroupsSDK(perm.UserIDGroupPairs)
 		}
 		for i, id := range groups {
 			if id == d.Id() {
@@ -364,13 +379,13 @@ func resourceAwsSecurityGroupIPPermGather(d *schema.ResourceData, permissions []
 	for _, m := range ruleMap {
 		rules = append(rules, m)
 	}
-
 	return rules
 }
 
 func resourceAwsSecurityGroupUpdateRules(
 	d *schema.ResourceData, ruleset string,
-	meta interface{}, group ec2.SecurityGroup) error {
+	meta interface{}, group *ec2.SecurityGroup) error {
+
 	if d.HasChange(ruleset) {
 		o, n := d.GetChange(ruleset)
 		if o == nil {
@@ -383,8 +398,8 @@ func resourceAwsSecurityGroupUpdateRules(
 		os := o.(*schema.Set)
 		ns := n.(*schema.Set)
 
-		remove := expandIPPerms(d.Id(), os.Difference(ns).List())
-		add := expandIPPerms(d.Id(), ns.Difference(os).List())
+		remove := expandIPPermsSDK(group, os.Difference(ns).List())
+		add := expandIPPermsSDK(group, ns.Difference(os).List())
 
 		// TODO: We need to handle partial state better in the in-between
 		// in this update.
@@ -396,34 +411,58 @@ func resourceAwsSecurityGroupUpdateRules(
 		// not have service issues.
 		if len(remove) > 0 || len(add) > 0 {
-			ec2conn := meta.(*AWSClient).ec2conn
+			conn := meta.(*AWSClient).ec2SDKconn
+
+			var err error
 			if len(remove) > 0 {
-				// Revoke the old rules
-				revoke := ec2conn.RevokeSecurityGroup
+				log.Printf("[DEBUG] Revoking security group %#v %s rule: %#v",
+					group, ruleset, remove)
+
 				if ruleset == "egress" {
-					revoke = ec2conn.RevokeSecurityGroupEgress
+					req := &ec2.RevokeSecurityGroupEgressInput{
+						GroupID:       group.GroupID,
+						IPPermissions: remove,
+					}
+					_, err = conn.RevokeSecurityGroupEgress(req)
+				} else {
+					req := &ec2.RevokeSecurityGroupIngressInput{
+						GroupID:       group.GroupID,
+						IPPermissions: remove,
+					}
+					_, err = conn.RevokeSecurityGroupIngress(req)
 				}
 
-				log.Printf("[DEBUG] Revoking security group %s %s rule: %#v",
-					group, ruleset, remove)
-				if _, err := revoke(group, remove); err != nil {
+				if err != nil {
 					return fmt.Errorf(
 						"Error revoking security group %s rules: %s",
 						ruleset, err)
 				}
 			}
 
 			if len(add) > 0 {
+				log.Printf("[DEBUG] Authorizing security group %#v %s rule: %#v",
+					group, ruleset, add)
 				// Authorize the new rules
-				authorize := ec2conn.AuthorizeSecurityGroup
 				if ruleset == "egress" {
-					authorize = ec2conn.AuthorizeSecurityGroupEgress
+					req := &ec2.AuthorizeSecurityGroupEgressInput{
+						GroupID:       group.GroupID,
+						IPPermissions: add,
+					}
+					_, err = conn.AuthorizeSecurityGroupEgress(req)
+				} else {
+					req := &ec2.AuthorizeSecurityGroupIngressInput{
+						GroupID:       group.GroupID,
+						IPPermissions: add,
+					}
+					if group.VPCID == nil || *group.VPCID == "" {
+						req.GroupID = nil
+						req.GroupName = group.GroupName
+					}
+
+					_, err = conn.AuthorizeSecurityGroupIngress(req)
 				}
 
-				log.Printf("[DEBUG] Authorizing security group %s %s rule: %#v",
-					group, ruleset, add)
-				if _, err := authorize(group, add); err != nil {
+				if err != nil {
 					return fmt.Errorf(
 						"Error authorizing security group %s rules: %s",
 						ruleset, err)
@@ -431,7 +470,6 @@ func resourceAwsSecurityGroupUpdateRules(
 				}
 			}
 		}
-
 	return nil
 }
 
 @@
-439,10 +477,12 @@ func resourceAwsSecurityGroupUpdateRules( // a security group. func SGStateRefreshFunc(conn *ec2.EC2, id string) resource.StateRefreshFunc { return func() (interface{}, string, error) { - sgs := []ec2.SecurityGroup{ec2.SecurityGroup{Id: id}} - resp, err := conn.SecurityGroups(sgs, nil) + req := &ec2.DescribeSecurityGroupsInput{ + GroupIDs: []*string{aws.String(id)}, + } + resp, err := conn.DescribeSecurityGroups(req) if err != nil { - if ec2err, ok := err.(*ec2.Error); ok { + if ec2err, ok := err.(aws.APIError); ok { if ec2err.Code == "InvalidSecurityGroupID.NotFound" || ec2err.Code == "InvalidGroup.NotFound" { resp = nil @@ -460,7 +500,7 @@ func SGStateRefreshFunc(conn *ec2.EC2, id string) resource.StateRefreshFunc { return nil, "", nil } - group := &resp.Groups[0] + group := resp.SecurityGroups[0] return group, "exists", nil } } diff --git a/builtin/providers/aws/resource_aws_security_group_test.go b/builtin/providers/aws/resource_aws_security_group_test.go index d31f9754b24e..58908a98a4e4 100644 --- a/builtin/providers/aws/resource_aws_security_group_test.go +++ b/builtin/providers/aws/resource_aws_security_group_test.go @@ -5,13 +5,14 @@ import ( "reflect" "testing" + "github.com/awslabs/aws-sdk-go/aws" + "github.com/awslabs/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" - "github.com/mitchellh/goamz/ec2" ) func TestAccAWSSecurityGroup_normal(t *testing.T) { - var group ec2.SecurityGroupInfo + var group ec2.SecurityGroup resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -28,15 +29,15 @@ func TestAccAWSSecurityGroup_normal(t *testing.T) { resource.TestCheckResourceAttr( "aws_security_group.web", "description", "Used in the terraform acceptance tests"), resource.TestCheckResourceAttr( - "aws_security_group.web", "ingress.332851786.protocol", "tcp"), + "aws_security_group.web", "ingress.3629188364.protocol", "tcp"), resource.TestCheckResourceAttr( 
- "aws_security_group.web", "ingress.332851786.from_port", "80"), + "aws_security_group.web", "ingress.3629188364.from_port", "80"), resource.TestCheckResourceAttr( - "aws_security_group.web", "ingress.332851786.to_port", "8000"), + "aws_security_group.web", "ingress.3629188364.to_port", "8000"), resource.TestCheckResourceAttr( - "aws_security_group.web", "ingress.332851786.cidr_blocks.#", "1"), + "aws_security_group.web", "ingress.3629188364.cidr_blocks.#", "1"), resource.TestCheckResourceAttr( - "aws_security_group.web", "ingress.332851786.cidr_blocks.0", "10.0.0.0/8"), + "aws_security_group.web", "ingress.3629188364.cidr_blocks.0", "10.0.0.0/8"), ), }, }, @@ -44,7 +45,7 @@ func TestAccAWSSecurityGroup_normal(t *testing.T) { } func TestAccAWSSecurityGroup_self(t *testing.T) { - var group ec2.SecurityGroupInfo + var group ec2.SecurityGroup checkSelf := func(s *terraform.State) (err error) { defer func() { @@ -53,7 +54,7 @@ func TestAccAWSSecurityGroup_self(t *testing.T) { } }() - if group.IPPerms[0].SourceGroups[0].Id != group.Id { + if *group.IPPermissions[0].UserIDGroupPairs[0].GroupID != *group.GroupID { return fmt.Errorf("bad: %#v", group) } @@ -74,13 +75,13 @@ func TestAccAWSSecurityGroup_self(t *testing.T) { resource.TestCheckResourceAttr( "aws_security_group.web", "description", "Used in the terraform acceptance tests"), resource.TestCheckResourceAttr( - "aws_security_group.web", "ingress.3128515109.protocol", "tcp"), + "aws_security_group.web", "ingress.3971148406.protocol", "tcp"), resource.TestCheckResourceAttr( - "aws_security_group.web", "ingress.3128515109.from_port", "80"), + "aws_security_group.web", "ingress.3971148406.from_port", "80"), resource.TestCheckResourceAttr( - "aws_security_group.web", "ingress.3128515109.to_port", "8000"), + "aws_security_group.web", "ingress.3971148406.to_port", "8000"), resource.TestCheckResourceAttr( - "aws_security_group.web", "ingress.3128515109.self", "true"), + "aws_security_group.web", "ingress.3971148406.self", 
"true"), checkSelf, ), }, @@ -89,10 +90,10 @@ func TestAccAWSSecurityGroup_self(t *testing.T) { } func TestAccAWSSecurityGroup_vpc(t *testing.T) { - var group ec2.SecurityGroupInfo + var group ec2.SecurityGroup testCheck := func(*terraform.State) error { - if group.VpcId == "" { + if *group.VPCID == "" { return fmt.Errorf("should have vpc ID") } @@ -114,25 +115,25 @@ func TestAccAWSSecurityGroup_vpc(t *testing.T) { resource.TestCheckResourceAttr( "aws_security_group.web", "description", "Used in the terraform acceptance tests"), resource.TestCheckResourceAttr( - "aws_security_group.web", "ingress.332851786.protocol", "tcp"), + "aws_security_group.web", "ingress.3629188364.protocol", "tcp"), resource.TestCheckResourceAttr( - "aws_security_group.web", "ingress.332851786.from_port", "80"), + "aws_security_group.web", "ingress.3629188364.from_port", "80"), resource.TestCheckResourceAttr( - "aws_security_group.web", "ingress.332851786.to_port", "8000"), + "aws_security_group.web", "ingress.3629188364.to_port", "8000"), resource.TestCheckResourceAttr( - "aws_security_group.web", "ingress.332851786.cidr_blocks.#", "1"), + "aws_security_group.web", "ingress.3629188364.cidr_blocks.#", "1"), resource.TestCheckResourceAttr( - "aws_security_group.web", "ingress.332851786.cidr_blocks.0", "10.0.0.0/8"), + "aws_security_group.web", "ingress.3629188364.cidr_blocks.0", "10.0.0.0/8"), resource.TestCheckResourceAttr( - "aws_security_group.web", "egress.332851786.protocol", "tcp"), + "aws_security_group.web", "egress.3629188364.protocol", "tcp"), resource.TestCheckResourceAttr( - "aws_security_group.web", "egress.332851786.from_port", "80"), + "aws_security_group.web", "egress.3629188364.from_port", "80"), resource.TestCheckResourceAttr( - "aws_security_group.web", "egress.332851786.to_port", "8000"), + "aws_security_group.web", "egress.3629188364.to_port", "8000"), resource.TestCheckResourceAttr( - "aws_security_group.web", "egress.332851786.cidr_blocks.#", "1"), + 
"aws_security_group.web", "egress.3629188364.cidr_blocks.#", "1"), resource.TestCheckResourceAttr( - "aws_security_group.web", "egress.332851786.cidr_blocks.0", "10.0.0.0/8"), + "aws_security_group.web", "egress.3629188364.cidr_blocks.0", "10.0.0.0/8"), testCheck, ), }, @@ -141,7 +142,7 @@ func TestAccAWSSecurityGroup_vpc(t *testing.T) { } func TestAccAWSSecurityGroup_MultiIngress(t *testing.T) { - var group ec2.SecurityGroupInfo + var group ec2.SecurityGroup resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -159,7 +160,7 @@ func TestAccAWSSecurityGroup_MultiIngress(t *testing.T) { } func TestAccAWSSecurityGroup_Change(t *testing.T) { - var group ec2.SecurityGroupInfo + var group ec2.SecurityGroup resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -184,30 +185,27 @@ func TestAccAWSSecurityGroup_Change(t *testing.T) { } func testAccCheckAWSSecurityGroupDestroy(s *terraform.State) error { - conn := testAccProvider.Meta().(*AWSClient).ec2conn + conn := testAccProvider.Meta().(*AWSClient).ec2SDKconn for _, rs := range s.RootModule().Resources { if rs.Type != "aws_security_group" { continue } - sgs := []ec2.SecurityGroup{ - ec2.SecurityGroup{ - Id: rs.Primary.ID, - }, - } - // Retrieve our group - resp, err := conn.SecurityGroups(sgs, nil) + req := &ec2.DescribeSecurityGroupsInput{ + GroupIDs: []*string{aws.String(rs.Primary.ID)}, + } + resp, err := conn.DescribeSecurityGroups(req) if err == nil { - if len(resp.Groups) > 0 && resp.Groups[0].Id == rs.Primary.ID { + if len(resp.SecurityGroups) > 0 && *resp.SecurityGroups[0].GroupID == rs.Primary.ID { return fmt.Errorf("Security Group (%s) still exists.", rs.Primary.ID) } return nil } - ec2err, ok := err.(*ec2.Error) + ec2err, ok := err.(aws.APIError) if !ok { return err } @@ -220,7 +218,7 @@ func testAccCheckAWSSecurityGroupDestroy(s *terraform.State) error { return nil } -func testAccCheckAWSSecurityGroupExists(n string, group *ec2.SecurityGroupInfo) 
resource.TestCheckFunc { +func testAccCheckAWSSecurityGroupExists(n string, group *ec2.SecurityGroup) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -231,21 +229,17 @@ func testAccCheckAWSSecurityGroupExists(n string, group *ec2.SecurityGroupInfo) return fmt.Errorf("No Security Group is set") } - conn := testAccProvider.Meta().(*AWSClient).ec2conn - sgs := []ec2.SecurityGroup{ - ec2.SecurityGroup{ - Id: rs.Primary.ID, - }, + conn := testAccProvider.Meta().(*AWSClient).ec2SDKconn + req := &ec2.DescribeSecurityGroupsInput{ + GroupIDs: []*string{aws.String(rs.Primary.ID)}, } - resp, err := conn.SecurityGroups(sgs, nil) + resp, err := conn.DescribeSecurityGroups(req) if err != nil { return err } - if len(resp.Groups) > 0 && resp.Groups[0].Id == rs.Primary.ID { - - *group = resp.Groups[0] - + if len(resp.SecurityGroups) > 0 && *resp.SecurityGroups[0].GroupID == rs.Primary.ID { + *group = *resp.SecurityGroups[0] return nil } @@ -253,32 +247,32 @@ func testAccCheckAWSSecurityGroupExists(n string, group *ec2.SecurityGroupInfo) } } -func testAccCheckAWSSecurityGroupAttributes(group *ec2.SecurityGroupInfo) resource.TestCheckFunc { +func testAccCheckAWSSecurityGroupAttributes(group *ec2.SecurityGroup) resource.TestCheckFunc { return func(s *terraform.State) error { - p := ec2.IPPerm{ - FromPort: 80, - ToPort: 8000, - Protocol: "tcp", - SourceIPs: []string{"10.0.0.0/8"}, + p := &ec2.IPPermission{ + FromPort: aws.Long(80), + ToPort: aws.Long(8000), + IPProtocol: aws.String("tcp"), + IPRanges: []*ec2.IPRange{&ec2.IPRange{CIDRIP: aws.String("10.0.0.0/8")}}, } - if group.Name != "terraform_acceptance_test_example" { - return fmt.Errorf("Bad name: %s", group.Name) + if *group.GroupName != "terraform_acceptance_test_example" { + return fmt.Errorf("Bad name: %s", *group.GroupName) } - if group.Description != "Used in the terraform acceptance tests" { - return fmt.Errorf("Bad description: %s", group.Description) + if 
*group.Description != "Used in the terraform acceptance tests" { + return fmt.Errorf("Bad description: %s", *group.Description) } - if len(group.IPPerms) == 0 { + if len(group.IPPermissions) == 0 { return fmt.Errorf("No IPPerms") } // Compare our ingress - if !reflect.DeepEqual(group.IPPerms[0], p) { + if !reflect.DeepEqual(group.IPPermissions[0], p) { return fmt.Errorf( "Got:\n\n%#v\n\nExpected:\n\n%#v\n", - group.IPPerms[0], + group.IPPermissions[0], p) } @@ -287,7 +281,7 @@ func testAccCheckAWSSecurityGroupAttributes(group *ec2.SecurityGroupInfo) resour } func TestAccAWSSecurityGroup_tags(t *testing.T) { - var group ec2.SecurityGroupInfo + var group ec2.SecurityGroup resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -298,7 +292,7 @@ func TestAccAWSSecurityGroup_tags(t *testing.T) { Config: testAccAWSSecurityGroupConfigTags, Check: resource.ComposeTestCheckFunc( testAccCheckAWSSecurityGroupExists("aws_security_group.foo", &group), - testAccCheckTags(&group.Tags, "foo", "bar"), + testAccCheckTagsSDK(&group.Tags, "foo", "bar"), ), }, @@ -306,56 +300,63 @@ func TestAccAWSSecurityGroup_tags(t *testing.T) { Config: testAccAWSSecurityGroupConfigTagsUpdate, Check: resource.ComposeTestCheckFunc( testAccCheckAWSSecurityGroupExists("aws_security_group.foo", &group), - testAccCheckTags(&group.Tags, "foo", ""), - testAccCheckTags(&group.Tags, "bar", "baz"), + testAccCheckTagsSDK(&group.Tags, "foo", ""), + testAccCheckTagsSDK(&group.Tags, "bar", "baz"), ), }, }, }) } -func testAccCheckAWSSecurityGroupAttributesChanged(group *ec2.SecurityGroupInfo) resource.TestCheckFunc { +func testAccCheckAWSSecurityGroupAttributesChanged(group *ec2.SecurityGroup) resource.TestCheckFunc { return func(s *terraform.State) error { - p := []ec2.IPPerm{ - ec2.IPPerm{ - FromPort: 80, - ToPort: 9000, - Protocol: "tcp", - SourceIPs: []string{"10.0.0.0/8"}, + p := []*ec2.IPPermission{ + &ec2.IPPermission{ + FromPort: aws.Long(80), + ToPort: aws.Long(9000), + IPProtocol: 
aws.String("tcp"), + IPRanges: []*ec2.IPRange{&ec2.IPRange{CIDRIP: aws.String("10.0.0.0/8")}}, }, - ec2.IPPerm{ - FromPort: 80, - ToPort: 8000, - Protocol: "tcp", - SourceIPs: []string{"0.0.0.0/0", "10.0.0.0/8"}, + &ec2.IPPermission{ + FromPort: aws.Long(80), + ToPort: aws.Long(8000), + IPProtocol: aws.String("tcp"), + IPRanges: []*ec2.IPRange{ + &ec2.IPRange{ + CIDRIP: aws.String("0.0.0.0/0"), + }, + &ec2.IPRange{ + CIDRIP: aws.String("10.0.0.0/8"), + }, + }, }, } - if group.Name != "terraform_acceptance_test_example" { - return fmt.Errorf("Bad name: %s", group.Name) + if *group.GroupName != "terraform_acceptance_test_example" { + return fmt.Errorf("Bad name: %s", *group.GroupName) } - if group.Description != "Used in the terraform acceptance tests" { - return fmt.Errorf("Bad description: %s", group.Description) + if *group.Description != "Used in the terraform acceptance tests" { + return fmt.Errorf("Bad description: %s", *group.Description) } // Compare our ingress - if len(group.IPPerms) != 2 { + if len(group.IPPermissions) != 2 { return fmt.Errorf( "Got:\n\n%#v\n\nExpected:\n\n%#v\n", - group.IPPerms, + group.IPPermissions, p) } - if group.IPPerms[0].ToPort == 8000 { - group.IPPerms[1], group.IPPerms[0] = - group.IPPerms[0], group.IPPerms[1] + if *group.IPPermissions[0].ToPort == 8000 { + group.IPPermissions[1], group.IPPermissions[0] = + group.IPPermissions[0], group.IPPermissions[1] } - if !reflect.DeepEqual(group.IPPerms, p) { + if !reflect.DeepEqual(group.IPPermissions, p) { return fmt.Errorf( "Got:\n\n%#v\n\nExpected:\n\n%#v\n", - group.IPPerms, + group.IPPermissions, p) } @@ -374,6 +375,10 @@ resource "aws_security_group" "web" { to_port = 8000 cidr_blocks = ["10.0.0.0/8"] } + + tags { + Name = "tf-acc-test" + } } ` diff --git a/builtin/providers/aws/resource_aws_subnet.go b/builtin/providers/aws/resource_aws_subnet.go index e09fb8bc4442..459e6f43b943 100644 --- a/builtin/providers/aws/resource_aws_subnet.go +++ 
b/builtin/providers/aws/resource_aws_subnet.go @@ -5,8 +5,8 @@ import ( "log" "time" - "github.com/hashicorp/aws-sdk-go/aws" - "github.com/hashicorp/aws-sdk-go/gen/ec2" + "github.com/awslabs/aws-sdk-go/aws" + "github.com/awslabs/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" ) @@ -42,7 +42,7 @@ func resourceAwsSubnet() *schema.Resource { "map_public_ip_on_launch": &schema.Schema{ Type: schema.TypeBool, Optional: true, - Computed: true, + Default: false, }, "tags": tagsSchema(), @@ -51,15 +51,15 @@ func resourceAwsSubnet() *schema.Resource { } func resourceAwsSubnetCreate(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).awsEC2conn + conn := meta.(*AWSClient).ec2SDKconn - createOpts := &ec2.CreateSubnetRequest{ + createOpts := &ec2.CreateSubnetInput{ AvailabilityZone: aws.String(d.Get("availability_zone").(string)), CIDRBlock: aws.String(d.Get("cidr_block").(string)), VPCID: aws.String(d.Get("vpc_id").(string)), } - resp, err := ec2conn.CreateSubnet(createOpts) + resp, err := conn.CreateSubnet(createOpts) if err != nil { return fmt.Errorf("Error creating subnet: %s", err) @@ -75,7 +75,7 @@ func resourceAwsSubnetCreate(d *schema.ResourceData, meta interface{}) error { stateConf := &resource.StateChangeConf{ Pending: []string{"pending"}, Target: "available", - Refresh: SubnetStateRefreshFunc(ec2conn, *subnet.SubnetID), + Refresh: SubnetStateRefreshFunc(conn, *subnet.SubnetID), Timeout: 10 * time.Minute, } @@ -91,10 +91,10 @@ func resourceAwsSubnetCreate(d *schema.ResourceData, meta interface{}) error { } func resourceAwsSubnetRead(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).awsEC2conn + conn := meta.(*AWSClient).ec2SDKconn - resp, err := ec2conn.DescribeSubnets(&ec2.DescribeSubnetsRequest{ - SubnetIDs: []string{d.Id()}, + resp, err := conn.DescribeSubnets(&ec2.DescribeSubnetsInput{ + SubnetIDs: []*string{aws.String(d.Id())}, }) 
if err != nil { @@ -109,7 +109,7 @@ func resourceAwsSubnetRead(d *schema.ResourceData, meta interface{}) error { return nil } - subnet := &resp.Subnets[0] + subnet := resp.Subnets[0] d.Set("vpc_id", subnet.VPCID) d.Set("availability_zone", subnet.AvailabilityZone) @@ -121,25 +121,27 @@ func resourceAwsSubnetRead(d *schema.ResourceData, meta interface{}) error { } func resourceAwsSubnetUpdate(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).awsEC2conn + conn := meta.(*AWSClient).ec2SDKconn d.Partial(true) - if err := setTagsSDK(ec2conn, d); err != nil { + if err := setTagsSDK(conn, d); err != nil { return err } else { d.SetPartial("tags") } if d.HasChange("map_public_ip_on_launch") { - modifyOpts := &ec2.ModifySubnetAttributeRequest{ - SubnetID: aws.String(d.Id()), - MapPublicIPOnLaunch: &ec2.AttributeBooleanValue{aws.Boolean(true)}, + modifyOpts := &ec2.ModifySubnetAttributeInput{ + SubnetID: aws.String(d.Id()), + MapPublicIPOnLaunch: &ec2.AttributeBooleanValue{ + Value: aws.Boolean(d.Get("map_public_ip_on_launch").(bool)), + }, } log.Printf("[DEBUG] Subnet modify attributes: %#v", modifyOpts) - err := ec2conn.ModifySubnetAttribute(modifyOpts) + _, err := conn.ModifySubnetAttribute(modifyOpts) if err != nil { return err @@ -154,20 +156,41 @@ func resourceAwsSubnetUpdate(d *schema.ResourceData, meta interface{}) error { } func resourceAwsSubnetDelete(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).awsEC2conn + conn := meta.(*AWSClient).ec2SDKconn log.Printf("[INFO] Deleting subnet: %s", d.Id()) - - err := ec2conn.DeleteSubnet(&ec2.DeleteSubnetRequest{ + req := &ec2.DeleteSubnetInput{ SubnetID: aws.String(d.Id()), - }) + } - if err != nil { - ec2err, ok := err.(aws.APIError) - if ok && ec2err.Code == "InvalidSubnetID.NotFound" { - return nil - } + wait := resource.StateChangeConf{ + Pending: []string{"pending"}, + Target: "destroyed", + Timeout: 5 * time.Minute, + MinTimeout: 1 * time.Second, + Refresh: 
func() (interface{}, string, error) { + _, err := conn.DeleteSubnet(req) + if err != nil { + if apiErr, ok := err.(aws.APIError); ok { + if apiErr.Code == "DependencyViolation" { + // There is some pending operation, so just retry + // in a bit. + return 42, "pending", nil + } + + if apiErr.Code == "InvalidSubnetID.NotFound" { + return 42, "destroyed", nil + } + } + + return 42, "failure", err + } + + return 42, "destroyed", nil + }, + } + if _, err := wait.WaitForState(); err != nil { return fmt.Errorf("Error deleting subnet: %s", err) } @@ -177,8 +200,8 @@ func resourceAwsSubnetDelete(d *schema.ResourceData, meta interface{}) error { // SubnetStateRefreshFunc returns a resource.StateRefreshFunc that is used to watch a Subnet. func SubnetStateRefreshFunc(conn *ec2.EC2, id string) resource.StateRefreshFunc { return func() (interface{}, string, error) { - resp, err := conn.DescribeSubnets(&ec2.DescribeSubnetsRequest{ - SubnetIDs: []string{id}, + resp, err := conn.DescribeSubnets(&ec2.DescribeSubnetsInput{ + SubnetIDs: []*string{aws.String(id)}, }) if err != nil { if ec2err, ok := err.(aws.APIError); ok && ec2err.Code == "InvalidSubnetID.NotFound" { @@ -195,7 +218,7 @@ func SubnetStateRefreshFunc(conn *ec2.EC2, id string) resource.StateRefreshFunc return nil, "", nil } - subnet := &resp.Subnets[0] + subnet := resp.Subnets[0] return subnet, *subnet.State, nil } } diff --git a/builtin/providers/aws/resource_aws_subnet_test.go b/builtin/providers/aws/resource_aws_subnet_test.go index 77dfeccf0729..256c19147218 100644 --- a/builtin/providers/aws/resource_aws_subnet_test.go +++ b/builtin/providers/aws/resource_aws_subnet_test.go @@ -4,8 +4,8 @@ import ( "fmt" "testing" - "github.com/hashicorp/aws-sdk-go/aws" - "github.com/hashicorp/aws-sdk-go/gen/ec2" + "github.com/awslabs/aws-sdk-go/aws" + "github.com/awslabs/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) @@ -43,7 +43,7 @@ func TestAccAWSSubnet(t 
*testing.T) { } func testAccCheckSubnetDestroy(s *terraform.State) error { - conn := testAccProvider.Meta().(*AWSClient).awsEC2conn + conn := testAccProvider.Meta().(*AWSClient).ec2SDKconn for _, rs := range s.RootModule().Resources { if rs.Type != "aws_subnet" { @@ -51,8 +51,8 @@ func testAccCheckSubnetDestroy(s *terraform.State) error { } // Try to find the resource - resp, err := conn.DescribeSubnets(&ec2.DescribeSubnetsRequest{ - SubnetIDs: []string{rs.Primary.ID}, + resp, err := conn.DescribeSubnets(&ec2.DescribeSubnetsInput{ + SubnetIDs: []*string{aws.String(rs.Primary.ID)}, }) if err == nil { if len(resp.Subnets) > 0 { @@ -86,9 +86,9 @@ func testAccCheckSubnetExists(n string, v *ec2.Subnet) resource.TestCheckFunc { return fmt.Errorf("No ID is set") } - conn := testAccProvider.Meta().(*AWSClient).awsEC2conn - resp, err := conn.DescribeSubnets(&ec2.DescribeSubnetsRequest{ - SubnetIDs: []string{rs.Primary.ID}, + conn := testAccProvider.Meta().(*AWSClient).ec2SDKconn + resp, err := conn.DescribeSubnets(&ec2.DescribeSubnetsInput{ + SubnetIDs: []*string{aws.String(rs.Primary.ID)}, }) if err != nil { return err @@ -97,7 +97,7 @@ func testAccCheckSubnetExists(n string, v *ec2.Subnet) resource.TestCheckFunc { return fmt.Errorf("Subnet not found") } - *v = resp.Subnets[0] + *v = *resp.Subnets[0] return nil } @@ -112,5 +112,8 @@ resource "aws_subnet" "foo" { cidr_block = "10.1.1.0/24" vpc_id = "${aws_vpc.foo.id}" map_public_ip_on_launch = true + tags { + Name = "tf-subnet-acc-test" + } } ` diff --git a/builtin/providers/aws/resource_aws_vpc.go b/builtin/providers/aws/resource_aws_vpc.go index f4ac2162e898..bd41d5a587a4 100644 --- a/builtin/providers/aws/resource_aws_vpc.go +++ b/builtin/providers/aws/resource_aws_vpc.go @@ -5,9 +5,10 @@ import ( "log" "time" + "github.com/awslabs/aws-sdk-go/aws" + "github.com/awslabs/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" - "github.com/mitchellh/goamz/ec2" 
) func resourceAwsVpc() *schema.Resource { @@ -63,23 +64,26 @@ func resourceAwsVpc() *schema.Resource { } func resourceAwsVpcCreate(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).ec2conn - + conn := meta.(*AWSClient).ec2SDKconn + instance_tenancy := "default" + if v, ok := d.GetOk("instance_tenancy"); ok { + instance_tenancy = v.(string) + } // Create the VPC - createOpts := &ec2.CreateVpc{ - CidrBlock: d.Get("cidr_block").(string), - InstanceTenancy: d.Get("instance_tenancy").(string), + createOpts := &ec2.CreateVPCInput{ + CIDRBlock: aws.String(d.Get("cidr_block").(string)), + InstanceTenancy: aws.String(instance_tenancy), } - log.Printf("[DEBUG] VPC create config: %#v", createOpts) - vpcResp, err := ec2conn.CreateVpc(createOpts) + log.Printf("[DEBUG] VPC create config: %#v", *createOpts) + vpcResp, err := conn.CreateVPC(createOpts) if err != nil { return fmt.Errorf("Error creating VPC: %s", err) } // Get the ID and store it - vpc := &vpcResp.VPC - log.Printf("[INFO] VPC ID: %s", vpc.VpcId) - d.SetId(vpc.VpcId) + vpc := vpcResp.VPC + d.SetId(*vpc.VPCID) + log.Printf("[INFO] VPC ID: %s", d.Id()) // Set partial mode and say that we setup the cidr block d.Partial(true) @@ -92,7 +96,7 @@ func resourceAwsVpcCreate(d *schema.ResourceData, meta interface{}) error { stateConf := &resource.StateChangeConf{ Pending: []string{"pending"}, Target: "available", - Refresh: VPCStateRefreshFunc(ec2conn, d.Id()), + Refresh: VPCStateRefreshFunc(conn, d.Id()), Timeout: 10 * time.Minute, } if _, err := stateConf.WaitForState(); err != nil { @@ -106,10 +110,10 @@ func resourceAwsVpcCreate(d *schema.ResourceData, meta interface{}) error { } func resourceAwsVpcRead(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).ec2conn + conn := meta.(*AWSClient).ec2SDKconn // Refresh the VPC state - vpcRaw, _, err := VPCStateRefreshFunc(ec2conn, d.Id())() + vpcRaw, _, err := VPCStateRefreshFunc(conn, d.Id())() if err != nil { return 
err } @@ -120,79 +124,106 @@ func resourceAwsVpcRead(d *schema.ResourceData, meta interface{}) error { // VPC stuff vpc := vpcRaw.(*ec2.VPC) - d.Set("cidr_block", vpc.CidrBlock) + vpcid := d.Id() + d.Set("cidr_block", vpc.CIDRBlock) // Tags - d.Set("tags", tagsToMap(vpc.Tags)) + d.Set("tags", tagsToMapSDK(vpc.Tags)) // Attributes - resp, err := ec2conn.VpcAttribute(d.Id(), "enableDnsSupport") + attribute := "enableDnsSupport" + DescribeAttrOpts := &ec2.DescribeVPCAttributeInput{ + Attribute: aws.String(attribute), + VPCID: aws.String(vpcid), + } + resp, err := conn.DescribeVPCAttribute(DescribeAttrOpts) if err != nil { return err } - d.Set("enable_dns_support", resp.EnableDnsSupport) - - resp, err = ec2conn.VpcAttribute(d.Id(), "enableDnsHostnames") + d.Set("enable_dns_support", *resp.EnableDNSSupport) + attribute = "enableDnsHostnames" + DescribeAttrOpts = &ec2.DescribeVPCAttributeInput{ + Attribute: &attribute, + VPCID: &vpcid, + } + resp, err = conn.DescribeVPCAttribute(DescribeAttrOpts) if err != nil { return err } - d.Set("enable_dns_hostnames", resp.EnableDnsHostnames) + d.Set("enable_dns_hostnames", *resp.EnableDNSHostnames) // Get the main routing table for this VPC - filter := ec2.NewFilter() - filter.Add("association.main", "true") - filter.Add("vpc-id", d.Id()) - routeResp, err := ec2conn.DescribeRouteTables(nil, filter) + // Really Ugly need to make this better - rmenn + filter1 := &ec2.Filter{ + Name: aws.String("association.main"), + Values: []*string{aws.String("true")}, + } + filter2 := &ec2.Filter{ + Name: aws.String("vpc-id"), + Values: []*string{aws.String(d.Id())}, + } + DescribeRouteOpts := &ec2.DescribeRouteTablesInput{ + Filters: []*ec2.Filter{filter1, filter2}, + } + routeResp, err := conn.DescribeRouteTables(DescribeRouteOpts) if err != nil { return err } if v := routeResp.RouteTables; len(v) > 0 { - d.Set("main_route_table_id", v[0].RouteTableId) + d.Set("main_route_table_id", *v[0].RouteTableID) } - 
resourceAwsVpcSetDefaultNetworkAcl(ec2conn, d) - resourceAwsVpcSetDefaultSecurityGroup(ec2conn, d) + resourceAwsVpcSetDefaultNetworkAcl(conn, d) + resourceAwsVpcSetDefaultSecurityGroup(conn, d) return nil } func resourceAwsVpcUpdate(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).ec2conn + conn := meta.(*AWSClient).ec2SDKconn // Turn on partial mode d.Partial(true) - + vpcid := d.Id() if d.HasChange("enable_dns_hostnames") { - options := new(ec2.ModifyVpcAttribute) - options.EnableDnsHostnames = d.Get("enable_dns_hostnames").(bool) - options.SetEnableDnsHostnames = true + val := d.Get("enable_dns_hostnames").(bool) + modifyOpts := &ec2.ModifyVPCAttributeInput{ + VPCID: &vpcid, + EnableDNSHostnames: &ec2.AttributeBooleanValue{ + Value: &val, + }, + } log.Printf( - "[INFO] Modifying enable_dns_hostnames vpc attribute for %s: %#v", - d.Id(), options) - if _, err := ec2conn.ModifyVpcAttribute(d.Id(), options); err != nil { + "[INFO] Modifying enable_dns_hostnames vpc attribute for %s: %#v", + d.Id(), modifyOpts) + if _, err := conn.ModifyVPCAttribute(modifyOpts); err != nil { return err } - d.SetPartial("enable_dns_hostnames") + d.SetPartial("enable_dns_hostnames") } if d.HasChange("enable_dns_support") { - options := new(ec2.ModifyVpcAttribute) - options.EnableDnsSupport = d.Get("enable_dns_support").(bool) - options.SetEnableDnsSupport = true + val := d.Get("enable_dns_support").(bool) + modifyOpts := &ec2.ModifyVPCAttributeInput{ + VPCID: &vpcid, + EnableDNSSupport: &ec2.AttributeBooleanValue{ + Value: &val, + }, + } log.Printf( "[INFO] Modifying enable_dns_support vpc attribute for %s: %#v", - d.Id(), options) - if _, err := ec2conn.ModifyVpcAttribute(d.Id(), options); err != nil { + d.Id(), modifyOpts) + if _, err := conn.ModifyVPCAttribute(modifyOpts); err != nil { return err } d.SetPartial("enable_dns_support") } - if err := setTags(ec2conn, d); err != nil { + if err := setTagsSDK(conn, d); err != nil { return err } else { 
d.SetPartial("tags") @@ -203,11 +234,14 @@ func resourceAwsVpcUpdate(d *schema.ResourceData, meta interface{}) error { } func resourceAwsVpcDelete(d *schema.ResourceData, meta interface{}) error { - ec2conn := meta.(*AWSClient).ec2conn - + conn := meta.(*AWSClient).ec2SDKconn + vpcID := d.Id() + DeleteVpcOpts := &ec2.DeleteVPCInput{ + VPCID: &vpcID, + } log.Printf("[INFO] Deleting VPC: %s", d.Id()) - if _, err := ec2conn.DeleteVpc(d.Id()); err != nil { - ec2err, ok := err.(*ec2.Error) + if _, err := conn.DeleteVPC(DeleteVpcOpts); err != nil { + ec2err, ok := err.(aws.APIError) if ok && ec2err.Code == "InvalidVpcID.NotFound" { return nil } @@ -222,9 +256,12 @@ func resourceAwsVpcDelete(d *schema.ResourceData, meta interface{}) error { // a VPC. func VPCStateRefreshFunc(conn *ec2.EC2, id string) resource.StateRefreshFunc { return func() (interface{}, string, error) { - resp, err := conn.DescribeVpcs([]string{id}, ec2.NewFilter()) + DescribeVpcOpts := &ec2.DescribeVPCsInput{ + VPCIDs: []*string{aws.String(id)}, + } + resp, err := conn.DescribeVPCs(DescribeVpcOpts) if err != nil { - if ec2err, ok := err.(*ec2.Error); ok && ec2err.Code == "InvalidVpcID.NotFound" { + if ec2err, ok := err.(aws.APIError); ok && ec2err.Code == "InvalidVpcID.NotFound" { resp = nil } else { log.Printf("Error on VPCStateRefresh: %s", err) @@ -238,38 +275,54 @@ func VPCStateRefreshFunc(conn *ec2.EC2, id string) resource.StateRefreshFunc { return nil, "", nil } - vpc := &resp.VPCs[0] - return vpc, vpc.State, nil + vpc := resp.VPCs[0] + return vpc, *vpc.State, nil } } func resourceAwsVpcSetDefaultNetworkAcl(conn *ec2.EC2, d *schema.ResourceData) error { - filter := ec2.NewFilter() - filter.Add("default", "true") - filter.Add("vpc-id", d.Id()) - networkAclResp, err := conn.NetworkAcls(nil, filter) + filter1 := &ec2.Filter{ + Name: aws.String("default"), + Values: []*string{aws.String("true")}, + } + filter2 := &ec2.Filter{ + Name: aws.String("vpc-id"), + Values: []*string{aws.String(d.Id())}, + } 
+ DescribeNetworkACLOpts := &ec2.DescribeNetworkACLsInput{ + Filters: []*ec2.Filter{filter1, filter2}, + } + networkAclResp, err := conn.DescribeNetworkACLs(DescribeNetworkACLOpts) if err != nil { return err } - if v := networkAclResp.NetworkAcls; len(v) > 0 { - d.Set("default_network_acl_id", v[0].NetworkAclId) + if v := networkAclResp.NetworkACLs; len(v) > 0 { + d.Set("default_network_acl_id", v[0].NetworkACLID) } return nil } func resourceAwsVpcSetDefaultSecurityGroup(conn *ec2.EC2, d *schema.ResourceData) error { - filter := ec2.NewFilter() - filter.Add("group-name", "default") - filter.Add("vpc-id", d.Id()) - securityGroupResp, err := conn.SecurityGroups(nil, filter) + filter1 := &ec2.Filter{ + Name: aws.String("group-name"), + Values: []*string{aws.String("default")}, + } + filter2 := &ec2.Filter{ + Name: aws.String("vpc-id"), + Values: []*string{aws.String(d.Id())}, + } + DescribeSgOpts := &ec2.DescribeSecurityGroupsInput{ + Filters: []*ec2.Filter{filter1, filter2}, + } + securityGroupResp, err := conn.DescribeSecurityGroups(DescribeSgOpts) if err != nil { return err } - if v := securityGroupResp.Groups; len(v) > 0 { - d.Set("default_security_group_id", v[0].Id) + if v := securityGroupResp.SecurityGroups; len(v) > 0 { + d.Set("default_security_group_id", v[0].GroupID) } return nil diff --git a/builtin/providers/aws/resource_aws_vpc_peering_connection.go b/builtin/providers/aws/resource_aws_vpc_peering_connection.go index a8316c11477a..0f8c6185bea0 100644 --- a/builtin/providers/aws/resource_aws_vpc_peering_connection.go +++ b/builtin/providers/aws/resource_aws_vpc_peering_connection.go @@ -5,9 +5,10 @@ import ( "log" "time" + "github.com/hashicorp/aws-sdk-go/aws" + "github.com/hashicorp/aws-sdk-go/gen/ec2" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" - "github.com/mitchellh/goamz/ec2" ) func resourceAwsVpcPeeringConnection() *schema.Resource { @@ -19,9 +20,10 @@ func resourceAwsVpcPeeringConnection() 
*schema.Resource { Schema: map[string]*schema.Schema{ "peer_owner_id": &schema.Schema{ - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + DefaultFunc: schema.EnvDefaultFunc("AWS_ACCOUNT_ID", nil), }, "peer_vpc_id": &schema.Schema{ Type: schema.TypeString, @@ -42,20 +44,20 @@ func resourceAwsVpcPeeringCreate(d *schema.ResourceData, meta interface{}) error ec2conn := meta.(*AWSClient).ec2conn // Create the vpc peering connection - createOpts := &ec2.CreateVpcPeeringConnection{ - PeerOwnerId: d.Get("peer_owner_id").(string), - PeerVpcId: d.Get("peer_vpc_id").(string), - VpcId: d.Get("vpc_id").(string), + createOpts := &ec2.CreateVPCPeeringConnectionRequest{ + PeerOwnerID: aws.String(d.Get("peer_owner_id").(string)), + PeerVPCID: aws.String(d.Get("peer_vpc_id").(string)), + VPCID: aws.String(d.Get("vpc_id").(string)), } log.Printf("[DEBUG] VpcPeeringCreate create config: %#v", createOpts) - resp, err := ec2conn.CreateVpcPeeringConnection(createOpts) + resp, err := ec2conn.CreateVPCPeeringConnection(createOpts) if err != nil { return fmt.Errorf("Error creating vpc peering connection: %s", err) } // Get the ID and store it - rt := &resp.VpcPeeringConnection - d.SetId(rt.VpcPeeringConnectionId) + rt := resp.VPCPeeringConnection + d.SetId(*rt.VPCPeeringConnectionID) log.Printf("[INFO] Vpc Peering Connection ID: %s", d.Id()) // Wait for the vpc peering connection to become available @@ -88,11 +90,11 @@ func resourceAwsVpcPeeringRead(d *schema.ResourceData, meta interface{}) error { return nil } - pc := pcRaw.(*ec2.VpcPeeringConnection) + pc := pcRaw.(*ec2.VPCPeeringConnection) - d.Set("peer_owner_id", pc.AccepterVpcInfo.OwnerId) - d.Set("peer_vpc_id", pc.AccepterVpcInfo.VpcId) - d.Set("vpc_id", pc.RequesterVpcInfo.VpcId) + d.Set("peer_owner_id", pc.AccepterVPCInfo.OwnerID) + d.Set("peer_vpc_id", pc.AccepterVPCInfo.VPCID) + d.Set("vpc_id", pc.RequesterVPCInfo.VPCID) d.Set("tags", tagsToMap(pc.Tags)) 
return nil @@ -113,7 +115,10 @@ func resourceAwsVpcPeeringUpdate(d *schema.ResourceData, meta interface{}) error func resourceAwsVpcPeeringDelete(d *schema.ResourceData, meta interface{}) error { ec2conn := meta.(*AWSClient).ec2conn - _, err := ec2conn.DeleteVpcPeeringConnection(d.Id()) + _, err := ec2conn.DeleteVPCPeeringConnection( + &ec2.DeleteVPCPeeringConnectionRequest{ + VPCPeeringConnectionID: aws.String(d.Id()), + }) return err } @@ -122,9 +127,11 @@ func resourceAwsVpcPeeringDelete(d *schema.ResourceData, meta interface{}) error func resourceAwsVpcPeeringConnectionStateRefreshFunc(conn *ec2.EC2, id string) resource.StateRefreshFunc { return func() (interface{}, string, error) { - resp, err := conn.DescribeVpcPeeringConnection([]string{id}, ec2.NewFilter()) + resp, err := conn.DescribeVPCPeeringConnections(&ec2.DescribeVPCPeeringConnectionsRequest{ + VPCPeeringConnectionIDs: []string{id}, + }) if err != nil { - if ec2err, ok := err.(*ec2.Error); ok && ec2err.Code == "InvalidVpcPeeringConnectionID.NotFound" { + if ec2err, ok := err.(aws.APIError); ok && ec2err.Code == "InvalidVpcPeeringConnectionID.NotFound" { resp = nil } else { log.Printf("Error on VpcPeeringConnectionStateRefresh: %s", err) @@ -138,7 +145,7 @@ func resourceAwsVpcPeeringConnectionStateRefreshFunc(conn *ec2.EC2, id string) r return nil, "", nil } - pc := &resp.VpcPeeringConnections[0] + pc := &resp.VPCPeeringConnections[0] return pc, "ready", nil } diff --git a/builtin/providers/aws/resource_aws_vpc_peering_connection_test.go b/builtin/providers/aws/resource_aws_vpc_peering_connection_test.go index 2b4b71e338b8..b3f30c3a140f 100644 --- a/builtin/providers/aws/resource_aws_vpc_peering_connection_test.go +++ b/builtin/providers/aws/resource_aws_vpc_peering_connection_test.go @@ -2,18 +2,24 @@ package aws import ( "fmt" + "os" "testing" + "github.com/hashicorp/aws-sdk-go/gen/ec2" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" - 
"github.com/mitchellh/goamz/ec2" ) func TestAccAWSVPCPeeringConnection_normal(t *testing.T) { var conf ec2.Address resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, + PreCheck: func() { + testAccPreCheck(t) + if os.Getenv("AWS_ACCOUNT_ID") == "" { + t.Fatal("AWS_ACCOUNT_ID must be set") + } + }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSVpcPeeringConnectionDestroy, Steps: []resource.TestStep{ @@ -35,10 +41,13 @@ func testAccCheckAWSVpcPeeringConnectionDestroy(s *terraform.State) error { continue } - describe, err := conn.DescribeVpcPeeringConnection([]string{rs.Primary.ID}, ec2.NewFilter()) + describe, err := conn.DescribeVPCPeeringConnections( + &ec2.DescribeVPCPeeringConnectionsRequest{ + VPCPeeringConnectionIDs: []string{rs.Primary.ID}, + }) if err == nil { - if len(describe.VpcPeeringConnections) != 0 { + if len(describe.VPCPeeringConnections) != 0 { return fmt.Errorf("vpc peering connection still exists") } } @@ -68,11 +77,10 @@ resource "aws_vpc" "foo" { } resource "aws_vpc" "bar" { - cidr_block = "10.0.1.0/16" + cidr_block = "10.1.0.0/16" } resource "aws_vpc_peering_connection" "foo" { - peer_owner_id = "12345" vpc_id = "${aws_vpc.foo.id}" peer_vpc_id = "${aws_vpc.bar.id}" } diff --git a/builtin/providers/aws/resource_aws_vpc_test.go b/builtin/providers/aws/resource_aws_vpc_test.go index b555e0875373..f325d339dc00 100644 --- a/builtin/providers/aws/resource_aws_vpc_test.go +++ b/builtin/providers/aws/resource_aws_vpc_test.go @@ -4,9 +4,10 @@ import ( "fmt" "testing" + "github.com/awslabs/aws-sdk-go/aws" + "github.com/awslabs/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" - "github.com/mitchellh/goamz/ec2" ) func TestAccVpc_basic(t *testing.T) { @@ -65,7 +66,7 @@ func TestAccVpc_tags(t *testing.T) { testAccCheckVpcCidr(&vpc, "10.1.0.0/16"), resource.TestCheckResourceAttr( "aws_vpc.foo", "cidr_block", "10.1.0.0/16"), - testAccCheckTags(&vpc.Tags, 
"foo", "bar"), + testAccCheckTagsSDK(&vpc.Tags, "foo", "bar"), ), }, @@ -73,8 +74,8 @@ func TestAccVpc_tags(t *testing.T) { Config: testAccVpcConfigTagsUpdate, Check: resource.ComposeTestCheckFunc( testAccCheckVpcExists("aws_vpc.foo", &vpc), - testAccCheckTags(&vpc.Tags, "foo", ""), - testAccCheckTags(&vpc.Tags, "bar", "baz"), + testAccCheckTagsSDK(&vpc.Tags, "foo", ""), + testAccCheckTagsSDK(&vpc.Tags, "bar", "baz"), ), }, }, @@ -111,7 +112,7 @@ func TestAccVpcUpdate(t *testing.T) { } func testAccCheckVpcDestroy(s *terraform.State) error { - conn := testAccProvider.Meta().(*AWSClient).ec2conn + conn := testAccProvider.Meta().(*AWSClient).ec2SDKconn for _, rs := range s.RootModule().Resources { if rs.Type != "aws_vpc" { @@ -119,7 +120,10 @@ func testAccCheckVpcDestroy(s *terraform.State) error { } // Try to find the VPC - resp, err := conn.DescribeVpcs([]string{rs.Primary.ID}, ec2.NewFilter()) + DescribeVpcOpts := &ec2.DescribeVPCsInput{ + VPCIDs: []*string{aws.String(rs.Primary.ID)}, + } + resp, err := conn.DescribeVPCs(DescribeVpcOpts) if err == nil { if len(resp.VPCs) > 0 { return fmt.Errorf("VPCs still exist.") @@ -129,7 +133,7 @@ func testAccCheckVpcDestroy(s *terraform.State) error { } // Verify the error is what we want - ec2err, ok := err.(*ec2.Error) + ec2err, ok := err.(aws.APIError) if !ok { return err } @@ -143,8 +147,9 @@ func testAccCheckVpcDestroy(s *terraform.State) error { func testAccCheckVpcCidr(vpc *ec2.VPC, expected string) resource.TestCheckFunc { return func(s *terraform.State) error { - if vpc.CidrBlock != expected { - return fmt.Errorf("Bad cidr: %s", vpc.CidrBlock) + CIDRBlock := vpc.CIDRBlock + if *CIDRBlock != expected { + return fmt.Errorf("Bad cidr: %s", *vpc.CIDRBlock) } return nil @@ -162,8 +167,11 @@ func testAccCheckVpcExists(n string, vpc *ec2.VPC) resource.TestCheckFunc { return fmt.Errorf("No VPC ID is set") } - conn := testAccProvider.Meta().(*AWSClient).ec2conn - resp, err := conn.DescribeVpcs([]string{rs.Primary.ID}, 
ec2.NewFilter()) + conn := testAccProvider.Meta().(*AWSClient).ec2SDKconn + DescribeVpcOpts := &ec2.DescribeVPCsInput{ + VPCIDs: []*string{aws.String(rs.Primary.ID)}, + } + resp, err := conn.DescribeVPCs(DescribeVpcOpts) if err != nil { return err } @@ -171,12 +179,32 @@ func testAccCheckVpcExists(n string, vpc *ec2.VPC) resource.TestCheckFunc { return fmt.Errorf("VPC not found") } - *vpc = resp.VPCs[0] + *vpc = *resp.VPCs[0] return nil } } +// https://github.com/hashicorp/terraform/issues/1301 +func TestAccVpc_bothDnsOptionsSet(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckVpcDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccVpcConfig_BothDnsOptions, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr( + "aws_vpc.bar", "enable_dns_hostnames", "true"), + resource.TestCheckResourceAttr( + "aws_vpc.bar", "enable_dns_support", "true"), + ), + }, + }, + }) +} + const testAccVpcConfig = ` resource "aws_vpc" "foo" { cidr_block = "10.1.0.0/16" @@ -216,3 +244,12 @@ resource "aws_vpc" "bar" { cidr_block = "10.2.0.0/16" } ` + +const testAccVpcConfig_BothDnsOptions = ` +resource "aws_vpc" "bar" { + cidr_block = "10.2.0.0/16" + + enable_dns_hostnames = true + enable_dns_support = true +} +` diff --git a/builtin/providers/aws/resource_aws_vpn_gateway.go b/builtin/providers/aws/resource_aws_vpn_gateway.go new file mode 100644 index 000000000000..b6ecba581363 --- /dev/null +++ b/builtin/providers/aws/resource_aws_vpn_gateway.go @@ -0,0 +1,318 @@ +package aws + +import ( + "fmt" + "log" + "time" + + "github.com/hashicorp/aws-sdk-go/aws" + "github.com/hashicorp/aws-sdk-go/gen/ec2" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsVpnGateway() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsVpnGatewayCreate, + Read: 
resourceAwsVpnGatewayRead, + Update: resourceAwsVpnGatewayUpdate, + Delete: resourceAwsVpnGatewayDelete, + + Schema: map[string]*schema.Schema{ + "availability_zone": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "vpc_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + + "tags": tagsSchema(), + }, + } +} + +func resourceAwsVpnGatewayCreate(d *schema.ResourceData, meta interface{}) error { + ec2conn := meta.(*AWSClient).ec2conn + + createOpts := &ec2.CreateVPNGatewayRequest{ + AvailabilityZone: aws.String(d.Get("availability_zone").(string)), + Type: aws.String("ipsec.1"), + } + + // Create the VPN gateway + log.Printf("[DEBUG] Creating VPN gateway") + resp, err := ec2conn.CreateVPNGateway(createOpts) + if err != nil { + return fmt.Errorf("Error creating VPN gateway: %s", err) + } + + // Get the ID and store it + vpnGateway := resp.VPNGateway + d.SetId(*vpnGateway.VPNGatewayID) + log.Printf("[INFO] VPN Gateway ID: %s", *vpnGateway.VPNGatewayID) + + // Attach the VPN gateway to the correct VPC + return resourceAwsVpnGatewayUpdate(d, meta) +} + +func resourceAwsVpnGatewayRead(d *schema.ResourceData, meta interface{}) error { + ec2conn := meta.(*AWSClient).ec2conn + + vpnGatewayRaw, _, err := vpnGatewayStateRefreshFunc(ec2conn, d.Id())() + if err != nil { + return err + } + if vpnGatewayRaw == nil { + // Seems we have lost our VPN gateway + d.SetId("") + return nil + } + + vpnGateway := vpnGatewayRaw.(*ec2.VPNGateway) + if len(vpnGateway.VPCAttachments) == 0 { + // Gateway exists but not attached to the VPC + d.Set("vpc_id", "") + } else { + d.Set("vpc_id", vpnGateway.VPCAttachments[0].VPCID) + } + d.Set("availability_zone", vpnGateway.AvailabilityZone) + d.Set("tags", tagsToMap(vpnGateway.Tags)) + + return nil +} + +func resourceAwsVpnGatewayUpdate(d *schema.ResourceData, meta interface{}) error { + if d.HasChange("vpc_id") { + // If we're already attached, detach it first + if err := 
resourceAwsVpnGatewayDetach(d, meta); err != nil { + return err + } + + // Attach the VPN gateway to the new vpc + if err := resourceAwsVpnGatewayAttach(d, meta); err != nil { + return err + } + } + + ec2conn := meta.(*AWSClient).ec2conn + + if err := setTags(ec2conn, d); err != nil { + return err + } + + d.SetPartial("tags") + + return resourceAwsVpnGatewayRead(d, meta) +} + +func resourceAwsVpnGatewayDelete(d *schema.ResourceData, meta interface{}) error { + ec2conn := meta.(*AWSClient).ec2conn + + // Detach if it is attached + if err := resourceAwsVpnGatewayDetach(d, meta); err != nil { + return err + } + + log.Printf("[INFO] Deleting VPN gateway: %s", d.Id()) + + return resource.Retry(5*time.Minute, func() error { + err := ec2conn.DeleteVPNGateway(&ec2.DeleteVPNGatewayRequest{ + VPNGatewayID: aws.String(d.Id()), + }) + if err == nil { + return nil + } + + ec2err, ok := err.(aws.APIError) + if !ok { + return err + } + + switch ec2err.Code { + case "InvalidVpnGatewayID.NotFound": + return nil + case "IncorrectState": + return err // retry + } + + return resource.RetryError{Err: err} + }) +} + +func resourceAwsVpnGatewayAttach(d *schema.ResourceData, meta interface{}) error { + ec2conn := meta.(*AWSClient).ec2conn + + if d.Get("vpc_id").(string) == "" { + log.Printf( + "[DEBUG] Not attaching VPN Gateway '%s' as no VPC ID is set", + d.Id()) + return nil + } + + log.Printf( + "[INFO] Attaching VPN Gateway '%s' to VPC '%s'", + d.Id(), + d.Get("vpc_id").(string)) + + _, err := ec2conn.AttachVPNGateway(&ec2.AttachVPNGatewayRequest{ + VPNGatewayID: aws.String(d.Id()), + VPCID: aws.String(d.Get("vpc_id").(string)), + }) + if err != nil { + return err + } + + // A note on the states below: the AWS docs (as of July, 2014) say + // that the states would be: attached, attaching, detached, detaching, + // but when running, I noticed that the state is usually "available" when + // it is attached. 
+ + // Wait for it to be fully attached before continuing + log.Printf("[DEBUG] Waiting for VPN gateway (%s) to attach", d.Id()) + stateConf := &resource.StateChangeConf{ + Pending: []string{"detached", "attaching"}, + Target: "available", + Refresh: VpnGatewayAttachStateRefreshFunc(ec2conn, d.Id(), "available"), + Timeout: 1 * time.Minute, + } + if _, err := stateConf.WaitForState(); err != nil { + return fmt.Errorf( + "Error waiting for VPN gateway (%s) to attach: %s", + d.Id(), err) + } + + return nil +} + +func resourceAwsVpnGatewayDetach(d *schema.ResourceData, meta interface{}) error { + ec2conn := meta.(*AWSClient).ec2conn + + // Get the old VPC ID to detach from + vpcID, _ := d.GetChange("vpc_id") + + if vpcID.(string) == "" { + log.Printf( + "[DEBUG] Not detaching VPN Gateway '%s' as no VPC ID is set", + d.Id()) + return nil + } + + log.Printf( + "[INFO] Detaching VPN Gateway '%s' from VPC '%s'", + d.Id(), + vpcID.(string)) + + wait := true + err := ec2conn.DetachVPNGateway(&ec2.DetachVPNGatewayRequest{ + VPNGatewayID: aws.String(d.Id()), + VPCID: aws.String(vpcID.(string)), + }) + if err != nil { + ec2err, ok := err.(aws.APIError) + if ok { + if ec2err.Code == "InvalidVpnGatewayID.NotFound" { + err = nil + wait = false + } else if ec2err.Code == "InvalidVpnGatewayAttachment.NotFound" { + err = nil + wait = false + } + } + + if err != nil { + return err + } + } + + if !wait { + return nil + } + + // Wait for it to be fully detached before continuing + log.Printf("[DEBUG] Waiting for VPN gateway (%s) to detach", d.Id()) + stateConf := &resource.StateChangeConf{ + Pending: []string{"attached", "detaching", "available"}, + Target: "detached", + Refresh: VpnGatewayAttachStateRefreshFunc(ec2conn, d.Id(), "detached"), + Timeout: 1 * time.Minute, + } + if _, err := stateConf.WaitForState(); err != nil { + return fmt.Errorf( + "Error waiting for vpn gateway (%s) to detach: %s", + d.Id(), err) + } + + return nil +} + +// vpnGatewayStateRefreshFunc returns 
a resource.StateRefreshFunc that is used to watch a VPNGateway. +func vpnGatewayStateRefreshFunc(conn *ec2.EC2, id string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + resp, err := conn.DescribeVPNGateways(&ec2.DescribeVPNGatewaysRequest{ + VPNGatewayIDs: []string{id}, + }) + if err != nil { + if ec2err, ok := err.(aws.APIError); ok && ec2err.Code == "InvalidVpnGatewayID.NotFound" { + resp = nil + } else { + log.Printf("[ERROR] Error on VpnGatewayStateRefresh: %s", err) + return nil, "", err + } + } + + if resp == nil { + // Sometimes AWS just has consistency issues and doesn't see + // our instance yet. Return an empty state. + return nil, "", nil + } + + vpnGateway := &resp.VPNGateways[0] + return vpnGateway, *vpnGateway.State, nil + } +} + +// VpnGatewayAttachStateRefreshFunc returns a resource.StateRefreshFunc that is used to watch +// the state of a VPN gateway's attachment +func VpnGatewayAttachStateRefreshFunc(conn *ec2.EC2, id string, expected string) resource.StateRefreshFunc { + var start time.Time + return func() (interface{}, string, error) { + if start.IsZero() { + start = time.Now() + } + + resp, err := conn.DescribeVPNGateways(&ec2.DescribeVPNGatewaysRequest{ + VPNGatewayIDs: []string{id}, + }) + if err != nil { + if ec2err, ok := err.(aws.APIError); ok && ec2err.Code == "InvalidVpnGatewayID.NotFound" { + resp = nil + } else { + log.Printf("[ERROR] Error on VpnGatewayStateRefresh: %s", err) + return nil, "", err + } + } + + if resp == nil { + // Sometimes AWS just has consistency issues and doesn't see + // our instance yet. Return an empty state. 
+ return nil, "", nil + } + + vpnGateway := &resp.VPNGateways[0] + + if time.Now().Sub(start) > 10*time.Second { + return vpnGateway, expected, nil + } + + if len(vpnGateway.VPCAttachments) == 0 { + // No attachments, we're detached + return vpnGateway, "detached", nil + } + + return vpnGateway, *vpnGateway.VPCAttachments[0].State, nil + } +} diff --git a/builtin/providers/aws/resource_aws_vpn_gateway_test.go b/builtin/providers/aws/resource_aws_vpn_gateway_test.go new file mode 100644 index 000000000000..21ccb980c4d3 --- /dev/null +++ b/builtin/providers/aws/resource_aws_vpn_gateway_test.go @@ -0,0 +1,232 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/hashicorp/aws-sdk-go/aws" + "github.com/hashicorp/aws-sdk-go/gen/ec2" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSVpnGateway(t *testing.T) { + var v, v2 ec2.VPNGateway + + testNotEqual := func(*terraform.State) error { + if len(v.VPCAttachments) == 0 { + return fmt.Errorf("VPN gateway A is not attached") + } + if len(v2.VPCAttachments) == 0 { + return fmt.Errorf("VPN gateway B is not attached") + } + + id1 := v.VPCAttachments[0].VPCID + id2 := v2.VPCAttachments[0].VPCID + if id1 == id2 { + return fmt.Errorf("Both attachment IDs are the same") + } + + return nil + } + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckVpnGatewayDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccVpnGatewayConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckVpnGatewayExists( + "aws_vpn_gateway.foo", &v), + ), + }, + + resource.TestStep{ + Config: testAccVpnGatewayConfigChangeVPC, + Check: resource.ComposeTestCheckFunc( + testAccCheckVpnGatewayExists( + "aws_vpn_gateway.foo", &v2), + testNotEqual, + ), + }, + }, + }) +} + +func TestAccAWSVpnGateway_delete(t *testing.T) { + var vpnGateway ec2.VPNGateway + + testDeleted := 
func(r string) resource.TestCheckFunc { + return func(s *terraform.State) error { + _, ok := s.RootModule().Resources[r] + if ok { + return fmt.Errorf("VPN Gateway %q should have been deleted", r) + } + return nil + } + } + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckVpnGatewayDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccVpnGatewayConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckVpnGatewayExists("aws_vpn_gateway.foo", &vpnGateway)), + }, + resource.TestStep{ + Config: testAccNoVpnGatewayConfig, + Check: resource.ComposeTestCheckFunc(testDeleted("aws_vpn_gateway.foo")), + }, + }, + }) +} + +func TestAccVpnGateway_tags(t *testing.T) { + var v ec2.VPNGateway + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckVpnGatewayDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccCheckVpnGatewayConfigTags, + Check: resource.ComposeTestCheckFunc( + testAccCheckVpnGatewayExists("aws_vpn_gateway.foo", &v), + testAccCheckTags(&v.Tags, "foo", "bar"), + ), + }, + + resource.TestStep{ + Config: testAccCheckVpnGatewayConfigTagsUpdate, + Check: resource.ComposeTestCheckFunc( + testAccCheckVpnGatewayExists("aws_vpn_gateway.foo", &v), + testAccCheckTags(&v.Tags, "foo", ""), + testAccCheckTags(&v.Tags, "bar", "baz"), + ), + }, + }, + }) +} + +func testAccCheckVpnGatewayDestroy(s *terraform.State) error { + ec2conn := testAccProvider.Meta().(*AWSClient).ec2conn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_vpn_gateway" { + continue + } + + // Try to find the resource + resp, err := ec2conn.DescribeVPNGateways(&ec2.DescribeVPNGatewaysRequest{ + VPNGatewayIDs: []string{rs.Primary.ID}, + }) + if err == nil { + if len(resp.VPNGateways) > 0 { + return fmt.Errorf("still exists") + } + + return nil + } + + // Verify the 
error is what we want + ec2err, ok := err.(aws.APIError) + if !ok { + return err + } + if ec2err.Code != "InvalidVpnGatewayID.NotFound" { + return err + } + } + + return nil +} + +func testAccCheckVpnGatewayExists(n string, ig *ec2.VPNGateway) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + ec2conn := testAccProvider.Meta().(*AWSClient).ec2conn + resp, err := ec2conn.DescribeVPNGateways(&ec2.DescribeVPNGatewaysRequest{ + VPNGatewayIDs: []string{rs.Primary.ID}, + }) + if err != nil { + return err + } + if len(resp.VPNGateways) == 0 { + return fmt.Errorf("VPNGateway not found") + } + + *ig = resp.VPNGateways[0] + + return nil + } +} + +const testAccNoVpnGatewayConfig = ` +resource "aws_vpc" "foo" { + cidr_block = "10.1.0.0/16" +} +` + +const testAccVpnGatewayConfig = ` +resource "aws_vpc" "foo" { + cidr_block = "10.1.0.0/16" +} + +resource "aws_vpn_gateway" "foo" { + vpc_id = "${aws_vpc.foo.id}" +} +` + +const testAccVpnGatewayConfigChangeVPC = ` +resource "aws_vpc" "foo" { + cidr_block = "10.1.0.0/16" +} + +resource "aws_vpc" "bar" { + cidr_block = "10.2.0.0/16" +} + +resource "aws_vpn_gateway" "foo" { + vpc_id = "${aws_vpc.bar.id}" +} +` + +const testAccCheckVpnGatewayConfigTags = ` +resource "aws_vpc" "foo" { + cidr_block = "10.1.0.0/16" +} + +resource "aws_vpn_gateway" "foo" { + vpc_id = "${aws_vpc.foo.id}" + tags { + foo = "bar" + } +} +` + +const testAccCheckVpnGatewayConfigTagsUpdate = ` +resource "aws_vpc" "foo" { + cidr_block = "10.1.0.0/16" +} + +resource "aws_vpn_gateway" "foo" { + vpc_id = "${aws_vpc.foo.id}" + tags { + bar = "baz" + } +} +` diff --git a/builtin/providers/aws/s3_tags.go b/builtin/providers/aws/s3_tags.go new file mode 100644 index 000000000000..4b8234b9b07f --- /dev/null +++ b/builtin/providers/aws/s3_tags.go @@ -0,0 +1,131 @@ +package aws + +import ( + 
"crypto/md5" + "encoding/base64" + "encoding/xml" + "log" + + "github.com/hashicorp/aws-sdk-go/aws" + "github.com/hashicorp/aws-sdk-go/gen/s3" + "github.com/hashicorp/terraform/helper/schema" +) + +// setTagsS3 is a helper to set the tags for a resource. It expects the +// tags field to be named "tags" +func setTagsS3(conn *s3.S3, d *schema.ResourceData) error { + if d.HasChange("tags") { + oraw, nraw := d.GetChange("tags") + o := oraw.(map[string]interface{}) + n := nraw.(map[string]interface{}) + create, remove := diffTagsS3(tagsFromMapS3(o), tagsFromMapS3(n)) + + // Set tags + if len(remove) > 0 { + log.Printf("[DEBUG] Removing tags: %#v", remove) + err := conn.DeleteBucketTagging(&s3.DeleteBucketTaggingRequest{ + Bucket: aws.String(d.Get("bucket").(string)), + }) + if err != nil { + return err + } + } + if len(create) > 0 { + log.Printf("[DEBUG] Creating tags: %#v", create) + tagging := s3.Tagging{ + TagSet: create, + XMLName: xml.Name{ + Space: "http://s3.amazonaws.com/doc/2006-03-01/", + Local: "Tagging", + }, + } + // AWS S3 API requires us to send a base64 encoded md5 hash of the + // content, which we need to build ourselves since aws-sdk-go does not. + b, err := xml.Marshal(tagging) + if err != nil { + return err + } + h := md5.New() + h.Write(b) + base := base64.StdEncoding.EncodeToString(h.Sum(nil)) + + req := &s3.PutBucketTaggingRequest{ + Bucket: aws.String(d.Get("bucket").(string)), + ContentMD5: aws.String(base), + Tagging: &tagging, + } + + err = conn.PutBucketTagging(req) + if err != nil { + return err + } + } + } + + return nil +} + +// diffTagsS3 takes our tags locally and the ones remotely and returns +// the set of tags that must be created, and the set of tags that must +// be destroyed. 
+func diffTagsS3(oldTags, newTags []s3.Tag) ([]s3.Tag, []s3.Tag) { + // First, we're creating everything we have + create := make(map[string]interface{}) + for _, t := range newTags { + create[*t.Key] = *t.Value + } + + // Build the list of what to remove + var remove []s3.Tag + for _, t := range oldTags { + old, ok := create[*t.Key] + if !ok || old != *t.Value { + // Delete it! + remove = append(remove, t) + } + } + + return tagsFromMapS3(create), remove +} + +// tagsFromMapS3 returns the tags for the given map of data. +func tagsFromMapS3(m map[string]interface{}) []s3.Tag { + result := make([]s3.Tag, 0, len(m)) + for k, v := range m { + result = append(result, s3.Tag{ + Key: aws.String(k), + Value: aws.String(v.(string)), + }) + } + + return result +} + +// tagsToMapS3 turns the list of tags into a map. +func tagsToMapS3(ts []s3.Tag) map[string]string { + result := make(map[string]string) + for _, t := range ts { + result[*t.Key] = *t.Value + } + + return result +} + +// getTagSetS3 returns a slice of s3 tags associated with the given s3 bucket. +// Essentially s3.GetBucketTagging, except returns an empty slice instead of an +// error when there are no tags. +func getTagSetS3(s3conn *s3.S3, bucket string) ([]s3.Tag, error) { + request := &s3.GetBucketTaggingRequest{ + Bucket: aws.String(bucket), + } + + response, err := s3conn.GetBucketTagging(request) + if ec2err, ok := err.(aws.APIError); ok && ec2err.Code == "NoSuchTagSet" { + // There is no tag set associated with the bucket. 
+ return []s3.Tag{}, nil + } else if err != nil { + return nil, err + } + + return response.TagSet, nil +} diff --git a/builtin/providers/aws/s3_tags_test.go b/builtin/providers/aws/s3_tags_test.go new file mode 100644 index 000000000000..9b082c6e481b --- /dev/null +++ b/builtin/providers/aws/s3_tags_test.go @@ -0,0 +1,85 @@ +package aws + +import ( + "fmt" + "reflect" + "testing" + + "github.com/hashicorp/aws-sdk-go/gen/s3" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestDiffTagsS3(t *testing.T) { + cases := []struct { + Old, New map[string]interface{} + Create, Remove map[string]string + }{ + // Basic add/remove + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "bar": "baz", + }, + Create: map[string]string{ + "bar": "baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + + // Modify + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "foo": "baz", + }, + Create: map[string]string{ + "foo": "baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + } + + for i, tc := range cases { + c, r := diffTagsS3(tagsFromMapS3(tc.Old), tagsFromMapS3(tc.New)) + cm := tagsToMapS3(c) + rm := tagsToMapS3(r) + if !reflect.DeepEqual(cm, tc.Create) { + t.Fatalf("%d: bad create: %#v", i, cm) + } + if !reflect.DeepEqual(rm, tc.Remove) { + t.Fatalf("%d: bad remove: %#v", i, rm) + } + } +} + +// testAccCheckTags can be used to check the tags on a resource. 
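Editor's note: the diff semantics exercised by `TestDiffTagsS3` can be restated on plain maps — every desired tag is (re)sent in the create set, and any existing tag that is missing from the new set or carries a different value lands in the remove set. A sketch of that rule, with an illustrative `diffTags` name:

```go
package main

import "fmt"

// diffTags mirrors the create/remove logic of diffTagsS3 on plain maps:
// all new tags are created, and old tags that are absent from the new
// set or changed in value are removed.
func diffTags(oldTags, newTags map[string]string) (create, remove map[string]string) {
	create = make(map[string]string)
	for k, v := range newTags {
		create[k] = v
	}
	remove = make(map[string]string)
	for k, v := range oldTags {
		if nv, ok := create[k]; !ok || nv != v {
			remove[k] = v
		}
	}
	return create, remove
}

func main() {
	c, r := diffTags(
		map[string]string{"foo": "bar"},
		map[string]string{"foo": "baz"},
	)
	fmt.Println(c, r) // map[foo:baz] map[foo:bar]
}
```

A modified tag therefore appears in both sets, which is why the "Modify" test case above expects `foo` under create and remove alike.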
+func testAccCheckTagsS3( + ts *[]s3.Tag, key string, value string) resource.TestCheckFunc { + return func(s *terraform.State) error { + m := tagsToMapS3(*ts) + v, ok := m[key] + if value != "" && !ok { + return fmt.Errorf("Missing tag: %s", key) + } else if value == "" && ok { + return fmt.Errorf("Extra tag: %s", key) + } + if value == "" { + return nil + } + + if v != value { + return fmt.Errorf("%s: bad value: %s", key, v) + } + + return nil + } +} diff --git a/builtin/providers/aws/structure.go b/builtin/providers/aws/structure.go index 7d4793d3d9af..03a9bca8c3c6 100644 --- a/builtin/providers/aws/structure.go +++ b/builtin/providers/aws/structure.go @@ -1,13 +1,15 @@ package aws import ( + "fmt" "strings" "github.com/hashicorp/aws-sdk-go/aws" + "github.com/hashicorp/aws-sdk-go/gen/ec2" "github.com/hashicorp/aws-sdk-go/gen/elb" "github.com/hashicorp/aws-sdk-go/gen/rds" + "github.com/hashicorp/aws-sdk-go/gen/route53" "github.com/hashicorp/terraform/helper/schema" - "github.com/mitchellh/goamz/ec2" ) // Takes the result of flatmap.Expand for an array of listeners and @@ -16,7 +18,7 @@ func expandListeners(configured []interface{}) ([]elb.Listener, error) { listeners := make([]elb.Listener, 0, len(configured)) // Loop over our configured listeners and create - // an array of goamz compatabile objects + // an array of aws-sdk-go compatabile objects for _, lRaw := range configured { data := lRaw.(map[string]interface{}) @@ -39,15 +41,18 @@ func expandListeners(configured []interface{}) ([]elb.Listener, error) { // Takes the result of flatmap.Expand for an array of ingress/egress // security group rules and returns EC2 API compatible objects -func expandIPPerms(id string, configured []interface{}) []ec2.IPPerm { - perms := make([]ec2.IPPerm, len(configured)) +func expandIPPerms( + group ec2.SecurityGroup, configured []interface{}) []ec2.IPPermission { + vpc := group.VPCID != nil + + perms := make([]ec2.IPPermission, len(configured)) for i, mRaw := range configured { 
- var perm ec2.IPPerm + var perm ec2.IPPermission m := mRaw.(map[string]interface{}) - perm.FromPort = m["from_port"].(int) - perm.ToPort = m["to_port"].(int) - perm.Protocol = m["protocol"].(string) + perm.FromPort = aws.Integer(m["from_port"].(int)) + perm.ToPort = aws.Integer(m["to_port"].(int)) + perm.IPProtocol = aws.String(m["protocol"].(string)) var groups []string if raw, ok := m["security_groups"]; ok { @@ -57,29 +62,38 @@ func expandIPPerms(id string, configured []interface{}) []ec2.IPPerm { } } if v, ok := m["self"]; ok && v.(bool) { - groups = append(groups, id) + if vpc { + groups = append(groups, *group.GroupID) + } else { + groups = append(groups, *group.GroupName) + } } if len(groups) > 0 { - perm.SourceGroups = make([]ec2.UserSecurityGroup, len(groups)) + perm.UserIDGroupPairs = make([]ec2.UserIDGroupPair, len(groups)) for i, name := range groups { ownerId, id := "", name if items := strings.Split(id, "/"); len(items) > 1 { ownerId, id = items[0], items[1] } - perm.SourceGroups[i] = ec2.UserSecurityGroup{ - Id: id, - OwnerId: ownerId, + perm.UserIDGroupPairs[i] = ec2.UserIDGroupPair{ + GroupID: aws.String(id), + UserID: aws.String(ownerId), + } + if !vpc { + perm.UserIDGroupPairs[i].GroupID = nil + perm.UserIDGroupPairs[i].GroupName = aws.String(id) + perm.UserIDGroupPairs[i].UserID = nil } } } if raw, ok := m["cidr_blocks"]; ok { list := raw.([]interface{}) - perm.SourceIPs = make([]string, len(list)) + perm.IPRanges = make([]ec2.IPRange, len(list)) for i, v := range list { - perm.SourceIPs[i] = v.(string) + perm.IPRanges[i] = ec2.IPRange{aws.String(v.(string))} } } @@ -95,7 +109,7 @@ func expandParameters(configured []interface{}) ([]rds.Parameter, error) { parameters := make([]rds.Parameter, 0, len(configured)) // Loop over our configured parameters and create - // an array of goamz compatabile objects + // an array of aws-sdk-go compatabile objects for _, pRaw := range configured { data := pRaw.(map[string]interface{}) @@ -111,31 +125,6 @@ func 
expandParameters(configured []interface{}) ([]rds.Parameter, error) { return parameters, nil } -// Flattens an array of ipPerms into a list of primitives that -// flatmap.Flatten() can handle -func flattenIPPerms(list []ec2.IPPerm) []map[string]interface{} { - result := make([]map[string]interface{}, 0, len(list)) - - for _, perm := range list { - n := make(map[string]interface{}) - n["from_port"] = perm.FromPort - n["protocol"] = perm.Protocol - n["to_port"] = perm.ToPort - - if len(perm.SourceIPs) > 0 { - n["cidr_blocks"] = perm.SourceIPs - } - - if v := flattenSecurityGroups(perm.SourceGroups); len(v) > 0 { - n["security_groups"] = v - } - - result = append(result, n) - } - - return result -} - // Flattens a health check into something that flatmap.Flatten() // can handle func flattenHealthCheck(check *elb.HealthCheck) []map[string]interface{} { @@ -154,10 +143,10 @@ func flattenHealthCheck(check *elb.HealthCheck) []map[string]interface{} { } // Flattens an array of UserSecurityGroups into a []string -func flattenSecurityGroups(list []ec2.UserSecurityGroup) []string { +func flattenSecurityGroups(list []ec2.UserIDGroupPair) []string { result := make([]string, 0, len(list)) for _, g := range list { - result = append(result, g.Id) + result = append(result, *g.GroupID) } return result } @@ -220,3 +209,73 @@ func expandStringList(configured []interface{}) []string { } return vs } + +//Flattens an array of private ip addresses into a []string, where the elements returned are the IP strings e.g. 
"192.168.0.0" +func flattenNetworkInterfacesPrivateIPAddesses(dtos []ec2.NetworkInterfacePrivateIPAddress) []string { + ips := make([]string, 0, len(dtos)) + for _, v := range dtos { + ip := *v.PrivateIPAddress + ips = append(ips, ip) + } + return ips +} + +//Flattens security group identifiers into a []string, where the elements returned are the GroupIDs +func flattenGroupIdentifiers(dtos []ec2.GroupIdentifier) []string { + ids := make([]string, 0, len(dtos)) + for _, v := range dtos { + group_id := *v.GroupID + ids = append(ids, group_id) + } + return ids +} + +//Expands an array of IPs into a ec2 Private IP Address Spec +func expandPrivateIPAddesses(ips []interface{}) []ec2.PrivateIPAddressSpecification { + dtos := make([]ec2.PrivateIPAddressSpecification, 0, len(ips)) + for i, v := range ips { + new_private_ip := ec2.PrivateIPAddressSpecification{ + PrivateIPAddress: aws.String(v.(string)), + } + + new_private_ip.Primary = aws.Boolean(i == 0) + + dtos = append(dtos, new_private_ip) + } + return dtos +} + +//Flattens network interface attachment into a map[string]interface +func flattenAttachment(a *ec2.NetworkInterfaceAttachment) map[string]interface{} { + att := make(map[string]interface{}) + att["instance"] = *a.InstanceID + att["device_index"] = *a.DeviceIndex + att["attachment_id"] = *a.AttachmentID + return att +} + +func flattenResourceRecords(recs []route53.ResourceRecord) []string { + strs := make([]string, 0, len(recs)) + for _, r := range recs { + if r.Value != nil { + s := strings.Replace(*r.Value, "\"", "", 2) + strs = append(strs, s) + } + } + return strs +} + +func expandResourceRecords(recs []interface{}, typeStr string) []route53.ResourceRecord { + records := make([]route53.ResourceRecord, 0, len(recs)) + for _, r := range recs { + s := r.(string) + switch typeStr { + case "TXT": + str := fmt.Sprintf("\"%s\"", s) + records = append(records, route53.ResourceRecord{Value: aws.String(str)}) + default: + records = append(records, 
route53.ResourceRecord{Value: aws.String(s)}) + } + } + return records +} diff --git a/builtin/providers/aws/structure_sdk.go b/builtin/providers/aws/structure_sdk.go new file mode 100644 index 000000000000..634a7acc6732 --- /dev/null +++ b/builtin/providers/aws/structure_sdk.go @@ -0,0 +1,255 @@ +package aws + +import ( + "strings" + + "github.com/awslabs/aws-sdk-go/aws" + "github.com/awslabs/aws-sdk-go/service/ec2" + "github.com/hashicorp/aws-sdk-go/gen/elb" + "github.com/hashicorp/aws-sdk-go/gen/rds" + "github.com/hashicorp/terraform/helper/schema" +) + +// Takes the result of flatmap.Expand for an array of listeners and +// returns ELB API compatible objects +func expandListenersSDK(configured []interface{}) ([]elb.Listener, error) { + listeners := make([]elb.Listener, 0, len(configured)) + + // Loop over our configured listeners and create + // an array of aws-sdk-go compatabile objects + for _, lRaw := range configured { + data := lRaw.(map[string]interface{}) + + ip := data["instance_port"].(int) + lp := data["lb_port"].(int) + l := elb.Listener{ + InstancePort: &ip, + InstanceProtocol: aws.String(data["instance_protocol"].(string)), + LoadBalancerPort: &lp, + Protocol: aws.String(data["lb_protocol"].(string)), + } + + if v, ok := data["ssl_certificate_id"]; ok { + l.SSLCertificateID = aws.String(v.(string)) + } + + listeners = append(listeners, l) + } + + return listeners, nil +} + +// Takes the result of flatmap.Expand for an array of ingress/egress +// security group rules and returns EC2 API compatible objects +func expandIPPermsSDK( + group *ec2.SecurityGroup, configured []interface{}) []*ec2.IPPermission { + vpc := group.VPCID != nil + + perms := make([]*ec2.IPPermission, len(configured)) + for i, mRaw := range configured { + var perm ec2.IPPermission + m := mRaw.(map[string]interface{}) + + perm.FromPort = aws.Long(int64(m["from_port"].(int))) + perm.ToPort = aws.Long(int64(m["to_port"].(int))) + perm.IPProtocol = aws.String(m["protocol"].(string)) + 
+ var groups []string + if raw, ok := m["security_groups"]; ok { + list := raw.(*schema.Set).List() + for _, v := range list { + groups = append(groups, v.(string)) + } + } + if v, ok := m["self"]; ok && v.(bool) { + if vpc { + groups = append(groups, *group.GroupID) + } else { + groups = append(groups, *group.GroupName) + } + } + + if len(groups) > 0 { + perm.UserIDGroupPairs = make([]*ec2.UserIDGroupPair, len(groups)) + for i, name := range groups { + ownerId, id := "", name + if items := strings.Split(id, "/"); len(items) > 1 { + ownerId, id = items[0], items[1] + } + + perm.UserIDGroupPairs[i] = &ec2.UserIDGroupPair{ + GroupID: aws.String(id), + UserID: aws.String(ownerId), + } + if !vpc { + perm.UserIDGroupPairs[i].GroupID = nil + perm.UserIDGroupPairs[i].GroupName = aws.String(id) + perm.UserIDGroupPairs[i].UserID = nil + } + } + } + + if raw, ok := m["cidr_blocks"]; ok { + list := raw.([]interface{}) + perm.IPRanges = make([]*ec2.IPRange, len(list)) + for i, v := range list { + perm.IPRanges[i] = &ec2.IPRange{CIDRIP: aws.String(v.(string))} + } + } + + perms[i] = &perm + } + + return perms +} + +// Takes the result of flatmap.Expand for an array of parameters and +// returns Parameter API compatible objects +func expandParametersSDK(configured []interface{}) ([]rds.Parameter, error) { + parameters := make([]rds.Parameter, 0, len(configured)) + + // Loop over our configured parameters and create + // an array of aws-sdk-go compatabile objects + for _, pRaw := range configured { + data := pRaw.(map[string]interface{}) + + p := rds.Parameter{ + ApplyMethod: aws.String(data["apply_method"].(string)), + ParameterName: aws.String(data["name"].(string)), + ParameterValue: aws.String(data["value"].(string)), + } + + parameters = append(parameters, p) + } + + return parameters, nil +} + +// Flattens a health check into something that flatmap.Flatten() +// can handle +func flattenHealthCheckSDK(check *elb.HealthCheck) []map[string]interface{} { + result := 
make([]map[string]interface{}, 0, 1) + + chk := make(map[string]interface{}) + chk["unhealthy_threshold"] = *check.UnhealthyThreshold + chk["healthy_threshold"] = *check.HealthyThreshold + chk["target"] = *check.Target + chk["timeout"] = *check.Timeout + chk["interval"] = *check.Interval + + result = append(result, chk) + + return result +} + +// Flattens an array of UserSecurityGroups into a []string +func flattenSecurityGroupsSDK(list []*ec2.UserIDGroupPair) []string { + result := make([]string, 0, len(list)) + for _, g := range list { + result = append(result, *g.GroupID) + } + return result +} + +// Flattens an array of Instances into a []string +func flattenInstancesSDK(list []elb.Instance) []string { + result := make([]string, 0, len(list)) + for _, i := range list { + result = append(result, *i.InstanceID) + } + return result +} + +// Expands an array of String Instance IDs into a []Instances +func expandInstanceStringSDK(list []interface{}) []elb.Instance { + result := make([]elb.Instance, 0, len(list)) + for _, i := range list { + result = append(result, elb.Instance{aws.String(i.(string))}) + } + return result +} + +// Flattens an array of Listeners into a []map[string]interface{} +func flattenListenersSDK(list []elb.ListenerDescription) []map[string]interface{} { + result := make([]map[string]interface{}, 0, len(list)) + for _, i := range list { + l := map[string]interface{}{ + "instance_port": *i.Listener.InstancePort, + "instance_protocol": strings.ToLower(*i.Listener.InstanceProtocol), + "lb_port": *i.Listener.LoadBalancerPort, + "lb_protocol": strings.ToLower(*i.Listener.Protocol), + } + // SSLCertificateID is optional, and may be nil + if i.Listener.SSLCertificateID != nil { + l["ssl_certificate_id"] = *i.Listener.SSLCertificateID + } + result = append(result, l) + } + return result +} + +// Flattens an array of Parameters into a []map[string]interface{} +func flattenParametersSDK(list []rds.Parameter) []map[string]interface{} { + result := 
make([]map[string]interface{}, 0, len(list)) + for _, i := range list { + result = append(result, map[string]interface{}{ + "name": strings.ToLower(*i.ParameterName), + "value": strings.ToLower(*i.ParameterValue), + }) + } + return result +} + +// Takes the result of flatmap.Expand for an array of strings +// and returns a []string +func expandStringListSDK(configured []interface{}) []*string { + vs := make([]*string, 0, len(configured)) + for _, v := range configured { + vs = append(vs, aws.String(v.(string))) + } + return vs +} + +//Flattens an array of private ip addresses into a []string, where the elements returned are the IP strings e.g. "192.168.0.0" +func flattenNetworkInterfacesPrivateIPAddessesSDK(dtos []*ec2.NetworkInterfacePrivateIPAddress) []string { + ips := make([]string, 0, len(dtos)) + for _, v := range dtos { + ip := *v.PrivateIPAddress + ips = append(ips, ip) + } + return ips +} + +//Flattens security group identifiers into a []string, where the elements returned are the GroupIDs +func flattenGroupIdentifiersSDK(dtos []*ec2.GroupIdentifier) []string { + ids := make([]string, 0, len(dtos)) + for _, v := range dtos { + group_id := *v.GroupID + ids = append(ids, group_id) + } + return ids +} + +//Expands an array of IPs into a ec2 Private IP Address Spec +func expandPrivateIPAddessesSDK(ips []interface{}) []*ec2.PrivateIPAddressSpecification { + dtos := make([]*ec2.PrivateIPAddressSpecification, 0, len(ips)) + for i, v := range ips { + new_private_ip := &ec2.PrivateIPAddressSpecification{ + PrivateIPAddress: aws.String(v.(string)), + } + + new_private_ip.Primary = aws.Boolean(i == 0) + + dtos = append(dtos, new_private_ip) + } + return dtos +} + +//Flattens network interface attachment into a map[string]interface +func flattenAttachmentSDK(a *ec2.NetworkInterfaceAttachment) map[string]interface{} { + att := make(map[string]interface{}) + att["instance"] = *a.InstanceID + att["device_index"] = *a.DeviceIndex + att["attachment_id"] = *a.AttachmentID + 
return att +} diff --git a/builtin/providers/aws/structure_sdk_test.go b/builtin/providers/aws/structure_sdk_test.go new file mode 100644 index 000000000000..db4b76c6f140 --- /dev/null +++ b/builtin/providers/aws/structure_sdk_test.go @@ -0,0 +1,444 @@ +package aws + +import ( + "reflect" + "testing" + + "github.com/awslabs/aws-sdk-go/service/ec2" + "github.com/hashicorp/aws-sdk-go/aws" + "github.com/hashicorp/aws-sdk-go/gen/elb" + "github.com/hashicorp/aws-sdk-go/gen/rds" + "github.com/hashicorp/terraform/flatmap" + "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/schema" +) + +// Returns test configuration +func testConfSDK() map[string]string { + return map[string]string{ + "listener.#": "1", + "listener.0.lb_port": "80", + "listener.0.lb_protocol": "http", + "listener.0.instance_port": "8000", + "listener.0.instance_protocol": "http", + "availability_zones.#": "2", + "availability_zones.0": "us-east-1a", + "availability_zones.1": "us-east-1b", + "ingress.#": "1", + "ingress.0.protocol": "icmp", + "ingress.0.from_port": "1", + "ingress.0.to_port": "-1", + "ingress.0.cidr_blocks.#": "1", + "ingress.0.cidr_blocks.0": "0.0.0.0/0", + "ingress.0.security_groups.#": "2", + "ingress.0.security_groups.0": "sg-11111", + "ingress.0.security_groups.1": "foo/sg-22222", + } +} + +func TestExpandIPPermsSDK(t *testing.T) { + hash := func(v interface{}) int { + return hashcode.String(v.(string)) + } + + expanded := []interface{}{ + map[string]interface{}{ + "protocol": "icmp", + "from_port": 1, + "to_port": -1, + "cidr_blocks": []interface{}{"0.0.0.0/0"}, + "security_groups": schema.NewSet(hash, []interface{}{ + "sg-11111", + "foo/sg-22222", + }), + }, + map[string]interface{}{ + "protocol": "icmp", + "from_port": 1, + "to_port": -1, + "self": true, + }, + } + group := &ec2.SecurityGroup{ + GroupID: aws.String("foo"), + VPCID: aws.String("bar"), + } + perms := expandIPPermsSDK(group, expanded) + + expected := []ec2.IPPermission{ + 
ec2.IPPermission{ + IPProtocol: aws.String("icmp"), + FromPort: aws.Long(1), + ToPort: aws.Long(-1), + IPRanges: []*ec2.IPRange{&ec2.IPRange{CIDRIP: aws.String("0.0.0.0/0")}}, + UserIDGroupPairs: []*ec2.UserIDGroupPair{ + &ec2.UserIDGroupPair{ + UserID: aws.String("foo"), + GroupID: aws.String("sg-22222"), + }, + &ec2.UserIDGroupPair{ + GroupID: aws.String("sg-22222"), + }, + }, + }, + ec2.IPPermission{ + IPProtocol: aws.String("icmp"), + FromPort: aws.Long(1), + ToPort: aws.Long(-1), + UserIDGroupPairs: []*ec2.UserIDGroupPair{ + &ec2.UserIDGroupPair{ + UserID: aws.String("foo"), + }, + }, + }, + } + + exp := expected[0] + perm := perms[0] + + if *exp.FromPort != *perm.FromPort { + t.Fatalf( + "Got:\n\n%#v\n\nExpected:\n\n%#v\n", + *perm.FromPort, + *exp.FromPort) + } + + if *exp.IPRanges[0].CIDRIP != *perm.IPRanges[0].CIDRIP { + t.Fatalf( + "Got:\n\n%#v\n\nExpected:\n\n%#v\n", + *perm.IPRanges[0].CIDRIP, + *exp.IPRanges[0].CIDRIP) + } + + if *exp.UserIDGroupPairs[0].UserID != *perm.UserIDGroupPairs[0].UserID { + t.Fatalf( + "Got:\n\n%#v\n\nExpected:\n\n%#v\n", + *perm.UserIDGroupPairs[0].UserID, + *exp.UserIDGroupPairs[0].UserID) + } + +} + +func TestExpandIPPerms_nonVPCSDK(t *testing.T) { + hash := func(v interface{}) int { + return hashcode.String(v.(string)) + } + + expanded := []interface{}{ + map[string]interface{}{ + "protocol": "icmp", + "from_port": 1, + "to_port": -1, + "cidr_blocks": []interface{}{"0.0.0.0/0"}, + "security_groups": schema.NewSet(hash, []interface{}{ + "sg-11111", + "foo/sg-22222", + }), + }, + map[string]interface{}{ + "protocol": "icmp", + "from_port": 1, + "to_port": -1, + "self": true, + }, + } + group := &ec2.SecurityGroup{ + GroupName: aws.String("foo"), + } + perms := expandIPPermsSDK(group, expanded) + + expected := []ec2.IPPermission{ + ec2.IPPermission{ + IPProtocol: aws.String("icmp"), + FromPort: aws.Long(1), + ToPort: aws.Long(-1), + IPRanges: []*ec2.IPRange{&ec2.IPRange{CIDRIP: aws.String("0.0.0.0/0")}}, + UserIDGroupPairs: 
[]*ec2.UserIDGroupPair{ + &ec2.UserIDGroupPair{ + GroupName: aws.String("sg-22222"), + }, + &ec2.UserIDGroupPair{ + GroupName: aws.String("sg-22222"), + }, + }, + }, + ec2.IPPermission{ + IPProtocol: aws.String("icmp"), + FromPort: aws.Long(1), + ToPort: aws.Long(-1), + UserIDGroupPairs: []*ec2.UserIDGroupPair{ + &ec2.UserIDGroupPair{ + GroupName: aws.String("foo"), + }, + }, + }, + } + + exp := expected[0] + perm := perms[0] + + if *exp.FromPort != *perm.FromPort { + t.Fatalf( + "Got:\n\n%#v\n\nExpected:\n\n%#v\n", + *perm.FromPort, + *exp.FromPort) + } + + if *exp.IPRanges[0].CIDRIP != *perm.IPRanges[0].CIDRIP { + t.Fatalf( + "Got:\n\n%#v\n\nExpected:\n\n%#v\n", + *perm.IPRanges[0].CIDRIP, + *exp.IPRanges[0].CIDRIP) + } +} + +func TestExpandListenersSDK(t *testing.T) { + expanded := []interface{}{ + map[string]interface{}{ + "instance_port": 8000, + "lb_port": 80, + "instance_protocol": "http", + "lb_protocol": "http", + }, + } + listeners, err := expandListenersSDK(expanded) + if err != nil { + t.Fatalf("bad: %#v", err) + } + + expected := elb.Listener{ + InstancePort: aws.Integer(8000), + LoadBalancerPort: aws.Integer(80), + InstanceProtocol: aws.String("http"), + Protocol: aws.String("http"), + } + + if !reflect.DeepEqual(listeners[0], expected) { + t.Fatalf( + "Got:\n\n%#v\n\nExpected:\n\n%#v\n", + listeners[0], + expected) + } + +} + +func TestFlattenHealthCheckSDK(t *testing.T) { + cases := []struct { + Input elb.HealthCheck + Output []map[string]interface{} + }{ + { + Input: elb.HealthCheck{ + UnhealthyThreshold: aws.Integer(10), + HealthyThreshold: aws.Integer(10), + Target: aws.String("HTTP:80/"), + Timeout: aws.Integer(30), + Interval: aws.Integer(30), + }, + Output: []map[string]interface{}{ + map[string]interface{}{ + "unhealthy_threshold": 10, + "healthy_threshold": 10, + "target": "HTTP:80/", + "timeout": 30, + "interval": 30, + }, + }, + }, + } + + for _, tc := range cases { + output := flattenHealthCheckSDK(&tc.Input) + if 
!reflect.DeepEqual(output, tc.Output) { + t.Fatalf("Got:\n\n%#v\n\nExpected:\n\n%#v", output, tc.Output) + } + } +} + +func TestExpandStringListSDK(t *testing.T) { + expanded := flatmap.Expand(testConfSDK(), "availability_zones").([]interface{}) + stringList := expandStringList(expanded) + expected := []string{ + "us-east-1a", + "us-east-1b", + } + + if !reflect.DeepEqual(stringList, expected) { + t.Fatalf( + "Got:\n\n%#v\n\nExpected:\n\n%#v\n", + stringList, + expected) + } + +} + +func TestExpandParametersSDK(t *testing.T) { + expanded := []interface{}{ + map[string]interface{}{ + "name": "character_set_client", + "value": "utf8", + "apply_method": "immediate", + }, + } + parameters, err := expandParametersSDK(expanded) + if err != nil { + t.Fatalf("bad: %#v", err) + } + + expected := rds.Parameter{ + ParameterName: aws.String("character_set_client"), + ParameterValue: aws.String("utf8"), + ApplyMethod: aws.String("immediate"), + } + + if !reflect.DeepEqual(parameters[0], expected) { + t.Fatalf( + "Got:\n\n%#v\n\nExpected:\n\n%#v\n", + parameters[0], + expected) + } +} + +func TestFlattenParametersSDK(t *testing.T) { + cases := []struct { + Input []rds.Parameter + Output []map[string]interface{} + }{ + { + Input: []rds.Parameter{ + rds.Parameter{ + ParameterName: aws.String("character_set_client"), + ParameterValue: aws.String("utf8"), + }, + }, + Output: []map[string]interface{}{ + map[string]interface{}{ + "name": "character_set_client", + "value": "utf8", + }, + }, + }, + } + + for _, tc := range cases { + output := flattenParametersSDK(tc.Input) + if !reflect.DeepEqual(output, tc.Output) { + t.Fatalf("Got:\n\n%#v\n\nExpected:\n\n%#v", output, tc.Output) + } + } +} + +func TestExpandInstanceStringSDK(t *testing.T) { + + expected := []elb.Instance{ + elb.Instance{aws.String("test-one")}, + elb.Instance{aws.String("test-two")}, + } + + ids := []interface{}{ + "test-one", + "test-two", + } + + expanded := expandInstanceStringSDK(ids) + + if 
!reflect.DeepEqual(expanded, expected) { + t.Fatalf("Expand Instance String output did not match.\nGot:\n%#v\n\nexpected:\n%#v", expanded, expected) + } +} + +func TestFlattenNetworkInterfacesPrivateIPAddessesSDK(t *testing.T) { + expanded := []*ec2.NetworkInterfacePrivateIPAddress{ + &ec2.NetworkInterfacePrivateIPAddress{PrivateIPAddress: aws.String("192.168.0.1")}, + &ec2.NetworkInterfacePrivateIPAddress{PrivateIPAddress: aws.String("192.168.0.2")}, + } + + result := flattenNetworkInterfacesPrivateIPAddessesSDK(expanded) + + if result == nil { + t.Fatal("result was nil") + } + + if len(result) != 2 { + t.Fatalf("expected result had %d elements, but got %d", 2, len(result)) + } + + if result[0] != "192.168.0.1" { + t.Fatalf("expected ip to be 192.168.0.1, but was %s", result[0]) + } + + if result[1] != "192.168.0.2" { + t.Fatalf("expected ip to be 192.168.0.2, but was %s", result[1]) + } +} + +func TestFlattenGroupIdentifiersSDK(t *testing.T) { + expanded := []*ec2.GroupIdentifier{ + &ec2.GroupIdentifier{GroupID: aws.String("sg-001")}, + &ec2.GroupIdentifier{GroupID: aws.String("sg-002")}, + } + + result := flattenGroupIdentifiersSDK(expanded) + + if len(result) != 2 { + t.Fatalf("expected result had %d elements, but got %d", 2, len(result)) + } + + if result[0] != "sg-001" { + t.Fatalf("expected id to be sg-001, but was %s", result[0]) + } + + if result[1] != "sg-002" { + t.Fatalf("expected id to be sg-002, but was %s", result[1]) + } +} + +func TestExpandPrivateIPAddessesSDK(t *testing.T) { + + ip1 := "192.168.0.1" + ip2 := "192.168.0.2" + flattened := []interface{}{ + ip1, + ip2, + } + + result := expandPrivateIPAddessesSDK(flattened) + + if len(result) != 2 { + t.Fatalf("expected result had %d elements, but got %d", 2, len(result)) + } + + if *result[0].PrivateIPAddress != "192.168.0.1" || !*result[0].Primary { + t.Fatalf("expected ip to be 192.168.0.1 and Primary, but got %v, %t", *result[0].PrivateIPAddress, *result[0].Primary) + } + + if 
*result[1].PrivateIPAddress != "192.168.0.2" || *result[1].Primary { + t.Fatalf("expected ip to be 192.168.0.2 and not Primary, but got %v, %t", *result[1].PrivateIPAddress, *result[1].Primary) + } +} + +func TestFlattenAttachmentSDK(t *testing.T) { + expanded := &ec2.NetworkInterfaceAttachment{ + InstanceID: aws.String("i-00001"), + DeviceIndex: aws.Long(1), + AttachmentID: aws.String("at-002"), + } + + result := flattenAttachmentSDK(expanded) + + if result == nil { + t.Fatal("expected result to have value, but got nil") + } + + if result["instance"] != "i-00001" { + t.Fatalf("expected instance to be i-00001, but got %s", result["instance"]) + } + + if result["device_index"] != int64(1) { + t.Fatalf("expected device_index to be 1, but got %d", result["device_index"]) + } + + if result["attachment_id"] != "at-002" { + t.Fatalf("expected attachment_id to be at-002, but got %s", result["attachment_id"]) + } +} diff --git a/builtin/providers/aws/structure_test.go b/builtin/providers/aws/structure_test.go index f3a8bcc72c61..ca27f6ddfe79 100644 --- a/builtin/providers/aws/structure_test.go +++ b/builtin/providers/aws/structure_test.go @@ -5,12 +5,13 @@ import ( "testing" "github.com/hashicorp/aws-sdk-go/aws" + ec2 "github.com/hashicorp/aws-sdk-go/gen/ec2" "github.com/hashicorp/aws-sdk-go/gen/elb" "github.com/hashicorp/aws-sdk-go/gen/rds" + "github.com/hashicorp/aws-sdk-go/gen/route53" "github.com/hashicorp/terraform/flatmap" "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/schema" - "github.com/mitchellh/goamz/ec2" ) // Returns test configuration @@ -59,121 +60,136 @@ func TestExpandIPPerms(t *testing.T) { "self": true, }, } - perms := expandIPPerms("foo", expanded) - - expected := []ec2.IPPerm{ - ec2.IPPerm{ - Protocol: "icmp", - FromPort: 1, - ToPort: -1, - SourceIPs: []string{"0.0.0.0/0"}, - SourceGroups: []ec2.UserSecurityGroup{ - ec2.UserSecurityGroup{ - OwnerId: "foo", - Id: "sg-22222", + group := ec2.SecurityGroup{ + 
GroupID: aws.String("foo"), + VPCID: aws.String("bar"), + } + perms := expandIPPerms(group, expanded) + + expected := []ec2.IPPermission{ + ec2.IPPermission{ + IPProtocol: aws.String("icmp"), + FromPort: aws.Integer(1), + ToPort: aws.Integer(-1), + IPRanges: []ec2.IPRange{ec2.IPRange{aws.String("0.0.0.0/0")}}, + UserIDGroupPairs: []ec2.UserIDGroupPair{ + ec2.UserIDGroupPair{ + UserID: aws.String("foo"), + GroupID: aws.String("sg-22222"), }, - ec2.UserSecurityGroup{ - Id: "sg-11111", + ec2.UserIDGroupPair{ + GroupID: aws.String("sg-22222"), }, }, }, - ec2.IPPerm{ - Protocol: "icmp", - FromPort: 1, - ToPort: -1, - SourceGroups: []ec2.UserSecurityGroup{ - ec2.UserSecurityGroup{ - Id: "foo", + ec2.IPPermission{ + IPProtocol: aws.String("icmp"), + FromPort: aws.Integer(1), + ToPort: aws.Integer(-1), + UserIDGroupPairs: []ec2.UserIDGroupPair{ + ec2.UserIDGroupPair{ + UserID: aws.String("foo"), }, }, }, } - if !reflect.DeepEqual(perms, expected) { + exp := expected[0] + perm := perms[0] + + if *exp.FromPort != *perm.FromPort { t.Fatalf( "Got:\n\n%#v\n\nExpected:\n\n%#v\n", - perms[0], - expected) + *perm.FromPort, + *exp.FromPort) + } + + if *exp.IPRanges[0].CIDRIP != *perm.IPRanges[0].CIDRIP { + t.Fatalf( + "Got:\n\n%#v\n\nExpected:\n\n%#v\n", + *perm.IPRanges[0].CIDRIP, + *exp.IPRanges[0].CIDRIP) + } + + if *exp.UserIDGroupPairs[0].UserID != *perm.UserIDGroupPairs[0].UserID { + t.Fatalf( + "Got:\n\n%#v\n\nExpected:\n\n%#v\n", + *perm.UserIDGroupPairs[0].UserID, + *exp.UserIDGroupPairs[0].UserID) } } -func TestFlattenIPPerms(t *testing.T) { - cases := []struct { - Input []ec2.IPPerm - Output []map[string]interface{} - }{ - { - Input: []ec2.IPPerm{ - ec2.IPPerm{ - Protocol: "icmp", - FromPort: 1, - ToPort: -1, - SourceIPs: []string{"0.0.0.0/0"}, - SourceGroups: []ec2.UserSecurityGroup{ - ec2.UserSecurityGroup{ - Id: "sg-11111", - }, - }, - }, - }, +func TestExpandIPPerms_nonVPC(t *testing.T) { + hash := func(v interface{}) int { + return hashcode.String(v.(string)) + } - 
Output: []map[string]interface{}{ - map[string]interface{}{ - "protocol": "icmp", - "from_port": 1, - "to_port": -1, - "cidr_blocks": []string{"0.0.0.0/0"}, - "security_groups": []string{"sg-11111"}, - }, - }, + expanded := []interface{}{ + map[string]interface{}{ + "protocol": "icmp", + "from_port": 1, + "to_port": -1, + "cidr_blocks": []interface{}{"0.0.0.0/0"}, + "security_groups": schema.NewSet(hash, []interface{}{ + "sg-11111", + "foo/sg-22222", + }), }, - - { - Input: []ec2.IPPerm{ - ec2.IPPerm{ - Protocol: "icmp", - FromPort: 1, - ToPort: -1, - SourceIPs: []string{"0.0.0.0/0"}, - SourceGroups: nil, + map[string]interface{}{ + "protocol": "icmp", + "from_port": 1, + "to_port": -1, + "self": true, + }, + } + group := ec2.SecurityGroup{ + GroupName: aws.String("foo"), + } + perms := expandIPPerms(group, expanded) + + expected := []ec2.IPPermission{ + ec2.IPPermission{ + IPProtocol: aws.String("icmp"), + FromPort: aws.Integer(1), + ToPort: aws.Integer(-1), + IPRanges: []ec2.IPRange{ec2.IPRange{aws.String("0.0.0.0/0")}}, + UserIDGroupPairs: []ec2.UserIDGroupPair{ + ec2.UserIDGroupPair{ + GroupName: aws.String("sg-22222"), }, - }, - - Output: []map[string]interface{}{ - map[string]interface{}{ - "protocol": "icmp", - "from_port": 1, - "to_port": -1, - "cidr_blocks": []string{"0.0.0.0/0"}, + ec2.UserIDGroupPair{ + GroupName: aws.String("sg-22222"), }, }, }, - { - Input: []ec2.IPPerm{ - ec2.IPPerm{ - Protocol: "icmp", - FromPort: 1, - ToPort: -1, - SourceIPs: nil, - }, - }, - - Output: []map[string]interface{}{ - map[string]interface{}{ - "protocol": "icmp", - "from_port": 1, - "to_port": -1, + ec2.IPPermission{ + IPProtocol: aws.String("icmp"), + FromPort: aws.Integer(1), + ToPort: aws.Integer(-1), + UserIDGroupPairs: []ec2.UserIDGroupPair{ + ec2.UserIDGroupPair{ + GroupName: aws.String("foo"), }, }, }, } - for _, tc := range cases { - output := flattenIPPerms(tc.Input) - if !reflect.DeepEqual(output, tc.Output) { - t.Fatalf("Input:\n\n%#v\n\nOutput:\n\n%#v", 
tc.Input, output) - } + exp := expected[0] + perm := perms[0] + + if *exp.FromPort != *perm.FromPort { + t.Fatalf( + "Got:\n\n%#v\n\nExpected:\n\n%#v\n", + *perm.FromPort, + *exp.FromPort) + } + + if *exp.IPRanges[0].CIDRIP != *perm.IPRanges[0].CIDRIP { + t.Fatalf( + "Got:\n\n%#v\n\nExpected:\n\n%#v\n", + *perm.IPRanges[0].CIDRIP, + *exp.IPRanges[0].CIDRIP) } } @@ -331,3 +347,120 @@ func TestExpandInstanceString(t *testing.T) { t.Fatalf("Expand Instance String output did not match.\nGot:\n%#v\n\nexpected:\n%#v", expanded, expected) } } + +func TestFlattenNetworkInterfacesPrivateIPAddesses(t *testing.T) { + expanded := []ec2.NetworkInterfacePrivateIPAddress{ + ec2.NetworkInterfacePrivateIPAddress{PrivateIPAddress: aws.String("192.168.0.1")}, + ec2.NetworkInterfacePrivateIPAddress{PrivateIPAddress: aws.String("192.168.0.2")}, + } + + result := flattenNetworkInterfacesPrivateIPAddesses(expanded) + + if result == nil { + t.Fatal("result was nil") + } + + if len(result) != 2 { + t.Fatalf("expected result had %d elements, but got %d", 2, len(result)) + } + + if result[0] != "192.168.0.1" { + t.Fatalf("expected ip to be 192.168.0.1, but was %s", result[0]) + } + + if result[1] != "192.168.0.2" { + t.Fatalf("expected ip to be 192.168.0.2, but was %s", result[1]) + } +} + +func TestFlattenGroupIdentifiers(t *testing.T) { + expanded := []ec2.GroupIdentifier{ + ec2.GroupIdentifier{GroupID: aws.String("sg-001")}, + ec2.GroupIdentifier{GroupID: aws.String("sg-002")}, + } + + result := flattenGroupIdentifiers(expanded) + + if len(result) != 2 { + t.Fatalf("expected result had %d elements, but got %d", 2, len(result)) + } + + if result[0] != "sg-001" { + t.Fatalf("expected id to be sg-001, but was %s", result[0]) + } + + if result[1] != "sg-002" { + t.Fatalf("expected id to be sg-002, but was %s", result[1]) + } +} + +func TestExpandPrivateIPAddesses(t *testing.T) { + + ip1 := "192.168.0.1" + ip2 := "192.168.0.2" + flattened := []interface{}{ + ip1, + ip2, + } + + result := 
expandPrivateIPAddesses(flattened) + + if len(result) != 2 { + t.Fatalf("expected result had %d elements, but got %d", 2, len(result)) + } + + if *result[0].PrivateIPAddress != "192.168.0.1" || !*result[0].Primary { + t.Fatalf("expected ip to be 192.168.0.1 and Primary, but got %v, %t", *result[0].PrivateIPAddress, *result[0].Primary) + } + + if *result[1].PrivateIPAddress != "192.168.0.2" || *result[1].Primary { + t.Fatalf("expected ip to be 192.168.0.2 and not Primary, but got %v, %t", *result[1].PrivateIPAddress, *result[1].Primary) + } +} + +func TestFlattenAttachment(t *testing.T) { + expanded := &ec2.NetworkInterfaceAttachment{ + InstanceID: aws.String("i-00001"), + DeviceIndex: aws.Integer(1), + AttachmentID: aws.String("at-002"), + } + + result := flattenAttachment(expanded) + + if result == nil { + t.Fatal("expected result to have value, but got nil") + } + + if result["instance"] != "i-00001" { + t.Fatalf("expected instance to be i-00001, but got %s", result["instance"]) + } + + if result["device_index"] != 1 { + t.Fatalf("expected device_index to be 1, but got %d", result["device_index"]) + } + + if result["attachment_id"] != "at-002" { + t.Fatalf("expected attachment_id to be at-002, but got %s", result["attachment_id"]) + } +} + +func TestFlattenResourceRecords(t *testing.T) { + expanded := []route53.ResourceRecord{ + route53.ResourceRecord{ + Value: aws.String("127.0.0.1"), + }, + route53.ResourceRecord{ + Value: aws.String("127.0.0.3"), + }, + } + + result := flattenResourceRecords(expanded) + + if result == nil { + t.Fatal("expected result to have value, but got nil") + } + + if len(result) != 2 { + t.Fatalf("expected result to have 2 elements, but got %d", len(result)) + } +} diff --git a/builtin/providers/aws/tags.go b/builtin/providers/aws/tags.go index b45875c59a4f..1c64b18b4d27 100644 --- a/builtin/providers/aws/tags.go +++ b/builtin/providers/aws/tags.go @@ -3,11 +3,13 @@ package aws import ( "log" + "github.com/hashicorp/aws-sdk-go/aws" + 
"github.com/hashicorp/aws-sdk-go/gen/ec2" "github.com/hashicorp/terraform/helper/schema" - "github.com/mitchellh/goamz/ec2" ) // tagsSchema returns the schema to use for tags. +// func tagsSchema() *schema.Schema { return &schema.Schema{ Type: schema.TypeMap, @@ -27,13 +29,21 @@ func setTags(conn *ec2.EC2, d *schema.ResourceData) error { // Set tags if len(remove) > 0 { log.Printf("[DEBUG] Removing tags: %#v", remove) - if _, err := conn.DeleteTags([]string{d.Id()}, remove); err != nil { + err := conn.DeleteTags(&ec2.DeleteTagsRequest{ + Resources: []string{d.Id()}, + Tags: remove, + }) + if err != nil { return err } } if len(create) > 0 { log.Printf("[DEBUG] Creating tags: %#v", create) - if _, err := conn.CreateTags([]string{d.Id()}, create); err != nil { + err := conn.CreateTags(&ec2.CreateTagsRequest{ + Resources: []string{d.Id()}, + Tags: create, + }) + if err != nil { return err } } @@ -49,14 +59,14 @@ func diffTags(oldTags, newTags []ec2.Tag) ([]ec2.Tag, []ec2.Tag) { // First, we're creating everything we have create := make(map[string]interface{}) for _, t := range newTags { - create[t.Key] = t.Value + create[*t.Key] = *t.Value } // Build the list of what to remove var remove []ec2.Tag for _, t := range oldTags { - old, ok := create[t.Key] - if !ok || old != t.Value { + old, ok := create[*t.Key] + if !ok || old != *t.Value { // Delete it! 
remove = append(remove, t) } @@ -70,8 +80,8 @@ func tagsFromMap(m map[string]interface{}) []ec2.Tag { result := make([]ec2.Tag, 0, len(m)) for k, v := range m { result = append(result, ec2.Tag{ - Key: k, - Value: v.(string), + Key: aws.String(k), + Value: aws.String(v.(string)), }) } @@ -82,7 +92,7 @@ func tagsFromMap(m map[string]interface{}) []ec2.Tag { func tagsToMap(ts []ec2.Tag) map[string]string { result := make(map[string]string) for _, t := range ts { - result[t.Key] = t.Value + result[*t.Key] = *t.Value } return result diff --git a/builtin/providers/aws/tagsELB.go b/builtin/providers/aws/tagsELB.go new file mode 100644 index 000000000000..ad5e0752e2f7 --- /dev/null +++ b/builtin/providers/aws/tagsELB.go @@ -0,0 +1,94 @@ +package aws + +import ( + "log" + + "github.com/hashicorp/aws-sdk-go/aws" + "github.com/hashicorp/aws-sdk-go/gen/elb" + "github.com/hashicorp/terraform/helper/schema" +) + +// setTags is a helper to set the tags for a resource. It expects the +// tags field to be named "tags" +func setTagsELB(conn *elb.ELB, d *schema.ResourceData) error { + if d.HasChange("tags") { + oraw, nraw := d.GetChange("tags") + o := oraw.(map[string]interface{}) + n := nraw.(map[string]interface{}) + create, remove := diffTagsELB(tagsFromMapELB(o), tagsFromMapELB(n)) + + // Set tags + if len(remove) > 0 { + log.Printf("[DEBUG] Removing tags: %#v", remove) + k := make([]elb.TagKeyOnly, 0, len(remove)) + for _, t := range remove { + k = append(k, elb.TagKeyOnly{Key: t.Key}) + } + _, err := conn.RemoveTags(&elb.RemoveTagsInput{ + LoadBalancerNames: []string{d.Get("name").(string)}, + Tags: k, + }) + if err != nil { + return err + } + } + if len(create) > 0 { + log.Printf("[DEBUG] Creating tags: %#v", create) + _, err := conn.AddTags(&elb.AddTagsInput{ + LoadBalancerNames: []string{d.Get("name").(string)}, + Tags: create, + }) + if err != nil { + return err + } + } + } + + return nil +} + +// diffTags takes our tags locally and the ones remotely and returns +// the set 
of tags that must be created, and the set of tags that must +// be destroyed. +func diffTagsELB(oldTags, newTags []elb.Tag) ([]elb.Tag, []elb.Tag) { + // First, we're creating everything we have + create := make(map[string]interface{}) + for _, t := range newTags { + create[*t.Key] = *t.Value + } + + // Build the list of what to remove + var remove []elb.Tag + for _, t := range oldTags { + old, ok := create[*t.Key] + if !ok || old != *t.Value { + // Delete it! + remove = append(remove, t) + } + } + + return tagsFromMapELB(create), remove +} + +// tagsFromMap returns the tags for the given map of data. +func tagsFromMapELB(m map[string]interface{}) []elb.Tag { + result := make([]elb.Tag, 0, len(m)) + for k, v := range m { + result = append(result, elb.Tag{ + Key: aws.String(k), + Value: aws.String(v.(string)), + }) + } + + return result +} + +// tagsToMap turns the list of tags into a map. +func tagsToMapELB(ts []elb.Tag) map[string]string { + result := make(map[string]string) + for _, t := range ts { + result[*t.Key] = *t.Value + } + + return result +} diff --git a/builtin/providers/aws/tagsELB_test.go b/builtin/providers/aws/tagsELB_test.go new file mode 100644 index 000000000000..79021b4dda1b --- /dev/null +++ b/builtin/providers/aws/tagsELB_test.go @@ -0,0 +1,85 @@ +package aws + +import ( + "fmt" + "reflect" + "testing" + + "github.com/hashicorp/aws-sdk-go/gen/elb" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestDiffELBTags(t *testing.T) { + cases := []struct { + Old, New map[string]interface{} + Create, Remove map[string]string + }{ + // Basic add/remove + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "bar": "baz", + }, + Create: map[string]string{ + "bar": "baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + + // Modify + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "foo": "baz", + }, + Create: 
map[string]string{ + "foo": "baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + } + + for i, tc := range cases { + c, r := diffTagsELB(tagsFromMapELB(tc.Old), tagsFromMapELB(tc.New)) + cm := tagsToMapELB(c) + rm := tagsToMapELB(r) + if !reflect.DeepEqual(cm, tc.Create) { + t.Fatalf("%d: bad create: %#v", i, cm) + } + if !reflect.DeepEqual(rm, tc.Remove) { + t.Fatalf("%d: bad remove: %#v", i, rm) + } + } +} + +// testAccCheckTags can be used to check the tags on a resource. +func testAccCheckELBTags( + ts *[]elb.Tag, key string, value string) resource.TestCheckFunc { + return func(s *terraform.State) error { + m := tagsToMapELB(*ts) + v, ok := m[key] + if value != "" && !ok { + return fmt.Errorf("Missing tag: %s", key) + } else if value == "" && ok { + return fmt.Errorf("Extra tag: %s", key) + } + if value == "" { + return nil + } + + if v != value { + return fmt.Errorf("%s: bad value: %s", key, v) + } + + return nil + } +} diff --git a/builtin/providers/aws/tagsRDS.go b/builtin/providers/aws/tagsRDS.go new file mode 100644 index 000000000000..8eb592427854 --- /dev/null +++ b/builtin/providers/aws/tagsRDS.go @@ -0,0 +1,95 @@ +package aws + +import ( + "log" + + "github.com/hashicorp/aws-sdk-go/aws" + "github.com/hashicorp/aws-sdk-go/gen/rds" + "github.com/hashicorp/terraform/helper/schema" +) + +// setTags is a helper to set the tags for a resource. 
It expects the +// tags field to be named "tags" +func setTagsRDS(conn *rds.RDS, d *schema.ResourceData, arn string) error { + if d.HasChange("tags") { + oraw, nraw := d.GetChange("tags") + o := oraw.(map[string]interface{}) + n := nraw.(map[string]interface{}) + create, remove := diffTagsRDS(tagsFromMapRDS(o), tagsFromMapRDS(n)) + + // Set tags + if len(remove) > 0 { + log.Printf("[DEBUG] Removing tags: %#v", remove) + k := make([]string, len(remove), len(remove)) + for i, t := range remove { + k[i] = *t.Key + } + + err := conn.RemoveTagsFromResource(&rds.RemoveTagsFromResourceMessage{ + ResourceName: aws.String(arn), + TagKeys: k, + }) + if err != nil { + return err + } + } + if len(create) > 0 { + log.Printf("[DEBUG] Creating tags: %#v", create) + err := conn.AddTagsToResource(&rds.AddTagsToResourceMessage{ + ResourceName: aws.String(arn), + Tags: create, + }) + if err != nil { + return err + } + } + } + + return nil +} + +// diffTags takes our tags locally and the ones remotely and returns +// the set of tags that must be created, and the set of tags that must +// be destroyed. +func diffTagsRDS(oldTags, newTags []rds.Tag) ([]rds.Tag, []rds.Tag) { + // First, we're creating everything we have + create := make(map[string]interface{}) + for _, t := range newTags { + create[*t.Key] = *t.Value + } + + // Build the list of what to remove + var remove []rds.Tag + for _, t := range oldTags { + old, ok := create[*t.Key] + if !ok || old != *t.Value { + // Delete it! + remove = append(remove, t) + } + } + + return tagsFromMapRDS(create), remove +} + +// tagsFromMap returns the tags for the given map of data. +func tagsFromMapRDS(m map[string]interface{}) []rds.Tag { + result := make([]rds.Tag, 0, len(m)) + for k, v := range m { + result = append(result, rds.Tag{ + Key: aws.String(k), + Value: aws.String(v.(string)), + }) + } + + return result +} + +// tagsToMap turns the list of tags into a map. 
+func tagsToMapRDS(ts []rds.Tag) map[string]string { + result := make(map[string]string) + for _, t := range ts { + result[*t.Key] = *t.Value + } + + return result +} diff --git a/builtin/providers/aws/tagsRDS_test.go b/builtin/providers/aws/tagsRDS_test.go new file mode 100644 index 000000000000..1d9da835751e --- /dev/null +++ b/builtin/providers/aws/tagsRDS_test.go @@ -0,0 +1,85 @@ +package aws + +import ( + "fmt" + "reflect" + "testing" + + "github.com/hashicorp/aws-sdk-go/gen/rds" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestDiffRDSTags(t *testing.T) { + cases := []struct { + Old, New map[string]interface{} + Create, Remove map[string]string + }{ + // Basic add/remove + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "bar": "baz", + }, + Create: map[string]string{ + "bar": "baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + + // Modify + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "foo": "baz", + }, + Create: map[string]string{ + "foo": "baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + } + + for i, tc := range cases { + c, r := diffTagsRDS(tagsFromMapRDS(tc.Old), tagsFromMapRDS(tc.New)) + cm := tagsToMapRDS(c) + rm := tagsToMapRDS(r) + if !reflect.DeepEqual(cm, tc.Create) { + t.Fatalf("%d: bad create: %#v", i, cm) + } + if !reflect.DeepEqual(rm, tc.Remove) { + t.Fatalf("%d: bad remove: %#v", i, rm) + } + } +} + +// testAccCheckTags can be used to check the tags on a resource. 
+func testAccCheckRDSTags( + ts *[]rds.Tag, key string, value string) resource.TestCheckFunc { + return func(s *terraform.State) error { + m := tagsToMapRDS(*ts) + v, ok := m[key] + if value != "" && !ok { + return fmt.Errorf("Missing tag: %s", key) + } else if value == "" && ok { + return fmt.Errorf("Extra tag: %s", key) + } + if value == "" { + return nil + } + + if v != value { + return fmt.Errorf("%s: bad value: %s", key, v) + } + + return nil + } +} diff --git a/builtin/providers/aws/tags_route53.go b/builtin/providers/aws/tags_route53.go new file mode 100644 index 000000000000..e5251d02a077 --- /dev/null +++ b/builtin/providers/aws/tags_route53.go @@ -0,0 +1,86 @@ +package aws + +import ( + "log" + + "github.com/hashicorp/aws-sdk-go/aws" + "github.com/hashicorp/aws-sdk-go/gen/route53" + "github.com/hashicorp/terraform/helper/schema" +) + +// setTags is a helper to set the tags for a resource. It expects the +// tags field to be named "tags" +func setTagsR53(conn *route53.Route53, d *schema.ResourceData) error { + if d.HasChange("tags") { + oraw, nraw := d.GetChange("tags") + o := oraw.(map[string]interface{}) + n := nraw.(map[string]interface{}) + create, remove := diffTagsR53(tagsFromMapR53(o), tagsFromMapR53(n)) + + // Set tags + r := make([]string, len(remove)) + for i, t := range remove { + r[i] = *t.Key + } + log.Printf("[DEBUG] Changing tags: \n\tadding: %#v\n\tremoving:%#v", create, remove) + req := &route53.ChangeTagsForResourceRequest{ + AddTags: create, + RemoveTagKeys: r, + ResourceID: aws.String(d.Id()), + ResourceType: aws.String("hostedzone"), + } + + _, err := conn.ChangeTagsForResource(req) + if err != nil { + return err + } + } + + return nil +} + +// diffTags takes our tags locally and the ones remotely and returns +// the set of tags that must be created, and the set of tags that must +// be destroyed. 
+func diffTagsR53(oldTags, newTags []route53.Tag) ([]route53.Tag, []route53.Tag) { + // First, we're creating everything we have + create := make(map[string]interface{}) + for _, t := range newTags { + create[*t.Key] = *t.Value + } + + // Build the list of what to remove + var remove []route53.Tag + for _, t := range oldTags { + old, ok := create[*t.Key] + if !ok || old != *t.Value { + // Delete it! + remove = append(remove, t) + } + } + + return tagsFromMapR53(create), remove +} + +// tagsFromMap returns the tags for the given map of data. +func tagsFromMapR53(m map[string]interface{}) []route53.Tag { + result := make([]route53.Tag, 0, len(m)) + for k, v := range m { + result = append(result, route53.Tag{ + Key: aws.String(k), + Value: aws.String(v.(string)), + }) + } + + return result +} + +// tagsToMap turns the list of tags into a map. +func tagsToMapR53(ts []route53.Tag) map[string]string { + result := make(map[string]string) + for _, t := range ts { + result[*t.Key] = *t.Value + } + + return result +} diff --git a/builtin/providers/aws/tags_route53_test.go b/builtin/providers/aws/tags_route53_test.go new file mode 100644 index 000000000000..40a4154f3f46 --- /dev/null +++ b/builtin/providers/aws/tags_route53_test.go @@ -0,0 +1,85 @@ +package aws + +import ( + "fmt" + "reflect" + "testing" + + "github.com/hashicorp/aws-sdk-go/gen/route53" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestDiffTagsR53(t *testing.T) { + cases := []struct { + Old, New map[string]interface{} + Create, Remove map[string]string + }{ + // Basic add/remove + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "bar": "baz", + }, + Create: map[string]string{ + "bar": "baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + + // Modify + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "foo": "baz", + }, + Create: map[string]string{ + "foo": 
"baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + } + + for i, tc := range cases { + c, r := diffTagsR53(tagsFromMapR53(tc.Old), tagsFromMapR53(tc.New)) + cm := tagsToMapR53(c) + rm := tagsToMapR53(r) + if !reflect.DeepEqual(cm, tc.Create) { + t.Fatalf("%d: bad create: %#v", i, cm) + } + if !reflect.DeepEqual(rm, tc.Remove) { + t.Fatalf("%d: bad remove: %#v", i, rm) + } + } +} + +// testAccCheckTags can be used to check the tags on a resource. +func testAccCheckTagsR53( + ts *[]route53.Tag, key string, value string) resource.TestCheckFunc { + return func(s *terraform.State) error { + m := tagsToMapR53(*ts) + v, ok := m[key] + if value != "" && !ok { + return fmt.Errorf("Missing tag: %s", key) + } else if value == "" && ok { + return fmt.Errorf("Extra tag: %s", key) + } + if value == "" { + return nil + } + + if v != value { + return fmt.Errorf("%s: bad value: %s", key, v) + } + + return nil + } +} diff --git a/builtin/providers/aws/tags_sdk.go b/builtin/providers/aws/tags_sdk.go index 7e9690b780bd..0b8e807c291e 100644 --- a/builtin/providers/aws/tags_sdk.go +++ b/builtin/providers/aws/tags_sdk.go @@ -1,22 +1,15 @@ package aws -// TODO: Clint: consolidate tags and tags_sdk -// tags_sdk and tags_sdk_test are used only for transition to aws-sdk-go -// and will replace tags and tags_test when the transition to aws-sdk-go/ec2 is -// complete - import ( "log" - "github.com/hashicorp/aws-sdk-go/aws" - "github.com/hashicorp/aws-sdk-go/gen/ec2" + "github.com/awslabs/aws-sdk-go/aws" + "github.com/awslabs/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/schema" ) // tagsSchema returns the schema to use for tags. 
// -// TODO: uncomment this when we replace the original tags.go -// // func tagsSchema() *schema.Schema { // return &schema.Schema{ // Type: schema.TypeMap, @@ -36,8 +29,8 @@ func setTagsSDK(conn *ec2.EC2, d *schema.ResourceData) error { // Set tags if len(remove) > 0 { log.Printf("[DEBUG] Removing tags: %#v", remove) - err := conn.DeleteTags(&ec2.DeleteTagsRequest{ - Resources: []string{d.Id()}, + _, err := conn.DeleteTags(&ec2.DeleteTagsInput{ + Resources: []*string{aws.String(d.Id())}, Tags: remove, }) if err != nil { @@ -46,8 +39,8 @@ func setTagsSDK(conn *ec2.EC2, d *schema.ResourceData) error { } if len(create) > 0 { log.Printf("[DEBUG] Creating tags: %#v", create) - err := conn.CreateTags(&ec2.CreateTagsRequest{ - Resources: []string{d.Id()}, + _, err := conn.CreateTags(&ec2.CreateTagsInput{ + Resources: []*string{aws.String(d.Id())}, Tags: create, }) if err != nil { @@ -62,7 +55,7 @@ func setTagsSDK(conn *ec2.EC2, d *schema.ResourceData) error { // diffTags takes our tags locally and the ones remotely and returns // the set of tags that must be created, and the set of tags that must // be destroyed. -func diffTagsSDK(oldTags, newTags []ec2.Tag) ([]ec2.Tag, []ec2.Tag) { +func diffTagsSDK(oldTags, newTags []*ec2.Tag) ([]*ec2.Tag, []*ec2.Tag) { // First, we're creating everything we have create := make(map[string]interface{}) for _, t := range newTags { @@ -70,7 +63,7 @@ func diffTagsSDK(oldTags, newTags []ec2.Tag) ([]ec2.Tag, []ec2.Tag) { } // Build the list of what to remove - var remove []ec2.Tag + var remove []*ec2.Tag for _, t := range oldTags { old, ok := create[*t.Key] if !ok || old != *t.Value { @@ -83,10 +76,10 @@ func diffTagsSDK(oldTags, newTags []ec2.Tag) ([]ec2.Tag, []ec2.Tag) { } // tagsFromMap returns the tags for the given map of data. 
-func tagsFromMapSDK(m map[string]interface{}) []ec2.Tag { - result := make([]ec2.Tag, 0, len(m)) +func tagsFromMapSDK(m map[string]interface{}) []*ec2.Tag { + result := make([]*ec2.Tag, 0, len(m)) for k, v := range m { - result = append(result, ec2.Tag{ + result = append(result, &ec2.Tag{ Key: aws.String(k), Value: aws.String(v.(string)), }) @@ -96,7 +89,7 @@ func tagsFromMapSDK(m map[string]interface{}) []ec2.Tag { } // tagsToMap turns the list of tags into a map. -func tagsToMapSDK(ts []ec2.Tag) map[string]string { +func tagsToMapSDK(ts []*ec2.Tag) map[string]string { result := make(map[string]string) for _, t := range ts { result[*t.Key] = *t.Value diff --git a/builtin/providers/aws/tags_sdk_test.go b/builtin/providers/aws/tags_sdk_test.go index 5a5b0e600620..272957f6b81a 100644 --- a/builtin/providers/aws/tags_sdk_test.go +++ b/builtin/providers/aws/tags_sdk_test.go @@ -5,7 +5,7 @@ import ( "reflect" "testing" - "github.com/hashicorp/aws-sdk-go/gen/ec2" + "github.com/awslabs/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) @@ -63,7 +63,7 @@ func TestDiffTagsSDK(t *testing.T) { // testAccCheckTags can be used to check the tags on a resource. 
func testAccCheckTagsSDK( - ts *[]ec2.Tag, key string, value string) resource.TestCheckFunc { + ts *[]*ec2.Tag, key string, value string) resource.TestCheckFunc { return func(s *terraform.State) error { m := tagsToMapSDK(*ts) v, ok := m[key] diff --git a/builtin/providers/aws/tags_test.go b/builtin/providers/aws/tags_test.go index 6e89492ca828..16578ac1b66c 100644 --- a/builtin/providers/aws/tags_test.go +++ b/builtin/providers/aws/tags_test.go @@ -5,9 +5,9 @@ import ( "reflect" "testing" + "github.com/hashicorp/aws-sdk-go/gen/ec2" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" - "github.com/mitchellh/goamz/ec2" ) func TestDiffTags(t *testing.T) { diff --git a/builtin/providers/cloudstack/provider.go b/builtin/providers/cloudstack/provider.go index a9913f6e82db..8cdafd1ab376 100644 --- a/builtin/providers/cloudstack/provider.go +++ b/builtin/providers/cloudstack/provider.go @@ -30,22 +30,26 @@ func Provider() terraform.ResourceProvider { "timeout": &schema.Schema{ Type: schema.TypeInt, Required: true, - DefaultFunc: schema.EnvDefaultFunc("CLOUDSTACK_TIMEOUT", 180), + DefaultFunc: schema.EnvDefaultFunc("CLOUDSTACK_TIMEOUT", 300), }, }, ResourcesMap: map[string]*schema.Resource{ - "cloudstack_disk": resourceCloudStackDisk(), - "cloudstack_egress_firewall": resourceCloudStackEgressFirewall(), - "cloudstack_firewall": resourceCloudStackFirewall(), - "cloudstack_instance": resourceCloudStackInstance(), - "cloudstack_ipaddress": resourceCloudStackIPAddress(), - "cloudstack_network": resourceCloudStackNetwork(), - "cloudstack_network_acl": resourceCloudStackNetworkACL(), - "cloudstack_network_acl_rule": resourceCloudStackNetworkACLRule(), - "cloudstack_nic": resourceCloudStackNIC(), - "cloudstack_port_forward": resourceCloudStackPortForward(), - "cloudstack_vpc": resourceCloudStackVPC(), + "cloudstack_disk": resourceCloudStackDisk(), + "cloudstack_egress_firewall": resourceCloudStackEgressFirewall(), + "cloudstack_firewall": 
resourceCloudStackFirewall(), + "cloudstack_instance": resourceCloudStackInstance(), + "cloudstack_ipaddress": resourceCloudStackIPAddress(), + "cloudstack_network": resourceCloudStackNetwork(), + "cloudstack_network_acl": resourceCloudStackNetworkACL(), + "cloudstack_network_acl_rule": resourceCloudStackNetworkACLRule(), + "cloudstack_nic": resourceCloudStackNIC(), + "cloudstack_port_forward": resourceCloudStackPortForward(), + "cloudstack_template": resourceCloudStackTemplate(), + "cloudstack_vpc": resourceCloudStackVPC(), + "cloudstack_vpn_connection": resourceCloudStackVPNConnection(), + "cloudstack_vpn_customer_gateway": resourceCloudStackVPNCustomerGateway(), + "cloudstack_vpn_gateway": resourceCloudStackVPNGateway(), }, ConfigureFunc: providerConfigure, diff --git a/builtin/providers/cloudstack/provider_test.go b/builtin/providers/cloudstack/provider_test.go index a13839177e8a..878ab1882e32 100644 --- a/builtin/providers/cloudstack/provider_test.go +++ b/builtin/providers/cloudstack/provider_test.go @@ -43,6 +43,7 @@ func testAccPreCheck(t *testing.T) { // SET THESE VALUES IN ORDER TO RUN THE ACC TESTS!! 
var CLOUDSTACK_DISK_OFFERING_1 = "" var CLOUDSTACK_DISK_OFFERING_2 = "" +var CLOUDSTACK_HYPERVISOR = "" var CLOUDSTACK_SERVICE_OFFERING_1 = "" var CLOUDSTACK_SERVICE_OFFERING_2 = "" var CLOUDSTACK_NETWORK_1 = "" @@ -51,10 +52,14 @@ var CLOUDSTACK_NETWORK_1_OFFERING = "" var CLOUDSTACK_NETWORK_1_IPADDRESS = "" var CLOUDSTACK_NETWORK_2 = "" var CLOUDSTACK_NETWORK_2_IPADDRESS = "" -var CLOUDSTACK_VPC_CIDR = "" +var CLOUDSTACK_VPC_CIDR_1 = "" +var CLOUDSTACK_VPC_CIDR_2 = "" var CLOUDSTACK_VPC_OFFERING = "" var CLOUDSTACK_VPC_NETWORK_CIDR = "" var CLOUDSTACK_VPC_NETWORK_OFFERING = "" var CLOUDSTACK_PUBLIC_IPADDRESS = "" var CLOUDSTACK_TEMPLATE = "" +var CLOUDSTACK_TEMPLATE_FORMAT = "" +var CLOUDSTACK_TEMPLATE_URL = "" +var CLOUDSTACK_TEMPLATE_OS_TYPE = "" var CLOUDSTACK_ZONE = "" diff --git a/builtin/providers/cloudstack/resource_cloudstack_disk.go b/builtin/providers/cloudstack/resource_cloudstack_disk.go index 88ddff59eeb7..382cd28767a7 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_disk.go +++ b/builtin/providers/cloudstack/resource_cloudstack_disk.go @@ -84,7 +84,7 @@ func resourceCloudStackDiskCreate(d *schema.ResourceData, meta interface{}) erro if d.Get("size").(int) != 0 { // Set the volume size - p.SetSize(d.Get("size").(int)) + p.SetSize(int64(d.Get("size").(int))) } // Retrieve the zone UUID @@ -141,7 +141,7 @@ func resourceCloudStackDiskRead(d *schema.ResourceData, meta interface{}) error d.Set("name", v.Name) d.Set("attach", v.Attached != "") // If attached this will contain a timestamp when attached d.Set("disk_offering", v.Diskofferingname) - d.Set("size", v.Size/(1024*1024*1024)) // Needed to get GB's again + d.Set("size", int(v.Size/(1024*1024*1024))) // Needed to get GB's again d.Set("zone", v.Zonename) if v.Attached != "" { @@ -196,7 +196,7 @@ func resourceCloudStackDiskUpdate(d *schema.ResourceData, meta interface{}) erro if d.Get("size").(int) != 0 { // Set the size - p.SetSize(d.Get("size").(int)) + 
p.SetSize(int64(d.Get("size").(int))) } // Set the shrink bit @@ -367,7 +367,7 @@ func isAttached(cs *cloudstack.CloudStackClient, id string) (bool, error) { return v.Attached != "", nil } -func retrieveDeviceID(device string) int { +func retrieveDeviceID(device string) int64 { switch device { case "/dev/xvdb", "D:": return 1 @@ -402,7 +402,7 @@ func retrieveDeviceID(device string) int { } } -func retrieveDeviceName(device int, os string) string { +func retrieveDeviceName(device int64, os string) string { switch device { case 1: if os == "Windows" { diff --git a/builtin/providers/cloudstack/resource_cloudstack_instance.go b/builtin/providers/cloudstack/resource_cloudstack_instance.go index 600001a27dd5..b88e4a255cf2 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_instance.go +++ b/builtin/providers/cloudstack/resource_cloudstack_instance.go @@ -95,18 +95,18 @@ func resourceCloudStackInstanceCreate(d *schema.ResourceData, meta interface{}) return e.Error() } - // Retrieve the template UUID - templateid, e := retrieveUUID(cs, "template", d.Get("template").(string)) - if e != nil { - return e.Error() - } - // Retrieve the zone object zone, _, err := cs.Zone.GetZoneByName(d.Get("zone").(string)) if err != nil { return err } + // Retrieve the template UUID + templateid, e := retrieveTemplateUUID(cs, zone.Id, d.Get("template").(string)) + if e != nil { + return e.Error() + } + // Create a new parameter struct p := cs.VirtualMachine.NewDeployVirtualMachineParams(serviceofferingid, templateid, zone.Id) @@ -156,6 +156,12 @@ func resourceCloudStackInstanceCreate(d *schema.ResourceData, meta interface{}) d.SetId(r.Id) + // Set the connection info for any configured provisioners + d.SetConnInfo(map[string]string{ + "host": r.Nic[0].Ipaddress, + "password": r.Password, + }) + return resourceCloudStackInstanceRead(d, meta) } diff --git a/builtin/providers/cloudstack/resource_cloudstack_ipaddress_test.go 
b/builtin/providers/cloudstack/resource_cloudstack_ipaddress_test.go index 5b1fc9a31731..dfdeba209082 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_ipaddress_test.go +++ b/builtin/providers/cloudstack/resource_cloudstack_ipaddress_test.go @@ -132,6 +132,6 @@ resource "cloudstack_vpc" "foobar" { resource "cloudstack_ipaddress" "foo" { vpc = "${cloudstack_vpc.foobar.name}" }`, - CLOUDSTACK_VPC_CIDR, + CLOUDSTACK_VPC_CIDR_1, CLOUDSTACK_VPC_OFFERING, CLOUDSTACK_ZONE) diff --git a/builtin/providers/cloudstack/resource_cloudstack_network.go b/builtin/providers/cloudstack/resource_cloudstack_network.go index 51dc7d9e027f..4eb03f666df8 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_network.go +++ b/builtin/providers/cloudstack/resource_cloudstack_network.go @@ -171,6 +171,7 @@ func resourceCloudStackNetworkUpdate(d *schema.ResourceData, meta interface{}) e if displaytext == "" { displaytext = name } + p.SetDisplaytext(displaytext) } // Check if the cidr is changed diff --git a/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule_test.go b/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule_test.go index 037b9d10bada..dbceb8d8daa6 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule_test.go +++ b/builtin/providers/cloudstack/resource_cloudstack_network_acl_rule_test.go @@ -196,7 +196,7 @@ resource "cloudstack_network_acl_rule" "foo" { traffic_type = "ingress" } }`, - CLOUDSTACK_VPC_CIDR, + CLOUDSTACK_VPC_CIDR_1, CLOUDSTACK_VPC_OFFERING, CLOUDSTACK_ZONE) @@ -233,6 +233,6 @@ resource "cloudstack_network_acl_rule" "foo" { traffic_type = "egress" } }`, - CLOUDSTACK_VPC_CIDR, + CLOUDSTACK_VPC_CIDR_1, CLOUDSTACK_VPC_OFFERING, CLOUDSTACK_ZONE) diff --git a/builtin/providers/cloudstack/resource_cloudstack_network_acl_test.go b/builtin/providers/cloudstack/resource_cloudstack_network_acl_test.go index e625d4c2d8fa..9bf0bb0cfa99 100644 --- 
a/builtin/providers/cloudstack/resource_cloudstack_network_acl_test.go +++ b/builtin/providers/cloudstack/resource_cloudstack_network_acl_test.go @@ -112,6 +112,6 @@ resource "cloudstack_network_acl" "foo" { description = "terraform-acl-text" vpc = "${cloudstack_vpc.foobar.name}" }`, - CLOUDSTACK_VPC_CIDR, + CLOUDSTACK_VPC_CIDR_1, CLOUDSTACK_VPC_OFFERING, CLOUDSTACK_ZONE) diff --git a/builtin/providers/cloudstack/resource_cloudstack_network_test.go b/builtin/providers/cloudstack/resource_cloudstack_network_test.go index 750761f02089..d936f8cb06b4 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_network_test.go +++ b/builtin/providers/cloudstack/resource_cloudstack_network_test.go @@ -186,7 +186,7 @@ resource "cloudstack_network" "foo" { aclid = "${cloudstack_network_acl.foo.id}" zone = "${cloudstack_vpc.foobar.zone}" }`, - CLOUDSTACK_VPC_CIDR, + CLOUDSTACK_VPC_CIDR_1, CLOUDSTACK_VPC_OFFERING, CLOUDSTACK_ZONE, CLOUDSTACK_VPC_NETWORK_CIDR, diff --git a/builtin/providers/cloudstack/resource_cloudstack_template.go b/builtin/providers/cloudstack/resource_cloudstack_template.go new file mode 100644 index 000000000000..4469f2d3f340 --- /dev/null +++ b/builtin/providers/cloudstack/resource_cloudstack_template.go @@ -0,0 +1,283 @@ +package cloudstack + +import ( + "fmt" + "log" + "strings" + "time" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/xanzy/go-cloudstack/cloudstack" +) + +func resourceCloudStackTemplate() *schema.Resource { + return &schema.Resource{ + Create: resourceCloudStackTemplateCreate, + Read: resourceCloudStackTemplateRead, + Update: resourceCloudStackTemplateUpdate, + Delete: resourceCloudStackTemplateDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "display_text": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + + "format": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "hypervisor": 
&schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "os_type": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "url": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "zone": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "is_dynamically_scalable": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Computed: true, + }, + + "is_extractable": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Computed: true, + ForceNew: true, + }, + + "is_featured": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Computed: true, + ForceNew: true, + }, + + "is_public": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Computed: true, + }, + + "password_enabled": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Computed: true, + }, + + "is_ready": &schema.Schema{ + Type: schema.TypeBool, + Computed: true, + }, + + "is_ready_timeout": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Default: 300, + }, + }, + } +} + +func resourceCloudStackTemplateCreate(d *schema.ResourceData, meta interface{}) error { + cs := meta.(*cloudstack.CloudStackClient) + + name := d.Get("name").(string) + + // Compute/set the display text + displaytext := d.Get("display_text").(string) + if displaytext == "" { + displaytext = name + } + + // Retrieve the os_type UUID + ostypeid, e := retrieveUUID(cs, "os_type", d.Get("os_type").(string)) + if e != nil { + return e.Error() + } + + // Retrieve the zone UUID + zoneid, e := retrieveUUID(cs, "zone", d.Get("zone").(string)) + if e != nil { + return e.Error() + } + + // Create a new parameter struct + p := cs.Template.NewRegisterTemplateParams( + displaytext, + d.Get("format").(string), + d.Get("hypervisor").(string), + name, + ostypeid, + d.Get("url").(string), + zoneid) + + // Set optional parameters + if v, ok := d.GetOk("is_dynamically_scalable"); ok { + 
p.SetIsdynamicallyscalable(v.(bool)) + } + + if v, ok := d.GetOk("is_extractable"); ok { + p.SetIsextractable(v.(bool)) + } + + if v, ok := d.GetOk("is_featured"); ok { + p.SetIsfeatured(v.(bool)) + } + + if v, ok := d.GetOk("is_public"); ok { + p.SetIspublic(v.(bool)) + } + + if v, ok := d.GetOk("password_enabled"); ok { + p.SetPasswordenabled(v.(bool)) + } + + // Create the new template + r, err := cs.Template.RegisterTemplate(p) + if err != nil { + return fmt.Errorf("Error creating template %s: %s", name, err) + } + + d.SetId(r.RegisterTemplate[0].Id) + + // Wait until the template is ready to use, or timeout with an error... + currentTime := time.Now().Unix() + timeout := int64(d.Get("is_ready_timeout").(int)) + for { + err := resourceCloudStackTemplateRead(d, meta) + if err != nil { + return err + } + + if d.Get("is_ready").(bool) { + return nil + } + + if time.Now().Unix()-currentTime > timeout { + return fmt.Errorf("Timeout while waiting for template to become ready") + } + time.Sleep(5 * time.Second) + } +} + +func resourceCloudStackTemplateRead(d *schema.ResourceData, meta interface{}) error { + cs := meta.(*cloudstack.CloudStackClient) + + // Get the template details + t, count, err := cs.Template.GetTemplateByID(d.Id(), "executable") + if err != nil { + if count == 0 { + log.Printf( + "[DEBUG] Template %s no longer exists", d.Get("name").(string)) + d.SetId("") + return nil + } + + return err + } + + d.Set("name", t.Name) + d.Set("display_text", t.Displaytext) + d.Set("format", t.Format) + d.Set("hypervisor", t.Hypervisor) + d.Set("os_type", t.Ostypename) + d.Set("zone", t.Zonename) + d.Set("is_dynamically_scalable", t.Isdynamicallyscalable) + d.Set("is_extractable", t.Isextractable) + d.Set("is_featured", t.Isfeatured) + d.Set("is_public", t.Ispublic) + d.Set("password_enabled", t.Passwordenabled) + d.Set("is_ready", t.Isready) + + return nil +} + +func resourceCloudStackTemplateUpdate(d *schema.ResourceData, meta interface{}) error { + cs := 
meta.(*cloudstack.CloudStackClient) + name := d.Get("name").(string) + + // Create a new parameter struct + p := cs.Template.NewUpdateTemplateParams(d.Id()) + + if d.HasChange("name") { + p.SetName(name) + } + + if d.HasChange("display_text") { + p.SetDisplaytext(d.Get("display_text").(string)) + } + + if d.HasChange("format") { + p.SetFormat(d.Get("format").(string)) + } + + if d.HasChange("is_dynamically_scalable") { + p.SetIsdynamicallyscalable(d.Get("is_dynamically_scalable").(bool)) + } + + if d.HasChange("os_type") { + ostypeid, e := retrieveUUID(cs, "os_type", d.Get("os_type").(string)) + if e != nil { + return e.Error() + } + p.SetOstypeid(ostypeid) + } + + if d.HasChange("password_enabled") { + p.SetPasswordenabled(d.Get("password_enabled").(bool)) + } + + _, err := cs.Template.UpdateTemplate(p) + if err != nil { + return fmt.Errorf("Error updating template %s: %s", name, err) + } + + return resourceCloudStackTemplateRead(d, meta) +} + +func resourceCloudStackTemplateDelete(d *schema.ResourceData, meta interface{}) error { + cs := meta.(*cloudstack.CloudStackClient) + + // Create a new parameter struct + p := cs.Template.NewDeleteTemplateParams(d.Id()) + + // Delete the template + log.Printf("[INFO] Deleting template: %s", d.Get("name").(string)) + _, err := cs.Template.DeleteTemplate(p) + if err != nil { + // This is a very poor way to be told the UUID does no longer exist :( + if strings.Contains(err.Error(), fmt.Sprintf( + "Invalid parameter id value=%s due to incorrect long value format, "+ + "or entity does not exist", d.Id())) { + return nil + } + + return fmt.Errorf("Error deleting template %s: %s", d.Get("name").(string), err) + } + return nil +} diff --git a/builtin/providers/cloudstack/resource_cloudstack_template_test.go b/builtin/providers/cloudstack/resource_cloudstack_template_test.go new file mode 100755 index 000000000000..3e78461dccb0 --- /dev/null +++ b/builtin/providers/cloudstack/resource_cloudstack_template_test.go @@ -0,0 +1,198 @@ 
+package cloudstack + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + "github.com/xanzy/go-cloudstack/cloudstack" +) + +func TestAccCloudStackTemplate_basic(t *testing.T) { + var template cloudstack.Template + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckCloudStackTemplateDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccCloudStackTemplate_basic, + Check: resource.ComposeTestCheckFunc( + testAccCheckCloudStackTemplateExists("cloudstack_template.foo", &template), + testAccCheckCloudStackTemplateBasicAttributes(&template), + resource.TestCheckResourceAttr( + "cloudstack_template.foo", "display_text", "terraform-test"), + ), + }, + }, + }) +} + +func TestAccCloudStackTemplate_update(t *testing.T) { + var template cloudstack.Template + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckCloudStackTemplateDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccCloudStackTemplate_basic, + Check: resource.ComposeTestCheckFunc( + testAccCheckCloudStackTemplateExists("cloudstack_template.foo", &template), + testAccCheckCloudStackTemplateBasicAttributes(&template), + ), + }, + + resource.TestStep{ + Config: testAccCloudStackTemplate_update, + Check: resource.ComposeTestCheckFunc( + testAccCheckCloudStackTemplateExists( + "cloudstack_template.foo", &template), + testAccCheckCloudStackTemplateUpdatedAttributes(&template), + resource.TestCheckResourceAttr( + "cloudstack_template.foo", "display_text", "terraform-updated"), + ), + }, + }, + }) +} + +func testAccCheckCloudStackTemplateExists( + n string, template *cloudstack.Template) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return 
fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No template ID is set") + } + + cs := testAccProvider.Meta().(*cloudstack.CloudStackClient) + tmpl, _, err := cs.Template.GetTemplateByID(rs.Primary.ID, "executable") + + if err != nil { + return err + } + + if tmpl.Id != rs.Primary.ID { + return fmt.Errorf("Template not found") + } + + *template = *tmpl + + return nil + } +} + +func testAccCheckCloudStackTemplateBasicAttributes( + template *cloudstack.Template) resource.TestCheckFunc { + return func(s *terraform.State) error { + + if template.Name != "terraform-test" { + return fmt.Errorf("Bad name: %s", template.Name) + } + + if template.Format != CLOUDSTACK_TEMPLATE_FORMAT { + return fmt.Errorf("Bad format: %s", template.Format) + } + + if template.Hypervisor != CLOUDSTACK_HYPERVISOR { + return fmt.Errorf("Bad hypervisor: %s", template.Hypervisor) + } + + if template.Ostypename != CLOUDSTACK_TEMPLATE_OS_TYPE { + return fmt.Errorf("Bad os type: %s", template.Ostypename) + } + + if template.Zonename != CLOUDSTACK_ZONE { + return fmt.Errorf("Bad zone: %s", template.Zonename) + } + + return nil + } +} + +func testAccCheckCloudStackTemplateUpdatedAttributes( + template *cloudstack.Template) resource.TestCheckFunc { + return func(s *terraform.State) error { + + if template.Displaytext != "terraform-updated" { + return fmt.Errorf("Bad name: %s", template.Displaytext) + } + + if !template.Isdynamicallyscalable { + return fmt.Errorf("Bad is_dynamically_scalable: %t", template.Isdynamicallyscalable) + } + + if !template.Passwordenabled { + return fmt.Errorf("Bad password_enabled: %t", template.Passwordenabled) + } + + return nil + } +} + +func testAccCheckCloudStackTemplateDestroy(s *terraform.State) error { + cs := testAccProvider.Meta().(*cloudstack.CloudStackClient) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "cloudstack_template" { + continue + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No template ID is 
set") + } + + p := cs.Template.NewDeleteTemplateParams(rs.Primary.ID) + _, err := cs.Template.DeleteTemplate(p) + + if err != nil { + return fmt.Errorf( + "Error deleting template (%s): %s", + rs.Primary.ID, err) + } + } + + return nil +} + +var testAccCloudStackTemplate_basic = fmt.Sprintf(` +resource "cloudstack_template" "foo" { + name = "terraform-test" + format = "%s" + hypervisor = "%s" + os_type = "%s" + url = "%s" + zone = "%s" +} +`, + CLOUDSTACK_TEMPLATE_FORMAT, + CLOUDSTACK_HYPERVISOR, + CLOUDSTACK_TEMPLATE_OS_TYPE, + CLOUDSTACK_TEMPLATE_URL, + CLOUDSTACK_ZONE) + +var testAccCloudStackTemplate_update = fmt.Sprintf(` +resource "cloudstack_template" "foo" { + name = "terraform-test" + display_text = "terraform-updated" + format = "%s" + hypervisor = "%s" + os_type = "%s" + url = "%s" + zone = "%s" + is_dynamically_scalable = true + password_enabled = true +} +`, + CLOUDSTACK_TEMPLATE_FORMAT, + CLOUDSTACK_HYPERVISOR, + CLOUDSTACK_TEMPLATE_OS_TYPE, + CLOUDSTACK_TEMPLATE_URL, + CLOUDSTACK_ZONE) diff --git a/builtin/providers/cloudstack/resource_cloudstack_vpc_test.go b/builtin/providers/cloudstack/resource_cloudstack_vpc_test.go index bf4e8f448d02..07861a0913d9 100644 --- a/builtin/providers/cloudstack/resource_cloudstack_vpc_test.go +++ b/builtin/providers/cloudstack/resource_cloudstack_vpc_test.go @@ -72,8 +72,8 @@ func testAccCheckCloudStackVPCAttributes( return fmt.Errorf("Bad display text: %s", vpc.Displaytext) } - if vpc.Cidr != CLOUDSTACK_VPC_CIDR { - return fmt.Errorf("Bad VPC offering: %s", vpc.Cidr) + if vpc.Cidr != CLOUDSTACK_VPC_CIDR_1 { + return fmt.Errorf("Bad VPC CIDR: %s", vpc.Cidr) } return nil @@ -113,6 +113,6 @@ resource "cloudstack_vpc" "foo" { vpc_offering = "%s" zone = "%s" }`, - CLOUDSTACK_VPC_CIDR, + CLOUDSTACK_VPC_CIDR_1, CLOUDSTACK_VPC_OFFERING, CLOUDSTACK_ZONE) diff --git a/builtin/providers/cloudstack/resource_cloudstack_vpn_connection.go b/builtin/providers/cloudstack/resource_cloudstack_vpn_connection.go new file mode 100644 
index 000000000000..b036890a5aad --- /dev/null +++ b/builtin/providers/cloudstack/resource_cloudstack_vpn_connection.go @@ -0,0 +1,95 @@ +package cloudstack + +import ( + "fmt" + "log" + "strings" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/xanzy/go-cloudstack/cloudstack" +) + +func resourceCloudStackVPNConnection() *schema.Resource { + return &schema.Resource{ + Create: resourceCloudStackVPNConnectionCreate, + Read: resourceCloudStackVPNConnectionRead, + Delete: resourceCloudStackVPNConnectionDelete, + + Schema: map[string]*schema.Schema{ + "customergatewayid": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "vpngatewayid": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + }, + } +} + +func resourceCloudStackVPNConnectionCreate(d *schema.ResourceData, meta interface{}) error { + cs := meta.(*cloudstack.CloudStackClient) + + // Create a new parameter struct + p := cs.VPN.NewCreateVpnConnectionParams( + d.Get("customergatewayid").(string), + d.Get("vpngatewayid").(string), + ) + + // Create the new VPN Connection + v, err := cs.VPN.CreateVpnConnection(p) + if err != nil { + return fmt.Errorf("Error creating VPN Connection: %s", err) + } + + d.SetId(v.Id) + + return resourceCloudStackVPNConnectionRead(d, meta) +} + +func resourceCloudStackVPNConnectionRead(d *schema.ResourceData, meta interface{}) error { + cs := meta.(*cloudstack.CloudStackClient) + + // Get the VPN Connection details + v, count, err := cs.VPN.GetVpnConnectionByID(d.Id()) + if err != nil { + if count == 0 { + log.Printf("[DEBUG] VPN Connection does no longer exist") + d.SetId("") + return nil + } + + return err + } + + d.Set("customergatewayid", v.S2scustomergatewayid) + d.Set("vpngatewayid", v.S2svpngatewayid) + + return nil +} + +func resourceCloudStackVPNConnectionDelete(d *schema.ResourceData, meta interface{}) error { + cs := meta.(*cloudstack.CloudStackClient) + + // Create a new parameter struct + 
p := cs.VPN.NewDeleteVpnConnectionParams(d.Id()) + + // Delete the VPN Connection + _, err := cs.VPN.DeleteVpnConnection(p) + if err != nil { + // This is a very poor way to be told the UUID does no longer exist :( + if strings.Contains(err.Error(), fmt.Sprintf( + "Invalid parameter id value=%s due to incorrect long value format, "+ + "or entity does not exist", d.Id())) { + return nil + } + + return fmt.Errorf("Error deleting VPN Connection: %s", err) + } + + return nil +} diff --git a/builtin/providers/cloudstack/resource_cloudstack_vpn_connection_test.go b/builtin/providers/cloudstack/resource_cloudstack_vpn_connection_test.go new file mode 100644 index 000000000000..1b9d9920ae7c --- /dev/null +++ b/builtin/providers/cloudstack/resource_cloudstack_vpn_connection_test.go @@ -0,0 +1,142 @@ +package cloudstack + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + "github.com/xanzy/go-cloudstack/cloudstack" +) + +func TestAccCloudStackVPNConnection_basic(t *testing.T) { + var vpnConnection cloudstack.VpnConnection + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckCloudStackVPNConnectionDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccCloudStackVPNConnection_basic, + Check: resource.ComposeTestCheckFunc( + testAccCheckCloudStackVPNConnectionExists( + "cloudstack_vpn_connection.foo-bar", &vpnConnection), + testAccCheckCloudStackVPNConnectionExists( + "cloudstack_vpn_connection.bar-foo", &vpnConnection), + ), + }, + }, + }) +} + +func testAccCheckCloudStackVPNConnectionExists( + n string, vpnConnection *cloudstack.VpnConnection) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No VPN Connection ID is set") + } + + 
cs := testAccProvider.Meta().(*cloudstack.CloudStackClient) + v, _, err := cs.VPN.GetVpnConnectionByID(rs.Primary.ID) + + if err != nil { + return err + } + + if v.Id != rs.Primary.ID { + return fmt.Errorf("VPN Connection not found") + } + + *vpnConnection = *v + + return nil + } +} + +func testAccCheckCloudStackVPNConnectionDestroy(s *terraform.State) error { + cs := testAccProvider.Meta().(*cloudstack.CloudStackClient) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "cloudstack_vpn_connection" { + continue + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No VPN Connection ID is set") + } + + p := cs.VPN.NewDeleteVpnConnectionParams(rs.Primary.ID) + _, err := cs.VPN.DeleteVpnConnection(p) + + if err != nil { + return fmt.Errorf( + "Error deleting VPN Connection (%s): %s", + rs.Primary.ID, err) + } + } + + return nil +} + +var testAccCloudStackVPNConnection_basic = fmt.Sprintf(` +resource "cloudstack_vpc" "foo" { + name = "terraform-vpc-foo" + cidr = "%s" + vpc_offering = "%s" + zone = "%s" +} + +resource "cloudstack_vpc" "bar" { + name = "terraform-vpc-bar" + cidr = "%s" + vpc_offering = "%s" + zone = "%s" +} + +resource "cloudstack_vpn_gateway" "foo" { + vpc = "${cloudstack_vpc.foo.name}" +} + +resource "cloudstack_vpn_gateway" "bar" { + vpc = "${cloudstack_vpc.bar.name}" +} + +resource "cloudstack_vpn_customer_gateway" "foo" { + name = "terraform-foo" + cidr = "${cloudstack_vpc.foo.cidr}" + esp_policy = "aes256-sha1" + gateway = "${cloudstack_vpn_gateway.foo.public_ip}" + ike_policy = "aes256-sha1" + ipsec_psk = "terraform" +} + +resource "cloudstack_vpn_customer_gateway" "bar" { + name = "terraform-bar" + cidr = "${cloudstack_vpc.bar.cidr}" + esp_policy = "aes256-sha1" + gateway = "${cloudstack_vpn_gateway.bar.public_ip}" + ike_policy = "aes256-sha1" + ipsec_psk = "terraform" +} + +resource "cloudstack_vpn_connection" "foo-bar" { + customergatewayid = "${cloudstack_vpn_customer_gateway.foo.id}" + vpngatewayid = 
"${cloudstack_vpn_gateway.bar.id}" +} + +resource "cloudstack_vpn_connection" "bar-foo" { + customergatewayid = "${cloudstack_vpn_customer_gateway.bar.id}" + vpngatewayid = "${cloudstack_vpn_gateway.foo.id}" +}`, + CLOUDSTACK_VPC_CIDR_1, + CLOUDSTACK_VPC_OFFERING, + CLOUDSTACK_ZONE, + CLOUDSTACK_VPC_CIDR_2, + CLOUDSTACK_VPC_OFFERING, + CLOUDSTACK_ZONE) diff --git a/builtin/providers/cloudstack/resource_cloudstack_vpn_customer_gateway.go b/builtin/providers/cloudstack/resource_cloudstack_vpn_customer_gateway.go new file mode 100644 index 000000000000..f27e28d3856b --- /dev/null +++ b/builtin/providers/cloudstack/resource_cloudstack_vpn_customer_gateway.go @@ -0,0 +1,193 @@ +package cloudstack + +import ( + "fmt" + "log" + "strings" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/xanzy/go-cloudstack/cloudstack" +) + +func resourceCloudStackVPNCustomerGateway() *schema.Resource { + return &schema.Resource{ + Create: resourceCloudStackVPNCustomerGatewayCreate, + Read: resourceCloudStackVPNCustomerGatewayRead, + Update: resourceCloudStackVPNCustomerGatewayUpdate, + Delete: resourceCloudStackVPNCustomerGatewayDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "cidr": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "esp_policy": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "gateway": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "ike_policy": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "ipsec_psk": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "dpd": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Computed: true, + }, + + "esp_lifetime": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Computed: true, + }, + + "ike_lifetime": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Computed: true, + }, + }, + } +} + +func 
resourceCloudStackVPNCustomerGatewayCreate(d *schema.ResourceData, meta interface{}) error { + cs := meta.(*cloudstack.CloudStackClient) + + // Create a new parameter struct + p := cs.VPN.NewCreateVpnCustomerGatewayParams( + d.Get("cidr").(string), + d.Get("esp_policy").(string), + d.Get("gateway").(string), + d.Get("ike_policy").(string), + d.Get("ipsec_psk").(string), + ) + + p.SetName(d.Get("name").(string)) + + if dpd, ok := d.GetOk("dpd"); ok { + p.SetDpd(dpd.(bool)) + } + + if esplifetime, ok := d.GetOk("esp_lifetime"); ok { + p.SetEsplifetime(int64(esplifetime.(int))) + } + + if ikelifetime, ok := d.GetOk("ike_lifetime"); ok { + p.SetIkelifetime(int64(ikelifetime.(int))) + } + + // Create the new VPN Customer Gateway + v, err := cs.VPN.CreateVpnCustomerGateway(p) + if err != nil { + return fmt.Errorf("Error creating VPN Customer Gateway %s: %s", d.Get("name").(string), err) + } + + d.SetId(v.Id) + + return resourceCloudStackVPNCustomerGatewayRead(d, meta) +} + +func resourceCloudStackVPNCustomerGatewayRead(d *schema.ResourceData, meta interface{}) error { + cs := meta.(*cloudstack.CloudStackClient) + + // Get the VPN Customer Gateway details + v, count, err := cs.VPN.GetVpnCustomerGatewayByID(d.Id()) + if err != nil { + if count == 0 { + log.Printf( + "[DEBUG] VPN Customer Gateway %s does no longer exist", d.Get("name").(string)) + d.SetId("") + return nil + } + + return err + } + + d.Set("name", v.Name) + d.Set("cidr", v.Cidrlist) + d.Set("esp_policy", v.Esppolicy) + d.Set("gateway", v.Gateway) + d.Set("ike_policy", v.Ikepolicy) + d.Set("ipsec_psk", v.Ipsecpsk) + d.Set("dpd", v.Dpd) + d.Set("esp_lifetime", int(v.Esplifetime)) + d.Set("ike_lifetime", int(v.Ikelifetime)) + + return nil +} + +func resourceCloudStackVPNCustomerGatewayUpdate(d *schema.ResourceData, meta interface{}) error { + cs := meta.(*cloudstack.CloudStackClient) + + // Create a new parameter struct + p := cs.VPN.NewUpdateVpnCustomerGatewayParams( + d.Get("cidr").(string), + 
d.Get("esp_policy").(string), + d.Get("gateway").(string), + d.Id(), + d.Get("ike_policy").(string), + d.Get("ipsec_psk").(string), + ) + + p.SetName(d.Get("name").(string)) + + if dpd, ok := d.GetOk("dpd"); ok { + p.SetDpd(dpd.(bool)) + } + + if esplifetime, ok := d.GetOk("esp_lifetime"); ok { + p.SetEsplifetime(int64(esplifetime.(int))) + } + + if ikelifetime, ok := d.GetOk("ike_lifetime"); ok { + p.SetIkelifetime(int64(ikelifetime.(int))) + } + + // Update the VPN Customer Gateway + _, err := cs.VPN.UpdateVpnCustomerGateway(p) + if err != nil { + return fmt.Errorf("Error updating VPN Customer Gateway %s: %s", d.Get("name").(string), err) + } + + return resourceCloudStackVPNCustomerGatewayRead(d, meta) +} + +func resourceCloudStackVPNCustomerGatewayDelete(d *schema.ResourceData, meta interface{}) error { + cs := meta.(*cloudstack.CloudStackClient) + + // Create a new parameter struct + p := cs.VPN.NewDeleteVpnCustomerGatewayParams(d.Id()) + + // Delete the VPN Customer Gateway + _, err := cs.VPN.DeleteVpnCustomerGateway(p) + if err != nil { + // This is a very poor way to be told the UUID does no longer exist :( + if strings.Contains(err.Error(), fmt.Sprintf( + "Invalid parameter id value=%s due to incorrect long value format, "+ + "or entity does not exist", d.Id())) { + return nil + } + + return fmt.Errorf("Error deleting VPN Customer Gateway %s: %s", d.Get("name").(string), err) + } + + return nil +} diff --git a/builtin/providers/cloudstack/resource_cloudstack_vpn_customer_gateway_test.go b/builtin/providers/cloudstack/resource_cloudstack_vpn_customer_gateway_test.go new file mode 100644 index 000000000000..b468c76fe97a --- /dev/null +++ b/builtin/providers/cloudstack/resource_cloudstack_vpn_customer_gateway_test.go @@ -0,0 +1,223 @@ +package cloudstack + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + "github.com/xanzy/go-cloudstack/cloudstack" +) + +func 
TestAccCloudStackVPNCustomerGateway_basic(t *testing.T) { + var vpnCustomerGateway cloudstack.VpnCustomerGateway + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckCloudStackVPNCustomerGatewayDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccCloudStackVPNCustomerGateway_basic, + Check: resource.ComposeTestCheckFunc( + testAccCheckCloudStackVPNCustomerGatewayExists( + "cloudstack_vpn_customer_gateway.foo", &vpnCustomerGateway), + testAccCheckCloudStackVPNCustomerGatewayAttributes(&vpnCustomerGateway), + resource.TestCheckResourceAttr( + "cloudstack_vpn_customer_gateway.foo", "name", "terraform-foo"), + resource.TestCheckResourceAttr( + "cloudstack_vpn_customer_gateway.bar", "name", "terraform-bar"), + resource.TestCheckResourceAttr( + "cloudstack_vpn_customer_gateway.foo", "ike_policy", "aes256-sha1"), + resource.TestCheckResourceAttr( + "cloudstack_vpn_customer_gateway.bar", "esp_policy", "aes256-sha1"), + ), + }, + }, + }) +} + +func TestAccCloudStackVPNCustomerGateway_update(t *testing.T) { + var vpnCustomerGateway cloudstack.VpnCustomerGateway + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckCloudStackVPNCustomerGatewayDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccCloudStackVPNCustomerGateway_basic, + Check: resource.ComposeTestCheckFunc( + testAccCheckCloudStackVPNCustomerGatewayExists( + "cloudstack_vpn_customer_gateway.foo", &vpnCustomerGateway), + testAccCheckCloudStackVPNCustomerGatewayAttributes(&vpnCustomerGateway), + resource.TestCheckResourceAttr( + "cloudstack_vpn_customer_gateway.foo", "name", "terraform-foo"), + resource.TestCheckResourceAttr( + "cloudstack_vpn_customer_gateway.bar", "name", "terraform-bar"), + resource.TestCheckResourceAttr( + "cloudstack_vpn_customer_gateway.foo", "ike_policy", "aes256-sha1"), 
+ resource.TestCheckResourceAttr( + "cloudstack_vpn_customer_gateway.bar", "esp_policy", "aes256-sha1"), + ), + }, + + resource.TestStep{ + Config: testAccCloudStackVPNCustomerGateway_update, + Check: resource.ComposeTestCheckFunc( + testAccCheckCloudStackVPNCustomerGatewayExists( + "cloudstack_vpn_customer_gateway.foo", &vpnCustomerGateway), + testAccCheckCloudStackVPNCustomerGatewayAttributes(&vpnCustomerGateway), + resource.TestCheckResourceAttr( + "cloudstack_vpn_customer_gateway.foo", "name", "terraform-foo-bar"), + resource.TestCheckResourceAttr( + "cloudstack_vpn_customer_gateway.bar", "name", "terraform-bar-foo"), + resource.TestCheckResourceAttr( + "cloudstack_vpn_customer_gateway.foo", "ike_policy", "3des-md5"), + resource.TestCheckResourceAttr( + "cloudstack_vpn_customer_gateway.bar", "esp_policy", "3des-md5"), + ), + }, + }, + }) +} + +func testAccCheckCloudStackVPNCustomerGatewayExists( + n string, vpnCustomerGateway *cloudstack.VpnCustomerGateway) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No VPN CustomerGateway ID is set") + } + + cs := testAccProvider.Meta().(*cloudstack.CloudStackClient) + v, _, err := cs.VPN.GetVpnCustomerGatewayByID(rs.Primary.ID) + + if err != nil { + return err + } + + if v.Id != rs.Primary.ID { + return fmt.Errorf("VPN CustomerGateway not found") + } + + *vpnCustomerGateway = *v + + return nil + } +} + +func testAccCheckCloudStackVPNCustomerGatewayAttributes( + vpnCustomerGateway *cloudstack.VpnCustomerGateway) resource.TestCheckFunc { + return func(s *terraform.State) error { + + if vpnCustomerGateway.Esppolicy != "aes256-sha1" { + return fmt.Errorf("Bad ESP policy: %s", vpnCustomerGateway.Esppolicy) + } + + if vpnCustomerGateway.Ikepolicy != "aes256-sha1" { + return fmt.Errorf("Bad IKE policy: %s", vpnCustomerGateway.Ikepolicy) + } + + if 
vpnCustomerGateway.Ipsecpsk != "terraform" { + return fmt.Errorf("Bad IPSEC pre-shared key: %s", vpnCustomerGateway.Ipsecpsk) + } + + return nil + } +} + +func testAccCheckCloudStackVPNCustomerGatewayDestroy(s *terraform.State) error { + cs := testAccProvider.Meta().(*cloudstack.CloudStackClient) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "cloudstack_vpn_customer_gateway" { + continue + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No VPN Customer Gateway ID is set") + } + + p := cs.VPN.NewDeleteVpnCustomerGatewayParams(rs.Primary.ID) + _, err := cs.VPN.DeleteVpnCustomerGateway(p) + + if err != nil { + return fmt.Errorf( + "Error deleting VPN Customer Gateway (%s): %s", + rs.Primary.ID, err) + } + } + + return nil +} + +var testAccCloudStackVPNCustomerGateway_basic = fmt.Sprintf(` +resource "cloudstack_vpc" "foo" { + name = "terraform-vpc-foo" + cidr = "%s" + vpc_offering = "%s" + zone = "%s" +} + +resource "cloudstack_vpc" "bar" { + name = "terraform-vpc-bar" + cidr = "%s" + vpc_offering = "%s" + zone = "%s" +} + +resource "cloudstack_vpn_gateway" "foo" { + vpc = "${cloudstack_vpc.foo.name}" +} + +resource "cloudstack_vpn_gateway" "bar" { + vpc = "${cloudstack_vpc.bar.name}" +} + +resource "cloudstack_vpn_customer_gateway" "foo" { + name = "terraform-foo" + cidr = "${cloudstack_vpc.foo.cidr}" + esp_policy = "aes256-sha1" + gateway = "${cloudstack_vpn_gateway.foo.public_ip}" + ike_policy = "aes256-sha1" + ipsec_psk = "terraform" +} + +resource "cloudstack_vpn_customer_gateway" "bar" { + name = "terraform-bar" + cidr = "${cloudstack_vpc.bar.cidr}" + esp_policy = "aes256-sha1" + gateway = "${cloudstack_vpn_gateway.bar.public_ip}" + ike_policy = "aes256-sha1" + ipsec_psk = "terraform" +}`, + CLOUDSTACK_VPC_CIDR_1, + CLOUDSTACK_VPC_OFFERING, + CLOUDSTACK_ZONE, + CLOUDSTACK_VPC_CIDR_2, + CLOUDSTACK_VPC_OFFERING, + CLOUDSTACK_ZONE) + +var testAccCloudStackVPNCustomerGateway_update = fmt.Sprintf(` +resource "cloudstack_vpn_customer_gateway" 
"foo" { + name = "terraform-foo-bar" + cidr = "${cloudstack_vpc.foo.cidr}" + esp_policy = "3des-md5" + gateway = "${cloudstack_vpn_gateway.foo.public_ip}" + ike_policy = "3des-md5" + ipsec_psk = "terraform" +} + +resource "cloudstack_vpn_customer_gateway" "bar" { + name = "terraform-bar-foo" + cidr = "${cloudstack_vpc.bar.cidr}" + esp_policy = "3des-md5" + gateway = "${cloudstack_vpn_gateway.bar.public_ip}" + ike_policy = "3des-md5" + ipsec_psk = "terraform" +}`) diff --git a/builtin/providers/cloudstack/resource_cloudstack_vpn_gateway.go b/builtin/providers/cloudstack/resource_cloudstack_vpn_gateway.go new file mode 100644 index 000000000000..063c317771f0 --- /dev/null +++ b/builtin/providers/cloudstack/resource_cloudstack_vpn_gateway.go @@ -0,0 +1,97 @@ +package cloudstack + +import ( + "fmt" + "log" + "strings" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/xanzy/go-cloudstack/cloudstack" +) + +func resourceCloudStackVPNGateway() *schema.Resource { + return &schema.Resource{ + Create: resourceCloudStackVPNGatewayCreate, + Read: resourceCloudStackVPNGatewayRead, + Delete: resourceCloudStackVPNGatewayDelete, + + Schema: map[string]*schema.Schema{ + "vpc": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "public_ip": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func resourceCloudStackVPNGatewayCreate(d *schema.ResourceData, meta interface{}) error { + cs := meta.(*cloudstack.CloudStackClient) + + // Retrieve the VPC UUID + vpcid, e := retrieveUUID(cs, "vpc", d.Get("vpc").(string)) + if e != nil { + return e.Error() + } + + // Create a new parameter struct + p := cs.VPN.NewCreateVpnGatewayParams(vpcid) + + // Create the new VPN Gateway + v, err := cs.VPN.CreateVpnGateway(p) + if err != nil { + return fmt.Errorf("Error creating VPN Gateway for VPC %s: %s", d.Get("vpc").(string), err) + } + + d.SetId(v.Id) + + return resourceCloudStackVPNGatewayRead(d, meta) +} + +func 
resourceCloudStackVPNGatewayRead(d *schema.ResourceData, meta interface{}) error { + cs := meta.(*cloudstack.CloudStackClient) + + // Get the VPN Gateway details + v, count, err := cs.VPN.GetVpnGatewayByID(d.Id()) + if err != nil { + if count == 0 { + log.Printf( + "[DEBUG] VPN Gateway for VPC %s does no longer exist", d.Get("vpc").(string)) + d.SetId("") + return nil + } + + return err + } + + d.Set("public_ip", v.Publicip) + + return nil +} + +func resourceCloudStackVPNGatewayDelete(d *schema.ResourceData, meta interface{}) error { + cs := meta.(*cloudstack.CloudStackClient) + + // Create a new parameter struct + p := cs.VPN.NewDeleteVpnGatewayParams(d.Id()) + + // Delete the VPN Gateway + _, err := cs.VPN.DeleteVpnGateway(p) + if err != nil { + // This is a very poor way to be told the UUID does no longer exist :( + if strings.Contains(err.Error(), fmt.Sprintf( + "Invalid parameter id value=%s due to incorrect long value format, "+ + "or entity does not exist", d.Id())) { + return nil + } + + return fmt.Errorf("Error deleting VPN Gateway for VPC %s: %s", d.Get("vpc").(string), err) + } + + return nil +} diff --git a/builtin/providers/cloudstack/resource_cloudstack_vpn_gateway_test.go b/builtin/providers/cloudstack/resource_cloudstack_vpn_gateway_test.go new file mode 100644 index 000000000000..db6c0085a3b1 --- /dev/null +++ b/builtin/providers/cloudstack/resource_cloudstack_vpn_gateway_test.go @@ -0,0 +1,101 @@ +package cloudstack + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + "github.com/xanzy/go-cloudstack/cloudstack" +) + +func TestAccCloudStackVPNGateway_basic(t *testing.T) { + var vpnGateway cloudstack.VpnGateway + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckCloudStackVPNGatewayDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: 
testAccCloudStackVPNGateway_basic, + Check: resource.ComposeTestCheckFunc( + testAccCheckCloudStackVPNGatewayExists( + "cloudstack_vpn_gateway.foo", &vpnGateway), + resource.TestCheckResourceAttr( + "cloudstack_vpn_gateway.foo", "vpc", "terraform-vpc"), + ), + }, + }, + }) +} + +func testAccCheckCloudStackVPNGatewayExists( + n string, vpnGateway *cloudstack.VpnGateway) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No VPN Gateway ID is set") + } + + cs := testAccProvider.Meta().(*cloudstack.CloudStackClient) + v, _, err := cs.VPN.GetVpnGatewayByID(rs.Primary.ID) + + if err != nil { + return err + } + + if v.Id != rs.Primary.ID { + return fmt.Errorf("VPN Gateway not found") + } + + *vpnGateway = *v + + return nil + } +} + +func testAccCheckCloudStackVPNGatewayDestroy(s *terraform.State) error { + cs := testAccProvider.Meta().(*cloudstack.CloudStackClient) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "cloudstack_vpn_gateway" { + continue + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No VPN Gateway ID is set") + } + + p := cs.VPN.NewDeleteVpnGatewayParams(rs.Primary.ID) + _, err := cs.VPN.DeleteVpnGateway(p) + + if err != nil { + return fmt.Errorf( + "Error deleting VPN Gateway (%s): %s", + rs.Primary.ID, err) + } + } + + return nil +} + +var testAccCloudStackVPNGateway_basic = fmt.Sprintf(` +resource "cloudstack_vpc" "foo" { + name = "terraform-vpc" + display_text = "terraform-vpc-text" + cidr = "%s" + vpc_offering = "%s" + zone = "%s" +} + +resource "cloudstack_vpn_gateway" "foo" { + vpc = "${cloudstack_vpc.foo.name}" +}`, + CLOUDSTACK_VPC_CIDR_1, + CLOUDSTACK_VPC_OFFERING, + CLOUDSTACK_ZONE) diff --git a/builtin/providers/cloudstack/resources.go b/builtin/providers/cloudstack/resources.go index acef7b3da954..37f5cb965d34 100644 --- a/builtin/providers/cloudstack/resources.go 
+++ b/builtin/providers/cloudstack/resources.go @@ -40,8 +40,6 @@ func retrieveUUID(cs *cloudstack.CloudStackClient, name, value string) (uuid str uuid, err = cs.VPC.GetVPCOfferingID(value) case "vpc": uuid, err = cs.VPC.GetVPCID(value) - case "template": - uuid, err = cs.Template.GetTemplateID(value, "executable") case "network": uuid, err = cs.Network.GetNetworkID(value) case "zone": @@ -59,6 +57,19 @@ func retrieveUUID(cs *cloudstack.CloudStackClient, name, value string) (uuid str break } err = fmt.Errorf("Could not find UUID of IP address: %s", value) + case "os_type": + p := cs.GuestOS.NewListOsTypesParams() + p.SetDescription(value) + l, e := cs.GuestOS.ListOsTypes(p) + if e != nil { + err = e + break + } + if l.Count == 1 { + uuid = l.OsTypes[0].Id + break + } + err = fmt.Errorf("Could not find UUID of OS Type: %s", value) default: return uuid, &retrieveError{name: name, value: value, err: fmt.Errorf("Unknown request: %s", name)} @@ -71,6 +82,22 @@ func retrieveUUID(cs *cloudstack.CloudStackClient, name, value string) (uuid str return uuid, nil } +func retrieveTemplateUUID(cs *cloudstack.CloudStackClient, zoneid, value string) (uuid string, e *retrieveError) { + // If the supplied value isn't a UUID, try to retrieve the UUID ourselves + if isUUID(value) { + return value, nil + } + + log.Printf("[DEBUG] Retrieving UUID of template: %s", value) + + uuid, err := cs.Template.GetTemplateID(value, "executable", zoneid) + if err != nil { + return uuid, &retrieveError{name: "template", value: value, err: err} + } + + return uuid, nil +} + func isUUID(s string) bool { re := regexp.MustCompile(`^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$`) return re.MatchString(s) diff --git a/builtin/providers/digitalocean/resource_digitalocean_record.go b/builtin/providers/digitalocean/resource_digitalocean_record.go index d365e4706a1d..78a4e891165d 100644 --- a/builtin/providers/digitalocean/resource_digitalocean_record.go +++ 
b/builtin/providers/digitalocean/resource_digitalocean_record.go @@ -91,8 +91,9 @@ func resourceDigitalOceanRecordCreate(d *schema.ResourceData, meta interface{}) func resourceDigitalOceanRecordRead(d *schema.ResourceData, meta interface{}) error { client := meta.(*digitalocean.Client) + domain := d.Get("domain").(string) - rec, err := client.RetrieveRecord(d.Get("domain").(string), d.Id()) + rec, err := client.RetrieveRecord(domain, d.Id()) if err != nil { // If the record is somehow already destroyed, mark as // succesfully gone @@ -104,6 +105,18 @@ func resourceDigitalOceanRecordRead(d *schema.ResourceData, meta interface{}) er return err } + // Update response data for records with domain value + if t := rec.Type; t == "CNAME" || t == "MX" || t == "NS" || t == "SRV" { + // Append dot to response if resource value is absolute + if value := d.Get("value").(string); strings.HasSuffix(value, ".") { + rec.Data += "." + // If resource value ends with current domain, make response data absolute + if strings.HasSuffix(value, domain+".") { + rec.Data += domain + "." 
+ } + } + } + d.Set("name", rec.Name) d.Set("type", rec.Type) d.Set("value", rec.Data) diff --git a/builtin/providers/digitalocean/resource_digitalocean_record_test.go b/builtin/providers/digitalocean/resource_digitalocean_record_test.go index 66ac2bb5f576..139fd30b712d 100644 --- a/builtin/providers/digitalocean/resource_digitalocean_record_test.go +++ b/builtin/providers/digitalocean/resource_digitalocean_record_test.go @@ -76,6 +76,87 @@ func TestAccDigitalOceanRecord_Updated(t *testing.T) { }) } +func TestAccDigitalOceanRecord_HostnameValue(t *testing.T) { + var record digitalocean.Record + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDigitalOceanRecordDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccCheckDigitalOceanRecordConfig_cname, + Check: resource.ComposeTestCheckFunc( + testAccCheckDigitalOceanRecordExists("digitalocean_record.foobar", &record), + testAccCheckDigitalOceanRecordAttributesHostname("a", &record), + resource.TestCheckResourceAttr( + "digitalocean_record.foobar", "name", "terraform"), + resource.TestCheckResourceAttr( + "digitalocean_record.foobar", "domain", "foobar-test-terraform.com"), + resource.TestCheckResourceAttr( + "digitalocean_record.foobar", "value", "a.foobar-test-terraform.com."), + resource.TestCheckResourceAttr( + "digitalocean_record.foobar", "type", "CNAME"), + ), + }, + }, + }) +} + +func TestAccDigitalOceanRecord_RelativeHostnameValue(t *testing.T) { + var record digitalocean.Record + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDigitalOceanRecordDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccCheckDigitalOceanRecordConfig_relative_cname, + Check: resource.ComposeTestCheckFunc( + testAccCheckDigitalOceanRecordExists("digitalocean_record.foobar", &record), + 
testAccCheckDigitalOceanRecordAttributesHostname("a.b", &record), + resource.TestCheckResourceAttr( + "digitalocean_record.foobar", "name", "terraform"), + resource.TestCheckResourceAttr( + "digitalocean_record.foobar", "domain", "foobar-test-terraform.com"), + resource.TestCheckResourceAttr( + "digitalocean_record.foobar", "value", "a.b"), + resource.TestCheckResourceAttr( + "digitalocean_record.foobar", "type", "CNAME"), + ), + }, + }, + }) +} + +func TestAccDigitalOceanRecord_ExternalHostnameValue(t *testing.T) { + var record digitalocean.Record + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDigitalOceanRecordDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccCheckDigitalOceanRecordConfig_external_cname, + Check: resource.ComposeTestCheckFunc( + testAccCheckDigitalOceanRecordExists("digitalocean_record.foobar", &record), + testAccCheckDigitalOceanRecordAttributesHostname("a.foobar-test-terraform.net", &record), + resource.TestCheckResourceAttr( + "digitalocean_record.foobar", "name", "terraform"), + resource.TestCheckResourceAttr( + "digitalocean_record.foobar", "domain", "foobar-test-terraform.com"), + resource.TestCheckResourceAttr( + "digitalocean_record.foobar", "value", "a.foobar-test-terraform.net."), + resource.TestCheckResourceAttr( + "digitalocean_record.foobar", "type", "CNAME"), + ), + }, + }, + }) +} + func testAccCheckDigitalOceanRecordDestroy(s *terraform.State) error { client := testAccProvider.Meta().(*digitalocean.Client) @@ -146,6 +227,17 @@ func testAccCheckDigitalOceanRecordExists(n string, record *digitalocean.Record) } } +func testAccCheckDigitalOceanRecordAttributesHostname(data string, record *digitalocean.Record) resource.TestCheckFunc { + return func(s *terraform.State) error { + + if record.Data != data { + return fmt.Errorf("Bad value: expected %s, got %s", data, record.Data) + } + + return nil + } +} + const 
testAccCheckDigitalOceanRecordConfig_basic = ` resource "digitalocean_domain" "foobar" { name = "foobar-test-terraform.com" @@ -173,3 +265,45 @@ resource "digitalocean_record" "foobar" { value = "192.168.0.11" type = "A" }` + +const testAccCheckDigitalOceanRecordConfig_cname = ` +resource "digitalocean_domain" "foobar" { + name = "foobar-test-terraform.com" + ip_address = "192.168.0.10" +} + +resource "digitalocean_record" "foobar" { + domain = "${digitalocean_domain.foobar.name}" + + name = "terraform" + value = "a.foobar-test-terraform.com." + type = "CNAME" +}` + +const testAccCheckDigitalOceanRecordConfig_relative_cname = ` +resource "digitalocean_domain" "foobar" { + name = "foobar-test-terraform.com" + ip_address = "192.168.0.10" +} + +resource "digitalocean_record" "foobar" { + domain = "${digitalocean_domain.foobar.name}" + + name = "terraform" + value = "a.b" + type = "CNAME" +}` + +const testAccCheckDigitalOceanRecordConfig_external_cname = ` +resource "digitalocean_domain" "foobar" { + name = "foobar-test-terraform.com" + ip_address = "192.168.0.10" +} + +resource "digitalocean_record" "foobar" { + domain = "${digitalocean_domain.foobar.name}" + + name = "terraform" + value = "a.foobar-test-terraform.net." 
+ type = "CNAME" +}` diff --git a/builtin/providers/digitalocean/resource_digitalocean_ssh_key_test.go b/builtin/providers/digitalocean/resource_digitalocean_ssh_key_test.go index d5c50e6f8d3c..009366e18a0f 100644 --- a/builtin/providers/digitalocean/resource_digitalocean_ssh_key_test.go +++ b/builtin/providers/digitalocean/resource_digitalocean_ssh_key_test.go @@ -2,6 +2,8 @@ package digitalocean import ( "fmt" + "strconv" + "strings" "testing" "github.com/hashicorp/terraform/helper/resource" @@ -25,7 +27,7 @@ func TestAccDigitalOceanSSHKey_Basic(t *testing.T) { resource.TestCheckResourceAttr( "digitalocean_ssh_key.foobar", "name", "foobar"), resource.TestCheckResourceAttr( - "digitalocean_ssh_key.foobar", "public_key", "abcdef"), + "digitalocean_ssh_key.foobar", "public_key", testAccValidPublicKey), ), }, }, @@ -82,7 +84,7 @@ func testAccCheckDigitalOceanSSHKeyExists(n string, key *digitalocean.SSHKey) re return err } - if foundKey.Name != rs.Primary.ID { + if strconv.Itoa(int(foundKey.Id)) != rs.Primary.ID { return fmt.Errorf("Record not found") } @@ -92,8 +94,12 @@ func testAccCheckDigitalOceanSSHKeyExists(n string, key *digitalocean.SSHKey) re } } -const testAccCheckDigitalOceanSSHKeyConfig_basic = ` +var testAccCheckDigitalOceanSSHKeyConfig_basic = fmt.Sprintf(` resource "digitalocean_ssh_key" "foobar" { name = "foobar" - public_key = "abcdef" -}` + public_key = "%s" +}`, testAccValidPublicKey) + +var testAccValidPublicKey = strings.TrimSpace(` +ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCKVmnMOlHKcZK8tpt3MP1lqOLAcqcJzhsvJcjscgVERRN7/9484SOBJ3HSKxxNG5JN8owAjy5f9yYwcUg+JaUVuytn5Pv3aeYROHGGg+5G346xaq3DAwX6Y5ykr2fvjObgncQBnuU5KHWCECO/4h8uWuwh/kfniXPVjFToc+gnkqA+3RKpAecZhFXwfalQ9mMuYGFxn+fwn8cYEApsJbsEmb0iJwPiZ5hjFC8wREuiTlhPHDgkBLOiycd20op2nXzDbHfCHInquEe/gYxEitALONxm0swBOwJZwlTDOB7C6y2dzlrtxr1L59m7pCkWI4EtTRLvleehBoj3u7jB4usR +`) diff --git a/builtin/providers/docker/config.go b/builtin/providers/docker/config.go new file mode 100644 index 000000000000..1991827440f7 
--- /dev/null +++ b/builtin/providers/docker/config.go @@ -0,0 +1,33 @@ +package docker + +import ( + "path/filepath" + + dc "github.com/fsouza/go-dockerclient" +) + +// Config is the structure that stores the configuration to talk to a +// Docker API compatible host. +type Config struct { + Host string + CertPath string +} + +// NewClient returns a new Docker client. +func (c *Config) NewClient() (*dc.Client, error) { + // If there is no cert information, then just return the direct client + if c.CertPath == "" { + return dc.NewClient(c.Host) + } + + // If there is cert information, load it and use it. + ca := filepath.Join(c.CertPath, "ca.pem") + cert := filepath.Join(c.CertPath, "cert.pem") + key := filepath.Join(c.CertPath, "key.pem") + return dc.NewTLSClient(c.Host, cert, key, ca) +} + +// Data is a structure for holding data that we fetch from Docker. +type Data struct { + DockerImages map[string]*dc.APIImages +} diff --git a/builtin/providers/docker/provider.go b/builtin/providers/docker/provider.go new file mode 100644 index 000000000000..fdc8b771949f --- /dev/null +++ b/builtin/providers/docker/provider.go @@ -0,0 +1,54 @@ +package docker + +import ( + "fmt" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" +) + +func Provider() terraform.ResourceProvider { + return &schema.Provider{ + Schema: map[string]*schema.Schema{ + "host": &schema.Schema{ + Type: schema.TypeString, + Required: true, + DefaultFunc: schema.EnvDefaultFunc("DOCKER_HOST", "unix:/run/docker.sock"), + Description: "The Docker daemon address", + }, + + "cert_path": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + DefaultFunc: schema.EnvDefaultFunc("DOCKER_CERT_PATH", ""), + Description: "Path to directory with Docker TLS config", + }, + }, + + ResourcesMap: map[string]*schema.Resource{ + "docker_container": resourceDockerContainer(), + "docker_image": resourceDockerImage(), + }, + + ConfigureFunc: providerConfigure, + } +} + +func 
providerConfigure(d *schema.ResourceData) (interface{}, error) { + config := Config{ + Host: d.Get("host").(string), + CertPath: d.Get("cert_path").(string), + } + + client, err := config.NewClient() + if err != nil { + return nil, fmt.Errorf("Error initializing Docker client: %s", err) + } + + err = client.Ping() + if err != nil { + return nil, fmt.Errorf("Error pinging Docker server: %s", err) + } + + return client, nil +} diff --git a/builtin/providers/docker/provider_test.go b/builtin/providers/docker/provider_test.go new file mode 100644 index 000000000000..d0910488938a --- /dev/null +++ b/builtin/providers/docker/provider_test.go @@ -0,0 +1,36 @@ +package docker + +import ( + "os/exec" + "testing" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" +) + +var testAccProviders map[string]terraform.ResourceProvider +var testAccProvider *schema.Provider + +func init() { + testAccProvider = Provider().(*schema.Provider) + testAccProviders = map[string]terraform.ResourceProvider{ + "docker": testAccProvider, + } +} + +func TestProvider(t *testing.T) { + if err := Provider().(*schema.Provider).InternalValidate(); err != nil { + t.Fatalf("err: %s", err) + } +} + +func TestProvider_impl(t *testing.T) { + var _ terraform.ResourceProvider = Provider() +} + +func testAccPreCheck(t *testing.T) { + cmd := exec.Command("docker", "version") + if err := cmd.Run(); err != nil { + t.Fatalf("Docker must be available: %s", err) + } +} diff --git a/builtin/providers/docker/resource_docker_container.go b/builtin/providers/docker/resource_docker_container.go new file mode 100644 index 000000000000..50b501ca2483 --- /dev/null +++ b/builtin/providers/docker/resource_docker_container.go @@ -0,0 +1,222 @@ +package docker + +import ( + "bytes" + "fmt" + + "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceDockerContainer() *schema.Resource { + return &schema.Resource{ + Create: 
resourceDockerContainerCreate, + Read: resourceDockerContainerRead, + Update: resourceDockerContainerUpdate, + Delete: resourceDockerContainerDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + // Indicates whether the container must be running. + // + // An assumption is made that configured containers + // should be running; if not, they should not be in + // the configuration. Therefore a stopped container + // should be started. Set to false to have the + // provider leave the container alone. + // + // Actively-debugged containers are likely to be + // stopped and started manually, and Docker has + // some provisions for restarting containers that + // stop. The utility here comes from the fact that + // this will delete and re-create the container + // following the principle that the containers + // should be pristine when started. + "must_run": &schema.Schema{ + Type: schema.TypeBool, + Default: true, + Optional: true, + }, + + // ForceNew is not true for image because we need to + // sanity check this against Docker image IDs, as each image + // can have multiple names/tags attached to it. 
+ "image": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "hostname": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "domainname": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "command": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "dns": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: stringSetHash, + }, + + "publish_all_ports": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + }, + + "volumes": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + ForceNew: true, + Elem: getVolumesElem(), + Set: resourceDockerVolumesHash, + }, + + "ports": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + ForceNew: true, + Elem: getPortsElem(), + Set: resourceDockerPortsHash, + }, + + "env": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: stringSetHash, + }, + }, + } +} + +func getVolumesElem() *schema.Resource { + return &schema.Resource{ + Schema: map[string]*schema.Schema{ + "from_container": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "container_path": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "host_path": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "read_only": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + }, + }, + } +} + +func getPortsElem() *schema.Resource { + return &schema.Resource{ + Schema: map[string]*schema.Schema{ + "internal": &schema.Schema{ + Type: schema.TypeInt, + Required: true, + ForceNew: true, + }, + + "external": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, 
+ ForceNew: true, + }, + + "ip": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "protocol": &schema.Schema{ + Type: schema.TypeString, + Default: "tcp", + Optional: true, + ForceNew: true, + }, + }, + } +} + +func resourceDockerPortsHash(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + + buf.WriteString(fmt.Sprintf("%v-", m["internal"].(int))) + + if v, ok := m["external"]; ok { + buf.WriteString(fmt.Sprintf("%v-", v.(int))) + } + + if v, ok := m["ip"]; ok { + buf.WriteString(fmt.Sprintf("%v-", v.(string))) + } + + if v, ok := m["protocol"]; ok { + buf.WriteString(fmt.Sprintf("%v-", v.(string))) + } + + return hashcode.String(buf.String()) +} + +func resourceDockerVolumesHash(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + + if v, ok := m["from_container"]; ok { + buf.WriteString(fmt.Sprintf("%v-", v.(string))) + } + + if v, ok := m["container_path"]; ok { + buf.WriteString(fmt.Sprintf("%v-", v.(string))) + } + + if v, ok := m["host_path"]; ok { + buf.WriteString(fmt.Sprintf("%v-", v.(string))) + } + + if v, ok := m["read_only"]; ok { + buf.WriteString(fmt.Sprintf("%v-", v.(bool))) + } + + return hashcode.String(buf.String()) +} + +func stringSetHash(v interface{}) int { + return hashcode.String(v.(string)) +} diff --git a/builtin/providers/docker/resource_docker_container_funcs.go b/builtin/providers/docker/resource_docker_container_funcs.go new file mode 100644 index 000000000000..17a8e4eeddd8 --- /dev/null +++ b/builtin/providers/docker/resource_docker_container_funcs.go @@ -0,0 +1,267 @@ +package docker + +import ( + "errors" + "fmt" + "strconv" + "strings" + + dc "github.com/fsouza/go-dockerclient" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceDockerContainerCreate(d *schema.ResourceData, meta interface{}) error { + var err error + client := meta.(*dc.Client) + + var data Data + if err := fetchLocalImages(&data, client); err != nil { + 
return err + } + + image := d.Get("image").(string) + if _, ok := data.DockerImages[image]; !ok { + if _, ok := data.DockerImages[image+":latest"]; !ok { + return fmt.Errorf("Unable to find image %s", image) + } else { + image = image + ":latest" + } + } + + // The awesome, wonderful, splendiferous, sensical + // Docker API now lets you specify a HostConfig in + // CreateContainerOptions, but in my testing it still only + // actually applies HostConfig options set in StartContainer. + // How cool is that? + createOpts := dc.CreateContainerOptions{ + Name: d.Get("name").(string), + Config: &dc.Config{ + Image: image, + Hostname: d.Get("hostname").(string), + Domainname: d.Get("domainname").(string), + }, + } + + if v, ok := d.GetOk("env"); ok { + createOpts.Config.Env = stringSetToStringSlice(v.(*schema.Set)) + } + + if v, ok := d.GetOk("command"); ok { + createOpts.Config.Cmd = stringListToStringSlice(v.([]interface{})) + } + + exposedPorts := map[dc.Port]struct{}{} + portBindings := map[dc.Port][]dc.PortBinding{} + + if v, ok := d.GetOk("ports"); ok { + exposedPorts, portBindings = portSetToDockerPorts(v.(*schema.Set)) + } + if len(exposedPorts) != 0 { + createOpts.Config.ExposedPorts = exposedPorts + } + + volumes := map[string]struct{}{} + binds := []string{} + volumesFrom := []string{} + + if v, ok := d.GetOk("volumes"); ok { + volumes, binds, volumesFrom, err = volumeSetToDockerVolumes(v.(*schema.Set)) + if err != nil { + return fmt.Errorf("Unable to parse volumes: %s", err) + } + } + if len(volumes) != 0 { + createOpts.Config.Volumes = volumes + } + + var retContainer *dc.Container + if retContainer, err = client.CreateContainer(createOpts); err != nil { + return fmt.Errorf("Unable to create container: %s", err) + } + if retContainer == nil { + return fmt.Errorf("Returned container is nil") + } + + d.SetId(retContainer.ID) + + hostConfig := &dc.HostConfig{ + PublishAllPorts: d.Get("publish_all_ports").(bool), + } + + if len(portBindings) != 0 { + 
hostConfig.PortBindings = portBindings + } + + if len(binds) != 0 { + hostConfig.Binds = binds + } + if len(volumesFrom) != 0 { + hostConfig.VolumesFrom = volumesFrom + } + + if v, ok := d.GetOk("dns"); ok { + hostConfig.DNS = stringSetToStringSlice(v.(*schema.Set)) + } + + if err := client.StartContainer(retContainer.ID, hostConfig); err != nil { + return fmt.Errorf("Unable to start container: %s", err) + } + + return resourceDockerContainerRead(d, meta) +} + +func resourceDockerContainerRead(d *schema.ResourceData, meta interface{}) error { + client := meta.(*dc.Client) + + apiContainer, err := fetchDockerContainer(d.Get("name").(string), client) + if err != nil { + return err + } + + if apiContainer == nil { + // This container doesn't exist anymore + d.SetId("") + + return nil + } + + container, err := client.InspectContainer(apiContainer.ID) + if err != nil { + return fmt.Errorf("Error inspecting container %s: %s", apiContainer.ID, err) + } + + if d.Get("must_run").(bool) && !container.State.Running { + return resourceDockerContainerDelete(d, meta) + } + + return nil +} + +func resourceDockerContainerUpdate(d *schema.ResourceData, meta interface{}) error { + return nil +} + +func resourceDockerContainerDelete(d *schema.ResourceData, meta interface{}) error { + client := meta.(*dc.Client) + + removeOpts := dc.RemoveContainerOptions{ + ID: d.Id(), + RemoveVolumes: true, + Force: true, + } + + if err := client.RemoveContainer(removeOpts); err != nil { + return fmt.Errorf("Error deleting container %s: %s", d.Id(), err) + } + + d.SetId("") + return nil +} + +func stringListToStringSlice(stringList []interface{}) []string { + ret := []string{} + for _, v := range stringList { + ret = append(ret, v.(string)) + } + return ret +} + +func stringSetToStringSlice(stringSet *schema.Set) []string { + ret := []string{} + if stringSet == nil { + return ret + } + for _, envVal := range stringSet.List() { + ret = append(ret, envVal.(string)) + } + return ret +} + +func 
fetchDockerContainer(name string, client *dc.Client) (*dc.APIContainers, error) { + apiContainers, err := client.ListContainers(dc.ListContainersOptions{All: true}) + + if err != nil { + return nil, fmt.Errorf("Error fetching container information from Docker: %s\n", err) + } + + for _, apiContainer := range apiContainers { + // Sometimes the Docker API prefixes container names with / + // like it does in these commands. But if there's no + // set name, it just uses the ID without a /...ugh. + var dockerContainerName string + if len(apiContainer.Names) > 0 { + dockerContainerName = strings.TrimLeft(apiContainer.Names[0], "/") + } else { + dockerContainerName = apiContainer.ID + } + + if dockerContainerName == name { + return &apiContainer, nil + } + } + + return nil, nil +} + +func portSetToDockerPorts(ports *schema.Set) (map[dc.Port]struct{}, map[dc.Port][]dc.PortBinding) { + retExposedPorts := map[dc.Port]struct{}{} + retPortBindings := map[dc.Port][]dc.PortBinding{} + + for _, portInt := range ports.List() { + port := portInt.(map[string]interface{}) + internal := port["internal"].(int) + protocol := port["protocol"].(string) + + exposedPort := dc.Port(strconv.Itoa(internal) + "/" + protocol) + retExposedPorts[exposedPort] = struct{}{} + + external, extOk := port["external"].(int) + ip, ipOk := port["ip"].(string) + + if extOk { + portBinding := dc.PortBinding{ + HostPort: strconv.Itoa(external), + } + if ipOk { + portBinding.HostIP = ip + } + retPortBindings[exposedPort] = append(retPortBindings[exposedPort], portBinding) + } + } + + return retExposedPorts, retPortBindings +} + +func volumeSetToDockerVolumes(volumes *schema.Set) (map[string]struct{}, []string, []string, error) { + retVolumeMap := map[string]struct{}{} + retHostConfigBinds := []string{} + retVolumeFromContainers := []string{} + + for _, volumeInt := range volumes.List() { + volume := volumeInt.(map[string]interface{}) + fromContainer := volume["from_container"].(string) + containerPath := 
volume["container_path"].(string) + hostPath := volume["host_path"].(string) + readOnly := volume["read_only"].(bool) + + switch { + case len(fromContainer) == 0 && len(containerPath) == 0: + return retVolumeMap, retHostConfigBinds, retVolumeFromContainers, errors.New("Volume entry without container path or source container") + case len(fromContainer) != 0 && len(containerPath) != 0: + return retVolumeMap, retHostConfigBinds, retVolumeFromContainers, errors.New("Both a container and a path specified in a volume entry") + case len(fromContainer) != 0: + retVolumeFromContainers = append(retVolumeFromContainers, fromContainer) + case len(hostPath) != 0: + readWrite := "rw" + if readOnly { + readWrite = "ro" + } + retVolumeMap[containerPath] = struct{}{} + retHostConfigBinds = append(retHostConfigBinds, hostPath+":"+containerPath+":"+readWrite) + default: + retVolumeMap[containerPath] = struct{}{} + } + } + + return retVolumeMap, retHostConfigBinds, retVolumeFromContainers, nil +} diff --git a/builtin/providers/docker/resource_docker_container_test.go b/builtin/providers/docker/resource_docker_container_test.go new file mode 100644 index 000000000000..48302d096062 --- /dev/null +++ b/builtin/providers/docker/resource_docker_container_test.go @@ -0,0 +1,63 @@ +package docker + +import ( + "fmt" + "testing" + + dc "github.com/fsouza/go-dockerclient" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccDockerContainer_basic(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccDockerContainerConfig, + Check: resource.ComposeTestCheckFunc( + testAccContainerRunning("docker_container.foo"), + ), + }, + }, + }) +} + +func testAccContainerRunning(n string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + 
return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + client := testAccProvider.Meta().(*dc.Client) + containers, err := client.ListContainers(dc.ListContainersOptions{}) + if err != nil { + return err + } + + for _, c := range containers { + if c.ID == rs.Primary.ID { + return nil + } + } + + return fmt.Errorf("Container not found: %s", rs.Primary.ID) + } +} + +const testAccDockerContainerConfig = ` +resource "docker_image" "foo" { + name = "ubuntu:trusty-20150320" +} + +resource "docker_container" "foo" { + name = "tf-test" + image = "${docker_image.foo.latest}" +} +` diff --git a/builtin/providers/docker/resource_docker_image.go b/builtin/providers/docker/resource_docker_image.go new file mode 100644 index 000000000000..54822d738ebc --- /dev/null +++ b/builtin/providers/docker/resource_docker_image.go @@ -0,0 +1,31 @@ +package docker + +import ( + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceDockerImage() *schema.Resource { + return &schema.Resource{ + Create: resourceDockerImageCreate, + Read: resourceDockerImageRead, + Update: resourceDockerImageUpdate, + Delete: resourceDockerImageDelete, + + Schema: map[string]*schema.Schema{ + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + + "keep_updated": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + }, + + "latest": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + }, + } +} diff --git a/builtin/providers/docker/resource_docker_image_funcs.go b/builtin/providers/docker/resource_docker_image_funcs.go new file mode 100644 index 000000000000..2c7470db0487 --- /dev/null +++ b/builtin/providers/docker/resource_docker_image_funcs.go @@ -0,0 +1,173 @@ +package docker + +import ( + "fmt" + "strings" + + dc "github.com/fsouza/go-dockerclient" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceDockerImageCreate(d *schema.ResourceData, meta interface{}) error { + client := 
meta.(*dc.Client) + apiImage, err := findImage(d, client) + if err != nil { + return fmt.Errorf("Unable to read Docker image into resource: %s", err) + } + + d.SetId(apiImage.ID + d.Get("name").(string)) + d.Set("latest", apiImage.ID) + + return nil +} + +func resourceDockerImageRead(d *schema.ResourceData, meta interface{}) error { + client := meta.(*dc.Client) + apiImage, err := findImage(d, client) + if err != nil { + return fmt.Errorf("Unable to read Docker image into resource: %s", err) + } + + d.Set("latest", apiImage.ID) + + return nil +} + +func resourceDockerImageUpdate(d *schema.ResourceData, meta interface{}) error { + // We need to re-read in case switching parameters affects + // the value of "latest" or others + + return resourceDockerImageRead(d, meta) +} + +func resourceDockerImageDelete(d *schema.ResourceData, meta interface{}) error { + d.SetId("") + return nil +} + +func fetchLocalImages(data *Data, client *dc.Client) error { + images, err := client.ListImages(dc.ListImagesOptions{All: false}) + if err != nil { + return fmt.Errorf("Unable to list Docker images: %s", err) + } + + if data.DockerImages == nil { + data.DockerImages = make(map[string]*dc.APIImages) + } + + // Docker uses different nomenclatures in different places...sometimes a short + // ID, sometimes long, etc. So we store both in the map so we can always find + // the same image object. We store the tags, too. + for i, image := range images { + data.DockerImages[image.ID[:12]] = &images[i] + data.DockerImages[image.ID] = &images[i] + for _, repotag := range image.RepoTags { + data.DockerImages[repotag] = &images[i] + } + } + + return nil +} + +func pullImage(data *Data, client *dc.Client, image string) error { + // TODO: Test local registry handling. 
It should be working + // based on the code that was ported over + + pullOpts := dc.PullImageOptions{} + + splitImageName := strings.Split(image, ":") + switch { + + // It's in registry:port/repo:tag format + case len(splitImageName) == 3: + splitPortRepo := strings.Split(splitImageName[1], "/") + pullOpts.Registry = splitImageName[0] + ":" + splitPortRepo[0] + pullOpts.Repository = splitPortRepo[1] + pullOpts.Tag = splitImageName[2] + + // It's either registry:port/repo or repo:tag with default registry + case len(splitImageName) == 2: + splitPortRepo := strings.Split(splitImageName[1], "/") + switch len(splitPortRepo) { + + // registry:port/repo + case 2: + pullOpts.Registry = splitImageName[0] + ":" + splitPortRepo[0] + pullOpts.Repository = splitPortRepo[1] + pullOpts.Tag = "latest" + + // repo:tag + case 1: + pullOpts.Repository = splitImageName[0] + pullOpts.Tag = splitImageName[1] + } + + default: + pullOpts.Repository = image + } + + if err := client.PullImage(pullOpts, dc.AuthConfiguration{}); err != nil { + return fmt.Errorf("Error pulling image %s: %s\n", image, err) + } + + return fetchLocalImages(data, client) +} + +func getImageTag(image string) string { + splitImageName := strings.Split(image, ":") + switch { + + // It's in registry:port/repo:tag format + case len(splitImageName) == 3: + return splitImageName[2] + + // It's either registry:port/repo or repo:tag with default registry + case len(splitImageName) == 2: + splitPortRepo := strings.Split(splitImageName[1], "/") + if len(splitPortRepo) == 2 { + return "" + } else { + return splitImageName[1] + } + } + + return "" +} + +func findImage(d *schema.ResourceData, client *dc.Client) (*dc.APIImages, error) { + var data Data + if err := fetchLocalImages(&data, client); err != nil { + return nil, err + } + + imageName := d.Get("name").(string) + if imageName == "" { + return nil, fmt.Errorf("Empty image name is not allowed") + } + + searchLocal := func() *dc.APIImages { + if apiImage, ok := 
data.DockerImages[imageName]; ok { + return apiImage + } + if apiImage, ok := data.DockerImages[imageName+":latest"]; ok { + imageName = imageName + ":latest" + return apiImage + } + return nil + } + + foundImage := searchLocal() + + if d.Get("keep_updated").(bool) || foundImage == nil { + if err := pullImage(&data, client, imageName); err != nil { + return nil, fmt.Errorf("Unable to pull image %s: %s", imageName, err) + } + } + + foundImage = searchLocal() + if foundImage != nil { + return foundImage, nil + } + + return nil, fmt.Errorf("Unable to find or pull image %s", imageName) +} diff --git a/builtin/providers/docker/resource_docker_image_test.go b/builtin/providers/docker/resource_docker_image_test.go new file mode 100644 index 000000000000..d43c81efc0f7 --- /dev/null +++ b/builtin/providers/docker/resource_docker_image_test.go @@ -0,0 +1,32 @@ +package docker + +import ( + "testing" + + "github.com/hashicorp/terraform/helper/resource" +) + +func TestAccDockerImage_basic(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccDockerImageConfig, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr( + "docker_image.foo", + "latest", + "d0955f21bf24f5bfffd32d2d0bb669d0564701c271bc3dfc64cfc5adfdec2d07"), + ), + }, + }, + }) +} + +const testAccDockerImageConfig = ` +resource "docker_image" "foo" { + name = "ubuntu:trusty-20150320" + keep_updated = true +} +` diff --git a/builtin/providers/google/config.go b/builtin/providers/google/config.go index 9ae889482e7a..254cb3ebfe73 100644 --- a/builtin/providers/google/config.go +++ b/builtin/providers/google/config.go @@ -7,11 +7,10 @@ import ( "net/http" "os" - "code.google.com/p/google-api-go-client/compute/v1" - "golang.org/x/oauth2" "golang.org/x/oauth2/google" "golang.org/x/oauth2/jwt" + "google.golang.org/api/compute/v1" ) // Config is the configuration 
structure used to instantiate the Google diff --git a/builtin/providers/google/disk_type.go b/builtin/providers/google/disk_type.go index dfea866db2fa..1653337be436 100644 --- a/builtin/providers/google/disk_type.go +++ b/builtin/providers/google/disk_type.go @@ -1,7 +1,7 @@ package google import ( - "code.google.com/p/google-api-go-client/compute/v1" + "google.golang.org/api/compute/v1" ) // readDiskType finds the disk type with the given name. diff --git a/builtin/providers/google/operation.go b/builtin/providers/google/operation.go index 32bf79a5eb15..b1f2f255bc54 100644 --- a/builtin/providers/google/operation.go +++ b/builtin/providers/google/operation.go @@ -4,7 +4,8 @@ import ( "bytes" "fmt" - "code.google.com/p/google-api-go-client/compute/v1" + "google.golang.org/api/compute/v1" + "github.com/hashicorp/terraform/helper/resource" ) diff --git a/builtin/providers/google/resource_compute_address.go b/builtin/providers/google/resource_compute_address.go index d67ceb190ab9..9bb9547fe8b7 100644 --- a/builtin/providers/google/resource_compute_address.go +++ b/builtin/providers/google/resource_compute_address.go @@ -5,9 +5,9 @@ import ( "log" "time" - "code.google.com/p/google-api-go-client/compute/v1" - "code.google.com/p/google-api-go-client/googleapi" "github.com/hashicorp/terraform/helper/schema" + "google.golang.org/api/compute/v1" + "google.golang.org/api/googleapi" ) func resourceComputeAddress() *schema.Resource { diff --git a/builtin/providers/google/resource_compute_address_test.go b/builtin/providers/google/resource_compute_address_test.go index ba87169d6a99..90988bb2ce58 100644 --- a/builtin/providers/google/resource_compute_address_test.go +++ b/builtin/providers/google/resource_compute_address_test.go @@ -4,9 +4,9 @@ import ( "fmt" "testing" - "code.google.com/p/google-api-go-client/compute/v1" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" + "google.golang.org/api/compute/v1" ) func 
TestAccComputeAddress_basic(t *testing.T) { diff --git a/builtin/providers/google/resource_compute_disk.go b/builtin/providers/google/resource_compute_disk.go index 72457b9ac298..56b7ed25f0f8 100644 --- a/builtin/providers/google/resource_compute_disk.go +++ b/builtin/providers/google/resource_compute_disk.go @@ -5,9 +5,9 @@ import ( "log" "time" - "code.google.com/p/google-api-go-client/compute/v1" - "code.google.com/p/google-api-go-client/googleapi" "github.com/hashicorp/terraform/helper/schema" + "google.golang.org/api/compute/v1" + "google.golang.org/api/googleapi" ) func resourceComputeDisk() *schema.Resource { diff --git a/builtin/providers/google/resource_compute_disk_test.go b/builtin/providers/google/resource_compute_disk_test.go index f99d9ed629f8..659affff8eb2 100644 --- a/builtin/providers/google/resource_compute_disk_test.go +++ b/builtin/providers/google/resource_compute_disk_test.go @@ -4,9 +4,9 @@ import ( "fmt" "testing" - "code.google.com/p/google-api-go-client/compute/v1" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" + "google.golang.org/api/compute/v1" ) func TestAccComputeDisk_basic(t *testing.T) { diff --git a/builtin/providers/google/resource_compute_firewall.go b/builtin/providers/google/resource_compute_firewall.go index 09d9ca250874..2a2433a87de4 100644 --- a/builtin/providers/google/resource_compute_firewall.go +++ b/builtin/providers/google/resource_compute_firewall.go @@ -6,10 +6,10 @@ import ( "sort" "time" - "code.google.com/p/google-api-go-client/compute/v1" - "code.google.com/p/google-api-go-client/googleapi" "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/schema" + "google.golang.org/api/compute/v1" + "google.golang.org/api/googleapi" ) func resourceComputeFirewall() *schema.Resource { diff --git a/builtin/providers/google/resource_compute_firewall_test.go b/builtin/providers/google/resource_compute_firewall_test.go index 
9bb92af20bb0..a4a489fba1d4 100644 --- a/builtin/providers/google/resource_compute_firewall_test.go +++ b/builtin/providers/google/resource_compute_firewall_test.go @@ -4,9 +4,9 @@ import ( "fmt" "testing" - "code.google.com/p/google-api-go-client/compute/v1" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" + "google.golang.org/api/compute/v1" ) func TestAccComputeFirewall_basic(t *testing.T) { diff --git a/builtin/providers/google/resource_compute_forwarding_rule.go b/builtin/providers/google/resource_compute_forwarding_rule.go index e8737434425b..8138ead83794 100644 --- a/builtin/providers/google/resource_compute_forwarding_rule.go +++ b/builtin/providers/google/resource_compute_forwarding_rule.go @@ -5,9 +5,9 @@ import ( "log" "time" - "code.google.com/p/google-api-go-client/compute/v1" - "code.google.com/p/google-api-go-client/googleapi" "github.com/hashicorp/terraform/helper/schema" + "google.golang.org/api/compute/v1" + "google.golang.org/api/googleapi" ) func resourceComputeForwardingRule() *schema.Resource { diff --git a/builtin/providers/google/resource_compute_http_health_check.go b/builtin/providers/google/resource_compute_http_health_check.go index 68a4c1348e88..7f059b860d5d 100644 --- a/builtin/providers/google/resource_compute_http_health_check.go +++ b/builtin/providers/google/resource_compute_http_health_check.go @@ -5,9 +5,9 @@ import ( "log" "time" - "code.google.com/p/google-api-go-client/compute/v1" - "code.google.com/p/google-api-go-client/googleapi" "github.com/hashicorp/terraform/helper/schema" + "google.golang.org/api/compute/v1" + "google.golang.org/api/googleapi" ) func resourceComputeHttpHealthCheck() *schema.Resource { diff --git a/builtin/providers/google/resource_compute_instance.go b/builtin/providers/google/resource_compute_instance.go index 3b3e86dede36..c7f0f8d37535 100644 --- a/builtin/providers/google/resource_compute_instance.go +++ b/builtin/providers/google/resource_compute_instance.go 
@@ -5,10 +5,10 @@ import ( "log" "time" - "code.google.com/p/google-api-go-client/compute/v1" - "code.google.com/p/google-api-go-client/googleapi" "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/schema" + "google.golang.org/api/compute/v1" + "google.golang.org/api/googleapi" ) func resourceComputeInstance() *schema.Resource { @@ -72,6 +72,13 @@ func resourceComputeInstance() *schema.Resource { "auto_delete": &schema.Schema{ Type: schema.TypeBool, Optional: true, + Default: true, + ForceNew: true, + }, + + "size": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, ForceNew: true, }, }, @@ -283,11 +290,7 @@ func resourceComputeInstanceCreate(d *schema.ResourceData, meta interface{}) err disk.Type = "PERSISTENT" disk.Mode = "READ_WRITE" disk.Boot = i == 0 - disk.AutoDelete = true - - if v, ok := d.GetOk(prefix + ".auto_delete"); ok { - disk.AutoDelete = v.(bool) - } + disk.AutoDelete = d.Get(prefix + ".auto_delete").(bool) // Load up the disk for this disk if specified if v, ok := d.GetOk(prefix + ".disk"); ok { @@ -331,6 +334,11 @@ func resourceComputeInstanceCreate(d *schema.ResourceData, meta interface{}) err disk.InitializeParams.DiskType = diskType.SelfLink } + if v, ok := d.GetOk(prefix + ".size"); ok { + diskSizeGb := v.(int) + disk.InitializeParams.DiskSizeGb = int64(diskSizeGb) + } + disks = append(disks, &disk) } @@ -514,7 +522,7 @@ func resourceComputeInstanceRead(d *schema.ResourceData, meta interface{}) error networks := make([]map[string]interface{}, 0, 1) if networksCount > 0 { // TODO: Remove this when realizing deprecation of .network - for _, iface := range instance.NetworkInterfaces { + for i, iface := range instance.NetworkInterfaces { var natIP string for _, config := range iface.AccessConfigs { if config.Type == "ONE_TO_ONE_NAT" { @@ -531,6 +539,7 @@ func resourceComputeInstanceRead(d *schema.ResourceData, meta interface{}) error network["name"] = iface.Name network["external_address"] = natIP 
network["internal_address"] = iface.NetworkIP + network["source"] = d.Get(fmt.Sprintf("network.%d.source", i)) networks = append(networks, network) } } @@ -538,7 +547,7 @@ func resourceComputeInstanceRead(d *schema.ResourceData, meta interface{}) error networkInterfaces := make([]map[string]interface{}, 0, 1) if networkInterfacesCount > 0 { - for _, iface := range instance.NetworkInterfaces { + for i, iface := range instance.NetworkInterfaces { // The first non-empty ip is left in natIP var natIP string accessConfigs := make( @@ -564,6 +573,7 @@ func resourceComputeInstanceRead(d *schema.ResourceData, meta interface{}) error networkInterfaces = append(networkInterfaces, map[string]interface{}{ "name": iface.Name, "address": iface.NetworkIP, + "network": d.Get(fmt.Sprintf("network_interface.%d.network", i)), "access_config": accessConfigs, }) } diff --git a/builtin/providers/google/resource_compute_instance_template.go b/builtin/providers/google/resource_compute_instance_template.go index 074e45695003..1eb907fd478e 100644 --- a/builtin/providers/google/resource_compute_instance_template.go +++ b/builtin/providers/google/resource_compute_instance_template.go @@ -4,10 +4,10 @@ import ( "fmt" "time" - "code.google.com/p/google-api-go-client/compute/v1" - "code.google.com/p/google-api-go-client/googleapi" "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/schema" + "google.golang.org/api/compute/v1" + "google.golang.org/api/googleapi" ) func resourceComputeInstanceTemplate() *schema.Resource { @@ -58,6 +58,7 @@ func resourceComputeInstanceTemplate() *schema.Resource { "auto_delete": &schema.Schema{ Type: schema.TypeBool, Optional: true, + Default: true, ForceNew: true, }, @@ -235,11 +236,7 @@ func buildDisks(d *schema.ResourceData, meta interface{}) []*compute.AttachedDis disk.Mode = "READ_WRITE" disk.Interface = "SCSI" disk.Boot = i == 0 - disk.AutoDelete = true - - if v, ok := d.GetOk(prefix + ".auto_delete"); ok { - 
disk.AutoDelete = v.(bool) - } + disk.AutoDelete = d.Get(prefix + ".auto_delete").(bool) if v, ok := d.GetOk(prefix + ".boot"); ok { disk.Boot = v.(bool) diff --git a/builtin/providers/google/resource_compute_instance_template_test.go b/builtin/providers/google/resource_compute_instance_template_test.go index 74133089d6de..f9b3ac2b45d3 100644 --- a/builtin/providers/google/resource_compute_instance_template_test.go +++ b/builtin/providers/google/resource_compute_instance_template_test.go @@ -4,9 +4,9 @@ import ( "fmt" "testing" - "code.google.com/p/google-api-go-client/compute/v1" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" + "google.golang.org/api/compute/v1" ) func TestAccComputeInstanceTemplate_basic(t *testing.T) { @@ -65,7 +65,7 @@ func TestAccComputeInstanceTemplate_disks(t *testing.T) { testAccCheckComputeInstanceTemplateExists( "google_compute_instance_template.foobar", &instanceTemplate), testAccCheckComputeInstanceTemplateDisk(&instanceTemplate, "debian-7-wheezy-v20140814", true, true), - testAccCheckComputeInstanceTemplateDisk(&instanceTemplate, "foo_existing_disk", false, false), + testAccCheckComputeInstanceTemplateDisk(&instanceTemplate, "terraform-test-foobar", false, false), ), }, }, @@ -252,6 +252,14 @@ resource "google_compute_instance_template" "foobar" { }` const testAccComputeInstanceTemplate_disks = ` +resource "google_compute_disk" "foobar" { + name = "terraform-test-foobar" + image = "debian-7-wheezy-v20140814" + size = 10 + type = "pd-ssd" + zone = "us-central1-a" +} + resource "google_compute_instance_template" "foobar" { name = "terraform-test" machine_type = "n1-standard-1" @@ -263,7 +271,7 @@ resource "google_compute_instance_template" "foobar" { } disk { - source = "foo_existing_disk" + source = "terraform-test-foobar" auto_delete = false boot = false } diff --git a/builtin/providers/google/resource_compute_instance_test.go b/builtin/providers/google/resource_compute_instance_test.go 
index 9d16db521074..612282b16ad2 100644 --- a/builtin/providers/google/resource_compute_instance_test.go +++ b/builtin/providers/google/resource_compute_instance_test.go @@ -5,9 +5,9 @@ import ( "strings" "testing" - "code.google.com/p/google-api-go-client/compute/v1" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" + "google.golang.org/api/compute/v1" ) func TestAccComputeInstance_basic_deprecated_network(t *testing.T) { @@ -168,6 +168,34 @@ func TestAccComputeInstance_update_deprecated_network(t *testing.T) { }) } +func TestAccComputeInstance_forceNewAndChangeMetadata(t *testing.T) { + var instance compute.Instance + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeInstanceDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccComputeInstance_basic, + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeInstanceExists( + "google_compute_instance.foobar", &instance), + ), + }, + resource.TestStep{ + Config: testAccComputeInstance_forceNewAndChangeMetadata, + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeInstanceExists( + "google_compute_instance.foobar", &instance), + testAccCheckComputeInstanceMetadata( + &instance, "qux", "true"), + ), + }, + }, + }) +} + func TestAccComputeInstance_update(t *testing.T) { var instance compute.Instance @@ -432,6 +460,30 @@ resource "google_compute_instance" "foobar" { } }` +// Update zone to ForceNew, and change metadata k/v entirely +// Generates diff mismatch +const testAccComputeInstance_forceNewAndChangeMetadata = ` +resource "google_compute_instance" "foobar" { + name = "terraform-test" + machine_type = "n1-standard-1" + zone = "us-central1-a" + zone = "us-central1-b" + tags = ["baz"] + + disk { + image = "debian-7-wheezy-v20140814" + } + + network_interface { + network = "default" + access_config { } + } + + metadata { + qux = "true" + } +}` + 
// Update metadata, tags, and network_interface const testAccComputeInstance_update = ` resource "google_compute_instance" "foobar" { diff --git a/builtin/providers/google/resource_compute_network.go b/builtin/providers/google/resource_compute_network.go index 4254da72139b..5e581eff2126 100644 --- a/builtin/providers/google/resource_compute_network.go +++ b/builtin/providers/google/resource_compute_network.go @@ -5,9 +5,9 @@ import ( "log" "time" - "code.google.com/p/google-api-go-client/compute/v1" - "code.google.com/p/google-api-go-client/googleapi" "github.com/hashicorp/terraform/helper/schema" + "google.golang.org/api/compute/v1" + "google.golang.org/api/googleapi" ) func resourceComputeNetwork() *schema.Resource { diff --git a/builtin/providers/google/resource_compute_network_test.go b/builtin/providers/google/resource_compute_network_test.go index ea25b0ff4f08..89827f57627a 100644 --- a/builtin/providers/google/resource_compute_network_test.go +++ b/builtin/providers/google/resource_compute_network_test.go @@ -4,9 +4,9 @@ import ( "fmt" "testing" - "code.google.com/p/google-api-go-client/compute/v1" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" + "google.golang.org/api/compute/v1" ) func TestAccComputeNetwork_basic(t *testing.T) { diff --git a/builtin/providers/google/resource_compute_route.go b/builtin/providers/google/resource_compute_route.go index 02aa726523e3..1f52a2807bc2 100644 --- a/builtin/providers/google/resource_compute_route.go +++ b/builtin/providers/google/resource_compute_route.go @@ -5,10 +5,10 @@ import ( "log" "time" - "code.google.com/p/google-api-go-client/compute/v1" - "code.google.com/p/google-api-go-client/googleapi" "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/schema" + "google.golang.org/api/compute/v1" + "google.golang.org/api/googleapi" ) func resourceComputeRoute() *schema.Resource { @@ -75,6 +75,7 @@ func resourceComputeRoute() 
*schema.Resource { "tags": &schema.Schema{ Type: schema.TypeSet, Optional: true, + ForceNew: true, Elem: &schema.Schema{Type: schema.TypeString}, Set: func(v interface{}) int { return hashcode.String(v.(string)) diff --git a/builtin/providers/google/resource_compute_route_test.go b/builtin/providers/google/resource_compute_route_test.go index 065842f85257..e4b8627e9368 100644 --- a/builtin/providers/google/resource_compute_route_test.go +++ b/builtin/providers/google/resource_compute_route_test.go @@ -4,9 +4,9 @@ import ( "fmt" "testing" - "code.google.com/p/google-api-go-client/compute/v1" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" + "google.golang.org/api/compute/v1" ) func TestAccComputeRoute_basic(t *testing.T) { diff --git a/builtin/providers/google/resource_compute_target_pool.go b/builtin/providers/google/resource_compute_target_pool.go index 98935b84cea6..83611e2bd20f 100644 --- a/builtin/providers/google/resource_compute_target_pool.go +++ b/builtin/providers/google/resource_compute_target_pool.go @@ -6,9 +6,9 @@ import ( "strings" "time" - "code.google.com/p/google-api-go-client/compute/v1" - "code.google.com/p/google-api-go-client/googleapi" "github.com/hashicorp/terraform/helper/schema" + "google.golang.org/api/compute/v1" + "google.golang.org/api/googleapi" ) func resourceComputeTargetPool() *schema.Resource { diff --git a/builtin/providers/heroku/resource_heroku_app.go b/builtin/providers/heroku/resource_heroku_app.go index af27c7b26287..52954aa5d1fd 100644 --- a/builtin/providers/heroku/resource_heroku_app.go +++ b/builtin/providers/heroku/resource_heroku_app.go @@ -358,14 +358,18 @@ func updateConfigVars( vars := make(map[string]*string) for _, v := range o { - for k, _ := range v.(map[string]interface{}) { - vars[k] = nil + if v != nil { + for k, _ := range v.(map[string]interface{}) { + vars[k] = nil + } } } for _, v := range n { - for k, v := range v.(map[string]interface{}) { - val := v.(string) - 
vars[k] = &val + if v != nil { + for k, v := range v.(map[string]interface{}) { + val := v.(string) + vars[k] = &val + } } } diff --git a/builtin/providers/openstack/config.go b/builtin/providers/openstack/config.go new file mode 100644 index 000000000000..d05662017ce3 --- /dev/null +++ b/builtin/providers/openstack/config.go @@ -0,0 +1,67 @@ +package openstack + +import ( + "github.com/rackspace/gophercloud" + "github.com/rackspace/gophercloud/openstack" +) + +type Config struct { + Username string + UserID string + Password string + APIKey string + IdentityEndpoint string + TenantID string + TenantName string + DomainID string + DomainName string + + osClient *gophercloud.ProviderClient +} + +func (c *Config) loadAndValidate() error { + ao := gophercloud.AuthOptions{ + Username: c.Username, + UserID: c.UserID, + Password: c.Password, + APIKey: c.APIKey, + IdentityEndpoint: c.IdentityEndpoint, + TenantID: c.TenantID, + TenantName: c.TenantName, + DomainID: c.DomainID, + DomainName: c.DomainName, + } + + client, err := openstack.AuthenticatedClient(ao) + if err != nil { + return err + } + + c.osClient = client + + return nil +} + +func (c *Config) blockStorageV1Client(region string) (*gophercloud.ServiceClient, error) { + return openstack.NewBlockStorageV1(c.osClient, gophercloud.EndpointOpts{ + Region: region, + }) +} + +func (c *Config) computeV2Client(region string) (*gophercloud.ServiceClient, error) { + return openstack.NewComputeV2(c.osClient, gophercloud.EndpointOpts{ + Region: region, + }) +} + +func (c *Config) networkingV2Client(region string) (*gophercloud.ServiceClient, error) { + return openstack.NewNetworkV2(c.osClient, gophercloud.EndpointOpts{ + Region: region, + }) +} + +func (c *Config) objectStorageV1Client(region string) (*gophercloud.ServiceClient, error) { + return openstack.NewObjectStorageV1(c.osClient, gophercloud.EndpointOpts{ + Region: region, + }) +} diff --git a/builtin/providers/openstack/provider.go 
b/builtin/providers/openstack/provider.go new file mode 100644 index 000000000000..d71f5a8f08e2 --- /dev/null +++ b/builtin/providers/openstack/provider.go @@ -0,0 +1,120 @@ +package openstack + +import ( + "os" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" +) + +// Provider returns a schema.Provider for OpenStack. +func Provider() terraform.ResourceProvider { + return &schema.Provider{ + Schema: map[string]*schema.Schema{ + "auth_url": &schema.Schema{ + Type: schema.TypeString, + Required: true, + DefaultFunc: envDefaultFunc("OS_AUTH_URL"), + }, + "user_name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + DefaultFunc: envDefaultFunc("OS_USERNAME"), + }, + "user_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "", + }, + "tenant_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "", + }, + "tenant_name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + DefaultFunc: envDefaultFunc("OS_TENANT_NAME"), + }, + "password": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + DefaultFunc: envDefaultFunc("OS_PASSWORD"), + }, + "api_key": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "", + }, + "domain_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "", + }, + "domain_name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Default: "", + }, + }, + + ResourcesMap: map[string]*schema.Resource{ + "openstack_blockstorage_volume_v1": resourceBlockStorageVolumeV1(), + "openstack_compute_instance_v2": resourceComputeInstanceV2(), + "openstack_compute_keypair_v2": resourceComputeKeypairV2(), + "openstack_compute_secgroup_v2": resourceComputeSecGroupV2(), + "openstack_compute_floatingip_v2": resourceComputeFloatingIPV2(), + "openstack_fw_firewall_v1": resourceFWFirewallV1(), + "openstack_fw_policy_v1": resourceFWPolicyV1(), + "openstack_fw_rule_v1": resourceFWRuleV1(), + 
"openstack_lb_monitor_v1": resourceLBMonitorV1(), + "openstack_lb_pool_v1": resourceLBPoolV1(), + "openstack_lb_vip_v1": resourceLBVipV1(), + "openstack_networking_network_v2": resourceNetworkingNetworkV2(), + "openstack_networking_subnet_v2": resourceNetworkingSubnetV2(), + "openstack_networking_floatingip_v2": resourceNetworkingFloatingIPV2(), + "openstack_networking_router_v2": resourceNetworkingRouterV2(), + "openstack_networking_router_interface_v2": resourceNetworkingRouterInterfaceV2(), + "openstack_objectstorage_container_v1": resourceObjectStorageContainerV1(), + }, + + ConfigureFunc: configureProvider, + } +} + +func configureProvider(d *schema.ResourceData) (interface{}, error) { + config := Config{ + IdentityEndpoint: d.Get("auth_url").(string), + Username: d.Get("user_name").(string), + UserID: d.Get("user_id").(string), + Password: d.Get("password").(string), + APIKey: d.Get("api_key").(string), + TenantID: d.Get("tenant_id").(string), + TenantName: d.Get("tenant_name").(string), + DomainID: d.Get("domain_id").(string), + DomainName: d.Get("domain_name").(string), + } + + if err := config.loadAndValidate(); err != nil { + return nil, err + } + + return &config, nil +} + +func envDefaultFunc(k string) schema.SchemaDefaultFunc { + return func() (interface{}, error) { + if v := os.Getenv(k); v != "" { + return v, nil + } + + return nil, nil + } +} + +func envDefaultFuncAllowMissing(k string) schema.SchemaDefaultFunc { + return func() (interface{}, error) { + v := os.Getenv(k) + return v, nil + } +} diff --git a/builtin/providers/openstack/provider_test.go b/builtin/providers/openstack/provider_test.go new file mode 100644 index 000000000000..686bf0533155 --- /dev/null +++ b/builtin/providers/openstack/provider_test.go @@ -0,0 +1,70 @@ +package openstack + +import ( + "os" + "testing" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" +) + +var ( + OS_REGION_NAME = "" + OS_POOL_NAME = "" +) + +var 
testAccProviders map[string]terraform.ResourceProvider +var testAccProvider *schema.Provider + +func init() { + testAccProvider = Provider().(*schema.Provider) + testAccProviders = map[string]terraform.ResourceProvider{ + "openstack": testAccProvider, + } +} + +func TestProvider(t *testing.T) { + if err := Provider().(*schema.Provider).InternalValidate(); err != nil { + t.Fatalf("err: %s", err) + } +} + +func TestProvider_impl(t *testing.T) { + var _ terraform.ResourceProvider = Provider() +} + +func testAccPreCheck(t *testing.T) { + v := os.Getenv("OS_AUTH_URL") + if v == "" { + t.Fatal("OS_AUTH_URL must be set for acceptance tests") + } + + v = os.Getenv("OS_REGION_NAME") + if v != "" { + OS_REGION_NAME = v + } + + v1 := os.Getenv("OS_IMAGE_ID") + v2 := os.Getenv("OS_IMAGE_NAME") + + if v1 == "" && v2 == "" { + t.Fatal("OS_IMAGE_ID or OS_IMAGE_NAME must be set for acceptance tests") + } + + v = os.Getenv("OS_POOL_NAME") + if v == "" { + t.Fatal("OS_POOL_NAME must be set for acceptance tests") + } + OS_POOL_NAME = v + + v1 = os.Getenv("OS_FLAVOR_ID") + v2 = os.Getenv("OS_FLAVOR_NAME") + if v1 == "" && v2 == "" { + t.Fatal("OS_FLAVOR_ID or OS_FLAVOR_NAME must be set for acceptance tests") + } + + v = os.Getenv("OS_NETWORK_ID") + if v == "" { + t.Fatal("OS_NETWORK_ID must be set for acceptance tests") + } +} diff --git a/builtin/providers/openstack/resource_openstack_blockstorage_volume_v1.go b/builtin/providers/openstack/resource_openstack_blockstorage_volume_v1.go new file mode 100644 index 000000000000..c83bc538efa2 --- /dev/null +++ b/builtin/providers/openstack/resource_openstack_blockstorage_volume_v1.go @@ -0,0 +1,314 @@ +package openstack + +import ( + "bytes" + "fmt" + "log" + "time" + + "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" + "github.com/rackspace/gophercloud" + "github.com/rackspace/gophercloud/openstack/blockstorage/v1/volumes" + 
"github.com/rackspace/gophercloud/openstack/compute/v2/extensions/volumeattach" +) + +func resourceBlockStorageVolumeV1() *schema.Resource { + return &schema.Resource{ + Create: resourceBlockStorageVolumeV1Create, + Read: resourceBlockStorageVolumeV1Read, + Update: resourceBlockStorageVolumeV1Update, + Delete: resourceBlockStorageVolumeV1Delete, + + Schema: map[string]*schema.Schema{ + "region": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + DefaultFunc: envDefaultFuncAllowMissing("OS_REGION_NAME"), + }, + "size": &schema.Schema{ + Type: schema.TypeInt, + Required: true, + ForceNew: true, + }, + "name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: false, + }, + "description": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: false, + }, + "metadata": &schema.Schema{ + Type: schema.TypeMap, + Optional: true, + ForceNew: false, + }, + "snapshot_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "source_vol_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "image_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "volume_type": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "attachment": &schema.Schema{ + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "id": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + "instance_id": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + "device": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + }, + }, + Set: resourceVolumeAttachmentHash, + }, + }, + } +} + +func resourceBlockStorageVolumeV1Create(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + blockStorageClient, err := config.blockStorageV1Client(d.Get("region").(string)) + if err != nil { + return 
fmt.Errorf("Error creating OpenStack block storage client: %s", err) + } + + createOpts := &volumes.CreateOpts{ + Description: d.Get("description").(string), + Name: d.Get("name").(string), + Size: d.Get("size").(int), + SnapshotID: d.Get("snapshot_id").(string), + SourceVolID: d.Get("source_vol_id").(string), + ImageID: d.Get("image_id").(string), + VolumeType: d.Get("volume_type").(string), + Metadata: resourceContainerMetadataV2(d), + } + + log.Printf("[DEBUG] Create Options: %#v", createOpts) + v, err := volumes.Create(blockStorageClient, createOpts).Extract() + if err != nil { + return fmt.Errorf("Error creating OpenStack volume: %s", err) + } + log.Printf("[INFO] Volume ID: %s", v.ID) + + // Store the ID now + d.SetId(v.ID) + + // Wait for the volume to become available. + log.Printf( + "[DEBUG] Waiting for volume (%s) to become available", + v.ID) + + stateConf := &resource.StateChangeConf{ + Target: "available", + Refresh: VolumeV1StateRefreshFunc(blockStorageClient, v.ID), + Timeout: 10 * time.Minute, + Delay: 10 * time.Second, + MinTimeout: 3 * time.Second, + } + + _, err = stateConf.WaitForState() + if err != nil { + return fmt.Errorf( + "Error waiting for volume (%s) to become ready: %s", + v.ID, err) + } + + return resourceBlockStorageVolumeV1Read(d, meta) +} + +func resourceBlockStorageVolumeV1Read(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + blockStorageClient, err := config.blockStorageV1Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack block storage client: %s", err) + } + + v, err := volumes.Get(blockStorageClient, d.Id()).Extract() + if err != nil { + return CheckDeleted(d, err, "volume") + } + + log.Printf("[DEBUG] Retrieved volume %s: %+v", d.Id(), v) + + d.Set("size", v.Size) + d.Set("description", v.Description) + d.Set("name", v.Name) + d.Set("snapshot_id", v.SnapshotID) + d.Set("source_vol_id", v.SourceVolID) + d.Set("volume_type", v.VolumeType) +
d.Set("metadata", v.Metadata) + + if len(v.Attachments) > 0 { + attachments := make([]map[string]interface{}, len(v.Attachments)) + for i, attachment := range v.Attachments { + attachments[i] = make(map[string]interface{}) + attachments[i]["id"] = attachment["id"] + attachments[i]["instance_id"] = attachment["server_id"] + attachments[i]["device"] = attachment["device"] + log.Printf("[DEBUG] attachment: %v", attachment) + } + d.Set("attachment", attachments) + } + + return nil +} + +func resourceBlockStorageVolumeV1Update(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + blockStorageClient, err := config.blockStorageV1Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack block storage client: %s", err) + } + + updateOpts := volumes.UpdateOpts{ + Name: d.Get("name").(string), + Description: d.Get("description").(string), + } + + if d.HasChange("metadata") { + updateOpts.Metadata = resourceVolumeMetadataV1(d) + } + + _, err = volumes.Update(blockStorageClient, d.Id(), updateOpts).Extract() + if err != nil { + return fmt.Errorf("Error updating OpenStack volume: %s", err) + } + + return resourceBlockStorageVolumeV1Read(d, meta) +} + +func resourceBlockStorageVolumeV1Delete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + blockStorageClient, err := config.blockStorageV1Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack block storage client: %s", err) + } + + v, err := volumes.Get(blockStorageClient, d.Id()).Extract() + if err != nil { + return CheckDeleted(d, err, "volume") + } + + // make sure this volume is detached from all instances before deleting + if len(v.Attachments) > 0 { + log.Printf("[DEBUG] detaching volumes") + if computeClient, err := config.computeV2Client(d.Get("region").(string)); err != nil { + return err + } else { + for _, volumeAttachment := range v.Attachments { + log.Printf("[DEBUG] Attachment: %v", 
volumeAttachment) + if err := volumeattach.Delete(computeClient, volumeAttachment["server_id"].(string), volumeAttachment["id"].(string)).ExtractErr(); err != nil { + return err + } + } + + stateConf := &resource.StateChangeConf{ + Pending: []string{"in-use", "attaching"}, + Target: "available", + Refresh: VolumeV1StateRefreshFunc(blockStorageClient, d.Id()), + Timeout: 10 * time.Minute, + Delay: 10 * time.Second, + MinTimeout: 3 * time.Second, + } + + _, err = stateConf.WaitForState() + if err != nil { + return fmt.Errorf( + "Error waiting for volume (%s) to become available: %s", + d.Id(), err) + } + } + } + + err = volumes.Delete(blockStorageClient, d.Id()).ExtractErr() + if err != nil { + return fmt.Errorf("Error deleting OpenStack volume: %s", err) + } + + // Wait for the volume to delete before moving on. + log.Printf("[DEBUG] Waiting for volume (%s) to delete", d.Id()) + + stateConf := &resource.StateChangeConf{ + Pending: []string{"deleting", "available"}, + Target: "deleted", + Refresh: VolumeV1StateRefreshFunc(blockStorageClient, d.Id()), + Timeout: 10 * time.Minute, + Delay: 10 * time.Second, + MinTimeout: 3 * time.Second, + } + + _, err = stateConf.WaitForState() + if err != nil { + return fmt.Errorf( + "Error waiting for volume (%s) to delete: %s", + d.Id(), err) + } + + d.SetId("") + return nil +} + +func resourceVolumeMetadataV1(d *schema.ResourceData) map[string]string { + m := make(map[string]string) + for key, val := range d.Get("metadata").(map[string]interface{}) { + m[key] = val.(string) + } + return m +} + +// VolumeV1StateRefreshFunc returns a resource.StateRefreshFunc that is used to watch +// an OpenStack volume. 
+func VolumeV1StateRefreshFunc(client *gophercloud.ServiceClient, volumeID string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + v, err := volumes.Get(client, volumeID).Extract() + if err != nil { + errCode, ok := err.(*gophercloud.UnexpectedResponseCodeError) + if !ok { + return nil, "", err + } + if errCode.Actual == 404 { + return v, "deleted", nil + } + return nil, "", err + } + + return v, v.Status, nil + } +} + +func resourceVolumeAttachmentHash(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + if m["instance_id"] != nil { + buf.WriteString(fmt.Sprintf("%s-", m["instance_id"].(string))) + } + return hashcode.String(buf.String()) +} diff --git a/builtin/providers/openstack/resource_openstack_blockstorage_volume_v1_test.go b/builtin/providers/openstack/resource_openstack_blockstorage_volume_v1_test.go new file mode 100644 index 000000000000..5404fd3912b1 --- /dev/null +++ b/builtin/providers/openstack/resource_openstack_blockstorage_volume_v1_test.go @@ -0,0 +1,138 @@ +package openstack + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + + "github.com/rackspace/gophercloud/openstack/blockstorage/v1/volumes" +) + +func TestAccBlockStorageV1Volume_basic(t *testing.T) { + var volume volumes.Volume + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckBlockStorageV1VolumeDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccBlockStorageV1Volume_basic, + Check: resource.ComposeTestCheckFunc( + testAccCheckBlockStorageV1VolumeExists(t, "openstack_blockstorage_volume_v1.volume_1", &volume), + resource.TestCheckResourceAttr("openstack_blockstorage_volume_v1.volume_1", "name", "tf-test-volume"), + testAccCheckBlockStorageV1VolumeMetadata(&volume, "foo", "bar"), + ), + }, + resource.TestStep{ + Config: 
testAccBlockStorageV1Volume_update, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("openstack_blockstorage_volume_v1.volume_1", "name", "tf-test-volume-updated"), + testAccCheckBlockStorageV1VolumeMetadata(&volume, "foo", "bar"), + ), + }, + }, + }) +} + +func testAccCheckBlockStorageV1VolumeDestroy(s *terraform.State) error { + config := testAccProvider.Meta().(*Config) + blockStorageClient, err := config.blockStorageV1Client(OS_REGION_NAME) + if err != nil { + return fmt.Errorf("Error creating OpenStack block storage client: %s", err) + } + + for _, rs := range s.RootModule().Resources { + if rs.Type != "openstack_blockstorage_volume_v1" { + continue + } + + _, err := volumes.Get(blockStorageClient, rs.Primary.ID).Extract() + if err == nil { + return fmt.Errorf("Volume still exists") + } + } + + return nil +} + +func testAccCheckBlockStorageV1VolumeExists(t *testing.T, n string, volume *volumes.Volume) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + config := testAccProvider.Meta().(*Config) + blockStorageClient, err := config.blockStorageV1Client(OS_REGION_NAME) + if err != nil { + return fmt.Errorf("Error creating OpenStack block storage client: %s", err) + } + + found, err := volumes.Get(blockStorageClient, rs.Primary.ID).Extract() + if err != nil { + return err + } + + if found.ID != rs.Primary.ID { + return fmt.Errorf("Volume not found") + } + + *volume = *found + + return nil + } +} + +func testAccCheckBlockStorageV1VolumeMetadata( + volume *volumes.Volume, k string, v string) resource.TestCheckFunc { + return func(s *terraform.State) error { + if volume.Metadata == nil { + return fmt.Errorf("No metadata") + } + + for key, value := range volume.Metadata { + if k != key { + continue + } + + if v == value { + return nil + } + + return fmt.Errorf("Bad 
value for %s: %s", k, value) + } + + return fmt.Errorf("Metadata not found: %s", k) + } +} + +var testAccBlockStorageV1Volume_basic = fmt.Sprintf(` + resource "openstack_blockstorage_volume_v1" "volume_1" { + region = "%s" + name = "tf-test-volume" + description = "first test volume" + metadata{ + foo = "bar" + } + size = 1 + }`, + OS_REGION_NAME) + +var testAccBlockStorageV1Volume_update = fmt.Sprintf(` + resource "openstack_blockstorage_volume_v1" "volume_1" { + region = "%s" + name = "tf-test-volume-updated" + description = "first test volume" + metadata{ + foo = "bar" + } + size = 1 + }`, + OS_REGION_NAME) diff --git a/builtin/providers/openstack/resource_openstack_compute_floatingip_v2.go b/builtin/providers/openstack/resource_openstack_compute_floatingip_v2.go new file mode 100644 index 000000000000..323ec7608dce --- /dev/null +++ b/builtin/providers/openstack/resource_openstack_compute_floatingip_v2.go @@ -0,0 +1,107 @@ +package openstack + +import ( + "fmt" + "log" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/rackspace/gophercloud/openstack/compute/v2/extensions/floatingip" +) + +func resourceComputeFloatingIPV2() *schema.Resource { + return &schema.Resource{ + Create: resourceComputeFloatingIPV2Create, + Read: resourceComputeFloatingIPV2Read, + Update: nil, + Delete: resourceComputeFloatingIPV2Delete, + + Schema: map[string]*schema.Schema{ + "region": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + DefaultFunc: envDefaultFuncAllowMissing("OS_REGION_NAME"), + }, + + "pool": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + DefaultFunc: envDefaultFunc("OS_POOL_NAME"), + }, + + "address": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "fixed_ip": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + + "instance_id": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func resourceComputeFloatingIPV2Create(d 
*schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + computeClient, err := config.computeV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack compute client: %s", err) + } + + createOpts := &floatingip.CreateOpts{ + Pool: d.Get("pool").(string), + } + log.Printf("[DEBUG] Create Options: %#v", createOpts) + newFip, err := floatingip.Create(computeClient, createOpts).Extract() + if err != nil { + return fmt.Errorf("Error creating Floating IP: %s", err) + } + + d.SetId(newFip.ID) + + return resourceComputeFloatingIPV2Read(d, meta) +} + +func resourceComputeFloatingIPV2Read(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + computeClient, err := config.computeV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack compute client: %s", err) + } + + fip, err := floatingip.Get(computeClient, d.Id()).Extract() + if err != nil { + return CheckDeleted(d, err, "floating ip") + } + + log.Printf("[DEBUG] Retrieved Floating IP %s: %+v", d.Id(), fip) + + d.Set("pool", fip.Pool) + d.Set("instance_id", fip.InstanceID) + d.Set("address", fip.IP) + d.Set("fixed_ip", fip.FixedIP) + + return nil +} + +func resourceComputeFloatingIPV2Delete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + computeClient, err := config.computeV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack compute client: %s", err) + } + + log.Printf("[DEBUG] Deleting Floating IP %s", d.Id()) + if err := floatingip.Delete(computeClient, d.Id()).ExtractErr(); err != nil { + return fmt.Errorf("Error deleting Floating IP: %s", err) + } + + return nil +} diff --git a/builtin/providers/openstack/resource_openstack_compute_floatingip_v2_test.go b/builtin/providers/openstack/resource_openstack_compute_floatingip_v2_test.go new file mode 100644 index 000000000000..d6fe43b529a9 --- /dev/null +++ 
b/builtin/providers/openstack/resource_openstack_compute_floatingip_v2_test.go @@ -0,0 +1,122 @@ +package openstack + +import ( + "fmt" + "os" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + + "github.com/rackspace/gophercloud/openstack/compute/v2/extensions/floatingip" + "github.com/rackspace/gophercloud/openstack/compute/v2/servers" +) + +func TestAccComputeV2FloatingIP_basic(t *testing.T) { + var floatingIP floatingip.FloatingIP + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeV2FloatingIPDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccComputeV2FloatingIP_basic, + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeV2FloatingIPExists(t, "openstack_compute_floatingip_v2.foo", &floatingIP), + ), + }, + }, + }) +} + +func TestAccComputeV2FloatingIP_attach(t *testing.T) { + var instance servers.Server + var fip floatingip.FloatingIP + var testAccComputeV2FloatingIP_attach = fmt.Sprintf(` + resource "openstack_compute_floatingip_v2" "myip" { + } + + resource "openstack_compute_instance_v2" "foo" { + name = "terraform-test" + floating_ip = "${openstack_compute_floatingip_v2.myip.address}" + + network { + uuid = "%s" + } + }`, + os.Getenv("OS_NETWORK_ID")) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeV2FloatingIPDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccComputeV2FloatingIP_attach, + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeV2FloatingIPExists(t, "openstack_compute_floatingip_v2.myip", &fip), + testAccCheckComputeV2InstanceExists(t, "openstack_compute_instance_v2.foo", &instance), + testAccCheckComputeV2InstanceFloatingIPAttach(&instance, &fip), + ), + }, + }, + }) +} + +func testAccCheckComputeV2FloatingIPDestroy(s 
*terraform.State) error { + config := testAccProvider.Meta().(*Config) + computeClient, err := config.computeV2Client(OS_REGION_NAME) + if err != nil { + return fmt.Errorf("(testAccCheckComputeV2FloatingIPDestroy) Error creating OpenStack compute client: %s", err) + } + + for _, rs := range s.RootModule().Resources { + if rs.Type != "openstack_compute_floatingip_v2" { + continue + } + + _, err := floatingip.Get(computeClient, rs.Primary.ID).Extract() + if err == nil { + return fmt.Errorf("FloatingIP still exists") + } + } + + return nil +} + +func testAccCheckComputeV2FloatingIPExists(t *testing.T, n string, kp *floatingip.FloatingIP) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + config := testAccProvider.Meta().(*Config) + computeClient, err := config.computeV2Client(OS_REGION_NAME) + if err != nil { + return fmt.Errorf("(testAccCheckComputeV2FloatingIPExists) Error creating OpenStack compute client: %s", err) + } + + found, err := floatingip.Get(computeClient, rs.Primary.ID).Extract() + if err != nil { + return err + } + + if found.ID != rs.Primary.ID { + return fmt.Errorf("FloatingIP not found") + } + + *kp = *found + + return nil + } +} + +var testAccComputeV2FloatingIP_basic = ` + resource "openstack_compute_floatingip_v2" "foo" { + }` diff --git a/builtin/providers/openstack/resource_openstack_compute_instance_v2.go b/builtin/providers/openstack/resource_openstack_compute_instance_v2.go new file mode 100644 index 000000000000..02dafe5ae8e4 --- /dev/null +++ b/builtin/providers/openstack/resource_openstack_compute_instance_v2.go @@ -0,0 +1,1052 @@ +package openstack + +import ( + "bytes" + "crypto/sha1" + "encoding/hex" + "fmt" + "log" + "time" + + "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/resource" + 
"github.com/hashicorp/terraform/helper/schema" + "github.com/rackspace/gophercloud" + "github.com/rackspace/gophercloud/openstack/compute/v2/extensions/bootfromvolume" + "github.com/rackspace/gophercloud/openstack/compute/v2/extensions/floatingip" + "github.com/rackspace/gophercloud/openstack/compute/v2/extensions/keypairs" + "github.com/rackspace/gophercloud/openstack/compute/v2/extensions/secgroups" + "github.com/rackspace/gophercloud/openstack/compute/v2/extensions/tenantnetworks" + "github.com/rackspace/gophercloud/openstack/compute/v2/extensions/volumeattach" + "github.com/rackspace/gophercloud/openstack/compute/v2/flavors" + "github.com/rackspace/gophercloud/openstack/compute/v2/images" + "github.com/rackspace/gophercloud/openstack/compute/v2/servers" + "github.com/rackspace/gophercloud/pagination" +) + +func resourceComputeInstanceV2() *schema.Resource { + return &schema.Resource{ + Create: resourceComputeInstanceV2Create, + Read: resourceComputeInstanceV2Read, + Update: resourceComputeInstanceV2Update, + Delete: resourceComputeInstanceV2Delete, + + Schema: map[string]*schema.Schema{ + "region": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + DefaultFunc: envDefaultFuncAllowMissing("OS_REGION_NAME"), + }, + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: false, + }, + "image_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + DefaultFunc: envDefaultFunc("OS_IMAGE_ID"), + }, + "image_name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + DefaultFunc: envDefaultFunc("OS_IMAGE_NAME"), + }, + "flavor_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: false, + Computed: true, + DefaultFunc: envDefaultFunc("OS_FLAVOR_ID"), + }, + "flavor_name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: false, + Computed: true, + DefaultFunc: 
envDefaultFunc("OS_FLAVOR_NAME"), + }, + "floating_ip": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: false, + }, + "user_data": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + // just stash the hash for state & diff comparisons + StateFunc: func(v interface{}) string { + switch v.(type) { + case string: + hash := sha1.Sum([]byte(v.(string))) + return hex.EncodeToString(hash[:]) + default: + return "" + } + }, + }, + "security_groups": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + ForceNew: false, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "availability_zone": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "network": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "uuid": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "port": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "fixed_ip_v4": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "fixed_ip_v6": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "mac": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + "metadata": &schema.Schema{ + Type: schema.TypeMap, + Optional: true, + ForceNew: false, + }, + "config_drive": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + }, + "admin_pass": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: false, + }, + "access_ip_v4": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + Optional: true, + ForceNew: false, + }, + "access_ip_v6": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + Optional: true, 
+ ForceNew: false, + }, + "key_pair": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "block_device": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "uuid": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + "source_type": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + "volume_size": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + }, + "destination_type": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + "boot_index": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + }, + }, + }, + }, + "volume": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "volume_id": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + "device": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + }, + }, + Set: resourceComputeVolumeAttachmentHash, + }, + }, + } +} + +func resourceComputeInstanceV2Create(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + computeClient, err := config.computeV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack compute client: %s", err) + } + + var createOpts servers.CreateOptsBuilder + + imageId, err := getImageID(computeClient, d) + if err != nil { + return err + } + + flavorId, err := getFlavorID(computeClient, d) + if err != nil { + return err + } + + networkDetails, err := resourceInstanceNetworks(computeClient, d) + if err != nil { + return err + } + + networks := make([]servers.Network, len(networkDetails)) + for i, net := range networkDetails { + networks[i] = servers.Network{ + UUID: net["uuid"].(string), + Port: net["port"].(string), + FixedIP: 
net["fixed_ip_v4"].(string), + } + } + + createOpts = &servers.CreateOpts{ + Name: d.Get("name").(string), + ImageRef: imageId, + FlavorRef: flavorId, + SecurityGroups: resourceInstanceSecGroupsV2(d), + AvailabilityZone: d.Get("availability_zone").(string), + Networks: networks, + Metadata: resourceInstanceMetadataV2(d), + ConfigDrive: d.Get("config_drive").(bool), + AdminPass: d.Get("admin_pass").(string), + UserData: []byte(d.Get("user_data").(string)), + } + + if keyName, ok := d.Get("key_pair").(string); ok && keyName != "" { + createOpts = &keypairs.CreateOptsExt{ + createOpts, + keyName, + } + } + + if blockDeviceRaw, ok := d.Get("block_device").(map[string]interface{}); ok && blockDeviceRaw != nil { + blockDevice := resourceInstanceBlockDeviceV2(d, blockDeviceRaw) + createOpts = &bootfromvolume.CreateOptsExt{ + createOpts, + blockDevice, + } + } + + log.Printf("[DEBUG] Create Options: %#v", createOpts) + server, err := servers.Create(computeClient, createOpts).Extract() + if err != nil { + return fmt.Errorf("Error creating OpenStack server: %s", err) + } + log.Printf("[INFO] Instance ID: %s", server.ID) + + // Store the ID now + d.SetId(server.ID) + + // Wait for the instance to become running so we can get some attributes + // that aren't available until later. 
+	log.Printf(
+		"[DEBUG] Waiting for instance (%s) to become running",
+		server.ID)
+
+	stateConf := &resource.StateChangeConf{
+		Pending:    []string{"BUILD"},
+		Target:     "ACTIVE",
+		Refresh:    ServerV2StateRefreshFunc(computeClient, server.ID),
+		Timeout:    10 * time.Minute,
+		Delay:      10 * time.Second,
+		MinTimeout: 3 * time.Second,
+	}
+
+	_, err = stateConf.WaitForState()
+	if err != nil {
+		return fmt.Errorf(
+			"Error waiting for instance (%s) to become ready: %s",
+			server.ID, err)
+	}
+	floatingIP := d.Get("floating_ip").(string)
+	if floatingIP != "" {
+		if err := floatingip.Associate(computeClient, server.ID, floatingIP).ExtractErr(); err != nil {
+			return fmt.Errorf("Error associating floating IP: %s", err)
+		}
+	}
+
+	// were volume attachments specified?
+	if v := d.Get("volume"); v != nil {
+		vols := v.(*schema.Set).List()
+		if len(vols) > 0 {
+			if blockClient, err := config.blockStorageV1Client(d.Get("region").(string)); err != nil {
+				return fmt.Errorf("Error creating OpenStack block storage client: %s", err)
+			} else {
+				if err := attachVolumesToInstance(computeClient, blockClient, d.Id(), vols); err != nil {
+					return err
+				}
+			}
+		}
+	}
+
+	return resourceComputeInstanceV2Read(d, meta)
+}
+
+func resourceComputeInstanceV2Read(d *schema.ResourceData, meta interface{}) error {
+	config := meta.(*Config)
+	computeClient, err := config.computeV2Client(d.Get("region").(string))
+	if err != nil {
+		return fmt.Errorf("Error creating OpenStack compute client: %s", err)
+	}
+
+	server, err := servers.Get(computeClient, d.Id()).Extract()
+	if err != nil {
+		return CheckDeleted(d, err, "server")
+	}
+
+	log.Printf("[DEBUG] Retrieved Server %s: %+v", d.Id(), server)
+
+	d.Set("name", server.Name)
+
+	// begin reading the network configuration
+	d.Set("access_ip_v4", server.AccessIPv4)
+	d.Set("access_ip_v6", server.AccessIPv6)
+	hostv4 := server.AccessIPv4
+	hostv6 := server.AccessIPv6
+
+	networkDetails, err := resourceInstanceNetworks(computeClient, d)
+	addresses :=
resourceInstanceAddresses(server.Addresses) + if err != nil { + return err + } + + // if there are no networkDetails, make networks at least a length of 1 + networkLength := 1 + if len(networkDetails) > 0 { + networkLength = len(networkDetails) + } + networks := make([]map[string]interface{}, networkLength) + + // Loop through all networks and addresses, + // merge relevant address details. + if len(networkDetails) == 0 { + for netName, n := range addresses { + if floatingIP, ok := n["floating_ip"]; ok { + hostv4 = floatingIP.(string) + } else { + if hostv4 == "" && n["fixed_ip_v4"] != nil { + hostv4 = n["fixed_ip_v4"].(string) + } + } + + if hostv6 == "" && n["fixed_ip_v6"] != nil { + hostv6 = n["fixed_ip_v6"].(string) + } + + networks[0] = map[string]interface{}{ + "name": netName, + "fixed_ip_v4": n["fixed_ip_v4"], + "fixed_ip_v6": n["fixed_ip_v6"], + "mac": n["mac"], + } + } + } else { + for i, net := range networkDetails { + n := addresses[net["name"].(string)] + + if floatingIP, ok := n["floating_ip"]; ok { + hostv4 = floatingIP.(string) + } else { + if hostv4 == "" && n["fixed_ip_v4"] != nil { + hostv4 = n["fixed_ip_v4"].(string) + } + } + + if hostv6 == "" && n["fixed_ip_v6"] != nil { + hostv6 = n["fixed_ip_v6"].(string) + } + + networks[i] = map[string]interface{}{ + "uuid": networkDetails[i]["uuid"], + "name": networkDetails[i]["name"], + "port": networkDetails[i]["port"], + "fixed_ip_v4": n["fixed_ip_v4"], + "fixed_ip_v6": n["fixed_ip_v6"], + "mac": n["mac"], + } + } + } + + log.Printf("[DEBUG] new networks: %+v", networks) + + d.Set("network", networks) + d.Set("access_ip_v4", hostv4) + d.Set("access_ip_v6", hostv6) + log.Printf("hostv4: %s", hostv4) + log.Printf("hostv6: %s", hostv6) + + // prefer the v6 address if no v4 address exists. 
+ preferredv := "" + if hostv4 != "" { + preferredv = hostv4 + } else if hostv6 != "" { + preferredv = hostv6 + } + + if preferredv != "" { + // Initialize the connection info + d.SetConnInfo(map[string]string{ + "type": "ssh", + "host": preferredv, + }) + } + // end network configuration + + d.Set("metadata", server.Metadata) + + secGrpNames := []string{} + for _, sg := range server.SecurityGroups { + secGrpNames = append(secGrpNames, sg["name"].(string)) + } + d.Set("security_groups", secGrpNames) + + flavorId, ok := server.Flavor["id"].(string) + if !ok { + return fmt.Errorf("Error setting OpenStack server's flavor: %v", server.Flavor) + } + d.Set("flavor_id", flavorId) + + flavor, err := flavors.Get(computeClient, flavorId).Extract() + if err != nil { + return err + } + d.Set("flavor_name", flavor.Name) + + imageId, ok := server.Image["id"].(string) + if !ok { + return fmt.Errorf("Error setting OpenStack server's image: %v", server.Image) + } + d.Set("image_id", imageId) + + image, err := images.Get(computeClient, imageId).Extract() + if err != nil { + return err + } + d.Set("image_name", image.Name) + + // volume attachments + vas, err := getVolumeAttachments(computeClient, d.Id()) + if err != nil { + return err + } + if len(vas) > 0 { + attachments := make([]map[string]interface{}, len(vas)) + for i, attachment := range vas { + attachments[i] = make(map[string]interface{}) + attachments[i]["id"] = attachment.ID + attachments[i]["volume_id"] = attachment.VolumeID + attachments[i]["device"] = attachment.Device + } + log.Printf("[INFO] Volume attachments: %v", attachments) + d.Set("volume", attachments) + } + + return nil +} + +func resourceComputeInstanceV2Update(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + computeClient, err := config.computeV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack compute client: %s", err) + } + + var updateOpts servers.UpdateOpts + if 
d.HasChange("name") {
+		updateOpts.Name = d.Get("name").(string)
+	}
+	if d.HasChange("access_ip_v4") {
+		updateOpts.AccessIPv4 = d.Get("access_ip_v4").(string)
+	}
+	if d.HasChange("access_ip_v6") {
+		updateOpts.AccessIPv6 = d.Get("access_ip_v6").(string)
+	}
+
+	if updateOpts != (servers.UpdateOpts{}) {
+		_, err := servers.Update(computeClient, d.Id(), updateOpts).Extract()
+		if err != nil {
+			return fmt.Errorf("Error updating OpenStack server: %s", err)
+		}
+	}
+
+	if d.HasChange("metadata") {
+		var metadataOpts servers.MetadataOpts
+		metadataOpts = make(servers.MetadataOpts)
+		newMetadata := d.Get("metadata").(map[string]interface{})
+		for k, v := range newMetadata {
+			metadataOpts[k] = v.(string)
+		}
+
+		_, err := servers.UpdateMetadata(computeClient, d.Id(), metadataOpts).Extract()
+		if err != nil {
+			return fmt.Errorf("Error updating OpenStack server (%s) metadata: %s", d.Id(), err)
+		}
+	}
+
+	if d.HasChange("security_groups") {
+		oldSGRaw, newSGRaw := d.GetChange("security_groups")
+		oldSGSlice, newSGSlice := oldSGRaw.([]interface{}), newSGRaw.([]interface{})
+		oldSGSet := schema.NewSet(func(v interface{}) int { return hashcode.String(v.(string)) }, oldSGSlice)
+		newSGSet := schema.NewSet(func(v interface{}) int { return hashcode.String(v.(string)) }, newSGSlice)
+		secgroupsToAdd := newSGSet.Difference(oldSGSet)
+		secgroupsToRemove := oldSGSet.Difference(newSGSet)
+
+		log.Printf("[DEBUG] Security groups to add: %v", secgroupsToAdd)
+
+		log.Printf("[DEBUG] Security groups to remove: %v", secgroupsToRemove)
+
+		for _, g := range secgroupsToAdd.List() {
+			err := secgroups.AddServerToGroup(computeClient, d.Id(), g.(string)).ExtractErr()
+			if err != nil {
+				return fmt.Errorf("Error adding security group to OpenStack server (%s): %s", d.Id(), err)
+			}
+			log.Printf("[DEBUG] Added security group (%s) to instance (%s)", g.(string), d.Id())
+		}
+
+		for _, g := range secgroupsToRemove.List() {
+			err := secgroups.RemoveServerFromGroup(computeClient, d.Id(),
g.(string)).ExtractErr()
+			if err != nil {
+				errCode, ok := err.(*gophercloud.UnexpectedResponseCodeError)
+				if !ok {
+					return fmt.Errorf("Error removing security group from OpenStack server (%s): %s", d.Id(), err)
+				}
+				if errCode.Actual == 404 {
+					continue
+				} else {
+					return fmt.Errorf("Error removing security group from OpenStack server (%s): %s", d.Id(), err)
+				}
+			} else {
+				log.Printf("[DEBUG] Removed security group (%s) from instance (%s)", g.(string), d.Id())
+			}
+		}
+	}
+
+	if d.HasChange("admin_pass") {
+		if newPwd, ok := d.Get("admin_pass").(string); ok {
+			err := servers.ChangeAdminPassword(computeClient, d.Id(), newPwd).ExtractErr()
+			if err != nil {
+				return fmt.Errorf("Error changing admin password of OpenStack server (%s): %s", d.Id(), err)
+			}
+		}
+	}
+
+	if d.HasChange("floating_ip") {
+		oldFIP, newFIP := d.GetChange("floating_ip")
+		log.Printf("[DEBUG] Old Floating IP: %v", oldFIP)
+		log.Printf("[DEBUG] New Floating IP: %v", newFIP)
+		if oldFIP.(string) != "" {
+			log.Printf("[DEBUG] Attempting to disassociate %s from %s", oldFIP, d.Id())
+			if err := floatingip.Disassociate(computeClient, d.Id(), oldFIP.(string)).ExtractErr(); err != nil {
+				return fmt.Errorf("Error disassociating Floating IP during update: %s", err)
+			}
+		}
+
+		if newFIP.(string) != "" {
+			log.Printf("[DEBUG] Attempting to associate %s to %s", newFIP, d.Id())
+			if err := floatingip.Associate(computeClient, d.Id(), newFIP.(string)).ExtractErr(); err != nil {
+				return fmt.Errorf("Error associating Floating IP during update: %s", err)
+			}
+		}
+	}
+
+	if d.HasChange("volume") {
+		// old attachments and new attachments
+		oldAttachments, newAttachments := d.GetChange("volume")
+
+		// for each old attachment, detach the volume
+		oldAttachmentSet := oldAttachments.(*schema.Set).List()
+		if len(oldAttachmentSet) > 0 {
+			if blockClient, err := config.blockStorageV1Client(d.Get("region").(string)); err != nil {
+				return err
+			} else {
+				if err := detachVolumesFromInstance(computeClient,
blockClient, d.Id(), oldAttachmentSet); err != nil { + return err + } + } + } + + // for each new attachment, attach the volume + newAttachmentSet := newAttachments.(*schema.Set).List() + if len(newAttachmentSet) > 0 { + if blockClient, err := config.blockStorageV1Client(d.Get("region").(string)); err != nil { + return err + } else { + if err := attachVolumesToInstance(computeClient, blockClient, d.Id(), newAttachmentSet); err != nil { + return err + } + } + } + + d.SetPartial("volume") + } + + if d.HasChange("flavor_id") || d.HasChange("flavor_name") { + flavorId, err := getFlavorID(computeClient, d) + if err != nil { + return err + } + resizeOpts := &servers.ResizeOpts{ + FlavorRef: flavorId, + } + log.Printf("[DEBUG] Resize configuration: %#v", resizeOpts) + err = servers.Resize(computeClient, d.Id(), resizeOpts).ExtractErr() + if err != nil { + return fmt.Errorf("Error resizing OpenStack server: %s", err) + } + + // Wait for the instance to finish resizing. + log.Printf("[DEBUG] Waiting for instance (%s) to finish resizing", d.Id()) + + stateConf := &resource.StateChangeConf{ + Pending: []string{"RESIZE"}, + Target: "VERIFY_RESIZE", + Refresh: ServerV2StateRefreshFunc(computeClient, d.Id()), + Timeout: 3 * time.Minute, + Delay: 10 * time.Second, + MinTimeout: 3 * time.Second, + } + + _, err = stateConf.WaitForState() + if err != nil { + return fmt.Errorf("Error waiting for instance (%s) to resize: %s", d.Id(), err) + } + + // Confirm resize. 
+ log.Printf("[DEBUG] Confirming resize") + err = servers.ConfirmResize(computeClient, d.Id()).ExtractErr() + if err != nil { + return fmt.Errorf("Error confirming resize of OpenStack server: %s", err) + } + + stateConf = &resource.StateChangeConf{ + Pending: []string{"VERIFY_RESIZE"}, + Target: "ACTIVE", + Refresh: ServerV2StateRefreshFunc(computeClient, d.Id()), + Timeout: 3 * time.Minute, + Delay: 10 * time.Second, + MinTimeout: 3 * time.Second, + } + + _, err = stateConf.WaitForState() + if err != nil { + return fmt.Errorf("Error waiting for instance (%s) to confirm resize: %s", d.Id(), err) + } + } + + return resourceComputeInstanceV2Read(d, meta) +} + +func resourceComputeInstanceV2Delete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + computeClient, err := config.computeV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack compute client: %s", err) + } + + err = servers.Delete(computeClient, d.Id()).ExtractErr() + if err != nil { + return fmt.Errorf("Error deleting OpenStack server: %s", err) + } + + // Wait for the instance to delete before moving on. + log.Printf("[DEBUG] Waiting for instance (%s) to delete", d.Id()) + + stateConf := &resource.StateChangeConf{ + Pending: []string{"ACTIVE"}, + Target: "DELETED", + Refresh: ServerV2StateRefreshFunc(computeClient, d.Id()), + Timeout: 10 * time.Minute, + Delay: 10 * time.Second, + MinTimeout: 3 * time.Second, + } + + _, err = stateConf.WaitForState() + if err != nil { + return fmt.Errorf( + "Error waiting for instance (%s) to delete: %s", + d.Id(), err) + } + + d.SetId("") + return nil +} + +// ServerV2StateRefreshFunc returns a resource.StateRefreshFunc that is used to watch +// an OpenStack instance. 
+func ServerV2StateRefreshFunc(client *gophercloud.ServiceClient, instanceID string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + s, err := servers.Get(client, instanceID).Extract() + if err != nil { + errCode, ok := err.(*gophercloud.UnexpectedResponseCodeError) + if !ok { + return nil, "", err + } + if errCode.Actual == 404 { + return s, "DELETED", nil + } + return nil, "", err + } + + return s, s.Status, nil + } +} + +func resourceInstanceSecGroupsV2(d *schema.ResourceData) []string { + rawSecGroups := d.Get("security_groups").([]interface{}) + secgroups := make([]string, len(rawSecGroups)) + for i, raw := range rawSecGroups { + secgroups[i] = raw.(string) + } + return secgroups +} + +func resourceInstanceNetworks(computeClient *gophercloud.ServiceClient, d *schema.ResourceData) ([]map[string]interface{}, error) { + rawNetworks := d.Get("network").([]interface{}) + newNetworks := make([]map[string]interface{}, len(rawNetworks)) + var tenantnet tenantnetworks.Network + + tenantNetworkExt := true + for i, raw := range rawNetworks { + rawMap := raw.(map[string]interface{}) + + allPages, err := tenantnetworks.List(computeClient).AllPages() + if err != nil { + errCode, ok := err.(*gophercloud.UnexpectedResponseCodeError) + if !ok { + return nil, err + } + + if errCode.Actual == 404 { + tenantNetworkExt = false + } else { + return nil, err + } + } + + networkID := "" + networkName := "" + if tenantNetworkExt { + networkList, err := tenantnetworks.ExtractNetworks(allPages) + if err != nil { + return nil, err + } + + for _, network := range networkList { + if network.Name == rawMap["name"] { + tenantnet = network + } + if network.ID == rawMap["uuid"] { + tenantnet = network + } + } + + networkID = tenantnet.ID + networkName = tenantnet.Name + } else { + networkID = rawMap["uuid"].(string) + networkName = rawMap["name"].(string) + } + + newNetworks[i] = map[string]interface{}{ + "uuid": networkID, + "name": networkName, + "port": 
rawMap["port"].(string), + "fixed_ip_v4": rawMap["fixed_ip_v4"].(string), + } + } + + log.Printf("[DEBUG] networks: %+v", newNetworks) + + return newNetworks, nil +} + +func resourceInstanceAddresses(addresses map[string]interface{}) map[string]map[string]interface{} { + + addrs := make(map[string]map[string]interface{}) + for n, networkAddresses := range addresses { + addrs[n] = make(map[string]interface{}) + for _, element := range networkAddresses.([]interface{}) { + address := element.(map[string]interface{}) + if address["OS-EXT-IPS:type"] == "floating" { + addrs[n]["floating_ip"] = address["addr"] + } else { + if address["version"].(float64) == 4 { + addrs[n]["fixed_ip_v4"] = address["addr"].(string) + } else { + addrs[n]["fixed_ip_v6"] = fmt.Sprintf("[%s]", address["addr"].(string)) + } + } + if mac, ok := address["OS-EXT-IPS-MAC:mac_addr"]; ok { + addrs[n]["mac"] = mac.(string) + } + } + } + + log.Printf("[DEBUG] Addresses: %+v", addresses) + + return addrs +} + +func resourceInstanceMetadataV2(d *schema.ResourceData) map[string]string { + m := make(map[string]string) + for key, val := range d.Get("metadata").(map[string]interface{}) { + m[key] = val.(string) + } + return m +} + +func resourceInstanceBlockDeviceV2(d *schema.ResourceData, bd map[string]interface{}) []bootfromvolume.BlockDevice { + sourceType := bootfromvolume.SourceType(bd["source_type"].(string)) + bfvOpts := []bootfromvolume.BlockDevice{ + bootfromvolume.BlockDevice{ + UUID: bd["uuid"].(string), + SourceType: sourceType, + VolumeSize: bd["volume_size"].(int), + DestinationType: bd["destination_type"].(string), + BootIndex: bd["boot_index"].(int), + }, + } + + return bfvOpts +} + +func getImageID(client *gophercloud.ServiceClient, d *schema.ResourceData) (string, error) { + imageId := d.Get("image_id").(string) + + if imageId != "" { + return imageId, nil + } + + imageCount := 0 + imageName := d.Get("image_name").(string) + if imageName != "" { + pager := images.ListDetail(client, 
&images.ListOpts{ + Name: imageName, + }) + pager.EachPage(func(page pagination.Page) (bool, error) { + imageList, err := images.ExtractImages(page) + if err != nil { + return false, err + } + + for _, i := range imageList { + if i.Name == imageName { + imageCount++ + imageId = i.ID + } + } + return true, nil + }) + + switch imageCount { + case 0: + return "", fmt.Errorf("Unable to find image: %s", imageName) + case 1: + return imageId, nil + default: + return "", fmt.Errorf("Found %d images matching %s", imageCount, imageName) + } + } + return "", fmt.Errorf("Neither an image ID nor an image name could be determined.") +} + +func getFlavorID(client *gophercloud.ServiceClient, d *schema.ResourceData) (string, error) { + flavorId := d.Get("flavor_id").(string) + + if flavorId != "" { + return flavorId, nil + } + + flavorCount := 0 + flavorName := d.Get("flavor_name").(string) + if flavorName != "" { + pager := flavors.ListDetail(client, nil) + pager.EachPage(func(page pagination.Page) (bool, error) { + flavorList, err := flavors.ExtractFlavors(page) + if err != nil { + return false, err + } + + for _, f := range flavorList { + if f.Name == flavorName { + flavorCount++ + flavorId = f.ID + } + } + return true, nil + }) + + switch flavorCount { + case 0: + return "", fmt.Errorf("Unable to find flavor: %s", flavorName) + case 1: + return flavorId, nil + default: + return "", fmt.Errorf("Found %d flavors matching %s", flavorCount, flavorName) + } + } + return "", fmt.Errorf("Neither a flavor ID nor a flavor name could be determined.") +} + +func resourceComputeVolumeAttachmentHash(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + buf.WriteString(fmt.Sprintf("%s-", m["volume_id"].(string))) + buf.WriteString(fmt.Sprintf("%s-", m["device"].(string))) + return hashcode.String(buf.String()) +} + +func attachVolumesToInstance(computeClient *gophercloud.ServiceClient, blockClient *gophercloud.ServiceClient, serverId string, vols 
[]interface{}) error { + if len(vols) > 0 { + for _, v := range vols { + va := v.(map[string]interface{}) + volumeId := va["volume_id"].(string) + device := va["device"].(string) + + s := "" + if serverId != "" { + s = serverId + } else if va["server_id"] != "" { + s = va["server_id"].(string) + } else { + return fmt.Errorf("Unable to determine server ID to attach volume.") + } + + vaOpts := &volumeattach.CreateOpts{ + Device: device, + VolumeID: volumeId, + } + + if _, err := volumeattach.Create(computeClient, s, vaOpts).Extract(); err != nil { + return err + } + + stateConf := &resource.StateChangeConf{ + Pending: []string{"attaching", "available"}, + Target: "in-use", + Refresh: VolumeV1StateRefreshFunc(blockClient, va["volume_id"].(string)), + Timeout: 30 * time.Minute, + Delay: 5 * time.Second, + MinTimeout: 2 * time.Second, + } + + if _, err := stateConf.WaitForState(); err != nil { + return err + } + + log.Printf("[INFO] Attached volume %s to instance %s", volumeId, serverId) + } + } + return nil +} + +func detachVolumesFromInstance(computeClient *gophercloud.ServiceClient, blockClient *gophercloud.ServiceClient, serverId string, vols []interface{}) error { + if len(vols) > 0 { + for _, v := range vols { + va := v.(map[string]interface{}) + aId := va["id"].(string) + + if err := volumeattach.Delete(computeClient, serverId, aId).ExtractErr(); err != nil { + return err + } + + stateConf := &resource.StateChangeConf{ + Pending: []string{"detaching", "in-use"}, + Target: "available", + Refresh: VolumeV1StateRefreshFunc(blockClient, va["volume_id"].(string)), + Timeout: 30 * time.Minute, + Delay: 5 * time.Second, + MinTimeout: 2 * time.Second, + } + + if _, err := stateConf.WaitForState(); err != nil { + return err + } + log.Printf("[INFO] Detached volume %s from instance %s", va["volume_id"], serverId) + } + } + + return nil +} + +func getVolumeAttachments(computeClient *gophercloud.ServiceClient, serverId string) ([]volumeattach.VolumeAttachment, error) { + var 
attachments []volumeattach.VolumeAttachment + err := volumeattach.List(computeClient, serverId).EachPage(func(page pagination.Page) (bool, error) { + actual, err := volumeattach.ExtractVolumeAttachments(page) + if err != nil { + return false, err + } + + attachments = actual + return true, nil + }) + + if err != nil { + return nil, err + } + + return attachments, nil +} diff --git a/builtin/providers/openstack/resource_openstack_compute_instance_v2_test.go b/builtin/providers/openstack/resource_openstack_compute_instance_v2_test.go new file mode 100644 index 000000000000..587df56520d5 --- /dev/null +++ b/builtin/providers/openstack/resource_openstack_compute_instance_v2_test.go @@ -0,0 +1,234 @@ +package openstack + +import ( + "fmt" + "os" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + + "github.com/rackspace/gophercloud/openstack/blockstorage/v1/volumes" + "github.com/rackspace/gophercloud/openstack/compute/v2/extensions/floatingip" + "github.com/rackspace/gophercloud/openstack/compute/v2/extensions/volumeattach" + "github.com/rackspace/gophercloud/openstack/compute/v2/servers" + "github.com/rackspace/gophercloud/pagination" +) + +func TestAccComputeV2Instance_basic(t *testing.T) { + var instance servers.Server + var testAccComputeV2Instance_basic = fmt.Sprintf(` + resource "openstack_compute_instance_v2" "foo" { + name = "terraform-test" + network { + uuid = "%s" + } + metadata { + foo = "bar" + } + }`, + os.Getenv("OS_NETWORK_ID")) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeV2InstanceDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccComputeV2Instance_basic, + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeV2InstanceExists(t, "openstack_compute_instance_v2.foo", &instance), + testAccCheckComputeV2InstanceMetadata(&instance, "foo", "bar"), + ), + }, + }, + 
}) +} + +func TestAccComputeV2Instance_volumeAttach(t *testing.T) { + var instance servers.Server + var volume volumes.Volume + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeV2InstanceDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccComputeV2Instance_volumeAttach, + Check: resource.ComposeTestCheckFunc( + testAccCheckBlockStorageV1VolumeExists(t, "openstack_blockstorage_volume_v1.myvol", &volume), + testAccCheckComputeV2InstanceExists(t, "openstack_compute_instance_v2.foo", &instance), + testAccCheckComputeV2InstanceVolumeAttachment(&instance, &volume), + ), + }, + }, + }) +} + +func TestAccComputeV2Instance_floatingIPAttach(t *testing.T) { + var instance servers.Server + var fip floatingip.FloatingIP + var testAccComputeV2Instance_floatingIPAttach = fmt.Sprintf(` + resource "openstack_compute_floatingip_v2" "myip" { + } + + resource "openstack_compute_instance_v2" "foo" { + name = "terraform-test" + floating_ip = "${openstack_compute_floatingip_v2.myip.address}" + + network { + uuid = "%s" + } + }`, + os.Getenv("OS_NETWORK_ID")) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeV2InstanceDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccComputeV2Instance_floatingIPAttach, + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeV2FloatingIPExists(t, "openstack_compute_floatingip_v2.myip", &fip), + testAccCheckComputeV2InstanceExists(t, "openstack_compute_instance_v2.foo", &instance), + testAccCheckComputeV2InstanceFloatingIPAttach(&instance, &fip), + ), + }, + }, + }) +} + +func testAccCheckComputeV2InstanceDestroy(s *terraform.State) error { + config := testAccProvider.Meta().(*Config) + computeClient, err := config.computeV2Client(OS_REGION_NAME) + if err != nil { + return 
fmt.Errorf("(testAccCheckComputeV2InstanceDestroy) Error creating OpenStack compute client: %s", err) + } + + for _, rs := range s.RootModule().Resources { + if rs.Type != "openstack_compute_instance_v2" { + continue + } + + _, err := servers.Get(computeClient, rs.Primary.ID).Extract() + if err == nil { + return fmt.Errorf("Instance still exists") + } + } + + return nil +} + +func testAccCheckComputeV2InstanceExists(t *testing.T, n string, instance *servers.Server) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + config := testAccProvider.Meta().(*Config) + computeClient, err := config.computeV2Client(OS_REGION_NAME) + if err != nil { + return fmt.Errorf("(testAccCheckComputeV2InstanceExists) Error creating OpenStack compute client: %s", err) + } + + found, err := servers.Get(computeClient, rs.Primary.ID).Extract() + if err != nil { + return err + } + + if found.ID != rs.Primary.ID { + return fmt.Errorf("Instance not found") + } + + *instance = *found + + return nil + } +} + +func testAccCheckComputeV2InstanceMetadata( + instance *servers.Server, k string, v string) resource.TestCheckFunc { + return func(s *terraform.State) error { + if instance.Metadata == nil { + return fmt.Errorf("No metadata") + } + + for key, value := range instance.Metadata { + if k != key { + continue + } + + if v == value.(string) { + return nil + } + + return fmt.Errorf("Bad value for %s: %s", k, value) + } + + return fmt.Errorf("Metadata not found: %s", k) + } +} + +func testAccCheckComputeV2InstanceVolumeAttachment( + instance *servers.Server, volume *volumes.Volume) resource.TestCheckFunc { + return func(s *terraform.State) error { + var attachments []volumeattach.VolumeAttachment + + config := testAccProvider.Meta().(*Config) + computeClient, err := config.computeV2Client(OS_REGION_NAME) + if err != nil 
{ + return err + } + err = volumeattach.List(computeClient, instance.ID).EachPage(func(page pagination.Page) (bool, error) { + actual, err := volumeattach.ExtractVolumeAttachments(page) + if err != nil { + return false, fmt.Errorf("Unable to lookup attachment: %s", err) + } + + attachments = actual + return true, nil + }) + + for _, attachment := range attachments { + if attachment.VolumeID == volume.ID { + return nil + } + } + + return fmt.Errorf("Volume not found: %s", volume.ID) + } +} + +func testAccCheckComputeV2InstanceFloatingIPAttach( + instance *servers.Server, fip *floatingip.FloatingIP) resource.TestCheckFunc { + return func(s *terraform.State) error { + if fip.InstanceID == instance.ID { + return nil + } + + return fmt.Errorf("Floating IP %s was not attached to instance %s", fip.ID, instance.ID) + + } +} + +var testAccComputeV2Instance_volumeAttach = fmt.Sprintf(` + resource "openstack_blockstorage_volume_v1" "myvol" { + name = "myvol" + size = 1 + } + + resource "openstack_compute_instance_v2" "foo" { + region = "%s" + name = "terraform-test" + volume { + volume_id = "${openstack_blockstorage_volume_v1.myvol.id}" + } + }`, + OS_REGION_NAME) diff --git a/builtin/providers/openstack/resource_openstack_compute_keypair_v2.go b/builtin/providers/openstack/resource_openstack_compute_keypair_v2.go new file mode 100644 index 000000000000..bc9a28b38dcc --- /dev/null +++ b/builtin/providers/openstack/resource_openstack_compute_keypair_v2.go @@ -0,0 +1,92 @@ +package openstack + +import ( + "fmt" + "log" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/rackspace/gophercloud/openstack/compute/v2/extensions/keypairs" +) + +func resourceComputeKeypairV2() *schema.Resource { + return &schema.Resource{ + Create: resourceComputeKeypairV2Create, + Read: resourceComputeKeypairV2Read, + Delete: resourceComputeKeypairV2Delete, + + Schema: map[string]*schema.Schema{ + "region": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + 
DefaultFunc: envDefaultFuncAllowMissing("OS_REGION_NAME"), + }, + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "public_key": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + }, + } +} + +func resourceComputeKeypairV2Create(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + computeClient, err := config.computeV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack compute client: %s", err) + } + + createOpts := keypairs.CreateOpts{ + Name: d.Get("name").(string), + PublicKey: d.Get("public_key").(string), + } + + log.Printf("[DEBUG] Create Options: %#v", createOpts) + kp, err := keypairs.Create(computeClient, createOpts).Extract() + if err != nil { + return fmt.Errorf("Error creating OpenStack keypair: %s", err) + } + + d.SetId(kp.Name) + + return resourceComputeKeypairV2Read(d, meta) +} + +func resourceComputeKeypairV2Read(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + computeClient, err := config.computeV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack compute client: %s", err) + } + + kp, err := keypairs.Get(computeClient, d.Id()).Extract() + if err != nil { + return CheckDeleted(d, err, "keypair") + } + + d.Set("name", kp.Name) + d.Set("public_key", kp.PublicKey) + + return nil +} + +func resourceComputeKeypairV2Delete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + computeClient, err := config.computeV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack compute client: %s", err) + } + + err = keypairs.Delete(computeClient, d.Id()).ExtractErr() + if err != nil { + return fmt.Errorf("Error deleting OpenStack keypair: %s", err) + } + d.SetId("") + return nil +} diff --git a/builtin/providers/openstack/resource_openstack_compute_keypair_v2_test.go 
b/builtin/providers/openstack/resource_openstack_compute_keypair_v2_test.go new file mode 100644 index 000000000000..da090bcd8373 --- /dev/null +++ b/builtin/providers/openstack/resource_openstack_compute_keypair_v2_test.go @@ -0,0 +1,90 @@ +package openstack + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + + "github.com/rackspace/gophercloud/openstack/compute/v2/extensions/keypairs" +) + +func TestAccComputeV2Keypair_basic(t *testing.T) { + var keypair keypairs.KeyPair + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeV2KeypairDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccComputeV2Keypair_basic, + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeV2KeypairExists(t, "openstack_compute_keypair_v2.foo", &keypair), + ), + }, + }, + }) +} + +func testAccCheckComputeV2KeypairDestroy(s *terraform.State) error { + config := testAccProvider.Meta().(*Config) + computeClient, err := config.computeV2Client(OS_REGION_NAME) + if err != nil { + return fmt.Errorf("(testAccCheckComputeV2KeypairDestroy) Error creating OpenStack compute client: %s", err) + } + + for _, rs := range s.RootModule().Resources { + if rs.Type != "openstack_compute_keypair_v2" { + continue + } + + _, err := keypairs.Get(computeClient, rs.Primary.ID).Extract() + if err == nil { + return fmt.Errorf("Keypair still exists") + } + } + + return nil +} + +func testAccCheckComputeV2KeypairExists(t *testing.T, n string, kp *keypairs.KeyPair) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + config := testAccProvider.Meta().(*Config) + computeClient, err := config.computeV2Client(OS_REGION_NAME) + if err != nil { + 
return fmt.Errorf("(testAccCheckComputeV2KeypairExists) Error creating OpenStack compute client: %s", err) + } + + found, err := keypairs.Get(computeClient, rs.Primary.ID).Extract() + if err != nil { + return err + } + + if found.Name != rs.Primary.ID { + return fmt.Errorf("Keypair not found") + } + + *kp = *found + + return nil + } +} + +var testAccComputeV2Keypair_basic = fmt.Sprintf(` + resource "openstack_compute_keypair_v2" "foo" { + region = "%s" + name = "test-keypair-tf" + public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDAjpC1hwiOCCmKEWxJ4qzTTsJbKzndLo1BCz5PcwtUnflmU+gHJtWMZKpuEGVi29h0A/+ydKek1O18k10Ff+4tyFjiHDQAT9+OfgWf7+b1yK+qDip3X1C0UPMbwHlTfSGWLGZquwhvEFx9k3h/M+VtMvwR1lJ9LUyTAImnNjWG7TAIPmui30HvM2UiFEmqkr4ijq45MyX2+fLIePLRIFuu1p4whjHAQYufqyno3BS48icQb4p6iVEZPo4AE2o9oIyQvj2mx4dk5Y8CgSETOZTYDOR3rU2fZTRDRgPJDH9FWvQjF5tA0p3d9CoWWd2s6GKKbfoUIi8R/Db1BSPJwkqB jrp-hp-pc" + }`, + OS_REGION_NAME) diff --git a/builtin/providers/openstack/resource_openstack_compute_secgroup_v2.go b/builtin/providers/openstack/resource_openstack_compute_secgroup_v2.go new file mode 100644 index 000000000000..e6d8be8ea135 --- /dev/null +++ b/builtin/providers/openstack/resource_openstack_compute_secgroup_v2.go @@ -0,0 +1,294 @@ +package openstack + +import ( + "bytes" + "fmt" + "log" + + "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/schema" + "github.com/rackspace/gophercloud" + "github.com/rackspace/gophercloud/openstack/compute/v2/extensions/secgroups" +) + +func resourceComputeSecGroupV2() *schema.Resource { + return &schema.Resource{ + Create: resourceComputeSecGroupV2Create, + Read: resourceComputeSecGroupV2Read, + Update: resourceComputeSecGroupV2Update, + Delete: resourceComputeSecGroupV2Delete, + + Schema: map[string]*schema.Schema{ + "region": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + DefaultFunc: envDefaultFuncAllowMissing("OS_REGION_NAME"), + }, + "name": &schema.Schema{ + Type: 
schema.TypeString, + Required: true, + ForceNew: false, + }, + "description": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: false, + }, + "rule": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "id": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + "from_port": &schema.Schema{ + Type: schema.TypeInt, + Required: true, + ForceNew: false, + }, + "to_port": &schema.Schema{ + Type: schema.TypeInt, + Required: true, + ForceNew: false, + }, + "ip_protocol": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: false, + }, + "cidr": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: false, + }, + "from_group_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: false, + }, + "self": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: false, + ForceNew: false, + }, + }, + }, + }, + }, + } +} + +func resourceComputeSecGroupV2Create(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + computeClient, err := config.computeV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack compute client: %s", err) + } + + createOpts := secgroups.CreateOpts{ + Name: d.Get("name").(string), + Description: d.Get("description").(string), + } + + log.Printf("[DEBUG] Create Options: %#v", createOpts) + sg, err := secgroups.Create(computeClient, createOpts).Extract() + if err != nil { + return fmt.Errorf("Error creating OpenStack security group: %s", err) + } + + d.SetId(sg.ID) + + createRuleOptsList := resourceSecGroupRulesV2(d) + for _, createRuleOpts := range createRuleOptsList { + _, err := secgroups.CreateRule(computeClient, createRuleOpts).Extract() + if err != nil { + return fmt.Errorf("Error creating OpenStack security group rule: %s", err) + } + } + + return resourceComputeSecGroupV2Read(d, meta) +} + +func 
resourceComputeSecGroupV2Read(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + computeClient, err := config.computeV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack compute client: %s", err) + } + + sg, err := secgroups.Get(computeClient, d.Id()).Extract() + if err != nil { + return CheckDeleted(d, err, "security group") + } + + d.Set("name", sg.Name) + d.Set("description", sg.Description) + rtm := rulesToMap(sg.Rules) + for _, v := range rtm { + if v["group"] == d.Get("name") { + v["self"] = "1" + } else { + v["self"] = "0" + } + } + log.Printf("[DEBUG] rulesToMap(sg.Rules): %+v", rtm) + d.Set("rule", rtm) + + return nil +} + +func resourceComputeSecGroupV2Update(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + computeClient, err := config.computeV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack compute client: %s", err) + } + + updateOpts := secgroups.UpdateOpts{ + Name: d.Get("name").(string), + Description: d.Get("description").(string), + } + + log.Printf("[DEBUG] Updating Security Group (%s) with options: %+v", d.Id(), updateOpts) + + _, err = secgroups.Update(computeClient, d.Id(), updateOpts).Extract() + if err != nil { + return fmt.Errorf("Error updating OpenStack security group (%s): %s", d.Id(), err) + } + + if d.HasChange("rule") { + oldSGRaw, newSGRaw := d.GetChange("rule") + oldSGRSlice, newSGRSlice := oldSGRaw.([]interface{}), newSGRaw.([]interface{}) + oldSGRSet := schema.NewSet(secgroupRuleV2Hash, oldSGRSlice) + newSGRSet := schema.NewSet(secgroupRuleV2Hash, newSGRSlice) + secgrouprulesToAdd := newSGRSet.Difference(oldSGRSet) + secgrouprulesToRemove := oldSGRSet.Difference(newSGRSet) + + log.Printf("[DEBUG] Security group rules to add: %v", secgrouprulesToAdd) + + log.Printf("[DEBUG] Security groups rules to remove: %v", secgrouprulesToRemove) + + for _, rawRule := range 
secgrouprulesToAdd.List() { + createRuleOpts := resourceSecGroupRuleCreateOptsV2(d, rawRule) + rule, err := secgroups.CreateRule(computeClient, createRuleOpts).Extract() + if err != nil { + return fmt.Errorf("Error adding rule to OpenStack security group (%s): %s", d.Id(), err) + } + log.Printf("[DEBUG] Added rule (%s) to OpenStack security group (%s)", rule.ID, d.Id()) + } + + for _, r := range secgrouprulesToRemove.List() { + rule := resourceSecGroupRuleV2(d, r) + err := secgroups.DeleteRule(computeClient, rule.ID).ExtractErr() + if err != nil { + errCode, ok := err.(*gophercloud.UnexpectedResponseCodeError) + if !ok { + return fmt.Errorf("Error removing rule (%s) from OpenStack security group (%s): %s", rule.ID, d.Id(), err) + } + if errCode.Actual == 404 { + continue + } else { + return fmt.Errorf("Error removing rule (%s) from OpenStack security group (%s): %s", rule.ID, d.Id(), err) + } + } else { + log.Printf("[DEBUG] Removed rule (%s) from OpenStack security group (%s)", rule.ID, d.Id()) + } + } + } + + return resourceComputeSecGroupV2Read(d, meta) +} + +func resourceComputeSecGroupV2Delete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + computeClient, err := config.computeV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack compute client: %s", err) + } + + err = secgroups.Delete(computeClient, d.Id()).ExtractErr() + if err != nil { + return fmt.Errorf("Error deleting OpenStack security group: %s", err) + } + d.SetId("") + return nil +} + +func resourceSecGroupRulesV2(d *schema.ResourceData) []secgroups.CreateRuleOpts { + rawRules := (d.Get("rule")).([]interface{}) + createRuleOptsList := make([]secgroups.CreateRuleOpts, len(rawRules)) + for i, raw := range rawRules { + rawMap := raw.(map[string]interface{}) + groupId := rawMap["from_group_id"].(string) + if rawMap["self"].(bool) { + groupId = d.Id() + } + createRuleOptsList[i] = secgroups.CreateRuleOpts{ + ParentGroupID: d.Id(), 
+ FromPort: rawMap["from_port"].(int), + ToPort: rawMap["to_port"].(int), + IPProtocol: rawMap["ip_protocol"].(string), + CIDR: rawMap["cidr"].(string), + FromGroupID: groupId, + } + } + return createRuleOptsList +} + +func resourceSecGroupRuleCreateOptsV2(d *schema.ResourceData, raw interface{}) secgroups.CreateRuleOpts { + rawMap := raw.(map[string]interface{}) + groupId := rawMap["from_group_id"].(string) + if rawMap["self"].(bool) { + groupId = d.Id() + } + return secgroups.CreateRuleOpts{ + ParentGroupID: d.Id(), + FromPort: rawMap["from_port"].(int), + ToPort: rawMap["to_port"].(int), + IPProtocol: rawMap["ip_protocol"].(string), + CIDR: rawMap["cidr"].(string), + FromGroupID: groupId, + } +} + +func resourceSecGroupRuleV2(d *schema.ResourceData, raw interface{}) secgroups.Rule { + rawMap := raw.(map[string]interface{}) + return secgroups.Rule{ + ID: rawMap["id"].(string), + ParentGroupID: d.Id(), + FromPort: rawMap["from_port"].(int), + ToPort: rawMap["to_port"].(int), + IPProtocol: rawMap["ip_protocol"].(string), + IPRange: secgroups.IPRange{CIDR: rawMap["cidr"].(string)}, + } +} + +func rulesToMap(sgrs []secgroups.Rule) []map[string]interface{} { + sgrMap := make([]map[string]interface{}, len(sgrs)) + for i, sgr := range sgrs { + sgrMap[i] = map[string]interface{}{ + "id": sgr.ID, + "from_port": sgr.FromPort, + "to_port": sgr.ToPort, + "ip_protocol": sgr.IPProtocol, + "cidr": sgr.IPRange.CIDR, + "group": sgr.Group.Name, + } + } + return sgrMap +} + +func secgroupRuleV2Hash(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + buf.WriteString(fmt.Sprintf("%d-", m["from_port"].(int))) + buf.WriteString(fmt.Sprintf("%d-", m["to_port"].(int))) + buf.WriteString(fmt.Sprintf("%s-", m["ip_protocol"].(string))) + buf.WriteString(fmt.Sprintf("%s-", m["cidr"].(string))) + + return hashcode.String(buf.String()) +} diff --git a/builtin/providers/openstack/resource_openstack_compute_secgroup_v2_test.go 
b/builtin/providers/openstack/resource_openstack_compute_secgroup_v2_test.go new file mode 100644 index 000000000000..e78865b8a5d6 --- /dev/null +++ b/builtin/providers/openstack/resource_openstack_compute_secgroup_v2_test.go @@ -0,0 +1,90 @@ +package openstack + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + + "github.com/rackspace/gophercloud/openstack/compute/v2/extensions/secgroups" +) + +func TestAccComputeV2SecGroup_basic(t *testing.T) { + var secgroup secgroups.SecurityGroup + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeV2SecGroupDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccComputeV2SecGroup_basic, + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeV2SecGroupExists(t, "openstack_compute_secgroup_v2.foo", &secgroup), + ), + }, + }, + }) +} + +func testAccCheckComputeV2SecGroupDestroy(s *terraform.State) error { + config := testAccProvider.Meta().(*Config) + computeClient, err := config.computeV2Client(OS_REGION_NAME) + if err != nil { + return fmt.Errorf("(testAccCheckComputeV2SecGroupDestroy) Error creating OpenStack compute client: %s", err) + } + + for _, rs := range s.RootModule().Resources { + if rs.Type != "openstack_compute_secgroup_v2" { + continue + } + + _, err := secgroups.Get(computeClient, rs.Primary.ID).Extract() + if err == nil { + return fmt.Errorf("Security group still exists") + } + } + + return nil +} + +func testAccCheckComputeV2SecGroupExists(t *testing.T, n string, secgroup *secgroups.SecurityGroup) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + config := testAccProvider.Meta().(*Config) + computeClient, err := 
config.computeV2Client(OS_REGION_NAME) + if err != nil { + return fmt.Errorf("(testAccCheckComputeV2SecGroupExists) Error creating OpenStack compute client: %s", err) + } + + found, err := secgroups.Get(computeClient, rs.Primary.ID).Extract() + if err != nil { + return err + } + + if found.ID != rs.Primary.ID { + return fmt.Errorf("Security group not found") + } + + *secgroup = *found + + return nil + } +} + +var testAccComputeV2SecGroup_basic = fmt.Sprintf(` + resource "openstack_compute_secgroup_v2" "foo" { + region = "%s" + name = "test_group_1" + description = "first test security group" + }`, + OS_REGION_NAME) diff --git a/builtin/providers/openstack/resource_openstack_fw_firewall_v1.go b/builtin/providers/openstack/resource_openstack_fw_firewall_v1.go new file mode 100644 index 000000000000..e845babdc016 --- /dev/null +++ b/builtin/providers/openstack/resource_openstack_fw_firewall_v1.go @@ -0,0 +1,242 @@ +package openstack + +import ( + "fmt" + "log" + "time" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" + "github.com/rackspace/gophercloud" + "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/fwaas/firewalls" +) + +func resourceFWFirewallV1() *schema.Resource { + return &schema.Resource{ + Create: resourceFWFirewallV1Create, + Read: resourceFWFirewallV1Read, + Update: resourceFWFirewallV1Update, + Delete: resourceFWFirewallV1Delete, + + Schema: map[string]*schema.Schema{ + "region": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + DefaultFunc: envDefaultFuncAllowMissing("OS_REGION_NAME"), + }, + "name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + "description": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + "policy_id": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + "admin_state_up": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + "tenant_id": 
&schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + }, + } +} + +func resourceFWFirewallV1Create(d *schema.ResourceData, meta interface{}) error { + + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + adminStateUp := d.Get("admin_state_up").(bool) + + firewallConfiguration := firewalls.CreateOpts{ + Name: d.Get("name").(string), + Description: d.Get("description").(string), + PolicyID: d.Get("policy_id").(string), + AdminStateUp: &adminStateUp, + TenantID: d.Get("tenant_id").(string), + } + + log.Printf("[DEBUG] Create firewall: %#v", firewallConfiguration) + + firewall, err := firewalls.Create(networkingClient, firewallConfiguration).Extract() + if err != nil { + return err + } + + log.Printf("[DEBUG] Firewall created: %#v", firewall) + + stateConf := &resource.StateChangeConf{ + Pending: []string{"PENDING_CREATE"}, + Target: "ACTIVE", + Refresh: waitForFirewallActive(networkingClient, firewall.ID), + Timeout: 30 * time.Second, + Delay: 0, + MinTimeout: 2 * time.Second, + } + + _, err = stateConf.WaitForState() + if err != nil { + return err + } + + d.SetId(firewall.ID) + + return resourceFWFirewallV1Read(d, meta) +} + +func resourceFWFirewallV1Read(d *schema.ResourceData, meta interface{}) error { + log.Printf("[DEBUG] Retrieve information about firewall: %s", d.Id()) + + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + firewall, err := firewalls.Get(networkingClient, d.Id()).Extract() + + if err != nil { + return CheckDeleted(d, err, "firewall") + } + + d.Set("name", firewall.Name) + d.Set("description", firewall.Description) + d.Set("policy_id", firewall.PolicyID) + d.Set("admin_state_up", firewall.AdminStateUp) + d.Set("tenant_id", firewall.TenantID) + 
return nil +} + +func resourceFWFirewallV1Update(d *schema.ResourceData, meta interface{}) error { + + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + opts := firewalls.UpdateOpts{} + + if d.HasChange("name") { + opts.Name = d.Get("name").(string) + } + + if d.HasChange("description") { + opts.Description = d.Get("description").(string) + } + + if d.HasChange("policy_id") { + opts.PolicyID = d.Get("policy_id").(string) + } + + if d.HasChange("admin_state_up") { + adminStateUp := d.Get("admin_state_up").(bool) + opts.AdminStateUp = &adminStateUp + } + + log.Printf("[DEBUG] Updating firewall with id %s: %#v", d.Id(), opts) + + stateConf := &resource.StateChangeConf{ + Pending: []string{"PENDING_CREATE", "PENDING_UPDATE"}, + Target: "ACTIVE", + Refresh: waitForFirewallActive(networkingClient, d.Id()), + Timeout: 30 * time.Second, + Delay: 0, + MinTimeout: 2 * time.Second, + } + + _, err = stateConf.WaitForState() + if err != nil { + return err + } + + err = firewalls.Update(networkingClient, d.Id(), opts).Err + if err != nil { + return err + } + + return resourceFWFirewallV1Read(d, meta) +} + +func resourceFWFirewallV1Delete(d *schema.ResourceData, meta interface{}) error { + log.Printf("[DEBUG] Destroy firewall: %s", d.Id()) + + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + stateConf := &resource.StateChangeConf{ + Pending: []string{"PENDING_CREATE", "PENDING_UPDATE"}, + Target: "ACTIVE", + Refresh: waitForFirewallActive(networkingClient, d.Id()), + Timeout: 30 * time.Second, + Delay: 0, + MinTimeout: 2 * time.Second, + } + + _, err = stateConf.WaitForState() + if err != nil { + return err + } + + err = firewalls.Delete(networkingClient, d.Id()).Err + + if err != nil { + return err + } + + stateConf = 
&resource.StateChangeConf{ + Pending: []string{"DELETING"}, + Target: "DELETED", + Refresh: waitForFirewallDeletion(networkingClient, d.Id()), + Timeout: 2 * time.Minute, + Delay: 0, + MinTimeout: 2 * time.Second, + } + + _, err = stateConf.WaitForState() + + return err +} + +func waitForFirewallActive(networkingClient *gophercloud.ServiceClient, id string) resource.StateRefreshFunc { + + return func() (interface{}, string, error) { + fw, err := firewalls.Get(networkingClient, id).Extract() + log.Printf("[DEBUG] Get firewall %s => %#v", id, fw) + + if err != nil { + return nil, "", err + } + return fw, fw.Status, nil + } +} + +func waitForFirewallDeletion(networkingClient *gophercloud.ServiceClient, id string) resource.StateRefreshFunc { + + return func() (interface{}, string, error) { + fw, err := firewalls.Get(networkingClient, id).Extract() + log.Printf("[DEBUG] Get firewall %s => %#v", id, fw) + + if err != nil { + httpStatus, ok := err.(*gophercloud.UnexpectedResponseCodeError) + if !ok { + return nil, "", err + } + log.Printf("[DEBUG] Get firewall %s status is %d", id, httpStatus.Actual) + + if httpStatus.Actual == 404 { + log.Printf("[DEBUG] Firewall %s is actually deleted", id) + return "", "DELETED", nil + } + return nil, "", fmt.Errorf("Unexpected status code %d", httpStatus.Actual) + } + + log.Printf("[DEBUG] Firewall %s deletion is pending", id) + return fw, "DELETING", nil + } +} diff --git a/builtin/providers/openstack/resource_openstack_fw_firewall_v1_test.go b/builtin/providers/openstack/resource_openstack_fw_firewall_v1_test.go new file mode 100644 index 000000000000..34112f778f09 --- /dev/null +++ b/builtin/providers/openstack/resource_openstack_fw_firewall_v1_test.go @@ -0,0 +1,139 @@ +package openstack + +import ( + "fmt" + "testing" + "time" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + "github.com/rackspace/gophercloud" + "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/fwaas/firewalls" +) + +func 
TestAccFWFirewallV1(t *testing.T) { + + var policyID *string + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckFWFirewallV1Destroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testFirewallConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckFWFirewallV1Exists("openstack_fw_firewall_v1.accept_test", "", "", policyID), + ), + }, + resource.TestStep{ + Config: testFirewallConfigUpdated, + Check: resource.ComposeTestCheckFunc( + testAccCheckFWFirewallV1Exists("openstack_fw_firewall_v1.accept_test", "accept_test", "terraform acceptance test", policyID), + ), + }, + }, + }) +} + +func testAccCheckFWFirewallV1Destroy(s *terraform.State) error { + + config := testAccProvider.Meta().(*Config) + networkingClient, err := config.networkingV2Client(OS_REGION_NAME) + if err != nil { + return fmt.Errorf("(testAccCheckOpenstackFirewallDestroy) Error creating OpenStack networking client: %s", err) + } + for _, rs := range s.RootModule().Resources { + if rs.Type != "openstack_fw_firewall_v1" { + continue + } + _, err = firewalls.Get(networkingClient, rs.Primary.ID).Extract() + if err == nil { + return fmt.Errorf("Firewall (%s) still exists.", rs.Primary.ID) + } + httpError, ok := err.(*gophercloud.UnexpectedResponseCodeError) + if !ok || httpError.Actual != 404 { + return httpError + } + } + return nil +} + +func testAccCheckFWFirewallV1Exists(n, expectedName, expectedDescription string, policyID *string) resource.TestCheckFunc { + + return func(s *terraform.State) error { + + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + config := testAccProvider.Meta().(*Config) + networkingClient, err := config.networkingV2Client(OS_REGION_NAME) + if err != nil { + return fmt.Errorf("(testAccCheckFirewallExists) Error creating OpenStack networking client: %s", err) + } 
+ + var found *firewalls.Firewall + for i := 0; i < 5; i++ { + // Firewall creation is asynchronous. Retry a few times + // if we get a 404 error. Fail on any other error. + found, err = firewalls.Get(networkingClient, rs.Primary.ID).Extract() + if err != nil { + httpError, ok := err.(*gophercloud.UnexpectedResponseCodeError) + if ok && httpError.Actual == 404 { + time.Sleep(time.Second) + continue + } + } + break + } + + if err != nil { + return err + } + + if found.Name != expectedName { + return fmt.Errorf("Expected Name to be <%s> but found <%s>", expectedName, found.Name) + } + if found.Description != expectedDescription { + return fmt.Errorf("Expected Description to be <%s> but found <%s>", expectedDescription, found.Description) + } + if found.PolicyID == "" { + return fmt.Errorf("Policy should not be empty") + } + if policyID != nil && found.PolicyID == *policyID { + return fmt.Errorf("Policy had not been correctly updated. Went from <%s> to <%s>", *policyID, found.PolicyID) + } + + policyID = &found.PolicyID + + return nil + } +} + +const testFirewallConfig = ` +resource "openstack_fw_firewall_v1" "accept_test" { + policy_id = "${openstack_fw_policy_v1.accept_test_policy_1.id}" +} + +resource "openstack_fw_policy_v1" "accept_test_policy_1" { + name = "policy-1" +} +` + +const testFirewallConfigUpdated = ` +resource "openstack_fw_firewall_v1" "accept_test" { + name = "accept_test" + description = "terraform acceptance test" + policy_id = "${openstack_fw_policy_v1.accept_test_policy_2.id}" +} + +resource "openstack_fw_policy_v1" "accept_test_policy_2" { + name = "policy-2" +} +` diff --git a/builtin/providers/openstack/resource_openstack_fw_policy_v1.go b/builtin/providers/openstack/resource_openstack_fw_policy_v1.go new file mode 100644 index 000000000000..a1c13853cea5 --- /dev/null +++ b/builtin/providers/openstack/resource_openstack_fw_policy_v1.go @@ -0,0 +1,200 @@ +package openstack + +import ( + "fmt" + "log" + "time" + + 
"github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/schema" + "github.com/rackspace/gophercloud" + "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/fwaas/policies" +) + +func resourceFWPolicyV1() *schema.Resource { + return &schema.Resource{ + Create: resourceFWPolicyV1Create, + Read: resourceFWPolicyV1Read, + Update: resourceFWPolicyV1Update, + Delete: resourceFWPolicyV1Delete, + + Schema: map[string]*schema.Schema{ + "region": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + DefaultFunc: envDefaultFuncAllowMissing("OS_REGION_NAME"), + }, + "name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + "description": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + "audited": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "shared": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "tenant_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "rules": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: func(v interface{}) int { + return hashcode.String(v.(string)) + }, + }, + }, + } +} + +func resourceFWPolicyV1Create(d *schema.ResourceData, meta interface{}) error { + + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + v := d.Get("rules").(*schema.Set) + + log.Printf("[DEBUG] Rules found : %#v", v) + log.Printf("[DEBUG] Rules count : %d", v.Len()) + + rules := make([]string, v.Len()) + for i, v := range v.List() { + rules[i] = v.(string) + } + + audited := d.Get("audited").(bool) + shared := d.Get("shared").(bool) + + opts := policies.CreateOpts{ + Name: d.Get("name").(string), + Description: d.Get("description").(string), + 
Audited: &audited, + Shared: &shared, + TenantID: d.Get("tenant_id").(string), + Rules: rules, + } + + log.Printf("[DEBUG] Create firewall policy: %#v", opts) + + policy, err := policies.Create(networkingClient, opts).Extract() + if err != nil { + return err + } + + log.Printf("[DEBUG] Firewall policy created: %#v", policy) + + d.SetId(policy.ID) + + return resourceFWPolicyV1Read(d, meta) +} + +func resourceFWPolicyV1Read(d *schema.ResourceData, meta interface{}) error { + log.Printf("[DEBUG] Retrieve information about firewall policy: %s", d.Id()) + + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + policy, err := policies.Get(networkingClient, d.Id()).Extract() + + if err != nil { + return CheckDeleted(d, err, "firewall policy") + } + + d.Set("name", policy.Name) + d.Set("description", policy.Description) + d.Set("shared", policy.Shared) + d.Set("audited", policy.Audited) + d.Set("tenant_id", policy.TenantID) + return nil +} + +func resourceFWPolicyV1Update(d *schema.ResourceData, meta interface{}) error { + + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + opts := policies.UpdateOpts{} + + if d.HasChange("name") { + opts.Name = d.Get("name").(string) + } + + if d.HasChange("description") { + opts.Description = d.Get("description").(string) + } + + if d.HasChange("rules") { + v := d.Get("rules").(*schema.Set) + + log.Printf("[DEBUG] Rules found : %#v", v) + log.Printf("[DEBUG] Rules count : %d", v.Len()) + + rules := make([]string, v.Len()) + for i, v := range v.List() { + rules[i] = v.(string) + } + opts.Rules = rules + } + + log.Printf("[DEBUG] Updating firewall policy with id %s: %#v", d.Id(), opts) + + err = policies.Update(networkingClient, d.Id(), opts).Err 
+ if err != nil { + return err + } + + return resourceFWPolicyV1Read(d, meta) +} + +func resourceFWPolicyV1Delete(d *schema.ResourceData, meta interface{}) error { + log.Printf("[DEBUG] Destroy firewall policy: %s", d.Id()) + + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + for i := 0; i < 15; i++ { + + err = policies.Delete(networkingClient, d.Id()).Err + if err == nil { + break + } + + httpError, ok := err.(*gophercloud.UnexpectedResponseCodeError) + if !ok || httpError.Actual != 409 { + return err + } + + // This error usually means that the policy is attached + // to a firewall. At this point, the firewall is probably + // being deleted. So, we retry a few times. + + time.Sleep(time.Second * 2) + } + + return err +} diff --git a/builtin/providers/openstack/resource_openstack_fw_policy_v1_test.go b/builtin/providers/openstack/resource_openstack_fw_policy_v1_test.go new file mode 100644 index 000000000000..1a37a383f732 --- /dev/null +++ b/builtin/providers/openstack/resource_openstack_fw_policy_v1_test.go @@ -0,0 +1,165 @@ +package openstack + +import ( + "fmt" + "testing" + "time" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + "github.com/rackspace/gophercloud" + "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/fwaas/policies" +) + +func TestAccFWPolicyV1(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckFWPolicyV1Destroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testFirewallPolicyConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckFWPolicyV1Exists( + "openstack_fw_policy_v1.accept_test", + "", "", 0), + ), + }, + resource.TestStep{ + Config: testFirewallPolicyConfigAddRules, + Check: 
resource.ComposeTestCheckFunc( + testAccCheckFWPolicyV1Exists( + "openstack_fw_policy_v1.accept_test", + "accept_test", "terraform acceptance test", 2), + ), + }, + resource.TestStep{ + Config: testFirewallPolicyUpdateDeleteRule, + Check: resource.ComposeTestCheckFunc( + testAccCheckFWPolicyV1Exists( + "openstack_fw_policy_v1.accept_test", + "accept_test", "terraform acceptance test", 1), + ), + }, + }, + }) +} + +func testAccCheckFWPolicyV1Destroy(s *terraform.State) error { + + config := testAccProvider.Meta().(*Config) + networkingClient, err := config.networkingV2Client(OS_REGION_NAME) + if err != nil { + return fmt.Errorf("(testAccCheckOpenstackFirewallPolicyDestroy) Error creating OpenStack networking client: %s", err) + } + for _, rs := range s.RootModule().Resources { + if rs.Type != "openstack_fw_policy_v1" { + continue + } + _, err = policies.Get(networkingClient, rs.Primary.ID).Extract() + if err == nil { + return fmt.Errorf("Firewall policy (%s) still exists.", rs.Primary.ID) + } + httpError, ok := err.(*gophercloud.UnexpectedResponseCodeError) + if !ok || httpError.Actual != 404 { + return httpError + } + } + return nil +} + +func testAccCheckFWPolicyV1Exists(n, name, description string, ruleCount int) resource.TestCheckFunc { + + return func(s *terraform.State) error { + + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + config := testAccProvider.Meta().(*Config) + networkingClient, err := config.networkingV2Client(OS_REGION_NAME) + if err != nil { + return fmt.Errorf("(testAccCheckFirewallPolicyExists) Error creating OpenStack networking client: %s", err) + } + + var found *policies.Policy + for i := 0; i < 5; i++ { + // Firewall policy creation is asynchronous. Retry a few times + // if we get a 404 error. Fail on any other error. + found, err = policies.Get(networkingClient, rs.Primary.ID).Extract() + if err != nil { + httpError, ok := err.(*gophercloud.UnexpectedResponseCodeError) + if ok && httpError.Actual == 404 { + time.Sleep(time.Second) + continue + } + } + break + } + + if err != nil { + return err + } + + if name != found.Name { + return fmt.Errorf("Expected name <%s>, but found <%s>", name, found.Name) + } + + if description != found.Description { + return fmt.Errorf("Expected description <%s>, but found <%s>", description, found.Description) + } + + if ruleCount != len(found.Rules) { + return fmt.Errorf("Expected rule count <%d>, but found <%d>", ruleCount, len(found.Rules)) + } + + return nil + } +} + +const testFirewallPolicyConfig = ` +resource "openstack_fw_policy_v1" "accept_test" { + +} +` + +const testFirewallPolicyConfigAddRules = ` +resource "openstack_fw_policy_v1" "accept_test" { + name = "accept_test" + description = "terraform acceptance test" + rules = [ + "${openstack_fw_rule_v1.accept_test_udp_deny.id}", + "${openstack_fw_rule_v1.accept_test_tcp_allow.id}" + ] +} + +resource "openstack_fw_rule_v1" "accept_test_tcp_allow" { + protocol = "tcp" + action = "allow" +} + +resource "openstack_fw_rule_v1" "accept_test_udp_deny" { + protocol = "udp" + action = "deny" +} +` + +const testFirewallPolicyUpdateDeleteRule = ` +resource "openstack_fw_policy_v1" "accept_test" { + name = "accept_test" + description = "terraform acceptance test" + rules = [ + "${openstack_fw_rule_v1.accept_test_udp_deny.id}" + ] +} + +resource "openstack_fw_rule_v1" "accept_test_udp_deny" { + protocol = "udp" + action = "deny" +} +` diff --git a/builtin/providers/openstack/resource_openstack_fw_rule_v1.go b/builtin/providers/openstack/resource_openstack_fw_rule_v1.go new file mode 100644 index 000000000000..47728ab3feb8 --- /dev/null +++ b/builtin/providers/openstack/resource_openstack_fw_rule_v1.go @@ -0,0 +1,223 @@ +package openstack + +import ( + "fmt" + "log" + + 
"github.com/hashicorp/terraform/helper/schema" + "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/fwaas/policies" + "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/fwaas/rules" +) + +func resourceFWRuleV1() *schema.Resource { + return &schema.Resource{ + Create: resourceFWRuleV1Create, + Read: resourceFWRuleV1Read, + Update: resourceFWRuleV1Update, + Delete: resourceFWRuleV1Delete, + + Schema: map[string]*schema.Schema{ + "region": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + DefaultFunc: envDefaultFuncAllowMissing("OS_REGION_NAME"), + }, + "name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + "description": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + "protocol": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + "action": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + "ip_version": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Default: 4, + }, + "source_ip_address": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + "destination_ip_address": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + "source_port": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + "destination_port": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + }, + "enabled": &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + "tenant_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + }, + } +} + +func resourceFWRuleV1Create(d *schema.ResourceData, meta interface{}) error { + + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + enabled := d.Get("enabled").(bool) + + ruleConfiguration := rules.CreateOpts{ + Name: d.Get("name").(string), + 
Description: d.Get("description").(string), + Protocol: d.Get("protocol").(string), + Action: d.Get("action").(string), + IPVersion: d.Get("ip_version").(int), + SourceIPAddress: d.Get("source_ip_address").(string), + DestinationIPAddress: d.Get("destination_ip_address").(string), + SourcePort: d.Get("source_port").(string), + DestinationPort: d.Get("destination_port").(string), + Enabled: &enabled, + TenantID: d.Get("tenant_id").(string), + } + + log.Printf("[DEBUG] Create firewall rule: %#v", ruleConfiguration) + + rule, err := rules.Create(networkingClient, ruleConfiguration).Extract() + + if err != nil { + return err + } + + log.Printf("[DEBUG] Firewall rule with id %s : %#v", rule.ID, rule) + + d.SetId(rule.ID) + + return resourceFWRuleV1Read(d, meta) +} + +func resourceFWRuleV1Read(d *schema.ResourceData, meta interface{}) error { + log.Printf("[DEBUG] Retrieve information about firewall rule: %s", d.Id()) + + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + rule, err := rules.Get(networkingClient, d.Id()).Extract() + + if err != nil { + return CheckDeleted(d, err, "firewall rule") + } + + d.Set("protocol", rule.Protocol) + d.Set("action", rule.Action) + + d.Set("name", rule.Name) + d.Set("description", rule.Description) + d.Set("ip_version", rule.IPVersion) + d.Set("source_ip_address", rule.SourceIPAddress) + d.Set("destination_ip_address", rule.DestinationIPAddress) + d.Set("source_port", rule.SourcePort) + d.Set("destination_port", rule.DestinationPort) + d.Set("enabled", rule.Enabled) + + return nil +} + +func resourceFWRuleV1Update(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + opts := 
rules.UpdateOpts{} + + if d.HasChange("name") { + opts.Name = d.Get("name").(string) + } + if d.HasChange("description") { + opts.Description = d.Get("description").(string) + } + if d.HasChange("protocol") { + opts.Protocol = d.Get("protocol").(string) + } + if d.HasChange("action") { + opts.Action = d.Get("action").(string) + } + if d.HasChange("ip_version") { + opts.IPVersion = d.Get("ip_version").(int) + } + if d.HasChange("source_ip_address") { + sourceIPAddress := d.Get("source_ip_address").(string) + opts.SourceIPAddress = &sourceIPAddress + } + if d.HasChange("destination_ip_address") { + destinationIPAddress := d.Get("destination_ip_address").(string) + opts.DestinationIPAddress = &destinationIPAddress + } + if d.HasChange("source_port") { + sourcePort := d.Get("source_port").(string) + opts.SourcePort = &sourcePort + } + if d.HasChange("destination_port") { + destinationPort := d.Get("destination_port").(string) + opts.DestinationPort = &destinationPort + } + if d.HasChange("enabled") { + enabled := d.Get("enabled").(bool) + opts.Enabled = &enabled + } + + log.Printf("[DEBUG] Updating firewall rules: %#v", opts) + + err = rules.Update(networkingClient, d.Id(), opts).Err + if err != nil { + return err + } + + return resourceFWRuleV1Read(d, meta) +} + +func resourceFWRuleV1Delete(d *schema.ResourceData, meta interface{}) error { + log.Printf("[DEBUG] Destroy firewall rule: %s", d.Id()) + + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + rule, err := rules.Get(networkingClient, d.Id()).Extract() + if err != nil { + return err + } + + if rule.PolicyID != "" { + err := policies.RemoveRule(networkingClient, rule.PolicyID, rule.ID) + if err != nil { + return err + } + } + + return rules.Delete(networkingClient, d.Id()).Err +} diff --git a/builtin/providers/openstack/resource_openstack_fw_rule_v1_test.go 
b/builtin/providers/openstack/resource_openstack_fw_rule_v1_test.go new file mode 100644 index 000000000000..ba96bb8b19e0 --- /dev/null +++ b/builtin/providers/openstack/resource_openstack_fw_rule_v1_test.go @@ -0,0 +1,185 @@ +package openstack + +import ( + "fmt" + "reflect" + "testing" + "time" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + "github.com/rackspace/gophercloud" + "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/fwaas/rules" +) + +func TestAccFWRuleV1(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckFWRuleV1Destroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testFirewallRuleMinimalConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckFWRuleV1Exists( + "openstack_fw_rule_v1.accept_test_minimal", + &rules.Rule{ + Protocol: "udp", + Action: "deny", + IPVersion: 4, + Enabled: true, + }), + ), + }, + resource.TestStep{ + Config: testFirewallRuleConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckFWRuleV1Exists( + "openstack_fw_rule_v1.accept_test", + &rules.Rule{ + Name: "accept_test", + Protocol: "udp", + Action: "deny", + Description: "Terraform accept test", + IPVersion: 4, + SourceIPAddress: "1.2.3.4", + DestinationIPAddress: "4.3.2.0/24", + SourcePort: "444", + DestinationPort: "555", + Enabled: true, + }), + ), + }, + resource.TestStep{ + Config: testFirewallRuleUpdateAllFieldsConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckFWRuleV1Exists( + "openstack_fw_rule_v1.accept_test", + &rules.Rule{ + Name: "accept_test_updated_2", + Protocol: "tcp", + Action: "allow", + Description: "Terraform accept test updated", + IPVersion: 4, + SourceIPAddress: "1.2.3.0/24", + DestinationIPAddress: "4.3.2.8", + SourcePort: "666", + DestinationPort: "777", + Enabled: false, + }), + ), + }, + }, + }) +} + +func testAccCheckFWRuleV1Destroy(s 
*terraform.State) error { + + config := testAccProvider.Meta().(*Config) + networkingClient, err := config.networkingV2Client(OS_REGION_NAME) + if err != nil { + return fmt.Errorf("(testAccCheckOpenstackFirewallRuleDestroy) Error creating OpenStack networking client: %s", err) + } + for _, rs := range s.RootModule().Resources { + if rs.Type != "openstack_firewall_rule" { + continue + } + _, err = rules.Get(networkingClient, rs.Primary.ID).Extract() + if err == nil { + return fmt.Errorf("Firewall rule (%s) still exists.", rs.Primary.ID) + } + httpError, ok := err.(*gophercloud.UnexpectedResponseCodeError) + if !ok || httpError.Actual != 404 { + return httpError + } + } + return nil +} + +func testAccCheckFWRuleV1Exists(n string, expected *rules.Rule) resource.TestCheckFunc { + + return func(s *terraform.State) error { + + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + config := testAccProvider.Meta().(*Config) + networkingClient, err := config.networkingV2Client(OS_REGION_NAME) + if err != nil { + return fmt.Errorf("(testAccCheckFirewallRuleExists) Error creating OpenStack networking client: %s", err) + } + + var found *rules.Rule + for i := 0; i < 5; i++ { + // Firewall rule creation is asynchronous. Retry some times + // if we get a 404 error. Fail on any other error. 
+ found, err = rules.Get(networkingClient, rs.Primary.ID).Extract() + if err != nil { + httpError, ok := err.(*gophercloud.UnexpectedResponseCodeError) + if ok && httpError.Actual == 404 { + time.Sleep(time.Second) + continue + } + } + break + } + + if err != nil { + return err + } + + expected.ID = found.ID + // Erase the tenant id because we don't want to compare + // it as long as it is not present in the expected rule + found.TenantID = "" + + if !reflect.DeepEqual(expected, found) { + return fmt.Errorf("Expected:\n%#v\nFound:\n%#v", expected, found) + } + + return nil + } +} + +const testFirewallRuleMinimalConfig = ` +resource "openstack_fw_rule_v1" "accept_test_minimal" { + protocol = "udp" + action = "deny" +} +` + +const testFirewallRuleConfig = ` +resource "openstack_fw_rule_v1" "accept_test" { + name = "accept_test" + description = "Terraform accept test" + protocol = "udp" + action = "deny" + ip_version = 4 + source_ip_address = "1.2.3.4" + destination_ip_address = "4.3.2.0/24" + source_port = "444" + destination_port = "555" + enabled = true +} +` + +const testFirewallRuleUpdateAllFieldsConfig = ` +resource "openstack_fw_rule_v1" "accept_test" { + name = "accept_test_updated_2" + description = "Terraform accept test updated" + protocol = "tcp" + action = "allow" + ip_version = 4 + source_ip_address = "1.2.3.0/24" + destination_ip_address = "4.3.2.8" + source_port = "666" + destination_port = "777" + enabled = false +} +` diff --git a/builtin/providers/openstack/resource_openstack_lb_monitor_v1.go b/builtin/providers/openstack/resource_openstack_lb_monitor_v1.go new file mode 100644 index 000000000000..35dcc9f608b8 --- /dev/null +++ b/builtin/providers/openstack/resource_openstack_lb_monitor_v1.go @@ -0,0 +1,192 @@ +package openstack + +import ( + "fmt" + "log" + "strconv" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/lbaas/monitors" +) + +func resourceLBMonitorV1() *schema.Resource { + 
return &schema.Resource{ + Create: resourceLBMonitorV1Create, + Read: resourceLBMonitorV1Read, + Update: resourceLBMonitorV1Update, + Delete: resourceLBMonitorV1Delete, + + Schema: map[string]*schema.Schema{ + "region": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + DefaultFunc: envDefaultFuncAllowMissing("OS_REGION_NAME"), + }, + "tenant_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "type": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "delay": &schema.Schema{ + Type: schema.TypeInt, + Required: true, + ForceNew: false, + }, + "timeout": &schema.Schema{ + Type: schema.TypeInt, + Required: true, + ForceNew: false, + }, + "max_retries": &schema.Schema{ + Type: schema.TypeInt, + Required: true, + ForceNew: false, + }, + "url_path": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: false, + }, + "http_method": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: false, + }, + "expected_codes": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: false, + }, + "admin_state_up": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: false, + }, + }, + } +} + +func resourceLBMonitorV1Create(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + createOpts := monitors.CreateOpts{ + TenantID: d.Get("tenant_id").(string), + Type: d.Get("type").(string), + Delay: d.Get("delay").(int), + Timeout: d.Get("timeout").(int), + MaxRetries: d.Get("max_retries").(int), + URLPath: d.Get("url_path").(string), + ExpectedCodes: d.Get("expected_codes").(string), + HTTPMethod: d.Get("http_method").(string), + } + + asuRaw := d.Get("admin_state_up").(string) + if asuRaw != "" { + asu, err := 
strconv.ParseBool(asuRaw) + if err != nil { + return fmt.Errorf("admin_state_up, if provided, must be either 'true' or 'false'") + } + createOpts.AdminStateUp = &asu + } + + log.Printf("[DEBUG] Create Options: %#v", createOpts) + m, err := monitors.Create(networkingClient, createOpts).Extract() + if err != nil { + return fmt.Errorf("Error creating OpenStack LB Monitor: %s", err) + } + log.Printf("[INFO] LB Monitor ID: %s", m.ID) + + d.SetId(m.ID) + + return resourceLBMonitorV1Read(d, meta) +} + +func resourceLBMonitorV1Read(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + m, err := monitors.Get(networkingClient, d.Id()).Extract() + if err != nil { + return CheckDeleted(d, err, "LB monitor") + } + + log.Printf("[DEBUG] Retrieved OpenStack LB Monitor %s: %+v", d.Id(), m) + + d.Set("type", m.Type) + d.Set("delay", m.Delay) + d.Set("timeout", m.Timeout) + d.Set("max_retries", m.MaxRetries) + d.Set("tenant_id", m.TenantID) + d.Set("url_path", m.URLPath) + d.Set("http_method", m.HTTPMethod) + d.Set("expected_codes", m.ExpectedCodes) + d.Set("admin_state_up", strconv.FormatBool(m.AdminStateUp)) + + return nil +} + +func resourceLBMonitorV1Update(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + updateOpts := monitors.UpdateOpts{ + Delay: d.Get("delay").(int), + Timeout: d.Get("timeout").(int), + MaxRetries: d.Get("max_retries").(int), + URLPath: d.Get("url_path").(string), + HTTPMethod: d.Get("http_method").(string), + ExpectedCodes: d.Get("expected_codes").(string), + } + + if d.HasChange("admin_state_up") { + asuRaw := d.Get("admin_state_up").(string) + if asuRaw 
!= "" { + asu, err := strconv.ParseBool(asuRaw) + if err != nil { + return fmt.Errorf("admin_state_up, if provided, must be either 'true' or 'false'") + } + updateOpts.AdminStateUp = &asu + } + } + + log.Printf("[DEBUG] Updating OpenStack LB Monitor %s with options: %+v", d.Id(), updateOpts) + + _, err = monitors.Update(networkingClient, d.Id(), updateOpts).Extract() + if err != nil { + return fmt.Errorf("Error updating OpenStack LB Monitor: %s", err) + } + + return resourceLBMonitorV1Read(d, meta) +} + +func resourceLBMonitorV1Delete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + err = monitors.Delete(networkingClient, d.Id()).ExtractErr() + if err != nil { + return fmt.Errorf("Error deleting OpenStack LB Monitor: %s", err) + } + + d.SetId("") + return nil +} diff --git a/builtin/providers/openstack/resource_openstack_lb_monitor_v1_test.go b/builtin/providers/openstack/resource_openstack_lb_monitor_v1_test.go new file mode 100644 index 000000000000..5aaf61d2c698 --- /dev/null +++ b/builtin/providers/openstack/resource_openstack_lb_monitor_v1_test.go @@ -0,0 +1,110 @@ +package openstack + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + + "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/lbaas/monitors" +) + +func TestAccLBV1Monitor_basic(t *testing.T) { + var monitor monitors.Monitor + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckLBV1MonitorDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccLBV1Monitor_basic, + Check: resource.ComposeTestCheckFunc( + testAccCheckLBV1MonitorExists(t, "openstack_lb_monitor_v1.monitor_1", &monitor), + ), + }, + 
resource.TestStep{ + Config: testAccLBV1Monitor_update, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("openstack_lb_monitor_v1.monitor_1", "delay", "20"), + ), + }, + }, + }) +} + +func testAccCheckLBV1MonitorDestroy(s *terraform.State) error { + config := testAccProvider.Meta().(*Config) + networkingClient, err := config.networkingV2Client(OS_REGION_NAME) + if err != nil { + return fmt.Errorf("(testAccCheckLBV1MonitorDestroy) Error creating OpenStack networking client: %s", err) + } + + for _, rs := range s.RootModule().Resources { + if rs.Type != "openstack_lb_monitor_v1" { + continue + } + + _, err := monitors.Get(networkingClient, rs.Primary.ID).Extract() + if err == nil { + return fmt.Errorf("LB monitor still exists") + } + } + + return nil +} + +func testAccCheckLBV1MonitorExists(t *testing.T, n string, monitor *monitors.Monitor) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + config := testAccProvider.Meta().(*Config) + networkingClient, err := config.networkingV2Client(OS_REGION_NAME) + if err != nil { + return fmt.Errorf("(testAccCheckLBV1MonitorExists) Error creating OpenStack networking client: %s", err) + } + + found, err := monitors.Get(networkingClient, rs.Primary.ID).Extract() + if err != nil { + return err + } + + if found.ID != rs.Primary.ID { + return fmt.Errorf("Monitor not found") + } + + *monitor = *found + + return nil + } +} + +var testAccLBV1Monitor_basic = fmt.Sprintf(` + resource "openstack_lb_monitor_v1" "monitor_1" { + region = "%s" + type = "PING" + delay = 30 + timeout = 5 + max_retries = 3 + admin_state_up = "true" + }`, + OS_REGION_NAME) + +var testAccLBV1Monitor_update = fmt.Sprintf(` + resource "openstack_lb_monitor_v1" "monitor_1" { + region = "%s" + type = "PING" + delay = 20 + timeout = 5 + max_retries = 3 + 
admin_state_up = "true" + }`, + OS_REGION_NAME) diff --git a/builtin/providers/openstack/resource_openstack_lb_pool_v1.go b/builtin/providers/openstack/resource_openstack_lb_pool_v1.go new file mode 100644 index 000000000000..a41747a1f0d0 --- /dev/null +++ b/builtin/providers/openstack/resource_openstack_lb_pool_v1.go @@ -0,0 +1,327 @@ +package openstack + +import ( + "bytes" + "fmt" + "log" + + "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/schema" + "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/lbaas/members" + "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/lbaas/pools" + "github.com/rackspace/gophercloud/pagination" +) + +func resourceLBPoolV1() *schema.Resource { + return &schema.Resource{ + Create: resourceLBPoolV1Create, + Read: resourceLBPoolV1Read, + Update: resourceLBPoolV1Update, + Delete: resourceLBPoolV1Delete, + + Schema: map[string]*schema.Schema{ + "region": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + DefaultFunc: envDefaultFuncAllowMissing("OS_REGION_NAME"), + }, + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: false, + }, + "protocol": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "subnet_id": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "lb_method": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: false, + }, + "tenant_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "member": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "region": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + DefaultFunc: envDefaultFuncAllowMissing("OS_REGION_NAME"), + }, + "tenant_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + 
}, + "address": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "port": &schema.Schema{ + Type: schema.TypeInt, + Required: true, + ForceNew: true, + }, + "admin_state_up": &schema.Schema{ + Type: schema.TypeBool, + Required: true, + ForceNew: false, + }, + }, + }, + Set: resourceLBMemberV1Hash, + }, + "monitor_ids": &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + ForceNew: false, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: func(v interface{}) int { + return hashcode.String(v.(string)) + }, + }, + }, + } +} + +func resourceLBPoolV1Create(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + createOpts := pools.CreateOpts{ + Name: d.Get("name").(string), + Protocol: d.Get("protocol").(string), + SubnetID: d.Get("subnet_id").(string), + LBMethod: d.Get("lb_method").(string), + TenantID: d.Get("tenant_id").(string), + } + + log.Printf("[DEBUG] Create Options: %#v", createOpts) + p, err := pools.Create(networkingClient, createOpts).Extract() + if err != nil { + return fmt.Errorf("Error creating OpenStack LB pool: %s", err) + } + log.Printf("[INFO] LB Pool ID: %s", p.ID) + + d.SetId(p.ID) + + if mIDs := resourcePoolMonitorIDsV1(d); mIDs != nil { + for _, mID := range mIDs { + _, err := pools.AssociateMonitor(networkingClient, p.ID, mID).Extract() + if err != nil { + return fmt.Errorf("Error associating monitor (%s) with OpenStack LB pool (%s): %s", mID, p.ID, err) + } + } + } + + if memberOpts := resourcePoolMembersV1(d); memberOpts != nil { + for _, memberOpt := range memberOpts { + _, err := members.Create(networkingClient, memberOpt).Extract() + if err != nil { + return fmt.Errorf("Error creating OpenStack LB member: %s", err) + } + } + } + + return resourceLBPoolV1Read(d, meta) +} + +func 
resourceLBPoolV1Read(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + p, err := pools.Get(networkingClient, d.Id()).Extract() + if err != nil { + return CheckDeleted(d, err, "LB pool") + } + + log.Printf("[DEBUG] Retrieved OpenStack LB Pool %s: %+v", d.Id(), p) + + d.Set("name", p.Name) + d.Set("protocol", p.Protocol) + d.Set("subnet_id", p.SubnetID) + d.Set("lb_method", p.LBMethod) + d.Set("tenant_id", p.TenantID) + d.Set("monitor_ids", p.MonitorIDs) + d.Set("member_ids", p.MemberIDs) + + return nil +} + +func resourceLBPoolV1Update(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + var updateOpts pools.UpdateOpts + if d.HasChange("name") { + updateOpts.Name = d.Get("name").(string) + } + if d.HasChange("lb_method") { + updateOpts.LBMethod = d.Get("lb_method").(string) + } + + log.Printf("[DEBUG] Updating OpenStack LB Pool %s with options: %+v", d.Id(), updateOpts) + + _, err = pools.Update(networkingClient, d.Id(), updateOpts).Extract() + if err != nil { + return fmt.Errorf("Error updating OpenStack LB Pool: %s", err) + } + + if d.HasChange("monitor_ids") { + oldMIDsRaw, newMIDsRaw := d.GetChange("monitor_ids") + oldMIDsSet, newMIDsSet := oldMIDsRaw.(*schema.Set), newMIDsRaw.(*schema.Set) + monitorsToAdd := newMIDsSet.Difference(oldMIDsSet) + monitorsToRemove := oldMIDsSet.Difference(newMIDsSet) + + log.Printf("[DEBUG] Monitors to add: %v", monitorsToAdd) + + log.Printf("[DEBUG] Monitors to remove: %v", monitorsToRemove) + + for _, m := range monitorsToAdd.List() { + _, err := pools.AssociateMonitor(networkingClient, d.Id(), m.(string)).Extract() + if 
err != nil { + return fmt.Errorf("Error associating monitor (%s) with OpenStack LB pool (%s): %s", m.(string), d.Id(), err) + } + log.Printf("[DEBUG] Associated monitor (%s) with pool (%s)", m.(string), d.Id()) + } + + for _, m := range monitorsToRemove.List() { + _, err := pools.DisassociateMonitor(networkingClient, d.Id(), m.(string)).Extract() + if err != nil { + return fmt.Errorf("Error disassociating monitor (%s) from OpenStack LB pool (%s): %s", m.(string), d.Id(), err) + } + log.Printf("[DEBUG] Disassociated monitor (%s) from pool (%s)", m.(string), d.Id()) + } + } + + if d.HasChange("member") { + oldMembersRaw, newMembersRaw := d.GetChange("member") + oldMembersSet, newMembersSet := oldMembersRaw.(*schema.Set), newMembersRaw.(*schema.Set) + membersToAdd := newMembersSet.Difference(oldMembersSet) + membersToRemove := oldMembersSet.Difference(newMembersSet) + + log.Printf("[DEBUG] Members to add: %v", membersToAdd) + + log.Printf("[DEBUG] Members to remove: %v", membersToRemove) + + for _, m := range membersToRemove.List() { + oldMember := resourcePoolMemberV1(d, m) + listOpts := members.ListOpts{ + PoolID: d.Id(), + Address: oldMember.Address, + ProtocolPort: oldMember.ProtocolPort, + } + err = members.List(networkingClient, listOpts).EachPage(func(page pagination.Page) (bool, error) { + extractedMembers, err := members.ExtractMembers(page) + if err != nil { + return false, err + } + for _, member := range extractedMembers { + err := members.Delete(networkingClient, member.ID).ExtractErr() + if err != nil { + return false, fmt.Errorf("Error deleting member (%s) from OpenStack LB pool (%s): %s", member.ID, d.Id(), err) + } + log.Printf("[DEBUG] Deleted member (%s) from pool (%s)", member.ID, d.Id()) + } + return true, nil + }) + if err != nil { + return err + } + } + + for _, m := range membersToAdd.List() { + createOpts := resourcePoolMemberV1(d, m) + newMember, err := members.Create(networkingClient, createOpts).Extract() + if err != nil { + return fmt.Errorf("Error creating LB member: %s", 
err) + } + log.Printf("[DEBUG] Created member (%s) in OpenStack LB pool (%s)", newMember.ID, d.Id()) + } + } + + return resourceLBPoolV1Read(d, meta) +} + +func resourceLBPoolV1Delete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + err = pools.Delete(networkingClient, d.Id()).ExtractErr() + if err != nil { + return fmt.Errorf("Error deleting OpenStack LB Pool: %s", err) + } + + d.SetId("") + return nil +} + +func resourcePoolMonitorIDsV1(d *schema.ResourceData) []string { + mIDsRaw := d.Get("monitor_ids").(*schema.Set) + mIDs := make([]string, mIDsRaw.Len()) + for i, raw := range mIDsRaw.List() { + mIDs[i] = raw.(string) + } + return mIDs +} + +func resourcePoolMembersV1(d *schema.ResourceData) []members.CreateOpts { + memberOptsRaw := (d.Get("member")).(*schema.Set) + memberOpts := make([]members.CreateOpts, memberOptsRaw.Len()) + for i, raw := range memberOptsRaw.List() { + rawMap := raw.(map[string]interface{}) + memberOpts[i] = members.CreateOpts{ + TenantID: rawMap["tenant_id"].(string), + Address: rawMap["address"].(string), + ProtocolPort: rawMap["port"].(int), + PoolID: d.Id(), + } + } + return memberOpts +} + +func resourcePoolMemberV1(d *schema.ResourceData, raw interface{}) members.CreateOpts { + rawMap := raw.(map[string]interface{}) + return members.CreateOpts{ + TenantID: rawMap["tenant_id"].(string), + Address: rawMap["address"].(string), + ProtocolPort: rawMap["port"].(int), + PoolID: d.Id(), + } +} + +func resourceLBMemberV1Hash(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + buf.WriteString(fmt.Sprintf("%s-", m["region"].(string))) + buf.WriteString(fmt.Sprintf("%s-", m["tenant_id"].(string))) + buf.WriteString(fmt.Sprintf("%s-", m["address"].(string))) + buf.WriteString(fmt.Sprintf("%d-", m["port"].(int))) + + 
return hashcode.String(buf.String()) +} diff --git a/builtin/providers/openstack/resource_openstack_lb_pool_v1_test.go b/builtin/providers/openstack/resource_openstack_lb_pool_v1_test.go new file mode 100644 index 000000000000..1889c2384553 --- /dev/null +++ b/builtin/providers/openstack/resource_openstack_lb_pool_v1_test.go @@ -0,0 +1,134 @@ +package openstack + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + + "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/lbaas/pools" +) + +func TestAccLBV1Pool_basic(t *testing.T) { + var pool pools.Pool + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckLBV1PoolDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccLBV1Pool_basic, + Check: resource.ComposeTestCheckFunc( + testAccCheckLBV1PoolExists(t, "openstack_lb_pool_v1.pool_1", &pool), + ), + }, + resource.TestStep{ + Config: testAccLBV1Pool_update, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("openstack_lb_pool_v1.pool_1", "name", "tf_test_lb_pool_updated"), + ), + }, + }, + }) +} + +func testAccCheckLBV1PoolDestroy(s *terraform.State) error { + config := testAccProvider.Meta().(*Config) + networkingClient, err := config.networkingV2Client(OS_REGION_NAME) + if err != nil { + return fmt.Errorf("(testAccCheckLBV1PoolDestroy) Error creating OpenStack networking client: %s", err) + } + + for _, rs := range s.RootModule().Resources { + if rs.Type != "openstack_lb_pool_v1" { + continue + } + + _, err := pools.Get(networkingClient, rs.Primary.ID).Extract() + if err == nil { + return fmt.Errorf("LB Pool still exists") + } + } + + return nil +} + +func testAccCheckLBV1PoolExists(t *testing.T, n string, pool *pools.Pool) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + 
return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + config := testAccProvider.Meta().(*Config) + networkingClient, err := config.networkingV2Client(OS_REGION_NAME) + if err != nil { + return fmt.Errorf("(testAccCheckLBV1PoolExists) Error creating OpenStack networking client: %s", err) + } + + found, err := pools.Get(networkingClient, rs.Primary.ID).Extract() + if err != nil { + return err + } + + if found.ID != rs.Primary.ID { + return fmt.Errorf("Pool not found") + } + + *pool = *found + + return nil + } +} + +var testAccLBV1Pool_basic = fmt.Sprintf(` + resource "openstack_networking_network_v2" "network_1" { + region = "%s" + name = "network_1" + admin_state_up = "true" + } + + resource "openstack_networking_subnet_v2" "subnet_1" { + region = "%s" + network_id = "${openstack_networking_network_v2.network_1.id}" + cidr = "192.168.199.0/24" + ip_version = 4 + } + + resource "openstack_lb_pool_v1" "pool_1" { + region = "%s" + name = "tf_test_lb_pool" + protocol = "HTTP" + subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}" + lb_method = "ROUND_ROBIN" + }`, + OS_REGION_NAME, OS_REGION_NAME, OS_REGION_NAME) + +var testAccLBV1Pool_update = fmt.Sprintf(` + resource "openstack_networking_network_v2" "network_1" { + region = "%s" + name = "network_1" + admin_state_up = "true" + } + + resource "openstack_networking_subnet_v2" "subnet_1" { + region = "%s" + network_id = "${openstack_networking_network_v2.network_1.id}" + cidr = "192.168.199.0/24" + ip_version = 4 + } + + resource "openstack_lb_pool_v1" "pool_1" { + region = "%s" + name = "tf_test_lb_pool_updated" + protocol = "HTTP" + subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}" + lb_method = "ROUND_ROBIN" + }`, + OS_REGION_NAME, OS_REGION_NAME, OS_REGION_NAME) diff --git a/builtin/providers/openstack/resource_openstack_lb_vip_v1.go b/builtin/providers/openstack/resource_openstack_lb_vip_v1.go new file mode 100644 index 
000000000000..e2e2a26e479e --- /dev/null +++ b/builtin/providers/openstack/resource_openstack_lb_vip_v1.go @@ -0,0 +1,258 @@ +package openstack + +import ( + "fmt" + "log" + "strconv" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/rackspace/gophercloud" + "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/lbaas/vips" +) + +func resourceLBVipV1() *schema.Resource { + return &schema.Resource{ + Create: resourceLBVipV1Create, + Read: resourceLBVipV1Read, + Update: resourceLBVipV1Update, + Delete: resourceLBVipV1Delete, + + Schema: map[string]*schema.Schema{ + "region": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + DefaultFunc: envDefaultFuncAllowMissing("OS_REGION_NAME"), + }, + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: false, + }, + "subnet_id": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "protocol": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "port": &schema.Schema{ + Type: schema.TypeInt, + Required: true, + ForceNew: true, + }, + "pool_id": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: false, + }, + "tenant_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "address": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "description": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: false, + }, + "persistence": &schema.Schema{ + Type: schema.TypeMap, + Optional: true, + ForceNew: false, + }, + "conn_limit": &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + ForceNew: false, + }, + "admin_state_up": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: false, + }, + }, + } +} + +func resourceLBVipV1Create(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkingClient, err := 
config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + createOpts := vips.CreateOpts{ + Name: d.Get("name").(string), + SubnetID: d.Get("subnet_id").(string), + Protocol: d.Get("protocol").(string), + ProtocolPort: d.Get("port").(int), + PoolID: d.Get("pool_id").(string), + TenantID: d.Get("tenant_id").(string), + Address: d.Get("address").(string), + Description: d.Get("description").(string), + Persistence: resourceVipPersistenceV1(d), + ConnLimit: gophercloud.MaybeInt(d.Get("conn_limit").(int)), + } + + asuRaw := d.Get("admin_state_up").(string) + if asuRaw != "" { + asu, err := strconv.ParseBool(asuRaw) + if err != nil { + return fmt.Errorf("admin_state_up, if provided, must be either 'true' or 'false'") + } + createOpts.AdminStateUp = &asu + } + + log.Printf("[DEBUG] Create Options: %#v", createOpts) + p, err := vips.Create(networkingClient, createOpts).Extract() + if err != nil { + return fmt.Errorf("Error creating OpenStack LB VIP: %s", err) + } + log.Printf("[INFO] LB VIP ID: %s", p.ID) + + d.SetId(p.ID) + + return resourceLBVipV1Read(d, meta) +} + +func resourceLBVipV1Read(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + p, err := vips.Get(networkingClient, d.Id()).Extract() + if err != nil { + return CheckDeleted(d, err, "LB VIP") + } + + log.Printf("[DEBUG] Retrieved OpenStack LB VIP %s: %+v", d.Id(), p) + + d.Set("name", p.Name) + d.Set("subnet_id", p.SubnetID) + d.Set("protocol", p.Protocol) + d.Set("port", p.ProtocolPort) + d.Set("pool_id", p.PoolID) + + if t, exists := d.GetOk("tenant_id"); exists && t != "" { + d.Set("tenant_id", p.TenantID) + } else { + d.Set("tenant_id", "") + } + + if t, exists := d.GetOk("address"); exists && t != "" { + 
d.Set("address", p.Address) + } else { + d.Set("address", "") + } + + if t, exists := d.GetOk("description"); exists && t != "" { + d.Set("description", p.Description) + } else { + d.Set("description", "") + } + + if t, exists := d.GetOk("persistence"); exists && t != "" { + d.Set("persistence", p.Persistence) + } + + if t, exists := d.GetOk("conn_limit"); exists && t != "" { + d.Set("conn_limit", p.ConnLimit) + } else { + d.Set("conn_limit", "") + } + + if t, exists := d.GetOk("admin_state_up"); exists && t != "" { + d.Set("admin_state_up", strconv.FormatBool(p.AdminStateUp)) + } else { + d.Set("admin_state_up", "") + } + + return nil +} + +func resourceLBVipV1Update(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + var updateOpts vips.UpdateOpts + if d.HasChange("name") { + updateOpts.Name = d.Get("name").(string) + } + if d.HasChange("pool_id") { + updateOpts.PoolID = d.Get("pool_id").(string) + } + if d.HasChange("description") { + updateOpts.Description = d.Get("description").(string) + } + if d.HasChange("persistence") { + updateOpts.Persistence = resourceVipPersistenceV1(d) + } + if d.HasChange("conn_limit") { + updateOpts.ConnLimit = gophercloud.MaybeInt(d.Get("conn_limit").(int)) + } + if d.HasChange("admin_state_up") { + asuRaw := d.Get("admin_state_up").(string) + if asuRaw != "" { + asu, err := strconv.ParseBool(asuRaw) + if err != nil { + return fmt.Errorf("admin_state_up, if provided, must be either 'true' or 'false'") + } + updateOpts.AdminStateUp = &asu + } + } + + log.Printf("[DEBUG] Updating OpenStack LB VIP %s with options: %+v", d.Id(), updateOpts) + + _, err = vips.Update(networkingClient, d.Id(), updateOpts).Extract() + if err != nil { + return fmt.Errorf("Error updating OpenStack LB VIP: %s", err) + } + + return resourceLBVipV1Read(d, 
meta) +} + +func resourceLBVipV1Delete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + err = vips.Delete(networkingClient, d.Id()).ExtractErr() + if err != nil { + return fmt.Errorf("Error deleting OpenStack LB VIP: %s", err) + } + + d.SetId("") + return nil +} + +func resourceVipPersistenceV1(d *schema.ResourceData) *vips.SessionPersistence { + rawP := d.Get("persistence").(interface{}) + rawMap := rawP.(map[string]interface{}) + if len(rawMap) != 0 { + p := vips.SessionPersistence{} + if t, ok := rawMap["type"]; ok { + p.Type = t.(string) + } + if c, ok := rawMap["cookie_name"]; ok { + p.CookieName = c.(string) + } + return &p + } + return nil +} diff --git a/builtin/providers/openstack/resource_openstack_lb_vip_v1_test.go b/builtin/providers/openstack/resource_openstack_lb_vip_v1_test.go new file mode 100644 index 000000000000..f30cd9d56d42 --- /dev/null +++ b/builtin/providers/openstack/resource_openstack_lb_vip_v1_test.go @@ -0,0 +1,152 @@ +package openstack + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + + "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/lbaas/vips" +) + +func TestAccLBV1VIP_basic(t *testing.T) { + var vip vips.VirtualIP + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckLBV1VIPDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccLBV1VIP_basic, + Check: resource.ComposeTestCheckFunc( + testAccCheckLBV1VIPExists(t, "openstack_lb_vip_v1.vip_1", &vip), + ), + }, + resource.TestStep{ + Config: testAccLBV1VIP_update, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("openstack_lb_vip_v1.vip_1", "name", 
"tf_test_lb_vip_updated"), + ), + }, + }, + }) +} + +func testAccCheckLBV1VIPDestroy(s *terraform.State) error { + config := testAccProvider.Meta().(*Config) + networkingClient, err := config.networkingV2Client(OS_REGION_NAME) + if err != nil { + return fmt.Errorf("(testAccCheckLBV1VIPDestroy) Error creating OpenStack networking client: %s", err) + } + + for _, rs := range s.RootModule().Resources { + if rs.Type != "openstack_lb_vip_v1" { + continue + } + + _, err := vips.Get(networkingClient, rs.Primary.ID).Extract() + if err == nil { + return fmt.Errorf("LB VIP still exists") + } + } + + return nil +} + +func testAccCheckLBV1VIPExists(t *testing.T, n string, vip *vips.VirtualIP) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + config := testAccProvider.Meta().(*Config) + networkingClient, err := config.networkingV2Client(OS_REGION_NAME) + if err != nil { + return fmt.Errorf("(testAccCheckLBV1VIPExists) Error creating OpenStack networking client: %s", err) + } + + found, err := vips.Get(networkingClient, rs.Primary.ID).Extract() + if err != nil { + return err + } + + if found.ID != rs.Primary.ID { + return fmt.Errorf("VIP not found") + } + + *vip = *found + + return nil + } +} + +var testAccLBV1VIP_basic = fmt.Sprintf(` + resource "openstack_networking_network_v2" "network_1" { + region = "%s" + name = "network_1" + admin_state_up = "true" + } + + resource "openstack_networking_subnet_v2" "subnet_1" { + region = "%s" + network_id = "${openstack_networking_network_v2.network_1.id}" + cidr = "192.168.199.0/24" + ip_version = 4 + } + + resource "openstack_lb_pool_v1" "pool_1" { + region = "%s" + name = "tf_test_lb_pool" + protocol = "HTTP" + subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}" + lb_method = "ROUND_ROBIN" + } + + resource "openstack_lb_vip_v1" "vip_1" { + region 
= "%s" + name = "tf_test_lb_vip" + subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}" + protocol = "HTTP" + port = 80 + pool_id = "${openstack_lb_pool_v1.pool_1.id}" + }`, + OS_REGION_NAME, OS_REGION_NAME, OS_REGION_NAME, OS_REGION_NAME) + +var testAccLBV1VIP_update = fmt.Sprintf(` + resource "openstack_networking_network_v2" "network_1" { + region = "%s" + name = "network_1" + admin_state_up = "true" + } + + resource "openstack_networking_subnet_v2" "subnet_1" { + region = "%s" + network_id = "${openstack_networking_network_v2.network_1.id}" + cidr = "192.168.199.0/24" + ip_version = 4 + } + + resource "openstack_lb_pool_v1" "pool_1" { + region = "%s" + name = "tf_test_lb_pool" + protocol = "HTTP" + subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}" + lb_method = "ROUND_ROBIN" + } + + resource "openstack_lb_vip_v1" "vip_1" { + region = "%s" + name = "tf_test_lb_vip_updated" + subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}" + protocol = "HTTP" + port = 80 + pool_id = "${openstack_lb_pool_v1.pool_1.id}" + }`, + OS_REGION_NAME, OS_REGION_NAME, OS_REGION_NAME, OS_REGION_NAME)
map[string]*schema.Schema{ + "region": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + DefaultFunc: envDefaultFuncAllowMissing("OS_REGION_NAME"), + }, + "address": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + }, + "pool": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + DefaultFunc: envDefaultFunc("OS_POOL_NAME"), + }, + }, + } +} + +func resourceNetworkFloatingIPV2Create(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack network client: %s", err) + } + + poolID, err := getNetworkID(d, meta, d.Get("pool").(string)) + if err != nil { + return fmt.Errorf("Error retrieving floating IP pool name: %s", err) + } + if len(poolID) == 0 { + return fmt.Errorf("No network found with name: %s", d.Get("pool").(string)) + } + createOpts := floatingips.CreateOpts{ + FloatingNetworkID: poolID, + } + log.Printf("[DEBUG] Create Options: %#v", createOpts) + floatingIP, err := floatingips.Create(networkClient, createOpts).Extract() + if err != nil { + return fmt.Errorf("Error allocating floating IP: %s", err) + } + + d.SetId(floatingIP.ID) + + return resourceNetworkFloatingIPV2Read(d, meta) +} + +func resourceNetworkFloatingIPV2Read(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack network client: %s", err) + } + + floatingIP, err := floatingips.Get(networkClient, d.Id()).Extract() + if err != nil { + return CheckDeleted(d, err, "floating IP") + } + + d.Set("address", floatingIP.FloatingIP) + poolName, err := getNetworkName(d, meta, floatingIP.FloatingNetworkID) + if err != nil { + return fmt.Errorf("Error retrieving floating IP pool name: %s", err) + } + d.Set("pool", 
poolName) + + return nil +} + +func resourceNetworkFloatingIPV2Delete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack network client: %s", err) + } + + err = floatingips.Delete(networkClient, d.Id()).ExtractErr() + if err != nil { + return fmt.Errorf("Error deleting floating IP: %s", err) + } + d.SetId("") + return nil +} + +func getNetworkID(d *schema.ResourceData, meta interface{}, networkName string) (string, error) { + config := meta.(*Config) + networkClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return "", fmt.Errorf("Error creating OpenStack network client: %s", err) + } + + opts := networks.ListOpts{Name: networkName} + pager := networks.List(networkClient, opts) + networkID := "" + + err = pager.EachPage(func(page pagination.Page) (bool, error) { + networkList, err := networks.ExtractNetworks(page) + if err != nil { + return false, err + } + + for _, n := range networkList { + if n.Name == networkName { + networkID = n.ID + return false, nil + } + } + + return true, nil + }) + + return networkID, err +} + +func getNetworkName(d *schema.ResourceData, meta interface{}, networkID string) (string, error) { + config := meta.(*Config) + networkClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return "", fmt.Errorf("Error creating OpenStack network client: %s", err) + } + + opts := networks.ListOpts{ID: networkID} + pager := networks.List(networkClient, opts) + networkName := "" + + err = pager.EachPage(func(page pagination.Page) (bool, error) { + networkList, err := networks.ExtractNetworks(page) + if err != nil { + return false, err + } + + for _, n := range networkList { + if n.ID == networkID { + networkName = n.Name + return false, nil + } + } + + return true, nil + }) + + return networkName, err +} diff --git 
a/builtin/providers/openstack/resource_openstack_networking_floatingip_v2_test.go b/builtin/providers/openstack/resource_openstack_networking_floatingip_v2_test.go new file mode 100644 index 000000000000..a989f2774dbe --- /dev/null +++ b/builtin/providers/openstack/resource_openstack_networking_floatingip_v2_test.go @@ -0,0 +1,144 @@ +package openstack + +import ( + "fmt" + "os" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + + "github.com/rackspace/gophercloud/openstack/compute/v2/servers" + "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/layer3/floatingips" +) + +func TestAccNetworkingV2FloatingIP_basic(t *testing.T) { + var floatingIP floatingips.FloatingIP + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckNetworkingV2FloatingIPDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccNetworkingV2FloatingIP_basic, + Check: resource.ComposeTestCheckFunc( + testAccCheckNetworkingV2FloatingIPExists(t, "openstack_networking_floatingip_v2.foo", &floatingIP), + ), + }, + }, + }) +} + +func TestAccNetworkingV2FloatingIP_attach(t *testing.T) { + var instance servers.Server + var fip floatingips.FloatingIP + var testAccNetworkV2FloatingIP_attach = fmt.Sprintf(` + resource "openstack_networking_floatingip_v2" "myip" { + } + + resource "openstack_compute_instance_v2" "foo" { + name = "terraform-test" + floating_ip = "${openstack_networking_floatingip_v2.myip.address}" + + network { + uuid = "%s" + } + }`, + os.Getenv("OS_NETWORK_ID")) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckNetworkingV2FloatingIPDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccNetworkV2FloatingIP_attach, + Check: resource.ComposeTestCheckFunc( + 
testAccCheckNetworkingV2FloatingIPExists(t, "openstack_networking_floatingip_v2.myip", &fip), + testAccCheckComputeV2InstanceExists(t, "openstack_compute_instance_v2.foo", &instance), + testAccCheckNetworkingV2InstanceFloatingIPAttach(&instance, &fip), + ), + }, + }, + }) +} + +func testAccCheckNetworkingV2FloatingIPDestroy(s *terraform.State) error { + config := testAccProvider.Meta().(*Config) + networkClient, err := config.networkingV2Client(OS_REGION_NAME) + if err != nil { + return fmt.Errorf("(testAccCheckNetworkingV2FloatingIPDestroy) Error creating OpenStack networking client: %s", err) + } + + for _, rs := range s.RootModule().Resources { + if rs.Type != "openstack_networking_floatingip_v2" { + continue + } + + _, err := floatingips.Get(networkClient, rs.Primary.ID).Extract() + if err == nil { + return fmt.Errorf("FloatingIP still exists") + } + } + + return nil +} + +func testAccCheckNetworkingV2FloatingIPExists(t *testing.T, n string, kp *floatingips.FloatingIP) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + config := testAccProvider.Meta().(*Config) + networkClient, err := config.networkingV2Client(OS_REGION_NAME) + if err != nil { + return fmt.Errorf("(testAccCheckNetworkingV2FloatingIPExists) Error creating OpenStack networking client: %s", err) + } + + found, err := floatingips.Get(networkClient, rs.Primary.ID).Extract() + if err != nil { + return err + } + + if found.ID != rs.Primary.ID { + return fmt.Errorf("FloatingIP not found") + } + + *kp = *found + + return nil + } +} + +func testAccCheckNetworkingV2InstanceFloatingIPAttach( + instance *servers.Server, fip *floatingips.FloatingIP) resource.TestCheckFunc { + + // When Neutron is used, the Instance sometimes does not know its floating IP until some time + // after the attachment happened. 
This can be anywhere from 2-20 seconds. Because of that delay, + // the test usually completes with failure. + // However, the Fixed IP is known on both sides immediately, so that can be used as a bridge + // to ensure the two are now related. + // I think a better option is to introduce some state changing config in the actual resource. + return func(s *terraform.State) error { + for _, networkAddresses := range instance.Addresses { + for _, element := range networkAddresses.([]interface{}) { + address := element.(map[string]interface{}) + if address["OS-EXT-IPS:type"] == "fixed" && address["addr"] == fip.FixedIP { + return nil + } + } + } + return fmt.Errorf("Floating IP %+v was not attached to instance %+v", fip, instance) + } +} + +var testAccNetworkingV2FloatingIP_basic = ` + resource "openstack_networking_floatingip_v2" "foo" { + }` diff --git a/builtin/providers/openstack/resource_openstack_networking_network_v2.go b/builtin/providers/openstack/resource_openstack_networking_network_v2.go new file mode 100644 index 000000000000..0977f3ad467a --- /dev/null +++ b/builtin/providers/openstack/resource_openstack_networking_network_v2.go @@ -0,0 +1,170 @@ +package openstack + +import ( + "fmt" + "log" + "strconv" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/rackspace/gophercloud/openstack/networking/v2/networks" +) + +func resourceNetworkingNetworkV2() *schema.Resource { + return &schema.Resource{ + Create: resourceNetworkingNetworkV2Create, + Read: resourceNetworkingNetworkV2Read, + Update: resourceNetworkingNetworkV2Update, + Delete: resourceNetworkingNetworkV2Delete, + + Schema: map[string]*schema.Schema{ + "region": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + DefaultFunc: envDefaultFuncAllowMissing("OS_REGION_NAME"), + }, + "name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: false, + }, + "admin_state_up": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + 
ForceNew: false, + }, + "shared": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: false, + }, + "tenant_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + }, + } +} + +func resourceNetworkingNetworkV2Create(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + createOpts := networks.CreateOpts{ + Name: d.Get("name").(string), + TenantID: d.Get("tenant_id").(string), + } + + asuRaw := d.Get("admin_state_up").(string) + if asuRaw != "" { + asu, err := strconv.ParseBool(asuRaw) + if err != nil { + return fmt.Errorf("admin_state_up, if provided, must be either 'true' or 'false'") + } + createOpts.AdminStateUp = &asu + } + + sharedRaw := d.Get("shared").(string) + if sharedRaw != "" { + shared, err := strconv.ParseBool(sharedRaw) + if err != nil { + return fmt.Errorf("shared, if provided, must be either 'true' or 'false': %v", err) + } + createOpts.Shared = &shared + } + + log.Printf("[DEBUG] Create Options: %#v", createOpts) + n, err := networks.Create(networkingClient, createOpts).Extract() + if err != nil { + return fmt.Errorf("Error creating OpenStack Neutron network: %s", err) + } + log.Printf("[INFO] Network ID: %s", n.ID) + + d.SetId(n.ID) + + return resourceNetworkingNetworkV2Read(d, meta) +} + +func resourceNetworkingNetworkV2Read(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + n, err := networks.Get(networkingClient, d.Id()).Extract() + if err != nil { + return CheckDeleted(d, err, "network") + } + + log.Printf("[DEBUG] Retrieved Network %s: %+v", d.Id(), n) + + d.Set("name", n.Name) + 
d.Set("admin_state_up", strconv.FormatBool(n.AdminStateUp)) + d.Set("shared", strconv.FormatBool(n.Shared)) + d.Set("tenant_id", n.TenantID) + + return nil +} + +func resourceNetworkingNetworkV2Update(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + var updateOpts networks.UpdateOpts + if d.HasChange("name") { + updateOpts.Name = d.Get("name").(string) + } + if d.HasChange("admin_state_up") { + asuRaw := d.Get("admin_state_up").(string) + if asuRaw != "" { + asu, err := strconv.ParseBool(asuRaw) + if err != nil { + return fmt.Errorf("admin_state_up, if provided, must be either 'true' or 'false'") + } + updateOpts.AdminStateUp = &asu + } + } + if d.HasChange("shared") { + sharedRaw := d.Get("shared").(string) + if sharedRaw != "" { + shared, err := strconv.ParseBool(sharedRaw) + if err != nil { + return fmt.Errorf("shared, if provided, must be either 'true' or 'false': %v", err) + } + updateOpts.Shared = &shared + } + } + + log.Printf("[DEBUG] Updating Network %s with options: %+v", d.Id(), updateOpts) + + _, err = networks.Update(networkingClient, d.Id(), updateOpts).Extract() + if err != nil { + return fmt.Errorf("Error updating OpenStack Neutron Network: %s", err) + } + + return resourceNetworkingNetworkV2Read(d, meta) +} + +func resourceNetworkingNetworkV2Delete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + err = networks.Delete(networkingClient, d.Id()).ExtractErr() + if err != nil { + return fmt.Errorf("Error deleting OpenStack Neutron Network: %s", err) + } + + d.SetId("") + return nil +} diff --git 
a/builtin/providers/openstack/resource_openstack_networking_network_v2_test.go b/builtin/providers/openstack/resource_openstack_networking_network_v2_test.go new file mode 100644 index 000000000000..5bff60532008 --- /dev/null +++ b/builtin/providers/openstack/resource_openstack_networking_network_v2_test.go @@ -0,0 +1,104 @@ +package openstack + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + + "github.com/rackspace/gophercloud/openstack/networking/v2/networks" +) + +func TestAccNetworkingV2Network_basic(t *testing.T) { + var network networks.Network + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckNetworkingV2NetworkDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccNetworkingV2Network_basic, + Check: resource.ComposeTestCheckFunc( + testAccCheckNetworkingV2NetworkExists(t, "openstack_networking_network_v2.foo", &network), + ), + }, + resource.TestStep{ + Config: testAccNetworkingV2Network_update, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("openstack_networking_network_v2.foo", "name", "network_2"), + ), + }, + }, + }) +} + +func testAccCheckNetworkingV2NetworkDestroy(s *terraform.State) error { + config := testAccProvider.Meta().(*Config) + networkingClient, err := config.networkingV2Client(OS_REGION_NAME) + if err != nil { + return fmt.Errorf("(testAccCheckNetworkingV2NetworkDestroy) Error creating OpenStack networking client: %s", err) + } + + for _, rs := range s.RootModule().Resources { + if rs.Type != "openstack_networking_network_v2" { + continue + } + + _, err := networks.Get(networkingClient, rs.Primary.ID).Extract() + if err == nil { + return fmt.Errorf("Network still exists") + } + } + + return nil +} + +func testAccCheckNetworkingV2NetworkExists(t *testing.T, n string, network *networks.Network) resource.TestCheckFunc { + 
return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + config := testAccProvider.Meta().(*Config) + networkingClient, err := config.networkingV2Client(OS_REGION_NAME) + if err != nil { + return fmt.Errorf("(testAccCheckNetworkingV2NetworkExists) Error creating OpenStack networking client: %s", err) + } + + found, err := networks.Get(networkingClient, rs.Primary.ID).Extract() + if err != nil { + return err + } + + if found.ID != rs.Primary.ID { + return fmt.Errorf("Network not found") + } + + *network = *found + + return nil + } +} + +var testAccNetworkingV2Network_basic = fmt.Sprintf(` + resource "openstack_networking_network_v2" "foo" { + region = "%s" + name = "network_1" + admin_state_up = "true" + }`, + OS_REGION_NAME) + +var testAccNetworkingV2Network_update = fmt.Sprintf(` + resource "openstack_networking_network_v2" "foo" { + region = "%s" + name = "network_2" + admin_state_up = "true" + }`, + OS_REGION_NAME) diff --git a/builtin/providers/openstack/resource_openstack_networking_router_interface_v2.go b/builtin/providers/openstack/resource_openstack_networking_router_interface_v2.go new file mode 100644 index 000000000000..1e60c30ef57f --- /dev/null +++ b/builtin/providers/openstack/resource_openstack_networking_router_interface_v2.go @@ -0,0 +1,107 @@ +package openstack + +import ( + "fmt" + "log" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/rackspace/gophercloud" + "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/layer3/routers" + "github.com/rackspace/gophercloud/openstack/networking/v2/ports" +) + +func resourceNetworkingRouterInterfaceV2() *schema.Resource { + return &schema.Resource{ + Create: resourceNetworkingRouterInterfaceV2Create, + Read: resourceNetworkingRouterInterfaceV2Read, + Delete: resourceNetworkingRouterInterfaceV2Delete, + + Schema: 
map[string]*schema.Schema{ + "region": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + DefaultFunc: envDefaultFuncAllowMissing("OS_REGION_NAME"), + }, + "router_id": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "subnet_id": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + }, + } +} + +func resourceNetworkingRouterInterfaceV2Create(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + createOpts := routers.InterfaceOpts{ + SubnetID: d.Get("subnet_id").(string), + } + + log.Printf("[DEBUG] Create Options: %#v", createOpts) + n, err := routers.AddInterface(networkingClient, d.Get("router_id").(string), createOpts).Extract() + if err != nil { + return fmt.Errorf("Error creating OpenStack Neutron router interface: %s", err) + } + log.Printf("[INFO] Router interface Port ID: %s", n.PortID) + + d.SetId(n.PortID) + + return resourceNetworkingRouterInterfaceV2Read(d, meta) +} + +func resourceNetworkingRouterInterfaceV2Read(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + n, err := ports.Get(networkingClient, d.Id()).Extract() + if err != nil { + httpError, ok := err.(*gophercloud.UnexpectedResponseCodeError) + if !ok { + return fmt.Errorf("Error retrieving OpenStack Neutron Router Interface: %s", err) + } + + if httpError.Actual == 404 { + d.SetId("") + return nil + } + return fmt.Errorf("Error retrieving OpenStack Neutron Router Interface: %s", err) + } + + log.Printf("[DEBUG] Retrieved Router Interface %s: %+v", d.Id(), n) + + return nil +} + +func 
resourceNetworkingRouterInterfaceV2Delete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + removeOpts := routers.InterfaceOpts{ + SubnetID: d.Get("subnet_id").(string), + } + + _, err = routers.RemoveInterface(networkingClient, d.Get("router_id").(string), removeOpts).Extract() + if err != nil { + return fmt.Errorf("Error deleting OpenStack Neutron Router Interface: %s", err) + } + + d.SetId("") + return nil +} diff --git a/builtin/providers/openstack/resource_openstack_networking_router_interface_v2_test.go b/builtin/providers/openstack/resource_openstack_networking_router_interface_v2_test.go new file mode 100644 index 000000000000..be3b12c0b5c8 --- /dev/null +++ b/builtin/providers/openstack/resource_openstack_networking_router_interface_v2_test.go @@ -0,0 +1,100 @@ +package openstack + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + + "github.com/rackspace/gophercloud/openstack/networking/v2/ports" +) + +func TestAccNetworkingV2RouterInterface_basic(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckNetworkingV2RouterInterfaceDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccNetworkingV2RouterInterface_basic, + Check: resource.ComposeTestCheckFunc( + testAccCheckNetworkingV2RouterInterfaceExists(t, "openstack_networking_router_interface_v2.int_1"), + ), + }, + }, + }) +} + +func testAccCheckNetworkingV2RouterInterfaceDestroy(s *terraform.State) error { + config := testAccProvider.Meta().(*Config) + networkingClient, err := config.networkingV2Client(OS_REGION_NAME) + if err != nil { + return 
fmt.Errorf("(testAccCheckNetworkingV2RouterInterfaceDestroy) Error creating OpenStack networking client: %s", err) + } + + for _, rs := range s.RootModule().Resources { + if rs.Type != "openstack_networking_router_interface_v2" { + continue + } + + _, err := ports.Get(networkingClient, rs.Primary.ID).Extract() + if err == nil { + return fmt.Errorf("Router interface still exists") + } + } + + return nil +} + +func testAccCheckNetworkingV2RouterInterfaceExists(t *testing.T, n string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + config := testAccProvider.Meta().(*Config) + networkingClient, err := config.networkingV2Client(OS_REGION_NAME) + if err != nil { + return fmt.Errorf("(testAccCheckNetworkingV2RouterInterfaceExists) Error creating OpenStack networking client: %s", err) + } + + found, err := ports.Get(networkingClient, rs.Primary.ID).Extract() + if err != nil { + return err + } + + if found.ID != rs.Primary.ID { + return fmt.Errorf("Router interface not found") + } + + return nil + } +} + +var testAccNetworkingV2RouterInterface_basic = fmt.Sprintf(` +resource "openstack_networking_router_v2" "router_1" { + name = "router_1" + admin_state_up = "true" +} + +resource "openstack_networking_router_interface_v2" "int_1" { + subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}" + router_id = "${openstack_networking_router_v2.router_1.id}" +} + +resource "openstack_networking_network_v2" "network_1" { + name = "network_1" + admin_state_up = "true" +} + +resource "openstack_networking_subnet_v2" "subnet_1" { + network_id = "${openstack_networking_network_v2.network_1.id}" + cidr = "192.168.199.0/24" + ip_version = 4 +}`) diff --git a/builtin/providers/openstack/resource_openstack_networking_router_v2.go b/builtin/providers/openstack/resource_openstack_networking_router_v2.go 
new file mode 100644 index 000000000000..39ecc6ee2aaf --- /dev/null +++ b/builtin/providers/openstack/resource_openstack_networking_router_v2.go @@ -0,0 +1,169 @@ +package openstack + +import ( + "fmt" + "log" + "strconv" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/rackspace/gophercloud" + "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/layer3/routers" +) + +func resourceNetworkingRouterV2() *schema.Resource { + return &schema.Resource{ + Create: resourceNetworkingRouterV2Create, + Read: resourceNetworkingRouterV2Read, + Update: resourceNetworkingRouterV2Update, + Delete: resourceNetworkingRouterV2Delete, + + Schema: map[string]*schema.Schema{ + "region": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + DefaultFunc: envDefaultFuncAllowMissing("OS_REGION_NAME"), + }, + "name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: false, + }, + "admin_state_up": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: false, + }, + "external_gateway": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: false, + }, + "tenant_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + }, + } +} + +func resourceNetworkingRouterV2Create(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + createOpts := routers.CreateOpts{ + Name: d.Get("name").(string), + TenantID: d.Get("tenant_id").(string), + } + + asuRaw := d.Get("admin_state_up").(string) + if asuRaw != "" { + asu, err := strconv.ParseBool(asuRaw) + if err != nil { + return fmt.Errorf("admin_state_up, if provided, must be either 'true' or 'false'") + } + createOpts.AdminStateUp = &asu + } + + externalGateway := d.Get("external_gateway").(string) + if 
externalGateway != "" { + gatewayInfo := routers.GatewayInfo{ + NetworkID: externalGateway, + } + createOpts.GatewayInfo = &gatewayInfo + } + + log.Printf("[DEBUG] Create Options: %#v", createOpts) + n, err := routers.Create(networkingClient, createOpts).Extract() + if err != nil { + return fmt.Errorf("Error creating OpenStack Neutron router: %s", err) + } + log.Printf("[INFO] Router ID: %s", n.ID) + + d.SetId(n.ID) + + return resourceNetworkingRouterV2Read(d, meta) +} + +func resourceNetworkingRouterV2Read(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + n, err := routers.Get(networkingClient, d.Id()).Extract() + if err != nil { + httpError, ok := err.(*gophercloud.UnexpectedResponseCodeError) + if !ok { + return fmt.Errorf("Error retrieving OpenStack Neutron Router: %s", err) + } + + if httpError.Actual == 404 { + d.SetId("") + return nil + } + return fmt.Errorf("Error retrieving OpenStack Neutron Router: %s", err) + } + + log.Printf("[DEBUG] Retrieved Router %s: %+v", d.Id(), n) + + d.Set("name", n.Name) + d.Set("admin_state_up", strconv.FormatBool(n.AdminStateUp)) + d.Set("tenant_id", n.TenantID) + d.Set("external_gateway", n.GatewayInfo.NetworkID) + + return nil +} + +func resourceNetworkingRouterV2Update(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + var updateOpts routers.UpdateOpts + if d.HasChange("name") { + updateOpts.Name = d.Get("name").(string) + } + if d.HasChange("admin_state_up") { + asuRaw := d.Get("admin_state_up").(string) + if asuRaw != "" { + asu, err := strconv.ParseBool(asuRaw) + if err != nil { + return fmt.Errorf("admin_state_up, 
if provided, must be either 'true' or 'false'") + } + updateOpts.AdminStateUp = &asu + } + } + + log.Printf("[DEBUG] Updating Router %s with options: %+v", d.Id(), updateOpts) + + _, err = routers.Update(networkingClient, d.Id(), updateOpts).Extract() + if err != nil { + return fmt.Errorf("Error updating OpenStack Neutron Router: %s", err) + } + + return resourceNetworkingRouterV2Read(d, meta) +} + +func resourceNetworkingRouterV2Delete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + err = routers.Delete(networkingClient, d.Id()).ExtractErr() + if err != nil { + return fmt.Errorf("Error deleting OpenStack Neutron Router: %s", err) + } + + d.SetId("") + return nil +} diff --git a/builtin/providers/openstack/resource_openstack_networking_router_v2_test.go b/builtin/providers/openstack/resource_openstack_networking_router_v2_test.go new file mode 100644 index 000000000000..248f4e721f72 --- /dev/null +++ b/builtin/providers/openstack/resource_openstack_networking_router_v2_test.go @@ -0,0 +1,100 @@ +package openstack + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + + "github.com/rackspace/gophercloud/openstack/networking/v2/extensions/layer3/routers" +) + +func TestAccNetworkingV2Router_basic(t *testing.T) { + var router routers.Router + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckNetworkingV2RouterDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccNetworkingV2Router_basic, + Check: resource.ComposeTestCheckFunc( + testAccCheckNetworkingV2RouterExists(t, "openstack_networking_router_v2.foo", &router), + ), + }, + resource.TestStep{ + Config: 
testAccNetworkingV2Router_update, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("openstack_networking_router_v2.foo", "name", "router_2"), + ), + }, + }, + }) +} + +func testAccCheckNetworkingV2RouterDestroy(s *terraform.State) error { + config := testAccProvider.Meta().(*Config) + networkingClient, err := config.networkingV2Client(OS_REGION_NAME) + if err != nil { + return fmt.Errorf("(testAccCheckNetworkingV2RouterDestroy) Error creating OpenStack networking client: %s", err) + } + + for _, rs := range s.RootModule().Resources { + if rs.Type != "openstack_networking_router_v2" { + continue + } + + _, err := routers.Get(networkingClient, rs.Primary.ID).Extract() + if err == nil { + return fmt.Errorf("Router still exists") + } + } + + return nil +} + +func testAccCheckNetworkingV2RouterExists(t *testing.T, n string, router *routers.Router) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + config := testAccProvider.Meta().(*Config) + networkingClient, err := config.networkingV2Client(OS_REGION_NAME) + if err != nil { + return fmt.Errorf("(testAccCheckNetworkingV2RouterExists) Error creating OpenStack networking client: %s", err) + } + + found, err := routers.Get(networkingClient, rs.Primary.ID).Extract() + if err != nil { + return err + } + + if found.ID != rs.Primary.ID { + return fmt.Errorf("Router not found") + } + + *router = *found + + return nil + } +} + +var testAccNetworkingV2Router_basic = fmt.Sprintf(` + resource "openstack_networking_router_v2" "foo" { + name = "router" + admin_state_up = "true" + }`) + +var testAccNetworkingV2Router_update = fmt.Sprintf(` + resource "openstack_networking_router_v2" "foo" { + name = "router_2" + admin_state_up = "true" + }`) diff --git a/builtin/providers/openstack/resource_openstack_networking_subnet_v2.go 
b/builtin/providers/openstack/resource_openstack_networking_subnet_v2.go new file mode 100644 index 000000000000..573e4d7eed50 --- /dev/null +++ b/builtin/providers/openstack/resource_openstack_networking_subnet_v2.go @@ -0,0 +1,272 @@ +package openstack + +import ( + "fmt" + "log" + "strconv" + + "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/schema" + "github.com/rackspace/gophercloud/openstack/networking/v2/subnets" +) + +func resourceNetworkingSubnetV2() *schema.Resource { + return &schema.Resource{ + Create: resourceNetworkingSubnetV2Create, + Read: resourceNetworkingSubnetV2Read, + Update: resourceNetworkingSubnetV2Update, + Delete: resourceNetworkingSubnetV2Delete, + + Schema: map[string]*schema.Schema{ + "region": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + DefaultFunc: envDefaultFuncAllowMissing("OS_REGION_NAME"), + }, + "network_id": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "cidr": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "name": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: false, + }, + "tenant_id": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "allocation_pools": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "start": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + "end": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + "gateway_ip": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: false, + }, + "ip_version": &schema.Schema{ + Type: schema.TypeInt, + Required: true, + ForceNew: true, + }, + "enable_dhcp": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: false, + }, + "dns_nameservers": &schema.Schema{ + Type: schema.TypeSet, + 
Optional: true, + ForceNew: false, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: func(v interface{}) int { + return hashcode.String(v.(string)) + }, + }, + "host_routes": &schema.Schema{ + Type: schema.TypeList, + Optional: true, + ForceNew: false, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "destination_cidr": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + "next_hop": &schema.Schema{ + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + }, + } +} + +func resourceNetworkingSubnetV2Create(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + createOpts := subnets.CreateOpts{ + NetworkID: d.Get("network_id").(string), + CIDR: d.Get("cidr").(string), + Name: d.Get("name").(string), + TenantID: d.Get("tenant_id").(string), + AllocationPools: resourceSubnetAllocationPoolsV2(d), + GatewayIP: d.Get("gateway_ip").(string), + IPVersion: d.Get("ip_version").(int), + DNSNameservers: resourceSubnetDNSNameserversV2(d), + HostRoutes: resourceSubnetHostRoutesV2(d), + } + + edRaw := d.Get("enable_dhcp").(string) + if edRaw != "" { + ed, err := strconv.ParseBool(edRaw) + if err != nil { + return fmt.Errorf("enable_dhcp, if provided, must be either 'true' or 'false'") + } + createOpts.EnableDHCP = &ed + } + + log.Printf("[DEBUG] Create Options: %#v", createOpts) + s, err := subnets.Create(networkingClient, createOpts).Extract() + if err != nil { + return fmt.Errorf("Error creating OpenStack Neutron subnet: %s", err) + } + log.Printf("[INFO] Subnet ID: %s", s.ID) + + d.SetId(s.ID) + + return resourceNetworkingSubnetV2Read(d, meta) +} + +func resourceNetworkingSubnetV2Read(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkingClient, err := 
config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + s, err := subnets.Get(networkingClient, d.Id()).Extract() + if err != nil { + return CheckDeleted(d, err, "subnet") + } + + log.Printf("[DEBUG] Retrieved Subnet %s: %+v", d.Id(), s) + + d.Set("network_id", s.NetworkID) + d.Set("cidr", s.CIDR) + d.Set("ip_version", s.IPVersion) + d.Set("name", s.Name) + d.Set("tenant_id", s.TenantID) + d.Set("allocation_pools", s.AllocationPools) + d.Set("gateway_ip", s.GatewayIP) + d.Set("enable_dhcp", strconv.FormatBool(s.EnableDHCP)) + d.Set("dns_nameservers", s.DNSNameservers) + d.Set("host_routes", s.HostRoutes) + + return nil +} + +func resourceNetworkingSubnetV2Update(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + var updateOpts subnets.UpdateOpts + + if d.HasChange("name") { + updateOpts.Name = d.Get("name").(string) + } + + if d.HasChange("gateway_ip") { + updateOpts.GatewayIP = d.Get("gateway_ip").(string) + } + + if d.HasChange("dns_nameservers") { + updateOpts.DNSNameservers = resourceSubnetDNSNameserversV2(d) + } + + if d.HasChange("host_routes") { + updateOpts.HostRoutes = resourceSubnetHostRoutesV2(d) + } + + if d.HasChange("enable_dhcp") { + edRaw := d.Get("enable_dhcp").(string) + if edRaw != "" { + ed, err := strconv.ParseBool(edRaw) + if err != nil { + return fmt.Errorf("enable_dhcp, if provided, must be either 'true' or 'false'") + } + updateOpts.EnableDHCP = &ed + } + } + + log.Printf("[DEBUG] Updating Subnet %s with options: %+v", d.Id(), updateOpts) + + _, err = subnets.Update(networkingClient, d.Id(), updateOpts).Extract() + if err != nil { + return fmt.Errorf("Error updating OpenStack Neutron Subnet: %s", err) + } + + return
resourceNetworkingSubnetV2Read(d, meta) +} + +func resourceNetworkingSubnetV2Delete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + networkingClient, err := config.networkingV2Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack networking client: %s", err) + } + + err = subnets.Delete(networkingClient, d.Id()).ExtractErr() + if err != nil { + return fmt.Errorf("Error deleting OpenStack Neutron Subnet: %s", err) + } + + d.SetId("") + return nil +} + +func resourceSubnetAllocationPoolsV2(d *schema.ResourceData) []subnets.AllocationPool { + rawAPs := d.Get("allocation_pools").([]interface{}) + aps := make([]subnets.AllocationPool, len(rawAPs)) + for i, raw := range rawAPs { + rawMap := raw.(map[string]interface{}) + aps[i] = subnets.AllocationPool{ + Start: rawMap["start"].(string), + End: rawMap["end"].(string), + } + } + return aps +} + +func resourceSubnetDNSNameserversV2(d *schema.ResourceData) []string { + rawDNSN := d.Get("dns_nameservers").(*schema.Set) + dnsn := make([]string, rawDNSN.Len()) + for i, raw := range rawDNSN.List() { + dnsn[i] = raw.(string) + } + return dnsn +} + +func resourceSubnetHostRoutesV2(d *schema.ResourceData) []subnets.HostRoute { + rawHR := d.Get("host_routes").([]interface{}) + hr := make([]subnets.HostRoute, len(rawHR)) + for i, raw := range rawHR { + rawMap := raw.(map[string]interface{}) + hr[i] = subnets.HostRoute{ + DestinationCIDR: rawMap["destination_cidr"].(string), + NextHop: rawMap["next_hop"].(string), + } + } + return hr +} diff --git a/builtin/providers/openstack/resource_openstack_networking_subnet_v2_test.go b/builtin/providers/openstack/resource_openstack_networking_subnet_v2_test.go new file mode 100644 index 000000000000..d7f6116e9fd1 --- /dev/null +++ b/builtin/providers/openstack/resource_openstack_networking_subnet_v2_test.go @@ -0,0 +1,119 @@ +package openstack + +import ( + "fmt" + "testing" + + 
"github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + + "github.com/rackspace/gophercloud/openstack/networking/v2/subnets" +) + +func TestAccNetworkingV2Subnet_basic(t *testing.T) { + var subnet subnets.Subnet + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckNetworkingV2SubnetDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccNetworkingV2Subnet_basic, + Check: resource.ComposeTestCheckFunc( + testAccCheckNetworkingV2SubnetExists(t, "openstack_networking_subnet_v2.subnet_1", &subnet), + ), + }, + resource.TestStep{ + Config: testAccNetworkingV2Subnet_update, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("openstack_networking_subnet_v2.subnet_1", "name", "tf-test-subnet"), + resource.TestCheckResourceAttr("openstack_networking_subnet_v2.subnet_1", "gateway_ip", "192.68.0.1"), + ), + }, + }, + }) +} + +func testAccCheckNetworkingV2SubnetDestroy(s *terraform.State) error { + config := testAccProvider.Meta().(*Config) + networkingClient, err := config.networkingV2Client(OS_REGION_NAME) + if err != nil { + return fmt.Errorf("(testAccCheckNetworkingV2SubnetDestroy) Error creating OpenStack networking client: %s", err) + } + + for _, rs := range s.RootModule().Resources { + if rs.Type != "openstack_networking_subnet_v2" { + continue + } + + _, err := subnets.Get(networkingClient, rs.Primary.ID).Extract() + if err == nil { + return fmt.Errorf("Subnet still exists") + } + } + + return nil +} + +func testAccCheckNetworkingV2SubnetExists(t *testing.T, n string, subnet *subnets.Subnet) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + config := testAccProvider.Meta().(*Config) + networkingClient, err := 
config.networkingV2Client(OS_REGION_NAME) + if err != nil { + return fmt.Errorf("(testAccCheckNetworkingV2SubnetExists) Error creating OpenStack networking client: %s", err) + } + + found, err := subnets.Get(networkingClient, rs.Primary.ID).Extract() + if err != nil { + return err + } + + if found.ID != rs.Primary.ID { + return fmt.Errorf("Subnet not found") + } + + *subnet = *found + + return nil + } +} + +var testAccNetworkingV2Subnet_basic = fmt.Sprintf(` + resource "openstack_networking_network_v2" "network_1" { + region = "%s" + name = "network_1" + admin_state_up = "true" + } + + resource "openstack_networking_subnet_v2" "subnet_1" { + region = "%s" + network_id = "${openstack_networking_network_v2.network_1.id}" + cidr = "192.168.199.0/24" + ip_version = 4 + }`, OS_REGION_NAME, OS_REGION_NAME) + +var testAccNetworkingV2Subnet_update = fmt.Sprintf(` + resource "openstack_networking_network_v2" "network_1" { + region = "%s" + name = "network_1" + admin_state_up = "true" + } + + resource "openstack_networking_subnet_v2" "subnet_1" { + region = "%s" + name = "tf-test-subnet" + network_id = "${openstack_networking_network_v2.network_1.id}" + cidr = "192.168.199.0/24" + ip_version = 4 + gateway_ip = "192.68.0.1" + }`, OS_REGION_NAME, OS_REGION_NAME) diff --git a/builtin/providers/openstack/resource_openstack_objectstorage_container_v1.go b/builtin/providers/openstack/resource_openstack_objectstorage_container_v1.go new file mode 100644 index 000000000000..b476a4080e36 --- /dev/null +++ b/builtin/providers/openstack/resource_openstack_objectstorage_container_v1.go @@ -0,0 +1,148 @@ +package openstack + +import ( + "fmt" + "log" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/rackspace/gophercloud/openstack/objectstorage/v1/containers" +) + +func resourceObjectStorageContainerV1() *schema.Resource { + return &schema.Resource{ + Create: resourceObjectStorageContainerV1Create, + Read: resourceObjectStorageContainerV1Read, + Update: 
resourceObjectStorageContainerV1Update, + Delete: resourceObjectStorageContainerV1Delete, + + Schema: map[string]*schema.Schema{ + "region": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + DefaultFunc: envDefaultFuncAllowMissing("OS_REGION_NAME"), + }, + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: false, + }, + "container_read": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: false, + }, + "container_sync_to": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: false, + }, + "container_sync_key": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: false, + }, + "container_write": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: false, + }, + "content_type": &schema.Schema{ + Type: schema.TypeString, + Optional: true, + ForceNew: false, + }, + "metadata": &schema.Schema{ + Type: schema.TypeMap, + Optional: true, + ForceNew: false, + }, + }, + } +} + +func resourceObjectStorageContainerV1Create(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + objectStorageClient, err := config.objectStorageV1Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack object storage client: %s", err) + } + + cn := d.Get("name").(string) + + createOpts := &containers.CreateOpts{ + ContainerRead: d.Get("container_read").(string), + ContainerSyncTo: d.Get("container_sync_to").(string), + ContainerSyncKey: d.Get("container_sync_key").(string), + ContainerWrite: d.Get("container_write").(string), + ContentType: d.Get("content_type").(string), + Metadata: resourceContainerMetadataV2(d), + } + + log.Printf("[DEBUG] Create Options: %#v", createOpts) + _, err = containers.Create(objectStorageClient, cn, createOpts).Extract() + if err != nil { + return fmt.Errorf("Error creating OpenStack container: %s", err) + } + log.Printf("[INFO] Container ID: %s", cn) + + // Store 
the ID now + d.SetId(cn) + + return resourceObjectStorageContainerV1Read(d, meta) +} + +func resourceObjectStorageContainerV1Read(d *schema.ResourceData, meta interface{}) error { + return nil +} + +func resourceObjectStorageContainerV1Update(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + objectStorageClient, err := config.objectStorageV1Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack object storage client: %s", err) + } + + updateOpts := containers.UpdateOpts{ + ContainerRead: d.Get("container_read").(string), + ContainerSyncTo: d.Get("container_sync_to").(string), + ContainerSyncKey: d.Get("container_sync_key").(string), + ContainerWrite: d.Get("container_write").(string), + ContentType: d.Get("content_type").(string), + } + + if d.HasChange("metadata") { + updateOpts.Metadata = resourceContainerMetadataV2(d) + } + + _, err = containers.Update(objectStorageClient, d.Id(), updateOpts).Extract() + if err != nil { + return fmt.Errorf("Error updating OpenStack container: %s", err) + } + + return resourceObjectStorageContainerV1Read(d, meta) +} + +func resourceObjectStorageContainerV1Delete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + objectStorageClient, err := config.objectStorageV1Client(d.Get("region").(string)) + if err != nil { + return fmt.Errorf("Error creating OpenStack object storage client: %s", err) + } + + _, err = containers.Delete(objectStorageClient, d.Id()).Extract() + if err != nil { + return fmt.Errorf("Error deleting OpenStack container: %s", err) + } + + d.SetId("") + return nil +} + +func resourceContainerMetadataV2(d *schema.ResourceData) map[string]string { + m := make(map[string]string) + for key, val := range d.Get("metadata").(map[string]interface{}) { + m[key] = val.(string) + } + return m +} diff --git a/builtin/providers/openstack/resource_openstack_objectstorage_container_v1_test.go 
b/builtin/providers/openstack/resource_openstack_objectstorage_container_v1_test.go new file mode 100644 index 000000000000..9377ad2fb0e8 --- /dev/null +++ b/builtin/providers/openstack/resource_openstack_objectstorage_container_v1_test.go @@ -0,0 +1,77 @@ +package openstack + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + "github.com/rackspace/gophercloud/openstack/objectstorage/v1/containers" +) + +func TestAccObjectStorageV1Container_basic(t *testing.T) { + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckObjectStorageV1ContainerDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccObjectStorageV1Container_basic, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("openstack_objectstorage_container_v1.container_1", "name", "tf-test-container"), + resource.TestCheckResourceAttr("openstack_objectstorage_container_v1.container_1", "content_type", "application/json"), + ), + }, + resource.TestStep{ + Config: testAccObjectStorageV1Container_update, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("openstack_objectstorage_container_v1.container_1", "content_type", "text/plain"), + ), + }, + }, + }) +} + +func testAccCheckObjectStorageV1ContainerDestroy(s *terraform.State) error { + config := testAccProvider.Meta().(*Config) + objectStorageClient, err := config.objectStorageV1Client(OS_REGION_NAME) + if err != nil { + return fmt.Errorf("Error creating OpenStack object storage client: %s", err) + } + + for _, rs := range s.RootModule().Resources { + if rs.Type != "openstack_objectstorage_container_v1" { + continue + } + + _, err := containers.Get(objectStorageClient, rs.Primary.ID).Extract() + if err == nil { + return fmt.Errorf("Container still exists") + } + } + + return nil +} + +var testAccObjectStorageV1Container_basic = 
fmt.Sprintf(` + resource "openstack_objectstorage_container_v1" "container_1" { + region = "%s" + name = "tf-test-container" + metadata { + test = "true" + } + content_type = "application/json" + }`, + OS_REGION_NAME) + +var testAccObjectStorageV1Container_update = fmt.Sprintf(` + resource "openstack_objectstorage_container_v1" "container_1" { + region = "%s" + name = "tf-test-container" + metadata { + test = "true" + } + content_type = "text/plain" + }`, + OS_REGION_NAME) diff --git a/builtin/providers/openstack/util.go b/builtin/providers/openstack/util.go new file mode 100644 index 000000000000..93a8bfbc5254 --- /dev/null +++ b/builtin/providers/openstack/util.go @@ -0,0 +1,22 @@ +package openstack + +import ( + "fmt" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/rackspace/gophercloud" +) + +// CheckDeleted checks the error to see if it's a 404 (Not Found) and, if so, +// sets the resource ID to the empty string instead of throwing an error. +func CheckDeleted(d *schema.ResourceData, err error, msg string) error { + errCode, ok := err.(*gophercloud.UnexpectedResponseCodeError) + if !ok { + return fmt.Errorf("%s: %s", msg, err) + } + if errCode.Actual == 404 { + d.SetId("") + return nil + } + return fmt.Errorf("%s: %s", msg, err) +} diff --git a/builtin/providers/terraform/provider.go b/builtin/providers/terraform/provider.go new file mode 100644 index 000000000000..e71d5f40a39b --- /dev/null +++ b/builtin/providers/terraform/provider.go @@ -0,0 +1,15 @@ +package terraform + +import ( + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" +) + +// Provider returns a terraform.ResourceProvider. 
+func Provider() terraform.ResourceProvider { + return &schema.Provider{ + ResourcesMap: map[string]*schema.Resource{ + "terraform_remote_state": resourceRemoteState(), + }, + } +} diff --git a/builtin/providers/terraform/provider_test.go b/builtin/providers/terraform/provider_test.go new file mode 100644 index 000000000000..65f3ce4adb6c --- /dev/null +++ b/builtin/providers/terraform/provider_test.go @@ -0,0 +1,31 @@ +package terraform + +import ( + "testing" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" +) + +var testAccProviders map[string]terraform.ResourceProvider +var testAccProvider *schema.Provider + +func init() { + testAccProvider = Provider().(*schema.Provider) + testAccProviders = map[string]terraform.ResourceProvider{ + "terraform": testAccProvider, + } +} + +func TestProvider(t *testing.T) { + if err := Provider().(*schema.Provider).InternalValidate(); err != nil { + t.Fatalf("err: %s", err) + } +} + +func TestProvider_impl(t *testing.T) { + var _ terraform.ResourceProvider = Provider() +} + +func testAccPreCheck(t *testing.T) { +} diff --git a/builtin/providers/terraform/resource_state.go b/builtin/providers/terraform/resource_state.go new file mode 100644 index 000000000000..fb0e85ee2c7e --- /dev/null +++ b/builtin/providers/terraform/resource_state.go @@ -0,0 +1,76 @@ +package terraform + +import ( + "log" + "time" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/state/remote" +) + +func resourceRemoteState() *schema.Resource { + return &schema.Resource{ + Create: resourceRemoteStateCreate, + Read: resourceRemoteStateRead, + Delete: resourceRemoteStateDelete, + + Schema: map[string]*schema.Schema{ + "backend": &schema.Schema{ + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "config": &schema.Schema{ + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + }, + + "output": &schema.Schema{ + Type: schema.TypeMap, + Computed: true, + }, + 
}, + } +} + +func resourceRemoteStateCreate(d *schema.ResourceData, meta interface{}) error { + return resourceRemoteStateRead(d, meta) +} + +func resourceRemoteStateRead(d *schema.ResourceData, meta interface{}) error { + backend := d.Get("backend").(string) + config := make(map[string]string) + for k, v := range d.Get("config").(map[string]interface{}) { + config[k] = v.(string) + } + + // Create the client to access our remote state + log.Printf("[DEBUG] Initializing remote state client: %s", backend) + client, err := remote.NewClient(backend, config) + if err != nil { + return err + } + + // Create the remote state itself and refresh it in order to load the state + log.Printf("[DEBUG] Loading remote state...") + state := &remote.State{Client: client} + if err := state.RefreshState(); err != nil { + return err + } + + var outputs map[string]string + if !state.State().Empty() { + outputs = state.State().RootModule().Outputs + } + + d.SetId(time.Now().UTC().String()) + d.Set("output", outputs) + return nil +} + +func resourceRemoteStateDelete(d *schema.ResourceData, meta interface{}) error { + d.SetId("") + return nil +} diff --git a/builtin/providers/terraform/resource_state_test.go b/builtin/providers/terraform/resource_state_test.go new file mode 100644 index 000000000000..42ad55adac98 --- /dev/null +++ b/builtin/providers/terraform/resource_state_test.go @@ -0,0 +1,54 @@ +package terraform + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccState_basic(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccState_basic, + Check: resource.ComposeTestCheckFunc( + testAccCheckStateValue( + "terraform_remote_state.foo", "foo", "bar"), + ), + }, + }, + }) +} + +func testAccCheckStateValue(id, name, value string) 
resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[id] + if !ok { + return fmt.Errorf("Not found: %s", id) + } + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + v := rs.Primary.Attributes["output."+name] + if v != value { + return fmt.Errorf( + "Value for %s is %s, not %s", name, v, value) + } + + return nil + } +} + +const testAccState_basic = ` +resource "terraform_remote_state" "foo" { + backend = "_local" + + config { + path = "./test-fixtures/basic.tfstate" + } +}` diff --git a/builtin/provisioners/file/resource_provisioner.go b/builtin/provisioners/file/resource_provisioner.go index bb95c0860330..8b9e14570f1d 100644 --- a/builtin/provisioners/file/resource_provisioner.go +++ b/builtin/provisioners/file/resource_provisioner.go @@ -60,6 +60,7 @@ func (p *ResourceProvisioner) copyFiles(conf *helper.SSHConfig, src, dst string) if err != nil { return err } + defer config.CleanupConfig() // Wait and retry until we establish the SSH connection var comm *helper.SSHCommunicator diff --git a/builtin/provisioners/remote-exec/resource_provisioner.go b/builtin/provisioners/remote-exec/resource_provisioner.go index b3f0d0c0e962..046e0e860cfb 100644 --- a/builtin/provisioners/remote-exec/resource_provisioner.go +++ b/builtin/provisioners/remote-exec/resource_provisioner.go @@ -172,16 +172,20 @@ func (p *ResourceProvisioner) runScripts( if err != nil { return err } + defer config.CleanupConfig() o.Output(fmt.Sprintf( "Connecting to remote host via SSH...\n"+ " Host: %s\n"+ " User: %s\n"+ " Password: %v\n"+ - " Private key: %v", + " Private key: %v"+ + " SSH Agent: %v", conf.Host, conf.User, conf.Password != "", - conf.KeyFile != "")) + conf.KeyFile != "", + conf.Agent, + )) // Wait and retry until we establish the SSH connection var comm *helper.SSHCommunicator diff --git a/command/apply.go b/command/apply.go index d46b71679814..529d6e701a87 100644 --- a/command/apply.go +++ b/command/apply.go @@ -93,6 
+93,7 @@ func (c *ApplyCommand) Run(args []string) int { // Build the context based on the arguments given ctx, planned, err := c.Context(contextOpts{ + Destroy: c.Destroy, Path: configPath, StatePath: c.Meta.statePath, }) @@ -140,12 +141,7 @@ func (c *ApplyCommand) Run(args []string) int { } } - var opts terraform.PlanOpts - if c.Destroy { - opts.Destroy = true - } - - if _, err := ctx.Plan(&opts); err != nil { + if _, err := ctx.Plan(); err != nil { c.Ui.Error(fmt.Sprintf( "Error creating plan: %s", err)) return 1 @@ -319,6 +315,10 @@ Options: "-state". This can be used to preserve the old state. + -target=resource Resource to target. Operation will be limited to this + resource and its dependencies. This flag can be used + multiple times. + -var 'foo=bar' Set a variable in the Terraform configuration. This flag can be set multiple times. @@ -357,6 +357,10 @@ Options: "-state". This can be used to preserve the old state. + -target=resource Resource to target. Operation will be limited to this + resource and its dependencies. This flag can be used + multiple times. + -var 'foo=bar' Set a variable in the Terraform configuration. This flag can be set multiple times. 
diff --git a/command/apply_destroy_test.go b/command/apply_destroy_test.go index bdc2440f0bce..63afb15edb32 100644 --- a/command/apply_destroy_test.go +++ b/command/apply_destroy_test.go @@ -116,6 +116,96 @@ func TestApply_destroyPlan(t *testing.T) { } } +func TestApply_destroyTargeted(t *testing.T) { + originalState := &terraform.State{ + Modules: []*terraform.ModuleState{ + &terraform.ModuleState{ + Path: []string{"root"}, + Resources: map[string]*terraform.ResourceState{ + "test_instance.foo": &terraform.ResourceState{ + Type: "test_instance", + Primary: &terraform.InstanceState{ + ID: "i-ab123", + }, + }, + "test_load_balancer.foo": &terraform.ResourceState{ + Type: "test_load_balancer", + Primary: &terraform.InstanceState{ + ID: "lb-abc123", + }, + }, + }, + }, + }, + } + + statePath := testStateFile(t, originalState) + + p := testProvider() + ui := new(cli.MockUi) + c := &ApplyCommand{ + Destroy: true, + Meta: Meta{ + ContextOpts: testCtxConfig(p), + Ui: ui, + }, + } + + // Run the apply command pointing to our existing state + args := []string{ + "-force", + "-target", "test_instance.foo", + "-state", statePath, + testFixturePath("apply-destroy-targeted"), + } + if code := c.Run(args); code != 0 { + t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String()) + } + + // Verify a new state exists + if _, err := os.Stat(statePath); err != nil { + t.Fatalf("err: %s", err) + } + + f, err := os.Open(statePath) + if err != nil { + t.Fatalf("err: %s", err) + } + defer f.Close() + + state, err := terraform.ReadState(f) + if err != nil { + t.Fatalf("err: %s", err) + } + if state == nil { + t.Fatal("state should not be nil") + } + + actualStr := strings.TrimSpace(state.String()) + expectedStr := strings.TrimSpace(testApplyDestroyStr) + if actualStr != expectedStr { + t.Fatalf("bad:\n\n%s\n\n%s", actualStr, expectedStr) + } + + // Should have a backup file + f, err = os.Open(statePath + DefaultBackupExtention) + if err != nil { + t.Fatalf("err: %s", err) + } + + 
backupState, err := terraform.ReadState(f) + f.Close() + if err != nil { + t.Fatalf("err: %s", err) + } + + actualStr = strings.TrimSpace(backupState.String()) + expectedStr = strings.TrimSpace(originalState.String()) + if actualStr != expectedStr { + t.Fatalf("bad:\n\nactual:\n%s\n\nexpected:\n%s", actualStr, expectedStr) + } +} + const testApplyDestroyStr = ` ` diff --git a/command/command_test.go b/command/command_test.go index 303fc4b2bdcf..2544cf531844 100644 --- a/command/command_test.go +++ b/command/command_test.go @@ -148,6 +148,27 @@ func testStateFileDefault(t *testing.T, s *terraform.State) string { return DefaultStateFilename } +// testStateFileRemote writes the state out to the remote statefile +// in the cwd. Use `testCwd` to change into a temp cwd. +func testStateFileRemote(t *testing.T, s *terraform.State) string { + path := filepath.Join(DefaultDataDir, DefaultStateFilename) + if err := os.MkdirAll(filepath.Dir(path), 0755); err != nil { + t.Fatalf("err: %s", err) + } + + f, err := os.Create(path) + if err != nil { + t.Fatalf("err: %s", err) + } + defer f.Close() + + if err := terraform.WriteState(s, f); err != nil { + t.Fatalf("err: %s", err) + } + + return path +} + // testStateOutput tests that the state at the given path contains // the expected state string. func testStateOutput(t *testing.T, path string, expected string) { diff --git a/command/flag_kv.go b/command/flag_kv.go index fd9b57b3a4a1..6e019877849e 100644 --- a/command/flag_kv.go +++ b/command/flag_kv.go @@ -85,3 +85,17 @@ func loadKVFile(rawPath string) (map[string]string, error) { return result, nil } + +// FlagStringSlice is a flag.Value implementation for parsing targets from the +// command line, e.g.
-target=aws_instance.foo -target=aws_vpc.bar + +type FlagStringSlice []string + +func (v *FlagStringSlice) String() string { + return "" +} +func (v *FlagStringSlice) Set(raw string) error { + *v = append(*v, raw) + + return nil +} diff --git a/command/meta.go b/command/meta.go index 7cf3ebe051ac..b542304af929 100644 --- a/command/meta.go +++ b/command/meta.go @@ -38,6 +38,9 @@ type Meta struct { input bool variables map[string]string + // Targets for this context (private) + targets []string + color bool oldUi cli.Ui @@ -126,6 +129,9 @@ func (m *Meta) Context(copts contextOpts) (*terraform.Context, bool, error) { m.statePath = copts.StatePath } + // Tell the context if we're in a destroy plan / apply + opts.Destroy = copts.Destroy + // Store the loaded state state, err := m.State() if err != nil { @@ -138,11 +144,7 @@ func (m *Meta) Context(copts contextOpts) (*terraform.Context, bool, error) { return nil, false, fmt.Errorf("Error loading config: %s", err) } - dataDir := DefaultDataDirectory - if m.dataDir != "" { - dataDir = m.dataDir - } - err = mod.Load(m.moduleStorage(dataDir), copts.GetMode) + err = mod.Load(m.moduleStorage(m.DataDir()), copts.GetMode) if err != nil { return nil, false, fmt.Errorf("Error downloading modules: %s", err) } @@ -153,6 +155,16 @@ func (m *Meta) Context(copts contextOpts) (*terraform.Context, bool, error) { return ctx, false, nil } +// DataDir returns the directory where local data will be stored. +func (m *Meta) DataDir() string { + dataDir := DefaultDataDirectory + if m.dataDir != "" { + dataDir = m.dataDir + } + + return dataDir +} + // InputMode returns the type of input we should ask for in the form of // terraform.InputMode which is passed directly to Context.Input. 
func (m *Meta) InputMode() terraform.InputMode { @@ -164,6 +176,7 @@ func (m *Meta) InputMode() terraform.InputMode { mode |= terraform.InputModeProvider if len(m.variables) == 0 && m.autoKey == "" { mode |= terraform.InputModeVar + mode |= terraform.InputModeVarUnset } return mode @@ -205,7 +218,7 @@ func (m *Meta) StateOpts() *StateOpts { if localPath == "" { localPath = DefaultStateFilename } - remotePath := filepath.Join(DefaultDataDir, DefaultStateFilename) + remotePath := filepath.Join(m.DataDir(), DefaultStateFilename) return &StateOpts{ LocalPath: localPath, @@ -260,6 +273,7 @@ func (m *Meta) contextOpts() *terraform.ContextOpts { vs[k] = v } opts.Variables = vs + opts.Targets = m.targets opts.UIInput = m.UIInput() return &opts @@ -271,6 +285,7 @@ func (m *Meta) flagSet(n string) *flag.FlagSet { f.BoolVar(&m.input, "input", true, "input") f.Var((*FlagKV)(&m.variables), "var", "variables") f.Var((*FlagKVFile)(&m.variables), "var-file", "variable file") + f.Var((*FlagStringSlice)(&m.targets), "target", "resource to target") if m.autoKey != "" { f.Var((*FlagKVFile)(&m.autoVariables), m.autoKey, "variable file") @@ -381,4 +396,7 @@ type contextOpts struct { // GetMode is the module.GetMode to use when loading the module tree. GetMode module.GetMode + + // Set to true when running a destroy plan/apply. 
+ Destroy bool } diff --git a/command/meta_test.go b/command/meta_test.go index 4b1ae03f83ec..b0c4960f0067 100644 --- a/command/meta_test.go +++ b/command/meta_test.go @@ -65,7 +65,7 @@ func TestMetaInputMode(t *testing.T) { t.Fatalf("err: %s", err) } - if m.InputMode() != terraform.InputModeStd { + if m.InputMode() != terraform.InputModeStd|terraform.InputModeVarUnset { t.Fatalf("bad: %#v", m.InputMode()) } } diff --git a/command/module_storage.go b/command/module_storage.go index 846942aaafc6..e17786a8079e 100644 --- a/command/module_storage.go +++ b/command/module_storage.go @@ -14,16 +14,16 @@ type uiModuleStorage struct { Ui cli.Ui } -func (s *uiModuleStorage) Dir(source string) (string, bool, error) { - return s.Storage.Dir(source) +func (s *uiModuleStorage) Dir(key string) (string, bool, error) { + return s.Storage.Dir(key) } -func (s *uiModuleStorage) Get(source string, update bool) error { +func (s *uiModuleStorage) Get(key string, source string, update bool) error { updateStr := "" if update { updateStr = " (update)" } s.Ui.Output(fmt.Sprintf("Get: %s%s", source, updateStr)) - return s.Storage.Get(source, update) + return s.Storage.Get(key, source, update) } diff --git a/command/output.go b/command/output.go index 05fd31f34ff0..2e3c1bee073e 100644 --- a/command/output.go +++ b/command/output.go @@ -39,7 +39,7 @@ func (c *OutputCommand) Run(args []string) int { } state := stateStore.State() - if len(state.RootModule().Outputs) == 0 { + if state.Empty() || len(state.RootModule().Outputs) == 0 { c.Ui.Error(fmt.Sprintf( "The state file has no outputs defined. 
Define an output\n" + "in your configuration with the `output` directive and re-run\n" + diff --git a/command/output_test.go b/command/output_test.go index fbcda9962746..d3444c389d3c 100644 --- a/command/output_test.go +++ b/command/output_test.go @@ -142,6 +142,27 @@ func TestOutput_noArgs(t *testing.T) { } } +func TestOutput_noState(t *testing.T) { + originalState := &terraform.State{} + statePath := testStateFile(t, originalState) + + ui := new(cli.MockUi) + c := &OutputCommand{ + Meta: Meta{ + ContextOpts: testCtxConfig(testProvider()), + Ui: ui, + }, + } + + args := []string{ + "-state", statePath, + "foo", + } + if code := c.Run(args); code != 1 { + t.Fatalf("bad: \n%s", ui.ErrorWriter.String()) + } +} + func TestOutput_noVars(t *testing.T) { originalState := &terraform.State{ Modules: []*terraform.ModuleState{ diff --git a/command/plan.go b/command/plan.go index 24365d18538c..f23e1bb6e4f0 100644 --- a/command/plan.go +++ b/command/plan.go @@ -16,7 +16,7 @@ type PlanCommand struct { } func (c *PlanCommand) Run(args []string) int { - var destroy, refresh bool + var destroy, refresh, detailed bool var outPath string var moduleDepth int @@ -29,6 +29,7 @@ func (c *PlanCommand) Run(args []string) int { cmdFlags.StringVar(&outPath, "out", "", "path") cmdFlags.StringVar(&c.Meta.statePath, "state", DefaultStateFilename, "path") cmdFlags.StringVar(&c.Meta.backupPath, "backup", "", "path") + cmdFlags.BoolVar(&detailed, "detailed-exitcode", false, "detailed-exitcode") cmdFlags.Usage = func() { c.Ui.Error(c.Help()) } if err := cmdFlags.Parse(args); err != nil { return 1 @@ -53,6 +54,7 @@ func (c *PlanCommand) Run(args []string) int { } ctx, _, err := c.Context(contextOpts{ + Destroy: destroy, Path: path, StatePath: c.Meta.statePath, }) @@ -86,7 +88,7 @@ func (c *PlanCommand) Run(args []string) int { } } - plan, err := ctx.Plan(&terraform.PlanOpts{Destroy: destroy}) + plan, err := ctx.Plan() if err != nil { c.Ui.Error(fmt.Sprintf("Error running plan: %s", err)) return 1 
@@ -128,6 +130,9 @@ func (c *PlanCommand) Run(args []string) int { ModuleDepth: moduleDepth, })) + if detailed { + return 2 + } return 0 } @@ -151,6 +156,12 @@ Options: -destroy If set, a plan will be generated to destroy all resources managed by the given configuration and state. + -detailed-exitcode Return detailed exit codes when the command exits. This + will change the meaning of exit codes to: + 0 - Succeeded, diff is empty (no changes) + 1 - Errored + 2 - Succeeded, there is a diff + -input=true Ask for input for variables if not directly set. -module-depth=n Specifies the depth of modules to show in the output. @@ -168,6 +179,10 @@ Options: up Terraform-managed resources. By default it will use the state "terraform.tfstate" if it exists. + -target=resource Resource to target. Operation will be limited to this + resource and its dependencies. This flag can be used + multiple times. + -var 'foo=bar' Set a variable in the Terraform configuration. This flag can be set multiple times. 
diff --git a/command/plan_test.go b/command/plan_test.go index d981c2294e30..3455fbbc6178 100644 --- a/command/plan_test.go +++ b/command/plan_test.go @@ -567,6 +567,56 @@ func TestPlan_disableBackup(t *testing.T) { } } +func TestPlan_detailedExitcode(t *testing.T) { + cwd, err := os.Getwd() + if err != nil { + t.Fatalf("err: %s", err) + } + if err := os.Chdir(testFixturePath("plan")); err != nil { + t.Fatalf("err: %s", err) + } + defer os.Chdir(cwd) + + p := testProvider() + ui := new(cli.MockUi) + c := &PlanCommand{ + Meta: Meta{ + ContextOpts: testCtxConfig(p), + Ui: ui, + }, + } + + args := []string{"-detailed-exitcode"} + if code := c.Run(args); code != 2 { + t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String()) + } +} + +func TestPlan_detailedExitcode_emptyDiff(t *testing.T) { + cwd, err := os.Getwd() + if err != nil { + t.Fatalf("err: %s", err) + } + if err := os.Chdir(testFixturePath("plan-emptydiff")); err != nil { + t.Fatalf("err: %s", err) + } + defer os.Chdir(cwd) + + p := testProvider() + ui := new(cli.MockUi) + c := &PlanCommand{ + Meta: Meta{ + ContextOpts: testCtxConfig(p), + Ui: ui, + }, + } + + args := []string{"-detailed-exitcode"} + if code := c.Run(args); code != 0 { + t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String()) + } +} + const planVarFile = ` foo = "bar" ` diff --git a/command/push.go b/command/push.go new file mode 100644 index 000000000000..5f9bde51d338 --- /dev/null +++ b/command/push.go @@ -0,0 +1,312 @@ +package command + +import ( + "fmt" + "io" + "os" + "path/filepath" + "strings" + + "github.com/hashicorp/atlas-go/archive" + "github.com/hashicorp/atlas-go/v1" +) + +type PushCommand struct { + Meta + + // client is the client to use for the actual push operations. + // If this isn't set, then the Atlas client is used. This should + // really only be set for testing reasons (and is hence not exported). 
+	client pushClient
+}
+
+func (c *PushCommand) Run(args []string) int {
+	var atlasAddress, atlasToken string
+	var archiveVCS, moduleUpload bool
+	var name string
+	args = c.Meta.process(args, true)
+	cmdFlags := c.Meta.flagSet("push")
+	cmdFlags.StringVar(&atlasAddress, "atlas-address", "", "")
+	cmdFlags.StringVar(&c.Meta.statePath, "state", DefaultStateFilename, "path")
+	cmdFlags.StringVar(&atlasToken, "token", "", "")
+	cmdFlags.BoolVar(&moduleUpload, "upload-modules", true, "")
+	cmdFlags.StringVar(&name, "name", "", "")
+	cmdFlags.BoolVar(&archiveVCS, "vcs", true, "")
+	cmdFlags.Usage = func() { c.Ui.Error(c.Help()) }
+	if err := cmdFlags.Parse(args); err != nil {
+		return 1
+	}
+
+	// The pwd is used for the configuration path if one is not given
+	pwd, err := os.Getwd()
+	if err != nil {
+		c.Ui.Error(fmt.Sprintf("Error getting pwd: %s", err))
+		return 1
+	}
+
+	// Get the path to the configuration depending on the args.
+	var configPath string
+	args = cmdFlags.Args()
+	if len(args) > 1 {
+		c.Ui.Error("The push command expects at most one argument.")
+		cmdFlags.Usage()
+		return 1
+	} else if len(args) == 1 {
+		configPath = args[0]
+	} else {
+		configPath = pwd
+	}
+
+	// Verify the state is remote, we can't push without a remote state
+	s, err := c.State()
+	if err != nil {
+		c.Ui.Error(fmt.Sprintf("Failed to read state: %s", err))
+		return 1
+	}
+	if !s.State().IsRemote() {
+		c.Ui.Error(
+			"Remote state is not enabled. For Atlas to run Terraform\n" +
+				"for you, remote state must be used and configured. Remote\n" +
+				"state via any backend is accepted, not just Atlas.
To\n" + + "configure remote state, use the `terraform remote config`\n" + + "command.") + return 1 + } + + // Build the context based on the arguments given + ctx, planned, err := c.Context(contextOpts{ + Path: configPath, + StatePath: c.Meta.statePath, + }) + if err != nil { + c.Ui.Error(err.Error()) + return 1 + } + if planned { + c.Ui.Error( + "A plan file cannot be given as the path to the configuration.\n" + + "A path to a module (directory with configuration) must be given.") + return 1 + } + + // Get the configuration + config := ctx.Module().Config() + if name == "" { + if config.Atlas == nil || config.Atlas.Name == "" { + c.Ui.Error( + "The name of this Terraform configuration in Atlas must be\n" + + "specified within your configuration or the command-line. To\n" + + "set it on the command-line, use the `-name` parameter.") + return 1 + } + name = config.Atlas.Name + } + + // Initialize the client if it isn't given. + if c.client == nil { + // Make sure to nil out our client so our token isn't sitting around + defer func() { c.client = nil }() + + // Initialize it to the default client, we set custom settings later + client := atlas.DefaultClient() + if atlasAddress != "" { + client, err = atlas.NewClient(atlasAddress) + if err != nil { + c.Ui.Error(fmt.Sprintf("Error initializing Atlas client: %s", err)) + return 1 + } + } + + if atlasToken != "" { + client.Token = atlasToken + } + + c.client = &atlasPushClient{Client: client} + } + + // Get the variables we might already have + vars, err := c.client.Get(name) + if err != nil { + c.Ui.Error(fmt.Sprintf( + "Error looking up previously pushed configuration: %s", err)) + return 1 + } + for k, v := range vars { + ctx.SetVariable(k, v) + } + + // Ask for input + if err := ctx.Input(c.InputMode()); err != nil { + c.Ui.Error(fmt.Sprintf( + "Error while asking for variable input:\n\n%s", err)) + return 1 + } + + // Build the archiving options, which includes everything it can + // by default according to VCS 
rules but forcing the data directory.
+	archiveOpts := &archive.ArchiveOpts{
+		VCS: archiveVCS,
+		Extra: map[string]string{
+			DefaultDataDir: c.DataDir(),
+		},
+	}
+	if !moduleUpload {
+		// If we're not uploading modules, then exclude the modules dir.
+		archiveOpts.Exclude = append(
+			archiveOpts.Exclude,
+			filepath.Join(c.DataDir(), "modules"))
+	}
+
+	archiveR, err := archive.CreateArchive(configPath, archiveOpts)
+	if err != nil {
+		c.Ui.Error(fmt.Sprintf(
+			"An error has occurred while archiving the module for uploading:\n"+
+				"%s", err))
+		return 1
+	}
+
+	// Upsert!
+	opts := &pushUpsertOptions{
+		Name:      name,
+		Archive:   archiveR,
+		Variables: ctx.Variables(),
+	}
+	vsn, err := c.client.Upsert(opts)
+	if err != nil {
+		c.Ui.Error(fmt.Sprintf(
+			"An error occurred while uploading the module:\n\n%s", err))
+		return 1
+	}
+
+	c.Ui.Output(c.Colorize().Color(fmt.Sprintf(
+		"[reset][bold][green]Configuration %q uploaded! (v%d)",
+		name, vsn)))
+	return 0
+}
+
+func (c *PushCommand) Help() string {
+	helpText := `
+Usage: terraform push [options] [DIR]
+
+  Upload this Terraform module to an Atlas server for remote
+  infrastructure management.
+
+Options:
+
+  -atlas-address=<url>    An alternate address to an Atlas instance. Defaults
+                          to https://atlas.hashicorp.com
+
+  -upload-modules=true    If true (default), then the modules are locked at
+                          their current checkout and uploaded completely. This
+                          prevents Atlas from running "terraform get".
+
+  -name=<name>            Name of the configuration in Atlas. This can also
+                          be set in the configuration itself. Format is
+                          typically: "username/name".
+
+  -token=<token>          Access token to use to upload. If blank or
+                          unspecified, the ATLAS_TOKEN environmental variable
+                          will be used.
+
+  -vcs=true               If true (default), push will upload only files
+                          committed to your VCS, if detected.
+
+`
+	return strings.TrimSpace(helpText)
+}
+
+func (c *PushCommand) Synopsis() string {
+	return "Upload this Terraform module to Atlas to run"
+}
+
+// pushClient is implemented internally to control where pushes go. This is
+// either to Atlas or a mock for testing.
+type pushClient interface {
+	Get(string) (map[string]string, error)
+	Upsert(*pushUpsertOptions) (int, error)
+}
+
+type pushUpsertOptions struct {
+	Name      string
+	Archive   *archive.Archive
+	Variables map[string]string
+}
+
+type atlasPushClient struct {
+	Client *atlas.Client
+}
+
+func (c *atlasPushClient) Get(name string) (map[string]string, error) {
+	user, name, err := atlas.ParseSlug(name)
+	if err != nil {
+		return nil, err
+	}
+
+	version, err := c.Client.TerraformConfigLatest(user, name)
+	if err != nil {
+		return nil, err
+	}
+
+	var variables map[string]string
+	if version != nil {
+		variables = version.Variables
+	}
+
+	return variables, nil
+}
+
+func (c *atlasPushClient) Upsert(opts *pushUpsertOptions) (int, error) {
+	user, name, err := atlas.ParseSlug(opts.Name)
+	if err != nil {
+		return 0, err
+	}
+
+	data := &atlas.TerraformConfigVersion{
+		Variables: opts.Variables,
+	}
+
+	version, err := c.Client.CreateTerraformConfigVersion(
+		user, name, data, opts.Archive, opts.Archive.Size)
+	if err != nil {
+		return 0, err
+	}
+
+	return version, nil
+}
+
+type mockPushClient struct {
+	File string
+
+	GetCalled bool
+	GetName   string
+	GetResult map[string]string
+	GetError  error
+
+	UpsertCalled  bool
+	UpsertOptions *pushUpsertOptions
+	UpsertVersion int
+	UpsertError   error
+}
+
+func (c *mockPushClient) Get(name string) (map[string]string, error) {
+	c.GetCalled = true
+	c.GetName = name
+	return c.GetResult, c.GetError
+}
+
+func (c *mockPushClient) Upsert(opts *pushUpsertOptions) (int, error) {
+	f, err := os.Create(c.File)
+	if err != nil {
+		return 0, err
+	}
+	defer f.Close()
+
+	data := opts.Archive
+	size := opts.Archive.Size
+	if _, err := io.CopyN(f, data, size); err != nil {
+
return 0, err + } + + c.UpsertCalled = true + c.UpsertOptions = opts + return c.UpsertVersion, c.UpsertError +} diff --git a/command/push_test.go b/command/push_test.go new file mode 100644 index 000000000000..6aab755c9ae6 --- /dev/null +++ b/command/push_test.go @@ -0,0 +1,411 @@ +package command + +import ( + "archive/tar" + "bytes" + "compress/gzip" + "io" + "os" + "reflect" + "sort" + "testing" + + "github.com/hashicorp/terraform/terraform" + "github.com/mitchellh/cli" +) + +func TestPush_good(t *testing.T) { + tmp, cwd := testCwd(t) + defer testFixCwd(t, tmp, cwd) + + // Create remote state file, this should be pulled + conf, srv := testRemoteState(t, testState(), 200) + defer srv.Close() + + // Persist local remote state + s := terraform.NewState() + s.Serial = 5 + s.Remote = conf + testStateFileRemote(t, s) + + // Path where the archive will be "uploaded" to + archivePath := testTempFile(t) + defer os.Remove(archivePath) + + client := &mockPushClient{File: archivePath} + ui := new(cli.MockUi) + c := &PushCommand{ + Meta: Meta{ + ContextOpts: testCtxConfig(testProvider()), + Ui: ui, + }, + + client: client, + } + + args := []string{ + "-vcs=false", + testFixturePath("push"), + } + if code := c.Run(args); code != 0 { + t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String()) + } + + actual := testArchiveStr(t, archivePath) + expected := []string{ + ".terraform/", + ".terraform/terraform.tfstate", + "main.tf", + } + if !reflect.DeepEqual(actual, expected) { + t.Fatalf("bad: %#v", actual) + } + + variables := make(map[string]string) + if !reflect.DeepEqual(client.UpsertOptions.Variables, variables) { + t.Fatalf("bad: %#v", client.UpsertOptions) + } + + if client.UpsertOptions.Name != "foo" { + t.Fatalf("bad: %#v", client.UpsertOptions) + } +} + +func TestPush_input(t *testing.T) { + tmp, cwd := testCwd(t) + defer testFixCwd(t, tmp, cwd) + + // Create remote state file, this should be pulled + conf, srv := testRemoteState(t, testState(), 200) + defer srv.Close() 
+ + // Persist local remote state + s := terraform.NewState() + s.Serial = 5 + s.Remote = conf + testStateFileRemote(t, s) + + // Path where the archive will be "uploaded" to + archivePath := testTempFile(t) + defer os.Remove(archivePath) + + client := &mockPushClient{File: archivePath} + ui := new(cli.MockUi) + c := &PushCommand{ + Meta: Meta{ + ContextOpts: testCtxConfig(testProvider()), + Ui: ui, + }, + + client: client, + } + + // Disable test mode so input would be asked and setup the + // input reader/writers. + test = false + defer func() { test = true }() + defaultInputReader = bytes.NewBufferString("foo\n") + defaultInputWriter = new(bytes.Buffer) + + args := []string{ + "-vcs=false", + testFixturePath("push-input"), + } + if code := c.Run(args); code != 0 { + t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String()) + } + + variables := map[string]string{ + "foo": "foo", + } + if !reflect.DeepEqual(client.UpsertOptions.Variables, variables) { + t.Fatalf("bad: %#v", client.UpsertOptions) + } +} + +func TestPush_inputPartial(t *testing.T) { + tmp, cwd := testCwd(t) + defer testFixCwd(t, tmp, cwd) + + // Create remote state file, this should be pulled + conf, srv := testRemoteState(t, testState(), 200) + defer srv.Close() + + // Persist local remote state + s := terraform.NewState() + s.Serial = 5 + s.Remote = conf + testStateFileRemote(t, s) + + // Path where the archive will be "uploaded" to + archivePath := testTempFile(t) + defer os.Remove(archivePath) + + client := &mockPushClient{ + File: archivePath, + GetResult: map[string]string{"foo": "bar"}, + } + ui := new(cli.MockUi) + c := &PushCommand{ + Meta: Meta{ + ContextOpts: testCtxConfig(testProvider()), + Ui: ui, + }, + + client: client, + } + + // Disable test mode so input would be asked and setup the + // input reader/writers. 
+ test = false + defer func() { test = true }() + defaultInputReader = bytes.NewBufferString("foo\n") + defaultInputWriter = new(bytes.Buffer) + + args := []string{ + "-vcs=false", + testFixturePath("push-input-partial"), + } + if code := c.Run(args); code != 0 { + t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String()) + } + + variables := map[string]string{ + "foo": "bar", + "bar": "foo", + } + if !reflect.DeepEqual(client.UpsertOptions.Variables, variables) { + t.Fatalf("bad: %#v", client.UpsertOptions) + } +} + +func TestPush_inputTfvars(t *testing.T) { + // Disable test mode so input would be asked and setup the + // input reader/writers. + test = false + defer func() { test = true }() + defaultInputReader = bytes.NewBufferString("nope\n") + defaultInputWriter = new(bytes.Buffer) + + tmp, cwd := testCwd(t) + defer testFixCwd(t, tmp, cwd) + + // Create remote state file, this should be pulled + conf, srv := testRemoteState(t, testState(), 200) + defer srv.Close() + + // Persist local remote state + s := terraform.NewState() + s.Serial = 5 + s.Remote = conf + testStateFileRemote(t, s) + + // Path where the archive will be "uploaded" to + archivePath := testTempFile(t) + defer os.Remove(archivePath) + + client := &mockPushClient{File: archivePath} + ui := new(cli.MockUi) + c := &PushCommand{ + Meta: Meta{ + ContextOpts: testCtxConfig(testProvider()), + Ui: ui, + }, + + client: client, + } + + path := testFixturePath("push-tfvars") + args := []string{ + "-var-file", path + "/terraform.tfvars", + "-vcs=false", + path, + } + if code := c.Run(args); code != 0 { + t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String()) + } + + actual := testArchiveStr(t, archivePath) + expected := []string{ + ".terraform/", + ".terraform/terraform.tfstate", + "main.tf", + "terraform.tfvars", + } + if !reflect.DeepEqual(actual, expected) { + t.Fatalf("bad: %#v", actual) + } + + if client.UpsertOptions.Name != "foo" { + t.Fatalf("bad: %#v", client.UpsertOptions) + } + + variables := 
map[string]string{ + "foo": "bar", + "bar": "foo", + } + if !reflect.DeepEqual(client.UpsertOptions.Variables, variables) { + t.Fatalf("bad: %#v", client.UpsertOptions) + } +} + +func TestPush_name(t *testing.T) { + tmp, cwd := testCwd(t) + defer testFixCwd(t, tmp, cwd) + + // Create remote state file, this should be pulled + conf, srv := testRemoteState(t, testState(), 200) + defer srv.Close() + + // Persist local remote state + s := terraform.NewState() + s.Serial = 5 + s.Remote = conf + testStateFileRemote(t, s) + + // Path where the archive will be "uploaded" to + archivePath := testTempFile(t) + defer os.Remove(archivePath) + + client := &mockPushClient{File: archivePath} + ui := new(cli.MockUi) + c := &PushCommand{ + Meta: Meta{ + ContextOpts: testCtxConfig(testProvider()), + Ui: ui, + }, + + client: client, + } + + args := []string{ + "-name", "bar", + "-vcs=false", + testFixturePath("push"), + } + if code := c.Run(args); code != 0 { + t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String()) + } + + if client.UpsertOptions.Name != "bar" { + t.Fatalf("bad: %#v", client.UpsertOptions) + } +} + +func TestPush_noState(t *testing.T) { + tmp, cwd := testCwd(t) + defer testFixCwd(t, tmp, cwd) + + ui := new(cli.MockUi) + c := &PushCommand{ + Meta: Meta{ + ContextOpts: testCtxConfig(testProvider()), + Ui: ui, + }, + } + + args := []string{} + if code := c.Run(args); code != 1 { + t.Fatalf("bad: \n%s", ui.ErrorWriter.String()) + } +} + +func TestPush_noRemoteState(t *testing.T) { + state := &terraform.State{ + Modules: []*terraform.ModuleState{ + &terraform.ModuleState{ + Path: []string{"root"}, + Resources: map[string]*terraform.ResourceState{ + "test_instance.foo": &terraform.ResourceState{ + Type: "test_instance", + Primary: &terraform.InstanceState{ + ID: "bar", + }, + }, + }, + }, + }, + } + statePath := testStateFile(t, state) + + ui := new(cli.MockUi) + c := &PushCommand{ + Meta: Meta{ + Ui: ui, + }, + } + + args := []string{ + "-state", statePath, + } + if 
code := c.Run(args); code != 1 { + t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String()) + } +} + +func TestPush_plan(t *testing.T) { + tmp, cwd := testCwd(t) + defer testFixCwd(t, tmp, cwd) + + // Create remote state file, this should be pulled + conf, srv := testRemoteState(t, testState(), 200) + defer srv.Close() + + // Persist local remote state + s := terraform.NewState() + s.Serial = 5 + s.Remote = conf + testStateFileRemote(t, s) + + // Create a plan + planPath := testPlanFile(t, &terraform.Plan{ + Module: testModule(t, "apply"), + }) + + ui := new(cli.MockUi) + c := &PushCommand{ + Meta: Meta{ + ContextOpts: testCtxConfig(testProvider()), + Ui: ui, + }, + } + + args := []string{planPath} + if code := c.Run(args); code != 1 { + t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String()) + } +} + +func testArchiveStr(t *testing.T, path string) []string { + f, err := os.Open(path) + if err != nil { + t.Fatalf("err: %s", err) + } + defer f.Close() + + // Ungzip + gzipR, err := gzip.NewReader(f) + if err != nil { + t.Fatalf("err: %s", err) + } + + // Accumulator + result := make([]string, 0, 10) + + // Untar + tarR := tar.NewReader(gzipR) + for { + header, err := tarR.Next() + if err == io.EOF { + break + } + if err != nil { + t.Fatalf("err: %s", err) + } + + result = append(result, header.Name) + } + + sort.Strings(result) + return result +} diff --git a/command/refresh.go b/command/refresh.go index 38d63005085d..32e7950474fa 100644 --- a/command/refresh.go +++ b/command/refresh.go @@ -135,6 +135,10 @@ Options: -state-out=path Path to write updated state file. By default, the "-state" path will be used. + -target=resource Resource to target. Operation will be limited to this + resource and its dependencies. This flag can be used + multiple times. + -var 'foo=bar' Set a variable in the Terraform configuration. This flag can be set multiple times. 
diff --git a/command/remote_config.go b/command/remote_config.go index cb95c4b94943..92017c48496e 100644 --- a/command/remote_config.go +++ b/command/remote_config.go @@ -41,14 +41,12 @@ func (c *RemoteConfigCommand) Run(args []string) int { cmdFlags.Var((*FlagKV)(&config), "backend-config", "config") cmdFlags.Usage = func() { c.Ui.Error(c.Help()) } if err := cmdFlags.Parse(args); err != nil { + c.Ui.Error(fmt.Sprintf("\nError parsing CLI flags: %s", err)) return 1 } - // Show help if given no inputs - if !c.conf.disableRemote && c.remoteConf.Type == "atlas" && len(config) == 0 { - cmdFlags.Usage() - return 1 - } + // Lowercase the type + c.remoteConf.Type = strings.ToLower(c.remoteConf.Type) // Set the local state path c.statePath = c.conf.statePath @@ -88,29 +86,63 @@ func (c *RemoteConfigCommand) Run(args []string) int { return c.disableRemoteState() } - // Ensure there is no conflict + // Ensure there is no conflict, and then do the correct operation + var result int haveCache := !remoteState.Empty() haveLocal := !localState.Empty() switch { case haveCache && haveLocal: c.Ui.Error(fmt.Sprintf("Remote state is enabled, but non-managed state file '%s' is also present!", c.conf.statePath)) - return 1 + result = 1 case !haveCache && !haveLocal: // If we don't have either state file, initialize a blank state file - return c.initBlankState() + result = c.initBlankState() case haveCache && !haveLocal: // Update the remote state target potentially - return c.updateRemoteConfig() + result = c.updateRemoteConfig() case !haveCache && haveLocal: // Enable remote state management - return c.enableRemoteState() + result = c.enableRemoteState() + } + + // If there was an error, return right away + if result != 0 { + return result } - panic("unhandled case") + // If we're not pulling, then do nothing + if !c.conf.pullOnDisable { + return result + } + + // Otherwise, refresh the state + stateResult, err := c.StateRaw(c.StateOpts()) + if err != nil { + c.Ui.Error(fmt.Sprintf( + 
"Error while performing the initial pull. The error message is shown\n"+ + "below. Note that remote state was properly configured, so you don't\n"+ + "need to reconfigure. You can now use `push` and `pull` directly.\n"+ + "\n%s", err)) + return 1 + } + + state := stateResult.State + if err := state.RefreshState(); err != nil { + c.Ui.Error(fmt.Sprintf( + "Error while performing the initial pull. The error message is shown\n"+ + "below. Note that remote state was properly configured, so you don't\n"+ + "need to reconfigure. You can now use `push` and `pull` directly.\n"+ + "\n%s", err)) + return 1 + } + + c.Ui.Output(c.Colorize().Color(fmt.Sprintf( + "[reset][bold][green]Remote state configured and pulled."))) + return 0 } // disableRemoteState is used to disable remote state management, @@ -177,7 +209,12 @@ func (c *RemoteConfigCommand) validateRemoteConfig() error { conf := c.remoteConf _, err := remote.NewClient(conf.Type, conf.Config) if err != nil { - c.Ui.Error(fmt.Sprintf("%s", err)) + c.Ui.Error(fmt.Sprintf( + "%s\n\n"+ + "If the error message above mentions requiring or modifying configuration\n"+ + "options, these are set using the `-backend-config` flag. Example:\n"+ + "-backend-config=\"name=foo\" to set the `name` configuration", + err)) } return err } @@ -323,9 +360,10 @@ Options: -disable Disables remote state management and migrates the state to the -state path. - -pull=true Controls if the remote state is pulled before disabling. - This defaults to true to ensure the latest state is cached - before disabling. + -pull=true If disabling, this controls if the remote state is + pulled before disabling. If enabling, this controls + if the remote state is pulled after enabling. This + defaults to true. -state=path Path to read state. Defaults to "terraform.tfstate" unless remote state is enabled. 
diff --git a/command/remote_test.go b/command/remote_config_test.go similarity index 99% rename from command/remote_test.go rename to command/remote_config_test.go index 0452e34165d0..42a2d2d3bd1c 100644 --- a/command/remote_test.go +++ b/command/remote_config_test.go @@ -245,6 +245,7 @@ func TestRemoteConfig_initBlank(t *testing.T) { "-backend=http", "-backend-config", "address=http://example.com", "-backend-config", "access_token=test", + "-pull=false", } if code := c.Run(args); code != 0 { t.Fatalf("bad: \n%s", ui.ErrorWriter.String()) @@ -321,6 +322,7 @@ func TestRemoteConfig_updateRemote(t *testing.T) { "-backend=http", "-backend-config", "address=http://example.com", "-backend-config", "access_token=test", + "-pull=false", } if code := c.Run(args); code != 0 { t.Fatalf("bad: \n%s", ui.ErrorWriter.String()) @@ -376,6 +378,7 @@ func TestRemoteConfig_enableRemote(t *testing.T) { "-backend=http", "-backend-config", "address=http://example.com", "-backend-config", "access_token=test", + "-pull=false", } if code := c.Run(args); code != 0 { t.Fatalf("bad: \n%s", ui.ErrorWriter.String()) diff --git a/command/remote_pull.go b/command/remote_pull.go index 3965f0d42294..bf757ccf1953 100644 --- a/command/remote_pull.go +++ b/command/remote_pull.go @@ -61,7 +61,8 @@ func (c *RemotePullCommand) Run(args []string) int { c.Ui.Error(fmt.Sprintf("%s", change)) return 1 } else { - c.Ui.Output(fmt.Sprintf("%s", change)) + c.Ui.Output(c.Colorize().Color(fmt.Sprintf( + "[reset][bold][green]%s", change))) } return 0 diff --git a/command/remote_pull_test.go b/command/remote_pull_test.go index 94b52ce2b165..a867877e1735 100644 --- a/command/remote_pull_test.go +++ b/command/remote_pull_test.go @@ -80,15 +80,6 @@ func testRemoteState(t *testing.T, s *terraform.State, c int) (*terraform.Remote var b64md5 string buf := bytes.NewBuffer(nil) - if s != nil { - enc := json.NewEncoder(buf) - if err := enc.Encode(s); err != nil { - t.Fatalf("err: %v", err) - } - md5 := md5.Sum(buf.Bytes()) - 
b64md5 = base64.StdEncoding.EncodeToString(md5[:16]) - } - cb := func(resp http.ResponseWriter, req *http.Request) { if req.Method == "PUT" { resp.WriteHeader(c) @@ -98,13 +89,28 @@ func testRemoteState(t *testing.T, s *terraform.State, c int) (*terraform.Remote resp.WriteHeader(404) return } + resp.Header().Set("Content-MD5", b64md5) resp.Write(buf.Bytes()) } + srv := httptest.NewServer(http.HandlerFunc(cb)) remote := &terraform.RemoteState{ Type: "http", Config: map[string]string{"address": srv.URL}, } + + if s != nil { + // Set the remote data + s.Remote = remote + + enc := json.NewEncoder(buf) + if err := enc.Encode(s); err != nil { + t.Fatalf("err: %v", err) + } + md5 := md5.Sum(buf.Bytes()) + b64md5 = base64.StdEncoding.EncodeToString(md5[:16]) + } + return remote, srv } diff --git a/command/remote_push.go b/command/remote_push.go index 259c82863536..cb6b2249e0de 100644 --- a/command/remote_push.go +++ b/command/remote_push.go @@ -68,6 +68,8 @@ func (c *RemotePushCommand) Run(args []string) int { return 1 } + c.Ui.Output(c.Colorize().Color( + "[reset][bold][green]State successfully pushed!")) return 0 } diff --git a/command/state.go b/command/state.go index 20cb4c1e4675..2bea57042afd 100644 --- a/command/state.go +++ b/command/state.go @@ -4,6 +4,7 @@ import ( "fmt" "os" "path/filepath" + "strings" "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/state" @@ -208,7 +209,7 @@ func remoteState( } // Initialize the remote client based on the local state - client, err := remote.NewClient(local.Remote.Type, local.Remote.Config) + client, err := remote.NewClient(strings.ToLower(local.Remote.Type), local.Remote.Config) if err != nil { return nil, errwrap.Wrapf(fmt.Sprintf( "Error initializing remote driver '%s': {{err}}", @@ -231,10 +232,20 @@ func remoteState( "Error reloading remote state: {{err}}", err) } switch cache.RefreshResult() { + // All the results below can be safely ignored since it means the + // pull was successful in some way. 
Noop = nothing happened. + // Init = both are empty. UpdateLocal = local state was older and + // updated. + // + // We don't have to do anything, the pull was successful. case state.CacheRefreshNoop: case state.CacheRefreshInit: - case state.CacheRefreshLocalNewer: case state.CacheRefreshUpdateLocal: + + // Our local state has a higher serial number than remote, so we + // want to explicitly sync the remote side with our local so that + // the remote gets the latest serial number. + case state.CacheRefreshLocalNewer: // Write our local state out to the durable storage to start. if err := cache.WriteState(local); err != nil { return nil, errwrap.Wrapf( @@ -245,8 +256,8 @@ func remoteState( "Error preparing remote state: {{err}}", err) } default: - return nil, errwrap.Wrapf( - "Error initilizing remote state: {{err}}", err) + return nil, fmt.Errorf( + "Unknown refresh result: %s", cache.RefreshResult()) } } diff --git a/command/test-fixtures/apply-destroy-targeted/main.tf b/command/test-fixtures/apply-destroy-targeted/main.tf new file mode 100644 index 000000000000..45ebc5b970cc --- /dev/null +++ b/command/test-fixtures/apply-destroy-targeted/main.tf @@ -0,0 +1,7 @@ +resource "test_instance" "foo" { + count = 3 +} + +resource "test_load_balancer" "foo" { + instances = ["${test_instance.foo.*.id}"] +} diff --git a/command/test-fixtures/plan-emptydiff/main.tf b/command/test-fixtures/plan-emptydiff/main.tf new file mode 100644 index 000000000000..e69de29bb2d1 diff --git a/command/test-fixtures/push-input-partial/main.tf b/command/test-fixtures/push-input-partial/main.tf new file mode 100644 index 000000000000..8285c1ada860 --- /dev/null +++ b/command/test-fixtures/push-input-partial/main.tf @@ -0,0 +1,8 @@ +variable "foo" {} +variable "bar" {} + +resource "test_instance" "foo" {} + +atlas { + name = "foo" +} diff --git a/command/test-fixtures/push-input/main.tf b/command/test-fixtures/push-input/main.tf new file mode 100644 index 000000000000..3bd930cf3207 --- 
/dev/null +++ b/command/test-fixtures/push-input/main.tf @@ -0,0 +1,7 @@ +variable "foo" {} + +resource "test_instance" "foo" {} + +atlas { + name = "foo" +} diff --git a/command/test-fixtures/push-tfvars/main.tf b/command/test-fixtures/push-tfvars/main.tf new file mode 100644 index 000000000000..8285c1ada860 --- /dev/null +++ b/command/test-fixtures/push-tfvars/main.tf @@ -0,0 +1,8 @@ +variable "foo" {} +variable "bar" {} + +resource "test_instance" "foo" {} + +atlas { + name = "foo" +} diff --git a/command/test-fixtures/push-tfvars/terraform.tfvars b/command/test-fixtures/push-tfvars/terraform.tfvars new file mode 100644 index 000000000000..92292f024a15 --- /dev/null +++ b/command/test-fixtures/push-tfvars/terraform.tfvars @@ -0,0 +1,2 @@ +foo = "bar" +bar = "foo" diff --git a/command/test-fixtures/push/main.tf b/command/test-fixtures/push/main.tf new file mode 100644 index 000000000000..2651626363b5 --- /dev/null +++ b/command/test-fixtures/push/main.tf @@ -0,0 +1,5 @@ +resource "aws_instance" "foo" {} + +atlas { + name = "foo" +} diff --git a/commands.go b/commands.go index c585b782777d..a4af7f98323a 100644 --- a/commands.go +++ b/commands.go @@ -80,6 +80,12 @@ func init() { }, nil }, + "push": func() (cli.Command, error) { + return &command.PushCommand{ + Meta: meta, + }, nil + }, + "refresh": func() (cli.Command, error) { return &command.RefreshCommand{ Meta: meta, diff --git a/config.go b/config.go index 583d7ddb2a6c..6482238889bf 100644 --- a/config.go +++ b/config.go @@ -179,7 +179,7 @@ func (c *Config) discoverSingle(glob string, m *map[string]string) error { continue } - log.Printf("[DEBUG] Discoverd plugin: %s = %s", parts[2], match) + log.Printf("[DEBUG] Discovered plugin: %s = %s", parts[2], match) (*m)[parts[2]] = match } diff --git a/config/append.go b/config/append.go index f87e67748075..bf13534e7c94 100644 --- a/config/append.go +++ b/config/append.go @@ -21,6 +21,7 @@ func Append(c1, c2 *Config) (*Config, error) { c.unknownKeys = 
append(c.unknownKeys, k) } } + for _, k := range c2.unknownKeys { _, present := unknowns[k] if !present { @@ -29,6 +30,11 @@ func Append(c1, c2 *Config) (*Config, error) { } } + c.Atlas = c1.Atlas + if c2.Atlas != nil { + c.Atlas = c2.Atlas + } + if len(c1.Modules) > 0 || len(c2.Modules) > 0 { c.Modules = make( []*Module, 0, len(c1.Modules)+len(c2.Modules)) diff --git a/config/append_test.go b/config/append_test.go index e7aea9d214c2..adeb7835b4d9 100644 --- a/config/append_test.go +++ b/config/append_test.go @@ -12,6 +12,9 @@ func TestAppend(t *testing.T) { }{ { &Config{ + Atlas: &AtlasConfig{ + Name: "foo", + }, Modules: []*Module{ &Module{Name: "foo"}, }, @@ -32,6 +35,9 @@ func TestAppend(t *testing.T) { }, &Config{ + Atlas: &AtlasConfig{ + Name: "bar", + }, Modules: []*Module{ &Module{Name: "bar"}, }, @@ -52,6 +58,9 @@ func TestAppend(t *testing.T) { }, &Config{ + Atlas: &AtlasConfig{ + Name: "bar", + }, Modules: []*Module{ &Module{Name: "foo"}, &Module{Name: "bar"}, diff --git a/config/config.go b/config/config.go index 8dd9810ebfbc..5814141f8bb9 100644 --- a/config/config.go +++ b/config/config.go @@ -28,6 +28,7 @@ type Config struct { // any meaningful directory. Dir string + Atlas *AtlasConfig Modules []*Module ProviderConfigs []*ProviderConfig Resources []*Resource @@ -39,6 +40,13 @@ type Config struct { unknownKeys []string } +// AtlasConfig is the configuration for building in HashiCorp's Atlas. +type AtlasConfig struct { + Name string + Include []string + Exclude []string +} + // Module is a module used within a configuration. // // This does not represent a module itself, this represents a module @@ -199,7 +207,7 @@ func (c *Config) Validate() error { if _, ok := varMap[uv.Name]; !ok { errs = append(errs, fmt.Errorf( - "%s: unknown variable referenced: %s", + "%s: unknown variable referenced: '%s'. 
define it with 'variable' blocks", source, uv.Name)) } diff --git a/config/interpolate_funcs.go b/config/interpolate_funcs.go index 8bb76c532146..353c4550075f 100644 --- a/config/interpolate_funcs.go +++ b/config/interpolate_funcs.go @@ -9,6 +9,7 @@ import ( "strings" "github.com/hashicorp/terraform/config/lang/ast" + "github.com/mitchellh/go-homedir" ) // Funcs is the mapping of built-in functions for configuration. @@ -57,7 +58,11 @@ func interpolationFuncFile() ast.Function { ArgTypes: []ast.Type{ast.TypeString}, ReturnType: ast.TypeString, Callback: func(args []interface{}) (interface{}, error) { - data, err := ioutil.ReadFile(args[0].(string)) + path, err := homedir.Expand(args[0].(string)) + if err != nil { + return "", err + } + data, err := ioutil.ReadFile(path) if err != nil { return "", err } diff --git a/config/interpolate_funcs_test.go b/config/interpolate_funcs_test.go index 2061e6ad889b..e887a8c85dbc 100644 --- a/config/interpolate_funcs_test.go +++ b/config/interpolate_funcs_test.go @@ -302,8 +302,8 @@ func testFunction(t *testing.T, config testFunctionConfig) { if !reflect.DeepEqual(out, tc.Result) { t.Fatalf( - "%d: bad output for input: %s\n\nOutput: %#v", - i, tc.Input, out) + "%d: bad output for input: %s\n\nOutput: %#v\nExpected: %#v", + i, tc.Input, out, tc.Result) } } } diff --git a/config/lang/check_types.go b/config/lang/check_types.go index 0396eb1f3add..4fbcd731adde 100644 --- a/config/lang/check_types.go +++ b/config/lang/check_types.go @@ -100,20 +100,29 @@ func (tc *typeCheckArithmetic) TypeCheck(v *TypeCheck) (ast.Node, error) { exprs[len(tc.n.Exprs)-1-i] = v.StackPop() } - // Determine the resulting type we want + // Determine the resulting type we want. We do this by going over + // every expression until we find one with a type we recognize. + // We do this because the first expr might be a string ("var.foo") + // and we need to know what to implicitly convert to.
mathFunc := "__builtin_IntMath" mathType := ast.TypeInt - switch v := exprs[0]; v { - case ast.TypeInt: - mathFunc = "__builtin_IntMath" - mathType = v - case ast.TypeFloat: - mathFunc = "__builtin_FloatMath" - mathType = v - default: - return nil, fmt.Errorf( - "Math operations can only be done with ints and floats, got %s", - v) + for _, v := range exprs { + exit := true + switch v { + case ast.TypeInt: + mathFunc = "__builtin_IntMath" + mathType = v + case ast.TypeFloat: + mathFunc = "__builtin_FloatMath" + mathType = v + default: + exit = false + } + + // We found the type, so leave + if exit { + break + } } // Verify the args diff --git a/config/lang/eval_test.go b/config/lang/eval_test.go index 450a8abce992..44f25d6fd74f 100644 --- a/config/lang/eval_test.go +++ b/config/lang/eval_test.go @@ -134,6 +134,40 @@ func TestEval(t *testing.T) { ast.TypeString, }, + { + "foo ${bar+1}", + &ast.BasicScope{ + VarMap: map[string]ast.Variable{ + "bar": ast.Variable{ + Value: "41", + Type: ast.TypeString, + }, + }, + }, + false, + "foo 42", + ast.TypeString, + }, + + { + "foo ${bar+baz}", + &ast.BasicScope{ + VarMap: map[string]ast.Variable{ + "bar": ast.Variable{ + Value: "41", + Type: ast.TypeString, + }, + "baz": ast.Variable{ + Value: "1", + Type: ast.TypeString, + }, + }, + }, + false, + "foo 42", + ast.TypeString, + }, + { "foo ${rand()}", &ast.BasicScope{ diff --git a/config/loader.go b/config/loader.go index a1bd196d14e2..1848f314d1c5 100644 --- a/config/loader.go +++ b/config/loader.go @@ -162,7 +162,7 @@ func dirFiles(dir string) ([]string, []string, error) { // Only care about files that are valid to load name := fi.Name() extValue := ext(name) - if extValue == "" || isTemporaryFile(name) { + if extValue == "" || isIgnoredFile(name) { continue } @@ -183,11 +183,10 @@ func dirFiles(dir string) ([]string, []string, error) { return files, overrides, nil } -// isTemporaryFile returns true or false depending on whether the -// provided file name is a temporary file 
for the following editors: -// emacs or vim. -func isTemporaryFile(name string) bool { - return strings.HasSuffix(name, "~") || // vim - strings.HasPrefix(name, ".#") || // emacs +// isIgnoredFile returns true or false depending on whether the +// provided file name is a file that should be ignored. +func isIgnoredFile(name string) bool { + return strings.HasPrefix(name, ".") || // Unix-like hidden files + strings.HasSuffix(name, "~") || // vim (strings.HasPrefix(name, "#") && strings.HasSuffix(name, "#")) // emacs } diff --git a/config/loader_hcl.go b/config/loader_hcl.go index 6c127ea8b2cc..f75f93df383c 100644 --- a/config/loader_hcl.go +++ b/config/loader_hcl.go @@ -17,6 +17,7 @@ type hclConfigurable struct { func (t *hclConfigurable) Config() (*Config, error) { validKeys := map[string]struct{}{ + "atlas": struct{}{}, "module": struct{}{}, "output": struct{}{}, "provider": struct{}{}, @@ -70,6 +71,15 @@ func (t *hclConfigurable) Config() (*Config, error) { } } + // Get Atlas configuration + if atlas := t.Object.Get("atlas", false); atlas != nil { + var err error + config.Atlas, err = loadAtlasHcl(atlas) + if err != nil { + return nil, err + } + } + // Build the modules if modules := t.Object.Get("module", false); modules != nil { var err error @@ -187,6 +197,19 @@ func loadFileHcl(root string) (configurable, []string, error) { return result, nil, nil } +// Given a handle to a HCL object, this transforms it into the Atlas +// configuration. +func loadAtlasHcl(obj *hclobj.Object) (*AtlasConfig, error) { + var config AtlasConfig + if err := hcl.DecodeObject(&config, obj); err != nil { + return nil, fmt.Errorf( + "Error reading atlas config: %s", + err) + } + + return &config, nil +} + // Given a handle to a HCL object, this recurses into the structure // and pulls out a list of modules. 
// diff --git a/config/loader_test.go b/config/loader_test.go index 39fea0296e9e..d487638e92a5 100644 --- a/config/loader_test.go +++ b/config/loader_test.go @@ -2,6 +2,7 @@ package config import ( "path/filepath" + "reflect" "strings" "testing" ) @@ -57,6 +58,11 @@ func TestLoadBasic(t *testing.T) { t.Fatalf("bad: %#v", c.Dir) } + expectedAtlas := &AtlasConfig{Name: "mitchellh/foo"} + if !reflect.DeepEqual(c.Atlas, expectedAtlas) { + t.Fatalf("bad: %#v", c.Atlas) + } + actual := variablesStr(c.Variables) if actual != strings.TrimSpace(basicVariablesStr) { t.Fatalf("bad:\n%s", actual) @@ -132,6 +138,11 @@ func TestLoadBasic_json(t *testing.T) { t.Fatalf("bad: %#v", c.Dir) } + expectedAtlas := &AtlasConfig{Name: "mitchellh/foo"} + if !reflect.DeepEqual(c.Atlas, expectedAtlas) { + t.Fatalf("bad: %#v", c.Atlas) + } + actual := variablesStr(c.Variables) if actual != strings.TrimSpace(basicVariablesStr) { t.Fatalf("bad:\n%s", actual) } diff --git a/config/merge.go b/config/merge.go index c43f13c045f1..f72fdfa92093 100644 --- a/config/merge.go +++ b/config/merge.go @@ -25,6 +25,13 @@ func Merge(c1, c2 *Config) (*Config, error) { } } + // Merge Atlas configuration. This is a dumb "one overrides the other" + // sort of merge. + c.Atlas = c1.Atlas + if c2.Atlas != nil { + c.Atlas = c2.Atlas + } + // NOTE: Everything below is pretty gross. Due to the lack of generics // in Go, there is some hoop-jumping involved to make this merging a // little more test-friendly and less repetitive. Ironically, making it diff --git a/config/merge_test.go b/config/merge_test.go index 2dbe5aee98e4..40144f0c77f3 100644 --- a/config/merge_test.go +++ b/config/merge_test.go @@ -13,6 +13,9 @@ func TestMerge(t *testing.T) { // Normal good case.
{ &Config{ + Atlas: &AtlasConfig{ + Name: "foo", + }, Modules: []*Module{ &Module{Name: "foo"}, }, @@ -33,6 +36,9 @@ func TestMerge(t *testing.T) { }, &Config{ + Atlas: &AtlasConfig{ + Name: "bar", + }, Modules: []*Module{ &Module{Name: "bar"}, }, @@ -53,6 +59,9 @@ func TestMerge(t *testing.T) { }, &Config{ + Atlas: &AtlasConfig{ + Name: "bar", + }, Modules: []*Module{ &Module{Name: "foo"}, &Module{Name: "bar"}, diff --git a/config/module/detect_file.go b/config/module/detect_file.go index 2b8dbacbe6f2..859739f95489 100644 --- a/config/module/detect_file.go +++ b/config/module/detect_file.go @@ -2,6 +2,7 @@ package module import ( "fmt" + "os" "path/filepath" "runtime" ) @@ -20,8 +21,27 @@ func (d *FileDetector) Detect(src, pwd string) (string, bool, error) { "relative paths require a module with a pwd") } + // Stat the pwd to determine if it's a symbolic link. If it is, + // then the pwd becomes the original directory. Otherwise, + // `filepath.Join` below does some weird stuff. + // + // We just ignore if the pwd doesn't exist. That error will be + // caught later when we try to use the URL.
+ if fi, err := os.Lstat(pwd); !os.IsNotExist(err) { + if err != nil { + return "", true, err + } + if fi.Mode()&os.ModeSymlink != 0 { + pwd, err = os.Readlink(pwd) + if err != nil { + return "", true, err + } + } + } + src = filepath.Join(pwd, src) } + return fmtFileURL(src), true, nil } diff --git a/config/module/detect_test.go b/config/module/detect_test.go index a81bba12b283..e1e3b437225e 100644 --- a/config/module/detect_test.go +++ b/config/module/detect_test.go @@ -45,7 +45,7 @@ func TestDetect(t *testing.T) { t.Fatalf("%d: bad err: %s", i, err) } if output != tc.Output { - t.Fatalf("%d: bad output: %s", i, output) + t.Fatalf("%d: bad output: %s\nexpected: %s", i, output, tc.Output) } } } diff --git a/config/module/folder_storage.go b/config/module/folder_storage.go index dfb79748afba..81c9a2ac1959 100644 --- a/config/module/folder_storage.go +++ b/config/module/folder_storage.go @@ -16,8 +16,8 @@ type FolderStorage struct { } // Dir implements Storage.Dir -func (s *FolderStorage) Dir(source string) (d string, e bool, err error) { - d = s.dir(source) +func (s *FolderStorage) Dir(key string) (d string, e bool, err error) { + d = s.dir(key) _, err = os.Stat(d) if err == nil { // Directory exists @@ -39,8 +39,8 @@ func (s *FolderStorage) Dir(source string) (d string, e bool, err error) { } // Get implements Storage.Get -func (s *FolderStorage) Get(source string, update bool) error { - dir := s.dir(source) +func (s *FolderStorage) Get(key string, source string, update bool) error { + dir := s.dir(key) if !update { if _, err := os.Stat(dir); err == nil { // If the directory already exists, then we're done since @@ -59,7 +59,7 @@ func (s *FolderStorage) Get(source string, update bool) error { // dir returns the directory name internally that we'll use to map to // internally. 
-func (s *FolderStorage) dir(source string) string { - sum := md5.Sum([]byte(source)) +func (s *FolderStorage) dir(key string) string { + sum := md5.Sum([]byte(key)) return filepath.Join(s.StorageDir, hex.EncodeToString(sum[:])) } diff --git a/config/module/folder_storage_test.go b/config/module/folder_storage_test.go index 4ffaac2bb112..7fda6b21a44a 100644 --- a/config/module/folder_storage_test.go +++ b/config/module/folder_storage_test.go @@ -24,14 +24,16 @@ func TestFolderStorage(t *testing.T) { t.Fatal("should not exist") } + key := "foo" + // We can get it - err = s.Get(module, false) + err = s.Get(key, module, false) if err != nil { t.Fatalf("err: %s", err) } // Now the module exists - dir, ok, err := s.Dir(module) + dir, ok, err := s.Dir(key) if err != nil { t.Fatalf("err: %s", err) } diff --git a/config/module/storage.go b/config/module/storage.go index dcb0cc57c8ce..9c752f6309ab 100644 --- a/config/module/storage.go +++ b/config/module/storage.go @@ -9,17 +9,17 @@ type Storage interface { Dir(string) (string, bool, error) // Get will download and optionally update the given module. - Get(string, bool) error + Get(string, string, bool) error } -func getStorage(s Storage, src string, mode GetMode) (string, bool, error) { +func getStorage(s Storage, key string, src string, mode GetMode) (string, bool, error) { // Get the module with the level specified if we were told to. if mode > GetModeNone { - if err := s.Get(src, mode == GetModeUpdate); err != nil { + if err := s.Get(key, src, mode == GetModeUpdate); err != nil { return "", false, err } } // Get the directory where the module is. 
- return s.Dir(src) + return s.Dir(key) } diff --git a/config/module/test-fixtures/basic-parent/a/a.tf b/config/module/test-fixtures/basic-parent/a/a.tf new file mode 100644 index 000000000000..b9b44f464037 --- /dev/null +++ b/config/module/test-fixtures/basic-parent/a/a.tf @@ -0,0 +1,3 @@ +module "b" { + source = "../c" +} diff --git a/config/module/test-fixtures/basic-parent/c/c.tf b/config/module/test-fixtures/basic-parent/c/c.tf new file mode 100644 index 000000000000..fec56017dc1b --- /dev/null +++ b/config/module/test-fixtures/basic-parent/c/c.tf @@ -0,0 +1 @@ +# Hello diff --git a/config/module/test-fixtures/basic-parent/main.tf b/config/module/test-fixtures/basic-parent/main.tf new file mode 100644 index 000000000000..2326ee22acca --- /dev/null +++ b/config/module/test-fixtures/basic-parent/main.tf @@ -0,0 +1,3 @@ +module "a" { + source = "./a" +} diff --git a/config/module/tree.go b/config/module/tree.go index fbc467317619..d7b3ac966121 100644 --- a/config/module/tree.go +++ b/config/module/tree.go @@ -23,6 +23,7 @@ type Tree struct { name string config *config.Config children map[string]*Tree + path []string lock sync.RWMutex } @@ -152,6 +153,11 @@ func (t *Tree) Load(s Storage, mode GetMode) error { "module %s: duplicated. module names must be unique", m.Name) } + // Determine the path to this child + path := make([]string, len(t.path), len(t.path)+1) + copy(path, t.path) + path = append(path, m.Name) + // Split out the subdir if we have one source, subDir := getDirSubdir(m.Source) @@ -167,7 +173,9 @@ func (t *Tree) Load(s Storage, mode GetMode) error { } // Get the directory where this module is so we can load it - dir, ok, err := getStorage(s, source, mode) + key := strings.Join(path, ".") + key = "root." 
+ key + dir, ok, err := getStorage(s, key, source, mode) if err != nil { return err } @@ -187,6 +195,9 @@ func (t *Tree) Load(s Storage, mode GetMode) error { return fmt.Errorf( "module %s: %s", m.Name, err) } + + // Set the path of this child + children[m.Name].path = path } // Go through all the children and load them. @@ -202,10 +213,19 @@ func (t *Tree) Load(s Storage, mode GetMode) error { return nil } +// Path is the full path to this tree. +func (t *Tree) Path() []string { + return t.path +} + // String gives a nice output to describe the tree. func (t *Tree) String() string { var result bytes.Buffer - result.WriteString(t.Name() + "\n") + path := strings.Join(t.path, ", ") + if path != "" { + path = fmt.Sprintf(" (path: %s)", path) + } + result.WriteString(t.Name() + path + "\n") cs := t.Children() if cs == nil { diff --git a/config/module/tree_gob.go b/config/module/tree_gob.go index cbf8a25ed078..fcd37f4e71c0 100644 --- a/config/module/tree_gob.go +++ b/config/module/tree_gob.go @@ -22,6 +22,7 @@ func (t *Tree) GobDecode(bs []byte) error { t.name = data.Name t.config = data.Config t.children = data.Children + t.path = data.Path return nil } @@ -31,6 +32,7 @@ func (t *Tree) GobEncode() ([]byte, error) { Config: t.config, Children: t.children, Name: t.name, + Path: t.path, } var buf bytes.Buffer @@ -51,4 +53,5 @@ type treeGob struct { Config *config.Config Children map[string]*Tree Name string + Path []string } diff --git a/config/module/tree_test.go b/config/module/tree_test.go index d8a75352259e..9667c0157cbb 100644 --- a/config/module/tree_test.go +++ b/config/module/tree_test.go @@ -18,6 +18,8 @@ func TestTreeChild(t *testing.T) { t.Fatal("should not be nil") } else if c.Name() != "root" { t.Fatalf("bad: %#v", c.Name()) + } else if !reflect.DeepEqual(c.Path(), []string(nil)) { + t.Fatalf("bad: %#v", c.Path()) } // Should be able to get the root child @@ -25,6 +27,8 @@ func TestTreeChild(t *testing.T) { t.Fatal("should not be nil") } else if c.Name() != 
"root" { t.Fatalf("bad: %#v", c.Name()) + } else if !reflect.DeepEqual(c.Path(), []string(nil)) { + t.Fatalf("bad: %#v", c.Path()) } // Should be able to get the foo child @@ -32,6 +36,8 @@ func TestTreeChild(t *testing.T) { t.Fatal("should not be nil") } else if c.Name() != "foo" { t.Fatalf("bad: %#v", c.Name()) + } else if !reflect.DeepEqual(c.Path(), []string{"foo"}) { + t.Fatalf("bad: %#v", c.Path()) } // Should be able to get the nested child @@ -39,6 +45,8 @@ func TestTreeChild(t *testing.T) { t.Fatal("should not be nil") } else if c.Name() != "bar" { t.Fatalf("bad: %#v", c.Name()) + } else if !reflect.DeepEqual(c.Path(), []string{"foo", "bar"}) { + t.Fatalf("bad: %#v", c.Path()) } } @@ -94,6 +102,44 @@ func TestTreeLoad_duplicate(t *testing.T) { } } +func TestTreeLoad_parentRef(t *testing.T) { + storage := testStorage(t) + tree := NewTree("", testConfig(t, "basic-parent")) + + if tree.Loaded() { + t.Fatal("should not be loaded") + } + + // This should error because we haven't gotten things yet + if err := tree.Load(storage, GetModeNone); err == nil { + t.Fatal("should error") + } + + if tree.Loaded() { + t.Fatal("should not be loaded") + } + + // This should get things + if err := tree.Load(storage, GetModeGet); err != nil { + t.Fatalf("err: %s", err) + } + + if !tree.Loaded() { + t.Fatal("should be loaded") + } + + // This should no longer error + if err := tree.Load(storage, GetModeNone); err != nil { + t.Fatalf("err: %s", err) + } + + actual := strings.TrimSpace(tree.String()) + expected := strings.TrimSpace(treeLoadParentStr) + if actual != expected { + t.Fatalf("bad: \n\n%s", actual) + } +} + func TestTreeLoad_subdir(t *testing.T) { storage := testStorage(t) tree := NewTree("", testConfig(t, "basic-subdir")) @@ -236,11 +282,16 @@ func TestTreeValidate_requiredChildVar(t *testing.T) { const treeLoadStr = ` root - foo + foo (path: foo) ` +const treeLoadParentStr = ` +root + a (path: a) + b (path: a, b) +` const treeLoadSubdirStr = ` root - foo - bar + foo 
(path: foo) + bar (path: foo, bar) ` diff --git a/config/raw_config.go b/config/raw_config.go index 181a95f611b1..d51578e0bc54 100644 --- a/config/raw_config.go +++ b/config/raw_config.go @@ -3,6 +3,7 @@ package config import ( "bytes" "encoding/gob" + "sync" "github.com/hashicorp/terraform/config/lang" "github.com/hashicorp/terraform/config/lang/ast" @@ -31,6 +32,7 @@ type RawConfig struct { Interpolations []ast.Node Variables map[string]InterpolatedVariable + lock sync.Mutex config map[string]interface{} unknownKeys []string } @@ -46,6 +48,20 @@ func NewRawConfig(raw map[string]interface{}) (*RawConfig, error) { return result, nil } +// Copy returns a copy of this RawConfig, uninterpolated. +func (r *RawConfig) Copy() *RawConfig { + r.lock.Lock() + defer r.lock.Unlock() + + result, err := NewRawConfig(r.Raw) + if err != nil { + panic("copy failed: " + err.Error()) + } + + result.Key = r.Key + return result +} + // Value returns the value of the configuration if this configuration // has a Key set. If this does not have a Key set, nil will be returned. func (r *RawConfig) Value() interface{} { @@ -55,6 +71,8 @@ func (r *RawConfig) Value() interface{} { } } + r.lock.Lock() + defer r.lock.Unlock() return r.Raw[r.Key] } @@ -81,6 +99,9 @@ func (r *RawConfig) Config() map[string]interface{} { // // If a variable key is missing, this will panic. func (r *RawConfig) Interpolate(vs map[string]ast.Variable) error { + r.lock.Lock() + defer r.lock.Unlock() + config := langEvalConfig(vs) return r.interpolate(func(root ast.Node) (string, error) { // We detect the variables again and check if the value of any @@ -119,6 +140,9 @@ func (r *RawConfig) Interpolate(vs map[string]ast.Variable) error { // values in this config) and returns a new config. The original config // is not modified. 
func (r *RawConfig) Merge(other *RawConfig) *RawConfig { + r.lock.Lock() + defer r.lock.Unlock() + // Merge the raw configurations raw := make(map[string]interface{}) for k, v := range r.Raw { @@ -252,6 +276,9 @@ func (r *RawConfig) GobDecode(b []byte) error { // tree of interpolated variables is recomputed on decode, since it is // referentially transparent. func (r *RawConfig) GobEncode() ([]byte, error) { + r.lock.Lock() + defer r.lock.Unlock() + data := gobRawConfig{ Key: r.Key, Raw: r.Raw, diff --git a/config/test-fixtures/basic.tf b/config/test-fixtures/basic.tf index 5751a8583782..5afadc779755 100644 --- a/config/test-fixtures/basic.tf +++ b/config/test-fixtures/basic.tf @@ -49,3 +49,7 @@ resource "aws_instance" "db" { output "web_ip" { value = "${aws_instance.web.private_ip}" } + +atlas { + name = "mitchellh/foo" +} diff --git a/config/test-fixtures/basic.tf.json b/config/test-fixtures/basic.tf.json index 1b946e22ba05..1013862b3399 100644 --- a/config/test-fixtures/basic.tf.json +++ b/config/test-fixtures/basic.tf.json @@ -63,5 +63,9 @@ "web_ip": { "value": "${aws_instance.web.private_ip}" } + }, + + "atlas": { + "name": "mitchellh/foo" } } diff --git a/config/test-fixtures/dir-temporary-files/.hidden.tf b/config/test-fixtures/dir-temporary-files/.hidden.tf new file mode 100644 index 000000000000..e69de29bb2d1 diff --git a/contrib/zsh-completion/_terraform b/contrib/zsh-completion/_terraform index 67896b0fcbd3..f1abf535dc1b 100644 --- a/contrib/zsh-completion/_terraform +++ b/contrib/zsh-completion/_terraform @@ -13,6 +13,7 @@ _terraform_cmds=( 'push:Uploads the the local state to the remote server' 'refresh:Update local state file against real resources' 'remote:Configures remote state management' + 'taint:Manually forces a destroy and recreate on the next plan/apply' 'show:Inspect Terraform state or plan' 'version:Prints the Terraform version' ) @@ -95,6 +96,16 @@ __refresh() { '-var-file=[(path) Set variables in the Terraform configuration from a file.
If "terraform.tfvars" is present, it will be automatically loaded if this flag is not specified.]' } +__taint() { + _arguments \ + '-allow-missing[If specified, the command will succeed (exit code 0) even if the resource is missing.]' \ + '-backup=[(path) Path to backup the existing state file before modifying. Defaults to the "-state-out" path with ".backup" extension. Set to "-" to disable backup.]' \ + '-module=[(path) The module path where the resource lives. By default this will be root. Child modules can be specified by names. Ex. "consul" or "consul.vpc" (nested modules).]' \ + '-no-color[If specified, output will not contain any color.]' \ + '-state=[(path) Path to read and save state (unless state-out is specified). Defaults to "terraform.tfstate".]' \ + '-state-out=[(path) Path to write updated state file. By default, the "-state" path will be used.]' +} + __remote() { _arguments \ '-address=[(url) URL of the remote storage server. Required for HTTP backend, optional for Atlas and Consul.]' \ @@ -104,7 +115,7 @@ __remote() { '-disable[Disables remote state management and migrates the state to the -state path.]' \ '-name=[(name) Name of the state file in the state storage server. Required for Atlas backend.]' \ '-path=[(path) Path of the remote state in Consul. Required for the Consul backend.]' \ - '-pull=[(true) Controls if the remote state is pulled before disabling. This defaults to true to ensure the latest state is cached before disabling.]'\ + '-pull=[(true) Controls if the remote state is pulled before disabling. This defaults to true to ensure the latest state is cached before disabling.]' \ '-state=[(path) Path to read and save state (unless state-out is specified). 
Defaults to "terraform.tfstate".]' } @@ -145,4 +156,6 @@ case "$words[1]" in __remote ;; show) __show ;; + taint) + __taint ;; esac diff --git a/dag/dag.go b/dag/dag.go index b81cb2874db2..0f53fb1f00b3 100644 --- a/dag/dag.go +++ b/dag/dag.go @@ -17,6 +17,40 @@ type AcyclicGraph struct { // WalkFunc is the callback used for walking the graph. type WalkFunc func(Vertex) error +// Returns a Set that includes every Vertex yielded by walking down from the +// provided starting Vertex v. +func (g *AcyclicGraph) Ancestors(v Vertex) (*Set, error) { + s := new(Set) + start := asVertexList(g.DownEdges(v)) + memoFunc := func(v Vertex) error { + s.Add(v) + return nil + } + + if err := g.depthFirstWalk(start, memoFunc); err != nil { + return nil, err + } + + return s, nil +} + +// Returns a Set that includes every Vertex yielded by walking up from the +// provided starting Vertex v. +func (g *AcyclicGraph) Descendents(v Vertex) (*Set, error) { + s := new(Set) + start := asVertexList(g.UpEdges(v)) + memoFunc := func(v Vertex) error { + s.Add(v) + return nil + } + + if err := g.reverseDepthFirstWalk(start, memoFunc); err != nil { + return nil, err + } + + return s, nil +} + // Root returns the root of the DAG, or an error. // // Complexity: O(V) @@ -61,15 +95,11 @@ func (g *AcyclicGraph) TransitiveReduction() { for _, u := range g.Vertices() { uTargets := g.DownEdges(u) - vs := make([]Vertex, uTargets.Len()) - for i, vRaw := range uTargets.List() { - vs[i] = vRaw.(Vertex) - } + vs := asVertexList(g.DownEdges(u)) g.depthFirstWalk(vs, func(v Vertex) error { shared := uTargets.Intersection(g.DownEdges(v)) - for _, raw := range shared.List() { - vPrime := raw.(Vertex) + for _, vPrime := range asVertexList(shared) { g.RemoveEdge(BasicEdge(u, vPrime)) } @@ -145,12 +175,10 @@ func (g *AcyclicGraph) Walk(cb WalkFunc) error { for _, v := range vertices { // Build our list of dependencies and the list of channels to // wait on until we start executing for this vertex. 
- depsRaw := g.DownEdges(v).List() - deps := make([]Vertex, len(depsRaw)) + deps := asVertexList(g.DownEdges(v)) depChs := make([]<-chan struct{}, len(deps)) - for i, raw := range depsRaw { - deps[i] = raw.(Vertex) - depChs[i] = vertMap[deps[i]] + for i, dep := range deps { + depChs[i] = vertMap[dep] } // Get our channel so that we can close it when we're done @@ -200,6 +228,16 @@ func (g *AcyclicGraph) Walk(cb WalkFunc) error { return errs } +// simple convenience helper for converting a dag.Set to a []Vertex +func asVertexList(s *Set) []Vertex { + rawList := s.List() + vertexList := make([]Vertex, len(rawList)) + for i, raw := range rawList { + vertexList[i] = raw.(Vertex) + } + return vertexList +} + // depthFirstWalk does a depth-first walk of the graph starting from // the vertices in start. This is not exported now but it would make sense // to export this publicly at some point. @@ -233,3 +271,36 @@ func (g *AcyclicGraph) depthFirstWalk(start []Vertex, cb WalkFunc) error { return nil } + +// reverseDepthFirstWalk does a depth-first walk _up_ the graph starting from +// the vertices in start. +func (g *AcyclicGraph) reverseDepthFirstWalk(start []Vertex, cb WalkFunc) error { + seen := make(map[Vertex]struct{}) + frontier := make([]Vertex, len(start)) + copy(frontier, start) + for len(frontier) > 0 { + // Pop the current vertex + n := len(frontier) + current := frontier[n-1] + frontier = frontier[:n-1] + + // Check if we've seen this already and return... + if _, ok := seen[current]; ok { + continue + } + seen[current] = struct{}{} + + // Visit the current node + if err := cb(current); err != nil { + return err + } + + // Visit targets of this in reverse order. 
+ targets := g.UpEdges(current).List() + for i := len(targets) - 1; i >= 0; i-- { + frontier = append(frontier, targets[i].(Vertex)) + } + } + + return nil +} diff --git a/dag/dag_test.go b/dag/dag_test.go index feead7968a8c..e7b2db8d2264 100644 --- a/dag/dag_test.go +++ b/dag/dag_test.go @@ -126,6 +126,68 @@ func TestAcyclicGraphValidate_cycleSelf(t *testing.T) { } } +func TestAcyclicGraphAncestors(t *testing.T) { + var g AcyclicGraph + g.Add(1) + g.Add(2) + g.Add(3) + g.Add(4) + g.Add(5) + g.Connect(BasicEdge(0, 1)) + g.Connect(BasicEdge(1, 2)) + g.Connect(BasicEdge(2, 3)) + g.Connect(BasicEdge(3, 4)) + g.Connect(BasicEdge(4, 5)) + + actual, err := g.Ancestors(2) + if err != nil { + t.Fatalf("err: %#v", err) + } + + expected := []Vertex{3, 4, 5} + + if actual.Len() != len(expected) { + t.Fatalf("bad length! expected %#v to have len %d", actual, len(expected)) + } + + for _, e := range expected { + if !actual.Include(e) { + t.Fatalf("expected: %#v to include: %#v", expected, actual) + } + } +} + +func TestAcyclicGraphDescendents(t *testing.T) { + var g AcyclicGraph + g.Add(1) + g.Add(2) + g.Add(3) + g.Add(4) + g.Add(5) + g.Connect(BasicEdge(0, 1)) + g.Connect(BasicEdge(1, 2)) + g.Connect(BasicEdge(2, 3)) + g.Connect(BasicEdge(3, 4)) + g.Connect(BasicEdge(4, 5)) + + actual, err := g.Descendents(2) + if err != nil { + t.Fatalf("err: %#v", err) + } + + expected := []Vertex{0, 1} + + if actual.Len() != len(expected) { + t.Fatalf("bad length! 
expected %#v to have len %d", actual, len(expected)) + } + + for _, e := range expected { + if !actual.Include(e) { + t.Fatalf("expected: %#v to include: %#v", expected, actual) + } + } +} + func TestAcyclicGraphWalk(t *testing.T) { var g AcyclicGraph g.Add(1) diff --git a/deps/v0-4-0.json b/deps/v0-4-0.json new file mode 100644 index 000000000000..1dcbc7356e44 --- /dev/null +++ b/deps/v0-4-0.json @@ -0,0 +1,81 @@ +{ + "ImportPath": "github.com/hashicorp/terraform", + "GoVersion": "go1.4.2", + "Deps": [ + { + "ImportPath": "github.com/hashicorp/atlas-go/archive", + "Comment": "20141209094003-55-g8663626", + "Rev": "86636264d03bc142dcd136d02811c469ba542444" + }, + { + "ImportPath": "github.com/hashicorp/atlas-go/v1", + "Comment": "20141209094003-55-g8663626", + "Rev": "86636264d03bc142dcd136d02811c469ba542444" + }, + { + "ImportPath": "github.com/hashicorp/consul/api", + "Comment": "v0.5.0-127-g8724845", + "Rev": "872484596472df47b95128f5996776fd73eda26c" + }, + { + "ImportPath": "github.com/hashicorp/errwrap", + "Rev": "7554cd9344cec97297fa6649b055a8c98c2a1e55" + }, + { + "ImportPath": "github.com/hashicorp/go-checkpoint", + "Rev": "88326f6851319068e7b34981032128c0b1a6524d" + }, + { + "ImportPath": "github.com/hashicorp/go-multierror", + "Rev": "fcdddc395df1ddf4247c69bd436e84cfa0733f7e" + }, + { + "ImportPath": "github.com/hashicorp/go-version", + "Rev": "bb92dddfa9792e738a631f04ada52858a139bcf7" + }, + { + "ImportPath": "github.com/hashicorp/hcl", + "Rev": "513e04c400ee2e81e97f5e011c08fb42c6f69b84" + }, + { + "ImportPath": "github.com/hashicorp/yamux", + "Rev": "b4f943b3f25da97dec8e26bee1c3269019de070d" + }, + { + "ImportPath": "github.com/mitchellh/cli", + "Rev": "afc399c273e70173826fb6f518a48edff23fe897" + }, + { + "ImportPath": "github.com/mitchellh/colorstring", + "Rev": "61164e49940b423ba1f12ddbdf01632ac793e5e9" + }, + { + "ImportPath": "github.com/mitchellh/copystructure", + "Rev": "c101d94abf8cd5c6213c8300d0aed6368f2d6ede" + }, + { + "ImportPath": 
"github.com/mitchellh/go-homedir", + "Rev": "7d2d8c8a4e078ce3c58736ab521a40b37a504c52" + }, + { + "ImportPath": "github.com/mitchellh/mapstructure", + "Rev": "442e588f213303bec7936deba67901f8fc8f18b1" + }, + { + "ImportPath": "github.com/mitchellh/osext", + "Rev": "0dd3f918b21bec95ace9dc86c7e70266cfc5c702" + }, + { + "ImportPath": "github.com/mitchellh/panicwrap", + "Rev": "45cbfd3bae250c7676c077fb275be1a2968e066a" + }, + { + "ImportPath": "github.com/mitchellh/prefixedio", + "Rev": "89d9b535996bf0a185f85b59578f2e245f9e1724" + }, + { + "ImportPath": "github.com/mitchellh/reflectwalk", + "Rev": "9cdd861463675960a0a0083a7e2023e7b0c994d7" + } + ] +} diff --git a/deps/v0-4-1.json b/deps/v0-4-1.json new file mode 100644 index 000000000000..c7c82e55954a --- /dev/null +++ b/deps/v0-4-1.json @@ -0,0 +1,286 @@ +{ + "ImportPath": "github.com/hashicorp/terraform", + "GoVersion": "go1.4.2", + "Packages": [ + "./..." + ], + "Deps": [ + { + "ImportPath": "code.google.com/p/go-uuid/uuid", + "Comment": "null-15", + "Rev": "35bc42037350f0078e3c974c6ea690f1926603ab" + }, + { + "ImportPath": "github.com/Sirupsen/logrus", + "Comment": "v0.7.2-4-gcdd90c3", + "Rev": "cdd90c38c6e3718c731b555b9c3ed1becebec3ba" + }, + { + "ImportPath": "github.com/armon/circbuf", + "Rev": "f092b4f207b6e5cce0569056fba9e1a2735cb6cf" + }, + { + "ImportPath": "github.com/awslabs/aws-sdk-go/aws", + "Rev": "a79c7d95c012010822e27aaa5551927f5e8a6ab6" + }, + { + "ImportPath": "github.com/awslabs/aws-sdk-go/internal/endpoints", + "Rev": "a79c7d95c012010822e27aaa5551927f5e8a6ab6" + }, + { + "ImportPath": "github.com/awslabs/aws-sdk-go/internal/protocol/ec2query", + "Rev": "a79c7d95c012010822e27aaa5551927f5e8a6ab6" + }, + { + "ImportPath": "github.com/awslabs/aws-sdk-go/internal/protocol/query/queryutil", + "Rev": "a79c7d95c012010822e27aaa5551927f5e8a6ab6" + }, + { + "ImportPath": "github.com/awslabs/aws-sdk-go/internal/protocol/xml/xmlutil", + "Rev": "a79c7d95c012010822e27aaa5551927f5e8a6ab6" + }, + { + "ImportPath": 
"github.com/awslabs/aws-sdk-go/internal/signer/v4", + "Rev": "a79c7d95c012010822e27aaa5551927f5e8a6ab6" + }, + { + "ImportPath": "github.com/awslabs/aws-sdk-go/service/ec2", + "Rev": "a79c7d95c012010822e27aaa5551927f5e8a6ab6" + }, + { + "ImportPath": "github.com/cyberdelia/heroku-go/v3", + "Rev": "594d483b9b6a8ddc7cd2f1e3e7d1de92fa2de665" + }, + { + "ImportPath": "github.com/docker/docker/pkg/archive", + "Comment": "v1.4.1-2478-gdd4389f", + "Rev": "dd4389fb19e442d386c3106545f04387c08e6a91" + }, + { + "ImportPath": "github.com/docker/docker/pkg/fileutils", + "Comment": "v1.4.1-2478-gdd4389f", + "Rev": "dd4389fb19e442d386c3106545f04387c08e6a91" + }, + { + "ImportPath": "github.com/docker/docker/pkg/ioutils", + "Comment": "v1.4.1-2478-gdd4389f", + "Rev": "dd4389fb19e442d386c3106545f04387c08e6a91" + }, + { + "ImportPath": "github.com/docker/docker/pkg/pools", + "Comment": "v1.4.1-2478-gdd4389f", + "Rev": "dd4389fb19e442d386c3106545f04387c08e6a91" + }, + { + "ImportPath": "github.com/docker/docker/pkg/promise", + "Comment": "v1.4.1-2478-gdd4389f", + "Rev": "dd4389fb19e442d386c3106545f04387c08e6a91" + }, + { + "ImportPath": "github.com/docker/docker/pkg/system", + "Comment": "v1.4.1-2478-gdd4389f", + "Rev": "dd4389fb19e442d386c3106545f04387c08e6a91" + }, + { + "ImportPath": "github.com/docker/docker/vendor/src/code.google.com/p/go/src/pkg/archive/tar", + "Comment": "v1.4.1-2478-gdd4389f", + "Rev": "dd4389fb19e442d386c3106545f04387c08e6a91" + }, + { + "ImportPath": "github.com/fsouza/go-dockerclient", + "Rev": "fb0e9fb80f074795d7c11eba700eb33058b14bfb" + }, + { + "ImportPath": "github.com/hashicorp/atlas-go/archive", + "Comment": "20141209094003-57-g90aad8f", + "Rev": "90aad8fc22a107db14dd80efdc131a197f7234e6" + }, + { + "ImportPath": "github.com/hashicorp/atlas-go/v1", + "Comment": "20141209094003-57-g90aad8f", + "Rev": "90aad8fc22a107db14dd80efdc131a197f7234e6" + }, + { + "ImportPath": "github.com/hashicorp/aws-sdk-go/aws", + "Comment": "tf0.4.0", + "Rev": 
"1d5c8f6b881ab3e2e0c3e737886732bbfd1ced27" + }, + { + "ImportPath": "github.com/hashicorp/aws-sdk-go/gen/autoscaling", + "Comment": "tf0.4.0", + "Rev": "1d5c8f6b881ab3e2e0c3e737886732bbfd1ced27" + }, + { + "ImportPath": "github.com/hashicorp/aws-sdk-go/gen/ec2", + "Comment": "tf0.4.0", + "Rev": "1d5c8f6b881ab3e2e0c3e737886732bbfd1ced27" + }, + { + "ImportPath": "github.com/hashicorp/aws-sdk-go/gen/elb", + "Comment": "tf0.4.0", + "Rev": "1d5c8f6b881ab3e2e0c3e737886732bbfd1ced27" + }, + { + "ImportPath": "github.com/hashicorp/aws-sdk-go/gen/endpoints", + "Comment": "tf0.4.0", + "Rev": "1d5c8f6b881ab3e2e0c3e737886732bbfd1ced27" + }, + { + "ImportPath": "github.com/hashicorp/aws-sdk-go/gen/iam", + "Comment": "tf0.4.0", + "Rev": "1d5c8f6b881ab3e2e0c3e737886732bbfd1ced27" + }, + { + "ImportPath": "github.com/hashicorp/aws-sdk-go/gen/rds", + "Comment": "tf0.4.0", + "Rev": "1d5c8f6b881ab3e2e0c3e737886732bbfd1ced27" + }, + { + "ImportPath": "github.com/hashicorp/aws-sdk-go/gen/route53", + "Comment": "tf0.4.0", + "Rev": "1d5c8f6b881ab3e2e0c3e737886732bbfd1ced27" + }, + { + "ImportPath": "github.com/hashicorp/aws-sdk-go/gen/s3", + "Comment": "tf0.4.0", + "Rev": "1d5c8f6b881ab3e2e0c3e737886732bbfd1ced27" + }, + { + "ImportPath": "github.com/hashicorp/consul/api", + "Comment": "v0.5.0-134-ge5797d9", + "Rev": "e5797d9a86b025d009809199146747384ad34db7" + }, + { + "ImportPath": "github.com/hashicorp/errwrap", + "Rev": "7554cd9344cec97297fa6649b055a8c98c2a1e55" + }, + { + "ImportPath": "github.com/hashicorp/go-checkpoint", + "Rev": "88326f6851319068e7b34981032128c0b1a6524d" + }, + { + "ImportPath": "github.com/hashicorp/go-multierror", + "Rev": "fcdddc395df1ddf4247c69bd436e84cfa0733f7e" + }, + { + "ImportPath": "github.com/hashicorp/go-version", + "Rev": "bb92dddfa9792e738a631f04ada52858a139bcf7" + }, + { + "ImportPath": "github.com/hashicorp/hcl", + "Rev": "513e04c400ee2e81e97f5e011c08fb42c6f69b84" + }, + { + "ImportPath": "github.com/hashicorp/yamux", + "Rev": 
"b2e55852ddaf823a85c67f798080eb7d08acd71d" + }, + { + "ImportPath": "github.com/imdario/mergo", + "Comment": "0.2.0-3-g2fcac99", + "Rev": "2fcac9923693d66dc0e03988a31b21da05cdea84" + }, + { + "ImportPath": "github.com/mitchellh/cli", + "Rev": "afc399c273e70173826fb6f518a48edff23fe897" + }, + { + "ImportPath": "github.com/mitchellh/colorstring", + "Rev": "61164e49940b423ba1f12ddbdf01632ac793e5e9" + }, + { + "ImportPath": "github.com/mitchellh/copystructure", + "Rev": "c101d94abf8cd5c6213c8300d0aed6368f2d6ede" + }, + { + "ImportPath": "github.com/mitchellh/go-homedir", + "Rev": "7d2d8c8a4e078ce3c58736ab521a40b37a504c52" + }, + { + "ImportPath": "github.com/mitchellh/go-linereader", + "Rev": "07bab5fdd9580500aea6ada0e09df4aa28e68abd" + }, + { + "ImportPath": "github.com/mitchellh/mapstructure", + "Rev": "442e588f213303bec7936deba67901f8fc8f18b1" + }, + { + "ImportPath": "github.com/mitchellh/osext", + "Rev": "0dd3f918b21bec95ace9dc86c7e70266cfc5c702" + }, + { + "ImportPath": "github.com/mitchellh/panicwrap", + "Rev": "45cbfd3bae250c7676c077fb275be1a2968e066a" + }, + { + "ImportPath": "github.com/mitchellh/prefixedio", + "Rev": "89d9b535996bf0a185f85b59578f2e245f9e1724" + }, + { + "ImportPath": "github.com/mitchellh/reflectwalk", + "Rev": "9cdd861463675960a0a0083a7e2023e7b0c994d7" + }, + { + "ImportPath": "github.com/pearkes/cloudflare", + "Rev": "19e280b056f3742e535ea12ae92a37ea7767ea82" + }, + { + "ImportPath": "github.com/pearkes/digitalocean", + "Rev": "e966f00c2d9de5743e87697ab77c7278f5998914" + }, + { + "ImportPath": "github.com/pearkes/dnsimple", + "Rev": "1e0c2b0eb33ca7b5632a130d6d34376a1ea46c84" + }, + { + "ImportPath": "github.com/pearkes/mailgun", + "Rev": "5b02e7e9ffee9869f81393e80db138f6ff726260" + }, + { + "ImportPath": "github.com/rackspace/gophercloud", + "Comment": "v1.0.0-558-gce0f487", + "Rev": "ce0f487f6747ab43c4e4404722df25349385bebd" + }, + { + "ImportPath": "github.com/soniah/dnsmadeeasy", + "Comment": "v1.1-2-g5578a8c", + "Rev": 
"5578a8c15e33958c61cf7db720b6181af65f4a9e" + }, + { + "ImportPath": "github.com/vaughan0/go-ini", + "Rev": "a98ad7ee00ec53921f08832bc06ecf7fd600e6a1" + }, + { + "ImportPath": "github.com/xanzy/go-cloudstack/cloudstack", + "Comment": "v1.2.0-5-gf73f6ff", + "Rev": "f73f6ff1b843dbdac0a01da7b7f39883adfe2bdb" + }, + { + "ImportPath": "golang.org/x/crypto/ssh", + "Rev": "c57d4a71915a248dbad846d60825145062b4c18e" + }, + { + "ImportPath": "golang.org/x/net/context", + "Rev": "84ba27dd5b2d8135e9da1395277f2c9333a2ffda" + }, + { + "ImportPath": "golang.org/x/oauth2", + "Rev": "ce5ea7da934b76b1066c527632359e2b8f65db97" + }, + { + "ImportPath": "google.golang.org/api/compute/v1", + "Rev": "2f6114897375589857c508d7392e55d5e7580db8" + }, + { + "ImportPath": "google.golang.org/api/googleapi", + "Rev": "2f6114897375589857c508d7392e55d5e7580db8" + }, + { + "ImportPath": "google.golang.org/cloud/compute/metadata", + "Rev": "c97f5f9979a8582f3ab72873a51979619801248b" + }, + { + "ImportPath": "google.golang.org/cloud/internal", + "Rev": "c97f5f9979a8582f3ab72873a51979619801248b" + } + ] +} diff --git a/helper/resource/testing.go b/helper/resource/testing.go index cedadfc72bd9..43a59e93c0d2 100644 --- a/helper/resource/testing.go +++ b/helper/resource/testing.go @@ -190,6 +190,7 @@ func testStep( // Build the context opts.Module = mod opts.State = state + opts.Destroy = step.Destroy ctx := terraform.NewContext(&opts) if ws, es := ctx.Validate(); len(ws) > 0 || len(es) > 0 { estrs := make([]string, len(es)) @@ -209,7 +210,7 @@ func testStep( } // Plan! 
- if p, err := ctx.Plan(&terraform.PlanOpts{Destroy: step.Destroy}); err != nil { + if p, err := ctx.Plan(); err != nil { return state, fmt.Errorf( "Error planning: %s", err) } else { @@ -229,6 +230,16 @@ func testStep( } } + + // Verify that Plan is now empty and we don't have a perpetual diff issue + if p, err := ctx.Plan(); err != nil { + return state, fmt.Errorf("Error on follow-up plan: %s", err) + } else { + if p.Diff != nil && !p.Diff.Empty() { + return state, fmt.Errorf( + "After applying this step, the plan was not empty:\n\n%s", p) + } + } + + return state, err } diff --git a/helper/resource/testing_test.go b/helper/resource/testing_test.go index cf51c7b22347..2f7ed0517f04 100644 --- a/helper/resource/testing_test.go +++ b/helper/resource/testing_test.go @@ -18,6 +18,8 @@ func init() { func TestTest(t *testing.T) { mp := testProvider() + mp.DiffReturn = nil + mp.ApplyReturn = &terraform.InstanceState{ ID: "foo", } diff --git a/helper/schema/resource.go b/helper/schema/resource.go index f0e0515cd981..0c640e697cc9 100644 --- a/helper/schema/resource.go +++ b/helper/schema/resource.go @@ -3,6 +3,7 @@ package schema import ( "errors" "fmt" + "strconv" "github.com/hashicorp/terraform/terraform" ) @@ -24,6 +25,31 @@ type Resource struct { // resource. Schema map[string]*Schema + // SchemaVersion is the version number for this resource's Schema + // definition. The current SchemaVersion is stored in the state for each + // resource. Provider authors can increment this version number + // when Schema semantics change. If the State's SchemaVersion is less than + // the current SchemaVersion, the InstanceState is yielded to the + // MigrateState callback, where the provider can make whatever changes it + // needs to update the state to be compatible with the latest version of the + // Schema.
+ // + // When unset, SchemaVersion defaults to 0, so provider authors can start + // their Versioning at any integer >= 1 + SchemaVersion int + + // MigrateState is responsible for updating an InstanceState with an old + // version to the format expected by the current version of the Schema. + // + // It is called during Refresh if the State's stored SchemaVersion is less + // than the current SchemaVersion of the Resource. + // + // The function is yielded the state's stored SchemaVersion and a pointer to + // the InstanceState that needs updating, as well as the configured + // provider's configured meta interface{}, in case the migration process + // needs to make any remote API calls. + MigrateState StateMigrateFunc + // The functions below are the CRUD operations for this resource. // // The only optional operation is Update. If Update is not implemented, @@ -69,6 +95,10 @@ type DeleteFunc func(*ResourceData, interface{}) error // See Resource documentation. type ExistsFunc func(*ResourceData, interface{}) (bool, error) +// See Resource documentation. +type StateMigrateFunc func( + int, *terraform.InstanceState, interface{}) (*terraform.InstanceState, error) + // Apply creates, updates, and/or deletes a resource. 
func (r *Resource) Apply( s *terraform.InstanceState, @@ -121,7 +151,7 @@ func (r *Resource) Apply( err = r.Update(data, meta) } - return data.State(), err + return r.recordCurrentSchemaVersion(data.State()), err } // Diff returns a diff of this resource and is API compatible with the @@ -158,6 +188,14 @@ func (r *Resource) Refresh( } } + needsMigration, stateSchemaVersion := r.checkSchemaVersion(s) + if needsMigration && r.MigrateState != nil { + s, err := r.MigrateState(stateSchemaVersion, s, meta) + if err != nil { + return s, err + } + } + data, err := schemaMap(r.Schema).Data(s, nil) if err != nil { return s, err @@ -169,7 +207,7 @@ func (r *Resource) Refresh( state = nil } - return state, err + return r.recordCurrentSchemaVersion(state), err } // InternalValidate should be called to validate the structure @@ -187,5 +225,45 @@ func (r *Resource) InternalValidate() error { return errors.New("resource is nil") } + if r.isTopLevel() { + // All non-Computed attributes must be ForceNew if Update is not defined + if r.Update == nil { + nonForceNewAttrs := make([]string, 0) + for k, v := range r.Schema { + if !v.ForceNew && !v.Computed { + nonForceNewAttrs = append(nonForceNewAttrs, k) + } + } + if len(nonForceNewAttrs) > 0 { + return fmt.Errorf( + "No Update defined, must set ForceNew on: %#v", nonForceNewAttrs) + } + } + } + return schemaMap(r.Schema).InternalValidate() } + +// Returns true if the resource is "top level" i.e. not a sub-resource. +func (r *Resource) isTopLevel() bool { + // TODO: This is a heuristic; replace with a definitive attribute? 
+ return r.Create != nil +} + +// Determines if a given InstanceState needs to be migrated by checking the +// stored version number with the current SchemaVersion +func (r *Resource) checkSchemaVersion(is *terraform.InstanceState) (bool, int) { + stateSchemaVersion, _ := strconv.Atoi(is.Meta["schema_version"]) + return stateSchemaVersion < r.SchemaVersion, stateSchemaVersion +} + +func (r *Resource) recordCurrentSchemaVersion( + state *terraform.InstanceState) *terraform.InstanceState { + if state != nil && r.SchemaVersion > 0 { + if state.Meta == nil { + state.Meta = make(map[string]string) + } + state.Meta["schema_version"] = strconv.Itoa(r.SchemaVersion) + } + return state +} diff --git a/helper/schema/resource_test.go b/helper/schema/resource_test.go index 0c71abddf079..e406e55b9bb2 100644 --- a/helper/schema/resource_test.go +++ b/helper/schema/resource_test.go @@ -3,6 +3,7 @@ package schema import ( "fmt" "reflect" + "strconv" "testing" "github.com/hashicorp/terraform/terraform" @@ -10,6 +11,7 @@ import ( func TestResourceApply_create(t *testing.T) { r := &Resource{ + SchemaVersion: 2, Schema: map[string]*Schema{ "foo": &Schema{ Type: TypeInt, @@ -50,6 +52,9 @@ func TestResourceApply_create(t *testing.T) { "id": "foo", "foo": "42", }, + Meta: map[string]string{ + "schema_version": "2", + }, } if !reflect.DeepEqual(actual, expected) { @@ -338,6 +343,7 @@ func TestResourceInternalValidate(t *testing.T) { func TestResourceRefresh(t *testing.T) { r := &Resource{ + SchemaVersion: 2, Schema: map[string]*Schema{ "foo": &Schema{ Type: TypeInt, @@ -367,6 +373,9 @@ func TestResourceRefresh(t *testing.T) { "id": "bar", "foo": "13", }, + Meta: map[string]string{ + "schema_version": "2", + }, } actual, err := r.Refresh(s, 42) @@ -478,3 +487,218 @@ func TestResourceRefresh_noExists(t *testing.T) { t.Fatalf("should have no state") } } + +func TestResourceRefresh_needsMigration(t *testing.T) { + // Schema v2 it deals only in newfoo, which tracks foo as an int + r := 
&Resource{ + SchemaVersion: 2, + Schema: map[string]*Schema{ + "newfoo": &Schema{ + Type: TypeInt, + Optional: true, + }, + }, + } + + r.Read = func(d *ResourceData, m interface{}) error { + return d.Set("newfoo", d.Get("newfoo").(int)+1) + } + + r.MigrateState = func( + v int, + s *terraform.InstanceState, + meta interface{}) (*terraform.InstanceState, error) { + // Real state migration functions will probably switch on this value, + // but we'll just assert on it for now. + if v != 1 { + t.Fatalf("Expected StateSchemaVersion to be 1, got %d", v) + } + + if meta != 42 { + t.Fatal("Expected meta to be passed through to the migration function") + } + + oldfoo, err := strconv.ParseFloat(s.Attributes["oldfoo"], 64) + if err != nil { + t.Fatalf("err: %#v", err) + } + s.Attributes["newfoo"] = strconv.Itoa((int(oldfoo * 10))) + delete(s.Attributes, "oldfoo") + + return s, nil + } + + // State is v1 and deals in oldfoo, which tracked foo as a float at 1/10th + // the scale of newfoo + s := &terraform.InstanceState{ + ID: "bar", + Attributes: map[string]string{ + "oldfoo": "1.2", + }, + Meta: map[string]string{ + "schema_version": "1", + }, + } + + actual, err := r.Refresh(s, 42) + if err != nil { + t.Fatalf("err: %s", err) + } + + expected := &terraform.InstanceState{ + ID: "bar", + Attributes: map[string]string{ + "id": "bar", + "newfoo": "13", + }, + Meta: map[string]string{ + "schema_version": "2", + }, + } + + if !reflect.DeepEqual(actual, expected) { + t.Fatalf("bad:\n\nexpected: %#v\ngot: %#v", expected, actual) + } +} + +func TestResourceRefresh_noMigrationNeeded(t *testing.T) { + r := &Resource{ + SchemaVersion: 2, + Schema: map[string]*Schema{ + "newfoo": &Schema{ + Type: TypeInt, + Optional: true, + }, + }, + } + + r.Read = func(d *ResourceData, m interface{}) error { + return d.Set("newfoo", d.Get("newfoo").(int)+1) + } + + r.MigrateState = func( + v int, + s *terraform.InstanceState, + meta interface{}) (*terraform.InstanceState, error) { + t.Fatal("Migrate 
function shouldn't be called!") + return nil, nil + } + + s := &terraform.InstanceState{ + ID: "bar", + Attributes: map[string]string{ + "newfoo": "12", + }, + Meta: map[string]string{ + "schema_version": "2", + }, + } + + actual, err := r.Refresh(s, nil) + if err != nil { + t.Fatalf("err: %s", err) + } + + expected := &terraform.InstanceState{ + ID: "bar", + Attributes: map[string]string{ + "id": "bar", + "newfoo": "13", + }, + Meta: map[string]string{ + "schema_version": "2", + }, + } + + if !reflect.DeepEqual(actual, expected) { + t.Fatalf("bad:\n\nexpected: %#v\ngot: %#v", expected, actual) + } +} + +func TestResourceRefresh_stateSchemaVersionUnset(t *testing.T) { + r := &Resource{ + // Version 1 > Version 0 + SchemaVersion: 1, + Schema: map[string]*Schema{ + "newfoo": &Schema{ + Type: TypeInt, + Optional: true, + }, + }, + } + + r.Read = func(d *ResourceData, m interface{}) error { + return d.Set("newfoo", d.Get("newfoo").(int)+1) + } + + r.MigrateState = func( + v int, + s *terraform.InstanceState, + meta interface{}) (*terraform.InstanceState, error) { + s.Attributes["newfoo"] = s.Attributes["oldfoo"] + return s, nil + } + + s := &terraform.InstanceState{ + ID: "bar", + Attributes: map[string]string{ + "oldfoo": "12", + }, + } + + actual, err := r.Refresh(s, nil) + if err != nil { + t.Fatalf("err: %s", err) + } + + expected := &terraform.InstanceState{ + ID: "bar", + Attributes: map[string]string{ + "id": "bar", + "newfoo": "13", + }, + Meta: map[string]string{ + "schema_version": "1", + }, + } + + if !reflect.DeepEqual(actual, expected) { + t.Fatalf("bad:\n\nexpected: %#v\ngot: %#v", expected, actual) + } +} + +func TestResourceRefresh_migrateStateErr(t *testing.T) { + r := &Resource{ + SchemaVersion: 2, + Schema: map[string]*Schema{ + "newfoo": &Schema{ + Type: TypeInt, + Optional: true, + }, + }, + } + + r.Read = func(d *ResourceData, m interface{}) error { + t.Fatal("Read should never be called!") + return nil + } + + r.MigrateState = func( + v int, + s 
*terraform.InstanceState, + meta interface{}) (*terraform.InstanceState, error) { + return s, fmt.Errorf("triggering an error") + } + + s := &terraform.InstanceState{ + ID: "bar", + Attributes: map[string]string{ + "oldfoo": "12", + }, + } + + _, err := r.Refresh(s, nil) + if err == nil { + t.Fatal("expected error, but got none!") + } +} diff --git a/helper/schema/schema_test.go b/helper/schema/schema_test.go index 2c9e89f63edf..c1233ae50bab 100644 --- a/helper/schema/schema_test.go +++ b/helper/schema/schema_test.go @@ -2322,6 +2322,24 @@ func TestSchemaMap_Input(t *testing.T) { Err: false, }, + "input ignored when default function returns an empty string": { + Schema: map[string]*Schema{ + "availability_zone": &Schema{ + Type: TypeString, + Default: "", + Optional: true, + }, + }, + + Input: map[string]string{ + "availability_zone": "bar", + }, + + Result: map[string]interface{}{}, + + Err: false, + }, + "input used when default function returns nil": { Schema: map[string]*Schema{ "availability_zone": &Schema{ diff --git a/helper/ssh/communicator.go b/helper/ssh/communicator.go index 186fd4824402..f908de97dfa3 100644 --- a/helper/ssh/communicator.go +++ b/helper/ssh/communicator.go @@ -14,7 +14,7 @@ import ( "sync" "time" - "code.google.com/p/go.crypto/ssh" + "golang.org/x/crypto/ssh" ) // RemoteCmd represents a remote command being prepared or run. @@ -97,6 +97,10 @@ type Config struct { // NoPty, if true, will not request a pty from the remote end. NoPty bool + + // SSHAgentConn is a pointer to the UNIX connection for talking with the + // ssh-agent. + SSHAgentConn net.Conn } // New creates a new packer.Communicator implementation over SSH. 
This takes diff --git a/helper/ssh/communicator_test.go b/helper/ssh/communicator_test.go index 2e16e148229e..b71321701070 100644 --- a/helper/ssh/communicator_test.go +++ b/helper/ssh/communicator_test.go @@ -4,10 +4,11 @@ package ssh import ( "bytes" - "code.google.com/p/go.crypto/ssh" "fmt" "net" "testing" + + "golang.org/x/crypto/ssh" ) // private key for mock server @@ -75,17 +76,27 @@ func newMockLineServer(t *testing.T) string { t.Logf("Handshaking error: %v", err) } t.Log("Accepted SSH connection") + for newChannel := range chans { - channel, _, err := newChannel.Accept() + channel, requests, err := newChannel.Accept() if err != nil { t.Errorf("Unable to accept channel.") } t.Log("Accepted channel") + go func(in <-chan *ssh.Request) { + for req := range in { + if req.WantReply { + req.Reply(true, nil) + } + } + }(requests) + go func(newChannel ssh.NewChannel) { - defer channel.Close() conn.OpenChannel(newChannel.ChannelType(), nil) }(newChannel) + + defer channel.Close() } conn.Close() }() @@ -153,5 +164,8 @@ func TestStart(t *testing.T) { cmd.Command = "echo foo" cmd.Stdout = stdout - client.Start(&cmd) + err = client.Start(&cmd) + if err != nil { + t.Fatalf("error executing command: %s", err) + } } diff --git a/helper/ssh/password.go b/helper/ssh/password.go index 934bcd01f572..8db6f82da2c4 100644 --- a/helper/ssh/password.go +++ b/helper/ssh/password.go @@ -1,7 +1,7 @@ package ssh import ( - "code.google.com/p/go.crypto/ssh" + "golang.org/x/crypto/ssh" "log" ) diff --git a/helper/ssh/password_test.go b/helper/ssh/password_test.go index e74b46e06fee..6e3e0a257ad1 100644 --- a/helper/ssh/password_test.go +++ b/helper/ssh/password_test.go @@ -1,7 +1,7 @@ package ssh import ( - "code.google.com/p/go.crypto/ssh" + "golang.org/x/crypto/ssh" "reflect" "testing" ) diff --git a/helper/ssh/provisioner.go b/helper/ssh/provisioner.go index baebbd9b6659..bf8f526373d6 100644 --- a/helper/ssh/provisioner.go +++ b/helper/ssh/provisioner.go @@ -5,12 +5,15 @@ import ( 
"fmt" "io/ioutil" "log" + "net" + "os" "time" - "code.google.com/p/go.crypto/ssh" "github.com/hashicorp/terraform/terraform" "github.com/mitchellh/go-homedir" "github.com/mitchellh/mapstructure" + "golang.org/x/crypto/ssh" + "golang.org/x/crypto/ssh/agent" ) const ( @@ -37,6 +40,7 @@ type SSHConfig struct { KeyFile string `mapstructure:"key_file"` Host string Port int + Agent bool Timeout string ScriptPath string `mapstructure:"script_path"` TimeoutVal time.Duration `mapstructure:"-"` @@ -99,9 +103,32 @@ func safeDuration(dur string, defaultDur time.Duration) time.Duration { // PrepareConfig is used to turn the *SSHConfig provided into a // usable *Config for client initialization. func PrepareConfig(conf *SSHConfig) (*Config, error) { + var conn net.Conn + var err error + sshConf := &ssh.ClientConfig{ User: conf.User, } + if conf.Agent { + sshAuthSock := os.Getenv("SSH_AUTH_SOCK") + + if sshAuthSock == "" { + return nil, fmt.Errorf("SSH Requested but SSH_AUTH_SOCK not-specified") + } + + conn, err = net.Dial("unix", sshAuthSock) + if err != nil { + return nil, fmt.Errorf("Error connecting to SSH_AUTH_SOCK: %v", err) + } + // I need to close this but, later after all connections have been made + // defer conn.Close() + signers, err := agent.NewClient(conn).Signers() + if err != nil { + return nil, fmt.Errorf("Error getting keys from ssh agent: %v", err) + } + + sshConf.Auth = append(sshConf.Auth, ssh.PublicKeys(signers...)) + } if conf.KeyFile != "" { fullPath, err := homedir.Expand(conf.KeyFile) if err != nil { @@ -140,8 +167,17 @@ func PrepareConfig(conf *SSHConfig) (*Config, error) { } host := fmt.Sprintf("%s:%d", conf.Host, conf.Port) config := &Config{ - SSHConfig: sshConf, - Connection: ConnectFunc("tcp", host), + SSHConfig: sshConf, + Connection: ConnectFunc("tcp", host), + SSHAgentConn: conn, } return config, nil } + +func (c *Config) CleanupConfig() error { + if c.SSHAgentConn != nil { + return c.SSHAgentConn.Close() + } + + return nil +} diff --git 
a/state/cache.go b/state/cache.go index a20eb4a06797..e58e1ee2ddaf 100644 --- a/state/cache.go +++ b/state/cache.go @@ -2,7 +2,6 @@ package state import ( "fmt" - "reflect" "github.com/hashicorp/terraform/terraform" ) @@ -77,7 +76,7 @@ func (s *CacheState) RefreshState() error { s.refreshResult = CacheRefreshUpdateLocal case durable.Serial == cached.Serial: // They're supposedly equal, verify. - if reflect.DeepEqual(cached, durable) { + if cached.Equal(durable) { // Hashes are the same, everything is great s.refreshResult = CacheRefreshNoop break diff --git a/state/remote/consul.go b/state/remote/consul.go index 274b5e37d5ff..791f4dca376d 100644 --- a/state/remote/consul.go +++ b/state/remote/consul.go @@ -20,6 +20,9 @@ func consulFactory(conf map[string]string) (Client, error) { if addr, ok := conf["address"]; ok && addr != "" { config.Address = addr } + if scheme, ok := conf["scheme"]; ok && scheme != "" { + config.Scheme = scheme + } client, err := consulapi.NewClient(config) if err != nil { diff --git a/terraform/context.go b/terraform/context.go index e2db6e6f93d7..6beaab6360d5 100644 --- a/terraform/context.go +++ b/terraform/context.go @@ -16,9 +16,12 @@ import ( type InputMode byte const ( - // InputModeVar asks for variables + // InputModeVar asks for all variables InputModeVar InputMode = 1 << iota + // InputModeVarUnset asks for variables which are not set yet + InputModeVarUnset + // InputModeProvider asks for provider variables InputModeProvider @@ -30,6 +33,7 @@ const ( // ContextOpts are the user-configurable options to create a context with // NewContext. type ContextOpts struct { + Destroy bool Diff *Diff Hooks []Hook Module *module.Tree @@ -37,6 +41,7 @@ type ContextOpts struct { State *State Providers map[string]ResourceProviderFactory Provisioners map[string]ResourceProvisionerFactory + Targets []string Variables map[string]string UIInput UIInput @@ -46,6 +51,7 @@ type ContextOpts struct { // perform operations on infrastructure. 
This structure is built using // NewContext. See the documentation for that. type Context struct { + destroy bool diff *Diff diffLock sync.RWMutex hooks []Hook @@ -55,6 +61,7 @@ type Context struct { sh *stopHook state *State stateLock sync.RWMutex + targets []string uiInput UIInput variables map[string]string @@ -92,12 +99,14 @@ func NewContext(opts *ContextOpts) *Context { } return &Context{ + destroy: opts.Destroy, diff: opts.Diff, hooks: hooks, module: opts.Module, providers: opts.Providers, provisioners: opts.Provisioners, state: state, + targets: opts.Targets, uiInput: opts.UIInput, variables: opts.Variables, @@ -132,6 +141,8 @@ func (c *Context) GraphBuilder() GraphBuilder { Providers: providers, Provisioners: provisioners, State: c.state, + Targets: c.targets, + Destroy: c.destroy, } } @@ -154,6 +165,14 @@ func (c *Context) Input(mode InputMode) error { } sort.Strings(names) for _, n := range names { + // If we only care about unset variables, then if the variable + // is set, continue on. + if mode&InputModeVarUnset != 0 { + if _, ok := c.variables[n]; ok { + continue + } + } + v := m[n] switch v.Type() { case config.VariableTypeMap: @@ -242,7 +261,7 @@ func (c *Context) Apply() (*State, error) { // // Plan also updates the diff of this context to be the diff generated // by the plan, so Apply can be called after. -func (c *Context) Plan(opts *PlanOpts) (*Plan, error) { +func (c *Context) Plan() (*Plan, error) { v := c.acquireRun() defer c.releaseRun(v) @@ -253,7 +272,7 @@ func (c *Context) Plan(opts *PlanOpts) (*Plan, error) { } var operation walkOperation - if opts != nil && opts.Destroy { + if c.destroy { operation = walkPlanDestroy } else { // Set our state to be something temporary. We do this so that @@ -365,6 +384,23 @@ func (c *Context) Validate() ([]string, []error) { return walker.ValidationWarnings, rerrs.Errors } +// Module returns the module tree associated with this context.
+func (c *Context) Module() *module.Tree { + return c.module +} + +// Variables will return the mapping of variables that were defined +// for this Context. If Input was called, this mapping may be different +// than what was given. +func (c *Context) Variables() map[string]string { + return c.variables +} + +// SetVariable sets a variable after a context has already been built. +func (c *Context) SetVariable(k, v string) { + c.variables[k] = v +} + func (c *Context) acquireRun() chan<- struct{} { c.l.Lock() defer c.l.Unlock() diff --git a/terraform/context_test.go b/terraform/context_test.go index 9050d4b96a89..6ff63d813a7b 100644 --- a/terraform/context_test.go +++ b/terraform/context_test.go @@ -24,7 +24,7 @@ func TestContext2Plan(t *testing.T) { }, }) - plan, err := ctx.Plan(nil) + plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -57,7 +57,7 @@ func TestContext2Plan_emptyDiff(t *testing.T) { }, }) - plan, err := ctx.Plan(nil) + plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -80,7 +80,7 @@ func TestContext2Plan_minimal(t *testing.T) { }, }) - plan, err := ctx.Plan(nil) + plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -103,7 +103,7 @@ func TestContext2Plan_modules(t *testing.T) { }, }) - plan, err := ctx.Plan(nil) + plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -115,6 +115,30 @@ func TestContext2Plan_modules(t *testing.T) { } } +// GH-1475 +func TestContext2Plan_moduleCycle(t *testing.T) { + m := testModule(t, "plan-module-cycle") + p := testProvider("aws") + p.DiffFn = testDiffFn + ctx := testContext2(t, &ContextOpts{ + Module: m, + Providers: map[string]ResourceProviderFactory{ + "aws": testProviderFuncFixed(p), + }, + }) + + plan, err := ctx.Plan() + if err != nil { + t.Fatalf("err: %s", err) + } + + actual := strings.TrimSpace(plan.String()) + expected := strings.TrimSpace(testTerraformPlanModuleCycleStr) + if actual != expected { + t.Fatalf("bad:\n%s", actual) + } +} + 
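The patch above adds `InputModeVarUnset` as a new bit flag alongside `InputModeVar`, and `Context.Input` now skips any variable that already has a value when that flag is set. A minimal, self-contained sketch of that bit-flag pattern is below; the `InputMode` constants mirror the diff, while `varsToAsk` is a hypothetical helper standing in for the filtering loop inside `Context.Input`, not the real Terraform API.

```go
package main

import (
	"fmt"
	"sort"
)

// InputMode mirrors the bit-flag type from terraform/context.go.
type InputMode byte

const (
	// InputModeVar asks for all variables.
	InputModeVar InputMode = 1 << iota

	// InputModeVarUnset restricts InputModeVar to variables not set yet.
	InputModeVarUnset

	// InputModeProvider asks for provider variables.
	InputModeProvider

	// InputModeStd is the standard operating mode: ask for variables
	// and provider configuration, as in the real package.
	InputModeStd = InputModeVar | InputModeProvider
)

// varsToAsk is a hypothetical helper that applies the same filter the
// diff adds inside Context.Input: when InputModeVarUnset is combined
// with InputModeVar, variables that already have a value are skipped.
func varsToAsk(mode InputMode, declared []string, set map[string]string) []string {
	if mode&InputModeVar == 0 {
		return nil
	}
	sort.Strings(declared) // Input sorts names before prompting
	var ask []string
	for _, n := range declared {
		if mode&InputModeVarUnset != 0 {
			if _, ok := set[n]; ok {
				continue // already set; don't prompt again
			}
		}
		ask = append(ask, n)
	}
	return ask
}

func main() {
	declared := []string{"foo", "bar"}
	set := map[string]string{"foo": "foovalue"}

	fmt.Println(varsToAsk(InputModeVar, declared, set))                   // [bar foo]
	fmt.Println(varsToAsk(InputModeVar|InputModeVarUnset, declared, set)) // [bar]
}
```

This is the behavior `TestContext2Input_varOnlyUnset` later in the diff relies on: with `InputModeVar | InputModeVarUnset`, the pre-set `foo` keeps its value and only `bar` is prompted for.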
func TestContext2Plan_moduleInput(t *testing.T) { m := testModule(t, "plan-module-input") p := testProvider("aws") @@ -126,7 +150,7 @@ func TestContext2Plan_moduleInput(t *testing.T) { }, }) - plan, err := ctx.Plan(nil) + plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -149,7 +173,7 @@ func TestContext2Plan_moduleInputComputed(t *testing.T) { }, }) - plan, err := ctx.Plan(nil) + plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -175,7 +199,7 @@ func TestContext2Plan_moduleInputFromVar(t *testing.T) { }, }) - plan, err := ctx.Plan(nil) + plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -198,7 +222,7 @@ func TestContext2Plan_moduleMultiVar(t *testing.T) { }, }) - plan, err := ctx.Plan(nil) + plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -237,7 +261,7 @@ func TestContext2Plan_moduleOrphans(t *testing.T) { State: s, }) - plan, err := ctx.Plan(nil) + plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -282,7 +306,7 @@ func TestContext2Plan_moduleProviderInherit(t *testing.T) { }, }) - _, err := ctx.Plan(nil) + _, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -332,7 +356,7 @@ func TestContext2Plan_moduleProviderDefaults(t *testing.T) { }, }) - _, err := ctx.Plan(nil) + _, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -385,7 +409,7 @@ func TestContext2Plan_moduleProviderDefaultsVar(t *testing.T) { }, }) - _, err := ctx.Plan(nil) + _, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -410,7 +434,7 @@ func TestContext2Plan_moduleVar(t *testing.T) { }, }) - plan, err := ctx.Plan(nil) + plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -433,7 +457,7 @@ func TestContext2Plan_moduleVarComputed(t *testing.T) { }, }) - plan, err := ctx.Plan(nil) + plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -471,7 +495,7 @@ func TestContext2Plan_nil(t *testing.T) { }, }) - plan, err := ctx.Plan(nil) 
+ plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -491,7 +515,7 @@ func TestContext2Plan_computed(t *testing.T) { }, }) - plan, err := ctx.Plan(nil) + plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -514,7 +538,7 @@ func TestContext2Plan_computedList(t *testing.T) { }, }) - plan, err := ctx.Plan(nil) + plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -537,7 +561,7 @@ func TestContext2Plan_count(t *testing.T) { }, }) - plan, err := ctx.Plan(nil) + plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -564,7 +588,7 @@ func TestContext2Plan_countComputed(t *testing.T) { }, }) - _, err := ctx.Plan(nil) + _, err := ctx.Plan() if err == nil { t.Fatal("should error") } @@ -581,7 +605,7 @@ func TestContext2Plan_countIndex(t *testing.T) { }, }) - plan, err := ctx.Plan(nil) + plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -604,7 +628,7 @@ func TestContext2Plan_countIndexZero(t *testing.T) { }, }) - plan, err := ctx.Plan(nil) + plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -630,7 +654,7 @@ func TestContext2Plan_countVar(t *testing.T) { }, }) - plan, err := ctx.Plan(nil) + plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -653,7 +677,7 @@ func TestContext2Plan_countZero(t *testing.T) { }, }) - plan, err := ctx.Plan(nil) + plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -676,7 +700,7 @@ func TestContext2Plan_countOneIndex(t *testing.T) { }, }) - plan, err := ctx.Plan(nil) + plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -731,7 +755,7 @@ func TestContext2Plan_countDecreaseToOne(t *testing.T) { State: s, }) - plan, err := ctx.Plan(nil) + plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -774,7 +798,7 @@ func TestContext2Plan_countIncreaseFromNotSet(t *testing.T) { State: s, }) - plan, err := ctx.Plan(nil) + plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) 
} @@ -817,7 +841,7 @@ func TestContext2Plan_countIncreaseFromOne(t *testing.T) { State: s, }) - plan, err := ctx.Plan(nil) + plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -875,7 +899,7 @@ func TestContext2Plan_countIncreaseFromOneCorrupted(t *testing.T) { State: s, }) - plan, err := ctx.Plan(nil) + plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -917,10 +941,11 @@ func TestContext2Plan_destroy(t *testing.T) { Providers: map[string]ResourceProviderFactory{ "aws": testProviderFuncFixed(p), }, - State: s, + State: s, + Destroy: true, }) - plan, err := ctx.Plan(&PlanOpts{Destroy: true}) + plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -971,10 +996,11 @@ func TestContext2Plan_moduleDestroy(t *testing.T) { Providers: map[string]ResourceProviderFactory{ "aws": testProviderFuncFixed(p), }, - State: s, + State: s, + Destroy: true, }) - plan, err := ctx.Plan(&PlanOpts{Destroy: true}) + plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -1020,10 +1046,11 @@ func TestContext2Plan_moduleDestroyMultivar(t *testing.T) { Providers: map[string]ResourceProviderFactory{ "aws": testProviderFuncFixed(p), }, - State: s, + State: s, + Destroy: true, }) - plan, err := ctx.Plan(&PlanOpts{Destroy: true}) + plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -1051,7 +1078,7 @@ func TestContext2Plan_pathVar(t *testing.T) { }, }) - plan, err := ctx.Plan(nil) + plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -1118,7 +1145,7 @@ func TestContext2Plan_diffVar(t *testing.T) { }, nil } - plan, err := ctx.Plan(nil) + plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -1143,7 +1170,7 @@ func TestContext2Plan_hook(t *testing.T) { }, }) - _, err := ctx.Plan(nil) + _, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -1183,7 +1210,7 @@ func TestContext2Plan_orphan(t *testing.T) { State: s, }) - plan, err := ctx.Plan(nil) + plan, err := ctx.Plan() 
if err != nil { t.Fatalf("err: %s", err) } @@ -1221,7 +1248,7 @@ func TestContext2Plan_state(t *testing.T) { State: s, }) - plan, err := ctx.Plan(nil) + plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -1273,7 +1300,7 @@ func TestContext2Plan_taint(t *testing.T) { State: s, }) - plan, err := ctx.Plan(nil) + plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -1324,7 +1351,7 @@ func TestContext2Plan_multiple_taint(t *testing.T) { State: s, }) - plan, err := ctx.Plan(nil) + plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -1336,6 +1363,40 @@ func TestContext2Plan_multiple_taint(t *testing.T) { } } +func TestContext2Plan_targeted(t *testing.T) { + m := testModule(t, "plan-targeted") + p := testProvider("aws") + p.DiffFn = testDiffFn + ctx := testContext2(t, &ContextOpts{ + Module: m, + Providers: map[string]ResourceProviderFactory{ + "aws": testProviderFuncFixed(p), + }, + Targets: []string{"aws_instance.foo"}, + }) + + plan, err := ctx.Plan() + if err != nil { + t.Fatalf("err: %s", err) + } + + actual := strings.TrimSpace(plan.String()) + expected := strings.TrimSpace(` +DIFF: + +CREATE: aws_instance.foo + num: "" => "2" + type: "" => "aws_instance" + +STATE: + + + `) + if actual != expected { + t.Fatalf("expected:\n%s\n\ngot:\n%s", expected, actual) + } +} + func TestContext2Plan_provider(t *testing.T) { m := testModule(t, "plan-provider") p := testProvider("aws") @@ -1357,7 +1418,7 @@ func TestContext2Plan_provider(t *testing.T) { }, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -1377,7 +1438,7 @@ func TestContext2Plan_varMultiCountOne(t *testing.T) { }, }) - plan, err := ctx.Plan(nil) + plan, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -1399,7 +1460,7 @@ func TestContext2Plan_varListErr(t *testing.T) { }, }) - _, err := ctx.Plan(nil) + _, err := ctx.Plan() if err == nil { t.Fatal("should error") } @@ -1457,6 
+1518,141 @@ func TestContext2Refresh(t *testing.T) { } } +func TestContext2Refresh_targeted(t *testing.T) { + p := testProvider("aws") + m := testModule(t, "refresh-targeted") + ctx := testContext2(t, &ContextOpts{ + Module: m, + Providers: map[string]ResourceProviderFactory{ + "aws": testProviderFuncFixed(p), + }, + State: &State{ + Modules: []*ModuleState{ + &ModuleState{ + Path: rootModulePath, + Resources: map[string]*ResourceState{ + "aws_vpc.metoo": resourceState("aws_vpc", "vpc-abc123"), + "aws_instance.notme": resourceState("aws_instance", "i-bcd345"), + "aws_instance.me": resourceState("aws_instance", "i-abc123"), + "aws_elb.meneither": resourceState("aws_elb", "lb-abc123"), + }, + }, + }, + }, + Targets: []string{"aws_instance.me"}, + }) + + refreshedResources := make([]string, 0, 2) + p.RefreshFn = func(i *InstanceInfo, is *InstanceState) (*InstanceState, error) { + refreshedResources = append(refreshedResources, i.Id) + return is, nil + } + + _, err := ctx.Refresh() + if err != nil { + t.Fatalf("err: %s", err) + } + + expected := []string{"aws_vpc.metoo", "aws_instance.me"} + if !reflect.DeepEqual(refreshedResources, expected) { + t.Fatalf("expected: %#v, got: %#v", expected, refreshedResources) + } +} + +func TestContext2Refresh_targetedCount(t *testing.T) { + p := testProvider("aws") + m := testModule(t, "refresh-targeted-count") + ctx := testContext2(t, &ContextOpts{ + Module: m, + Providers: map[string]ResourceProviderFactory{ + "aws": testProviderFuncFixed(p), + }, + State: &State{ + Modules: []*ModuleState{ + &ModuleState{ + Path: rootModulePath, + Resources: map[string]*ResourceState{ + "aws_vpc.metoo": resourceState("aws_vpc", "vpc-abc123"), + "aws_instance.notme": resourceState("aws_instance", "i-bcd345"), + "aws_instance.me.0": resourceState("aws_instance", "i-abc123"), + "aws_instance.me.1": resourceState("aws_instance", "i-cde567"), + "aws_instance.me.2": resourceState("aws_instance", "i-cde789"), + "aws_elb.meneither": 
resourceState("aws_elb", "lb-abc123"), + }, + }, + }, + }, + Targets: []string{"aws_instance.me"}, + }) + + refreshedResources := make([]string, 0, 2) + p.RefreshFn = func(i *InstanceInfo, is *InstanceState) (*InstanceState, error) { + refreshedResources = append(refreshedResources, i.Id) + return is, nil + } + + _, err := ctx.Refresh() + if err != nil { + t.Fatalf("err: %s", err) + } + + // Target didn't specify index, so we should get all our instances + expected := []string{ + "aws_vpc.metoo", + "aws_instance.me.0", + "aws_instance.me.1", + "aws_instance.me.2", + } + sort.Strings(expected) + sort.Strings(refreshedResources) + if !reflect.DeepEqual(refreshedResources, expected) { + t.Fatalf("expected: %#v, got: %#v", expected, refreshedResources) + } +} + +func TestContext2Refresh_targetedCountIndex(t *testing.T) { + p := testProvider("aws") + m := testModule(t, "refresh-targeted-count") + ctx := testContext2(t, &ContextOpts{ + Module: m, + Providers: map[string]ResourceProviderFactory{ + "aws": testProviderFuncFixed(p), + }, + State: &State{ + Modules: []*ModuleState{ + &ModuleState{ + Path: rootModulePath, + Resources: map[string]*ResourceState{ + "aws_vpc.metoo": resourceState("aws_vpc", "vpc-abc123"), + "aws_instance.notme": resourceState("aws_instance", "i-bcd345"), + "aws_instance.me.0": resourceState("aws_instance", "i-abc123"), + "aws_instance.me.1": resourceState("aws_instance", "i-cde567"), + "aws_instance.me.2": resourceState("aws_instance", "i-cde789"), + "aws_elb.meneither": resourceState("aws_elb", "lb-abc123"), + }, + }, + }, + }, + Targets: []string{"aws_instance.me[0]"}, + }) + + refreshedResources := make([]string, 0, 2) + p.RefreshFn = func(i *InstanceInfo, is *InstanceState) (*InstanceState, error) { + refreshedResources = append(refreshedResources, i.Id) + return is, nil + } + + _, err := ctx.Refresh() + if err != nil { + t.Fatalf("err: %s", err) + } + + expected := []string{"aws_vpc.metoo", "aws_instance.me.0"} + if 
!reflect.DeepEqual(refreshedResources, expected) { + t.Fatalf("expected: %#v, got: %#v", expected, refreshedResources) + } +} + func TestContext2Refresh_delete(t *testing.T) { p := testProvider("aws") m := testModule(t, "refresh-basic") @@ -1673,6 +1869,54 @@ func TestContext2Refresh_noState(t *testing.T) { } } +func TestContext2Refresh_output(t *testing.T) { + p := testProvider("aws") + m := testModule(t, "refresh-output") + ctx := testContext2(t, &ContextOpts{ + Module: m, + Providers: map[string]ResourceProviderFactory{ + "aws": testProviderFuncFixed(p), + }, + State: &State{ + Modules: []*ModuleState{ + &ModuleState{ + Path: rootModulePath, + Resources: map[string]*ResourceState{ + "aws_instance.web": &ResourceState{ + Type: "aws_instance", + Primary: &InstanceState{ + ID: "foo", + Attributes: map[string]string{ + "foo": "bar", + }, + }, + }, + }, + + Outputs: map[string]string{ + "foo": "foo", + }, + }, + }, + }, + }) + + p.RefreshFn = func(info *InstanceInfo, s *InstanceState) (*InstanceState, error) { + return s, nil + } + + s, err := ctx.Refresh() + if err != nil { + t.Fatalf("err: %s", err) + } + + actual := strings.TrimSpace(s.String()) + expected := strings.TrimSpace(testContextRefreshOutputStr) + if actual != expected { + t.Fatalf("bad:\n\n%s\n\n%s", actual, expected) + } +} + func TestContext2Refresh_outputPartial(t *testing.T) { p := testProvider("aws") m := testModule(t, "refresh-output-partial") @@ -2060,6 +2304,55 @@ func TestContext2Validate_moduleProviderInherit(t *testing.T) { } } +func TestContext2Validate_moduleProviderVar(t *testing.T) { + m := testModule(t, "validate-module-pc-vars") + p := testProvider("aws") + c := testContext2(t, &ContextOpts{ + Module: m, + Providers: map[string]ResourceProviderFactory{ + "aws": testProviderFuncFixed(p), + }, + Variables: map[string]string{ + "provider_var": "bar", + }, + }) + + p.ValidateFn = func(c *ResourceConfig) ([]string, []error) { + return nil, c.CheckSet([]string{"foo"}) + } + + w, e := 
c.Validate() + if len(w) > 0 { + t.Fatalf("bad: %#v", w) + } + if len(e) > 0 { + t.Fatalf("bad: %s", e) + } +} + +func TestContext2Validate_moduleProviderInheritUnused(t *testing.T) { + m := testModule(t, "validate-module-pc-inherit-unused") + p := testProvider("aws") + c := testContext2(t, &ContextOpts{ + Module: m, + Providers: map[string]ResourceProviderFactory{ + "aws": testProviderFuncFixed(p), + }, + }) + + p.ValidateFn = func(c *ResourceConfig) ([]string, []error) { + return nil, c.CheckSet([]string{"foo"}) + } + + w, e := c.Validate() + if len(w) > 0 { + t.Fatalf("bad: %#v", w) + } + if len(e) > 0 { + t.Fatalf("bad: %s", e) + } +} + func TestContext2Validate_orphans(t *testing.T) { p := testProvider("aws") m := testModule(t, "validate-good") @@ -2468,7 +2761,7 @@ func TestContext2Input(t *testing.T) { t.Fatalf("err: %s", err) } - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -2505,12 +2798,15 @@ func TestContext2Input_provider(t *testing.T) { actual = c.Config["foo"] return nil } + p.ValidateFn = func(c *ResourceConfig) ([]string, []error) { + return nil, c.CheckSet([]string{"foo"}) + } if err := ctx.Input(InputModeStd); err != nil { t.Fatalf("err: %s", err) } - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -2587,7 +2883,7 @@ func TestContext2Input_providerId(t *testing.T) { t.Fatalf("err: %s", err) } - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -2635,7 +2931,7 @@ func TestContext2Input_providerOnly(t *testing.T) { t.Fatalf("err: %s", err) } - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -2690,7 +2986,7 @@ func TestContext2Input_providerVars(t *testing.T) { t.Fatalf("err: %s", err) } - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ 
-2703,6 +2999,35 @@ func TestContext2Input_providerVars(t *testing.T) { } } +func TestContext2Input_providerVarsModuleInherit(t *testing.T) { + input := new(MockUIInput) + m := testModule(t, "input-provider-with-vars-and-module") + p := testProvider("aws") + p.ApplyFn = testApplyFn + p.DiffFn = testDiffFn + ctx := testContext2(t, &ContextOpts{ + Module: m, + Providers: map[string]ResourceProviderFactory{ + "aws": testProviderFuncFixed(p), + }, + UIInput: input, + }) + + p.InputFn = func(i UIInput, c *ResourceConfig) (*ResourceConfig, error) { + if errs := c.CheckSet([]string{"access_key"}); len(errs) > 0 { + return c, errs[0] + } + return c, nil + } + p.ConfigureFn = func(c *ResourceConfig) error { + return nil + } + + if err := ctx.Input(InputModeStd); err != nil { + t.Fatalf("err: %s", err) + } +} + func TestContext2Input_varOnly(t *testing.T) { input := new(MockUIInput) m := testModule(t, "input-provider-vars") @@ -2738,7 +3063,7 @@ func TestContext2Input_varOnly(t *testing.T) { t.Fatalf("err: %s", err) } - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -2758,6 +3083,48 @@ func TestContext2Input_varOnly(t *testing.T) { } } +func TestContext2Input_varOnlyUnset(t *testing.T) { + input := new(MockUIInput) + m := testModule(t, "input-vars-unset") + p := testProvider("aws") + p.ApplyFn = testApplyFn + p.DiffFn = testDiffFn + ctx := testContext2(t, &ContextOpts{ + Module: m, + Providers: map[string]ResourceProviderFactory{ + "aws": testProviderFuncFixed(p), + }, + Variables: map[string]string{ + "foo": "foovalue", + }, + UIInput: input, + }) + + input.InputReturnMap = map[string]string{ + "var.foo": "nope", + "var.bar": "baz", + } + + if err := ctx.Input(InputModeVar | InputModeVarUnset); err != nil { + t.Fatalf("err: %s", err) + } + + if _, err := ctx.Plan(); err != nil { + t.Fatalf("err: %s", err) + } + + state, err := ctx.Apply() + if err != nil { + t.Fatalf("err: %s", err) + } + + actualStr := 
strings.TrimSpace(state.String()) + expectedStr := strings.TrimSpace(testTerraformInputVarOnlyUnsetStr) + if actualStr != expectedStr { + t.Fatalf("bad: \n%s", actualStr) + } +} + func TestContext2Apply(t *testing.T) { m := testModule(t, "apply-good") p := testProvider("aws") @@ -2770,7 +3137,7 @@ func TestContext2Apply(t *testing.T) { }, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -2803,7 +3170,7 @@ func TestContext2Apply_emptyModule(t *testing.T) { }, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -2851,7 +3218,7 @@ func TestContext2Apply_createBeforeDestroy(t *testing.T) { State: state, }) - if p, err := ctx.Plan(nil); err != nil { + if p, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } else { t.Logf(p.String()) @@ -2905,7 +3272,7 @@ func TestContext2Apply_createBeforeDestroyUpdate(t *testing.T) { State: state, }) - if p, err := ctx.Plan(nil); err != nil { + if p, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } else { t.Logf(p.String()) @@ -2940,7 +3307,7 @@ func TestContext2Apply_minimal(t *testing.T) { }, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -2968,7 +3335,7 @@ func TestContext2Apply_badDiff(t *testing.T) { }, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -3026,7 +3393,7 @@ func TestContext2Apply_cancel(t *testing.T) { }, nil } - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -3067,7 +3434,7 @@ func TestContext2Apply_compute(t *testing.T) { }, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -3136,7 +3503,7 @@ func TestContext2Apply_countDecrease(t *testing.T) { State: s, }) - if _, err := ctx.Plan(nil); err != nil { + if _, 
err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -3196,7 +3563,7 @@ func TestContext2Apply_countDecreaseToOne(t *testing.T) { State: s, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -3258,7 +3625,7 @@ func TestContext2Apply_countDecreaseToOneCorrupted(t *testing.T) { State: s, }) - if p, err := ctx.Plan(nil); err != nil { + if p, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } else { testStringMatch(t, p, testTerraformApplyCountDecToOneCorruptedPlanStr) @@ -3309,7 +3676,7 @@ func TestContext2Apply_countTainted(t *testing.T) { State: s, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -3337,7 +3704,7 @@ func TestContext2Apply_countVariable(t *testing.T) { }, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -3365,7 +3732,7 @@ func TestContext2Apply_module(t *testing.T) { }, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -3394,9 +3761,10 @@ func TestContext2Apply_moduleVarResourceCount(t *testing.T) { Variables: map[string]string{ "count": "2", }, + Destroy: true, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -3414,7 +3782,7 @@ func TestContext2Apply_moduleVarResourceCount(t *testing.T) { }, }) - if _, err := ctx.Plan(&PlanOpts{Destroy: true}); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -3436,7 +3804,7 @@ func TestContext2Apply_moduleBool(t *testing.T) { }, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -3470,7 +3838,7 @@ func TestContext2Apply_multiProvider(t *testing.T) { }, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -3503,7 
+3871,7 @@ func TestContext2Apply_nilDiff(t *testing.T) { }, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -3543,7 +3911,7 @@ func TestContext2Apply_Provisioner_compute(t *testing.T) { }, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -3588,7 +3956,7 @@ func TestContext2Apply_provisionerCreateFail(t *testing.T) { }, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -3627,7 +3995,7 @@ func TestContext2Apply_provisionerCreateFailNoId(t *testing.T) { }, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -3667,7 +4035,7 @@ func TestContext2Apply_provisionerFail(t *testing.T) { }, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -3722,7 +4090,7 @@ func TestContext2Apply_provisionerFail_createBeforeDestroy(t *testing.T) { State: state, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -3771,7 +4139,7 @@ func TestContext2Apply_error_createBeforeDestroy(t *testing.T) { } p.DiffFn = testDiffFn - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -3829,7 +4197,7 @@ func TestContext2Apply_errorDestroy_createBeforeDestroy(t *testing.T) { } p.DiffFn = testDiffFn - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -3886,7 +4254,7 @@ func TestContext2Apply_multiDepose_createBeforeDestroy(t *testing.T) { } } - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -3910,7 +4278,7 @@ aws_instance.web: (1 deposed) State: state, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { 
t.Fatalf("err: %s", err) } @@ -3938,7 +4306,7 @@ aws_instance.web: (2 deposed) } createdInstanceId = "qux" - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } state, err = ctx.Apply() @@ -3960,7 +4328,7 @@ aws_instance.web: (1 deposed) } createdInstanceId = "quux" - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } state, err = ctx.Apply() @@ -4000,7 +4368,7 @@ func TestContext2Apply_provisionerResourceRef(t *testing.T) { }, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -4046,7 +4414,7 @@ func TestContext2Apply_provisionerSelfRef(t *testing.T) { }, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -4099,7 +4467,7 @@ func TestContext2Apply_provisionerMultiSelfRef(t *testing.T) { }, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -4147,7 +4515,7 @@ func TestContext2Apply_Provisioner_Diff(t *testing.T) { }, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -4184,7 +4552,7 @@ func TestContext2Apply_Provisioner_Diff(t *testing.T) { State: state, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -4255,7 +4623,7 @@ func TestContext2Apply_outputDiffVars(t *testing.T) { }, nil } - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } if _, err := ctx.Apply(); err != nil { @@ -4318,7 +4686,7 @@ func TestContext2Apply_Provisioner_ConnInfo(t *testing.T) { }, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -4354,22 +4722,32 @@ func TestContext2Apply_destroy(t *testing.T) { }) // First plan and apply a create 
operation - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } - if _, err := ctx.Apply(); err != nil { + state, err := ctx.Apply() + if err != nil { t.Fatalf("err: %s", err) } // Next, plan and apply a destroy operation - if _, err := ctx.Plan(&PlanOpts{Destroy: true}); err != nil { + h.Active = true + ctx = testContext2(t, &ContextOpts{ + Destroy: true, + State: state, + Module: m, + Hooks: []Hook{h}, + Providers: map[string]ResourceProviderFactory{ + "aws": testProviderFuncFixed(p), + }, + }) + + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } - h.Active = true - - state, err := ctx.Apply() + state, err = ctx.Apply() if err != nil { t.Fatalf("err: %s", err) } @@ -4385,7 +4763,7 @@ func TestContext2Apply_destroy(t *testing.T) { expected2 := []string{"aws_instance.bar", "aws_instance.foo"} actual2 := h.IDs if !reflect.DeepEqual(actual2, expected2) { - t.Fatalf("bad: %#v", actual2) + t.Fatalf("expected: %#v\n\ngot:%#v", expected2, actual2) } } @@ -4404,22 +4782,33 @@ func TestContext2Apply_destroyOutputs(t *testing.T) { }) // First plan and apply a create operation - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } - if _, err := ctx.Apply(); err != nil { + state, err := ctx.Apply() + + if err != nil { t.Fatalf("err: %s", err) } // Next, plan and apply a destroy operation - if _, err := ctx.Plan(&PlanOpts{Destroy: true}); err != nil { + h.Active = true + ctx = testContext2(t, &ContextOpts{ + Destroy: true, + State: state, + Module: m, + Hooks: []Hook{h}, + Providers: map[string]ResourceProviderFactory{ + "aws": testProviderFuncFixed(p), + }, + }) + + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } - h.Active = true - - state, err := ctx.Apply() + state, err = ctx.Apply() if err != nil { t.Fatalf("err: %s", err) } @@ -4475,7 +4864,7 @@ func TestContext2Apply_destroyOrphan(t *testing.T) { }, nil } - if _, err := 
ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -4532,10 +4921,11 @@ func TestContext2Apply_destroyTaintedProvisioner(t *testing.T) { Provisioners: map[string]ResourceProvisionerFactory{ "shell": testProvisionerFuncFixed(pr), }, - State: s, + State: s, + Destroy: true, }) - if _, err := ctx.Plan(&PlanOpts{Destroy: true}); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -4593,7 +4983,7 @@ func TestContext2Apply_error(t *testing.T) { }, nil } - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -4660,7 +5050,7 @@ func TestContext2Apply_errorPartial(t *testing.T) { }, nil } - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -4695,7 +5085,7 @@ func TestContext2Apply_hook(t *testing.T) { }, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -4743,7 +5133,7 @@ func TestContext2Apply_idAttr(t *testing.T) { }, nil } - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -4777,7 +5167,7 @@ func TestContext2Apply_output(t *testing.T) { }, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -4805,7 +5195,7 @@ func TestContext2Apply_outputInvalid(t *testing.T) { }, }) - _, err := ctx.Plan(nil) + _, err := ctx.Plan() if err == nil { t.Fatalf("err: %s", err) } @@ -4826,7 +5216,7 @@ func TestContext2Apply_outputList(t *testing.T) { }, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -4854,7 +5244,7 @@ func TestContext2Apply_outputMulti(t *testing.T) { }, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -4882,7 +5272,7 @@ func TestContext2Apply_outputMultiIndex(t 
*testing.T) { }, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -4947,7 +5337,7 @@ func TestContext2Apply_taint(t *testing.T) { State: s, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -5012,7 +5402,7 @@ func TestContext2Apply_taintDep(t *testing.T) { State: s, }) - if p, err := ctx.Plan(nil); err != nil { + if p, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } else { t.Logf("plan: %s", p) @@ -5075,7 +5465,7 @@ func TestContext2Apply_taintDepRequiresNew(t *testing.T) { State: s, }) - if p, err := ctx.Plan(nil); err != nil { + if p, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } else { t.Logf("plan: %s", p) @@ -5093,6 +5483,199 @@ func TestContext2Apply_taintDepRequiresNew(t *testing.T) { } } +func TestContext2Apply_targeted(t *testing.T) { + m := testModule(t, "apply-targeted") + p := testProvider("aws") + p.ApplyFn = testApplyFn + p.DiffFn = testDiffFn + ctx := testContext2(t, &ContextOpts{ + Module: m, + Providers: map[string]ResourceProviderFactory{ + "aws": testProviderFuncFixed(p), + }, + Targets: []string{"aws_instance.foo"}, + }) + + if _, err := ctx.Plan(); err != nil { + t.Fatalf("err: %s", err) + } + + state, err := ctx.Apply() + if err != nil { + t.Fatalf("err: %s", err) + } + + mod := state.RootModule() + if len(mod.Resources) != 1 { + t.Fatalf("expected 1 resource, got: %#v", mod.Resources) + } + + checkStateString(t, state, ` +aws_instance.foo: + ID = foo + num = 2 + type = aws_instance + `) +} + +func TestContext2Apply_targetedCount(t *testing.T) { + m := testModule(t, "apply-targeted-count") + p := testProvider("aws") + p.ApplyFn = testApplyFn + p.DiffFn = testDiffFn + ctx := testContext2(t, &ContextOpts{ + Module: m, + Providers: map[string]ResourceProviderFactory{ + "aws": testProviderFuncFixed(p), + }, + Targets: []string{"aws_instance.foo"}, + }) + + if _, err := ctx.Plan(); err != nil { 
+ t.Fatalf("err: %s", err) + } + + state, err := ctx.Apply() + if err != nil { + t.Fatalf("err: %s", err) + } + + checkStateString(t, state, ` +aws_instance.foo.0: + ID = foo +aws_instance.foo.1: + ID = foo +aws_instance.foo.2: + ID = foo + `) +} + +func TestContext2Apply_targetedCountIndex(t *testing.T) { + m := testModule(t, "apply-targeted-count") + p := testProvider("aws") + p.ApplyFn = testApplyFn + p.DiffFn = testDiffFn + ctx := testContext2(t, &ContextOpts{ + Module: m, + Providers: map[string]ResourceProviderFactory{ + "aws": testProviderFuncFixed(p), + }, + Targets: []string{"aws_instance.foo[1]"}, + }) + + if _, err := ctx.Plan(); err != nil { + t.Fatalf("err: %s", err) + } + + state, err := ctx.Apply() + if err != nil { + t.Fatalf("err: %s", err) + } + + checkStateString(t, state, ` +aws_instance.foo.1: + ID = foo + `) +} + +func TestContext2Apply_targetedDestroy(t *testing.T) { + m := testModule(t, "apply-targeted") + p := testProvider("aws") + p.ApplyFn = testApplyFn + p.DiffFn = testDiffFn + ctx := testContext2(t, &ContextOpts{ + Module: m, + Providers: map[string]ResourceProviderFactory{ + "aws": testProviderFuncFixed(p), + }, + State: &State{ + Modules: []*ModuleState{ + &ModuleState{ + Path: rootModulePath, + Resources: map[string]*ResourceState{ + "aws_instance.foo": resourceState("aws_instance", "i-bcd345"), + "aws_instance.bar": resourceState("aws_instance", "i-abc123"), + }, + }, + }, + }, + Targets: []string{"aws_instance.foo"}, + Destroy: true, + }) + + if _, err := ctx.Plan(); err != nil { + t.Fatalf("err: %s", err) + } + + state, err := ctx.Apply() + if err != nil { + t.Fatalf("err: %s", err) + } + + mod := state.RootModule() + if len(mod.Resources) != 1 { + t.Fatalf("expected 1 resource, got: %#v", mod.Resources) + } + + checkStateString(t, state, ` +aws_instance.bar: + ID = i-abc123 + `) +} + +func TestContext2Apply_targetedDestroyCountIndex(t *testing.T) { + m := testModule(t, "apply-targeted-count") + p := testProvider("aws") + 
p.ApplyFn = testApplyFn + p.DiffFn = testDiffFn + ctx := testContext2(t, &ContextOpts{ + Module: m, + Providers: map[string]ResourceProviderFactory{ + "aws": testProviderFuncFixed(p), + }, + State: &State{ + Modules: []*ModuleState{ + &ModuleState{ + Path: rootModulePath, + Resources: map[string]*ResourceState{ + "aws_instance.foo.0": resourceState("aws_instance", "i-bcd345"), + "aws_instance.foo.1": resourceState("aws_instance", "i-bcd345"), + "aws_instance.foo.2": resourceState("aws_instance", "i-bcd345"), + "aws_instance.bar.0": resourceState("aws_instance", "i-abc123"), + "aws_instance.bar.1": resourceState("aws_instance", "i-abc123"), + "aws_instance.bar.2": resourceState("aws_instance", "i-abc123"), + }, + }, + }, + }, + Targets: []string{ + "aws_instance.foo[2]", + "aws_instance.bar[1]", + }, + Destroy: true, + }) + + if _, err := ctx.Plan(); err != nil { + t.Fatalf("err: %s", err) + } + + state, err := ctx.Apply() + if err != nil { + t.Fatalf("err: %s", err) + } + + checkStateString(t, state, ` +aws_instance.bar.0: + ID = i-abc123 +aws_instance.bar.2: + ID = i-abc123 +aws_instance.foo.0: + ID = i-bcd345 +aws_instance.foo.1: + ID = i-bcd345 + `) +} + func TestContext2Apply_unknownAttribute(t *testing.T) { m := testModule(t, "apply-unknown") p := testProvider("aws") @@ -5105,7 +5688,7 @@ func TestContext2Apply_unknownAttribute(t *testing.T) { }, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -5145,7 +5728,7 @@ func TestContext2Apply_vars(t *testing.T) { t.Fatalf("bad: %s", e) } - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -5203,7 +5786,7 @@ func TestContext2Apply_createBefore_depends(t *testing.T) { State: state, }) - if _, err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -5312,7 +5895,7 @@ func TestContext2Apply_singleDestroy(t *testing.T) { State: state, }) - if _, 
err := ctx.Plan(nil); err != nil { + if _, err := ctx.Plan(); err != nil { t.Fatalf("err: %s", err) } @@ -5482,6 +6065,15 @@ func checkStateString(t *testing.T, state *State, expected string) { } } +func resourceState(resourceType, resourceID string) *ResourceState { + return &ResourceState{ + Type: resourceType, + Primary: &InstanceState{ + ID: resourceID, + }, + } +} + const testContextGraph = ` root: root aws_instance.bar @@ -5504,6 +6096,16 @@ module.child: ID = new ` +const testContextRefreshOutputStr = ` +aws_instance.web: + ID = foo + foo = bar + +Outputs: + +foo = bar +` + const testContextRefreshOutputPartialStr = ` ` diff --git a/terraform/diff.go b/terraform/diff.go index dbaf37e0249c..96b8c654adf3 100644 --- a/terraform/diff.go +++ b/terraform/diff.go @@ -357,18 +357,20 @@ func (d *InstanceDiff) RequiresNew() bool { // we say "same", it is not necessarily exactly equal. Instead, it is // just checking that the same attributes are changing, a destroy // isn't suddenly happening, etc. -func (d *InstanceDiff) Same(d2 *InstanceDiff) bool { +func (d *InstanceDiff) Same(d2 *InstanceDiff) (bool, string) { if d == nil && d2 == nil { - return true + return true, "" } else if d == nil || d2 == nil { - return false + return false, "both nil" } if d.Destroy != d2.Destroy { - return false + return false, fmt.Sprintf( + "diff: Destroy; old: %t, new: %t", d.Destroy, d2.Destroy) } if d.RequiresNew() != d2.RequiresNew() { - return false + return false, fmt.Sprintf( + "diff RequiresNew; old: %t, new: %t", d.RequiresNew(), d2.RequiresNew()) } // Go through the old diff and make sure the new diff has all the @@ -405,6 +407,12 @@ func (d *InstanceDiff) Same(d2 *InstanceDiff) bool { _, ok := d2.Attributes[k] if !ok { + // If there's no new attribute, and the old diff expected the attribute + // to be removed, that's just fine. + if diffOld.NewRemoved { + continue + } + // No exact match, but maybe this is a set containing computed // values. 
So check if there is an approximate hash in the key // and if so, try to match the key. @@ -420,7 +428,7 @@ func (d *InstanceDiff) Same(d2 *InstanceDiff) bool { } re, err := regexp.Compile("^" + strings.Join(parts2, `\.`) + "$") if err != nil { - return false + return false, fmt.Sprintf("regexp failed to compile; err: %#v", err) } for k2, _ := range checkNew { if re.MatchString(k2) { @@ -452,7 +460,7 @@ func (d *InstanceDiff) Same(d2 *InstanceDiff) bool { } if !ok { - return false + return false, fmt.Sprintf("attribute mismatch: %s", k) } } @@ -477,8 +485,13 @@ func (d *InstanceDiff) Same(d2 *InstanceDiff) bool { // Check for leftover attributes if len(checkNew) > 0 { - return false + extras := make([]string, 0, len(checkNew)) + for attr, _ := range checkNew { + extras = append(extras, attr) + } + return false, + fmt.Sprintf("extra attributes: %s", strings.Join(extras, ", ")) } - return true + return true, "" } diff --git a/terraform/diff_test.go b/terraform/diff_test.go index 00e958cfeaac..4eeb8d3879c9 100644 --- a/terraform/diff_test.go +++ b/terraform/diff_test.go @@ -361,29 +361,34 @@ func TestInstanceDiffSame(t *testing.T) { cases := []struct { One, Two *InstanceDiff Same bool + Reason string }{ { &InstanceDiff{}, &InstanceDiff{}, true, + "", }, { nil, nil, true, + "", }, { &InstanceDiff{Destroy: false}, &InstanceDiff{Destroy: true}, false, + "diff: Destroy; old: false, new: true", }, { &InstanceDiff{Destroy: true}, &InstanceDiff{Destroy: true}, true, + "", }, { @@ -398,6 +403,7 @@ func TestInstanceDiffSame(t *testing.T) { }, }, true, + "", }, { @@ -412,6 +418,7 @@ func TestInstanceDiffSame(t *testing.T) { }, }, false, + "attribute mismatch: bar", }, // Extra attributes @@ -428,6 +435,7 @@ func TestInstanceDiffSame(t *testing.T) { }, }, false, + "extra attributes: bar", }, { @@ -442,6 +450,7 @@ func TestInstanceDiffSame(t *testing.T) { }, }, false, + "diff RequiresNew; old: true, new: false", }, { @@ -463,6 +472,7 @@ func TestInstanceDiffSame(t *testing.T) { 
}, }, true, + "", }, { @@ -491,6 +501,7 @@ func TestInstanceDiffSame(t *testing.T) { }, }, true, + "", }, { @@ -506,13 +517,53 @@ func TestInstanceDiffSame(t *testing.T) { Attributes: map[string]*ResourceAttrDiff{}, }, true, + "", + }, + + // In a DESTROY/CREATE scenario, the plan diff will be run against the + // state of the old instance, while the apply diff will be run against an + // empty state (because the state is cleared when the destroy runs.) + // For complex attributes, this can result in keys that seem to disappear + // between the two diffs, when in reality everything is working just fine. + // + // Same() needs to take into account this scenario by analyzing NewRemoved + // and treating as "Same" a diff that does indeed have that key removed. + { + &InstanceDiff{ + Attributes: map[string]*ResourceAttrDiff{ + "somemap.oldkey": &ResourceAttrDiff{ + Old: "long ago", + New: "", + NewRemoved: true, + }, + "somemap.newkey": &ResourceAttrDiff{ + Old: "", + New: "brave new world", + }, + }, + }, + &InstanceDiff{ + Attributes: map[string]*ResourceAttrDiff{ + "somemap.newkey": &ResourceAttrDiff{ + Old: "", + New: "brave new world", + }, + }, + }, + true, + "", }, } for i, tc := range cases { - actual := tc.One.Same(tc.Two) - if actual != tc.Same { - t.Fatalf("%d:\n\n%#v\n\n%#v", i, tc.One, tc.Two) + same, reason := tc.One.Same(tc.Two) + if same != tc.Same { + t.Fatalf("%d: expected same: %t, got %t (%s)\n\n one: %#v\n\ntwo: %#v", + i, tc.Same, same, reason, tc.One, tc.Two) + } + if reason != tc.Reason { + t.Fatalf( + "%d: bad reason\n\nexpected: %#v\n\ngot: %#v", i, tc.Reason, reason) } } } diff --git a/terraform/eval_context.go b/terraform/eval_context.go index 120cf71e7721..4f6d7c2e74e9 100644 --- a/terraform/eval_context.go +++ b/terraform/eval_context.go @@ -33,6 +33,7 @@ type EvalContext interface { // is used to store the provider configuration for inheritance lookups // with ParentProviderConfig(). 
ConfigureProvider(string, *ResourceConfig) error
+	SetProviderConfig(string, *ResourceConfig) error
 	ParentProviderConfig(string) *ResourceConfig
 
 	// ProviderInput and SetProviderInput are used to configure providers
diff --git a/terraform/eval_context_builtin.go b/terraform/eval_context_builtin.go
index 15acb01eb21a..d25ea76ff13e 100644
--- a/terraform/eval_context_builtin.go
+++ b/terraform/eval_context_builtin.go
@@ -106,6 +106,15 @@ func (ctx *BuiltinEvalContext) ConfigureProvider(
 		return fmt.Errorf("Provider '%s' not initialized", n)
 	}
 
+	if err := ctx.SetProviderConfig(n, cfg); err != nil {
+		return err
+	}
+
+	return p.Configure(cfg)
+}
+
+func (ctx *BuiltinEvalContext) SetProviderConfig(
+	n string, cfg *ResourceConfig) error {
 	providerPath := make([]string, len(ctx.Path())+1)
 	copy(providerPath, ctx.Path())
 	providerPath[len(providerPath)-1] = n
@@ -115,7 +124,7 @@ func (ctx *BuiltinEvalContext) ConfigureProvider(
 	ctx.ProviderConfigCache[PathCacheKey(providerPath)] = cfg
 	ctx.ProviderLock.Unlock()
 
-	return p.Configure(cfg)
+	return nil
 }
 
 func (ctx *BuiltinEvalContext) ProviderInput(n string) map[string]interface{} {
diff --git a/terraform/eval_context_mock.go b/terraform/eval_context_mock.go
index 3190f680acf6..27a98c2d5d98 100644
--- a/terraform/eval_context_mock.go
+++ b/terraform/eval_context_mock.go
@@ -38,6 +38,10 @@ type MockEvalContext struct {
 	ConfigureProviderConfig *ResourceConfig
 	ConfigureProviderError  error
 
+	SetProviderConfigCalled bool
+	SetProviderConfigName   string
+	SetProviderConfigConfig *ResourceConfig
+
 	ParentProviderConfigCalled bool
 	ParentProviderConfigName   string
 	ParentProviderConfigConfig *ResourceConfig
@@ -107,6 +111,14 @@ func (c *MockEvalContext) ConfigureProvider(n string, cfg *ResourceConfig) error
 	return c.ConfigureProviderError
 }
 
+func (c *MockEvalContext) SetProviderConfig(
+	n string, cfg *ResourceConfig) error {
+	c.SetProviderConfigCalled = true
+	c.SetProviderConfigName = n
+	c.SetProviderConfigConfig = cfg
+	return nil
+}
+
func (c *MockEvalContext) ParentProviderConfig(n string) *ResourceConfig { c.ParentProviderConfigCalled = true c.ParentProviderConfigName = n diff --git a/terraform/eval_diff.go b/terraform/eval_diff.go index 4a06f4c43143..0dfc96589c07 100644 --- a/terraform/eval_diff.go +++ b/terraform/eval_diff.go @@ -38,8 +38,9 @@ func (n *EvalCompareDiff) Eval(ctx EvalContext) (interface{}, error) { } }() - if !one.Same(two) { - log.Printf("[ERROR] %s: diff's didn't match", n.Info.Id) + if same, reason := one.Same(two); !same { + log.Printf("[ERROR] %s: diffs didn't match", n.Info.Id) + log.Printf("[ERROR] %s: reason: %s", n.Info.Id, reason) log.Printf("[ERROR] %s: diff one: %#v", n.Info.Id, one) log.Printf("[ERROR] %s: diff two: %#v", n.Info.Id, two) return nil, fmt.Errorf( diff --git a/terraform/eval_provider.go b/terraform/eval_provider.go index f648fe46fab8..e5205a556dda 100644 --- a/terraform/eval_provider.go +++ b/terraform/eval_provider.go @@ -6,17 +6,29 @@ import ( "github.com/hashicorp/terraform/config" ) -// EvalConfigProvider is an EvalNode implementation that configures -// a provider that is already initialized and retrieved. -type EvalConfigProvider struct { +// EvalSetProviderConfig sets the parent configuration for a provider +// without configuring that provider, validating it, etc. +type EvalSetProviderConfig struct { Provider string Config **ResourceConfig } -func (n *EvalConfigProvider) Eval(ctx EvalContext) (interface{}, error) { +func (n *EvalSetProviderConfig) Eval(ctx EvalContext) (interface{}, error) { + return nil, ctx.SetProviderConfig(n.Provider, *n.Config) +} + +// EvalBuildProviderConfig outputs a *ResourceConfig that is properly +// merged with parents and inputs on top of what is configured in the file. 
+type EvalBuildProviderConfig struct { + Provider string + Config **ResourceConfig + Output **ResourceConfig +} + +func (n *EvalBuildProviderConfig) Eval(ctx EvalContext) (interface{}, error) { cfg := *n.Config - // If we have a configuration set, then use that + // If we have a configuration set, then merge that in if input := ctx.ProviderInput(n.Provider); input != nil { rc, err := config.NewRawConfig(input) if err != nil { @@ -33,7 +45,19 @@ func (n *EvalConfigProvider) Eval(ctx EvalContext) (interface{}, error) { cfg = NewResourceConfig(merged) } - return nil, ctx.ConfigureProvider(n.Provider, cfg) + *n.Output = cfg + return nil, nil +} + +// EvalConfigProvider is an EvalNode implementation that configures +// a provider that is already initialized and retrieved. +type EvalConfigProvider struct { + Provider string + Config **ResourceConfig +} + +func (n *EvalConfigProvider) Eval(ctx EvalContext) (interface{}, error) { + return nil, ctx.ConfigureProvider(n.Provider, *n.Config) } // EvalInitProvider is an EvalNode implementation that initializes a provider @@ -72,7 +96,7 @@ func (n *EvalGetProvider) Eval(ctx EvalContext) (interface{}, error) { type EvalInputProvider struct { Name string Provider *ResourceProvider - Config *config.RawConfig + Config **ResourceConfig } func (n *EvalInputProvider) Eval(ctx EvalContext) (interface{}, error) { @@ -81,8 +105,7 @@ func (n *EvalInputProvider) Eval(ctx EvalContext) (interface{}, error) { return nil, nil } - rc := NewResourceConfig(n.Config) - rc.Config = make(map[string]interface{}) + rc := *n.Config // Wrap the input into a namespace input := &PrefixUIInput{ diff --git a/terraform/eval_provider_test.go b/terraform/eval_provider_test.go index 849e434a6ae8..5d50d746ba38 100644 --- a/terraform/eval_provider_test.go +++ b/terraform/eval_provider_test.go @@ -5,6 +5,71 @@ import ( "testing" ) +func TestEvalBuildProviderConfig_impl(t *testing.T) { + var _ EvalNode = new(EvalBuildProviderConfig) +} + +func 
TestEvalBuildProviderConfig(t *testing.T) { + config := testResourceConfig(t, map[string]interface{}{}) + provider := "foo" + + n := &EvalBuildProviderConfig{ + Provider: provider, + Config: &config, + Output: &config, + } + + ctx := &MockEvalContext{ + ParentProviderConfigConfig: testResourceConfig(t, map[string]interface{}{ + "foo": "bar", + }), + ProviderInputConfig: map[string]interface{}{ + "bar": "baz", + }, + } + if _, err := n.Eval(ctx); err != nil { + t.Fatalf("err: %s", err) + } + + expected := map[string]interface{}{ + "foo": "bar", + "bar": "baz", + } + if !reflect.DeepEqual(config.Raw, expected) { + t.Fatalf("bad: %#v", config.Raw) + } +} + +func TestEvalBuildProviderConfig_parentPriority(t *testing.T) { + config := testResourceConfig(t, map[string]interface{}{}) + provider := "foo" + + n := &EvalBuildProviderConfig{ + Provider: provider, + Config: &config, + Output: &config, + } + + ctx := &MockEvalContext{ + ParentProviderConfigConfig: testResourceConfig(t, map[string]interface{}{ + "foo": "bar", + }), + ProviderInputConfig: map[string]interface{}{ + "foo": "baz", + }, + } + if _, err := n.Eval(ctx); err != nil { + t.Fatalf("err: %s", err) + } + + expected := map[string]interface{}{ + "foo": "bar", + } + if !reflect.DeepEqual(config.Raw, expected) { + t.Fatalf("bad: %#v", config.Raw) + } +} + func TestEvalConfigProvider_impl(t *testing.T) { var _ EvalNode = new(EvalConfigProvider) } diff --git a/terraform/eval_validate.go b/terraform/eval_validate.go index c6c1f20bae91..e808240a025c 100644 --- a/terraform/eval_validate.go +++ b/terraform/eval_validate.go @@ -57,21 +57,14 @@ RETURN: // EvalValidateProvider is an EvalNode implementation that validates // the configuration of a resource. 
type EvalValidateProvider struct { - ProviderName string - Provider *ResourceProvider - Config **ResourceConfig + Provider *ResourceProvider + Config **ResourceConfig } func (n *EvalValidateProvider) Eval(ctx EvalContext) (interface{}, error) { provider := *n.Provider config := *n.Config - // Get the parent configuration if there is one - if parent := ctx.ParentProviderConfig(n.ProviderName); parent != nil { - merged := parent.raw.Merge(config.raw) - config = NewResourceConfig(merged) - } - warns, errs := provider.Validate(config) if len(warns) == 0 && len(errs) == 0 { return nil, nil diff --git a/terraform/evaltree_provider.go b/terraform/evaltree_provider.go index 89937d562ab0..59916d9b5e9d 100644 --- a/terraform/evaltree_provider.go +++ b/terraform/evaltree_provider.go @@ -22,10 +22,19 @@ func ProviderEvalTree(n string, config *config.RawConfig) EvalNode { Name: n, Output: &provider, }, + &EvalInterpolate{ + Config: config, + Output: &resourceConfig, + }, + &EvalBuildProviderConfig{ + Provider: n, + Config: &resourceConfig, + Output: &resourceConfig, + }, &EvalInputProvider{ Name: n, Provider: &provider, - Config: config, + Config: &resourceConfig, }, }, }, @@ -44,10 +53,14 @@ func ProviderEvalTree(n string, config *config.RawConfig) EvalNode { Config: config, Output: &resourceConfig, }, + &EvalBuildProviderConfig{ + Provider: n, + Config: &resourceConfig, + Output: &resourceConfig, + }, &EvalValidateProvider{ - ProviderName: n, - Provider: &provider, - Config: &resourceConfig, + Provider: &provider, + Config: &resourceConfig, }, &EvalConfigProvider{ Provider: n, diff --git a/terraform/graph_builder.go b/terraform/graph_builder.go index 4d572695495e..f08e20f16d6b 100644 --- a/terraform/graph_builder.go +++ b/terraform/graph_builder.go @@ -65,6 +65,13 @@ type BuiltinGraphBuilder struct { // Provisioners is the list of provisioners supported. Provisioners []string + + // Targets is the user-specified list of resources to target. 
+ Targets []string + + // Destroy is set to true when we're in a `terraform destroy` or a + // `terraform plan -destroy` + Destroy bool } // Build builds the graph according to the steps returned by Steps. @@ -82,12 +89,17 @@ func (b *BuiltinGraphBuilder) Steps() []GraphTransformer { return []GraphTransformer{ // Create all our resources from the configuration and state &ConfigTransformer{Module: b.Root}, - &OrphanTransformer{State: b.State, Module: b.Root}, + &OrphanTransformer{ + State: b.State, + Module: b.Root, + Targeting: (len(b.Targets) > 0), + }, // Provider-related transformations &MissingProviderTransformer{Providers: b.Providers}, &ProviderTransformer{}, &PruneProviderTransformer{}, + &DisableProviderTransformer{}, // Provisioner-related transformations &MissingProvisionerTransformer{Provisioners: b.Provisioners}, @@ -104,6 +116,10 @@ func (b *BuiltinGraphBuilder) Steps() []GraphTransformer { }, }, + // Optionally reduces the graph to a user-specified list of targets and + // their dependencies. + &TargetsTransformer{Targets: b.Targets, Destroy: b.Destroy}, + // Create the destruction nodes &DestroyTransformer{}, &CreateBeforeDestroyTransformer{}, diff --git a/terraform/graph_builder_test.go b/terraform/graph_builder_test.go index 2f072ababe97..23d1eb8babdd 100644 --- a/terraform/graph_builder_test.go +++ b/terraform/graph_builder_test.go @@ -124,13 +124,9 @@ const testBasicGraphBuilderStr = ` const testBuiltinGraphBuilderBasicStr = ` aws_instance.db - aws_instance.db (destroy tainted) -aws_instance.db (destroy tainted) - aws_instance.web (destroy tainted) + provider.aws aws_instance.web aws_instance.db -aws_instance.web (destroy tainted) - provider.aws provider.aws ` diff --git a/terraform/graph_config_node.go b/terraform/graph_config_node.go index 625992f3f5a3..791431a71052 100644 --- a/terraform/graph_config_node.go +++ b/terraform/graph_config_node.go @@ -19,6 +19,30 @@ type graphNodeConfig interface { // be depended on. 
 	GraphNodeDependable
 	GraphNodeDependent
+
+	// ConfigType returns the type of thing in the configuration that
+	// this node represents, such as a resource, module, etc.
+	ConfigType() GraphNodeConfigType
+}
+
+// GraphNodeAddressable is an interface that all graph nodes for the
+// configuration graph need to implement in order to be addressed / targeted
+// properly.
+type GraphNodeAddressable interface {
+	graphNodeConfig
+
+	ResourceAddress() *ResourceAddress
+}
+
+// GraphNodeTargetable is an interface for graph nodes to implement when they
+// need to be told about incoming targets. This is useful for nodes that need
+// to respect targets as they dynamically expand. Note that the list of targets
+// provided will contain every target provided, and each implementing graph
+// node must filter this list to targets considered relevant.
+type GraphNodeTargetable interface {
+	GraphNodeAddressable
+
+	SetTargets([]ResourceAddress)
 }
 
 // GraphNodeConfigModule represents a module within the configuration graph.
@@ -28,6 +52,10 @@ type GraphNodeConfigModule struct { Tree *module.Tree } +func (n *GraphNodeConfigModule) ConfigType() GraphNodeConfigType { + return GraphNodeConfigTypeModule +} + func (n *GraphNodeConfigModule) DependableName() []string { return []string{n.Name()} } @@ -105,6 +133,10 @@ func (n *GraphNodeConfigOutput) Name() string { return fmt.Sprintf("output.%s", n.Output.Name) } +func (n *GraphNodeConfigOutput) ConfigType() GraphNodeConfigType { + return GraphNodeConfigTypeOutput +} + func (n *GraphNodeConfigOutput) DependableName() []string { return []string{n.Name()} } @@ -147,6 +179,10 @@ func (n *GraphNodeConfigProvider) Name() string { return fmt.Sprintf("provider.%s", n.Provider.Name) } +func (n *GraphNodeConfigProvider) ConfigType() GraphNodeConfigType { + return GraphNodeConfigTypeProvider +} + func (n *GraphNodeConfigProvider) DependableName() []string { return []string{n.Name()} } @@ -173,6 +209,11 @@ func (n *GraphNodeConfigProvider) ProviderName() string { return n.Provider.Name } +// GraphNodeProvider implementation +func (n *GraphNodeConfigProvider) ProviderConfig() *config.RawConfig { + return n.Provider.RawConfig +} + // GraphNodeDotter impl. func (n *GraphNodeConfigProvider) Dot(name string) string { return fmt.Sprintf( @@ -191,6 +232,13 @@ type GraphNodeConfigResource struct { // If this is set to anything other than destroyModeNone, then this // resource represents a resource that will be destroyed in some way. 
DestroyMode GraphNodeDestroyMode + + // Used during DynamicExpand to target indexes + Targets []ResourceAddress +} + +func (n *GraphNodeConfigResource) ConfigType() GraphNodeConfigType { + return GraphNodeConfigTypeResource } func (n *GraphNodeConfigResource) DependableName() []string { @@ -279,6 +327,7 @@ func (n *GraphNodeConfigResource) DynamicExpand(ctx EvalContext) (*Graph, error) steps = append(steps, &ResourceCountTransformer{ Resource: n.Resource, Destroy: n.DestroyMode != DestroyNone, + Targets: n.Targets, }) } @@ -289,8 +338,9 @@ func (n *GraphNodeConfigResource) DynamicExpand(ctx EvalContext) (*Graph, error) // expand orphans, which have all the same semantics in a destroy // as a primary. steps = append(steps, &OrphanTransformer{ - State: state, - View: n.Resource.Id(), + State: state, + View: n.Resource.Id(), + Targeting: (len(n.Targets) > 0), }) steps = append(steps, &DeposedTransformer{ @@ -314,6 +364,22 @@ func (n *GraphNodeConfigResource) DynamicExpand(ctx EvalContext) (*Graph, error) return b.Build(ctx.Path()) } +// GraphNodeAddressable impl. +func (n *GraphNodeConfigResource) ResourceAddress() *ResourceAddress { + return &ResourceAddress{ + // Indicates no specific index; will match on other three fields + Index: -1, + InstanceType: TypePrimary, + Name: n.Resource.Name, + Type: n.Resource.Type, + } +} + +// GraphNodeTargetable impl. +func (n *GraphNodeConfigResource) SetTargets(targets []ResourceAddress) { + n.Targets = targets +} + // GraphNodeEvalable impl. 
 func (n *GraphNodeConfigResource) EvalTree() EvalNode {
 	return &EvalSequence{
@@ -381,11 +447,44 @@ func (n *graphNodeResourceDestroy) CreateNode() dag.Vertex {
 }
 
 func (n *graphNodeResourceDestroy) DestroyInclude(d *ModuleDiff, s *ModuleState) bool {
-	// Always include anything other than the primary destroy
-	if n.DestroyMode != DestroyPrimary {
+	switch n.DestroyMode {
+	case DestroyPrimary:
+		return n.destroyIncludePrimary(d, s)
+	case DestroyTainted:
+		return n.destroyIncludeTainted(d, s)
+	default:
 		return true
 	}
+}
+
+func (n *graphNodeResourceDestroy) destroyIncludeTainted(
+	d *ModuleDiff, s *ModuleState) bool {
+	// If there is no state, there can't be anything tainted.
+	if s == nil {
+		return false
+	}
+
+	// Grab the ID, which is the prefix (in the case count > 0 at some point)
+	prefix := n.Original.Resource.Id()
+
+	// Go through the resources and find any with our prefix. If there
+	// are any tainted, we need to keep the destroy node.
+	for k, v := range s.Resources {
+		if !strings.HasPrefix(k, prefix) {
+			continue
+		}
+
+		if len(v.Tainted) > 0 {
+			return true
+		}
+	}
+
+	// We didn't find any tainted instances, so drop the destroy node.
+	return false
+}
+func (n *graphNodeResourceDestroy) destroyIncludePrimary(
+	d *ModuleDiff, s *ModuleState) bool {
 	// Get the count, and specifically the raw value of the count
 	// (with interpolations and all). If the count is NOT a static "1",
 	// then we keep the destroy node no matter what.
@@ -456,15 +555,19 @@ func (n *graphNodeResourceDestroy) DestroyInclude(d *ModuleDiff, s *ModuleState)
 	// decreases to "1".
 	if s != nil {
 		for k, v := range s.Resources {
-			if !strings.HasPrefix(k, prefix) {
+			// Ignore exact matches
+			if k == prefix {
 				continue
 			}
 
-			// Ignore exact matches and the 0'th index. We only care
-			// about if there is a decrease in count.
-			if k == prefix {
+			// Ignore anything that doesn't have a "." afterwards so that
+			// we only get our own resource and any counts on it.
+ if !strings.HasPrefix(k, prefix+".") { continue } + + // Ignore exact matches and the 0'th index. We only care + // about if there is a decrease in count. if k == prefix+".0" { continue } @@ -504,6 +607,10 @@ func (n *graphNodeModuleExpanded) Name() string { return fmt.Sprintf("%s (expanded)", dag.VertexName(n.Original)) } +func (n *graphNodeModuleExpanded) ConfigType() GraphNodeConfigType { + return GraphNodeConfigTypeModule +} + // GraphNodeDotter impl. func (n *graphNodeModuleExpanded) Dot(name string) string { return fmt.Sprintf( diff --git a/terraform/graph_config_node_type.go b/terraform/graph_config_node_type.go new file mode 100644 index 000000000000..f0196096fbc1 --- /dev/null +++ b/terraform/graph_config_node_type.go @@ -0,0 +1,15 @@ +package terraform + +//go:generate stringer -type=GraphNodeConfigType graph_config_node_type.go + +// GraphNodeConfigType is an enum for the type of thing that a graph +// node represents from the configuration. +type GraphNodeConfigType int + +const ( + GraphNodeConfigTypeInvalid GraphNodeConfigType = 0 + GraphNodeConfigTypeResource GraphNodeConfigType = iota + GraphNodeConfigTypeProvider + GraphNodeConfigTypeModule + GraphNodeConfigTypeOutput +) diff --git a/terraform/graphnodeconfigtype_string.go b/terraform/graphnodeconfigtype_string.go new file mode 100644 index 000000000000..d0748979e520 --- /dev/null +++ b/terraform/graphnodeconfigtype_string.go @@ -0,0 +1,16 @@ +// generated by stringer -type=GraphNodeConfigType graph_config_node_type.go; DO NOT EDIT + +package terraform + +import "fmt" + +const _GraphNodeConfigType_name = "GraphNodeConfigTypeInvalidGraphNodeConfigTypeResourceGraphNodeConfigTypeProviderGraphNodeConfigTypeModuleGraphNodeConfigTypeOutput" + +var _GraphNodeConfigType_index = [...]uint8{0, 26, 53, 80, 105, 130} + +func (i GraphNodeConfigType) String() string { + if i < 0 || i+1 >= GraphNodeConfigType(len(_GraphNodeConfigType_index)) { + return fmt.Sprintf("GraphNodeConfigType(%d)", i) + } + return 
_GraphNodeConfigType_name[_GraphNodeConfigType_index[i]:_GraphNodeConfigType_index[i+1]] +} diff --git a/terraform/instancetype.go b/terraform/instancetype.go new file mode 100644 index 000000000000..08959717b979 --- /dev/null +++ b/terraform/instancetype.go @@ -0,0 +1,13 @@ +package terraform + +//go:generate stringer -type=InstanceType instancetype.go + +// InstanceType is an enum of the various types of instances store in the State +type InstanceType int + +const ( + TypeInvalid InstanceType = iota + TypePrimary + TypeTainted + TypeDeposed +) diff --git a/terraform/instancetype_string.go b/terraform/instancetype_string.go new file mode 100644 index 000000000000..fc8697644ae6 --- /dev/null +++ b/terraform/instancetype_string.go @@ -0,0 +1,16 @@ +// generated by stringer -type=InstanceType instancetype.go; DO NOT EDIT + +package terraform + +import "fmt" + +const _InstanceType_name = "TypeInvalidTypePrimaryTypeTaintedTypeDeposed" + +var _InstanceType_index = [...]uint8{0, 11, 22, 33, 44} + +func (i InstanceType) String() string { + if i < 0 || i+1 >= InstanceType(len(_InstanceType_index)) { + return fmt.Sprintf("InstanceType(%d)", i) + } + return _InstanceType_name[_InstanceType_index[i]:_InstanceType_index[i+1]] +} diff --git a/terraform/interpolate.go b/terraform/interpolate.go index cf88ad825096..a1e6d37af41f 100644 --- a/terraform/interpolate.go +++ b/terraform/interpolate.go @@ -193,7 +193,7 @@ func (i *Interpolater) valueResourceVar( result map[string]ast.Variable) error { // If we're computing all dynamic fields, then module vars count // and we mark it as computed. - if i.Operation == walkValidate || i.Operation == walkRefresh { + if i.Operation == walkValidate { result[n] = ast.Variable{ Value: config.UnknownVariableValue, Type: ast.TypeString, @@ -353,6 +353,14 @@ func (i *Interpolater) computeResourceVariable( } MISSING: + // If the operation is refresh, it isn't an error for a value to + // be unknown. 
Instead, we return that the value is computed so + // that the graph can continue to refresh other nodes. It doesn't + // matter because the config isn't interpolated anyways. + if i.Operation == walkRefresh { + return config.UnknownVariableValue, nil + } + return "", fmt.Errorf( "Resource '%s' does not have attribute '%s' "+ "for variable '%s'", diff --git a/terraform/plan.go b/terraform/plan.go index e73fde3832ea..715136edcfc3 100644 --- a/terraform/plan.go +++ b/terraform/plan.go @@ -18,15 +18,6 @@ func init() { gob.Register(make(map[string]string)) } -// PlanOpts are the options used to generate an execution plan for -// Terraform. -type PlanOpts struct { - // If set to true, then the generated plan will destroy all resources - // that are created. Otherwise, it will move towards the desired state - // specified in the configuration. - Destroy bool -} - // Plan represents a single Terraform execution plan, which contains // all the information necessary to make an infrastructure change. type Plan struct { diff --git a/terraform/resource_address.go b/terraform/resource_address.go new file mode 100644 index 000000000000..b54a923d8847 --- /dev/null +++ b/terraform/resource_address.go @@ -0,0 +1,98 @@ +package terraform + +import ( + "fmt" + "regexp" + "strconv" +) + +// ResourceAddress is a way of identifying an individual resource (or, +// eventually, a subset of resources) within the state. It is used for Targets. 
+type ResourceAddress struct { + Index int + InstanceType InstanceType + Name string + Type string +} + +func ParseResourceAddress(s string) (*ResourceAddress, error) { + matches, err := tokenizeResourceAddress(s) + if err != nil { + return nil, err + } + resourceIndex := -1 + if matches["index"] != "" { + var err error + if resourceIndex, err = strconv.Atoi(matches["index"]); err != nil { + return nil, err + } + } + instanceType := TypePrimary + if matches["instance_type"] != "" { + var err error + if instanceType, err = ParseInstanceType(matches["instance_type"]); err != nil { + return nil, err + } + } + + return &ResourceAddress{ + Index: resourceIndex, + InstanceType: instanceType, + Name: matches["name"], + Type: matches["type"], + }, nil +} + +func (addr *ResourceAddress) Equals(raw interface{}) bool { + other, ok := raw.(*ResourceAddress) + if !ok { + return false + } + + indexMatch := (addr.Index == -1 || + other.Index == -1 || + addr.Index == other.Index) + + return (indexMatch && + addr.InstanceType == other.InstanceType && + addr.Name == other.Name && + addr.Type == other.Type) +} + +func ParseInstanceType(s string) (InstanceType, error) { + switch s { + case "primary": + return TypePrimary, nil + case "deposed": + return TypeDeposed, nil + case "tainted": + return TypeTainted, nil + default: + return TypeInvalid, fmt.Errorf("Unexpected value for InstanceType field: %q", s) + } +} + +func tokenizeResourceAddress(s string) (map[string]string, error) { + // Example of portions of the regexp below using the + // string "aws_instance.web.tainted[1]" + re := regexp.MustCompile(`\A` + + // "aws_instance" + `(?P<type>\w+)\.` + + // "web" + `(?P<name>\w+)` + + // "tainted" (optional, omission implies: "primary") + `(?:\.(?P<instance_type>\w+))?` + + // "1" (optional, omission implies: "0") + `(?:\[(?P<index>\d+)\])?` + + `\z`) + groupNames := re.SubexpNames() + rawMatches := re.FindAllStringSubmatch(s, -1) + if len(rawMatches) != 1 { + return nil, fmt.Errorf("Problem parsing address: %q", s) + }
+ matches := make(map[string]string) + for i, m := range rawMatches[0] { + matches[groupNames[i]] = m + } + return matches, nil +} diff --git a/terraform/resource_address_test.go b/terraform/resource_address_test.go new file mode 100644 index 000000000000..2a8caa1f8f96 --- /dev/null +++ b/terraform/resource_address_test.go @@ -0,0 +1,207 @@ +package terraform + +import ( + "reflect" + "testing" +) + +func TestParseResourceAddress(t *testing.T) { + cases := map[string]struct { + Input string + Expected *ResourceAddress + }{ + "implicit primary, no specific index": { + Input: "aws_instance.foo", + Expected: &ResourceAddress{ + Type: "aws_instance", + Name: "foo", + InstanceType: TypePrimary, + Index: -1, + }, + }, + "implicit primary, explicit index": { + Input: "aws_instance.foo[2]", + Expected: &ResourceAddress{ + Type: "aws_instance", + Name: "foo", + InstanceType: TypePrimary, + Index: 2, + }, + }, + "explicit primary, explicit index": { + Input: "aws_instance.foo.primary[2]", + Expected: &ResourceAddress{ + Type: "aws_instance", + Name: "foo", + InstanceType: TypePrimary, + Index: 2, + }, + }, + "tainted": { + Input: "aws_instance.foo.tainted", + Expected: &ResourceAddress{ + Type: "aws_instance", + Name: "foo", + InstanceType: TypeTainted, + Index: -1, + }, + }, + "deposed": { + Input: "aws_instance.foo.deposed", + Expected: &ResourceAddress{ + Type: "aws_instance", + Name: "foo", + InstanceType: TypeDeposed, + Index: -1, + }, + }, + } + + for tn, tc := range cases { + out, err := ParseResourceAddress(tc.Input) + if err != nil { + t.Fatalf("unexpected err: %#v", err) + } + + if !reflect.DeepEqual(out, tc.Expected) { + t.Fatalf("bad: %q\n\nexpected:\n%#v\n\ngot:\n%#v", tn, tc.Expected, out) + } + } +} + +func TestResourceAddressEquals(t *testing.T) { + cases := map[string]struct { + Address *ResourceAddress + Other interface{} + Expect bool + }{ + "basic match": { + Address: &ResourceAddress{ + Type: "aws_instance", + Name: "foo", + InstanceType: TypePrimary, + 
Index: 0, + }, + Other: &ResourceAddress{ + Type: "aws_instance", + Name: "foo", + InstanceType: TypePrimary, + Index: 0, + }, + Expect: true, + }, + "address does not set index": { + Address: &ResourceAddress{ + Type: "aws_instance", + Name: "foo", + InstanceType: TypePrimary, + Index: -1, + }, + Other: &ResourceAddress{ + Type: "aws_instance", + Name: "foo", + InstanceType: TypePrimary, + Index: 3, + }, + Expect: true, + }, + "other does not set index": { + Address: &ResourceAddress{ + Type: "aws_instance", + Name: "foo", + InstanceType: TypePrimary, + Index: 3, + }, + Other: &ResourceAddress{ + Type: "aws_instance", + Name: "foo", + InstanceType: TypePrimary, + Index: -1, + }, + Expect: true, + }, + "neither sets index": { + Address: &ResourceAddress{ + Type: "aws_instance", + Name: "foo", + InstanceType: TypePrimary, + Index: -1, + }, + Other: &ResourceAddress{ + Type: "aws_instance", + Name: "foo", + InstanceType: TypePrimary, + Index: -1, + }, + Expect: true, + }, + "different type": { + Address: &ResourceAddress{ + Type: "aws_instance", + Name: "foo", + InstanceType: TypePrimary, + Index: 0, + }, + Other: &ResourceAddress{ + Type: "aws_vpc", + Name: "foo", + InstanceType: TypePrimary, + Index: 0, + }, + Expect: false, + }, + "different name": { + Address: &ResourceAddress{ + Type: "aws_instance", + Name: "foo", + InstanceType: TypePrimary, + Index: 0, + }, + Other: &ResourceAddress{ + Type: "aws_instance", + Name: "bar", + InstanceType: TypePrimary, + Index: 0, + }, + Expect: false, + }, + "different instance type": { + Address: &ResourceAddress{ + Type: "aws_instance", + Name: "foo", + InstanceType: TypePrimary, + Index: 0, + }, + Other: &ResourceAddress{ + Type: "aws_instance", + Name: "foo", + InstanceType: TypeTainted, + Index: 0, + }, + Expect: false, + }, + "different index": { + Address: &ResourceAddress{ + Type: "aws_instance", + Name: "foo", + InstanceType: TypePrimary, + Index: 0, + }, + Other: &ResourceAddress{ + Type: "aws_instance", + Name: 
"foo", + InstanceType: TypePrimary, + Index: 1, + }, + Expect: false, + }, + } + + for tn, tc := range cases { + actual := tc.Address.Equals(tc.Other) + if actual != tc.Expect { + t.Fatalf("%q: expected equals: %t, got %t for:\n%#v\n%#v", + tn, tc.Expect, actual, tc.Address, tc.Other) + } + } +} diff --git a/terraform/state.go b/terraform/state.go index ddf57cd7d121..20c0da501771 100644 --- a/terraform/state.go +++ b/terraform/state.go @@ -214,7 +214,20 @@ func (s *State) DeepCopy() *State { // IncrementSerialMaybe increments the serial number of this state // if it different from the other state. func (s *State) IncrementSerialMaybe(other *State) { + if s == nil { + return + } + if other == nil { + return + } + if s.Serial > other.Serial { + return + } if !s.Equal(other) { + if other.Serial > s.Serial { + s.Serial = other.Serial + } + s.Serial++ } } @@ -331,6 +344,10 @@ func (r *RemoteState) Equals(other *RemoteState) bool { return true } +func (r *RemoteState) GoString() string { + return fmt.Sprintf("*%#v", *r) +} + // ModuleState is used to track all the state relevant to a single // module. Previous to Terraform 0.3, all state belonged to the "root" // module. @@ -832,12 +849,20 @@ type InstanceState struct { // that is necessary for the Terraform run to complete, but is not // persisted to a state file. Ephemeral EphemeralState `json:"-"` + + // Meta is a simple K/V map that is persisted to the State but otherwise + // ignored by Terraform core. It's meant to be used for accounting by + // external client code. 
+ Meta map[string]string `json:"meta,omitempty"` } func (i *InstanceState) init() { if i.Attributes == nil { i.Attributes = make(map[string]string) } + if i.Meta == nil { + i.Meta = make(map[string]string) + } i.Ephemeral.init() } @@ -855,9 +880,19 @@ func (i *InstanceState) deepcopy() *InstanceState { n.Attributes[k] = v } } + if i.Meta != nil { + n.Meta = make(map[string]string, len(i.Meta)) + for k, v := range i.Meta { + n.Meta[k] = v + } + } return n } +func (s *InstanceState) Empty() bool { + return s == nil || s.ID == "" +} + func (s *InstanceState) Equal(other *InstanceState) bool { // Short circuit some nil checks if s == nil || other == nil { diff --git a/terraform/state_test.go b/terraform/state_test.go index 9dfbbbf04e58..7f3dbb5674f3 100644 --- a/terraform/state_test.go +++ b/terraform/state_test.go @@ -178,6 +178,50 @@ func TestStateEqual(t *testing.T) { } } +func TestStateIncrementSerialMaybe(t *testing.T) { + cases := map[string]struct { + S1, S2 *State + Serial int64 + }{ + "S2 is nil": { + &State{}, + nil, + 0, + }, + "S2 is identical": { + &State{}, + &State{}, + 0, + }, + "S2 is different": { + &State{}, + &State{ + Modules: []*ModuleState{ + &ModuleState{Path: rootModulePath}, + }, + }, + 1, + }, + "S1 serial is higher": { + &State{Serial: 5}, + &State{ + Serial: 3, + Modules: []*ModuleState{ + &ModuleState{Path: rootModulePath}, + }, + }, + 5, + }, + } + + for name, tc := range cases { + tc.S1.IncrementSerialMaybe(tc.S2) + if tc.S1.Serial != tc.Serial { + t.Fatalf("Bad: %s\nGot: %d", name, tc.S1.Serial) + } + } +} + func TestResourceStateEqual(t *testing.T) { cases := []struct { Result bool @@ -322,6 +366,34 @@ func TestResourceStateTaint(t *testing.T) { } } +func TestInstanceStateEmpty(t *testing.T) { + cases := map[string]struct { + In *InstanceState + Result bool + }{ + "nil is empty": { + nil, + true, + }, + "non-nil but without ID is empty": { + &InstanceState{}, + true, + }, + "with ID is not empty": { + &InstanceState{ + ID: "i-abc123", 
+ }, + false, + }, + } + + for tn, tc := range cases { + if tc.In.Empty() != tc.Result { + t.Fatalf("%q expected %#v to be empty: %#v", tn, tc.In, tc.Result) + } + } +} + func TestInstanceStateEqual(t *testing.T) { cases := []struct { Result bool diff --git a/terraform/terraform_test.go b/terraform/terraform_test.go index 94664791f259..6e80f92f0d7c 100644 --- a/terraform/terraform_test.go +++ b/terraform/terraform_test.go @@ -150,6 +150,14 @@ aws_instance.foo: type = aws_instance ` +const testTerraformInputVarOnlyUnsetStr = ` +aws_instance.foo: + ID = foo + bar = baz + foo = foovalue + type = aws_instance +` + const testTerraformInputVarsStr = ` aws_instance.bar: ID = foo @@ -916,6 +924,19 @@ STATE: ` +const testTerraformPlanModuleCycleStr = ` +DIFF: + +CREATE: aws_instance.b +CREATE: aws_instance.c + some_input: "" => "" + type: "" => "aws_instance" + +STATE: + + +` + const testTerraformPlanModuleDestroyStr = ` DIFF: diff --git a/terraform/test-fixtures/apply-targeted-count/main.tf b/terraform/test-fixtures/apply-targeted-count/main.tf new file mode 100644 index 000000000000..cd861898f203 --- /dev/null +++ b/terraform/test-fixtures/apply-targeted-count/main.tf @@ -0,0 +1,7 @@ +resource "aws_instance" "foo" { + count = 3 +} + +resource "aws_instance" "bar" { + count = 3 +} diff --git a/terraform/test-fixtures/apply-targeted/main.tf b/terraform/test-fixtures/apply-targeted/main.tf new file mode 100644 index 000000000000..b07fc97f4d46 --- /dev/null +++ b/terraform/test-fixtures/apply-targeted/main.tf @@ -0,0 +1,7 @@ +resource "aws_instance" "foo" { + num = "2" +} + +resource "aws_instance" "bar" { + foo = "bar" +} diff --git a/terraform/test-fixtures/input-provider-with-vars-and-module/child/main.tf b/terraform/test-fixtures/input-provider-with-vars-and-module/child/main.tf new file mode 100644 index 000000000000..7ec25bda0c90 --- /dev/null +++ b/terraform/test-fixtures/input-provider-with-vars-and-module/child/main.tf @@ -0,0 +1 @@ +resource "aws_instance" "foo" { } 
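The `InstanceState.Empty` helper exercised by the table test above relies on Go's nil-receiver semantics: a pointer-receiver method may be called on a nil pointer as long as the receiver is checked before being dereferenced. A minimal standalone sketch (the struct is reduced here to the single field `Empty` inspects; the real one in `terraform/state.go` carries many more):

```go
package main

import "fmt"

// InstanceState is reduced to the one field Empty inspects.
type InstanceState struct {
	ID string
}

// Empty is deliberately nil-safe: calling a pointer-receiver method on a
// nil *InstanceState is legal in Go because the receiver is checked
// before any field access.
func (s *InstanceState) Empty() bool {
	return s == nil || s.ID == ""
}

func main() {
	var missing *InstanceState
	fmt.Println(missing.Empty())                          // true
	fmt.Println((&InstanceState{}).Empty())               // true
	fmt.Println((&InstanceState{ID: "i-abc123"}).Empty()) // false
}
```

This pattern lets callers write `if state.Empty()` without guarding against nil first.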
diff --git a/terraform/test-fixtures/input-provider-with-vars-and-module/main.tf b/terraform/test-fixtures/input-provider-with-vars-and-module/main.tf new file mode 100644 index 000000000000..c5112dca05f1 --- /dev/null +++ b/terraform/test-fixtures/input-provider-with-vars-and-module/main.tf @@ -0,0 +1,7 @@ +provider "aws" { + access_key = "abc123" +} + +module "child" { + source = "./child" +} diff --git a/terraform/test-fixtures/input-vars-unset/main.tf b/terraform/test-fixtures/input-vars-unset/main.tf new file mode 100644 index 000000000000..28cf230e6d48 --- /dev/null +++ b/terraform/test-fixtures/input-vars-unset/main.tf @@ -0,0 +1,7 @@ +variable "foo" {} +variable "bar" {} + +resource "aws_instance" "foo" { + foo = "${var.foo}" + bar = "${var.bar}" +} diff --git a/terraform/test-fixtures/plan-module-cycle/child/main.tf b/terraform/test-fixtures/plan-module-cycle/child/main.tf new file mode 100644 index 000000000000..e2e60c1f086d --- /dev/null +++ b/terraform/test-fixtures/plan-module-cycle/child/main.tf @@ -0,0 +1,5 @@ +variable "in" {} + +output "out" { + value = "${var.in}" +} diff --git a/terraform/test-fixtures/plan-module-cycle/main.tf b/terraform/test-fixtures/plan-module-cycle/main.tf new file mode 100644 index 000000000000..e9c459721f53 --- /dev/null +++ b/terraform/test-fixtures/plan-module-cycle/main.tf @@ -0,0 +1,12 @@ +module "a" { + source = "./child" + in = "${aws_instance.b.id}" +} + +resource "aws_instance" "b" {} + +resource "aws_instance" "c" { + some_input = "${module.a.out}" + + depends_on = ["aws_instance.b"] +} diff --git a/terraform/test-fixtures/plan-module-provider-defaults-var/main.tf b/terraform/test-fixtures/plan-module-provider-defaults-var/main.tf index 83b2411543c0..e6e3f1c29369 100644 --- a/terraform/test-fixtures/plan-module-provider-defaults-var/main.tf +++ b/terraform/test-fixtures/plan-module-provider-defaults-var/main.tf @@ -5,3 +5,5 @@ module "child" { provider "aws" { from = "${var.foo}" } + +resource "aws_instance" 
"foo" {} diff --git a/terraform/test-fixtures/plan-targeted/main.tf b/terraform/test-fixtures/plan-targeted/main.tf new file mode 100644 index 000000000000..1b6cdae67b0e --- /dev/null +++ b/terraform/test-fixtures/plan-targeted/main.tf @@ -0,0 +1,7 @@ +resource "aws_instance" "foo" { + num = "2" +} + +resource "aws_instance" "bar" { + foo = "${aws_instance.foo.num}" +} diff --git a/terraform/test-fixtures/refresh-output/main.tf b/terraform/test-fixtures/refresh-output/main.tf new file mode 100644 index 000000000000..42a01bd5ca19 --- /dev/null +++ b/terraform/test-fixtures/refresh-output/main.tf @@ -0,0 +1,5 @@ +resource "aws_instance" "web" {} + +output "foo" { + value = "${aws_instance.web.foo}" +} diff --git a/terraform/test-fixtures/refresh-targeted-count/main.tf b/terraform/test-fixtures/refresh-targeted-count/main.tf new file mode 100644 index 000000000000..f564b629c1ac --- /dev/null +++ b/terraform/test-fixtures/refresh-targeted-count/main.tf @@ -0,0 +1,9 @@ +resource "aws_vpc" "metoo" {} +resource "aws_instance" "notme" { } +resource "aws_instance" "me" { + vpc_id = "${aws_vpc.metoo.id}" + count = 3 +} +resource "aws_elb" "meneither" { + instances = ["${aws_instance.me.*.id}"] +} diff --git a/terraform/test-fixtures/refresh-targeted/main.tf b/terraform/test-fixtures/refresh-targeted/main.tf new file mode 100644 index 000000000000..3a76184647fc --- /dev/null +++ b/terraform/test-fixtures/refresh-targeted/main.tf @@ -0,0 +1,8 @@ +resource "aws_vpc" "metoo" {} +resource "aws_instance" "notme" { } +resource "aws_instance" "me" { + vpc_id = "${aws_vpc.metoo.id}" +} +resource "aws_elb" "meneither" { + instances = ["${aws_instance.me.*.id}"] +} diff --git a/terraform/test-fixtures/transform-destroy-prefix/main.tf b/terraform/test-fixtures/transform-destroy-prefix/main.tf new file mode 100644 index 000000000000..dd85754d4727 --- /dev/null +++ b/terraform/test-fixtures/transform-destroy-prefix/main.tf @@ -0,0 +1,3 @@ +resource "aws_instance" "foo" {} + +resource 
"aws_instance" "foo-bar" {} diff --git a/terraform/test-fixtures/transform-provider-disable-keep/child/main.tf b/terraform/test-fixtures/transform-provider-disable-keep/child/main.tf new file mode 100644 index 000000000000..9d02c162c8ba --- /dev/null +++ b/terraform/test-fixtures/transform-provider-disable-keep/child/main.tf @@ -0,0 +1,7 @@ +variable "value" {} + +provider "aws" { + value = "${var.value}" +} + +resource "aws_instance" "foo" {} diff --git a/terraform/test-fixtures/transform-provider-disable-keep/main.tf b/terraform/test-fixtures/transform-provider-disable-keep/main.tf new file mode 100644 index 000000000000..7f9aa3f9fb22 --- /dev/null +++ b/terraform/test-fixtures/transform-provider-disable-keep/main.tf @@ -0,0 +1,9 @@ +variable "foo" {} + +module "child" { + source = "./child" + + value = "${var.foo}" +} + +resource "aws_instance" "foo" {} diff --git a/terraform/test-fixtures/transform-provider-disable/child/main.tf b/terraform/test-fixtures/transform-provider-disable/child/main.tf new file mode 100644 index 000000000000..9d02c162c8ba --- /dev/null +++ b/terraform/test-fixtures/transform-provider-disable/child/main.tf @@ -0,0 +1,7 @@ +variable "value" {} + +provider "aws" { + value = "${var.value}" +} + +resource "aws_instance" "foo" {} diff --git a/terraform/test-fixtures/transform-provider-disable/main.tf b/terraform/test-fixtures/transform-provider-disable/main.tf new file mode 100644 index 000000000000..a405f9895d14 --- /dev/null +++ b/terraform/test-fixtures/transform-provider-disable/main.tf @@ -0,0 +1,7 @@ +variable "foo" {} + +module "child" { + source = "./child" + + value = "${var.foo}" +} diff --git a/terraform/test-fixtures/transform-targets-basic/main.tf b/terraform/test-fixtures/transform-targets-basic/main.tf new file mode 100644 index 000000000000..b845a1de69f8 --- /dev/null +++ b/terraform/test-fixtures/transform-targets-basic/main.tf @@ -0,0 +1,16 @@ +resource "aws_vpc" "me" {} + +resource "aws_subnet" "me" { + vpc_id = 
"${aws_vpc.me.id}" +} + +resource "aws_instance" "me" { + subnet_id = "${aws_subnet.me.id}" +} + +resource "aws_vpc" "notme" {} +resource "aws_subnet" "notme" {} +resource "aws_instance" "notme" {} +resource "aws_instance" "notmeeither" { + name = "${aws_instance.me.id}" +} diff --git a/terraform/test-fixtures/transform-targets-destroy/main.tf b/terraform/test-fixtures/transform-targets-destroy/main.tf new file mode 100644 index 000000000000..da99de43c81f --- /dev/null +++ b/terraform/test-fixtures/transform-targets-destroy/main.tf @@ -0,0 +1,18 @@ +resource "aws_vpc" "notme" {} + +resource "aws_subnet" "notme" { + vpc_id = "${aws_vpc.notme.id}" +} + +resource "aws_instance" "me" { + subnet_id = "${aws_subnet.notme.id}" +} + +resource "aws_instance" "notme" {} +resource "aws_instance" "metoo" { + name = "${aws_instance.me.id}" +} + +resource "aws_elb" "me" { + instances = "${aws_instance.me.*.id}" +} diff --git a/terraform/test-fixtures/validate-module-pc-inherit-unused/child/main.tf b/terraform/test-fixtures/validate-module-pc-inherit-unused/child/main.tf new file mode 100644 index 000000000000..919f140bba6b --- /dev/null +++ b/terraform/test-fixtures/validate-module-pc-inherit-unused/child/main.tf @@ -0,0 +1 @@ +resource "aws_instance" "foo" {} diff --git a/terraform/test-fixtures/validate-module-pc-inherit-unused/main.tf b/terraform/test-fixtures/validate-module-pc-inherit-unused/main.tf new file mode 100644 index 000000000000..32c8a38f1e6f --- /dev/null +++ b/terraform/test-fixtures/validate-module-pc-inherit-unused/main.tf @@ -0,0 +1,7 @@ +module "child" { + source = "./child" +} + +provider "aws" { + foo = "set" +} diff --git a/terraform/test-fixtures/validate-module-pc-vars/child/main.tf b/terraform/test-fixtures/validate-module-pc-vars/child/main.tf new file mode 100644 index 000000000000..3b4e15483d93 --- /dev/null +++ b/terraform/test-fixtures/validate-module-pc-vars/child/main.tf @@ -0,0 +1,7 @@ +variable "value" {} + +provider "aws" { + foo = 
"${var.value}" +} + +resource "aws_instance" "foo" {} diff --git a/terraform/test-fixtures/validate-module-pc-vars/main.tf b/terraform/test-fixtures/validate-module-pc-vars/main.tf new file mode 100644 index 000000000000..7d2d03e14291 --- /dev/null +++ b/terraform/test-fixtures/validate-module-pc-vars/main.tf @@ -0,0 +1,7 @@ +variable "provider_var" {} + +module "child" { + source = "./child" + + value = "${var.provider_var}" +} diff --git a/terraform/transform_destroy_test.go b/terraform/transform_destroy_test.go index 784ff0669efd..56acec0494c2 100644 --- a/terraform/transform_destroy_test.go +++ b/terraform/transform_destroy_test.go @@ -299,6 +299,104 @@ func TestPruneDestroyTransformer_countState(t *testing.T) { } } +func TestPruneDestroyTransformer_prefixMatch(t *testing.T) { + mod := testModule(t, "transform-destroy-prefix") + + diff := &Diff{} + state := &State{ + Modules: []*ModuleState{ + &ModuleState{ + Path: RootModulePath, + Resources: map[string]*ResourceState{ + "aws_instance.foo-bar.0": &ResourceState{ + Primary: &InstanceState{ID: "foo"}, + }, + + "aws_instance.foo-bar.1": &ResourceState{ + Primary: &InstanceState{ID: "foo"}, + }, + }, + }, + }, + } + + g := Graph{Path: RootModulePath} + { + tf := &ConfigTransformer{Module: mod} + if err := tf.Transform(&g); err != nil { + t.Fatalf("err: %s", err) + } + } + + { + tf := &DestroyTransformer{} + if err := tf.Transform(&g); err != nil { + t.Fatalf("err: %s", err) + } + } + + { + tf := &PruneDestroyTransformer{Diff: diff, State: state} + if err := tf.Transform(&g); err != nil { + t.Fatalf("err: %s", err) + } + } + + actual := strings.TrimSpace(g.String()) + expected := strings.TrimSpace(testTransformPruneDestroyPrefixStr) + if actual != expected { + t.Fatalf("bad:\n\n%s", actual) + } +} + +func TestPruneDestroyTransformer_tainted(t *testing.T) { + mod := testModule(t, "transform-destroy-basic") + + diff := &Diff{} + state := &State{ + Modules: []*ModuleState{ + &ModuleState{ + Path: RootModulePath, + 
Resources: map[string]*ResourceState{ + "aws_instance.bar": &ResourceState{ + Tainted: []*InstanceState{ + &InstanceState{ID: "foo"}, + }, + }, + }, + }, + }, + } + + g := Graph{Path: RootModulePath} + { + tf := &ConfigTransformer{Module: mod} + if err := tf.Transform(&g); err != nil { + t.Fatalf("err: %s", err) + } + } + + { + tf := &DestroyTransformer{} + if err := tf.Transform(&g); err != nil { + t.Fatalf("err: %s", err) + } + } + + { + tf := &PruneDestroyTransformer{Diff: diff, State: state} + if err := tf.Transform(&g); err != nil { + t.Fatalf("err: %s", err) + } + } + + actual := strings.TrimSpace(g.String()) + expected := strings.TrimSpace(testTransformPruneDestroyTaintedStr) + if actual != expected { + t.Fatalf("bad:\n\n%s", actual) + } +} + const testTransformDestroyBasicStr = ` aws_instance.bar aws_instance.bar (destroy tainted) @@ -317,63 +415,53 @@ aws_instance.foo (destroy) const testTransformPruneDestroyBasicStr = ` aws_instance.bar - aws_instance.bar (destroy tainted) aws_instance.foo -aws_instance.bar (destroy tainted) aws_instance.foo - aws_instance.foo (destroy tainted) -aws_instance.foo (destroy tainted) - aws_instance.bar (destroy tainted) ` const testTransformPruneDestroyBasicDiffStr = ` aws_instance.bar - aws_instance.bar (destroy tainted) aws_instance.bar (destroy) aws_instance.foo -aws_instance.bar (destroy tainted) aws_instance.bar (destroy) aws_instance.foo - aws_instance.foo (destroy tainted) -aws_instance.foo (destroy tainted) - aws_instance.bar (destroy tainted) ` const testTransformPruneDestroyCountStr = ` aws_instance.bar - aws_instance.bar (destroy tainted) aws_instance.bar (destroy) aws_instance.foo -aws_instance.bar (destroy tainted) aws_instance.bar (destroy) aws_instance.foo - aws_instance.foo (destroy tainted) -aws_instance.foo (destroy tainted) - aws_instance.bar (destroy tainted) ` const testTransformPruneDestroyCountDecStr = ` aws_instance.bar - aws_instance.bar (destroy tainted) aws_instance.bar (destroy) aws_instance.foo 
-aws_instance.bar (destroy tainted) aws_instance.bar (destroy) aws_instance.foo - aws_instance.foo (destroy tainted) -aws_instance.foo (destroy tainted) - aws_instance.bar (destroy tainted) ` const testTransformPruneDestroyCountStateStr = ` +aws_instance.bar + aws_instance.foo +aws_instance.foo +` + +const testTransformPruneDestroyPrefixStr = ` +aws_instance.foo +aws_instance.foo-bar + aws_instance.foo-bar (destroy) +aws_instance.foo-bar (destroy) +` + +const testTransformPruneDestroyTaintedStr = ` aws_instance.bar aws_instance.bar (destroy tainted) aws_instance.foo aws_instance.bar (destroy tainted) aws_instance.foo - aws_instance.foo (destroy tainted) -aws_instance.foo (destroy tainted) - aws_instance.bar (destroy tainted) ` const testTransformCreateBeforeDestroyBasicStr = ` diff --git a/terraform/transform_orphan.go b/terraform/transform_orphan.go index e2a9c7dcd432..5de64c65c682 100644 --- a/terraform/transform_orphan.go +++ b/terraform/transform_orphan.go @@ -2,6 +2,7 @@ package terraform import ( "fmt" + "log" "github.com/hashicorp/terraform/config" "github.com/hashicorp/terraform/config/module" @@ -25,6 +26,11 @@ type OrphanTransformer struct { // using the graph path. Module *module.Tree + // Targeting is true when user-specified targets are in effect. We need to + // be aware of this so we don't improperly identify orphans when they've + // just been filtered out of the graph via targeting. + Targeting bool + // View, if non-nil will set a view on the module state. View string } @@ -35,6 +41,13 @@ func (t *OrphanTransformer) Transform(g *Graph) error { return nil } + if t.Targeting { + log.Printf("Skipping orphan transformer because we have targets.") + // If we are in a run where we are targeting nodes, we won't process + // orphans for this run.
+ return nil + } + // Build up all our state representatives resourceRep := make(map[string]struct{}) for _, v := range g.Vertices() { diff --git a/terraform/transform_provider.go b/terraform/transform_provider.go index f6c566b26c7b..351e8eb12abc 100644 --- a/terraform/transform_provider.go +++ b/terraform/transform_provider.go @@ -4,6 +4,7 @@ import ( "fmt" "github.com/hashicorp/go-multierror" + "github.com/hashicorp/terraform/config" "github.com/hashicorp/terraform/dag" ) @@ -12,6 +13,7 @@ import ( // they satisfy. type GraphNodeProvider interface { ProviderName() string + ProviderConfig() *config.RawConfig } // GraphNodeProviderConsumer is an interface that nodes that require @@ -21,6 +23,52 @@ type GraphNodeProviderConsumer interface { ProvidedBy() []string } +// DisableProviderTransformer "disables" any providers that are only +// depended on by modules. +type DisableProviderTransformer struct{} + +func (t *DisableProviderTransformer) Transform(g *Graph) error { + for _, v := range g.Vertices() { + // We only care about providers + pn, ok := v.(GraphNodeProvider) + if !ok { + continue + } + + // Go through all the up-edges (things that depend on this + // provider) and if any is not a module, then ignore this node. + nonModule := false + for _, sourceRaw := range g.UpEdges(v).List() { + source := sourceRaw.(dag.Vertex) + cn, ok := source.(graphNodeConfig) + if !ok { + nonModule = true + break + } + + if cn.ConfigType() != GraphNodeConfigTypeModule { + nonModule = true + break + } + } + if nonModule { + // We found something that depends on this provider that + // isn't a module, so skip it. 
+ continue + } + + // Disable the provider by replacing it with a "disabled" provider + disabled := &graphNodeDisabledProvider{GraphNodeProvider: pn} + if !g.Replace(v, disabled) { + panic(fmt.Sprintf( + "vertex disappeared from under us: %s", + dag.VertexName(v))) + } + } + + return nil +} + // ProviderTransformer is a GraphTransformer that maps resources to // providers within the graph. This will error if there are any resources // that don't map to proper resources. @@ -94,6 +142,40 @@ func (t *PruneProviderTransformer) Transform(g *Graph) error { return nil } +type graphNodeDisabledProvider struct { + GraphNodeProvider +} + +// GraphNodeEvalable impl. +func (n *graphNodeDisabledProvider) EvalTree() EvalNode { + var resourceConfig *ResourceConfig + + return &EvalOpFilter{ + Ops: []walkOperation{walkInput, walkValidate, walkRefresh, walkPlan, walkApply}, + Node: &EvalSequence{ + Nodes: []EvalNode{ + &EvalInterpolate{ + Config: n.ProviderConfig(), + Output: &resourceConfig, + }, + &EvalBuildProviderConfig{ + Provider: n.ProviderName(), + Config: &resourceConfig, + Output: &resourceConfig, + }, + &EvalSetProviderConfig{ + Provider: n.ProviderName(), + Config: &resourceConfig, + }, + }, + }, + } +} + +func (n *graphNodeDisabledProvider) Name() string { + return fmt.Sprintf("%s (disabled)", dag.VertexName(n.GraphNodeProvider)) +} + type graphNodeMissingProvider struct { ProviderNameValue string } @@ -111,6 +193,10 @@ func (n *graphNodeMissingProvider) ProviderName() string { return n.ProviderNameValue } +func (n *graphNodeMissingProvider) ProviderConfig() *config.RawConfig { + return nil +} + // GraphNodeDotter impl. 
func (n *graphNodeMissingProvider) Dot(name string) string { return fmt.Sprintf( diff --git a/terraform/transform_provider_test.go b/terraform/transform_provider_test.go index fdf25b7a4965..719cef75cc38 100644 --- a/terraform/transform_provider_test.go +++ b/terraform/transform_provider_test.go @@ -92,6 +92,98 @@ func TestPruneProviderTransformer(t *testing.T) { } } +func TestDisableProviderTransformer(t *testing.T) { + mod := testModule(t, "transform-provider-disable") + + g := Graph{Path: RootModulePath} + { + tf := &ConfigTransformer{Module: mod} + if err := tf.Transform(&g); err != nil { + t.Fatalf("err: %s", err) + } + } + + { + transform := &MissingProviderTransformer{Providers: []string{"aws"}} + if err := transform.Transform(&g); err != nil { + t.Fatalf("err: %s", err) + } + } + + { + transform := &ProviderTransformer{} + if err := transform.Transform(&g); err != nil { + t.Fatalf("err: %s", err) + } + } + + { + transform := &PruneProviderTransformer{} + if err := transform.Transform(&g); err != nil { + t.Fatalf("err: %s", err) + } + } + + { + transform := &DisableProviderTransformer{} + if err := transform.Transform(&g); err != nil { + t.Fatalf("err: %s", err) + } + } + + actual := strings.TrimSpace(g.String()) + expected := strings.TrimSpace(testTransformDisableProviderBasicStr) + if actual != expected { + t.Fatalf("bad:\n\n%s", actual) + } +} + +func TestDisableProviderTransformer_keep(t *testing.T) { + mod := testModule(t, "transform-provider-disable-keep") + + g := Graph{Path: RootModulePath} + { + tf := &ConfigTransformer{Module: mod} + if err := tf.Transform(&g); err != nil { + t.Fatalf("err: %s", err) + } + } + + { + transform := &MissingProviderTransformer{Providers: []string{"aws"}} + if err := transform.Transform(&g); err != nil { + t.Fatalf("err: %s", err) + } + } + + { + transform := &ProviderTransformer{} + if err := transform.Transform(&g); err != nil { + t.Fatalf("err: %s", err) + } + } + + { + transform := &PruneProviderTransformer{} + if 
err := transform.Transform(&g); err != nil { + t.Fatalf("err: %s", err) + } + } + + { + transform := &DisableProviderTransformer{} + if err := transform.Transform(&g); err != nil { + t.Fatalf("err: %s", err) + } + } + + actual := strings.TrimSpace(g.String()) + expected := strings.TrimSpace(testTransformDisableProviderKeepStr) + if actual != expected { + t.Fatalf("bad:\n\n%s", actual) + } +} + func TestGraphNodeMissingProvider_impl(t *testing.T) { var _ dag.Vertex = new(graphNodeMissingProvider) var _ dag.NamedVertex = new(graphNodeMissingProvider) @@ -122,3 +214,17 @@ foo_instance.web provider.foo provider.foo ` + +const testTransformDisableProviderBasicStr = ` +module.child + provider.aws (disabled) +provider.aws (disabled) +` + +const testTransformDisableProviderKeepStr = ` +aws_instance.foo + provider.aws +module.child + provider.aws +provider.aws +` diff --git a/terraform/transform_resource.go b/terraform/transform_resource.go index 8c2a00c788e8..7a968885ac08 100644 --- a/terraform/transform_resource.go +++ b/terraform/transform_resource.go @@ -12,6 +12,7 @@ import ( type ResourceCountTransformer struct { Resource *config.Resource Destroy bool + Targets []ResourceAddress } func (t *ResourceCountTransformer) Transform(g *Graph) error { @@ -27,7 +28,7 @@ func (t *ResourceCountTransformer) Transform(g *Graph) error { } // For each count, build and add the node - nodes := make([]dag.Vertex, count) + nodes := make([]dag.Vertex, 0, count) for i := 0; i < count; i++ { // Set the index. If our count is 1 we special case it so that // we handle the "resource.0" and "resource" boundary properly. 
@@ -49,9 +50,14 @@ func (t *ResourceCountTransformer) Transform(g *Graph) error { } } + // Skip nodes if targeting excludes them + if !t.nodeIsTargeted(node) { + continue + } + // Add the node now - nodes[i] = node - g.Add(nodes[i]) + nodes = append(nodes, node) + g.Add(node) } // Make the dependency connections @@ -64,6 +70,25 @@ func (t *ResourceCountTransformer) Transform(g *Graph) error { return nil } +func (t *ResourceCountTransformer) nodeIsTargeted(node dag.Vertex) bool { + // no targets specified, everything stays in the graph + if len(t.Targets) == 0 { + return true + } + addressable, ok := node.(GraphNodeAddressable) + if !ok { + return false + } + + addr := addressable.ResourceAddress() + for _, targetAddr := range t.Targets { + if targetAddr.Equals(addr) { + return true + } + } + return false +} + type graphNodeExpandedResource struct { Index int Resource *config.Resource @@ -77,6 +102,28 @@ func (n *graphNodeExpandedResource) Name() string { return fmt.Sprintf("%s #%d", n.Resource.Id(), n.Index) } +// GraphNodeAddressable impl. +func (n *graphNodeExpandedResource) ResourceAddress() *ResourceAddress { + // We want this to report the logical index properly, so we must undo the + // special case from the expand + index := n.Index + if index == -1 { + index = 0 + } + return &ResourceAddress{ + Index: index, + // TODO: kjkjkj + InstanceType: TypePrimary, + Name: n.Resource.Name, + Type: n.Resource.Type, + } +} + +// graphNodeConfig impl. +func (n *graphNodeExpandedResource) ConfigType() GraphNodeConfigType { + return GraphNodeConfigTypeResource +} + // GraphNodeDependable impl. 
func (n *graphNodeExpandedResource) DependableName() []string { return []string{ @@ -124,7 +171,7 @@ func (n *graphNodeExpandedResource) EvalTree() EvalNode { Output: &provider, }) vseq.Nodes = append(vseq.Nodes, &EvalInterpolate{ - Config: n.Resource.RawConfig, + Config: n.Resource.RawConfig.Copy(), Resource: resource, Output: &resourceConfig, }) @@ -142,7 +189,7 @@ func (n *graphNodeExpandedResource) EvalTree() EvalNode { Name: p.Type, Output: &provisioner, }, &EvalInterpolate{ - Config: p.RawConfig, + Config: p.RawConfig.Copy(), Resource: resource, Output: &resourceConfig, }, &EvalValidateProvisioner{ @@ -196,7 +243,7 @@ func (n *graphNodeExpandedResource) EvalTree() EvalNode { Node: &EvalSequence{ Nodes: []EvalNode{ &EvalInterpolate{ - Config: n.Resource.RawConfig, + Config: n.Resource.RawConfig.Copy(), Resource: resource, Output: &resourceConfig, }, @@ -307,7 +354,7 @@ func (n *graphNodeExpandedResource) EvalTree() EvalNode { }, &EvalInterpolate{ - Config: n.Resource.RawConfig, + Config: n.Resource.RawConfig.Copy(), Resource: resource, Output: &resourceConfig, }, @@ -467,6 +514,11 @@ func (n *graphNodeExpandedResourceDestroy) Name() string { return fmt.Sprintf("%s (destroy)", n.graphNodeExpandedResource.Name()) } +// graphNodeConfig impl. +func (n *graphNodeExpandedResourceDestroy) ConfigType() GraphNodeConfigType { + return GraphNodeConfigTypeResource +} + // GraphNodeEvalable impl. 
func (n *graphNodeExpandedResourceDestroy) EvalTree() EvalNode { info := n.instanceInfo() @@ -508,19 +560,9 @@ func (n *graphNodeExpandedResourceDestroy) EvalTree() EvalNode { Name: n.ProvidedBy()[0], Output: &provider, }, - &EvalIf{ - If: func(ctx EvalContext) (bool, error) { - return n.Resource.Lifecycle.CreateBeforeDestroy, nil - }, - Then: &EvalReadStateTainted{ - Name: n.stateId(), - Output: &state, - Index: -1, - }, - Else: &EvalReadState{ - Name: n.stateId(), - Output: &state, - }, + &EvalReadState{ + Name: n.stateId(), + Output: &state, }, &EvalRequireState{ State: &state, diff --git a/terraform/transform_targets.go b/terraform/transform_targets.go new file mode 100644 index 000000000000..29a6d53c6fbd --- /dev/null +++ b/terraform/transform_targets.go @@ -0,0 +1,103 @@ +package terraform + +import "github.com/hashicorp/terraform/dag" + +// TargetsTransformer is a GraphTransformer that, when the user specifies a +// list of resources to target, limits the graph to only those resources and +// their dependencies. 
+type TargetsTransformer struct { + // List of targeted resource names specified by the user + Targets []string + + // Set to true when we're in a `terraform destroy` or a + // `terraform plan -destroy` + Destroy bool +} + +func (t *TargetsTransformer) Transform(g *Graph) error { + if len(t.Targets) > 0 { + // TODO: duplicated in OrphanTransformer; pull up parsing earlier + addrs, err := t.parseTargetAddresses() + if err != nil { + return err + } + + targetedNodes, err := t.selectTargetedNodes(g, addrs) + if err != nil { + return err + } + + for _, v := range g.Vertices() { + if targetedNodes.Include(v) { + } else { + g.Remove(v) + } + } + } + return nil +} + +func (t *TargetsTransformer) parseTargetAddresses() ([]ResourceAddress, error) { + addrs := make([]ResourceAddress, len(t.Targets)) + for i, target := range t.Targets { + ta, err := ParseResourceAddress(target) + if err != nil { + return nil, err + } + addrs[i] = *ta + } + return addrs, nil +} + +func (t *TargetsTransformer) selectTargetedNodes( + g *Graph, addrs []ResourceAddress) (*dag.Set, error) { + targetedNodes := new(dag.Set) + for _, v := range g.Vertices() { + // Keep all providers; they'll be pruned later if necessary + if r, ok := v.(GraphNodeProvider); ok { + targetedNodes.Add(r) + continue + } + + // For the remaining filter, we only care about addressable nodes + r, ok := v.(GraphNodeAddressable) + if !ok { + continue + } + + if t.nodeIsTarget(r, addrs) { + targetedNodes.Add(r) + // If the node would like to know about targets, tell it. 
+ if n, ok := r.(GraphNodeTargetable); ok { + n.SetTargets(addrs) + } + + var deps *dag.Set + var err error + if t.Destroy { + deps, err = g.Descendents(r) + } else { + deps, err = g.Ancestors(r) + } + if err != nil { + return nil, err + } + + for _, d := range deps.List() { + targetedNodes.Add(d) + } + } + } + return targetedNodes, nil +} + +func (t *TargetsTransformer) nodeIsTarget( + r GraphNodeAddressable, addrs []ResourceAddress) bool { + addr := r.ResourceAddress() + for _, targetAddr := range addrs { + if targetAddr.Equals(addr) { + return true + } + } + return false +} diff --git a/terraform/transform_targets_test.go b/terraform/transform_targets_test.go new file mode 100644 index 000000000000..2daa72827e5b --- /dev/null +++ b/terraform/transform_targets_test.go @@ -0,0 +1,71 @@ +package terraform + +import ( + "strings" + "testing" +) + +func TestTargetsTransformer(t *testing.T) { + mod := testModule(t, "transform-targets-basic") + + g := Graph{Path: RootModulePath} + { + tf := &ConfigTransformer{Module: mod} + if err := tf.Transform(&g); err != nil { + t.Fatalf("err: %s", err) + } + } + + { + transform := &TargetsTransformer{Targets: []string{"aws_instance.me"}} + if err := transform.Transform(&g); err != nil { + t.Fatalf("err: %s", err) + } + } + + actual := strings.TrimSpace(g.String()) + expected := strings.TrimSpace(` +aws_instance.me + aws_subnet.me +aws_subnet.me + aws_vpc.me +aws_vpc.me + `) + if actual != expected { + t.Fatalf("bad:\n\nexpected:\n%s\n\ngot:\n%s\n", expected, actual) + } +} + +func TestTargetsTransformer_destroy(t *testing.T) { + mod := testModule(t, "transform-targets-destroy") + + g := Graph{Path: RootModulePath} + { + tf := &ConfigTransformer{Module: mod} + if err := tf.Transform(&g); err != nil { + t.Fatalf("err: %s", err) + } + } + + { + transform := &TargetsTransformer{ + Targets: []string{"aws_instance.me"}, + Destroy: true, + } + if err := transform.Transform(&g); err != nil { + t.Fatalf("err: %s", err) + } + } + + actual 
:= strings.TrimSpace(g.String()) + expected := strings.TrimSpace(` +aws_elb.me + aws_instance.me +aws_instance.me +aws_instance.metoo + aws_instance.me + `) + if actual != expected { + t.Fatalf("bad:\n\nexpected:\n%s\n\ngot:\n%s\n", expected, actual) + } +} diff --git a/version.go b/version.go index 8862f9e35a76..08c31b1e412a 100644 --- a/version.go +++ b/version.go @@ -4,7 +4,7 @@ package main var GitCommit string // The main version number that is being run at the moment. -const Version = "0.4.0" +const Version = "0.5.0" // A pre-release marker for the version. If this is "" (empty string) // then it means that it is a final release. Otherwise, this is a pre-release diff --git a/website/Gemfile.lock b/website/Gemfile.lock index a55579e6dbf7..e863605326c9 100644 --- a/website/Gemfile.lock +++ b/website/Gemfile.lock @@ -1,6 +1,6 @@ GIT remote: https://github.com/hashicorp/middleman-hashicorp - revision: 783fe9517dd02badb85e5ddfeda4d8e35bbd05a8 + revision: fb03b8e60efc96f68c2dded0f49632fcd6eb6482 specs: middleman-hashicorp (0.1.0) bootstrap-sass (~> 3.3) @@ -20,16 +20,16 @@ GIT GEM remote: https://rubygems.org/ specs: - activesupport (4.1.9) + activesupport (4.1.10) i18n (~> 0.6, >= 0.6.9) json (~> 1.7, >= 1.7.7) minitest (~> 5.1) thread_safe (~> 0.1) tzinfo (~> 1.1) - autoprefixer-rails (5.1.7) + autoprefixer-rails (5.1.8) execjs json - bootstrap-sass (3.3.3) + bootstrap-sass (3.3.4.1) autoprefixer-rails (>= 5.0.0.1) sass (>= 3.2.19) builder (3.2.2) @@ -53,14 +53,14 @@ GEM sass (>= 3.3.0, < 3.5) compass-import-once (1.0.5) sass (>= 3.2, < 3.5) - daemons (1.1.9) + daemons (1.2.2) em-websocket (0.5.1) eventmachine (>= 0.12.9) http_parser.rb (~> 0.6.0) erubis (2.7.0) eventmachine (1.0.7) execjs (2.4.0) - ffi (1.9.6) + ffi (1.9.8) haml (4.0.6) tilt hike (1.2.3) @@ -75,8 +75,8 @@ GEM less (2.6.0) commonjs (~> 0.2.7) libv8 (3.16.14.7) - listen (2.8.5) - celluloid (>= 0.15.2) + listen (2.10.0) + celluloid (~> 0.16.0) rb-fsevent (>= 0.9.3) rb-inotify (>= 0.9) middleman 
(3.3.10) @@ -159,7 +159,7 @@ GEM eventmachine (~> 1.0) rack (~> 1.0) thor (0.19.1) - thread_safe (0.3.4) + thread_safe (0.3.5) tilt (1.4.1) timers (4.0.1) hitimes diff --git a/website/README.md b/website/README.md index 4ab1cc864d90..3ad8eaf69379 100644 --- a/website/README.md +++ b/website/README.md @@ -4,7 +4,7 @@ This subdirectory contains the entire source for the [Terraform Website](http:// This is a [Middleman](http://middlemanapp.com) project, which builds a static site from these source files. -## Contributions Welcome! +## Contributions Welcome If you find a typo or you feel like you can improve the HTML, CSS, or JavaScript, we welcome contributions. Feel free to open issues or pull diff --git a/website/source/assets/stylesheets/_docs.scss b/website/source/assets/stylesheets/_docs.scss index f144f813a017..a2c9a55908b7 100755 --- a/website/source/assets/stylesheets/_docs.scss +++ b/website/source/assets/stylesheets/_docs.scss @@ -10,11 +10,13 @@ body.layout-atlas, body.layout-consul, body.layout-dnsimple, body.layout-dme, +body.layout-docker, body.layout-cloudflare, body.layout-cloudstack, body.layout-google, body.layout-heroku, body.layout-mailgun, +body.layout-openstack, body.layout-digitalocean, body.layout-aws, body.layout-docs, diff --git a/website/source/docs/commands/apply.html.markdown b/website/source/docs/commands/apply.html.markdown index 8ae1e8ee96e9..9bb5acdbff5f 100644 --- a/website/source/docs/commands/apply.html.markdown +++ b/website/source/docs/commands/apply.html.markdown @@ -44,10 +44,16 @@ The command-line flags are all optional. The list of available flags are: * `-state-out=path` - Path to write updated state file. By default, the `-state` path will be used. +* `-target=resource` - A [Resource + Address](/docs/internals/resource-addressing.html) to target. Operation will + be limited to this resource and its dependencies. This flag can be used + multiple times. + * `-var 'foo=bar'` - Set a variable in the Terraform configuration. 
This flag can be set multiple times. * `-var-file=foo` - Set variables in the Terraform configuration from a file. If "terraform.tfvars" is present, it will be automatically - loaded if this flag is not specified. + loaded first. Any files specified by `-var-file` override any values + in a "terraform.tfvars". diff --git a/website/source/docs/commands/destroy.html.markdown b/website/source/docs/commands/destroy.html.markdown index 4ea84f880019..0a0f3a738b78 100644 --- a/website/source/docs/commands/destroy.html.markdown +++ b/website/source/docs/commands/destroy.html.markdown @@ -21,3 +21,9 @@ confirmation before destroying. This command accepts all the flags that the [apply command](/docs/commands/apply.html) accepts. If `-force` is set, then the destroy confirmation will not be shown. + +The `-target` flag, instead of affecting "dependencies", will also +destroy any resources that _depend on_ the target(s) specified. + +The behavior of any `terraform destroy` command can be previewed at any time +with an equivalent `terraform plan -destroy` command. diff --git a/website/source/docs/commands/plan.html.markdown b/website/source/docs/commands/plan.html.markdown index 14c10c5da3db..1c0b1b68ac31 100644 --- a/website/source/docs/commands/plan.html.markdown +++ b/website/source/docs/commands/plan.html.markdown @@ -28,6 +28,13 @@ The command-line flags are all optional. The list of available flags are: * `-destroy` - If set, generates a plan to destroy all the known resources. +* `-detailed-exitcode` - Return a detailed exit code when the command exits. + When provided, this argument changes the exit codes and their meanings to + provide more granular information about what the resulting plan contains: + * 0 = Succeeded with empty diff (no changes) + * 1 = Error + * 2 = Succeeded with non-empty diff (changes present) + * `-input=true` - Ask for input for variables if not directly set. * `-module-depth=n` - Specifies the depth of modules to show in the output. 
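The `-detailed-exitcode` mapping above is straightforward to wrap when driving `terraform plan` from other tooling; a minimal Go sketch (the helper name `planOutcome` is ours, not part of Terraform):

```go
package main

import "fmt"

// planOutcome translates a `terraform plan -detailed-exitcode` exit
// status into a human-readable result, per the documented meanings:
// 0 = empty diff, 1 = error, 2 = non-empty diff.
func planOutcome(code int) string {
	switch code {
	case 0:
		return "no changes"
	case 2:
		return "changes present"
	default:
		return "error"
	}
}

func main() {
	for _, c := range []int{0, 1, 2} {
		fmt.Printf("exit %d: %s\n", c, planOutcome(c))
	}
}
```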
@@ -45,6 +52,11 @@ The command-line flags are all optional. The list of available flags are: * `-state=path` - Path to the state file. Defaults to "terraform.tfstate". +* `-target=resource` - A [Resource + Address](/docs/internals/resource-addressing.html) to target. Operation will + be limited to this resource and its dependencies. This flag can be used + multiple times. + * `-var 'foo=bar'` - Set a variable in the Terraform configuration. This flag can be set multiple times. diff --git a/website/source/docs/commands/push.html.markdown b/website/source/docs/commands/push.html.markdown new file mode 100644 index 000000000000..1a752e657f87 --- /dev/null +++ b/website/source/docs/commands/push.html.markdown @@ -0,0 +1,97 @@ +--- +layout: "docs" +page_title: "Command: push" +sidebar_current: "docs-commands-push" +description: |- + The `terraform push` command is used to upload the Terraform configuration to HashiCorp's Atlas service for automatically managing your infrastructure in the cloud. +--- + +# Command: push + +The `terraform push` command uploads your Terraform configuration to +be managed by HashiCorp's [Atlas](https://atlas.hashicorp.com). +By uploading your configuration to Atlas, Atlas can automatically run +Terraform for you, will save all state transitions, will save plans, +and will keep a history of all Terraform runs. + +This makes it significantly easier to use Terraform as a team: team +members modify the Terraform configurations locally and continue to +use normal version control. When the Terraform configurations are ready +to be run, they are pushed to Atlas, and any member of your team can +run Terraform with the push of a button. + +Atlas can also be used to set ACLs on who can run Terraform, and a +future update of Atlas will allow parallel Terraform runs and automatically +perform infrastructure locking so only one run is modifying the same +infrastructure at a time. 
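The `-target` filtering documented for `plan`, `apply`, and `refresh` comes down to exact address matching between graph nodes and the parsed target list; a simplified standalone sketch (our `addr` struct stands in for Terraform's richer `ResourceAddress`, which also tracks instance type and index special cases):

```go
package main

import "fmt"

// addr is a simplified stand-in for Terraform's ResourceAddress:
// just a resource type, a name, and a count index.
type addr struct {
	Type  string
	Name  string
	Index int
}

// isTargeted mirrors the transformer's nodeIsTarget check: a node
// is kept only if some target address matches it exactly.
func isTargeted(node addr, targets []addr) bool {
	for _, t := range targets {
		if t == node {
			return true
		}
	}
	return false
}

func main() {
	targets := []addr{{"aws_instance", "me", 0}}
	for _, n := range []addr{
		{"aws_instance", "me", 0},
		{"aws_instance", "metoo", 0},
		{"aws_vpc", "me", 0},
	} {
		fmt.Printf("%s.%s -> targeted: %v\n", n.Type, n.Name, isTargeted(n, targets))
	}
}
```

In the real transformer, non-matching nodes are not simply dropped: the ancestors (or, for destroys, descendants) of every match are retained as well.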
+ +## Usage + +Usage: `terraform push [options] [path]` + +The `path` argument is the same as for the +[apply](/docs/commands/apply.html) command. + +The command-line flags are all optional. The list of available flags are: + +* `-atlas-address=` - An alternate address to an Atlas instance. + Defaults to `https://atlas.hashicorp.com`. + +* `-upload-modules=true` - If true (default), then the + [modules](/docs/modules/index.html) + being used are all locked at their current checkout and uploaded + completely to Atlas. This prevents Atlas from running `terraform get` + for you. + +* `-name=` - Name of the infrastructure configuration in Atlas. + The format of this is: "username/name" so that you can upload + configurations not just to your account but to other accounts and + organizations. This setting can also be set in the configuration + in the + [Atlas section](/docs/configuration/atlas.html). + +* `-no-color` - Disables output with coloring + +* `-token=` - Atlas API token to use to authorize the upload. + If blank or unspecified, the `ATLAS_TOKEN` environment variable + will be used. + +* `-vcs=true` - If true (default), then Terraform will detect if a VCS + is in use, such as Git, and will only upload files that are committed to + version control. If no version control system is detected, Terraform will + upload all files in `path` (parameter to the command). + +## Packaged Files + +The files that are uploaded and packaged with a `push` are all the +files in the `path` given as the parameter to the command, recursively. +By default (unless `-vcs=false` is specified), Terraform will automatically +detect when a VCS such as Git is being used, and in that case will only +upload the files that are committed. Because of this built-in intelligence, +you usually don't have to worry about excluding folders such as ".git" or ".hg". + +If Terraform doesn't detect a VCS, it will upload all files. 
+ +The reason Terraform uploads all of these files is because Terraform +cannot know what is and isn't being used for provisioning, so it uploads +all the files to be safe. To exclude certain files, specify the `-exclude` +flag when pushing, or specify the `exclude` parameter in the +[Atlas configuration section](/docs/configuration/atlas.html). + +## Remote State Requirement + +`terraform push` requires that +[remote state](/docs/commands/remote-config.html) +is enabled. The reasoning for this is simple: `terraform push` sends your +configuration to be managed remotely. For it to keep the state in sync +and for you to be able to easily access that state, remote state must +be enabled instead of juggling local files. + +While `terraform push` sends your configuration to be managed by Atlas, +the remote state backend _does not_ have to be Atlas. It can be anything +as long as it is accessible by the public internet, since Atlas will need +to be able to communicate to it. + +**Warning:** The credentials for accessing the remote state will be +sent up to Atlas as well. Therefore, we recommend you use access keys +that are restricted if possible. diff --git a/website/source/docs/commands/refresh.html.markdown b/website/source/docs/commands/refresh.html.markdown index cc797ca387b1..0fc3fc9383ce 100644 --- a/website/source/docs/commands/refresh.html.markdown +++ b/website/source/docs/commands/refresh.html.markdown @@ -36,6 +36,11 @@ The command-line flags are all optional. The list of available flags are: * `-state-out=path` - Path to write updated state file. By default, the `-state` path will be used. +* `-target=resource` - A [Resource + Address](/docs/internals/resource-addressing.html) to target. Operation will + be limited to this resource and its dependencies. This flag can be used + multiple times. + * `-var 'foo=bar'` - Set a variable in the Terraform configuration. This flag can be set multiple times. 
diff --git a/website/source/docs/commands/remote-config.html.markdown b/website/source/docs/commands/remote-config.html.markdown index 3fd6c9b17a9a..3ced2a43411e 100644 --- a/website/source/docs/commands/remote-config.html.markdown +++ b/website/source/docs/commands/remote-config.html.markdown @@ -33,32 +33,33 @@ By default, `remote config` will look for the "terraform.tfstate" file, but that can be specified by the `-state` flag. If no state file exists, a blank state will be configured. +When enabling remote storage, use the `-backend-config` flag to set +the required configuration variables as documented below. See the example +below this section for more details. + When remote storage is disabled, the existing remote state is migrated to a local file. This defaults to the `-state` path during restore. The following backends are supported: -* Atlas - Stores the state in Atlas. Requires the `-name` and `-access-token` flag. - The `-address` flag can optionally be provided. +* Atlas - Stores the state in Atlas. Requires the `name` and `access-token` + variables. The `address` variable can optionally be provided. * Consul - Stores the state in the KV store at a given path. - Requires the `path` flag. The `-address` and `-access-token` - flag can optionally be provided. Address is assumed to be the + Requires the `path` variable. The `address` and `access-token` + variables can optionally be provided. Address is assumed to be the local agent if not provided. * HTTP - Stores the state using a simple REST client. State will be fetched - via GET, updated via POST, and purged with DELETE. Requires the `-address` flag. + via GET, updated via POST, and purged with DELETE. Requires the `address` variable. The command-line flags are all optional. The list of available flags are: -* `-address=url` - URL of the remote storage server. Required for HTTP backend, - optional for Atlas and Consul. - -* `-access-token=token` - Authentication token for state storage server. 
- Required for Atlas backend, optional for Consul. +* `-backend=Atlas` - The remote backend to use. Must be one of the above + supported backends. -* `-backend=Atlas` - Specifies the type of remote backend. Must be one - of Atlas, Consul, or HTTP. Defaults to Atlas. +* `-backend-config="k=v"` - Specify a configuration variable for a backend. + This is how you set the required variables for the backends above. * `-backup=path` - Path to backup the existing state file before modifying. Defaults to the "-state" path with ".backup" extension. @@ -67,15 +68,22 @@ The command-line flags are all optional. The list of available flags are: * `-disable` - Disables remote state management and migrates the state to the `-state` path. -* `-name=name` - Name of the state file in the state storage server. - Required for Atlas backend. - -* `-path=path` - Path of the remote state in Consul. Required for the - Consul backend. - -* `-pull=true` - Controls if the remote state is pulled before disabling. - This defaults to true to ensure the latest state is cached before disabling. +* `-pull=true` - Controls if the remote state is pulled before disabling + or after enabling. This defaults to true to ensure the latest state + is available under both conditions. * `-state=path` - Path to read state. Defaults to "terraform.tfstate" unless remote state is enabled. +## Example: Consul + +The example below will push your remote state to Consul. Note that for +this example, it would go to the public Consul demo. 
In practice, you + should use your own private Consul server: + +``` +$ terraform remote config \ + -backend=consul \ + -backend-config="address=demo.consul.io:80" \ + -backend-config="path=tf" +``` diff --git a/website/source/docs/commands/remote.html.markdown b/website/source/docs/commands/remote.html.markdown index 3bc96c802d42..22d341891222 100644 --- a/website/source/docs/commands/remote.html.markdown +++ b/website/source/docs/commands/remote.html.markdown @@ -16,7 +16,7 @@ Terraform will automatically fetch the latest state from the remote server when necessary and if any updates are made, the newest state is persisted back to the remote server. In this mode, users do not need to durably store the state using version -control or shared storaged. +control or shared storage. ## Usage diff --git a/website/source/docs/configuration/atlas.html.md b/website/source/docs/configuration/atlas.html.md new file mode 100644 index 000000000000..e975c88bae31 --- /dev/null +++ b/website/source/docs/configuration/atlas.html.md @@ -0,0 +1,58 @@ +--- +layout: "docs" +page_title: "Configuring Atlas" +sidebar_current: "docs-config-atlas" +description: |- + Atlas is the ideal way to use Terraform in a team environment. Atlas will run Terraform for you, safely handle parallelization across different team members, save run history along with plans, and more. +--- + +# Atlas Configuration + +Terraform can be configured to upload to HashiCorp's +[Atlas](https://atlas.hashicorp.com). This configuration doesn't change +the behavior of Terraform itself; it only configures your Terraform +configuration to support being uploaded to Atlas via the +[push command](/docs/commands/push.html). + +For more information on the benefits of uploading your Terraform +configuration to Atlas, please see the +[push command documentation](/docs/commands/push.html). + +This page assumes you're familiar with the +[configuration syntax](/docs/configuration/syntax.html) +already. 
+ +## Example + +Atlas configuration looks like the following: + +``` +atlas { + name = "mitchellh/production-example" +} +``` + +## Description + +The `atlas` block configures the settings when Terraform is +[pushed](/docs/commands/push.html) to Atlas. Only one `atlas` block +is allowed. + +Within the block (the `{ }`) is configuration for Atlas uploading. +No keys are required, but the key typically set is `name`. + +**No value within the `atlas` block can use interpolations.** Due +to the nature of this configuration, interpolations are not possible. +If you want to parameterize these settings, use the Atlas block to +set defaults, then use the command-line flags of the +[push command](/docs/commands/push.html) to override. + +## Syntax + +The full syntax is: + +``` +atlas { + name = VALUE +} +``` diff --git a/website/source/docs/configuration/interpolation.html.md b/website/source/docs/configuration/interpolation.html.md index 15f39ddf1a79..e51830a8d24c 100644 --- a/website/source/docs/configuration/interpolation.html.md +++ b/website/source/docs/configuration/interpolation.html.md @@ -16,6 +16,9 @@ into strings. These interpolations are wrapped in `${}`, such as The interpolation syntax is powerful and allows you to reference variables, attributes of resources, call functions, etc. +You can also perform simple math in interpolations, allowing +you to write expressions such as `${count.index+1}`. 
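To make the `${count.index+1}` example above concrete, here is a toy Go evaluator for that one narrow interpolation form (our own illustration, not Terraform's actual interpolation engine, which supports full arithmetic expressions):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// evalCountMath evaluates the narrow form "${count.index+N}" for a
// given index value. Toy illustration only.
func evalCountMath(expr string, index int) (int, error) {
	inner := strings.TrimSuffix(strings.TrimPrefix(expr, "${"), "}")
	parts := strings.SplitN(inner, "+", 2)
	if len(parts) != 2 || strings.TrimSpace(parts[0]) != "count.index" {
		return 0, fmt.Errorf("unsupported expression: %s", expr)
	}
	n, err := strconv.Atoi(strings.TrimSpace(parts[1]))
	if err != nil {
		return 0, err
	}
	return index + n, nil
}

func main() {
	// With count = 3, indices 0..2 yield the 1-based values 1..3.
	for index := 0; index < 3; index++ {
		v, _ := evalCountMath("${count.index+1}", index)
		fmt.Printf("index %d -> %d\n", index, v)
	}
}
```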
+ +## Available Variables + +**To reference user variables**, use the `var.` prefix followed by the diff --git a/website/source/docs/configuration/resources.html.md b/website/source/docs/configuration/resources.html.md index 8f21e96e1d1a..c74f030c9629 100644 --- a/website/source/docs/configuration/resources.html.md +++ b/website/source/docs/configuration/resources.html.md @@ -118,7 +118,7 @@ variable "instance_ips" { resource "aws_instance" "app" { count = "3" - private_ip = "${lookup(instance_ips, count.index)}" + private_ip = "${lookup(var.instance_ips, count.index)}" # ... } ``` diff --git a/website/source/docs/internals/resource-addressing.html.markdown b/website/source/docs/internals/resource-addressing.html.markdown new file mode 100644 index 000000000000..b4b994a88af4 --- /dev/null +++ b/website/source/docs/internals/resource-addressing.html.markdown @@ -0,0 +1,57 @@ +--- +layout: "docs" +page_title: "Internals: Resource Address" +sidebar_current: "docs-internals-resource-addressing" +description: |- + Resource addressing is used to target specific resources in a larger + infrastructure. +--- + +# Resource Addressing + +A __Resource Address__ is a string that references a specific resource in a +larger infrastructure. The syntax of a resource address is: + +``` +<resource_type>.<resource_name>[optional fields] +``` + +Required fields: + + * `resource_type` - Type of the resource being addressed. + * `resource_name` - User-defined name of the resource. + +Optional fields may include: + + * `[N]` - where `N` is a `0`-based index into a resource with multiple + instances specified by the `count` meta-parameter. Omitting an index when + addressing a resource where `count > 1` means that the address references + all instances. + + +## Examples + +Given a Terraform config that includes: + +``` +resource "aws_instance" "web" { + # ... 
+ count = 4 +} +``` + +An address like this: + + +``` +aws_instance.web[3] +``` + +Refers to only the last instance in the config, and an address like this: + +``` +aws_instance.web +``` + + +Refers to all four "web" instances. diff --git a/website/source/docs/providers/atlas/index.html.markdown b/website/source/docs/providers/atlas/index.html.markdown index cd0280a493b0..c2528c6b517c 100644 --- a/website/source/docs/providers/atlas/index.html.markdown +++ b/website/source/docs/providers/atlas/index.html.markdown @@ -35,7 +35,9 @@ resource "atlas_artifact" "web" { The following arguments are supported: * `address` - (Optional) Atlas server endpoint. Defaults to public Atlas. - This is only required when using an on-premise deployment of Atlas. + This is only required when using an on-premise deployment of Atlas. This can + also be specified with the `ATLAS_ADDRESS` shell environment variable. -* `token` - (Required) API token +* `token` - (Required) API token. This can also be specified with the + `ATLAS_TOKEN` shell environment variable. diff --git a/website/source/docs/providers/aws/r/autoscale.html.markdown b/website/source/docs/providers/aws/r/autoscale.html.markdown index bb592a819a57..fe0643b72507 100644 --- a/website/source/docs/providers/aws/r/autoscale.html.markdown +++ b/website/source/docs/providers/aws/r/autoscale.html.markdown @@ -23,6 +23,17 @@ resource "aws_autoscaling_group" "bar" { desired_capacity = 4 force_delete = true launch_configuration = "${aws_launch_configuration.foobar.name}" + + tag { + key = "foo" + value = "bar" + propagate_at_launch = true + } + tag { + key = "lorem" + value = "ipsum" + propagate_at_launch = false + } } ``` @@ -44,6 +55,14 @@ The following arguments are supported: group names. * `vpc_zone_identifier` (Optional) A list of subnet IDs to launch resources in. * `termination_policies` (Optional) A list of policies to decide how the instances in the auto scale group should be terminated. 
+* `tag` (Optional) A list of tag blocks. Tags documented below. + +Tags support the following: + +* `key` - (Required) Key +* `value` - (Required) Value +* `propagate_at_launch` - (Required) Enables propagation of the tag to + Amazon EC2 instances launched via this ASG ## Attributes Reference diff --git a/website/source/docs/providers/aws/r/db_instance.html.markdown b/website/source/docs/providers/aws/r/db_instance.html.markdown index 7727b9dd679e..61b5b6dcfb80 100644 --- a/website/source/docs/providers/aws/r/db_instance.html.markdown +++ b/website/source/docs/providers/aws/r/db_instance.html.markdown @@ -58,9 +58,13 @@ The following arguments are supported: * `publicly_accessible` - (Optional) Bool to control if instance is publicly accessible. * `vpc_security_group_ids` - (Optional) List of VPC security groups to associate. * `security_group_names` - (Optional/Deprecated) List of DB Security Groups to associate. - Only used for [DB Instances on the _EC2-Classic_ Platform](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.html#USER_VPC.FindDefaultVPC). + Only used for [DB Instances on the _EC2-Classic_ Platform](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.html#USER_VPC.FindDefaultVPC). * `db_subnet_group_name` - (Optional) Name of DB subnet group * `parameter_group_name` - (Optional) Name of the DB parameter group to associate. +* `storage_encrypted` - (Optional) Specifies whether the DB instance is encrypted. The default is `false` if not specified. +* `apply_immediately` - (Optional) Specifies whether any database modifications + are applied immediately, or during the next maintenance window. Default is + `false`. 
See [Amazon RDS Documentation](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Modifying.html) for more information. ## Attributes Reference @@ -82,4 +86,5 @@ The following attributes are exported: * `port` - The database port * `status` - The RDS instance status * `username` - The master username for the database +* `storage_encrypted` - Specifies whether the DB instance is encrypted diff --git a/website/source/docs/providers/aws/r/db_parameter_group.html.markdown b/website/source/docs/providers/aws/r/db_parameter_group.html.markdown index defb698eb7a6..41e2f7b86042 100644 --- a/website/source/docs/providers/aws/r/db_parameter_group.html.markdown +++ b/website/source/docs/providers/aws/r/db_parameter_group.html.markdown @@ -12,7 +12,7 @@ Provides an RDS DB parameter group resource. ``` resource "aws_db_parameter_group" "default" { - name = "rds_pg" + name = "rds-pg" family = "mysql5.6" description = "RDS default parameter group" @@ -20,7 +20,7 @@ parameter { name = "character_set_server" value = "utf8" } - + parameter { name = "character_set_client" value = "utf8" diff --git a/website/source/docs/providers/aws/r/instance.html.markdown b/website/source/docs/providers/aws/r/instance.html.markdown index 15ea59121ada..d3cb333382c0 100644 --- a/website/source/docs/providers/aws/r/instance.html.markdown +++ b/website/source/docs/providers/aws/r/instance.html.markdown @@ -14,7 +14,8 @@ and deleted. 
Instances also support [provisioning](/docs/provisioners/index.html).

 ## Example Usage

 ```
-# Create a new instance of the ami-1234 on an m1.small node with an AWS Tag naming it "HelloWorld"
+# Create a new instance of the ami-1234 on an m1.small node
+# with an AWS Tag naming it "HelloWorld"
 resource "aws_instance" "web" {
     ami = "ami-1234"
     instance_type = "m1.small"
@@ -47,32 +48,71 @@ The following arguments are supported:
 * `iam_instance_profile` - (Optional) The IAM Instance Profile to
   launch the instance with.
 * `tags` - (Optional) A mapping of tags to assign to the resource.
-* `block_device` - (Optional) A list of block devices to add. Their keys are documented below.
 * `root_block_device` - (Optional) Customize details about the root block
-  device of the instance. Available keys are documented below.
+  device of the instance. See [Block Devices](#block-devices) below for details.
+* `ebs_block_device` - (Optional) Additional EBS block devices to attach to the
+  instance. See [Block Devices](#block-devices) below for details.
+* `ephemeral_block_device` - (Optional) Customize Ephemeral (also known as
+  "Instance Store") volumes on the instance. See [Block Devices](#block-devices) below for details.
-
-Each `block_device` supports the following:
+
+
+## Block devices
+
+Each of the `*_block_device` attributes controls a portion of the AWS
+Instance's "Block Device Mapping". It's a good idea to familiarize yourself with [AWS's Block Device
+Mapping docs](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html)
+to understand the implications of using these attributes.
+
+The `root_block_device` mapping supports the following:
+
+* `volume_type` - (Optional) The type of volume. Can be `"standard"`, `"gp2"`,
+  or `"io1"`. (Default: `"standard"`).
+* `volume_size` - (Optional) The size of the volume in gigabytes.
+* `iops` - (Optional) The amount of provisioned + [IOPS](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-io-characteristics.html). + This must be set with a `volume_type` of `"io1"`. +* `delete_on_termination` - (Optional) Whether the volume should be destroyed + on instance termination (Default: `true`). + +Modifying any of the `root_block_device` settings requires resource +replacement. + +Each `ebs_block_device` supports the following: * `device_name` - The name of the device to mount. -* `virtual_name` - (Optional) The virtual device name. * `snapshot_id` - (Optional) The Snapshot ID to mount. -* `volume_type` - (Optional) The type of volume. Can be standard, gp2, or io1. Defaults to standard. +* `volume_type` - (Optional) The type of volume. Can be `"standard"`, `"gp2"`, + or `"io1"`. (Default: `"standard"`). * `volume_size` - (Optional) The size of the volume in gigabytes. -* `iops` - (Optional) The amount of provisioned IOPS. Setting this implies a - volume_type of "io1". -* `delete_on_termination` - (Optional) Should the volume be destroyed on instance termination (defaults true). -* `encrypted` - (Optional) Should encryption be enabled (defaults false). +* `iops` - (Optional) The amount of provisioned + [IOPS](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-io-characteristics.html). + This must be set with a `volume_type` of `"io1"`. +* `delete_on_termination` - (Optional) Whether the volume should be destroyed + on instance termination (Default: `true`). +* `encrypted` - (Optional) Enables [EBS + encryption](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html) + on the volume (Default: `false`). -The `root_block_device` mapping supports the following: +Modifying any `ebs_block_device` currently requires resource replacement. -* `device_name` - The name of the root device on the target instance. Must - match the root device as defined in the AMI. 
Defaults to "/dev/sda1", which - is the typical root volume for Linux instances. -* `volume_type` - (Optional) The type of volume. Can be standard, gp2, or io1. Defaults to standard. -* `volume_size` - (Optional) The size of the volume in gigabytes. -* `iops` - (Optional) The amount of provisioned IOPS. Setting this implies a - volume_type of "io1". -* `delete_on_termination` - (Optional) Should the volume be destroyed on instance termination (defaults true). +Each `ephemeral_block_device` supports the following: + +* `device_name` - The name of the block device to mount on the instance. +* `virtual_name` - The [Instance Store Device + Name](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#InstanceStoreDeviceNames) + (e.g. `"ephemeral0"`) + +Each AWS Instance type has a different set of Instance Store block devices +available for attachment. AWS [publishes a +list](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#StorageOnInstanceTypes) +of which ephemeral devices are available on each type. The devices are always +identified by the `virtual_name` in the format `"ephemeral{0..N}"`. + +~> **NOTE:** Currently, changes to `*_block_device` configuration of _existing_ +resources cannot be automatically detected by Terraform. After making updates +to block device configuration, resource recreation can be manually triggered by +using the [`taint` command](/docs/commands/taint.html). ## Attributes Reference diff --git a/website/source/docs/providers/aws/r/internet_gateway.html.markdown b/website/source/docs/providers/aws/r/internet_gateway.html.markdown index ec79f922a4c9..cefedc6ad723 100644 --- a/website/source/docs/providers/aws/r/internet_gateway.html.markdown +++ b/website/source/docs/providers/aws/r/internet_gateway.html.markdown @@ -29,6 +29,18 @@ The following arguments are supported: * `vpc_id` - (Required) The VPC ID to create in. * `tags` - (Optional) A mapping of tags to assign to the resource. 
+-> **Note:** It's recommended to denote that the AWS Instance or Elastic IP depends on the Internet Gateway. For example:
+
+
+    resource "aws_internet_gateway" "gw" {
+      vpc_id = "${aws_vpc.main.id}"
+    }
+
+    resource "aws_instance" "foo" {
+      depends_on = ["aws_internet_gateway.gw"]
+    }
+
+
 ## Attributes Reference

 The following attributes are exported:

diff --git a/website/source/docs/providers/aws/r/launch_config.html.markdown b/website/source/docs/providers/aws/r/launch_config.html.markdown
index 677f3b088084..67954017abae 100644
--- a/website/source/docs/providers/aws/r/launch_config.html.markdown
+++ b/website/source/docs/providers/aws/r/launch_config.html.markdown
@@ -24,7 +24,8 @@ resource "aws_launch_configuration" "as_conf" {

 The following arguments are supported:

-* `name` - (Required) The name of the launch configuration.
+* `name` - (Optional) The name of the launch configuration. If you leave
+  this blank, Terraform will auto-generate it.
 * `image_id` - (Required) The EC2 image ID to launch.
 * `instance_type` - (Required) The size of instance to launch.
 * `iam_instance_profile` - (Optional) The IAM instance profile to associate
@@ -33,6 +34,62 @@ The following arguments are supported:
 * `security_groups` - (Optional) A list of associated security group IDs.
 * `associate_public_ip_address` - (Optional) Associate a public ip address with an instance in a VPC.
 * `user_data` - (Optional) The user data to provide when launching the instance.
+* `block_device_mapping` - (Optional) A list of block devices to add. Their keys are documented below.
+
+
+## Block devices
+
+Each of the `*_block_device` attributes controls a portion of the AWS
+Launch Configuration's "Block Device Mapping". It's a good idea to familiarize yourself with [AWS's Block Device
+Mapping docs](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html)
+to understand the implications of using these attributes.
+ +The `root_block_device` mapping supports the following: + +* `volume_type` - (Optional) The type of volume. Can be `"standard"`, `"gp2"`, + or `"io1"`. (Default: `"standard"`). +* `volume_size` - (Optional) The size of the volume in gigabytes. +* `iops` - (Optional) The amount of provisioned + [IOPS](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-io-characteristics.html). + This must be set with a `volume_type` of `"io1"`. +* `delete_on_termination` - (Optional) Whether the volume should be destroyed + on instance termination (Default: `true`). + +Modifying any of the `root_block_device` settings requires resource +replacement. + +Each `ebs_block_device` supports the following: + +* `device_name` - The name of the device to mount. +* `snapshot_id` - (Optional) The Snapshot ID to mount. +* `volume_type` - (Optional) The type of volume. Can be `"standard"`, `"gp2"`, + or `"io1"`. (Default: `"standard"`). +* `volume_size` - (Optional) The size of the volume in gigabytes. +* `iops` - (Optional) The amount of provisioned + [IOPS](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-io-characteristics.html). + This must be set with a `volume_type` of `"io1"`. +* `delete_on_termination` - (Optional) Whether the volume should be destroyed + on instance termination (Default: `true`). + +Modifying any `ebs_block_device` currently requires resource replacement. + +Each `ephemeral_block_device` supports the following: + +* `device_name` - The name of the block device to mount on the instance. +* `virtual_name` - The [Instance Store Device + Name](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#InstanceStoreDeviceNames) + (e.g. `"ephemeral0"`) + +Each AWS Instance type has a different set of Instance Store block devices +available for attachment. AWS [publishes a +list](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#StorageOnInstanceTypes) +of which ephemeral devices are available on each type. 
The devices are always
+identified by the `virtual_name` in the format `"ephemeral{0..N}"`.
+
+~> **NOTE:** Changes to `*_block_device` configuration of _existing_ resources
+cannot currently be detected by Terraform. After updating the block device
+configuration, resource recreation can be manually triggered by using the
+[`taint` command](/docs/commands/taint.html).

 ## Attributes Reference

diff --git a/website/source/docs/providers/aws/r/security_group.html.markdown b/website/source/docs/providers/aws/r/security_group.html.markdown
index 869f4bdc52cb..e1db4b1d9dfc 100644
--- a/website/source/docs/providers/aws/r/security_group.html.markdown
+++ b/website/source/docs/providers/aws/r/security_group.html.markdown
@@ -17,7 +17,7 @@ Basic usage

 ```
 resource "aws_security_group" "allow_all" {
   name = "allow_all"
-  description = "Allow all inbound traffic"
+  description = "Allow all inbound traffic"

   ingress {
     from_port = 0
@@ -67,29 +67,31 @@ The following arguments are supported:
   egress rule. Each egress block supports fields documented below. VPC only.
 * `vpc_id` - (Optional) The VPC ID.
-* `owner_id` - (Optional) The AWS Owner ID.
+* `tags` - (Optional) A mapping of tags to assign to the resource.

 The `ingress` block supports:

 * `cidr_blocks` - (Optional) List of CIDR blocks. Cannot be used with `security_groups`.
 * `from_port` - (Required) The start port.
 * `protocol` - (Required) The protocol.
-* `security_groups` - (Optional) List of security group IDs. Cannot be used with `cidr_blocks`.
+* `security_groups` - (Optional) List of security group Group Names if using
+  EC2-Classic or the default VPC, or Group IDs if using a non-default VPC.
+  Cannot be used with `cidr_blocks`.
 * `self` - (Optional) If true, the security group itself will be added as
   a source to this ingress rule.
 * `to_port` - (Required) The end range port.
-* `tags` - (Optional) A mapping of tags to assign to the resource.

 The `egress` block supports:

 * `cidr_blocks` - (Optional) List of CIDR blocks.
Cannot be used with `security_groups`. * `from_port` - (Required) The start port. * `protocol` - (Required) The protocol. -* `security_groups` - (Optional) List of security group IDs. Cannot be used with `cidr_blocks`. +* `security_groups` - (Optional) List of security group Group Names if using + EC2-Classic or the default VPC, or Group IDs if using a non-default VPC. + Cannot be used with `cidr_blocks`. * `self` - (Optional) If true, the security group itself will be added as a source to this egress rule. * `to_port` - (Required) The end range port. -* `tags` - (Optional) A mapping of tags to assign to the resource. ## Attributes Reference diff --git a/website/source/docs/providers/aws/r/vpc_peering.html.markdown b/website/source/docs/providers/aws/r/vpc_peering.html.markdown index 59af3c0ca259..1d396a5843e5 100644 --- a/website/source/docs/providers/aws/r/vpc_peering.html.markdown +++ b/website/source/docs/providers/aws/r/vpc_peering.html.markdown @@ -56,4 +56,4 @@ The following attributes are exported: ## Notes -You still have to accept the peering with the aws console, aws-cli or goamz +You still have to accept the peering with the aws console, aws-cli or aws-sdk-go. diff --git a/website/source/docs/providers/aws/r/vpn_gateway.html.markdown b/website/source/docs/providers/aws/r/vpn_gateway.html.markdown new file mode 100644 index 000000000000..b64000ce5576 --- /dev/null +++ b/website/source/docs/providers/aws/r/vpn_gateway.html.markdown @@ -0,0 +1,38 @@ +--- +layout: "aws" +page_title: "AWS: aws_vpn_gateway" +sidebar_current: "docs-aws-resource-vpn-gateway" +description: |- + Provides a resource to create a VPC VPN Gateway. +--- + +# aws\_vpn\_gateway + +Provides a resource to create a VPC VPN Gateway. + +## Example Usage + +``` +resource "aws_vpn_gateway" "vpn_gw" { + vpc_id = "${aws_vpc.main.id}" + + tags { + Name = "main" + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `vpc_id` - (Required) The VPC ID to create in. 
+* `availability_zone` - (Optional) The Availability Zone for the virtual private gateway. +* `tags` - (Optional) A mapping of tags to assign to the resource. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The ID of the VPN Gateway. + diff --git a/website/source/docs/providers/cloudflare/index.html.markdown b/website/source/docs/providers/cloudflare/index.html.markdown index a806d12546fe..dc8d61c32b09 100644 --- a/website/source/docs/providers/cloudflare/index.html.markdown +++ b/website/source/docs/providers/cloudflare/index.html.markdown @@ -33,7 +33,7 @@ resource "cloudflare_record" "www" { The following arguments are supported: -* `email` - (Required) The email associated with the account -* `token` - (Required) The Cloudflare API token - - +* `email` - (Required) The email associated with the account. This can also be + specified with the `CLOUDFLARE_EMAIL` shell environment variable. +* `token` - (Required) The Cloudflare API token. This can also be specified + with the `CLOUDFLARE_TOKEN` shell environment variable. diff --git a/website/source/docs/providers/cloudstack/r/egress_firewall.html.markdown b/website/source/docs/providers/cloudstack/r/egress_firewall.html.markdown index 17fa20927e30..b905bc0e99d9 100644 --- a/website/source/docs/providers/cloudstack/r/egress_firewall.html.markdown +++ b/website/source/docs/providers/cloudstack/r/egress_firewall.html.markdown @@ -58,4 +58,4 @@ The `rule` block supports: The following attributes are exported: -* `ID` - The network ID for which the egress firewall rules are created. +* `id` - The network ID for which the egress firewall rules are created. 
diff --git a/website/source/docs/providers/cloudstack/r/firewall.html.markdown b/website/source/docs/providers/cloudstack/r/firewall.html.markdown index 1c659e6bfcbb..8b8aa0089cc5 100644 --- a/website/source/docs/providers/cloudstack/r/firewall.html.markdown +++ b/website/source/docs/providers/cloudstack/r/firewall.html.markdown @@ -58,4 +58,4 @@ The `rule` block supports: The following attributes are exported: -* `ID` - The IP address ID for which the firewall rules are created. +* `id` - The IP address ID for which the firewall rules are created. diff --git a/website/source/docs/providers/cloudstack/r/network_acl_rule.html.markdown b/website/source/docs/providers/cloudstack/r/network_acl_rule.html.markdown index fb6b0891fbad..f82b8f446490 100644 --- a/website/source/docs/providers/cloudstack/r/network_acl_rule.html.markdown +++ b/website/source/docs/providers/cloudstack/r/network_acl_rule.html.markdown @@ -66,4 +66,4 @@ The `rule` block supports: The following attributes are exported: -* `ID` - The ACL ID for which the rules are created. +* `id` - The ACL ID for which the rules are created. diff --git a/website/source/docs/providers/cloudstack/r/template.html.markdown b/website/source/docs/providers/cloudstack/r/template.html.markdown new file mode 100644 index 000000000000..1757193af200 --- /dev/null +++ b/website/source/docs/providers/cloudstack/r/template.html.markdown @@ -0,0 +1,78 @@ +--- +layout: "cloudstack" +page_title: "CloudStack: cloudstack_template" +sidebar_current: "docs-cloudstack-resource-template" +description: |- + Registers an existing template into the CloudStack cloud. +--- + +# cloudstack\_template + +Registers an existing template into the CloudStack cloud. 
+
+## Example Usage
+
+```
+resource "cloudstack_template" "centos64" {
+  name = "CentOS 6.4 x64"
+  format = "VHD"
+  hypervisor = "XenServer"
+  os_type = "CentOS 6.4 (64bit)"
+  url = "http://someurl.com/template.vhd"
+  zone = "zone-1"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) The name of the template.
+
+* `display_text` - (Optional) The display name of the template.
+
+* `format` - (Required) The format of the template. Valid values are "QCOW2",
+  "RAW", and "VHD".
+
+* `hypervisor` - (Required) The target hypervisor for the template. Changing
+  this forces a new resource to be created.
+
+* `os_type` - (Required) The OS Type that best represents the OS of this
+  template.
+
+* `url` - (Required) The URL of where the template is hosted. Changing this
+  forces a new resource to be created.
+
+* `zone` - (Required) The name of the zone where this template will be created.
+  Changing this forces a new resource to be created.
+
+* `is_dynamically_scalable` - (Optional) Set to indicate if the template contains
+  tools to support dynamic scaling of VM cpu/memory.
+
+* `is_extractable` - (Optional) Set to indicate if the template is extractable
+  (defaults false)
+
+* `is_featured` - (Optional) Set to indicate if the template is featured
+  (defaults false)
+
+* `is_public` - (Optional) Set to indicate if the template is available for
+  all accounts (defaults true)
+
+* `password_enabled` - (Optional) Set to indicate if the template should be
+  password enabled (defaults false)
+
+* `is_ready_timeout` - (Optional) The maximum time in seconds to wait until the
+  template is ready for use (defaults 300 seconds)
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `id` - The template ID.
+* `display_text` - The display text of the template.
+* `is_dynamically_scalable` - Set to "true" if the template is dynamically scalable.
+* `is_extractable` - Set to "true" if the template is extractable.
+* `is_featured` - Set to "true" if the template is featured. +* `is_public` - Set to "true" if the template is public. +* `password_enabled` - Set to "true" if the template is password enabled. +* `is_ready` - Set to "true" once the template is ready for use. diff --git a/website/source/docs/providers/cloudstack/r/vpn_connection.html.markdown b/website/source/docs/providers/cloudstack/r/vpn_connection.html.markdown new file mode 100644 index 000000000000..3ecf17cbca65 --- /dev/null +++ b/website/source/docs/providers/cloudstack/r/vpn_connection.html.markdown @@ -0,0 +1,38 @@ +--- +layout: "cloudstack" +page_title: "CloudStack: cloudstack_vpn_connection" +sidebar_current: "docs-cloudstack-resource-vpn-connection" +description: |- + Creates a site to site VPN connection. +--- + +# cloudstack\_vpn\_connection + +Creates a site to site VPN connection. + +## Example Usage + +Basic usage: + +``` +resource "cloudstack_vpn_connection" "default" { + customergatewayid = "xxx" + vpngatewayid = "xxx" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `customergatewayid` - (Required) The Customer Gateway ID to connect. + Changing this forces a new resource to be created. + +* `vpngatewayid` - (Required) The VPN Gateway ID to connect. + Changing this forces a new resource to be created. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The ID of the VPN Connection. diff --git a/website/source/docs/providers/cloudstack/r/vpn_customer_gateway.html.markdown b/website/source/docs/providers/cloudstack/r/vpn_customer_gateway.html.markdown new file mode 100644 index 000000000000..84183b8d6cf7 --- /dev/null +++ b/website/source/docs/providers/cloudstack/r/vpn_customer_gateway.html.markdown @@ -0,0 +1,59 @@ +--- +layout: "cloudstack" +page_title: "CloudStack: cloudstack_vpn_customer_gateway" +sidebar_current: "docs-cloudstack-resource-vpn-customer-gateway" +description: |- + Creates a site to site VPN local customer gateway. 
+---
+
+# cloudstack\_vpn\_customer\_gateway
+
+Creates a site to site VPN local customer gateway.
+
+## Example Usage
+
+Basic usage:
+
+```
+resource "cloudstack_vpn_customer_gateway" "default" {
+  name = "test-vpc"
+  cidr = "10.0.0.0/8"
+  esp_policy = "aes256-sha1"
+  gateway = "192.168.0.1"
+  ike_policy = "aes256-sha1"
+  ipsec_psk = "terraform"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) The name of the VPN Customer Gateway.
+
+* `cidr` - (Required) The CIDR block that needs to be routed through this gateway.
+
+* `esp_policy` - (Required) The ESP policy to use for this VPN Customer Gateway.
+
+* `gateway` - (Required) The public IP address of the related VPN Gateway.
+
+* `ike_policy` - (Required) The IKE policy to use for this VPN Customer Gateway.
+
+* `ipsec_psk` - (Required) The IPSEC pre-shared key used for this gateway.
+
+* `dpd` - (Optional) If DPD is enabled for the related VPN connection (defaults false)
+
+* `esp_lifetime` - (Optional) The ESP lifetime of phase 2 VPN connection to this
+  VPN Customer Gateway in seconds (defaults 86400)
+
+* `ike_lifetime` - (Optional) The IKE lifetime of phase 2 VPN connection to this
+  VPN Customer Gateway in seconds (defaults 86400)
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `id` - The ID of the VPN Customer Gateway.
+* `dpd` - Whether DPD is enabled for the related VPN connection.
+* `esp_lifetime` - The ESP lifetime of phase 2 VPN connection to this VPN Customer Gateway.
+* `ike_lifetime` - The IKE lifetime of phase 2 VPN connection to this VPN Customer Gateway.
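The three site-to-site VPN resources documented here are typically wired together. A minimal sketch, assuming the generated IDs can be interpolated like other CloudStack resource attributes (the VPC name, gateway address, and pre-shared key are illustrative):

```
resource "cloudstack_vpn_gateway" "default" {
  vpc = "test-vpc"
}

resource "cloudstack_vpn_customer_gateway" "default" {
  name = "remote-office"
  cidr = "10.0.0.0/8"
  esp_policy = "aes256-sha1"
  gateway = "192.168.0.1"
  ike_policy = "aes256-sha1"
  ipsec_psk = "terraform"
}

# The connection references both gateways, so Terraform creates
# them first and then joins them.
resource "cloudstack_vpn_connection" "default" {
  customergatewayid = "${cloudstack_vpn_customer_gateway.default.id}"
  vpngatewayid = "${cloudstack_vpn_gateway.default.id}"
}
```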
diff --git a/website/source/docs/providers/cloudstack/r/vpn_gateway.html.markdown b/website/source/docs/providers/cloudstack/r/vpn_gateway.html.markdown new file mode 100644 index 000000000000..10aabd796752 --- /dev/null +++ b/website/source/docs/providers/cloudstack/r/vpn_gateway.html.markdown @@ -0,0 +1,35 @@ +--- +layout: "cloudstack" +page_title: "CloudStack: cloudstack_vpn_gateway" +sidebar_current: "docs-cloudstack-resource-vpn-gateway" +description: |- + Creates a site to site VPN local gateway. +--- + +# cloudstack\_vpn\_gateway + +Creates a site to site VPN local gateway. + +## Example Usage + +Basic usage: + +``` +resource "cloudstack_vpn_gateway" "default" { + vpc = "test-vpc" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `vpc` - (Required) The name of the VPC for which to create the VPN Gateway. + Changing this forces a new resource to be created. + +## Attributes Reference + +The following attributes are exported: + +* `id` - The ID of the VPN Gateway. +* `public_ip` - The public IP address associated with the VPN Gateway. diff --git a/website/source/docs/providers/dme/index.html.markdown b/website/source/docs/providers/dme/index.html.markdown index 1c4150f2d99e..c175ffd7bbe6 100644 --- a/website/source/docs/providers/dme/index.html.markdown +++ b/website/source/docs/providers/dme/index.html.markdown @@ -35,7 +35,10 @@ resource "dme_record" "www" { The following arguments are supported: -* `akey` - (Required) The DNSMadeEasy API key -* `skey` - (Required) The DNSMadeEasy Secret key +* `akey` - (Required) The DNSMadeEasy API key. This can also be specified with + the `DME_AKEY` shell environment variable. +* `skey` - (Required) The DNSMadeEasy Secret key. This can also be specified + with the `DME_SKEY` shell environment variable. * `usesandbox` - (Optional) If true, the DNSMadeEasy sandbox will be - used + used. This can also be specified with the `DME_USESANDBOX` shell environment + variable. 
diff --git a/website/source/docs/providers/dnsimple/index.html.markdown b/website/source/docs/providers/dnsimple/index.html.markdown index ad98c321812a..23828c6d1647 100644 --- a/website/source/docs/providers/dnsimple/index.html.markdown +++ b/website/source/docs/providers/dnsimple/index.html.markdown @@ -33,7 +33,7 @@ resource "dnsimple_record" "www" { The following arguments are supported: -* `token` - (Required) The DNSimple API token -* `email` - (Required) The email associated with the token +* `token` - (Required) The DNSimple API token. It must be provided, but it can also be sourced from the `DNSIMPLE_TOKEN` environment variable. +* `email` - (Required) The email associated with the token. It must be provided, but it can also be sourced from the `DNSIMPLE_EMAIL` environment variable. diff --git a/website/source/docs/providers/do/index.html.markdown b/website/source/docs/providers/do/index.html.markdown index 453a3cb371dd..9e18277a3cf9 100644 --- a/website/source/docs/providers/do/index.html.markdown +++ b/website/source/docs/providers/do/index.html.markdown @@ -32,5 +32,6 @@ resource "digitalocean_droplet" "web" { The following arguments are supported: -* `token` - (Required) This is the DO API token. +* `token` - (Required) This is the DO API token. This can also be specified + with the `DIGITALOCEAN_TOKEN` shell environment variable. diff --git a/website/source/docs/providers/docker/index.html.markdown b/website/source/docs/providers/docker/index.html.markdown new file mode 100644 index 000000000000..9d057fd34654 --- /dev/null +++ b/website/source/docs/providers/docker/index.html.markdown @@ -0,0 +1,53 @@ +--- +layout: "docker" +page_title: "Provider: Docker" +sidebar_current: "docs-docker-index" +description: |- + The Docker provider is used to interact with Docker containers and images. +--- + +# Docker Provider + +The Docker provider is used to interact with Docker containers and images. 
+It uses the Docker API to manage the lifecycle of Docker containers. Because +the Docker provider uses the Docker API, it is immediately compatible not +only with single server Docker but Swarm and any additional Docker-compatible +API hosts. + +Use the navigation to the left to read about the available resources. + +
+Note: The Docker provider is new as of Terraform 0.4. +It is ready to be used but many features are still being added. If there +is a Docker feature missing, please report it in the GitHub repo. +
+ +## Example Usage + +``` +# Configure the Docker provider +provider "docker" { + host = "tcp://127.0.0.1:1234/" +} + +# Create a container +resource "docker_container" "foo" { + image = "${docker_image.ubuntu.latest}" + name = "foo" +} + +resource "docker_image" "ubuntu" { + name = "ubuntu:latest" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `host` - (Required) This is the address to the Docker host. If this is + blank, the `DOCKER_HOST` environment variable will also be read. + +* `cert_path` - (Optional) Path to a directory with certificate information + for connecting to the Docker host via TLS. If this is blank, the + `DOCKER_CERT_PATH` will also be checked. diff --git a/website/source/docs/providers/docker/r/container.html.markdown b/website/source/docs/providers/docker/r/container.html.markdown new file mode 100644 index 000000000000..418e35fc12be --- /dev/null +++ b/website/source/docs/providers/docker/r/container.html.markdown @@ -0,0 +1,77 @@ +--- +layout: "docker" +page_title: "Docker: docker_container" +sidebar_current: "docs-docker-resource-container" +description: |- + Manages the lifecycle of a Docker container. +--- + +# docker\_container + +Manages the lifecycle of a Docker container. + +## Example Usage + +``` +# Start a container +resource "docker_container" "ubuntu" { + name = "foo" + image = "${docker_image.ubuntu.latest}" +} + +# Find the latest Ubuntu precise image. +resource "docker_image" "ubuntu" { + image = "ubuntu:precise" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required, string) The name of the Docker container. +* `image` - (Required, string) The ID of the image to back this container. + The easiest way to get this value is to use the `docker_image` resource + as is shown in the example above. + +* `command` - (Optional, list of strings) The command to use to start the + container. +* `dns` - (Optional, set of strings) Set of DNS servers. 
+* `env` - (Optional, set of strings) Environmental variables to set.
+* `hostname` - (Optional, string) Hostname of the container.
+* `domainname` - (Optional, string) Domain name of the container.
+* `must_run` - (Optional, bool) If true, then the Docker container will be
+  kept running. If false, then as long as the container exists, Terraform
+  assumes it is successful.
+* `ports` - (Optional) See [Ports](#ports) below for details.
+* `publish_all_ports` - (Optional, bool) Publish all ports of the container.
+* `volumes` - (Optional) See [Volumes](#volumes) below for details.
+
+
+## Ports
+
+`ports` is a block within the configuration that can be repeated to specify
+the port mappings of the container. Each `ports` block supports
+the following:
+
+* `internal` - (Required, int) Port within the container.
+* `external` - (Required, int) Port exposed out of the container.
+* `ip` - (Optional, string) IP address/mask that can access this port.
+* `protocol` - (Optional, string) Protocol that can be used over this port,
+  defaults to TCP.
+
+
+## Volumes
+
+`volumes` is a block within the configuration that can be repeated to specify
+the volumes attached to a container. Each `volumes` block supports
+the following:
+
+* `from_container` - (Optional, string) The container where the volume is
+  coming from.
+* `container_path` - (Optional, string) The path in the container where the
+  volume will be mounted.
+* `host_path` - (Optional, string) The path on the host where the volume
+  is coming from.
+* `read_only` - (Optional, bool) If true, this volume will be read-only.
+  Defaults to false.
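Putting the repeated `ports` and `volumes` blocks together, a container definition might look like the following sketch (the image name, ports, and paths are illustrative):

```
resource "docker_image" "nginx" {
  image = "nginx:latest"
}

resource "docker_container" "web" {
  name = "web"
  image = "${docker_image.nginx.latest}"

  # Map container port 80 to host port 8080.
  ports {
    internal = 80
    external = 8080
  }

  # Mount a host directory into the container read-only.
  volumes {
    host_path = "/opt/web/content"
    container_path = "/usr/share/nginx/html"
    read_only = true
  }
}
```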
diff --git a/website/source/docs/providers/docker/r/image.html.markdown b/website/source/docs/providers/docker/r/image.html.markdown new file mode 100644 index 000000000000..7a79ad083fad --- /dev/null +++ b/website/source/docs/providers/docker/r/image.html.markdown @@ -0,0 +1,41 @@ +--- +layout: "docker" +page_title: "Docker: docker_image" +sidebar_current: "docs-docker-resource-image" +description: |- + Downloads and exports the ID of a Docker image. +--- + +# docker\_image + +Downloads and exports the ID of a Docker image. This can be used alongside +[docker\_container](/docs/providers/docker/r/container.html) +to programmatically get the latest image IDs without having to hardcode +them. + +## Example Usage + +``` +# Find the latest Ubuntu precise image. +resource "docker_image" "ubuntu" { + image = "ubuntu:precise" +} + +# Access it somewhere else with ${docker_image.ubuntu.latest} +``` + +## Argument Reference + +The following arguments are supported: + +* `image` - (Required) The name of the Docker image, including any tags. +* `keep_updated` - (Optional) If true, then the Docker image will always + be updated on the host to the latest. If this is false, as long as an + image is downloaded with the correct tag, it won't be redownloaded if + there is a newer image. + +## Attributes Reference + +The following attributes are exported in addition to the above configuration: + +* `latest` (string) - The ID of the image. diff --git a/website/source/docs/providers/google/index.html.markdown b/website/source/docs/providers/google/index.html.markdown index cacf7599bcd5..8adb9ed908b5 100644 --- a/website/source/docs/providers/google/index.html.markdown +++ b/website/source/docs/providers/google/index.html.markdown @@ -39,10 +39,14 @@ The following keys can be used to configure the provider. retrieving this file are below. 
The _account file_ can be "" if you are running terraform from a GCE instance with a properly-configured [Compute Engine Service Account](https://cloud.google.com/compute/docs/authentication). + This can also be specified with the `GOOGLE_ACCOUNT_FILE` shell environment + variable. -* `project` - (Required) The name of the project to apply any resources to. +* `project` - (Required) The ID of the project to apply any resources to. This + can also be specified with the `GOOGLE_PROJECT` shell environment variable. -* `region` - (Required) The region to operate under. +* `region` - (Required) The region to operate under. This can also be specified + with the `GOOGLE_REGION` shell environment variable. ## Authentication JSON File diff --git a/website/source/docs/providers/google/r/compute_instance.html.markdown b/website/source/docs/providers/google/r/compute_instance.html.markdown index 5c6cfe0270b3..3d3104d17fdf 100644 --- a/website/source/docs/providers/google/r/compute_instance.html.markdown +++ b/website/source/docs/providers/google/r/compute_instance.html.markdown @@ -93,6 +93,9 @@ The `disk` block supports: * `type` - (Optional) The GCE disk type. +* `size` - (Optional) The size of the image in gigabytes. If not specified, + it will inherit the size of its base image. + The `network_interface` block supports: * `network` - (Required) The name of the network to attach this interface to. diff --git a/website/source/docs/providers/heroku/index.html.markdown b/website/source/docs/providers/heroku/index.html.markdown index b04fd001ce76..696a41963d57 100644 --- a/website/source/docs/providers/heroku/index.html.markdown +++ b/website/source/docs/providers/heroku/index.html.markdown @@ -33,6 +33,8 @@ resource "heroku_app" "default" { The following arguments are supported: -* `api_key` - (Required) Heroku API token -* `email` - (Required) Email to be notified by Heroku +* `api_key` - (Required) Heroku API token. 
It must be provided, but it can also + be sourced from the `HEROKU_API_KEY` environment variable. +* `email` - (Required) Email to be notified by Heroku. It must be provided, but + it can also be sourced from the `HEROKU_EMAIL` environment variable. diff --git a/website/source/docs/providers/heroku/r/addon.html.markdown b/website/source/docs/providers/heroku/r/addon.html.markdown index d39cb1e8bcb3..f9907597a4ad 100644 --- a/website/source/docs/providers/heroku/r/addon.html.markdown +++ b/website/source/docs/providers/heroku/r/addon.html.markdown @@ -19,6 +19,12 @@ resource "heroku_app" "default" { name = "test-app" } +# Create a database, and configure the app to use it +resource "heroku_addon" "database" { + app = "${heroku_app.default.name}" + plan = "heroku-postgresql:hobby-basic" +} + # Add a web-hook addon for the app resource "heroku_addon" "webhook" { app = "${heroku_app.default.name}" diff --git a/website/source/docs/providers/heroku/r/app.html.markdown b/website/source/docs/providers/heroku/r/app.html.markdown index d05bd2fb06ab..9e51d62f2c4c 100644 --- a/website/source/docs/providers/heroku/r/app.html.markdown +++ b/website/source/docs/providers/heroku/r/app.html.markdown @@ -17,6 +17,7 @@ create and manage applications on Heroku. # Create a new Heroku app resource "heroku_app" "default" { name = "my-cool-app" + region = "us" config_vars { FOOBAR = "baz" diff --git a/website/source/docs/providers/index.html.markdown b/website/source/docs/providers/index.html.markdown index f03c17c54bbe..5365d0e86ee0 100644 --- a/website/source/docs/providers/index.html.markdown +++ b/website/source/docs/providers/index.html.markdown @@ -14,7 +14,7 @@ etc. Almost any infrastructure noun can be represented as a resource in Terrafor Terraform is agnostic to the underlying platforms by supporting providers. A provider is responsible for understanding API interactions and exposing resources. Providers -generally are an IaaS (e.g. AWS, DigitalOcean, GCE), PaaS (e.g. 
Heroku, CloudFoundry), +generally are an IaaS (e.g. AWS, DigitalOcean, GCE, OpenStack), PaaS (e.g. Heroku, CloudFoundry), or SaaS services (e.g. Atlas, DNSimple, CloudFlare). Use the navigation to the left to read about the available providers. diff --git a/website/source/docs/providers/openstack/index.html.markdown b/website/source/docs/providers/openstack/index.html.markdown new file mode 100644 index 000000000000..02b8c8dc8a4a --- /dev/null +++ b/website/source/docs/providers/openstack/index.html.markdown @@ -0,0 +1,74 @@ +--- +layout: "openstack" +page_title: "Provider: OpenStack" +sidebar_current: "docs-openstack-index" +description: |- + The OpenStack provider is used to interact with the many resources supported by OpenStack. The provider needs to be configured with the proper credentials before it can be used. +--- + +# OpenStack Provider + +The OpenStack provider is used to interact with the +many resources supported by OpenStack. The provider needs to be configured +with the proper credentials before it can be used. + +Use the navigation to the left to read about the available resources. + +## Example Usage + +``` +# Configure the OpenStack Provider +provider "openstack" { + user_name = "admin" + tenant_name = "admin" + password = "pwd" + auth_url = "http://myauthurl:5000/v2.0" +} + +# Create a web server +resource "openstack_compute_instance_v2" "test-server" { + ... +} +``` + +## Configuration Reference + +The following arguments are supported: + +* `auth_url` - (Required) If omitted, the `OS_AUTH_URL` environment + variable is used. + +* `user_name` - (Optional; Required for Identity V2) If omitted, the + `OS_USERNAME` environment variable is used. + +* `user_id` - (Optional) + +* `password` - (Optional; Required if not using `api_key`) If omitted, the + `OS_PASSWORD` environment variable is used. 
+ +* `api_key` - (Optional; Required if not using `password`) + +* `domain_id` - (Optional) + +* `domain_name` - (Optional) + +* `tenant_id` - (Optional) + +* `tenant_name` - (Optional) If omitted, the `OS_TENANT_NAME` environment + variable is used. + +## Testing + +In order to run the Acceptance Tests for development, the following environment +variables must also be set: + +* `OS_REGION_NAME` - The region in which to create the server instance. + +* `OS_IMAGE_ID` or `OS_IMAGE_NAME` - a UUID or name of an existing image in + Glance. + +* `OS_FLAVOR_ID` or `OS_FLAVOR_NAME` - an ID or name of an existing flavor. + +* `OS_POOL_NAME` - The name of a Floating IP pool. + +* `OS_NETWORK_ID` - The UUID of a network in your test environment. diff --git a/website/source/docs/providers/openstack/r/blockstorage_volume_v1.html.markdown b/website/source/docs/providers/openstack/r/blockstorage_volume_v1.html.markdown new file mode 100644 index 000000000000..779d67e1c392 --- /dev/null +++ b/website/source/docs/providers/openstack/r/blockstorage_volume_v1.html.markdown @@ -0,0 +1,71 @@ +--- +layout: "openstack" +page_title: "OpenStack: openstack_blockstorage_volume_v1" +sidebar_current: "docs-openstack-resource-blockstorage-volume-v1" +description: |- + Manages a V1 volume resource within OpenStack. +--- + +# openstack\_blockstorage\_volume_v1 + +Manages a V1 volume resource within OpenStack. + +## Example Usage + +``` +resource "openstack_blockstorage_volume_v1" "volume_1" { + region = "RegionOne" + name = "tf-test-volume" + description = "first test volume" + size = 3 +} +``` + +## Argument Reference + +The following arguments are supported: + +* `region` - (Required) The region in which to create the volume. If + omitted, the `OS_REGION_NAME` environment variable is used. Changing this + creates a new volume. + +* `size` - (Required) The size of the volume to create (in gigabytes). Changing + this creates a new volume. + +* `name` - (Optional) A unique name for the volume. 
Changing this updates the + volume's name. + +* `description` - (Optional) A description of the volume. Changing this updates + the volume's description. + +* `image_id` - (Optional) The image ID from which to create the volume. + Changing this creates a new volume. + +* `snapshot_id` - (Optional) The snapshot ID from which to create the volume. + Changing this creates a new volume. + +* `source_vol_id` - (Optional) The volume ID from which to create the volume. + Changing this creates a new volume. + +* `metadata` - (Optional) Metadata key/value pairs to associate with the volume. + Changing this updates the existing volume metadata. + +* `volume_type` - (Optional) The type of volume to create (either SATA or SSD). + Changing this creates a new volume. + +## Attributes Reference + +The following attributes are exported: + +* `region` - See Argument Reference above. +* `size` - See Argument Reference above. +* `name` - See Argument Reference above. +* `description` - See Argument Reference above. +* `image_id` - See Argument Reference above. +* `source_vol_id` - See Argument Reference above. +* `snapshot_id` - See Argument Reference above. +* `metadata` - See Argument Reference above. +* `volume_type` - See Argument Reference above. +* `attachment` - If a volume is attached to an instance, this attribute will + display the Attachment ID, Instance ID, and the Device as the Instance + sees it. diff --git a/website/source/docs/providers/openstack/r/compute_instance_v2.html.markdown b/website/source/docs/providers/openstack/r/compute_instance_v2.html.markdown new file mode 100644 index 000000000000..7d5874457cb4 --- /dev/null +++ b/website/source/docs/providers/openstack/r/compute_instance_v2.html.markdown @@ -0,0 +1,149 @@ +--- +layout: "openstack" +page_title: "OpenStack: openstack_compute_instance_v2" +sidebar_current: "docs-openstack-resource-compute-instance-v2" +description: |- + Manages a V2 VM instance resource within OpenStack. 
+--- + +# openstack\_compute\_instance_v2 + +Manages a V2 VM instance resource within OpenStack. + +## Example Usage + +``` +resource "openstack_compute_instance_v2" "test-server" { + name = "tf-test" + image_id = "ad091b52-742f-469e-8f3c-fd81cadf0743" + flavor_id = "3" + metadata { + this = "that" + } + key_pair = "my_key_pair_name" + security_groups = ["test-group-1"] +} +``` + +## Argument Reference + +The following arguments are supported: + +* `region` - (Required) The region in which to create the server instance. If + omitted, the `OS_REGION_NAME` environment variable is used. Changing this + creates a new server. + +* `name` - (Required) A unique name for the resource. + +* `image_id` - (Optional; Required if `image_name` is empty) The image ID of + the desired image for the server. Changing this creates a new server. + +* `image_name` - (Optional; Required if `image_id` is empty) The name of the + desired image for the server. Changing this creates a new server. + +* `flavor_id` - (Optional; Required if `flavor_name` is empty) The flavor ID of + the desired flavor for the server. Changing this resizes the existing server. + +* `flavor_name` - (Optional; Required if `flavor_id` is empty) The name of the + desired flavor for the server. Changing this resizes the existing server. + +* `floating_ip` - (Optional) A Floating IP that will be associated with the + Instance. The Floating IP must be provisioned already. + +* `user_data` - (Optional) The user data to provide when launching the instance. + Changing this creates a new server. + +* `security_groups` - (Optional) An array of one or more security group names + to associate with the server. Changing this results in adding/removing + security groups from the existing server. + +* `availability_zone` - (Optional) The availability zone in which to create + the server. Changing this creates a new server. + +* `network` - (Optional) An array of one or more networks to attach to the + instance. 
The network object structure is documented below. Changing this + creates a new server. + +* `metadata` - (Optional) Metadata key/value pairs to make available from + within the instance. Changing this updates the existing server metadata. + +* `config_drive` - (Optional) Whether to use the config_drive feature to + configure the instance. Changing this creates a new server. + +* `admin_pass` - (Optional) The administrative password to assign to the server. + Changing this changes the root password on the existing server. + +* `key_pair` - (Optional) The name of a key pair to put on the server. The key + pair must already be created and associated with the tenant's account. + Changing this creates a new server. + +* `block_device` - (Optional) The object for booting by volume. The block_device + object structure is documented below. Changing this creates a new server. + +* `volume` - (Optional) Attach an existing volume to the instance. The volume + structure is described below. + +The `network` block supports: + +* `uuid` - (Required unless `port` or `name` is provided) The network UUID to + attach to the server. + +* `name` - (Required unless `uuid` or `port` is provided) The human-readable + name of the network. + +* `port` - (Required unless `uuid` or `name` is provided) The port UUID of a + network to attach to the server. + +* `fixed_ip_v4` - (Optional) Specifies a fixed IPv4 address to be used on this + network. + +The `block_device` block supports: + +* `uuid` - (Required) The UUID of the image, volume, or snapshot. + +* `source_type` - (Required) The source type of the device. Must be one of + "image", "volume", or "snapshot". + +* `volume_size` - (Optional) The size of the volume to create (in gigabytes). + +* `boot_index` - (Optional) The boot index of the volume. It defaults to 0. + +* `destination_type` - (Optional) The type that gets created. Possible values + are "volume" and "local". 
+ +The `volume` block supports: + +* `volume_id` - (Required) The UUID of the volume to attach. + +* `device` - (Optional) The device that the volume will be attached as. For + example: `/dev/vdc`. Omit this option to allow the volume to be + auto-assigned a device. + +## Attributes Reference + +The following attributes are exported: + +* `region` - See Argument Reference above. +* `name` - See Argument Reference above. +* `access_ip_v4` - The first detected Fixed IPv4 address _or_ the + Floating IP. +* `access_ip_v6` - The first detected Fixed IPv6 address. +* `metadata` - See Argument Reference above. +* `security_groups` - See Argument Reference above. +* `flavor_id` - See Argument Reference above. +* `flavor_name` - See Argument Reference above. +* `network/uuid` - See Argument Reference above. +* `network/name` - See Argument Reference above. +* `network/port` - See Argument Reference above. +* `network/fixed_ip_v4` - The Fixed IPv4 address of the Instance on that + network. +* `network/fixed_ip_v6` - The Fixed IPv6 address of the Instance on that + network. +* `network/mac` - The MAC address of the NIC on that network. + +## Notes + +If you configure the instance to have multiple networks, be aware that only +the first network can be associated with a Floating IP. So the first network +in the instance resource _must_ be the network that you have configured to +communicate with your floating IP / public network via a Neutron Router. 
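+
+As a sketch of the `block_device` structure described above, booting from a
+volume created from an image might look like the following. The image UUID
+and network name are placeholders:
+
+```
+resource "openstack_compute_instance_v2" "boot-from-volume" {
+    name = "tf-bfv"
+    flavor_id = "3"
+
+    # Create a 10 GB bootable volume from the image and boot from it
+    block_device {
+        uuid = "ad091b52-742f-469e-8f3c-fd81cadf0743"
+        source_type = "image"
+        volume_size = 10
+        boot_index = 0
+        destination_type = "volume"
+    }
+
+    network {
+        name = "my_network"
+    }
+}
+```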
diff --git a/website/source/docs/providers/openstack/r/compute_keypair_v2.html.markdown b/website/source/docs/providers/openstack/r/compute_keypair_v2.html.markdown new file mode 100644 index 000000000000..0c3beae2798b --- /dev/null +++ b/website/source/docs/providers/openstack/r/compute_keypair_v2.html.markdown @@ -0,0 +1,43 @@ +--- +layout: "openstack" +page_title: "OpenStack: openstack_compute_keypair_v2" +sidebar_current: "docs-openstack-resource-compute-keypair-v2" +description: |- + Manages a V2 keypair resource within OpenStack. +--- + +# openstack\_compute\_keypair_v2 + +Manages a V2 keypair resource within OpenStack. + +## Example Usage + +``` +resource "openstack_compute_keypair_v2" "test-keypair" { + name = "my-keypair" + public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDAjpC1hwiOCCmKEWxJ4qzTTsJbKzndLotBCz5PcwtUnflmU+gHJtWMZKpuEGVi29h0A/+ydKek1O18k10Ff+4tyFjiHDQAnOfgWf7+b1yK+qDip3X1C0UPMbwHlTfSGWLGZqd9LvEFx9k3h/M+VtMvwR1lJ9LUyTAImnNjWG7TaIPmui30HvM2UiFEmqkr4ijq45MyX2+fLIePLRIF61p4whjHAQYufqyno3BS48icQb4p6iVEZPo4AE2o9oIyQvj2mx4dk5Y8CgSETOZTYDOR3rU2fZTRDRgPJDH9FWvQjF5tA0p3d9CoWWd2s6GKKbfoUIi8R/Db1BSPJwkqB" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `region` - (Required) The region in which to obtain the V2 Compute client. + Keypairs are associated with accounts, but a Compute client is needed to + create one. If omitted, the `OS_REGION_NAME` environment variable is used. + Changing this creates a new keypair. + +* `name` - (Required) A unique name for the keypair. Changing this creates a new + keypair. + +* `public_key` - (Required) A pregenerated OpenSSH-formatted public key. + Changing this creates a new keypair. + +## Attributes Reference + +The following attributes are exported: + +* `region` - See Argument Reference above. +* `name` - See Argument Reference above. +* `public_key` - See Argument Reference above. 
diff --git a/website/source/docs/providers/openstack/r/compute_secgroup_v2.html.markdown b/website/source/docs/providers/openstack/r/compute_secgroup_v2.html.markdown new file mode 100644 index 000000000000..5b9538793d31 --- /dev/null +++ b/website/source/docs/providers/openstack/r/compute_secgroup_v2.html.markdown @@ -0,0 +1,76 @@ +--- +layout: "openstack" +page_title: "OpenStack: openstack_compute_secgroup_v2" +sidebar_current: "docs-openstack-resource-compute-secgroup-2" +description: |- + Manages a V2 security group resource within OpenStack. +--- + +# openstack\_compute\_secgroup_v2 + +Manages a V2 security group resource within OpenStack. + +## Example Usage + +``` +resource "openstack_compute_secgroup_v2" "secgroup_1" { + name = "my_secgroup" + description = "my security group" + rule { + from_port = 22 + to_port = 22 + ip_protocol = "tcp" + cidr = "0.0.0.0/0" + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `region` - (Required) The region in which to obtain the V2 Compute client. + A Compute client is needed to create a security group. If omitted, the + `OS_REGION_NAME` environment variable is used. Changing this creates a new + security group. + +* `name` - (Required) A unique name for the security group. Changing this + updates the `name` of an existing security group. + +* `description` - (Required) A description for the security group. Changing this + updates the `description` of an existing security group. + +* `rule` - (Optional) A rule describing how the security group operates. The + rule object structure is documented below. Changing this updates the + security group rules. + +The `rule` block supports: + +* `from_port` - (Required) An integer representing the lower bound of the port +range to open. Changing this creates a new security group rule. + +* `to_port` - (Required) An integer representing the upper bound of the port +range to open. Changing this creates a new security group rule. 
+
+* `ip_protocol` - (Required) The protocol type that will be allowed. Changing
+this creates a new security group rule.
+
+* `cidr` - (Optional) Required if `from_group_id` is empty. The IP range that
+will be the source of network traffic to the security group. Use 0.0.0.0/0
+to allow all IP addresses. Changing this creates a new security group rule.
+
+* `from_group_id` - (Optional) Required if `cidr` is empty. The ID of a group
+from which to forward traffic to the parent group. Changing
+this creates a new security group rule.
+
+* `self` - (Optional) Required if `cidr` and `from_group_id` are empty. If true,
+the security group itself will be added as a source to this ingress rule.
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `region` - See Argument Reference above.
+* `name` - See Argument Reference above.
+* `description` - See Argument Reference above.
+* `rule` - See Argument Reference above.
diff --git a/website/source/docs/providers/openstack/r/lb_monitor_v1.html.markdown b/website/source/docs/providers/openstack/r/lb_monitor_v1.html.markdown
new file mode 100644
index 000000000000..cbf6b2b873a1
--- /dev/null
+++ b/website/source/docs/providers/openstack/r/lb_monitor_v1.html.markdown
@@ -0,0 +1,82 @@
+---
+layout: "openstack"
+page_title: "OpenStack: openstack_lb_monitor_v1"
+sidebar_current: "docs-openstack-resource-lb-monitor-v1"
+description: |-
+  Manages a V1 load balancer monitor resource within OpenStack.
+---
+
+# openstack\_lb\_monitor_v1
+
+Manages a V1 load balancer monitor resource within OpenStack.
+
+## Example Usage
+
+```
+resource "openstack_lb_monitor_v1" "monitor_1" {
+  type = "PING"
+  delay = 30
+  timeout = 5
+  max_retries = 3
+  admin_state_up = "true"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `region` - (Required) The region in which to obtain the V2 Networking client.
+  A Networking client is needed to create an LB monitor.
If omitted, the
+  `OS_REGION_NAME` environment variable is used. Changing this creates a new
+  LB monitor.
+
+* `type` - (Required) The type of probe, which is PING, TCP, HTTP, or HTTPS,
+  that is sent by the monitor to verify the member state. Changing this
+  creates a new monitor.
+
+* `delay` - (Required) The time, in seconds, between sending probes to members.
+  Changing this creates a new monitor.
+
+* `timeout` - (Required) Maximum number of seconds for a monitor to wait for a
+  ping reply before it times out. The value must be less than the delay value.
+  Changing this updates the timeout of the existing monitor.
+
+* `max_retries` - (Required) Number of permissible ping failures before changing
+  the member's status to INACTIVE. Must be a number between 1 and 10. Changing
+  this updates the max_retries of the existing monitor.
+
+* `url_path` - (Optional) Required for HTTP(S) types. URI path that will be
+  accessed if monitor type is HTTP or HTTPS. Changing this updates the
+  url_path of the existing monitor.
+
+* `http_method` - (Optional) Required for HTTP(S) types. The HTTP method used
+  for requests by the monitor. If this attribute is not specified, it defaults
+  to "GET". Changing this updates the http_method of the existing monitor.
+
+* `expected_codes` - (Optional) Required for HTTP(S) types. Expected HTTP codes
+  for a passing HTTP(S) monitor. You can either specify a single status like
+  "200", or a range like "200-202". Changing this updates the expected_codes
+  of the existing monitor.
+
+* `admin_state_up` - (Optional) The administrative state of the monitor.
+  Acceptable values are "true" and "false". Changing this value updates the
+  state of the existing monitor.
+
+* `tenant_id` - (Optional) The owner of the monitor. Required if admin wants to
+  create a monitor for another tenant. Changing this creates a new monitor.
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `region` - See Argument Reference above.
+* `type` - See Argument Reference above.
+* `delay` - See Argument Reference above.
+* `timeout` - See Argument Reference above.
+* `max_retries` - See Argument Reference above.
+* `url_path` - See Argument Reference above.
+* `http_method` - See Argument Reference above.
+* `expected_codes` - See Argument Reference above.
+* `admin_state_up` - See Argument Reference above.
+* `tenant_id` - See Argument Reference above.
diff --git a/website/source/docs/providers/openstack/r/lb_pool_v1.html.markdown b/website/source/docs/providers/openstack/r/lb_pool_v1.html.markdown
new file mode 100644
index 000000000000..5ddbdf1af800
--- /dev/null
+++ b/website/source/docs/providers/openstack/r/lb_pool_v1.html.markdown
@@ -0,0 +1,90 @@
+---
+layout: "openstack"
+page_title: "OpenStack: openstack_lb_pool_v1"
+sidebar_current: "docs-openstack-resource-lb-pool-v1"
+description: |-
+  Manages a V1 load balancer pool resource within OpenStack.
+---
+
+# openstack\_lb\_pool_v1
+
+Manages a V1 load balancer pool resource within OpenStack.
+
+## Example Usage
+
+```
+resource "openstack_lb_pool_v1" "pool_1" {
+  name = "tf_test_lb_pool"
+  protocol = "HTTP"
+  subnet_id = "12345"
+  lb_method = "ROUND_ROBIN"
+  monitor_ids = ["67890"]
+  member {
+    address = "192.168.0.1"
+    port = 80
+    admin_state_up = "true"
+  }
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `region` - (Required) The region in which to obtain the V2 Networking client.
+  A Networking client is needed to create an LB pool. If omitted, the
+  `OS_REGION_NAME` environment variable is used. Changing this creates a new
+  LB pool.
+
+* `name` - (Required) The name of the pool. Changing this updates the name of
+  the existing pool.
+
+* `protocol` - (Required) The protocol used by the pool members; valid values
+  are 'TCP', 'HTTP', and 'HTTPS'. Changing this creates a new pool.
+
+* `subnet_id` - (Required) The network on which the members of the pool will be
+  located.
Only members that are on this network can be added to the pool.
+  Changing this creates a new pool.
+
+* `lb_method` - (Required) The algorithm used to distribute load between the
+  members of the pool. The current specification supports 'ROUND_ROBIN' and
+  'LEAST_CONNECTIONS' as valid values for this attribute.
+
+* `tenant_id` - (Optional) The owner of the pool. Required if admin wants to
+  create a pool member for another tenant. Changing this creates a new pool.
+
+* `monitor_ids` - (Optional) A list of IDs of monitors to associate with the
+  pool.
+
+* `member` - (Optional) An existing node to add to the pool. Changing this
+  updates the members of the pool. The member object structure is documented
+  below.
+
+The `member` block supports:
+
+* `address` - (Required) The IP address of the member. Changing this creates a
+new member.
+
+* `port` - (Required) An integer representing the port on which the member is
+hosted. Changing this creates a new member.
+
+* `admin_state_up` - (Optional) The administrative state of the member.
+Acceptable values are 'true' and 'false'. Changing this value updates the
+state of the existing member.
+
+* `tenant_id` - (Optional) The owner of the member. Required if admin wants to
+create a pool member for another tenant. Changing this creates a new member.
+
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `region` - See Argument Reference above.
+* `name` - See Argument Reference above.
+* `protocol` - See Argument Reference above.
+* `subnet_id` - See Argument Reference above.
+* `lb_method` - See Argument Reference above.
+* `tenant_id` - See Argument Reference above.
+* `monitor_ids` - See Argument Reference above.
+* `member` - See Argument Reference above.
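+
+Rather than hardcoding IDs as in the example above, a pool is typically wired
+to other resources via interpolation. A hedged sketch, assuming a monitor and
+subnet managed elsewhere in the same configuration:
+
+```
+resource "openstack_lb_pool_v1" "pool_1" {
+  name = "tf_test_lb_pool"
+  protocol = "HTTP"
+  subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}"
+  lb_method = "ROUND_ROBIN"
+  monitor_ids = ["${openstack_lb_monitor_v1.monitor_1.id}"]
+}
+```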
diff --git a/website/source/docs/providers/openstack/r/lb_vip_v1.html.markdown b/website/source/docs/providers/openstack/r/lb_vip_v1.html.markdown
new file mode 100644
index 000000000000..7a9bc3d4b0ef
--- /dev/null
+++ b/website/source/docs/providers/openstack/r/lb_vip_v1.html.markdown
@@ -0,0 +1,95 @@
+---
+layout: "openstack"
+page_title: "OpenStack: openstack_lb_vip_v1"
+sidebar_current: "docs-openstack-resource-lb-vip-v1"
+description: |-
+  Manages a V1 load balancer vip resource within OpenStack.
+---
+
+# openstack\_lb\_vip_v1
+
+Manages a V1 load balancer vip resource within OpenStack.
+
+## Example Usage
+
+```
+resource "openstack_lb_vip_v1" "vip_1" {
+  name = "tf_test_lb_vip"
+  subnet_id = "12345"
+  protocol = "HTTP"
+  port = 80
+  pool_id = "67890"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `region` - (Required) The region in which to obtain the V2 Networking client.
+  A Networking client is needed to create a VIP. If omitted, the
+  `OS_REGION_NAME` environment variable is used. Changing this creates a new
+  VIP.
+
+* `name` - (Required) The name of the vip. Changing this updates the name of
+  the existing vip.
+
+* `subnet_id` - (Required) The network on which to allocate the vip's address. A
+  tenant can only create vips on networks authorized by policy (e.g. networks
+  that belong to them or networks that are shared). Changing this creates a
+  new vip.
+
+* `protocol` - (Required) The protocol - can be either 'TCP', 'HTTP', or
+  'HTTPS'. Changing this creates a new vip.
+
+* `port` - (Required) The port on which to listen for client traffic. Changing
+  this creates a new vip.
+
+* `pool_id` - (Required) The ID of the pool with which the vip is associated.
+  Changing this updates the pool_id of the existing vip.
+
+* `tenant_id` - (Optional) The owner of the vip. Required if admin wants to
+  create a vip member for another tenant. Changing this creates a new vip.
+
+* `address` - (Optional) The IP address of the vip.
Changing this creates a new + vip. + +* `description` - (Optional) Human-readable description for the vip. Changing + this updates the description of the existing vip. + +* `persistence` - (Optional) Omit this field to prevent session persistence. + The persistence object structure is documented below. Changing this updates + the persistence of the existing vip. + +* `conn_limit` - (Optional) The maximum number of connections allowed for the + vip. Default is -1, meaning no limit. Changing this updates the conn_limit + of the existing vip. + +* `admin_state_up` - (Optional) The administrative state of the vip. + Acceptable values are "true" and "false". Changing this value updates the + state of the existing vip. + +The `persistence` block supports: + +* `type` - (Required) The type of persistence mode. Valid values are "SOURCE_IP", + "HTTP_COOKIE", or "APP_COOKIE". + +* `cookie_name` - (Optional) The name of the cookie if persistence mode is set + appropriately. + +## Attributes Reference + +The following attributes are exported: + +* `region` - See Argument Reference above. +* `name` - See Argument Reference above. +* `subnet_id` - See Argument Reference above. +* `protocol` - See Argument Reference above. +* `port` - See Argument Reference above. +* `pool_id` - See Argument Reference above. +* `tenant_id` - See Argument Reference above. +* `address` - See Argument Reference above. +* `description` - See Argument Reference above. +* `persistence` - See Argument Reference above. +* `conn_limit` - See Argument Reference above. +* `admin_state_up` - See Argument Reference above. 
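+
+As a sketch of the `persistence` structure described above, a vip that pins
+each client to the same member via a load balancer cookie might look like
+this (the subnet and pool are assumed to be managed in the same
+configuration):
+
+```
+resource "openstack_lb_vip_v1" "vip_1" {
+  name = "tf_test_lb_vip"
+  subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}"
+  protocol = "HTTP"
+  port = 80
+  pool_id = "${openstack_lb_pool_v1.pool_1.id}"
+
+  # Keep each client session on the same pool member
+  persistence {
+    type = "HTTP_COOKIE"
+  }
+}
+```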
diff --git a/website/source/docs/providers/openstack/r/networking_network_v2.html.markdown b/website/source/docs/providers/openstack/r/networking_network_v2.html.markdown
new file mode 100644
index 000000000000..f62454e30f95
--- /dev/null
+++ b/website/source/docs/providers/openstack/r/networking_network_v2.html.markdown
@@ -0,0 +1,53 @@
+---
+layout: "openstack"
+page_title: "OpenStack: openstack_networking_network_v2"
+sidebar_current: "docs-openstack-resource-networking-network-v2"
+description: |-
+  Manages a V2 Neutron network resource within OpenStack.
+---
+
+# openstack\_networking\_network_v2
+
+Manages a V2 Neutron network resource within OpenStack.
+
+## Example Usage
+
+```
+resource "openstack_networking_network_v2" "network_1" {
+  name = "tf_test_network"
+  admin_state_up = "true"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `region` - (Required) The region in which to obtain the V2 Networking client.
+  A Networking client is needed to create a Neutron network. If omitted, the
+  `OS_REGION_NAME` environment variable is used. Changing this creates a new
+  network.
+
+* `name` - (Optional) The name of the network. Changing this updates the name of
+  the existing network.
+
+* `shared` - (Optional) Specifies whether the network resource can be accessed
+  by any tenant or not. Changing this updates the sharing capabilities of the
+  existing network.
+
+* `tenant_id` - (Optional) The owner of the network. Required if admin wants to
+  create a network for another tenant. Changing this creates a new network.
+
+* `admin_state_up` - (Optional) The administrative state of the network.
+  Acceptable values are "true" and "false". Changing this value updates the
+  state of the existing network.
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `region` - See Argument Reference above.
+* `name` - See Argument Reference above.
+* `shared` - See Argument Reference above.
+* `tenant_id` - See Argument Reference above. +* `admin_state_up` - See Argument Reference above. diff --git a/website/source/docs/providers/openstack/r/networking_subnet_v2.html.markdown b/website/source/docs/providers/openstack/r/networking_subnet_v2.html.markdown new file mode 100644 index 000000000000..a8243a81788f --- /dev/null +++ b/website/source/docs/providers/openstack/r/networking_subnet_v2.html.markdown @@ -0,0 +1,98 @@ +--- +layout: "openstack" +page_title: "OpenStack: openstack_networking_subnet_v2" +sidebar_current: "docs-openstack-resource-networking-subnet-v2" +description: |- + Manages a V2 Neutron subnet resource within OpenStack. +--- + +# openstack\_networking\_subnet_v2 + +Manages a V2 Neutron subnet resource within OpenStack. + +## Example Usage + +``` +resource "openstack_networking_network_v2" "network_1" { + name = "tf_test_network" + admin_state_up = "true" +} + +resource "openstack_networking_subnet_v2" "subnet_1" { + network_id = "${openstack_networking_network_v2.network_1.id}" + cidr = "192.168.199.0/24" + ip_version = 4 +} +``` + +## Argument Reference + +The following arguments are supported: + +* `region` - (Required) The region in which to obtain the V2 Networking client. + A Networking client is needed to create a Neutron subnet. If omitted, the + `OS_REGION_NAME` environment variable is used. Changing this creates a new + subnet. + +* `network_id` - (Required) The UUID of the parent network. Changing this + creates a new subnet. + +* `cidr` - (Required) CIDR representing IP range for this subnet, based on IP + version. Changing this creates a new subnet. + +* `ip_version` - (Required) IP version, either 4 or 6. Changing this creates a + new subnet. + +* `name` - (Optional) The name of the subnet. Changing this updates the name of + the existing subnet. + +* `tenant_id` - (Optional) The owner of the subnet. Required if admin wants to + create a subnet for another tenant. Changing this creates a new subnet. 
+ +* `allocation_pools` - (Optional) An array of sub-ranges of CIDR available for + dynamic allocation to ports. The allocation_pool object structure is + documented below. Changing this creates a new subnet. + +* `gateway_ip` - (Optional) Default gateway used by devices in this subnet. + Changing this updates the gateway IP of the existing subnet. + +* `enable_dhcp` - (Optional) Whether DHCP is enabled for this subnet. + Acceptable values are "true" and "false". Changing this value enables or + disables the DHCP capabilities of the existing subnet. + +* `dns_nameservers` - (Optional) An array of DNS name servers used by hosts + in this subnet. Changing this updates the DNS name servers for the existing + subnet. + +* `host_routes` - (Optional) An array of routes that should be used by devices + with IPs from this subnet (not including local subnet route). The host_route + object structure is documented below. Changing this updates the host routes + for the existing subnet. + +The `allocation_pools` block supports: + +* `start` - (Required) The starting address. + +* `end` - (Required) The ending address. + +The `host_routes` block supports: + +* `destination_cidr` - (Required) The destination CIDR. + +* `next_hop` - (Required) The next hop in the route. + +## Attributes Reference + +The following attributes are exported: + +* `region` - See Argument Reference above. +* `network_id` - See Argument Reference above. +* `cidr` - See Argument Reference above. +* `ip_version` - See Argument Reference above. +* `name` - See Argument Reference above. +* `tenant_id` - See Argument Reference above. +* `allocation_pools` - See Argument Reference above. +* `gateway_ip` - See Argument Reference above. +* `enable_dhcp` - See Argument Reference above. +* `dns_nameservers` - See Argument Reference above. +* `host_routes` - See Argument Reference above.
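The `allocation_pools` and `host_routes` blocks documented above can appear together in one subnet; a minimal sketch with illustrative addresses:

```
resource "openstack_networking_subnet_v2" "subnet_1" {
  network_id = "${openstack_networking_network_v2.network_1.id}"
  cidr       = "192.168.199.0/24"
  ip_version = 4

  # only hand out addresses from this sub-range of the CIDR
  allocation_pools {
    start = "192.168.199.100"
    end   = "192.168.199.200"
  }

  # push an extra route to devices in this subnet
  host_routes {
    destination_cidr = "10.0.1.0/24"
    next_hop         = "192.168.199.1"
  }
}
```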
diff --git a/website/source/docs/providers/openstack/r/objectstorage_container_v1.html.markdown b/website/source/docs/providers/openstack/r/objectstorage_container_v1.html.markdown new file mode 100644 index 000000000000..d81eccc53180 --- /dev/null +++ b/website/source/docs/providers/openstack/r/objectstorage_container_v1.html.markdown @@ -0,0 +1,68 @@ +--- +layout: "openstack" +page_title: "OpenStack: openstack_objectstorage_container_v1" +sidebar_current: "docs-openstack-resource-objectstorage-container-v1" +description: |- + Manages a V1 container resource within OpenStack. +--- + +# openstack\_objectstorage\_container_v1 + +Manages a V1 container resource within OpenStack. + +## Example Usage + +``` +resource "openstack_objectstorage_container_v1" "container_1" { + region = "RegionOne" + name = "tf-test-container-1" + metadata { + test = "true" + } + content_type = "application/json" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `region` - (Required) The region in which to create the container. If + omitted, the `OS_REGION_NAME` environment variable is used. Changing this + creates a new container. + +* `name` - (Required) A unique name for the container. Changing this creates a + new container. + +* `container_read` - (Optional) Sets an access control list (ACL) that grants + read access. This header can contain a comma-delimited list of users that + can read the container (allows the GET method for all objects in the + container). Changing this updates the access control list read access. + +* `container_sync_to` - (Optional) The destination for container synchronization. + Changing this updates container synchronization. + +* `container_sync_key` - (Optional) The secret key for container synchronization. + Changing this updates container synchronization. + +* `container_write` - (Optional) Sets an ACL that grants write access. + Changing this updates the access control list write access. 
+ +* `metadata` - (Optional) Custom key/value pairs to associate with the container. + Changing this updates the existing container metadata. + +* `content_type` - (Optional) The MIME type for the container. Changing this + updates the MIME type. + +## Attributes Reference + +The following attributes are exported: + +* `region` - See Argument Reference above. +* `name` - See Argument Reference above. +* `container_read` - See Argument Reference above. +* `container_sync_to` - See Argument Reference above. +* `container_sync_key` - See Argument Reference above. +* `container_write` - See Argument Reference above. +* `metadata` - See Argument Reference above. +* `content_type` - See Argument Reference above. diff --git a/website/source/docs/provisioners/connection.html.markdown b/website/source/docs/provisioners/connection.html.markdown index af55fb2e433e..6d289c6dadb7 100644 --- a/website/source/docs/provisioners/connection.html.markdown +++ b/website/source/docs/provisioners/connection.html.markdown @@ -46,6 +46,8 @@ The following arguments are supported: * `key_file` - The SSH key to use for the connection. This takes preference over the password if provided. +* `agent` - Set to `true` to authenticate using `ssh-agent`. + +* `host` - The address of the resource to connect to. This is provided by the provider. * `port` - The port to connect to. This defaults to 22.
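The new `agent` argument slots into an ordinary `connection` block; a minimal sketch (the user value is illustrative, and a local `ssh-agent` with the key loaded is assumed to be running):

```
provisioner "remote-exec" {
  inline = ["echo connected"]

  connection {
    # authenticate via the local ssh-agent instead of password/key_file
    user  = "ubuntu"
    agent = true
  }
}
```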
diff --git a/website/source/intro/getting-started/install.html.markdown b/website/source/intro/getting-started/install.html.markdown index dc4729bc964c..629e97a5735a 100644 --- a/website/source/intro/getting-started/install.html.markdown +++ b/website/source/intro/getting-started/install.html.markdown @@ -40,11 +40,16 @@ usage: terraform [--version] [--help] [] Available commands are: apply Builds or changes infrastructure + destroy Destroy Terraform-managed infrastructure + get Download and install modules for the configuration graph Create a visual graph of Terraform resources + init Initializes Terraform configuration from a module output Read an output from a state file plan Generate and show an execution plan refresh Update local state file against real resources + remote Configure remote state storage show Inspect Terraform state or plan + taint Manually mark a resource for recreation version Prints the Terraform version ``` diff --git a/website/source/intro/vs/cloudformation.html.markdown b/website/source/intro/vs/cloudformation.html.markdown index 74dc0dc111be..382a76582dba 100644 --- a/website/source/intro/vs/cloudformation.html.markdown +++ b/website/source/intro/vs/cloudformation.html.markdown @@ -37,6 +37,3 @@ phases, meaning operators are forced to mentally reason about the effects of a change, which quickly becomes intractable in large infrastructures. Terraform lets operators apply changes with confidence, as they know exactly what will happen beforehand. - -~> **Note:** It should be clarified that OpenStack provider support is not yet -part of Terraform, though it is actively in the works. 
diff --git a/website/source/layouts/aws.erb b/website/source/layouts/aws.erb index 0bcff9cd831f..8e5875ecb912 100644 --- a/website/source/layouts/aws.erb +++ b/website/source/layouts/aws.erb @@ -98,7 +98,11 @@ > aws_vpc_peering - + + > + aws_vpn_gateway + + diff --git a/website/source/layouts/cloudstack.erb b/website/source/layouts/cloudstack.erb index a0e137aae311..30f69a020e85 100644 --- a/website/source/layouts/cloudstack.erb +++ b/website/source/layouts/cloudstack.erb @@ -1,66 +1,82 @@ <% wrap_layout :inner do %> - <% content_for :sidebar do %> - - <% end %> - - <%= yield %> - <% end %> + <% content_for :sidebar do %> + + <% end %> + + <%= yield %> +<% end %> diff --git a/website/source/layouts/docker.erb b/website/source/layouts/docker.erb new file mode 100644 index 000000000000..920e7aa43054 --- /dev/null +++ b/website/source/layouts/docker.erb @@ -0,0 +1,30 @@ +<% wrap_layout :inner do %> + <% content_for :sidebar do %> + + <% end %> + + <%= yield %> +<% end %> diff --git a/website/source/layouts/docs.erb b/website/source/layouts/docs.erb index fa8bce84a46b..a065d26a6920 100644 --- a/website/source/layouts/docs.erb +++ b/website/source/layouts/docs.erb @@ -45,6 +45,10 @@ Modules + > + Atlas + + @@ -79,6 +83,10 @@ plan + > + push + + > refresh @@ -122,7 +130,7 @@ > DigitalOcean - + > DNSMadeEasy @@ -132,6 +140,10 @@ DNSimple + > + Docker + + > Google Cloud @@ -143,6 +155,10 @@ > Mailgun + + > + OpenStack + @@ -207,6 +223,10 @@ > Resource Lifecycle + + > + Resource Addressing + diff --git a/website/source/layouts/openstack.erb b/website/source/layouts/openstack.erb new file mode 100644 index 000000000000..22afb4aeb2e9 --- /dev/null +++ b/website/source/layouts/openstack.erb @@ -0,0 +1,53 @@ +<% wrap_layout :inner do %> + <% content_for :sidebar do %> + + <% end %> + + <%= yield %> + <% end %>