HEREDOC parsing error after updating to 0.6.7 #4065

Closed
marekrogala opened this issue Nov 25, 2015 · 7 comments
@marekrogala

I'm using a provisioning step that looks like this (but with more lines):

    provisioner "remote-exec" {
        inline = [
<<EOT
sudo \
A=${var.a} \
B=${var.b} \
sh script.sh
EOT
        ]
    }

After upgrading to 0.6.7 it started to fail with the following error:
Error loading config: Error parsing file.tf: At 44:1: unexpected token while parsing list: HEREDOC

Is this expected behavior? Can heredocs be used in lists, and if not, why?

@jen20 (Contributor) commented Nov 25, 2015

If this worked before, I'd call it a regression. I'll investigate fixing it. This was likely introduced in the new HCL parser used in Terraform 0.6.7.

@mrwilby commented Nov 26, 2015

My heredocs also broke in 0.6.7. I'm not sure whether a digit in the heredoc delimiter is invalid (I couldn't quickly find confirmation), but either way, these no longer parse:

variable "whitelist-all" {
default = <<EOF1
...
EOF1

If I change to:

variable "whitelist-all" {
default = <<EOF_ALL
...
EOF_ALL

everything is okay, so it isn't a line-endings problem in my case. It looks like the heredoc parser has been 'improved' in general...

jen20 added a commit to hashicorp/hcl that referenced this issue Nov 26, 2015
This fixes a regression in Terraform where HEREDOCS were previously
supported in lists, reported in hashicorp/terraform#4065.
jen20 added a commit that referenced this issue Nov 26, 2015
@jen20 (Contributor) commented Nov 26, 2015

@marekrogala I've fixed the HCL parser and added better test coverage around this in Terraform itself as well as in the HCL library.

@mrwilby, there were major changes to the entire HCL parser (it was completely rewritten), and Terraform 0.6.7 is the first release of any HashiCorp product to use it. Your example looks good, so if it's not parsing, that is a bug we should address. I'll open a new issue for it though, since the fix here is unlikely to resolve it.
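
For readers following along, here is a minimal sketch of the pattern the fix restores, modeled on the original report; the null_resource wrapper and the echoed text are illustrative placeholders, not taken from the actual test suite:

resource "null_resource" "example" {
  provisioner "remote-exec" {
    inline = [
<<EOT
echo "heredoc used as a list element"
EOT
    ]
  }
}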

@phinze (Contributor) commented Dec 5, 2015

Hey folks, this should be fixed with 0.6.8!

Please feel free to follow up or file a fresh issue if you still see any HEREDOC problems on 0.6.8.

phinze closed this as completed Dec 5, 2015
@TrentNow

Hello Guys,

I am receiving a heredoc issue from the following Terraform script:

provider "aws"{
region = "var.aws_region"
profile = "var.aws_profile"
}

#S3_Access

resource "aws_iam_instance_profile" "s3_access"{
name = "s3_access"
roles = ["${aws_iam_role.s3_access.name}"]
}

resource "aws_iam_role_policy" "s3_role_policy"{
name = "s3_role_policy"
role = "${aws_iam_role.s3_access.id}"
policy = << EOF

{
"Version": "2012-10-17",
"Statement": [

{"Action": "s3:",
"Effect": "Allow",
"Resource": "
"
}
]
}
EOF
}

resource "aws_iam_role" "s3_access"{
name = "s3_access"
assume_role_policy = <<EOF

{
"Version": "2012-10-17",
"Statement" [

{"Action": "sts:AssumeRole",
"Principle": {
"Service": "ec2.amazonaws.com"
},

"Effect": "Allow",
"Sid": ""
}

]

}
}

resource "aws_vpc" "vpc"{
cidr_block = "10.1.0.0/16"
}

resource "aws_internet_gateway" "internet_gateway"{
vpc_id = "${aws_vpc.vpc.id}"
}

resource "aws_route_table" "route_table"{
vpc_id = "${aws_vpc.vpc.id}"

route {
cidr_block = "0.0.0.0/0"
ip_gateway = "${aws_internet_gateway.internet_gateway.id}"
}
}

resource "aws_default_route_table" "private"{
default_route_table_id = "${aws_vpc.vpc.default_route_table_id}"
}

resource "aws_subnet" "public"{
vpc_id = "${aws_vpc.vpc.id}"
cidr_block = "10.1.1.0/24"
map_public_ip_on_launch = true
availability_zone = "us-east-1a"
}

resource "aws_subnet" "private1"{
vpc_id = "${aws_vpc.vpc.id}"
cidr_block = "10.1.2.0/24"
map_public_ip_on_launch = false
availability_zone = "us-east-1a"
}

resource "aws_subnet" "private2"{
vpc_id = "${aws_vpc.vpc.id}"
cidr_block = "10.1.3.0/24"
map_public_ip_on_launch = false
availability_zone = "us-east-1b"
}

resource "aws_subnet" "rds1"{
vpc_id = "${aws_vpc.vpc.id}"
cidr_block = "10.1.4.0/24"
map_public_ip_on_launch = false
availability_zone = "us-east-1a"
}

resource "aws_vpc_endpoint" "private-s3"{
vpc_id = "${aws_vpc.vpc.id}"
service_name = "com.amazonaws.${var.aws_region}.s3"
route_table = "${aws_vpc.vpc.main_route_table_id}"
policy = <<POLICY

{"Version": "2012-10-17",
"Statement": [

{"Action": "",
"Effect": "Allow",
"Prinicple": "
",
"Resource": "*"
}
]
}
POLICY
}

resource "aws_subnet" "rds2"{
vpc_id = "${aws_vpc.vpc.id}"
cidr_block = "10.1.5.0/24"
map_public_ip_on_launch = false
availability_zone = "us-east-1b"
}

resource "aws_subnet" "rds3"{
vpc_id = "${aws_vpc.vpc.id}"
cidr_block = "10.1.6.0/24"
map_public_ip_on_launch = false
availability_zone = "us-east-1c"
}

resource "aws_subnet_association" "public_assoc"{
subnet_id = "${aws_subnet.public.id}"
route_table = "${aws_route_table.public.id}"
}

resource "aws_subnet_association" "private1_assoc"{
subnet_id = "${aws_subnet.private1.id}"
route_table = "${aws_route_table.public.id}"
}

resource "aws_subnet_association" "private2_assoc"{
subnet_id = "${aws_subnet.private2.id}"
route_table = "${aws_route_table.public.id}"
}

resource "aws_db_subnetgroup" "rds_subnetgroup"
name = "rds_sg"
subnet_ids = ["${aws_subnet.rds1.id}", "${aws_subnet.rds2.id}", "${aws_subnet.rds3.id}"]
}

resource "aws_security_group" "public"{
name = "public_sg"
description = "Used for the public and private instances such as ELB's"
vpc_id = "${aws_vpc.vpc.id}"

ingress{
from_port = 22
to_port = 22
protocol = "tcp"
cidr_block = "10.1.0.0/16"
}

ingress {
from_port = 80
to_port = 80
protocol = "true"
cidr_blocks = ["0.0.0.0/0"]
}

egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}

resource "aws_security_group" "private"{
name = "private_sg"
description = "Used for private instances"
vpc_id = "${aws_vpc.vpc.id}"

ingress {
from_port = "0"
to_port = "0"
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}

egress{
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}

resource "aws_security_group" "rds_subnetgroup"
name = "rds_sg"
description = "Used for db instances"
vpc_id = "${aws_vpc.vpc.id}"

ingress{
from_port = 3306
to_port = 3306
protocol = "tcp"
security_group = ["${aws_security_group.public.id}","${aws_security_group.private.id}"]
}

resource "aws_db_instance" "db" {
allocated_storage = 10
engine = "mysql"
engine_version = "5.6.27"
db_instance_class = "${var.db_instance_class}"
name = "${var.dbname}"
username = "${var.dbusername}"
password = "${var.dbpassword}"
db_subnet_group_name = "${aws_subent.rds_subnetgroup.name}"
security_group = "${aws_security_group.RDS.id}"
}

resource "aws_key_pair" "auth"{
key_name = "${var.key_name}"
public_key = "${file(var.public_key_path)}"
}

resource "aws_s3_bucket" "code"{
bucket = "${var.domain_name}_devops"
acl = private
force_destroy = true

}

resource "aws_instance" "dev"{
instance_type = "${var.dev_instance_type}"
ami = "${var.dev_ami}"
tags {
Name = "dev"
}
key_name = "${aws_key_pair.auth.id}"
vpc_security_group = "${aws_security_group.public.id}"
iam_instance_profile = "${aws_iam_instance_profile.s3_access.id}"
subnet_id = "${aws_subnet.public.id}"

provisioner "local-exec" {
command = "cat < aws_hosts
[dev]
${aws_instance.dev.public_ip}
[dev:vars]
s3code=${aws_s3_bucket.code.bucket}
EOF"
}

provisioner "local-exec" {
command = "sleep 6m && ansible-playbook -i aws_hosts wordpress.yml"

}
}

#Load Balancer

resource "aws_elb" "prod"{
name = "${var.domain_name}-prod-elb"
subnets = ["${aws_subnet.private1.id}", "${aws_subnet.private2.id}"]
security_group = "${aws_security_group.public.id}"
listener {
instance_port = 80
instance_protocol = "http"
lb_port = 80
lb_protocol = "http"
}

health_check {
healthy_threshold = "${var.elb_healthy_threshold}"
unhealthy_threshold = "${var.elb_unhealthy_threshold}"
timeout = "${var.elb_timeout}"
target = "HTTP:80/"
interval = "${var.elb_interval}"
}

cross_zone_load_balancing = true
idle_timeout = 400
connection_draining = true
connection_draining_timeout = 400

tags {
Name = "${var.domain_name}-prod-elb"
}
}

#AMI

resource "random_id" "ami"{
byte_length = 8
}

resource "aws_ami_from_instance" "golden"{
name = "ami-${random_id.ami.b64}"
source_instance_id = "${aws_instance.dev.id}"
provisioner = "local_exec" {
command = "cat < userdata

#!/bin/bash
usr/bin/aws s3 sync s3://${aws_s3_bucket.code.bucket} /var/www/html/
/bin/touch /var/spool/cron/root
sudo /bin/echo '*/5 * * * * aws s3 sync s3://${aws_s3_bucket.code.bucket} /var/www/html/' >> /var/spool/cron/root
EOF"
}
}

resource "aws_launch_configuration" "lc"{
name_prefix = "lc-"
image_id = "${aws_ami_from_instance.golden.id}"
instance_type = "${var.lc_instance_type}"
security_groups = ["${aws_security_group.private.id}"]
iam_instance_profile = "${aws_iam_instance_profile.s3_access.id}"
key_name = "${aws_key_pair.auth.id}"
user_data = "${file("userdata")}"
lifecycle {
create_before_destroy = true
}
}

resource "aws_autoscaling_group" "asg"{
availability_zones = ["${var.aws_region}a", "${var.aws_region}c"]
name = "asg-${aws_launch_configuration.lc.id}"
max_size = "${var.asg_max}"
min_size = "${var.asg_min}"
health_check_grace_period = "${var.asg_grace}"
health_check_type = "${var.asg_hct}"
desired_capacity = "${var.asg_cap}"
force_delete = true
load_balancers = ["${aws_elb.prod.id}"]
vpc_zone_identifier = ["${aws_subnet.private1.id}","${aws_subnet.private2.id}"]
launch_configuration = "${aws_launch_configuration.lc.name}"

tag {
key = "Name"
value = "asg-instance"
propagate_at_launch = true
}

lifecycle {
  create_before_destroy = true

}
}

resource "aws_route53_zone" "primary"{
name = "${var.domain_name}.com"
delegation_set_id = "${var.delegation_set}"
}

resource "aws_route53_record" "www"{
zone_id = "${aws_route53_zone.primary.zone_id}"
name = "www.${var.domain_name}.com"
type = "A"

alias {
name = "${aws_elb.prod.dns_name}"
zone_id = "${aws_elb.prod.zone_id}"
evaluate_target_health = false
}
}

resource "aws_route53_record" "dev"{
zone_id = "${aws_route53_zone.primary.zone_id}"
name = "dev.${var.domain_name}"
type = "A"
ttl = "300"
records = ["${aws_instance.dev.public_ip}"]
}

resource "aws_route53_record" "db"{
zone_id = "${aws_route53_zone.primary.zone_id}"
name = "db.${var.domain_name}.com"
type = "CNAME"
ttl = "300"
records = ["${aws_db_instance.db.address}"]
}

@apparentlymart (Contributor)

Hi @TrentNow!

The issue discussed here was fixed and tested several major versions ago, so the error you're seeing is most likely something separate. If you open a new issue and fill out the new issue template, we can dig into it and see what's going on.

Thanks!
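
As a general pointer for readers who land here with similar errors: in HCL, the heredoc delimiter is written immediately after << with no space, and the closing delimiter goes on a line by itself. A minimal sketch of the usual shape, with an illustrative resource name and a placeholder policy body:

resource "aws_iam_role_policy" "example" {
  name   = "example_policy"
  role   = "${aws_iam_role.example.id}"
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
EOF
}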

@ghost commented Apr 8, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

ghost locked and limited conversation to collaborators Apr 8, 2020