
Add plugin/support for AWS Elastic Beanstalk Docker environments #632

Closed
kipsigman opened this issue Jul 29, 2015 · 13 comments

Comments

@kipsigman

I'm running a Play app on AWS Elastic Beanstalk with Docker deployment. I've created the following custom tasks which might work well as a plugin or extension to the DockerPlugin:

// Elastic Beanstalk tasks
lazy val elasticBeanstalkStage = taskKey[Unit]("Create a local directory with all the files for an AWS Elastic Beanstalk Docker distribution.")

elasticBeanstalkStage := {
  // Depends on docker:stage
  val dockerStageValue = (stage in Docker).value

  // Copy Elastic Beanstalk Dockerrun.aws.json configuration file to Docker stagingDirectory
  val elasticBeanstalkSource = baseDirectory.value / "elastic-beanstalk"
  IO.copyDirectory(elasticBeanstalkSource, dockerStageValue, true)
}

lazy val elasticBeanstalkDist = taskKey[File]("Creates a zip for an AWS Elastic Beanstalk Docker distribution")

elasticBeanstalkDist := {
  val log = streams.value.log

  // Depends on elasticBeanstalkStage
  val stageValue = elasticBeanstalkStage.value

  // Zip Docker target
  val dockerStagingDirectory: File = (stagingDirectory in Docker).value

  val zipFile: File = (target.value) / s"${name.value}-${version.value}-elastic-beanstalk.zip"
  log.info(s"Zipping $dockerStagingDirectory to $zipFile")
  Process(s"zip -r $zipFile .", dockerStagingDirectory) !!

  zipFile
}
@muuki88
Contributor

muuki88 commented Jul 29, 2015

Hi @kipsigman, thanks for sharing. If I understand everything correctly (I have not yet used AWS Beanstalk), your extension

  • stages the docker build (because you need the Dockerfile)
  • adds the Beanstalk configs to the docker staging directory
  • zips the staging directory

This would fit perfectly into a small archetype plugin that adds these two tasks. Maybe this could be a custom package format as well, so we don't need any new keys, e.g. beanstalk:packageBin.

One note on the code: using Process for zipping is not a good idea. It shuts Windows users out and relies on system configuration. AFAIK sbt.IO has zipping support, or you can use Java 7 NIO for that.
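For illustration, here is a minimal, dependency-free sketch of such a zip helper using only java.util.zip (the `zipDirectory` name is mine, not part of any plugin API). Note that, unlike the native `zip` binary, java.util.zip does not preserve the executable bit:

```scala
import java.io.{File, FileInputStream, FileOutputStream}
import java.util.zip.{ZipEntry, ZipOutputStream}

// Recursively zip the contents of sourceDir into zipFile.
// Caveat: entry permissions (e.g. the executable flag) are not preserved.
def zipDirectory(sourceDir: File, zipFile: File): File = {
  val out = new ZipOutputStream(new FileOutputStream(zipFile))
  try {
    def addFiles(dir: File, prefix: String): Unit =
      for (f <- Option(dir.listFiles).getOrElse(Array.empty[File])) {
        val name = prefix + f.getName
        if (f.isDirectory) addFiles(f, name + "/")
        else {
          out.putNextEntry(new ZipEntry(name))
          val in = new FileInputStream(f)
          try {
            val buf = new Array[Byte](8192)
            var n = in.read(buf)
            while (n >= 0) { out.write(buf, 0, n); n = in.read(buf) }
          } finally in.close()
          out.closeEntry()
        }
      }
    addFiles(sourceDir, "")
  } finally out.close()
  zipFile
}
```

Within the sbt ecosystem, sbt.IO.zip (or native-packager's own ZipHelper, used further below in this thread) covers the same ground.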

@easel

easel commented Aug 11, 2015

I've got a bunch of code to this end as well: uploading the file to S3 and creating the application via the AWS API. I also have code to generate the Dockerrun.aws.json file to deploy via image pull instead of rebuilding from the zip file.

To create the "full" file I've been overriding the default packageBin task. We probably shouldn't do that if we made a plugin:

    packageBin <<= (baseDirectory, packageBin in Universal,
      relativeDockerMappings, streams) map {
      (bd, artifactPath, mappings, s: TaskStreams) =>
        s.log.info(s"Creating $artifactPath")
        ZipHelper.zip(mappings, artifactPath)
        artifactPath
    } dependsOn (stage in Docker),

Creating the app:

  def publishToElasticBeanstalk(s: TaskStreams, name: String, version: String,
                                zipFile: Types.Id[File]): Unit = {
    import scala.collection.JavaConversions._
    s.log.info(s"Deploying $name version $version")
    val creds: BasicAWSCredentials = new BasicAWSCredentials(accessKey, secretAccessKey)
    val client: AWSElasticBeanstalkClient = new AWSElasticBeanstalkClient(creds)
    val applications: DescribeApplicationVersionsResult = client.describeApplicationVersions()
    s.log.info(applications.getApplicationVersions.toString)
    val versions = applications.getApplicationVersions map {
      _.getVersionLabel
    }
    if (versions.contains(version)) {
      s.log.info("Application already exists")
    } else {
      val request = new CreateApplicationVersionRequest(name, version)
        .withSourceBundle(new S3Location(bucketName, zipFile.getName))
      val result = client.createApplicationVersion(request)
      s.log.info(result.toString)
    }
    val updateEnvironmentRequest = new UpdateEnvironmentRequest()
      .withEnvironmentName(System.getenv("DEPLOYMENT_ENV"))
      .withVersionLabel(version)
    val updateEnvironmentResult = client.updateEnvironment(updateEnvironmentRequest)
    s.log.info(updateEnvironmentResult.toString)
  }

Publishing the image zip or the full zip to s3:

  // Support functions for publishing to an S3 bucket lifted from the S3Plugin
  private def getClient(creds: Seq[Credentials], host: String) = {
    val cred = Credentials.forHost(creds, host) match {
      case Some(cred) => cred
      case None => sys.error("Could not find S3 credentials for the host: " + host)
    }
    // username -> Access Key Id ; passwd -> Secret Access Key
    new AmazonS3Client(new BasicAWSCredentials(cred.userName, cred.passwd),
      new ClientConfiguration().withProtocol(Protocol.HTTPS))
  }

  private def getBucket(host: String) = removeEndIgnoreCase(host, ".s3.amazonaws.com")

  // Define a new "publish aws" task that publishes a zip file to s3 and then
  // initiates deployment via elastic beanstalk by creating a new "application version"
  lazy val publishAws = taskKey[Unit]("publish-aws")

  lazy val awsImageArchiveDockerrun = taskKey[File]("aws-dockerrun-file")

  lazy val awsImageArchive = taskKey[File]("aws-dockerrun-image")

  lazy val uploadAwsImageArchive = taskKey[Unit]("publish-aws-image-archive")

  lazy val publishAwsImage = taskKey[Unit]("publish-aws")

  val dockerRunFileName = "Dockerrun.aws.json"

  val settings: Seq[Setting[_]] = s3Settings ++ Seq(
    credentials += Credentials("Amazon S3", hostName, accessKey, secretAccessKey),

    awsImageArchiveDockerrun := {
      val file = target.value / "publish-aws" / dockerRunFileName
      file.delete()
      IO.write(file, dockerrun(version.value))
      file
    },

    awsImageArchive := {
      val artifactFile = target.value / "publish-aws" / s"aws-image-${version.value}.zip"
      streams.value.log.info(s"Creating ${artifactFile.getName}")
      val filteredMappings = Packaging.relativeDockerMappings.value.filter { case (file, path) =>
        path.startsWith(".ebextensions")
      } ++ Seq((awsImageArchiveDockerrun.value, dockerRunFileName))
      ZipHelper.zip(filteredMappings, artifactFile)
      (publish in Docker).value
      artifactFile
    },

    uploadAwsImageArchive := {
      val client = getClient(credentials.value, hostName)
      val bucket = getBucket(hostName)
      val mappings = Seq (awsImageArchive.value -> awsImageArchive.value.getName)
      mappings foreach { case (file, key) =>
        streams.value.log.info("Uploading " + file.getAbsolutePath + " as " + key + " into " + bucket)
        val request = new PutObjectRequest(bucket, key, file)
        client.putObject(request)
      }
      streams.value.log.info("Uploaded " + mappings.length + " files to the S3 bucket \"" + bucket + "\".")
    },

    publishAwsImage := {
      uploadAwsImageArchive.value
      Publishing.publishToElasticBeanstalk(
        streams.value,
        name.value,
        version.value,
        awsImageArchive.value
      )
    },

    // Publish a full elastic beanstalk zip bundle, including all the sources to build
    // a Docker image from scratch
    mappings in upload <+= (packageBin in Universal) map { artifactPath =>
      artifactPath -> artifactPath.getName
    },

    host in upload := hostName,

    upload <<= upload dependsOn packageBin,


    publishAws <<= (streams, name, version, packageBin in Universal) map {
      (s: TaskStreams, name: String, version: String, packageBin: Types.Id[File]) =>
        publishToElasticBeanstalk(s, name, version, packageBin)
    } dependsOn upload
  )

@muuki88 any pointers to a good starting place for creating an archetype plugin?
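(As an aside, the `removeEndIgnoreCase` used by `getBucket` above comes from Apache Commons Lang's StringUtils. A dependency-free sketch of the same behavior:)

```scala
// Dependency-free equivalent of Apache Commons Lang's
// StringUtils.removeEndIgnoreCase: strip a suffix, ignoring case.
def removeEndIgnoreCase(str: String, suffix: String): String =
  if (str.toLowerCase.endsWith(suffix.toLowerCase))
    str.substring(0, str.length - suffix.length)
  else str

// Derive the bucket name from an S3 virtual-hosted host name.
def getBucket(host: String): String =
  removeEndIgnoreCase(host, ".s3.amazonaws.com")
```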

@muuki88
Contributor

muuki88 commented Aug 11, 2015

Hi @easel, that looks pretty sophisticated already! IMHO this looks more like a new packaging format than an archetype. A special docker format, if you want. In that case a good starting point is the DockerPlugin itself or the RpmPlugin. The plugins' code is well structured and documented.

I'd be really happy if you made this contribution. However, if you implement this in the sbt-native-packager framework you won't be able to schedule your own releases and will sometimes have to wait a bit. And I will assign you to reported bugs ;) If that's okay for you, awesome :)

A technical question: we would have to depend on the aws-client library, right? But it's pure Java, so we won't have issues with the Scala 2.10/2.11/2.12 stuff.

@easel

easel commented Aug 12, 2015

Let me take a look at the plugins you referenced and get back to you. I use this code and both of those every day, so I have to understand how they work anyway. For simplicity's sake, I think I'll just hack it into a fork and then you can take a look at it and we can decide whether to bring it into the main tree or keep it separate.

Thanks for the feedback!

@fiadliel
Contributor

Just a note here: I'm a little concerned about adding the AWS client libraries, from a dependency point of view. They in turn pull in Apache HttpClient, Jackson and Joda-Time, with potentially large dependency changes across major releases.

I know that at work it's a big deal which HttpClient API is used, since it's such a common dependency across libraries.

Major-version updates to AWS don't happen that often; the 1.10.0 release happened in June, after a fairly lengthy 1.9 series. But if any other plugins also depend on this, or on any of the above libraries, it might eventually be necessary to either support only a subset of versions or cross-build.

If this change can be written in such a way as to avoid the extra dependencies and features if they aren't used, I wouldn't be as worried.

@easel

easel commented Aug 24, 2015

Yeah, I agree. On the other hand, duplicating them significantly increases the risk of failing to keep up with AWS and things breaking. Right now I'm working towards a separate plugin that just depends on native packager so we can keep those dependencies out of the main tree. Not far enough along to know if that's going to work or not.

@fiadliel
Contributor

I think it might be best... on a related note, having a prominent page on the sbt-native-packager website saying "these plugins work well with sbt-native-packager" might be nice to have, like the SBT site has.

@muuki88
Contributor

muuki88 commented Aug 24, 2015

@fiadliel I like this idea. Currently there is only a Related SBT Plugins list in the README.

@easel ping here when you have released something :) eager to take a look

@naderghanbari

Has anyone here faced a problem with zip files created by native packager? Basically I did exactly the same thing (deploying the universal package with the AWS EB command line), but somehow the zip file cannot be opened by EB. Before, I was using the native zip utility and everything worked, but then I made my plugin depend on the Universal native packager and for some reason it does not work now.

@muuki88
Contributor

muuki88 commented Oct 10, 2015

On which system are you building the zip? Non-native zip (Java) has some problems with the executable flag (mainly on OS X).

@naderghanbari

I'm building on Mac OS X indeed, so I should use the native zip utility. I've seen a flag like this in the code of sbt-native-packager but I couldn't find the settings key for it. How do I make the universal plugin use the native zip utility instead of Java?

@muuki88
Contributor

muuki88 commented Oct 12, 2015

I think it's just a matter of adding this line to your build.sbt:

useNativeZip
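For context, useNativeZip is a sequence of settings shipped with sbt-native-packager that switches universal packaging from Java's zip implementation to the system's native zip utility (which preserves executable bits). A minimal build.sbt sketch, assuming sbt-native-packager 1.x with the JavaAppPackaging archetype:

```scala
// build.sbt (sketch): package the universal distribution with the
// system's native zip instead of the Java implementation
enablePlugins(JavaAppPackaging)

// appends the native-zip settings; it is a Seq[Setting[_]], not a key
useNativeZip
```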

@naderghanbari

Thanks, I thought it was a key so I tried something like useNativeZip := true, but looking at the code after your comment I realised that it's a Setting.

@muuki88 muuki88 closed this as completed Sep 9, 2020