diff --git a/content/ko/case-studies/_index.html b/content/ko/case-studies/_index.html
index 28bdab44afc5f..d863ae158ff75 100644
--- a/content/ko/case-studies/_index.html
+++ b/content/ko/case-studies/_index.html
@@ -1,7 +1,7 @@
---
-title: Case Studies
-linkTitle: Case Studies
-bigheader: Kubernetes User Case Studies
+title: 사례 연구
+linkTitle: 사례 연구
+bigheader: 쿠버네티스 사용자 사례 연구
abstract: A collection of users running Kubernetes in production.
layout: basic
class: gridPage
diff --git a/content/ko/case-studies/adform/adform_featured_logo.png b/content/ko/case-studies/adform/adform_featured_logo.png
new file mode 100644
index 0000000000000..7e3be727e39e6
Binary files /dev/null and b/content/ko/case-studies/adform/adform_featured_logo.png differ
diff --git a/content/ko/case-studies/adform/index.html b/content/ko/case-studies/adform/index.html
new file mode 100644
index 0000000000000..e9a8acc7a22f2
--- /dev/null
+++ b/content/ko/case-studies/adform/index.html
@@ -0,0 +1,118 @@
+---
+title: Adform Case Study
+linkTitle: Adform
+case_study_styles: true
+cid: caseStudies
+css: /css/style_case_studies.css
+logo: adform_featured_logo.png
+draft: false
+featured: true
+weight: 47
+quote: >
+ Kubernetes enabled the self-healing and immutable infrastructure. We can do faster releases, so our developers are really happy. They can ship our features faster than before, and that makes our clients happier.
+---
+
+
+
CASE STUDY:
Improving Performance and Morale with Cloud Native
+
+
+
+
+
+
+ Company Adform Location Copenhagen, Denmark Industry Adtech
+
+
+
+
+
+
+
Challenge
+ Adform’s mission is to provide a secure and transparent full stack of advertising technology to enable digital ads across devices. The company has a large infrastructure: OpenStack-based private clouds running on 1,100 physical servers in 7 data centers around the world, 3 of which were opened in the past year. With the company’s growth, the infrastructure team felt that "our private cloud was not really flexible enough," says IT Systems Engineer Edgaras Apšega. "The biggest pain point is that our developers need to maintain their virtual machines, so rolling out technology and new software takes time. We were really struggling with our releases, and we didn’t have self-healing infrastructure."
+
+
+
+
+
Solution
+ The team, which had already been using Prometheus for monitoring, embraced Kubernetes and cloud native practices in 2017. "To start our Kubernetes journey, we had to adapt all our software, so we had to choose newer frameworks," says Apšega. "We also adopted the microservices way, so observability is much better because you can inspect the bug or the services separately."
+
+
+
+
+
+
+
Impact
+ "Kubernetes helps our business a lot because our features are coming to market faster," says Apšega. The release process went from several hours to several minutes. Autoscaling has been at least 6 times faster than the semi-manual VM bootstrapping and application deployment required before. The team estimates that the company has experienced cost savings of 4-5x due to less hardware and fewer man hours needed to set up the hardware and virtual machines, metrics, and logging. Utilization of the hardware resources has been reduced as well, with containers notching 2-3 times more efficiency over virtual machines. "The deployments are very easy because developers just push the code and it automatically appears on Kubernetes," says Apšega. Prometheus has also had a positive impact: "It provides high availability for metrics and alerting. We monitor everything starting from hardware to applications. Having all the metrics in Grafana dashboards provides great insight on your systems."
+
+
+
+
+
+
+
+
+"Kubernetes enabled the self-healing and immutable infrastructure. We can do faster releases, so our developers are really happy. They can ship our features faster than before, and that makes our clients happier."
— Edgaras Apšega, IT Systems Engineer, Adform
+
+
+
+
+
+
+
+
Adform made headlines last year when it detected the HyphBot ad fraud network that was costing some businesses hundreds of thousands of dollars a day.
With its mission to provide a secure and transparent full stack of advertising technology to enable an open internet, Adform published a white paper revealing what it did—and others could too—to limit customers’ exposure to the scam.
+In that same spirit, Adform is sharing its cloud native journey. "When you see that everyone shares their best practices, it inspires you to contribute back to the project," says IT Systems Engineer Edgaras Apšega.
+The company has a large infrastructure: OpenStack-based private clouds running on 1,100 physical servers in their own seven data centers around the world, three of which were opened in the past year. With the company’s growth, the infrastructure team felt that "our private cloud was not really flexible enough," says Apšega. "The biggest pain point is that our developers need to maintain their virtual machines, so rolling out technology and new software really takes time. We were really struggling with our releases, and we didn’t have self-healing infrastructure."
+
+
+
+
+
+
+ "The fact that Cloud Native Computing Foundation incubated Kubernetes was a really big point for us because it was vendor neutral. And we can see that a community really gathers around it. Everyone shares their experiences, their knowledge, and the fact that it’s open source, you can contribute."
— Edgaras Apšega, IT Systems Engineer, Adform
+
+
+
+
+
+The team, which had already been using Prometheus for monitoring, embraced Kubernetes, microservices, and cloud native practices. "The fact that Cloud Native Computing Foundation incubated Kubernetes was a really big point for us because it was vendor neutral," says Apšega. "And we can see that a community really gathers around it."
+A proof of concept project was started, with a Kubernetes cluster running on bare metal in the data center. When developers saw how quickly containers could be spun up compared to the virtual machine process, "they wanted to ship their containers in production right away, and we were still doing proof of concept," says IT Systems Engineer Andrius Cibulskis.
+Of course, a lot of work still had to be done. "First of all, we had to learn Kubernetes, see all of the moving parts, how they glue together," says Apšega. "Second of all, the whole CI/CD part had to be redone, and our DevOps team had to invest more man hours to implement it. And third is that developers had to rewrite the code, and they’re still doing it."
+
+The first production cluster was launched in the spring of 2018, and is now up to 20 physical machines dedicated to pods throughout three data centers, with plans for separate clusters in the other four data centers. The user-facing Adform application platform, data distribution platform, and back ends are now all running on Kubernetes. "Many APIs for critical applications are being developed for Kubernetes," says Apšega. "Teams are rewriting their applications to .NET Core, because it supports containers, and preparing to move to Kubernetes. And new applications, by default, go in containers."
+
+
+
+
+
+
+"Releases are really nice for them, because they just push their code to Git and that’s it. They don’t have to worry about their virtual machines anymore."
— Andrius Cibulskis, IT Systems Engineer, Adform
+
+
+
+
+
+This big push has been driven by the real impact that these new practices have had. "Kubernetes helps our business a lot because our features are coming to market faster," says Apšega. "The deployments are very easy because developers just push the code and it automatically appears on Kubernetes." The release process went from several hours to several minutes. Autoscaling is at least six times faster than the semi-manual VM bootstrapping and application deployment required before.
+The team estimates that the company has experienced cost savings of 4-5x due to less hardware and fewer man hours needed to set up the hardware and virtual machines, metrics, and logging. Hardware requirements have been reduced as well, with containers proving two to three times more efficient than virtual machines.
+Prometheus has also had a positive impact: "It provides high availability for metrics and alerting," says Apšega. "We monitor everything starting from hardware to applications. Having all the metrics in Grafana dashboards provides great insight on our systems."
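+
+That hardware-to-application coverage maps naturally onto Prometheus's scrape configuration. A minimal sketch follows, assuming hypothetical job names and targets rather than Adform's actual setup:
+
+```yaml
+# prometheus.yml: minimal sketch; job names and targets are hypothetical.
+global:
+  scrape_interval: 30s
+
+scrape_configs:
+  # Hardware and OS metrics, scraped from node_exporter on each server.
+  - job_name: 'node'
+    static_configs:
+      - targets: ['bare-metal-01:9100', 'bare-metal-02:9100']
+
+  # Application metrics, discovered from pods running in Kubernetes.
+  - job_name: 'kubernetes-pods'
+    kubernetes_sd_configs:
+      - role: pod
+```
+
+Grafana dashboards like the ones Apšega describes then read these series directly, with Prometheus configured as a data source.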
+
+
+
+
+
+
+
+ "I think that our company just started our cloud native journey. It seems like a huge road ahead, but we’re really happy that we joined it."
— Edgaras Apšega, IT Systems Engineer, Adform
+
+
+
+
+All of these benefits have trickled down to individual team members, whose working lives have been changed for the better. "They used to have to get up at night to restart some services, and now Kubernetes handles all of that," says Apšega. Adds Cibulskis: "Releases are really nice for them, because they just push their code to Git and that’s it. They don’t have to worry about their virtual machines anymore." Even the security teams have been impacted. "Security teams are always not happy," says Apšega, "and now they’re happy because they can easily inspect the containers."
+The company plans to remain in the data centers for now, "mostly because we want to keep all the data, to not share it in any way," says Cibulskis, "and it’s cheaper at our scale." But, Apšega says, the possibility of using a hybrid cloud for computing is intriguing: "One of the projects we’re interested in is the Virtual Kubelet that lets you spin up the working nodes on different clouds to do some computing."
+
+Apšega, Cibulskis and their colleagues are keeping tabs on how the cloud native ecosystem develops, and are excited to contribute where they can. "I think that our company just started our cloud native journey," says Apšega. "It seems like a huge road ahead, but we’re really happy that we joined it."
+
+
+
+
+
+
diff --git a/content/ko/case-studies/amadeus/amadeus_featured.png b/content/ko/case-studies/amadeus/amadeus_featured.png
new file mode 100644
index 0000000000000..d23d7b0163854
Binary files /dev/null and b/content/ko/case-studies/amadeus/amadeus_featured.png differ
diff --git a/content/ko/case-studies/amadeus/amadeus_logo.png b/content/ko/case-studies/amadeus/amadeus_logo.png
new file mode 100644
index 0000000000000..6191c7f6819f2
Binary files /dev/null and b/content/ko/case-studies/amadeus/amadeus_logo.png differ
diff --git a/content/ko/case-studies/amadeus/index.html b/content/ko/case-studies/amadeus/index.html
new file mode 100644
index 0000000000000..8b6294d4f9afa
--- /dev/null
+++ b/content/ko/case-studies/amadeus/index.html
@@ -0,0 +1,105 @@
+---
+title: Amadeus Case Study
+
+case_study_styles: true
+cid: caseStudies
+css: /css/style_amadeus.css
+---
+
+
+
CASE STUDY:
Another Technical Evolution for a 30-Year-Old Company
+
+
+
+ Company Amadeus IT Group Location Madrid, Spain Industry Travel Technology
+
+
+
+
+
+
Challenge
+ In the past few years, Amadeus, which provides IT solutions to the travel industry around the world, found itself in need of a new platform for the 5,000 services supported by its service-oriented architecture. The 30-year-old company operates its own data center in Germany, and there were growing demands internally and externally for solutions that needed to be geographically dispersed. And more generally, "we had objectives of being even more highly available," says Eric Mountain, Senior Expert, Distributed Systems at Amadeus. Among the company’s goals: to increase automation in managing its infrastructure, optimize the distribution of workloads, use data center resources more efficiently, and adopt new technologies more easily.
+
+
+
Solution
+ Mountain has been overseeing the company’s migration to Kubernetes, using OpenShift Container Platform, Red Hat’s enterprise container platform.
+
+
Impact
+ One of the first projects the team deployed in Kubernetes was the Amadeus Airline Cloud Availability solution, which helps manage ever-increasing flight-search volume. "It’s now handling in production several thousand transactions per second, and it’s deployed in multiple data centers throughout the world," says Mountain. "It’s not a migration of an existing workload; it’s a whole new workload that we couldn’t have done otherwise. [This platform] gives us access to market opportunities that we didn’t have before."
+
+
+
+
+
+
+ "We want multi-data center capabilities, and we want them for our mainstream system as well. We didn’t think that we could achieve them with our existing system. We need new automation, things that Kubernetes and OpenShift bring." - Eric Mountain, Senior Expert, Distributed Systems at Amadeus IT Group
+
+
+
+
+
+
In his two decades at Amadeus, Eric Mountain has been the migrations guy.
+ Back in the day, he worked on the company’s move from Unix to Linux, and now he’s overseeing the journey to cloud native. "Technology just keeps changing, and we embrace it," he says. "We are celebrating our 30 years this year, and we continue evolving and innovating to stay cost-efficient and enhance everyone’s travel experience, without interrupting workflows for the customers who depend on our technology."
+ That was the challenge that Amadeus—which provides IT solutions to the travel industry around the world, from flight searches to hotel bookings to customer feedback—faced in 2014. The technology team realized it was in need of a new platform for the 5,000 services supported by its service-oriented architecture.
+ The tipping point occurred when they began receiving many requests, internally and externally, for solutions that needed to be geographically outside the company’s main data center in Germany. "Some requests were for running our applications on customer premises," Mountain says. "There were also new services we were looking to offer that required response times on the order of a few hundred milliseconds, which we couldn’t achieve with transatlantic traffic. Or at least, not without eating into a considerable portion of the time available to our applications for them to process individual queries."
+ More generally, the company was interested in leveling up on high availability, increasing automation in managing infrastructure, optimizing the distribution of workloads and using data center resources more efficiently. "We have thousands and thousands of servers," says Mountain. "These servers are assigned roles, so even if the setup is highly automated, the machine still has a given role. It’s wasteful on many levels. For instance, an application doesn’t necessarily use the machine very optimally. Virtualization can help a bit, but it’s not a silver bullet. If that machine breaks, you still want to repair it because it has that role and you can’t simply say, ‘Well, I’ll bring in another machine and give it that role.’ It’s not fast. It’s not efficient. So we wanted the next level of automation."
+
+
+
+
+
+ "We hope that if we build on what others have built, what we do might actually be upstream-able. As Kubernetes and OpenShift progress, we see that we are indeed able to remove some of the additional layers we implemented to compensate for gaps we perceived earlier."
+
+
+
+
+
+ While mainly a C++ and Java shop, Amadeus also wanted to be able to adopt new technologies more easily. Some of its developers had started using languages like Python and databases like Couchbase, but Mountain wanted still more options, he says, "in order to better adapt our technical solutions to the products we offer, and open up entirely new possibilities to our developers." Working with recent technologies and cool new things would also make it easier to attract new talent.
+
+ All of those needs led Mountain and his team on a search for a new platform. "We did a set of studies and proofs of concept over a fairly short period, and we considered many technologies," he says. "In the end, we were left with three choices: build everything on premise, build on top of Kubernetes whatever happens to be missing from our point of view, or go with OpenShift and build whatever remains there."
+
+ The team decided against building everything themselves—though they’d done that sort of thing in the past—because "people were already inventing things that looked good," says Mountain.
+
+ Ultimately, they went with OpenShift Container Platform, Red Hat’s Kubernetes-based enterprise offering, instead of building on top of Kubernetes because "there was a lot of synergy between what we wanted and the way Red Hat was anticipating going with OpenShift," says Mountain. "They were clearly developing Kubernetes, and developing certain things ahead of time in OpenShift, which were important to us, such as more security."
+
+ The hope was that those particular features would eventually be built into Kubernetes, and, in the case of security, Mountain feels that has happened. "We realize that there’s always a certain amount of automation that we will probably have to develop ourselves to compensate for certain gaps," says Mountain. "The less we do that, the better for us. We hope that if we build on what others have built, what we do might actually be upstream-able. As Kubernetes and OpenShift progress, we see that we are indeed able to remove some of the additional layers we implemented to compensate for gaps we perceived earlier."
+
+
+
+
+
+ "It’s not a migration of an existing workload; it’s a whole new workload that we couldn’t have done otherwise. [This platform] gives us access to market opportunities that we didn’t have before."
+
+
+
+
+
+ The first project the team tackled was one that they knew had to run outside the data center in Germany. Because of the project’s needs, "We couldn’t rely only on the built-in Kubernetes service discovery; we had to layer on top of that an extra service discovery level that allows us to load balance at the operation level within our system," says Mountain. They also built a stream dedicated to monitoring, which at the time wasn’t offered in the Kubernetes or OpenShift ecosystem. Now that Prometheus and other products are available, Mountain says the company will likely re-evaluate their monitoring system: "We obviously always like to leverage what Kubernetes and OpenShift can offer."
+
+ The second project ended up going into production first: the Amadeus Airline Cloud Availability solution, which helps manage ever-increasing flight-search volume and was deployed in public cloud. Launched in early 2016, it is "now handling in production several thousand transactions per second, and it’s deployed in multiple data centers throughout the world," says Mountain. "It’s not a migration of an existing workload; it’s a whole new workload that we couldn’t have done otherwise. [This platform] gives us access to market opportunities that we didn’t have before."
+
+ Having been through this kind of technical evolution more than once, Mountain has advice on how to handle the cultural changes. "That’s one aspect that we can tackle progressively," he says. "We have to go on supplying our customers with new features on our pre-existing products, and we have to keep existing products working. So we can’t simply do absolutely everything from one day to the next. And we mustn’t sell it that way."
+
+ The first order of business, then, is to pick one or two applications to demonstrate that the technology works. Rather than choosing a high-impact, high-risk project, Mountain’s team selected a smaller application that was representative of all the company’s other applications in its complexity: "We just made sure we picked something that’s complex enough, and we showed that it can be done."
+
+
+
+
+
+ "The bottom line is we want these multi-data center capabilities, and we want them as well for our mainstream system," he says. "And we don’t think that we can implement them with our previous system. We need the new automation, homogeneity, and scale that Kubernetes and OpenShift bring."
+
+
+
+
+
+ Next comes convincing people. "On the operations side and on the R&D side, there will be people who say quite rightly, ‘There is a system, and it works, so why change?’" Mountain says. "The only thing that really convinces people is showing them the value." For Amadeus, people realized that the Airline Cloud Availability product could not have been made available on the public cloud with the company’s existing system. The question then became, he says, "Do we go into a full-blown migration? Is that something that is justified?"
+
+ "The bottom line is we want these multi-data center capabilities, and we want them as well for our mainstream system," he says. "And we don’t think that we can implement them with our previous system. We need the new automation, homogeneity, and scale that Kubernetes and OpenShift bring."
+
+ So how do you get everyone on board? "Make sure you have good links between your R&D and your operations," he says. "Also make sure you’re going to talk early on to the investors and stakeholders. Figure out what it is that they will be expecting from you, that will convince them or not, that this is the right way for your company."
+
+ His other advice is simply to make the technology available for people to try it. "Kubernetes and OpenShift Origin are open source software, so there’s no complicated license key for the evaluation period and you’re not limited to 30 days," he points out. "Just go and get it running." Along with that, he adds, "You’ve got to be prepared to rethink how you do things. Of course making your applications as cloud native as possible is how you’ll reap the most benefits: 12 factors, CI/CD, which is continuous integration, continuous delivery, but also continuous deployment."
+
+ And while they explore that aspect of the technology, Mountain and his team will likely be practicing what he preaches to others taking the cloud native journey. "See what happens when you break it, because it’s important to understand the limits of the system," he says. Or rather, he notes, the advantages of it. "Breaking things on Kube is actually one of the nice things about it—it recovers. It’s the only real way that you’ll see that you might be able to do things."
+
+
diff --git a/content/ko/case-studies/ancestry/ancestry_featured.png b/content/ko/case-studies/ancestry/ancestry_featured.png
new file mode 100644
index 0000000000000..6d63daae32139
Binary files /dev/null and b/content/ko/case-studies/ancestry/ancestry_featured.png differ
diff --git a/content/ko/case-studies/ancestry/ancestry_logo.png b/content/ko/case-studies/ancestry/ancestry_logo.png
new file mode 100644
index 0000000000000..5fbade8decbc1
Binary files /dev/null and b/content/ko/case-studies/ancestry/ancestry_logo.png differ
diff --git a/content/ko/case-studies/ancestry/index.html b/content/ko/case-studies/ancestry/index.html
new file mode 100644
index 0000000000000..a992a284ac86b
--- /dev/null
+++ b/content/ko/case-studies/ancestry/index.html
@@ -0,0 +1,117 @@
+---
+title: Ancestry Case Study
+
+case_study_styles: true
+cid: caseStudies
+css: /css/style_ancestry.css
+---
+
+
+
CASE STUDY:
Digging Into the Past With New Technology
+
+
+
+
+ Company Ancestry Location Lehi, Utah Industry Internet Company, Online Services
+
+
+
+
+
+
+
+
+
Challenge
+Ancestry, the global leader in family history and consumer genomics, uses sophisticated engineering and technology to help everyone, everywhere discover the story of what led to them. The company has spent more than 30 years innovating and building products and technologies that, at their core, result in real and emotional human responses. Ancestry currently serves more than 2.6 million paying subscribers, holds 20 billion historical records and 90 million family trees, and has more than four million people in its AncestryDNA network, making it the largest consumer genomics DNA network in the world. The company's popular website, ancestry.com, had been working with big data long before the term was popularized. The site was built on hundreds of services and technologies and a traditional deployment methodology. "It's worked well for us in the past," says Paul MacKay, software engineer and architect at Ancestry, "but it had become quite cumbersome in its processing and time-consuming. As a primarily online service, we are constantly looking for ways to accelerate, to be more agile in delivering our solutions and our products."
+
+
+
+
+
+
+
Solution
+
+ The company is transitioning to cloud native infrastructure, using Docker containerization, Kubernetes orchestration and Prometheus for cluster monitoring.
+
+
Impact
+ "Every single product, every decision we make at Ancestry, focuses on delighting our customers with intimate, sometimes life-changing discoveries about themselves and their families," says MacKay. "As the company continues to grow, the increased productivity gains from using Kubernetes has helped Ancestry make customer discoveries faster. With the move to Dockerization for example, instead of taking between 20 to 50 minutes to deploy a new piece of code, we can now deploy in under a minute for much of our code. We’ve truly experienced significant time savings in addition to the various features and benefits from cloud native and Kubernetes-type technologies."
+
+
+
+
+
+
+ "At a certain point, you have to step back if you're going to push a new technology and get key thought leaders with engineers within the organization to become your champions for new technology adoption. At training sessions, the development teams were always the ones that were saying, 'Kubernetes saved our time tremendously; it's an enabler. It really is incredible.'"
- PAUL MACKAY, SOFTWARE ENGINEER AND ARCHITECT AT ANCESTRY
+
+
+
+
+
+
It started with a Shaky Leaf.
+
+ Since its introduction a decade ago, the Shaky Leaf icon has become one of Ancestry's signature features, signaling to users that there's a helpful hint they can use to find out more about their family tree.
+ So when the company decided to begin moving its infrastructure to cloud native technology, the first service that was launched on Kubernetes, the open source platform for managing application containers across clusters of hosts, was this hint system. Think of it as Amazon's recommended products, but instead of recommending products the company recommends records, stories, or familial connections. "It was a very important part of the site," says Ancestry software engineer and architect Paul MacKay, "but also small enough for a pilot project that we knew we could handle in a very appropriate, secure way."
+ And when it went live smoothly in early 2016, "our deployment time for this service literally was cut down from 50 minutes to 2 or 5 minutes," MacKay adds. "The development team was just thrilled because we're focused on supplying a great experience for our customers. And that means features, it means stability, it means all those things that we need for a first-in-class type operation."
+ The stability of that Shaky Leaf was a signal for MacKay and his team that their decision to embrace cloud native technologies was the right one for the company. With a private data center, Ancestry built its website (which launched in 1996) on hundreds of services and technologies and a traditional deployment methodology. "It worked well for us in the past, but the sum of the legacy systems became quite cumbersome in its processing and was time-consuming," says MacKay. "We were looking for other ways to accelerate, to be more agile in delivering our solutions and our products."
+
+
+
+
+
+"And when it [Kubernetes] went live smoothly in early 2016, 'our deployment time for this service literally was cut down from 50 minutes to 2 or 5 minutes,' MacKay adds. 'The development team was just thrilled because we're focused on supplying a great experience for our customers. And that means features, it means stability, it means all those things that we need for a first-in-class type operation.'"
+
+
+
+
+
+ That need led them in 2015 to explore containerization. Ancestry engineers had already been using technology like Java and Python on Linux, so part of the decision was about making the infrastructure more Linux-friendly. They quickly decided that they wanted to go with Docker for containerization, "but it always comes down to the orchestration part of it to make it really work," says MacKay.
+ His team looked at orchestration platforms offered by Docker Compose, Mesos and OpenStack, and even started to prototype some homegrown solutions. And then they started hearing rumblings of the imminent release of Kubernetes v1.0. "At the forefront, we were looking at the secret store, so we didn't have to manage that all ourselves, the config maps, the methodology of seamless deployment strategy," he says. "We found that how Kubernetes had done their resources, their types, their labels and just their interface was so much further advanced than the other things we had seen. It was a feature fit."
+
+ Plus, MacKay says, "I just believed in the confidence that comes with the history that Google has with containerization. So we started out right on the leading edge of it. And we haven't looked back since."
+ Which is not to say that adopting a new technology hasn't come with some challenges. "Change is hard," says MacKay. "Not because the technology is hard or that the technology is not good. It's just that people like to do things like they had done [before]. You have the early adopters and you have those who are coming in later. It was a learning experience on both sides."
+ Figuring out the best deployment operations for Ancestry was a big part of the work it took to adopt cloud native infrastructure. "We want to make sure the process is easy and also controlled in the manner that allows us the highest degree of security that we demand and our customers demand," says MacKay. "With Kubernetes and other products, there are some good solutions, but a little bit of glue is needed to bring it into corporate processes and governances. It's like having a set of gloves that are generic, but when you really do want to grab something you have to make it so it's customized to you. That's what we had to do."
+ Their best practices include allowing their developers to deploy into development stage and production, but then controlling the aspects that need governance and auditing, such as secrets. They found that having one namespace per service is useful for achieving that containment of secrets and config maps. And for their needs, having one container per pod makes it easier to manage and to have a smaller unit of deployment.
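+
+ A minimal sketch of that layout, with hypothetical names: one namespace per service contains its Secrets and ConfigMaps, and each pod carries exactly one container.
+
+```yaml
+# Hypothetical names; illustrates the namespace-per-service containment
+# and one-container-per-pod conventions described above.
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: hint-service
+---
+apiVersion: v1
+kind: Secret
+metadata:
+  name: hint-db-credentials
+  namespace: hint-service   # only visible to workloads in this namespace
+stringData:
+  password: example-only
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: hint-service
+  namespace: hint-service
+spec:
+  containers:               # exactly one container: a small deployment unit
+  - name: hint-service
+    image: registry.example.com/hint-service:1.0
+    envFrom:
+    - secretRef:
+        name: hint-db-credentials
+```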
+
+
+
+
+
+
+
+"The success of Ancestry's first deployment of the hint system on Kubernetes helped create momentum for greater adoption of the technology."
+
+
+
+
+
+
+ With that process established, the time spent on deployment was cut down to under a minute for some services. "As programmers, we have what's called REPL: read, evaluate, print, and loop, but with Kubernetes, we have CDEL: compile, deploy, execute, and loop," says MacKay. "It's a very quick loop back and a great benefit to understand that when our services are deployed in production, they're the same as what we tested in the pre-production environments. The approach of cloud native for Ancestry provides us a better ability to scale and to accommodate the business needs as workloads occur."
+ The success of Ancestry's first deployment of the hint system on Kubernetes helped create momentum for greater adoption of the technology. "Engineers like to code, they like to do features, they don't like to sit around waiting for things to be deployed and worrying about scaling up and out and down," says MacKay. "After a while the engineers became our champions. At training sessions, the development teams were always the ones saying, 'Kubernetes saved our time tremendously; it's an enabler; it really is incredible.' Over time, we were able to convince our management that this was a transition that the industry is making and that we needed to be a part of it."
+ A year later, Ancestry has transitioned a good number of applications to Kubernetes. "We have many different services that make up the rich environment that [the website] has from both the DNA side and the family history side," says MacKay. "We have front-end stacks, back-end stacks and back-end processing type stacks that are in the cluster."
+ The company continues to weigh which services it will move forward to Kubernetes, which ones will be kept as is, and which will be replaced in the future and thus don't have to be moved over. MacKay estimates that the company is "approaching halfway on those features that are going forward. We don't have to do a lot of convincing anymore. It's more of an issue of timing with getting product management and engineering staff the knowledge and information that they need."
+
+
+
+
+
+ "... 'I believe in Kubernetes. I believe in containerization. I think
+ if we can get there and establish ourselves in that world, we will be further along and far better off being agile and all the things we talk about,
+ and it'll go forward.'"
+
+
+
+
+
+
+
+Looking ahead, MacKay sees Ancestry maximizing the benefits of Kubernetes in 2017. "We're very close to having everything that should be or could be in a Linux-friendly world in Kubernetes by the end of the year," he says, adding that he's looking forward to features such as federation and horizontal pod autoscaling that are currently in the works. "Kubernetes has been very wonderful for us and we continue to ride the wave."
+That wave, he points out, has everything to do with the vibrant Kubernetes community, which has grown by leaps and bounds since Ancestry joined it as an early adopter. "This is just a very rough way of judging it, but on Slack in June 2015, there were maybe 500 on there," MacKay says. "The last time I looked there were maybe 8,500 just on the Slack channel. There are so many major companies and different kinds of companies involved now. It's the variety of contributors, the number of contributors, the incredibly competent and friendly community."
+As much as he and his team at Ancestry have benefited from what he calls "the goodness and the technical abilities of many" in the community, they've also contributed information about best practices, logged bug issues and participated in the open source conversation. And they've been active in attending meetups to help educate and give back to the local tech community in Utah. Says MacKay: "We're trying to give back as far as our experience goes, rather than just code."
+
When he meets with companies considering adopting cloud native infrastructure, the best advice he has to give from Ancestry's Kubernetes journey is this: "Start small, but with hard problems," he says. And "you need a patron who understands the vision of containerization, to help you tackle the political as well as other technical roadblocks that can occur when change is needed."
+With the changes that MacKay's team has led over the past year and a half, cloud native will be part of Ancestry's technological genealogy for years to come. MacKay has been such a champion of the technology that he says people have jokingly accused him of having a Kubernetes tattoo.
+"I really don't," he says with a laugh. "But I'm passionate. I'm not exclusive to any technology; I use whatever I need that's out there that makes us great. If it's something else, I'll use it. But right now I believe in Kubernetes. I believe in containerization. I think if we can get there and establish ourselves in that world, we will be further along and far better off being agile and all the things we talk about, and it'll go forward."
+He pauses. "So, yeah, I guess you can say I'm an evangelist for Kubernetes," he says. "But I'm not getting a tattoo!"
+
+
+
+
diff --git a/content/ko/case-studies/blablacar/blablacar_featured.png b/content/ko/case-studies/blablacar/blablacar_featured.png
new file mode 100644
index 0000000000000..cfe37257b99e7
Binary files /dev/null and b/content/ko/case-studies/blablacar/blablacar_featured.png differ
diff --git a/content/ko/case-studies/blablacar/blablacar_logo.png b/content/ko/case-studies/blablacar/blablacar_logo.png
new file mode 100644
index 0000000000000..14606e036002e
Binary files /dev/null and b/content/ko/case-studies/blablacar/blablacar_logo.png differ
diff --git a/content/ko/case-studies/blablacar/index.html b/content/ko/case-studies/blablacar/index.html
new file mode 100644
index 0000000000000..2d55ffb8d07fe
--- /dev/null
+++ b/content/ko/case-studies/blablacar/index.html
@@ -0,0 +1,98 @@
+---
+title: BlaBlaCar Case Study
+
+case_study_styles: true
+cid: caseStudies
+css: /css/style_blablacar.css
+---
+
+
+
CASE STUDY:
Turning to Containerization to Support Millions of Rideshares
+
+
+
+
+ Company BlaBlaCar Location Paris, France Industry Ridesharing Company
+
+
+
+
+
+
+
Challenge
+ The world’s largest long-distance carpooling community, BlaBlaCar, connects 40 million members across 22 countries. The company has been experiencing exponential growth since 2012 and needed its infrastructure to keep up. "When you’re thinking about doubling the number of servers, you start thinking, ‘What should I do to be more efficient?’" says Simon Lallemand, Infrastructure Engineer at BlaBlaCar. "The answer is not to hire more and more people just to deal with the servers and installation." The team knew they had to scale the platform, but wanted to stay on their own bare metal servers.
+
+
+
Solution
+ Opting not to shift to cloud virtualization or use a private cloud on their own servers, the BlaBlaCar team became early adopters of containerization, using the CoreOS runtime rkt, initially deployed using the fleet cluster manager. Last year, the company switched to Kubernetes orchestration, and now also uses Prometheus for monitoring.
+
+
+
+
Impact
+ "Before using containers, it would take sometimes a day, sometimes two, just to create a new service," says Lallemand. "With all the tooling that we made around the containers, copying a new service now is a matter of minutes. It’s really a huge gain. We are better at capacity planning in our data center because we have fewer constraints due to this abstraction between the services and the hardware we run on. For the developers, it also means they can focus only on the features that they’re developing, and not on the infrastructure."
+
+
+
+
+
+ "When you’re switching to this cloud-native model and running everything in containers, you have to make sure that at any moment you can reboot without any downtime and without losing traffic. [With Kubernetes] our infrastructure is much more resilient and we have better availability than before." - Simon Lallemand, Infrastructure Engineer at BlaBlaCar
+
+
+
+
+
+
For the 40 million users of BlaBlaCar, it’s easy to find strangers headed in the same direction to share rides and costs. You can even choose how much "bla bla" chatter you want from a long-distance ride mate.
+ Behind the scenes, though, the infrastructure was falling woefully behind the rider community’s exponential growth. Founded in 2006, the company hit its current stride around 2012. "Our infrastructure was very traditional," says Infrastructure Engineer Simon Lallemand, who began working at the company in 2014. "In the beginning, it was a bit chaotic because we had to [grow] fast. But then comes the time when you have to design things to make it manageable."
+ By 2015, the company had about 50 bare metal servers. The team was using a MySQL database and PHP, but, Lallemand says, "it was a very static way." They also used the Chef configuration management system, but had little automation in the process. "When you’re thinking about doubling the number of servers, you start thinking, ‘What should I do to be more efficient?’" says Lallemand. "The answer is not to hire more and more people just to deal with the servers and installation."
+ Instead, BlaBlaCar began its cloud-native journey but wasn’t sure which route to take. "We could either decide to go into cloud virtualization or even use a private cloud on our own servers," says Lallemand. "But going into the cloud meant we had to make a lot of changes in our application work, and we were just not ready to make the switch from on premise to the cloud." They wanted to keep the great performance they got on bare metal, so they didn’t want to go to virtualization on premise.
+ The solution: containerization. This was early 2015 and containers were still relatively new. "It was a bold move at the time," says Lallemand. "We decided that the next servers that we would buy in the new data center would all be the same model, so we could outsource the maintenance of the servers. And we decided to go with containers and with CoreOS Container Linux as an abstraction for this hardware. It seemed future-proof to go with containers because we could see what companies were already doing with containers."
+
+
+
+
+
+ "With all the tooling that we made around the containers, copying a new service is a matter of minutes. It’s a huge gain. For the developers, it means they can focus only on the features that they’re developing and not on the infrastructure or the hour they would test their code, or the hour that it would get deployed."
+
+
+
+
+
+ Next, they needed to choose a runtime for the containers, but "there were very few deployments in production at that time," says Lallemand. They experimented with Docker but decided to go with rkt. Lallemand explains that for BlaBlaCar, it was "much simpler to integrate things that are on rkt." At the time, the project was still pre-v1.0, so "we could speak with the developers of rkt and give them feedback. It was an advantage." Plus, he notes, rkt was very stable, even at this early stage.
+ Once those decisions were made that summer, the company came up with a plan for implementation. First, they formed a task force to create a workflow that would be tested by three of the 10 members on Lallemand’s team. But they took care to run regular workshops with all 10 members to make sure everyone was on board. "When you’re focused on your product sometimes you forget if it’s really user friendly, whether other people can manage to create containers too," Lallemand says. "So we did a lot of iterations to find a good workflow."
+ After establishing the workflow, Lallemand says with a smile that "we had this strange idea that we should try the most difficult thing first. Because if it works, it will work for everything." So the first project the team decided to containerize was the database. "Nobody did that at the time, and there were really no existing tools for what we wanted to do, including building container images," he says. So the team created their own tools, such as dgr, which builds container images so that the whole team has a common framework to build on the same images with the same standards. They also revamped the service-discovery tools Nerve and Synapse; their versions, Go-Nerve and Go-Synapse, were written in Go and built to be more efficient and include new features. All of these tools were open-sourced.
+ At the same time, the company was working to migrate its entire platform to containers with a deadline set for Christmas 2015. With all the work being done in parallel, BlaBlaCar was able to get about 80 percent of its production into containers by its deadline with live traffic running on containers during December. (It’s now at 100 percent.) "It’s a really busy time for traffic," says Lallemand. "We knew that by using those new servers with containers, it would help us handle the traffic."
+ In the middle of that peak season for carpooling, everything worked well. "The biggest impact that we had was for the deployment of new services," says Lallemand. "Before using containers, we had to first deploy a new server and create configurations with Chef. It would take sometimes a day, sometimes two, just to create a new service. And with all the tooling that we made around the containers, copying a new service is a matter of minutes. So it’s really a huge gain. For the developers, it means they can focus only on the features that they’re developing and not on the infrastructure or the hour they would test their code, or the hour that it would get deployed."
+
+
+
+
+
+ "We realized that there was a really strong community around it [Kubernetes], which meant we would not have to maintain a lot of tools of our own," says Lallemand. "It was better if we could contribute to some bigger project like Kubernetes."
+
+
+
+
+
+ In order to meet their self-imposed deadline, one of the decisions they made was to not do any "orchestration magic" for containers in the first production alignment. Instead, they used the basic fleet tool from CoreOS to deploy their containers. (They did build a tool called GGN, which they’ve open-sourced, to make it more manageable for their system engineers to use.)
+ Still, the team knew that they’d want more orchestration. "Our tool was doing a pretty good job, but at some point you want to give more autonomy to the developer team," Lallemand says. "We also realized that we don’t want to be the single point of contact for developers when they want to launch new services." By the summer of 2016, they found their answer in Kubernetes, which had just begun supporting rkt implementation.
+ After discussing their needs with their contacts at CoreOS and Google, they were convinced that Kubernetes would work for BlaBlaCar. "We realized that there was a really strong community around it, which meant we would not have to maintain a lot of tools of our own," says Lallemand. "It was better if we could contribute to some bigger project like Kubernetes." They also started using Prometheus, as they were looking for "service-oriented monitoring that could be updated nightly." Production on Kubernetes began in December 2016. "We like to do crazy stuff around Christmas," he adds with a laugh.
+ BlaBlaCar now has about 3,000 pods, with 1,200 of them running on Kubernetes. Lallemand leads a "foundations team" of 25 members who take care of the networks, databases and systems for about 100 developers. There have been some challenges getting to this point. "The rkt implementation is still not 100 percent finished," Lallemand points out. "It’s really good, but there are some features still missing. We have questions about how we do things with stateful services, like databases. We know how we will be migrating some of the services; some of the others are a bit more complicated to deal with. But the Kubernetes community is making a lot of progress on that part."
+ The team is particularly happy that they’re now able to plan capacity better in the company’s data center. "We have fewer constraints since we have this abstraction between the services and the hardware we run on," says Lallemand. "If we lose a server because there’s a hardware problem on it, we just move the containers onto another server. It’s much more efficient. We do that by just changing a line in the configuration file. And with Kubernetes, it should be automatic, so we would have nothing to do."
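+
+ In Kubernetes terms, that "line in the configuration file" becomes a declared replica count, and the scheduler replaces pods from a failed server on its own. A minimal sketch, with a hypothetical service name:
+
+```yaml
+# Hypothetical sketch: Kubernetes keeps three replicas running and
+# reschedules them onto healthy nodes if a server is lost.
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: search-api
+spec:
+  replicas: 3        # the one line to change when scaling
+  selector:
+    matchLabels:
+      app: search-api
+  template:
+    metadata:
+      labels:
+        app: search-api
+    spec:
+      containers:
+      - name: search-api
+        image: registry.example.com/search-api:1.0
+```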
+
+
+
+
+
+ "If we lose a server because there’s a hardware problem on it, we just move the containers onto another server. It’s much more efficient. We do that by just changing a line in the configuration file. With Kubernetes, it should be automatic, so we would have nothing to do."
+
+
+
+
+
+ And these advances ultimately trickle down to BlaBlaCar’s users. "We have improved availability overall on our website," says Lallemand. "When you’re switching to this cloud-native model and running everything in containers, you have to make sure that at any moment you can reboot a server or a data container without any downtime, without losing traffic. So now our infrastructure is much more resilient and we have better availability than before."
+ Within BlaBlaCar’s technology department, the cloud-native journey has created some profound changes. Lallemand thinks that the regular meetings during the conception stage and the training sessions during implementation helped. "After that everybody took part in the migration process," he says. "Then we split the organization into different ‘tribes’—teams that gather developers, product managers, data analysts, all the different jobs, to work on a specific part of the product. Before, they were organized by function. The idea is to give all these tribes access to the infrastructure directly in a self-service way without having to ask. These people are really autonomous. They have responsibility of that part of the product, and they can make decisions faster."
+ This DevOps transformation turned out to be a positive one for the company’s staffers. "The team was very excited about the DevOps transformation because it was new, and we were working to make things more reliable, more future-proof," says Lallemand. "We like doing things that very few people are doing, other than the internet giants."
+ With these changes already making an impact, BlaBlaCar is looking to split up more and more of its application into services. "I don’t say microservices because they’re not so micro," Lallemand says. "If we can split the responsibilities between the development teams, it would be easier to manage and more reliable, because we can easily add and remove services if one fails. You can handle it easily, instead of adding a big monolith that we still have."
+ When Lallemand speaks to other European companies curious about what BlaBlaCar has done with its infrastructure, he tells them to come along for the ride. "I tell them that it’s such a pleasure to deal with the infrastructure that we have today compared to what we had before," he says. "They just need to keep in mind their real motive, whether it’s flexibility in development or reliability or so on, and then go step by step towards reaching those objectives. That’s what we’ve done. It’s important not to do technology for the sake of technology. Do it for a purpose. Our focus was on helping the developers."
+
+
diff --git a/content/ko/case-studies/blackrock/blackrock_featured.png b/content/ko/case-studies/blackrock/blackrock_featured.png
new file mode 100644
index 0000000000000..3898b88c9fa43
Binary files /dev/null and b/content/ko/case-studies/blackrock/blackrock_featured.png differ
diff --git a/content/ko/case-studies/blackrock/blackrock_logo.png b/content/ko/case-studies/blackrock/blackrock_logo.png
new file mode 100644
index 0000000000000..51e914a63b259
Binary files /dev/null and b/content/ko/case-studies/blackrock/blackrock_logo.png differ
diff --git a/content/ko/case-studies/blackrock/index.html b/content/ko/case-studies/blackrock/index.html
new file mode 100644
index 0000000000000..96b66334d9ec3
--- /dev/null
+++ b/content/ko/case-studies/blackrock/index.html
@@ -0,0 +1,112 @@
+---
+title: BlackRock Case Study
+
+case_study_styles: true
+cid: caseStudies
+css: /css/style_blackrock.css
+---
+
+
+
CASE STUDY:
+
Rolling Out Kubernetes in Production in 100 Days
+
+
+
+
+
+ Company BlackRock Location New York, NY Industry Financial Services
+
+
+
+
+
+
+
+
+
Challenge
+ The world’s largest asset manager, BlackRock operates a very controlled static deployment scheme, which has allowed for scalability over the years. But in their data science division, there was a need for more dynamic access to resources. "We want to be able to give every investor access to data science, meaning Python notebooks, or even something much more advanced, like a MapReduce engine based on Spark," says Michael Francis, a Managing Director in BlackRock’s Product Group, which runs the company’s investment management platform. "Managing complex Python installations on users’ desktops is really hard because everyone ends up with slightly different environments. We have existing environments that do these things, but we needed to make it real, expansive and scalable. Being able to spin that up on demand, tear it down, make that much more dynamic, became a critical thought process for us. It’s not so much that we had to solve our main core production problem, it’s how do we extend that? How do we evolve?"
+
+
+
+
Solution
+ Drawing from what they learned during a pilot done last year using Docker environments, Francis put together a cross-sectional team of 20 to build an investor research web app using Kubernetes with the goal of getting it into production within one quarter.
+
+
Impact
+ "Our goal was: How do you give people tools rapidly without having to install them on their desktop?" says Francis. And the team hit the goal within 100 days. Francis is pleased with the results and says, "We’re going to use this infrastructure for lots of other application workloads as time goes on. It’s not just data science; it’s this style of application that needs the dynamism. But I think we’re 6-12 months away from making a [large scale] decision. We need to gain experience of running the system in production, we need to understand failure modes and how best to manage operational issues. What’s interesting is that just having this technology there is changing the way our developers are starting to think about their future development."
+
+
+
+
+
+
+
+
+ "My message to other enterprises like us is you can actually integrate Kubernetes into an existing, well-orchestrated machinery. You don’t have to throw out everything you do. And using Kubernetes made a complex problem significantly easier." - Michael Francis, Managing Director, BlackRock
+
+
+
+
+
+
+ One of the management objectives for BlackRock’s Product Group employees in 2017 was to "build cool stuff." Led by Managing Director Michael Francis, a cross-sectional group of 20 did just that: They rolled out a full production Kubernetes environment and released a new investor research web app on it. In 100 days.
+ For a company that’s the world’s largest asset manager, "just equipment procurement can take 100 days sometimes, let alone from inception to delivery," says Karl Wieman, a Senior System Administrator. "It was an aggressive schedule. But it moved the dial."
+ In fact, the project achieved two goals: It solved a business problem (creating the needed web app) as well as provided real-world, in-production experience with Kubernetes, a cloud-native technology that the company was eager to explore. "It’s not so much that we had to solve our main core production problem, it’s how do we extend that? How do we evolve?" says Francis. The ultimate success of this project, beyond delivering the app, lies in the fact that "we’ve managed to integrate a radically new thought process into a controlled infrastructure that we didn’t want to change."
+ After all, in its three decades of existence, BlackRock has "a very well-established environment for managing our compute resources," says Francis. "We manage large cluster processes on machines, so we do a lot of orchestration and management for our main production processes in a way that’s very cloudish in concept. We’re able to manage them in a very controlled, static deployment scheme, and that has given us a huge amount of scalability."
+ Though that works well for the core production, the company has found that some data science workloads require more dynamic access to resources. "It’s a very bursty process," says Francis, who is head of data for the company’s Aladdin investment management platform division.
+ Aladdin, which connects the people, information and technology needed for money management in real time, is used internally and is also sold as a platform to other asset managers and insurance companies. "We want to be able to give every investor access to data science, meaning Python notebooks, or even something much more advanced, like a MapReduce engine based on Spark," says Francis. But "managing complex Python installations on users’ desktops is really hard because everyone ends up with slightly different environments. Docker allows us to flatten that environment."
+
+
+
+
+
+ "We manage large cluster processes on machines, so we do a lot of orchestration and management for our main production processes in a way that’s very cloudish in concept. We’re able to manage them in a very controlled, static deployment scheme, and that has given us a huge amount of scalability."
+
+
+
+
+
+ Still, challenges remain. "If you have a shared cluster, you get this storming herd problem where everyone wants to do the same thing at the same time," says Francis. "You could put limits on it, but you’d have to build an infrastructure to define limits for our processes, and the Python notebooks weren’t really designed for that. We have existing environments that do these things, but we needed to make it real, expansive, and scalable. Being able to spin that up on demand, tear it down, and make that much more dynamic, became a critical thought process for us."
+ Made up of managers from technology, infrastructure, production operations, development and information security, Francis’s team was able to look at the problem holistically and come up with a solution that made sense for BlackRock. "Our initial straw man was that we were going to build everything using Ansible and run it all using some completely different distributed environment," says Francis. "That would have been absolutely the wrong thing to do. Had we gone off on our own as the dev team and developed this solution, it would have been a very different product. And it would have been very expensive. We would not have gone down the route of running under our existing orchestration system. Because we don’t understand it. These guys [in operations and infrastructure] understand it. Having the multidisciplinary team allowed us to get to the right solutions and that actually meant we didn’t build anywhere near the amount we thought we were going to end up building."
+ In search of a solution in which they could manage usage on a user-by-user level, Francis’s team gravitated to Red Hat’s OpenShift Kubernetes offering. The company had already experimented with other cloud-native environments, but the team liked that Kubernetes was open source, and "we felt the winds were blowing in the direction of Kubernetes long term," says Francis. "Typically we make technology choices that we believe are going to be here in 5-10 years’ time, in some form. And right now, in this space, Kubernetes feels like the one that’s going to be there." Adds Uri Morris, Vice President of Production Operations: "When you see that the non-Google committers to Kubernetes overtook the Google committers, that’s an indicator of the momentum."
+ Once that decision was made, the major challenge was figuring out how to make Kubernetes work within BlackRock’s existing framework. "It’s about understanding how we can operate, manage and support a platform like this, in addition to tacking it onto our existing technology platform," says Project Manager Michael Maskallis. "All the controls we have in place, the change management process, the software development lifecycle, onboarding processes we go through—how can we do all these things?"
+ The first (anticipated) speed bump was working around issues behind BlackRock’s corporate firewalls. "One of our challenges is there are no firewalls in most open source software," says Francis. "So almost all install scripts fail in some bizarre way, and pulling down packages doesn’t necessarily work." The team ran into these types of problems using Minikube and did a few small pushes back to the open source project.
+
+
+
+
+
+
+
+ "Typically we make technology choices that we believe are going to be here in 5-10 years’ time, in some form. And right now, in this space, Kubernetes feels like the one that’s going to be there."
+
+
+
+
+
+ There were also questions about service discovery. "You can think of Aladdin as a cloud of services with APIs between them that allows us to build applications rapidly," says Francis. "It’s all on a proprietary message bus, which gives us all sorts of advantages but at the same time, how does that play in a third party [platform]?"
+ Another issue they had to navigate was that in BlackRock’s existing system, the messaging protocol has different instances in the different development, test and production environments. While Kubernetes enables a more DevOps-style model, it didn’t make sense for BlackRock. "I think what we are very proud of is that the ability for us to push into production is still incredibly rapid in this [new] infrastructure, but we have the control points in place, and we didn’t have to disrupt everything," says Francis. "A lot of the cost of this development was thinking how best to leverage our internal tools. So it was less costly than we actually thought it was going to be."
+ The project leveraged tools associated with the messaging bus, for example. "The way that the Kubernetes cluster will talk to our internal messaging platform is through a gateway program, and this gateway program already has built-in checks and throttles," says Morris. "We can use them to control and potentially throttle the requests coming in from Kubernetes’s very elastic infrastructure to the production infrastructure. We’ll continue to go in that direction. It enables us to scale as we need to from the operational perspective."
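+ BlackRock’s gateway is proprietary, but the throttling idea Morris describes is easy to sketch. Below is a minimal, hypothetical token-bucket limiter in Python; the rate and burst values are invented for illustration.
+
+ # Hypothetical sketch of a gateway throttle: cap how fast requests from the
+ # elastic Kubernetes side may reach the production message bus.
+ import time
+
+ class TokenBucket:
+     def __init__(self, rate_per_sec, burst):
+         self.rate = rate_per_sec      # sustained requests per second
+         self.capacity = burst         # short-term burst allowance
+         self.tokens = burst
+         self.last = time.monotonic()
+
+     def allow(self):
+         now = time.monotonic()
+         # Refill tokens for the elapsed time, up to the burst capacity.
+         self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
+         self.last = now
+         if self.tokens >= 1:
+             self.tokens -= 1
+             return True
+         return False                  # caller should queue or reject the request
+
+ bus_throttle = TokenBucket(rate_per_sec=100, burst=20)   # assumed limits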
+ The solution also had to be complementary with BlackRock’s centralized operational support team structure. "The core infrastructure components of Kubernetes are hooked into our existing orchestration framework, which means that anyone in our support team has both control and visibility to the cluster using the existing operational tools," Morris explains. "That means that I don’t need to hire more people."
+ With those points established, the team created a procedure for the project: "We rolled this out first to a development environment, then moved on to a testing environment and then eventually to two production environments, in that sequential order," says Maskallis. "That drove a lot of our learning curve. We have all these moving parts, the software components on the infrastructure side, the software components with Kubernetes directly, the interconnectivity with the rest of the environment that we operate here at BlackRock, and how we connect all these pieces. If we came across issues, we fixed them, and then moved on to the different environments to replicate that until we eventually ended up in our production environment where this particular cluster is supposed to live."
+ The team had weekly one-hour working sessions with all the members (who are located around the world) participating, and smaller breakout or deep-dive meetings focusing on specific technical details. Possible solutions would be reported back to the group and debated the following week. "I think what made it a successful experiment was people had to work to learn, and they shared their experiences with others," says Vice President and Software Developer Fouad Semaan. Then, Francis says, "We gave our engineers the space to do what they’re good at. This hasn’t been top-down."
+
+
+
+
+
+
+
+ "The core infrastructure components of Kubernetes are hooked into our existing orchestration framework, which means that anyone in our support team has both control and visibility to the cluster using the existing operational tools. That means that I don’t need to hire more people."
+
+
+
+
+
+
+ They were led by one key axiom: to stay focused and avoid scope creep. This meant that they wouldn’t use features that weren’t in the core of Kubernetes and Docker. But if there was a real need, they’d build the features themselves. Luckily, Francis says, "Because of the rapidity of the development, a lot of things we thought we would have to build ourselves have been rolled into the core product. [The package manager Helm is one example]. People have similar problems."
+ By the end of the 100 days, the app was up and running for internal BlackRock users. The initial capacity of 30 users was hit within hours, and quickly increased to 150. "People were immediately all over it," says Francis. In the next phase of this project, they are planning to scale up the cluster to have more capacity.
+ Even more importantly, they now have in-production experience with Kubernetes that they can continue to build on—and a complete framework for rolling out new applications. "We’re going to use this infrastructure for lots of other application workloads as time goes on. It’s not just data science; it’s this style of application that needs the dynamism," says Francis. "Is it the right place to move our core production processes onto? It might be. We’re not at a point where we can say yes or no, but we felt that having real production experience with something like Kubernetes at some form and scale would allow us to understand that. I think we’re 6-12 months away from making a [large scale] decision. We need to gain experience of running the system in production, we need to understand failure modes and how best to manage operational issues."
+ For other big companies considering a project like this, Francis says commitment and dedication are key: "We got the signoff from [senior management] from day one, with the commitment that we were able to get the right people. If I had to isolate what makes something complex like this succeed, I would say senior hands-on people who can actually drive it make a huge difference." With that in place, he adds, "My message to other enterprises like us is you can actually integrate Kubernetes into an existing, well-orchestrated machinery. You don’t have to throw out everything you do. And using Kubernetes made a complex problem significantly easier."
+
+
+
diff --git a/content/ko/case-studies/box/box_featured.png b/content/ko/case-studies/box/box_featured.png
new file mode 100644
index 0000000000000..fc6dec602af17
Binary files /dev/null and b/content/ko/case-studies/box/box_featured.png differ
diff --git a/content/ko/case-studies/box/box_logo.png b/content/ko/case-studies/box/box_logo.png
new file mode 100644
index 0000000000000..b401dec6248c6
Binary files /dev/null and b/content/ko/case-studies/box/box_logo.png differ
diff --git a/content/ko/case-studies/box/box_small.png b/content/ko/case-studies/box/box_small.png
new file mode 100644
index 0000000000000..105b66a5832bb
Binary files /dev/null and b/content/ko/case-studies/box/box_small.png differ
diff --git a/content/ko/case-studies/box/index.html b/content/ko/case-studies/box/index.html
new file mode 100644
index 0000000000000..bead8eb01a5bf
--- /dev/null
+++ b/content/ko/case-studies/box/index.html
@@ -0,0 +1,114 @@
+---
+title: Box Case Study
+case_study_styles: true
+cid: caseStudies
+css: /css/style_box.css
+video: https://www.youtube.com/embed/of45hYbkIZs?autoplay=1
+quote: >
+ Kubernetes has the opportunity to be the new cloud platform. The amount of innovation that's going to come from being able to standardize on Kubernetes as a platform is incredibly exciting - more exciting than anything I've seen in the last 10 years of working on the cloud.
+
+---
+
+
+
CASE STUDY:
+
An Early Adopter Envisions
+ a New Cloud Platform
+
+
+
+
+
+ Company Box Location Redwood City, California Industry Technology
+
+
+
+
+
+
+
+
+
+
Challenge
+ Founded in 2005, the enterprise content management company allows its more than 50 million users to manage content in the cloud. Box was built primarily with bare metal inside the company’s own data centers, with a monolithic PHP code base. As the company was expanding globally, it needed to focus on "how we run our workload across many different cloud infrastructures from bare metal to public cloud," says Sam Ghods, Cofounder and Services Architect of Box. "It’s been a huge challenge because different clouds, especially bare metal, have very different interfaces."
+
+
+
+
+
Solution
+ Over the past couple of years, Box has been decomposing its infrastructure into microservices, and became an early adopter of, as well as contributor to, Kubernetes container orchestration. Kubernetes, Ghods says, has allowed Box’s developers to "target a universal set of concepts that are portable across all clouds."
+
+
Impact
+ "Before Kubernetes," Ghods says, "our infrastructure was so antiquated it was taking us more than six months to deploy a new microservice. Today, a new microservice takes less than five days to deploy. And we’re working on getting it to an hour."
+
+
+
+
+
+
+
+ "We looked at a lot of different options, but Kubernetes really stood out....the fact that on day one it was designed to run on bare metal just as well as Google Cloud meant that we could actually migrate to it inside of our data centers, and then use those same tools and concepts to run across public cloud providers as well."
- SAM GHODS, CO-FOUNDER AND SERVICES ARCHITECT OF BOX
+
+
+
+
+
+
+
In the summer of 2014, Box was feeling the pain of a decade’s worth of hardware and software infrastructure that wasn’t keeping up with the company’s needs.
+
+ A platform that allows its more than 50 million users (including governments and big businesses like General Electric) to manage and share content in the cloud, Box was originally a PHP monolith of millions of lines of code built exclusively with bare metal inside of its own data centers. It had already begun to slowly chip away at the monolith, decomposing it into microservices. And "as we’ve been expanding into regions around the globe, and as the public cloud wars have been heating up, we’ve been focusing a lot more on figuring out how we run our workload across many different environments and many different cloud infrastructure providers," says Box Cofounder and Services Architect Sam Ghods. "It’s been a huge challenge thus far because all these different providers, especially bare metal, have very different interfaces and ways in which you work with them."
+ Box’s cloud native journey accelerated that June, when Ghods attended DockerCon. The company had come to the realization that it could no longer run its applications only off bare metal, and was researching containerizing with Docker, virtualizing with OpenStack, and supporting public cloud.
+ At that conference, Google announced the release of its Kubernetes container management system, and Ghods was won over. "We looked at a lot of different options, but Kubernetes really stood out, especially because of the incredibly strong team of Borg veterans and the vision of having a completely infrastructure-agnostic way of being able to run cloud software," he says, referencing Google’s internal container orchestrator Borg. "The fact that on day one it was designed to run on bare metal just as well as Google Cloud meant that we could actually migrate to it inside of our data centers, and then use those same tools and concepts to run across public cloud providers as well."
+ Another plus: Ghods liked that Kubernetes has a universal set of API objects like pod, service, replica set and deployment object, which created a consistent surface to build tooling against. "Even PaaS layers like OpenShift or Deis that build on top of Kubernetes still treat those objects as first-class principles," he says. "We were excited about having these abstractions shared across the entire ecosystem, which would result in a lot more momentum than we saw in other potential solutions."
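+ That uniformity is easy to see in practice. As a small, generic illustration (not Box’s tooling), the same few lines of Python work against any conformant cluster, bare metal or public cloud, because the Deployment object is the same everywhere:
+
+ # Generic illustration: one API surface to script against, wherever the
+ # cluster runs. Uses the official Kubernetes Python client.
+ from kubernetes import client, config
+
+ config.load_kube_config()   # reads the current kubeconfig context
+ for d in client.AppsV1Api().list_deployment_for_all_namespaces().items:
+     print(d.metadata.namespace, d.metadata.name, d.status.ready_replicas)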
+ Box deployed Kubernetes in a cluster in a production data center just six months later. Kubernetes was then still pre-beta, on version 0.11. They started small: The very first thing Ghods’s team ran on Kubernetes was a Box API checker that confirms Box is up. "That was just to write and deploy some software to get the whole pipeline functioning," he says. Next came some daemons that process jobs, which was "nice and safe because if they experienced any interruptions, we wouldn’t fail synchronous incoming requests from customers."
+
+
+
+
+
+
+ "As we’ve been expanding into regions around the globe, and as the public cloud wars have been heating up, we’ve been focusing a lot more on figuring out how we [can have Kubernetes help] run our workload across many different environments and many different cloud infrastructure providers."
+
+
+
+
+
+ The first live service, which the team could route to and ask for information, was launched a few months later. At that point, Ghods says, "We were comfortable with the stability of the Kubernetes cluster. We started to port some services over, then we would increase the cluster size and port a few more, and that’s ended up at about 100 servers in each data center that are dedicated purely to Kubernetes. And that’s going to be expanding a lot over the next 12 months, probably to many hundreds if not thousands."
+ While observing teams who began to use Kubernetes for their microservices, "we immediately saw an uptick in the number of microservices being released," Ghods notes. "There was clearly a pent-up demand for a better way of building software through microservices, and the increase in agility helped our developers be more productive and make better architectural choices."
+
"There was clearly a pent-up demand for a better way of building software through microservices, and the increase in agility helped our developers be more productive and make better architectural choices."
+ Ghods reflects that as early adopters, Box had a different journey from what companies experience now. "We were definitely lock step with waiting for certain things to stabilize or features to get released," he says. "In the early days we were doing a lot of contributions [to components such as kubectl apply] and waiting for Kubernetes to release each of them, and then we’d upgrade, contribute more, and go back and forth several times. The entire project took about 18 months from our first real deployment on Kubernetes to having general availability. If we did that exact same thing today, it would probably be no more than six."
+ In any case, Box didn’t have to make too many modifications to Kubernetes for it to work for the company. "The vast majority of the work our team has done to implement Kubernetes at Box has been making it work inside of our existing (and often legacy) infrastructure," says Ghods, "such as upgrading our base operating system from RHEL6 to RHEL7 or integrating it into Nagios, our monitoring infrastructure. But overall Kubernetes has been remarkably flexible with fitting into many of our constraints, and we’ve been running it very successfully on our bare metal infrastructure."
+ Perhaps the bigger challenge for Box was a cultural one. "Kubernetes, and cloud native in general, represents a pretty big paradigm shift, and it’s not very incremental," Ghods says. "We’re essentially making this pitch that Kubernetes is going to solve everything because it does things the right way and everything is just suddenly better. But it’s important to keep in mind that it’s not nearly as proven as many other solutions out there. You can’t say how long this or that company took to do it because there just aren’t that many yet. Our team had to really fight for resources because our project was a bit of a moonshot."
+
+
+
+
+
+ "The vast majority of the work our team has done to implement Kubernetes at Box has been making it work inside of our existing [and often legacy] infrastructure....overall Kubernetes has been remarkably flexible with fitting into many of our constraints, and we’ve been running it very successfully on our bare metal infrastructure."
+
+
+
+
+
+ Having learned from experience, Ghods offers these two pieces of advice for companies going through similar challenges:
+
1. Deliver early and often.
Service discovery was a huge problem for Box, and the team had to decide whether to build an interim solution or wait for Kubernetes to natively satisfy Box’s unique requirements. After much debate, "we just started focusing on delivering something that works, and then dealing with potentially migrating to a more native solution later," Ghods says. "The above-all-else target for the team should always be to serve real production use cases on the infrastructure, no matter how trivial. This helps keep the momentum going both for the team itself and for the organizational perception of the project."
+
2. Keep an open mind about what your company has to abstract away from developers and what it doesn’t.
Early on, the team built an abstraction on top of Dockerfiles to help ensure that images had the right security updates.
+ This turned out to be superfluous work, since container images are considered immutable and you can easily scan them post-build to ensure they do not contain vulnerabilities. Because managing infrastructure through containerization is such a discontinuous leap, it’s better to start by interacting directly with the native tools and learning their unique advantages and caveats. An abstraction should be built only after a practical need for it arises.
+ In the end, the impact has been powerful. "Before Kubernetes," Ghods says, "our infrastructure was so antiquated it was taking us more than six months to deploy a new microservice. Now a new microservice takes less than five days to deploy. And we’re working on getting it to an hour. Granted, much of that six months was due to how broken our systems were, but bare metal is intrinsically a difficult platform to support unless you have a system like Kubernetes to help manage it."
+ By Ghods’s estimate, Box is still several years away from his goal of being a 90-plus percent Kubernetes shop. "We’re very far along on having a mission-critical, stable Kubernetes deployment that provides a lot of value," he says. "Right now about five percent of all of our compute runs on Kubernetes, and I think in the next six months we’ll likely be between 20 to 50 percent. We’re working hard on enabling all stateless service use cases, and will shift our focus to stateful services after that."
+
+
+
+
+
+ "Ghods predicts that Kubernetes has the opportunity to be the new cloud platform. '...because it’s a never-before-seen level of automation and intelligence surrounding infrastructure that is portable and agnostic to every way you can run your infrastructure.'"
+
+
+
+
+
+ In fact, that’s what he envisions across the industry: Ghods predicts that Kubernetes has the opportunity to be the new cloud platform. Kubernetes provides an API consistent across different cloud platforms including bare metal, and "I don’t think people have seen the full potential of what’s possible when you can program against one single interface," he says. "The same way AWS changed infrastructure so that you don’t have to think about servers or cabinets or networking equipment anymore, Kubernetes enables you to focus exclusively on the containers that you’re running, which is pretty exciting. That’s the vision."
+ Ghods points to projects that are already in development or recently released for Kubernetes as a cloud platform: cluster federation, the Dashboard UI, and CoreOS’s etcd operator. "I honestly believe it’s the most exciting thing I’ve seen in cloud infrastructure," he says, "because it’s a never-before-seen level of automation and intelligence surrounding infrastructure that is portable and agnostic to every way you can run your infrastructure."
+ Box, with its early decision to use bare metal, embarked on its Kubernetes journey out of necessity. But Ghods says that even if companies don’t have to be agnostic about cloud providers today, Kubernetes may soon become the industry standard, as more and more tooling and extensions are built around the API.
+ "The same way it doesn’t make sense to deviate from Linux because it’s such a standard," Ghods says, "I think Kubernetes is going down the same path. It is still early days—the documentation still needs work and the user experience for writing and publishing specs to the Kubernetes clusters is still rough. When you’re on the cutting edge you can expect to bleed a little. But the bottom line is, this is where the industry is going. Three to five years from now it’s really going to be shocking if you run your infrastructure any other way."
+
+
diff --git a/content/ko/case-studies/box/video.png b/content/ko/case-studies/box/video.png
new file mode 100644
index 0000000000000..4c61e7440fc48
Binary files /dev/null and b/content/ko/case-studies/box/video.png differ
diff --git a/content/ko/case-studies/buffer/buffer_featured.png b/content/ko/case-studies/buffer/buffer_featured.png
new file mode 100644
index 0000000000000..cd6aaba4ca6a0
Binary files /dev/null and b/content/ko/case-studies/buffer/buffer_featured.png differ
diff --git a/content/ko/case-studies/buffer/buffer_logo.png b/content/ko/case-studies/buffer/buffer_logo.png
new file mode 100644
index 0000000000000..1b4b3b7e525d0
Binary files /dev/null and b/content/ko/case-studies/buffer/buffer_logo.png differ
diff --git a/content/ko/case-studies/buffer/index.html b/content/ko/case-studies/buffer/index.html
new file mode 100644
index 0000000000000..333db6a74eb32
--- /dev/null
+++ b/content/ko/case-studies/buffer/index.html
@@ -0,0 +1,112 @@
+---
+title: Buffer Case Study
+
+case_study_styles: true
+cid: caseStudies
+css: /css/style_buffer.css
+---
+
+
+
CASE STUDY:
+
Making Deployments Easy for a Small, Distributed Team
+
+
+
+
+ Company Buffer Location Around the World Industry Social Media Technology
+
+
+
+
+
+
+
+
+
Challenge
+ With a small but fully distributed team of 80 working across almost a dozen time zones, Buffer—which offers social media management to agencies and marketers—was looking to solve its "classic monolithic code base problem," says Architect Dan Farrelly. "We wanted to have the kind of liquid infrastructure where a developer could create an app and deploy it and scale it horizontally as necessary."
+
+
+
+
Solution
+ Embracing containerization, Buffer moved its infrastructure from Amazon Web Services’ Elastic Beanstalk to Docker on AWS, orchestrated with Kubernetes.
+
+
+
Impact
+ The new system "leveled up our ability with deployment and rolling out new changes," says Farrelly. "Building something on your computer and knowing that it’s going to work has shortened things up a lot. Our feedback cycles are a lot faster now too."
+
+
+
+
+
+
+ "It’s amazing that we can use the Kubernetes solution off the shelf with our team. And it just keeps getting better. Before we even know that we need something, it’s there in the next release or it’s coming in the next few months."
- DAN FARRELLY, BUFFER ARCHITECT
+
+
+
+
+
+
Dan Farrelly uses a carpentry analogy to explain the problem his company, Buffer, began having as its team of developers grew over the past few years.
+
+ "If you’re building a table by yourself, it’s fine," the company’s architect says. "If you bring in a second person to work on the table, maybe that person can start sanding the legs while you’re sanding the top. But when you bring a third or fourth person in, someone should probably work on a different table." Needing to work on more and more different tables led Buffer on a path toward microservices and containerization made possible by Kubernetes.
+ Since around 2012, Buffer had already been using Elastic Beanstalk, the orchestration service for deploying infrastructure offered by Amazon Web Services. "We were deploying a single monolithic PHP application, and it was the same application across five or six environments," says Farrelly. "We were very much a product-driven company. It was all about shipping new features quickly and getting things out the door, and if something was not broken, we didn’t spend too much time on it. If things were getting a little bit slow, we’d maybe use a faster server or just scale up one instance, and it would be good enough. We’d move on."
+ But things came to a head in 2016. With the growing number of committers on staff, Farrelly and Buffer’s then-CTO, Sunil Sadasivan, decided it was time to re-architect and rethink their infrastructure. "It was a classic monolithic code base problem," says Farrelly.
Some of the company’s developers were already successfully using Docker in their development environment, but the only application running on Docker in production was a marketing website that didn’t see real user traffic. They wanted to go further with Docker, and the next step was looking at options for orchestration.
+
+
+
+
+
+ And all the things Kubernetes did well suited Buffer’s needs. "We wanted to have the kind of liquid infrastructure where a developer could create an app and deploy it and scale it horizontally as necessary," says Farrelly. "We quickly used some scripts to set up a couple of test clusters, we built some small proof-of-concept applications in containers, and we deployed things within an hour. We had very little experience in running containers in production. It was amazing how quickly we could get a handle on it [Kubernetes]."
+
+
+
+
+
+ First they considered Mesosphere, DC/OS and Amazon Elastic Container Service (which their data systems team was already using for some data pipeline jobs). While they were impressed by these offerings, they ultimately went with Kubernetes. "We run on AWS still, so spinning up, creating services and creating load balancers on demand for us without having to configure them manually was a great way for our team to get into this," says Farrelly. "We didn’t need to figure out how to configure this or that, especially coming from a former Elastic Beanstalk environment that gave us an automatically-configured load balancer. I really liked Kubernetes’ controls of the command line. It just took care of ports. It was a lot more flexible. Kubernetes was designed for doing what it does, so it does it very well."
+ And all the things Kubernetes did well suited Buffer’s needs. "We wanted to have the kind of liquid infrastructure where a developer could create an app and deploy it and scale it horizontally as necessary," says Farrelly. "We quickly used some scripts to set up a couple of test clusters, we built some small proof-of-concept applications in containers, and we deployed things within an hour. We had very little experience in running containers in production. It was amazing how quickly we could get a handle on it [Kubernetes]."
+ Above all, it provided a powerful solution for one of the company’s most distinguishing characteristics: their remote team that’s spread across a dozen different time zones. "The people with deep knowledge of our infrastructure live in time zones different from our peak traffic time zones, and most of our product engineers live in other places," says Farrelly. "So we really wanted something where anybody could get a grasp of the system early on and utilize it, and not have to worry that the deploy engineer is asleep. Otherwise people would sit around for 12 to 24 hours for something. It’s been really cool to see people moving much faster."
+
+ With a relatively small engineering team—just 25 people, only a handful of them working on infrastructure, and the majority front-end developers—Buffer needed "something robust for them to deploy whatever they wanted," says Farrelly. Before, "it was only a couple of people who knew how to set up everything in the old way. With this system, it was easy to review documentation and get something out extremely quickly. It lowers the bar for us to get everything in production. We don't have the big team to build all these tools or manage the infrastructure like other larger companies might."
+
+
+
+
+
+ "In our old way of working, the feedback loop was a lot longer, and it was delicate because if you deployed something, the risk was high to potentially break something else," Farrelly says. "With the kind of deploys that we built around Kubernetes, we were able to detect bugs and fix them, and get them deployed super fast. The second someone is fixing [a bug], it’s out the door."
+
+
+
+
+
+ To help with this, Buffer developers wrote a deploy bot that wraps the Kubernetes deploy process and can be used by every team. "Before, our data analysts would update, say, a Python analysis script and have to wait for the lead on that team to click the button and deploy it," Farrelly explains. "Now our data analysts can make a change, enter a Slack command, ‘/deploy,’ and it goes out instantly. They don’t need to wait on these slow turnaround times. They don’t even know where it’s running; it doesn’t matter."
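+ Buffer’s bot itself isn’t public, but the pattern is simple to sketch. Here is a minimal, hypothetical version in Python: a Slack slash-command endpoint that shells out to kubectl. The service names and registry layout are assumptions, not Buffer’s.
+
+ # Hypothetical deploy-bot sketch: a Slack "/deploy" handler that rolls a
+ # Kubernetes Deployment to a new image. Registry and naming are assumed.
+ import subprocess
+ from flask import Flask, request
+
+ app = Flask(__name__)
+
+ @app.route("/deploy", methods=["POST"])
+ def deploy():
+     # Slack posts the command text as form data, e.g. "/deploy analysis v42".
+     service, tag = request.form["text"].split()
+     image = f"registry.example.com/{service}:{tag}"   # assumed registry layout
+     # Updating the Deployment's image triggers a rolling update in Kubernetes.
+     subprocess.run(
+         ["kubectl", "set", "image", f"deployment/{service}", f"{service}={image}"],
+         check=True,
+     )
+     return f"Rolling out {image}"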
+
+ One of the first applications the team built from scratch using Kubernetes was a new image resizing service. As a social media management tool that allows marketing teams to collaborate on posts and send updates across multiple social media profiles and networks, Buffer has to be able to resize photographs as needed to meet the varying limitations of size and format posed by different social networks. "We always had these hacked together solutions," says Farrelly.
+
+ To create this new service, one of the senior product engineers was assigned to learn Docker and Kubernetes, then build the service, test it, deploy it and monitor it—which he was able to do relatively quickly. "In our old way of working, the feedback loop was a lot longer, and it was delicate because if you deployed something, the risk was high to potentially break something else," Farrelly says. "With the kind of deploys that we built around Kubernetes, we were able to detect bugs and fix them, and get them deployed super fast. The second someone is fixing [a bug], it’s out the door."
+
+ Plus, unlike with their old system, they could scale things horizontally with one command. "As we rolled it out," Farrelly says, "we could anticipate and just click a button. This allowed us to deal with the demand that our users were placing on the system and easily scale it to handle it."
+
+ Another thing they weren’t able to do before was a canary deploy. This new capability "made us so much more confident in deploying big changes," says Farrelly. "Before, it took a lot of testing, which is still good, but it was also a lot of ‘fingers crossed.’ And this is something that gets run 800,000 times a day, the core of our business. If it doesn’t work, our business doesn’t work. In a Kubernetes world, I can do a canary deploy to test it for 1 percent and I can shut it down very quickly if it isn’t working. This has leveled up our ability to deploy and roll out new changes quickly while reducing risk."
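+ The mechanics behind such a canary can be sketched with stock Kubernetes primitives (a generic illustration, not Buffer’s actual tooling): run the new image as a second Deployment whose pods share the stable Deployment’s labels, so the shared Service splits traffic by replica count, and delete it to shut the canary down.
+
+ # Generic canary sketch. Assumes a canary-deployment.yaml manifest whose pods
+ # carry the same labels as the stable Deployment; 1 canary replica next to
+ # ~99 stable replicas receives roughly 1 percent of requests.
+ import subprocess
+
+ def kubectl(*args):
+     subprocess.run(["kubectl", *args], check=True)
+
+ def start_canary():
+     kubectl("apply", "-f", "canary-deployment.yaml")   # hypothetical manifest
+
+ def abort_canary():
+     # "Shut it down very quickly if it isn't working."
+     kubectl("delete", "-f", "canary-deployment.yaml")
+
+ def promote(new_image):
+     # Roll the stable Deployment to the proven image, then drop the canary.
+     kubectl("set", "image", "deployment/resizer", f"resizer={new_image}")   # names illustrative
+     abort_canary()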
+
+
+
+
+
+
+ "If you want to run containers in production, with nearly the power that Google uses internally, this [Kubernetes] is a great way to do that," Farrelly says. "We’re a relatively small team that’s actually running Kubernetes, and we’ve never run anything like it before. So it’s more approachable than you might think. That’s the one big thing that I tell people who are experimenting with it. Pick a couple of things, roll it out, kick the tires on this for a couple of months and see how much it can handle. You start learning a lot this way."
+
+
+
+
+
+ By October 2016, 54 percent of Buffer’s traffic was going through their Kubernetes cluster. "There’s a lot of our legacy functionality that still runs alright, and those parts might move to Kubernetes or stay in our old setup forever," says Farrelly. But the company made the commitment at that time that going forward, "all new development, all new features, will be running on Kubernetes."
+
+ The plan for 2017 is to move all the legacy applications to a new Kubernetes cluster, and run everything they’ve pulled out of their old infrastructure, plus the new services they’re developing in Kubernetes, on another cluster. "I want to bring all the benefits that we’ve seen on our early services to everyone on the team," says Farrelly.
+
+
For Buffer’s engineers, it’s an exciting process. "Every time we’re deploying a new service, we need to figure out: OK, what’s the architecture? How do these services communicate? What’s the best way to build this service?" Farrelly says. "And then we use the different features that Kubernetes has to glue all the pieces together. It’s enabling us to experiment as we’re learning how to design a service-oriented architecture. Before, we just wouldn’t have been able to do it. This is actually giving us a blank white board so we can do whatever we want on it."
+
+ Part of that blank slate is the flexibility that Kubernetes offers should the time come when Buffer may want or need to change its cloud. "It’s cloud agnostic so maybe one day we could switch to Google or somewhere else," Farrelly says. "We’re very deep in Amazon but it’s nice to know we could move away if we need to."
+
+ At this point, the team at Buffer can’t imagine running their infrastructure any other way—and they’re happy to spread the word. "If you want to run containers in production, with nearly the power that Google uses internally, this [Kubernetes] is a great way to do that," Farrelly says. "We’re a relatively small team that’s actually running Kubernetes, and we’ve never run anything like it before. So it’s more approachable than you might think. That’s the one big thing that I tell people who are experimenting with it. Pick a couple of things, roll it out, kick the tires on this for a couple of months and see how much it can handle. You start learning a lot this way."
+
+
+
diff --git a/content/ko/case-studies/capital-one/capitalone_featured_logo.png b/content/ko/case-studies/capital-one/capitalone_featured_logo.png
new file mode 100644
index 0000000000000..f57c7697e36fd
Binary files /dev/null and b/content/ko/case-studies/capital-one/capitalone_featured_logo.png differ
diff --git a/content/ko/case-studies/capital-one/index.html b/content/ko/case-studies/capital-one/index.html
new file mode 100644
index 0000000000000..773db4869e4e3
--- /dev/null
+++ b/content/ko/case-studies/capital-one/index.html
@@ -0,0 +1,96 @@
+---
+title: Capital One Case Study
+case_study_styles: true
+cid: caseStudies
+css: /css/style_case_studies.css
+---
+
+
+
CASE STUDY:
Supporting Fast Decisioning Applications with Kubernetes
+
+
+
+
+
+
+ Company Capital One Location McLean, Virginia Industry Retail banking
+
+
+
+
+
+
+
Challenge
+ The team set out to build a provisioning platform for Capital One applications deployed on AWS that use streaming, big-data decisioning, and machine learning. One of these applications handles millions of transactions a day; some deal with critical functions like fraud detection and credit decisioning. The key considerations: resilience and speed—as well as full rehydration of the cluster from base AMIs.
+
+
+
Solution
+ The decision to run Kubernetes "is very strategic for us," says John Swift, Senior Director Software Engineering. "We use Kubernetes as a substrate or an operating system, if you will. There’s a degree of affinity in our product development."
+
+
+
+
+
Impact
+ "Kubernetes is a significant productivity multiplier," says Lead Software Engineer Keith Gasser, adding that to run the platform without Kubernetes would "easily see our costs triple, quadruple what they are now for the amount of pure AWS expense." Time to market has been improved as well: "Now, a team can come to us and we can have them up and running with a basic decisioning app in a fortnight, which before would have taken a whole quarter, if not longer." Deployments increased by several orders of magnitude. Plus, the rehydration/cluster-rebuild process, which took a significant part of a day to do manually, now takes a couple hours with Kubernetes automation and declarative configuration.
+
+
+
+
+
+
+
+
+
+"With the scalability, the management, the coordination, Kubernetes really empowers us and gives us more time back than we had before." — Jamil Jadallah, Scrum Master
+
+
+
+
+
+
+ As a top 10 U.S. retail bank, Capital One has applications that handle millions of transactions a day. Big-data decisioning—for fraud detection, credit approvals and beyond—is core to the business. To support the teams that build applications with those functions for the bank, the cloud team led by Senior Director Software Engineering John Swift embraced Kubernetes for its provisioning platform. "Kubernetes and its entire ecosystem are very strategic for us," says Swift. "We use Kubernetes as a substrate or an operating system, if you will. There’s a degree of affinity in our product development."
+ Almost two years ago, the team embarked on this journey by first working with Docker. Then came Kubernetes. "We wanted to put streaming services into Kubernetes as one feature of the workloads for fast decisioning, and to be able to do batch alongside it," says Lead Software Engineer Keith Gasser. "Once the data is streamed and batched, there are so many tool sets in Flink that we use for decisioning. We want to provide the tools in the same ecosystem, in a consistent way, rather than have a large custom snowflake ecosystem where every tool needs its own custom deployment. Kubernetes gives us the ability to bring all of these together, so the richness of the open source and even the license community dealing with big data can be corralled."
+
+
+
+
+
+
+
+ "We want to provide the tools in the same ecosystem, in a consistent way, rather than have a large custom snowflake ecosystem where every tool needs its own custom deployment. Kubernetes gives us the ability to bring all of these together, so the richness of the open source and even the license community dealing with big data can be corralled."
+
+
+
+
+
+
+ In this first year, the impact has already been great. "Time to market is really huge for us," says Gasser. "Especially with fraud, you have to be very nimble in the way you respond to threats in the marketplace—being able to add and push new rules, detect new patterns of behavior, detect anomalies in account and transaction flows." With Kubernetes, "a team can come to us and we can have them up and running with a basic decisioning app in a fortnight, which before would have taken a whole quarter, if not longer. Kubernetes is a manifold productivity multiplier."
+ Teams now have the tools to be autonomous in their deployments, and as a result, deployments have increased by two orders of magnitude. "And that was with just seven dedicated resources, without needing a whole group sitting there watching everything," says Scrum Master Jamil Jadallah. "That’s a huge cost savings. With the scalability, the management, the coordination, Kubernetes really empowers us and gives us more time back than we had before."
+
+
+
+
+
+ With Kubernetes, "a team can come to us and we can have them up and running with a basic decisioning app in a fortnight, which before would have taken a whole quarter, if not longer. Kubernetes is a manifold productivity multiplier."
+
+
+
+
+
+ Kubernetes has also been a great time-saver for Capital One’s required periodic "rehydration" of clusters from base AMIs. To minimize the attack vulnerability profile for applications in the cloud, "Our entire clusters get rebuilt from scratch periodically, with new fresh instances and virtual server images that are patched with the latest and greatest security patches," says Gasser. This process used to take the better part of a day, and dedicated personnel, to do manually. It’s now a quick Kubernetes job.
+ Savings extend to both capital and operating expenses. "It takes very little to get into Kubernetes because it’s all open source," Gasser points out. "We went the DIY route for building our cluster, and we definitely like the flexibility of being able to embrace the latest from the community immediately without waiting for a downstream company to do it. There’s capex related to those licenses that we don’t have to pay for. Moreover, there’s capex savings for us from some of the proprietary software that we get to sunset in our particular domain. So that goes onto our ledger in a positive way as well." (Some of those open source technologies include Prometheus, Fluentd, gRPC, Istio, CNI, and Envoy.)
+
+
+
+
+
+ "If we had to do all of this without Kubernetes, on underlying cloud services, I could easily see our costs triple, quadruple what they are now for the amount of pure AWS expense. That doesn’t account for personnel to deploy and maintain all the additional infrastructure."
+
+
+
+
+ And on the opex side, Gasser says, the savings are high. "We run dozens of services, we have scores of pods, many daemon sets, and since we’re data-driven, we take advantage of EBS-backed volume claims for all of our stateful services. If we had to do all of this without Kubernetes, on underlying cloud services, I could easily see our costs triple, quadruple what they are now for the amount of pure AWS expense. That doesn’t account for personnel to deploy and maintain all the additional infrastructure."
+ The team is confident that the benefits will continue to multiply—without a steep learning curve for the engineers being exposed to the new technology. "As we onboard additional tenants in this ecosystem, I think the need for folks to understand Kubernetes may not necessarily go up. In fact, I think it goes down, and that’s good," says Gasser. "Because that really demonstrates the scalability of the technology. You start to reap the benefits, and they can concentrate on all the features they need to build for great decisioning in the business—fraud decisions, credit decisions—and not have to worry about, ‘Is my AWS server broken? Is my pod not running?’"
+
+
+
diff --git a/content/ko/case-studies/ccp-games/ccp_logo.png b/content/ko/case-studies/ccp-games/ccp_logo.png
new file mode 100644
index 0000000000000..cbf3d267ba8dd
Binary files /dev/null and b/content/ko/case-studies/ccp-games/ccp_logo.png differ
diff --git a/content/ko/case-studies/ccp-games/index.html b/content/ko/case-studies/ccp-games/index.html
new file mode 100644
index 0000000000000..8867cbb323442
--- /dev/null
+++ b/content/ko/case-studies/ccp-games/index.html
@@ -0,0 +1,4 @@
+---
+title: CCP Games
+content_url: https://cloud.google.com/customers/ccp-games/
+---
\ No newline at end of file
diff --git a/content/ko/case-studies/comcast/comcast_logo.png b/content/ko/case-studies/comcast/comcast_logo.png
new file mode 100644
index 0000000000000..3f0ef76645680
Binary files /dev/null and b/content/ko/case-studies/comcast/comcast_logo.png differ
diff --git a/content/ko/case-studies/comcast/index.html b/content/ko/case-studies/comcast/index.html
new file mode 100644
index 0000000000000..7ace6a246d67f
--- /dev/null
+++ b/content/ko/case-studies/comcast/index.html
@@ -0,0 +1,4 @@
+---
+title: Comcast
+content_url: https://youtu.be/lmeFkH-rHII
+---
\ No newline at end of file
diff --git a/content/ko/case-studies/concur/concur_featured_logo.png b/content/ko/case-studies/concur/concur_featured_logo.png
new file mode 100644
index 0000000000000..473427a3baee3
Binary files /dev/null and b/content/ko/case-studies/concur/concur_featured_logo.png differ
diff --git a/content/ko/case-studies/concur/index.html b/content/ko/case-studies/concur/index.html
new file mode 100644
index 0000000000000..0bb619f527a53
--- /dev/null
+++ b/content/ko/case-studies/concur/index.html
@@ -0,0 +1,4 @@
+---
+title: Concur
+content_url: http://searchitoperations.techtarget.com/news/450297178/Tech-firms-roll-out-Kubernetes-in-production
+---
\ No newline at end of file
diff --git a/content/ko/case-studies/crowdfire/crowdfire_featured_logo.png b/content/ko/case-studies/crowdfire/crowdfire_featured_logo.png
new file mode 100644
index 0000000000000..ef84b16ea06c7
Binary files /dev/null and b/content/ko/case-studies/crowdfire/crowdfire_featured_logo.png differ
diff --git a/content/ko/case-studies/crowdfire/index.html b/content/ko/case-studies/crowdfire/index.html
new file mode 100644
index 0000000000000..227a5c08394bd
--- /dev/null
+++ b/content/ko/case-studies/crowdfire/index.html
@@ -0,0 +1,101 @@
+---
+title: Crowdfire Case Study
+
+case_study_styles: true
+cid: caseStudies
+css: /css/style_crowdfire.css
+---
+
+
+
CASE STUDY:
How to Keep Iterating a Fast-Growing App With a Cloud-Native Approach
+
+
+
+
+ Company Crowdfire Location Mumbai, India Industry Social Media Software
+
+
+
+
+
+
+
Challenge
+ Crowdfire helps content creators create their content anywhere on the Internet and publish it everywhere else in the right format. Since its launch in 2010, it has grown to 16 million users. The product began as a monolith app running on Google App Engine, and in 2015, the company began a transformation to microservices running on Amazon Web Services Elastic Beanstalk. "It was okay for our use cases initially, but as the number of services, development teams and scale increased, the deploy times, self-healing capabilities and resource utilization started to become problems for us," says Software Engineer Amanpreet Singh, who leads the infrastructure team for Crowdfire.
+
Solution
+ "We realized that we needed a more cloud-native approach to deal with these issues," says Singh. The team decided to implement a custom setup of Kubernetes based on Terraform and Ansible.
+
+
+
+
+
+
Impact
+ "Kubernetes has helped us reduce the deployment time from 15 minutes to less than a minute," says Singh. "Due to Kubernetes’s self-healing nature, the operations team doesn’t need to do any manual intervention in case of a node or pod failure." Plus, he says, "Dev-Prod parity has improved since developers can experiment with options in dev/staging clusters, and when it’s finalized, they just commit the config changes in the respective code repositories. These changes automatically get replicated on the production cluster via CI/CD pipelines."
+
+
+
+
+
+
+ "In the 15 months that we’ve been using Kubernetes, it has been amazing for us. It enabled us to iterate quickly, increase development speed, and continuously deliver new features and bug fixes to our users, while keeping our operational costs and infrastructure management overhead under control." - Amanpreet Singh, Software Engineer at Crowdfire
+
+
+
+
+
"If you build it, they will come."
+ For most content creators, only half of that movie quote may ring true. Sure, platforms like WordPress, YouTube and Shopify have made it simple for almost anyone to start publishing new content online, but attracting an audience isn’t as easy. Crowdfire "helps users publish their content to all possible places where their audience exists," says Amanpreet Singh, a Software Engineer at the company based in Mumbai, India. Crowdfire has gained more than 16 million users—from bloggers and artists to makers and small businesses—since its launch in 2010.
+ With that kind of growth—and a high demand from users for new features and continuous improvements—the Crowdfire team struggled to keep up behind the scenes. In 2015, they moved their monolith Java application to Amazon Web Services Elastic Beanstalk and started breaking it down into microservices.
+ It was a good first step, but the team soon realized they needed to go further down the cloud-native path, which would lead them to Kubernetes. "It was okay for our use cases initially, but as the number of services and development teams increased and we scaled further, deploy times, self-healing capabilities and resource utilization started to become problematic," says Singh, who leads the infrastructure team at Crowdfire. "We realized that we needed a more cloud-native approach to deal with these issues."
+ As he looked around for solutions, Singh had a checklist of what Crowdfire needed. "We wanted to keep some things separate so they could be shipped independent of other things; this would help remove blockers and let different teams work at their own pace," he says. "We also make a lot of data-driven decisions, so shipping a feature and its iterations quickly was a must."
+ Kubernetes checked all the boxes and then some. "One of the best things was the built-in service discovery," he says. "When you have a bunch of microservices that need to call each other, having internal DNS readily available and service IPs and ports automatically set as environment variables help a lot." Plus, he adds, "Kubernetes’s opinionated approach made it easier to get started."
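+ A short, generic example makes Singh’s point concrete (the service and path names are illustrative): for a Service named user-service, Kubernetes injects USER_SERVICE_SERVICE_HOST and USER_SERVICE_SERVICE_PORT into each pod’s environment, and cluster DNS resolves the bare service name directly.
+
+ # Generic illustration of Kubernetes' built-in service discovery.
+ import os
+ import urllib.request
+
+ # Option 1: environment variables injected for each Service in the namespace.
+ host = os.environ["USER_SERVICE_SERVICE_HOST"]
+ port = os.environ["USER_SERVICE_SERVICE_PORT"]
+
+ # Option 2: internal DNS; the bare service name resolves inside the cluster.
+ resp = urllib.request.urlopen(f"http://user-service:{port}/healthz")
+ print(resp.status)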
+
+
+
+
+
+ "We realized that we needed a more cloud-native approach to deal with these issues," says Singh. The team decided to implement a custom setup of Kubernetes based on Terraform and Ansible."
+
+
+
+
+ There was another compelling business reason for the cloud-native approach. "In today’s world of ever-changing business requirements, using cloud native technology provides a variety of options to choose from—even the ability to run services in a hybrid cloud environment," says Singh. "Businesses can keep services in a region closest to the users, and thus benefit from high-availability and resiliency."
+ So in February 2016, Singh set up a test Kubernetes cluster using the kube-up scripts provided. "I explored the features and was able to deploy an application pretty easily," he says. "However, it seemed like a black box since I didn’t understand the components completely, and had no idea what the kube-up script did under the hood. So when it broke, it was hard to find the issue and fix it."
+ To get a better understanding, Singh dove into the internals of Kubernetes, reading the docs and even some of the code. And he looked to the Kubernetes community for more insight. "I used to stay up a little late every night (a lot of users were active only when it’s night here in India) and would try to answer questions on the Kubernetes community Slack from users who were getting started," he says. "I would also follow other conversations closely. I must admit I was able to avoid a lot of issues in our setup because I knew others had faced the same issues."
+ Based on the knowledge he gained, Singh decided to implement a custom setup of Kubernetes based on Terraform and Ansible. "I wrote Terraform to launch Kubernetes master and nodes (Auto Scaling Groups) and an Ansible playbook to install the required components," he says. (The company recently switched to using prebaked AMIs to make the node bringup faster, and is planning to change its networking layer.)
+
+
+
+
+
+ "Kubernetes helped us reduce the deployment time from 15 minutes to less than a minute. Due to Kubernetes’s self-healing nature, the operations team doesn’t need to do any manual intervention in case of a node or pod failure."
+
+
+
+
+
+ First, the team migrated a few staging services from Elastic Beanstalk to the new Kubernetes staging cluster, and then set up a production cluster a month later to deploy some services. The results were convincing. "By the end of March 2016, we established that all the new services must be deployed on Kubernetes," says Singh. "Kubernetes helped us reduce the deployment time from 15 minutes to less than a minute. Due to Kubernetes’s self-healing nature, the operations team doesn’t need to do any manual intervention in case of a node or pod failure." On top of that, he says, "Dev-Prod parity has improved since developers can experiment with options in dev/staging clusters, and when it’s finalized, they just commit the config changes in the respective code repositories. These changes automatically get replicated on the production cluster via CI/CD pipelines. This brings more visibility into the changes being made, and keeps an audit trail."
+ Over the next six months, the team worked on migrating all the services from Elastic Beanstalk to Kubernetes, except for the few that were deprecated and would soon be terminated anyway. The services were moved one at a time, and their performance was monitored for two to three days each. Today, "We’re completely migrated and we run all new services on Kubernetes," says Singh.
+ The impact has been considerable: With Kubernetes, the company has experienced a 90% cost savings on Elastic Load Balancer, which is now only used for their public, user-facing services. Their EC2 operating expenses have decreased by as much as 50%.
+ All 30 engineers at Crowdfire were onboarded at once. "I gave an internal talk where I shared the basic components and demoed the usage of kubectl," says Singh. "Everyone was excited and happy about using Kubernetes. Developers have more control and visibility into their applications running in production now. Most of all, they’re happy with the low deploy times and self-healing services."
+ And they’re much more productive, too. "Where we used to do about 5 deployments per day," says Singh, "now we’re doing 30+ production and 50+ staging deployments almost every day."
+
+
+
+
+
+
+ The impact has been considerable: With Kubernetes, the company has experienced a 90% cost savings on Elastic Load Balancer, which is now only used for their public, user-facing services. Their EC2 operating expenses have decreased by as much as 50%.
+
+
+
+
+
+
+ Singh notes that almost all of the engineers interact with the staging cluster on a daily basis, and that has created a cultural change at Crowdfire. "Developers are more aware of the cloud infrastructure now," he says. "They’ve started following cloud best practices like better health checks, structured logs to stdout [standard output], and config via files or environment variables."
+ With Crowdfire’s commitment to Kubernetes, Singh is looking to expand the company’s cloud-native stack. The team already uses Prometheus for monitoring, and he says he is evaluating Linkerd and Envoy Proxy as a way to "get more metrics about request latencies and failures, and handle them better." Other CNCF projects, including OpenTracing and gRPC, are also on his radar.
+ Singh has found that the cloud-native community is growing in India, too, particularly in Bangalore. "A lot of startups and new companies are starting to run their infrastructure on Kubernetes," he says.
+ And when people ask him about Crowdfire’s experience, he has this advice to offer: "Kubernetes is a great piece of technology, but it might not be right for you, especially if you have just one or two services or your app isn’t easy to run in a containerized environment," he says. "Assess your situation and the value that Kubernetes provides before going all in. If you do decide to use Kubernetes, make sure you understand the components that run under the hood and what role they play in smoothly running the cluster. Another thing to consider is whether your apps are ‘Kubernetes-ready,’ meaning they have proper health checks and handle termination signals to shut down gracefully."
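+
+ To make that advice concrete, here is a minimal sketch of what "Kubernetes-ready" can look like in a pod spec (the name, image, port, and path are hypothetical): a readiness probe gates traffic, a liveness probe restarts a hung container, and the grace period gives the app time to react to SIGTERM.
+
+ ```yaml
+ apiVersion: v1
+ kind: Pod
+ metadata:
+   name: api                               # hypothetical name
+ spec:
+   terminationGracePeriodSeconds: 30       # time to drain in-flight work after SIGTERM
+   containers:
+   - name: api
+     image: registry.example.com/api:1.0   # hypothetical image
+     readinessProbe:                       # traffic is routed only when this passes
+       httpGet:
+         path: /healthz
+         port: 8080
+       initialDelaySeconds: 5
+       periodSeconds: 10
+     livenessProbe:                        # the container is restarted if this fails
+       httpGet:
+         path: /healthz
+         port: 8080
+       periodSeconds: 15
+ ```
+
+ On shutdown, Kubernetes sends SIGTERM, waits out the grace period, and only then sends SIGKILL, so an app that catches SIGTERM can finish in-flight requests before exiting.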
+ And if your company fits that profile, go for it. Crowdfire clearly did—and is now reaping the benefits. "In the 15 months that we’ve been using Kubernetes, it has been amazing for us," says Singh. "It enabled us to iterate quickly, increase development speed and continuously deliver new features and bug fixes to our users, while keeping our operational costs and infrastructure management overhead under control."
+
+
+
+
diff --git a/content/ko/case-studies/ebay/ebay_featured.png b/content/ko/case-studies/ebay/ebay_featured.png
new file mode 100644
index 0000000000000..4ad17a4af5036
Binary files /dev/null and b/content/ko/case-studies/ebay/ebay_featured.png differ
diff --git a/content/ko/case-studies/ebay/ebay_logo.png b/content/ko/case-studies/ebay/ebay_logo.png
new file mode 100644
index 0000000000000..830913c52b13f
Binary files /dev/null and b/content/ko/case-studies/ebay/ebay_logo.png differ
diff --git a/content/ko/case-studies/ebay/index.html b/content/ko/case-studies/ebay/index.html
new file mode 100644
index 0000000000000..e0bf4f6e970b2
--- /dev/null
+++ b/content/ko/case-studies/ebay/index.html
@@ -0,0 +1,4 @@
+---
+title: eBay
+content_url: http://www.nextplatform.com/2015/11/12/inside-ebays-shift-to-kubernetes-and-containers-atop-openstack/
+---
\ No newline at end of file
diff --git a/content/ko/case-studies/goldman-sachs/gs_logo.png b/content/ko/case-studies/goldman-sachs/gs_logo.png
new file mode 100644
index 0000000000000..5cc8c14566ae5
Binary files /dev/null and b/content/ko/case-studies/goldman-sachs/gs_logo.png differ
diff --git a/content/ko/case-studies/goldman-sachs/index.html b/content/ko/case-studies/goldman-sachs/index.html
new file mode 100644
index 0000000000000..93d3022d12440
--- /dev/null
+++ b/content/ko/case-studies/goldman-sachs/index.html
@@ -0,0 +1,4 @@
+---
+title: Goldman Sachs
+content_url: http://blogs.wsj.com/cio/2016/02/24/big-changes-in-goldmans-software-emerge-from-small-containers/
+---
\ No newline at end of file
diff --git a/content/ko/case-studies/golfnow/golfnow_featured.png b/content/ko/case-studies/golfnow/golfnow_featured.png
new file mode 100644
index 0000000000000..0b99ac3b8f8f8
Binary files /dev/null and b/content/ko/case-studies/golfnow/golfnow_featured.png differ
diff --git a/content/ko/case-studies/golfnow/golfnow_logo.png b/content/ko/case-studies/golfnow/golfnow_logo.png
new file mode 100644
index 0000000000000..dbeb127b02a27
Binary files /dev/null and b/content/ko/case-studies/golfnow/golfnow_logo.png differ
diff --git a/content/ko/case-studies/golfnow/index.html b/content/ko/case-studies/golfnow/index.html
new file mode 100644
index 0000000000000..f4bf4d4f278c2
--- /dev/null
+++ b/content/ko/case-studies/golfnow/index.html
@@ -0,0 +1,125 @@
+---
+title: GolfNow Case Study
+
+case_study_styles: true
+cid: caseStudies
+css: /css/style_golfnow.css
+---
+
+
+
CASE STUDY:
+
Saving Time and Money with Cloud Native Infrastructure
+
+
+
+
+ Company GolfNow Location Orlando, Florida Industry Golf Industry Technology and Services Provider
+
+
+
+
+
+
+
+
+
Challenge
+ A member of the NBC Sports Group, GolfNow is the golf industry’s technology and services leader, managing 10 different products, as well as the largest e-commerce tee time marketplace in the world. As its business began expanding rapidly and globally, GolfNow’s monolithic application could not keep up. "We kept growing our infrastructure vertically rather than horizontally, and the cost of doing business became problematic," says Sheriff Mohamed, GolfNow’s Director, Architecture. "We wanted the ability to more easily expand globally."
+
+
+
+
+
Solution
+ Turning to microservices and containerization, GolfNow began moving its applications and databases from third-party services to its own clusters running on Docker and Kubernetes.
+
+
Impact
+ The results were immediate. While maintaining the same capacity—and beyond, during peak periods—GolfNow saw its infrastructure costs for the first application virtually cut in half.
+
+
+
+
+
+
+
+ "With our growth we obviously needed to expand our infrastructure, and we kept growing vertically rather than horizontally. We were basically wasting money and doubling the cost of our infrastructure."
- SHERIFF MOHAMED, DIRECTOR, ARCHITECTURE AT GOLFNOW
+
+
+
+
+
+
It’s not every day that you can say you’ve slashed an operating expense by half.
+
+ But Sheriff Mohamed and Josh Chandler did just that when they helped lead their company, GolfNow, on a journey from a monolithic to a containerized, cloud native infrastructure managed by Kubernetes.
+
+ A top-performing business within the NBC Sports Group, GolfNow is a technology and services company with the largest tee time marketplace in the world. GolfNow serves 5 million active golfers across 10 different products. In recent years, the business had grown so fast that the infrastructure supporting their giant monolithic application (written in C#.NET and backed by a SQL Server database) could not keep up. "With our growth we obviously needed to expand our infrastructure, and we kept growing vertically rather than horizontally," says Sheriff, GolfNow’s Director, Architecture. "Our costs were growing exponentially. And on top of that, we had to build a Disaster Recovery (DR) environment, which then meant we’d have to copy exactly what we had in our original data center to another data center that was just the standby. We were basically wasting money and doubling the cost of our infrastructure."
+
+ In moving just the first of GolfNow’s important applications—a booking engine for golf courses and B2B marketing platform—from third-party services to their own Kubernetes environment, "our bill went down drastically," says Sheriff.
+
+ The path to those stellar results began in late 2014. In order to support GolfNow’s global growth, the team decided that the company needed to have multiple data centers and the ability to quickly and easily re-route traffic as needed. "From there we knew that we needed to go in a direction of breaking things apart, microservices, and containerization," says Sheriff. "At the time we were trying to get away from C#.NET and SQL Server since it didn’t run very well on Linux, where everything containerized was running smoothly."
+
+ To that end, the team shifted to working with Node.js, the open-source, cross-platform JavaScript runtime environment for developing tools and applications, and MongoDB, the open-source database program. At the time, Docker, the platform for deploying applications in containers, was still new. But once the team began experimenting with it, Sheriff says, "we realized that was the way we wanted to go, especially since that’s the way the industry is heading."
+
+
+
+
+
+ "The team migrated the rest of the application into their Kubernetes cluster. And the impact was immediate: On top of cutting monthly costs by a large percentage, says Sheriff, 'Running at the same capacity and during our peak time, we were able to horizontally grow. Since we were using our VMs more efficiently with containers, we didn’t have to pay extra money at all.'"
+
+
+
+
+
+ GolfNow’s dev team ran an "internal, low-key" proof of concept and was won over. "We really liked how easy it was to be able to pass containers around to each other and have them up and running in no time, exactly the way it was running on my machine," says Sheriff. "Because that is always the biggest gripe that Ops has with developers, right? ‘It worked on my machine!’ But then we started getting to the point of, ‘How do we make sure that these things stay up and running?’"
+ That led the team on a quest to find the right orchestration system for the company’s needs. Sheriff says the first few options they tried were either too heavy or "didn’t feel quite right." In late summer 2015, they discovered the just-released Kubernetes, which Sheriff immediately liked for its ease of use. "We did another proof of concept," he says, "and Kubernetes won because of the fact that the community backing was there, built on top of what Google had already done."
+
+ But before they could go with Kubernetes, NBC, GolfNow’s parent company, also asked them to comparison shop with another company. Sheriff and his team liked the competing company’s platform user interface, but didn’t like that its platform would not allow containers to run natively on Docker. With no clear decision in sight, Sheriff’s VP at GolfNow, Steve McElwee, set up a three-month trial during which a GolfNow team (consisting of Sheriff and Josh, who’s now Lead Architect, Open Platforms) would build out a Kubernetes environment, and a large NBC team would build out one with the other company’s platform.
+
+ "We spun up the cluster and we tried to get everything to run the way we wanted it to run," Sheriff says. "The biggest thing that we took away from it is that not only did we want our applications to run within Kubernetes and Docker, we also wanted our databases to run there. We literally wanted our entire infrastructure to run within Kubernetes."
+
+ At the time there was nothing in the community to help them get Kafka and MongoDB clusters running within a Kubernetes and Docker environment, so Sheriff and Josh figured it out on their own, taking a full month to get it right. "Everything started rolling from there," Sheriff says. "We were able to get all our applications connected, and we finished our side of the proof of concept a month in advance. My VP was like, ‘Alright, it’s over. Kubernetes wins.’"
+
+ The next step, beginning in January 2016, was getting everything working in production. The team focused first on one application that was already written in Node.js and MongoDB. A booking engine for golf courses and B2B marketing platform, the application was already going in the microservice direction but wasn’t quite finished yet. At the time, it was running on Heroku, Compose, and other third-party services—resulting in a large monthly bill.
+
+
+
+
+
+
+ "'The time I spent actually moving the applications was under 30 seconds! We can move data centers in just incredible amounts of time. If you haven’t come from the Kubernetes world you wouldn’t believe me.' Sheriff puts it in these terms: 'Before Kubernetes I wasn’t sleeping at night, literally. I was woken up all the time, because things were down. After Kubernetes, I’ve been sleeping at night.'"
+
+
+
+
+
+ "The goal was to take all of that out and put it within this new platform we’ve created with Kubernetes on Google Compute Engine (GCE)," says Sheriff. "So we ended up building piece by piece, in parallel, what was out in Heroku and Compose, in our Kubernetes cluster. Then, literally, just switched configs in the background. So in Heroku we had the app running hitting a Compose database. We’d take the config, change it and make it hit the database that was running in our cluster."
+
+ Using this procedure, they were able to migrate piecemeal, without any downtime. The first migration was done during off hours, but to test the limits, the team migrated the second database in the middle of the day, when lots of users were running the application. "We did it," Sheriff says, "and again it was successful. Nobody noticed."
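+
+ A simplified sketch of that switch (the names and connection strings are hypothetical, not GolfNow’s actual setup) shows why no application code had to change; the app reads its database location from configuration, and the migration is a matter of updating one value:
+
+ ```yaml
+ # Before: the app reads its database location from config,
+ # pointing at the hosted Compose database.
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+   name: booking-engine-config             # hypothetical name
+ data:
+   MONGO_URL: "mongodb://db.compose.example.com:27017/booking"
+ ---
+ # After: the same key now points at the MongoDB service running
+ # inside the Kubernetes cluster; only the config changes.
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+   name: booking-engine-config
+ data:
+   MONGO_URL: "mongodb://mongodb.default.svc.cluster.local:27017/booking"
+ ```
+
+ Applying the updated ConfigMap and restarting the pods repoints the application, and rolling back is the same edit in reverse.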
+
+ After three weeks of monitoring to make sure everything was running stable, the team migrated the rest of the application into their Kubernetes cluster. And the impact was immediate: On top of cutting monthly costs by a large percentage, says Sheriff, "Running at the same capacity and during our peak time, we were able to horizontally grow. Since we were using our VMs more efficiently with containers, we didn’t have to pay extra money at all."
+
+ Not only were they saving money, but they were also saving time. "I had a meeting this morning about migrating some applications from one cluster to another," says Josh. "I spent about 2 hours explaining the process. The time I spent actually moving the applications was under 30 seconds! We can move data centers in just incredible amounts of time. If you haven’t come from the Kubernetes world you wouldn’t believe me." Sheriff puts it in these terms: "Before Kubernetes I wasn’t sleeping at night, literally. I was woken up all the time, because things were down. After Kubernetes, I’ve been sleeping at night."
+
+ So far, only a small percentage of GolfNow’s applications have been migrated to the Kubernetes environment. "Our Core Team is rewriting a lot of the .NET applications into .NET Core [which is compatible with Linux and Docker] so that we can run them within containers," says Sheriff.
+
+ Looking ahead, Sheriff and his team want to spend 2017 continuing to build a whole platform around Kubernetes with Drone, an open-source continuous delivery platform, to make it more developer-centric. "Now they’re able to manage configuration, they’re able to manage their deployments and things like that, making all these subteams that are now creating all these microservices be self-sufficient," he says. "So it can pull us away from applications and allow us to just make sure the cluster is running and healthy, and then actually migrate that over to our Ops team."
+
+
+
+
+
+
+ "Having gone from complete newbies to production-ready in three months, the GolfNow team is eager to encourage other companies to follow their lead. 'This is The Six Million Dollar Man of the cloud right now,' adds Josh. 'Just try it out, watch it happen. I feel like the proof is in the pudding when you look at these kinds of application stacks. They’re faster, they’re more resilient.'"
+
+
+
+
+
+ And long-term, Sheriff has an even bigger goal for getting more people into the Kubernetes fold. "We’re actually trying to make this platform generic enough so that any of our sister companies can use it if they wish," he says. "Most definitely I think it can be used as a model. I think the way we migrated into it, the way we built it out, are all ways that I think other companies can learn from, and should not be afraid of."
+
+ The GolfNow team is also giving back to the Kubernetes community by open-sourcing a bot framework that Josh built. "We noticed that the dashboard user interface is actually moving a lot faster than when we started," says Sheriff. "However we realized what we needed was something that’s more of a bot that really helps us administer Kubernetes as a whole through Slack." Josh explains: "With the Kubernetes-Slack integration, you can essentially hook into a cluster and then issue commands and edit configurations. We’ve tried to simplify the security configuration as much as possible. We hope this will be our major thank you to Kubernetes, for everything you’ve given us."
+
+ Having gone from complete newbies to production-ready in three months, the GolfNow team is eager to encourage other companies to follow their lead. The lessons they’ve learned: "You’ve got to have buy-in from your boss," says Sheriff. "Another big deal is having two to three people dedicated to this type of endeavor. You can’t have people who are half in, half out." And if you don’t have buy-in from the get-go, proving it out will get you there.
+
+ "This is The Six Million Dollar Man of the cloud right now," adds Josh. "Just try it out, watch it happen. I feel like the proof is in the pudding when you look at these kinds of application stacks. They’re faster, they’re more resilient."
+
+
+
diff --git a/content/ko/case-studies/haufegroup/haufegroup_featured.png b/content/ko/case-studies/haufegroup/haufegroup_featured.png
new file mode 100644
index 0000000000000..08b09ec9db8b7
Binary files /dev/null and b/content/ko/case-studies/haufegroup/haufegroup_featured.png differ
diff --git a/content/ko/case-studies/haufegroup/haufegroup_logo.png b/content/ko/case-studies/haufegroup/haufegroup_logo.png
new file mode 100644
index 0000000000000..5d8245b0f6d18
Binary files /dev/null and b/content/ko/case-studies/haufegroup/haufegroup_logo.png differ
diff --git a/content/ko/case-studies/haufegroup/index.html b/content/ko/case-studies/haufegroup/index.html
new file mode 100644
index 0000000000000..f4256ff569b4a
--- /dev/null
+++ b/content/ko/case-studies/haufegroup/index.html
@@ -0,0 +1,112 @@
+---
+title: Haufe Group Case Study
+
+case_study_styles: true
+cid: caseStudies
+css: /css/style_haufegroup.css
+---
+
+
+
+
CASE STUDY:
Paving the Way for Cloud Native for Midsize Companies
+
+
+
+
+ Company Haufe Group Location Freiburg, Germany Industry Media and Software
+
+
+
+
+
+
+
+
+
Challenge
+ Founded in 1930 as a traditional publisher, Haufe Group has grown into a media and software company with 95 percent of its sales from digital products. Over the years, the company has gone from having "hardware in the basement" to outsourcing its infrastructure operations and IT. More recently, the development of new products, from Internet portals for tax experts to personnel training software, has created demands for increased speed, reliability and scalability. "We need to be able to move faster," says Solution Architect Martin Danielsson. "Adapting workloads is something that we really want to be able to do."
+
+
+
Solution
+ Haufe Group began its cloud-native journey when Microsoft Azure became available in Europe; the company needed cloud deployments for its desktop apps with bandwidth-heavy download services. "After that, it has been different projects trying out different things," says Danielsson. Two years ago, Holger Reinhardt joined Haufe Group as CTO and rapidly re-oriented the traditional host provider-based approach toward a cloud and API-first strategy.
+
+
+ A core part of this strategy was a strong mandate to embrace infrastructure-as-code across the entire software deployment lifecycle via Docker. The company is now getting ready to go live with two services in production using Kubernetes orchestration on Microsoft Azure and Amazon Web Services. The team is also working on breaking up one of their core Java Enterprise desktop products into microservices to allow for better evolvability and dynamic scaling in the cloud.
+
+
+
Impact
+ With the ability to adapt workloads, Danielsson says, teams "will be able to scale down to around half the capacity at night, saving 30 percent of the hardware cost." Plus, shorter release times have had a major impact. "Before, we had to announce at least a week in advance when we wanted to do a release because there was a huge checklist of things that you had to do," he says. "By going cloud native, we have the infrastructure in place to be able to automate all of these things. Now we can get a new release done in half an hour instead of days."
+
+
+
+
+
+
+
+
+ "Over the next couple of years, people won’t even think that much about it when they want to run containers. Kubernetes is going to be the go-to solution." - Martin Danielsson, Solution Architect, Haufe Group
+
+
+
+
+
+
+
More than 80 years ago, Haufe Group was founded as a traditional publishing company, printing books and commentary on paper.
By the 1990s, though, the company’s leaders recognized that the future was digital, and to their credit, were able to transform Haufe Group into a media and software business that now gets 95 percent of its sales from digital products. "Among the German companies doing this, we were one of the early adopters," says Martin Danielsson, Solution Architect for Haufe Group.
+ And now they’re leading the way for midsize companies embracing cloud-native technology like Kubernetes. "The really big companies like Ticketmaster and Google get it right, and the startups get it right because they’re faster," says Danielsson. "We’re in this big lump of companies in the middle with a lot of legacy, a lot of structure, a lot of culture that does not easily fit the cloud technologies. We’re just 1,500 people, but we have hundreds of customer-facing applications. So we’re doing things that will be relevant for many companies of our size or even smaller."
+ Many of those legacy challenges stemmed from simply following the technology trends of the times. "We used to do full DevOps," he says. In the 1990s and 2000s, "that meant that you had your hardware in the basement. And then 10 years ago, the hype of the moment was to outsource application operations, outsource everything, and strip down your IT department to take away the distraction of all these hardware things. That’s not our area of expertise. We didn’t want to be an infrastructure provider. And now comes the backlash of that."
+ Haufe Group began feeling the pain as they were developing more new products, from Internet portals for tax experts to personnel training software, that have created demands for increased speed, reliability and scalability. "Right now, we have this break in workflows, where we go from writing concepts to developing, handing it over to production and then handing that over to your host provider," he says. "And then when things go bad we have no clue what went wrong. We definitely want to take back control, and we want to move a lot faster. Adapting workloads is something that we really want to be able to do."
+ Those needs led them to explore cloud-native technology. Their first foray into the cloud was doing deployments in Microsoft Azure, once it became available in Europe, for desktop products that had built-in download services. Hosting expenses for such bandwidth-heavy services were too high, so the company turned to the cloud. "After that, it has been different projects trying out different things," says Danielsson.
+
+
+
+
+
+ "We have been doing containers for the last two years, and we really got the hang of how they work," says Danielsson. "But it was always for development and test, never in production, because we didn’t fully understand how that would work. And to me, Kubernetes was definitely the technology that solved that."
+
+
+
+
+
+
+ Two years ago, Holger Reinhardt joined Haufe Group as CTO and rapidly re-oriented the traditional host provider-based approach toward a cloud and API-first strategy. A core part of this strategy was a strong mandate to embrace infrastructure-as-code across the entire software deployment lifecycle via Docker.
+ Some experiments went further than others; German regulations about sensitive data proved to be a roadblock in moving some workloads to Azure and Amazon Web Services. "Due to our history, Germany is really strict with things like personally identifiable data," Danielsson says.
+ These experiments took on new life with the arrival of the Azure Sovereign Cloud for Germany (an Azure clone run by the German T-Systems provider). With the availability of Azure.de—which conforms to Germany’s privacy regulations—teams started to seriously consider deploying production loads in Docker into the cloud. "We have been doing containers for the last two years, and we really got the hang of how they work," says Danielsson. "But it was always for development and test, never in production, because we didn’t fully understand how that would work. And to me, Kubernetes was definitely the technology that solved that."
+ In parallel, Danielsson had built an API management system with the aim of supporting CI/CD scenarios, aspects of which were missing in off-the-shelf API management products. Built on Mashape’s Kong gateway, it has been open sourced as wicked.haufe.io, which he put to use with his product team.
Otherwise, Danielsson says his philosophy was "don’t try to reinvent the wheel all the time. Go for what’s there and 99 percent of the time it will be enough. And if you think you really need something custom or additional, think perhaps once or twice again. One of the things that I find so amazing with this cloud-native framework is that everything ties in."
+ Currently, Haufe Group is working on two projects using Kubernetes in production. One is a new mobile application for researching legislation and tax laws. "We needed a way to take out functionality from a legacy core and put an application on top of that with an API gateway—a lot of moving parts that screams containers," says Danielsson. So the team moved the build pipeline away from "deploying to some old, huge machine that you could deploy anything to" and onto a Kubernetes cluster where there would be automatic CI/CD "with feature branches and all these things that were a bit tedious in the past."
+
+
+
+
+
+ "Before, we had to announce at least a week in advance when we wanted to do a release because there was a huge checklist of things that you had to do," says Danielsson. "By going cloud native, we have the infrastructure in place to be able to automate all of these things. Now we can get a new release done in half an hour instead of days."
+
+
+
+
+
+ It was a proof of concept effort, and the proof was in the pudding. "Everyone was really impressed at what we accomplished in a week," says Danielsson. "We did these kinds of integrations just to make sure that we got a handle on how Kubernetes works. If you can create optimism and buzz around something, it’s half won. And if the developers and project managers know this is working, you’re more or less done." Adds Reinhardt: "You need to create some very visible, quick wins in order to overcome the status quo."
+ The impact on the speed of deployment was clear: "Before, we had to announce at least a week in advance when we wanted to do a release because there was a huge checklist of things that you had to do," says Danielsson. "By going cloud native, we have the infrastructure in place to be able to automate all of these things. Now we can get a new release done in half an hour instead of days."
+ The potential impact on cost was another bonus. "Hosting applications is quite expensive, so moving to the cloud is something that we really want to be able to do," says Danielsson. With the ability to adapt workloads, teams "will be able to scale down to around half the capacity at night, saving 30 percent of the hardware cost."
+ Just as importantly, Danielsson says, there’s added flexibility: "When we try to move or rework applications that are really crucial, it’s often tricky to validate whether the path we want to take is going to work out well. In order to validate that, we would need to reproduce the environment and really do testing, and that’s prohibitively expensive and simply not doable with traditional host providers. Cloud native gives us the ability to do risky changes and validate them in a cost-effective way."
+ As word of the two successful test projects spread throughout the company, interest in Kubernetes has grown. "We want to be able to support our developers in running Kubernetes clusters but we’re not there yet, so we allow them to do it as long as they’re aware that they are on their own," says Danielsson. "So that’s why we are also looking at things like [the managed Kubernetes platform] CoreOS Tectonic, Azure Container Service, ECS, etc. These kinds of services will be a lot more relevant to midsize companies that want to leverage cloud native but don’t have the IT departments or the structure around that."
+ In the next year and a half, Danielsson says the company will be working on moving one of their legacy desktop products, a web app for researching legislation and tax laws originally built in Java Enterprise, onto cloud-native technology. "We’re doing a microservice split out right now so that we can independently deploy the different parts," he says. The main website, which provides free content for customers, is also moving to cloud native.
+
+
+
+
+
+
+ "the execution of a strategy requires alignment of culture, structure and technology. Only if those three dimensions are aligned can you successfully execute a transformation into microservices and cloud-native architectures. And it is only then that the Cloud will pay the dividends in much faster speeds in product innovation and much lower operational costs."
+
+
+
+
+
+
+ But with these goals, Danielsson believes there are bigger cultural challenges that need to be constantly addressed. The move to new technology, not to mention a shift toward DevOps, means a lot of change for employees. "The roles were rather fixed in the past," he says. "You had developers, you had project leads, you had testers. And now you get into these really, really important things like test automation. Testers aren’t actually doing click testing anymore, and they have to write automated testing. And if you really want to go full-blown CI/CD, all these little pieces have to work together so that you get the confidence to do a check in, and know this check in is going to land in production, because if I messed up, some test is going to break. This is a really powerful thing because whatever you do, whenever you merge something into the trunk or to the master, this is going live. And that’s where you either get the people or they run away screaming."
+ Danielsson understands that it may take some people much longer to get used to the new ways.
+ "Culture is nothing that you can force on people," he says. "You have to live it for yourself. You have to evangelize. You have to show the advantages time and time again: This is how you can do it, this is what you get from it." To that end, his team has scheduled daylong workshops for the staff, bringing in outside experts to talk about everything from API to Devops to cloud.
+ For every person who runs away screaming, many others get drawn in. "Get that foot in the door and make them really interested in this stuff," says Danielsson. "Usually it catches on. We have people you never would have expected chanting, ‘Docker Docker Docker’ now. It’s cool to see them realize that there is a world outside of their Python libraries. It’s awesome to see them really work with Kubernetes."
+ Ultimately, Reinhardt says, "the execution of a strategy requires alignment of culture, structure and technology. Only if those three dimensions are aligned can you successfully execute a transformation into microservices and cloud-native architectures. And it is only then that the Cloud will pay the dividends in much faster speeds in product innovation and much lower operational costs."
+
+
+
diff --git a/content/ko/case-studies/homeoffice/homeoffice_logo.png b/content/ko/case-studies/homeoffice/homeoffice_logo.png
new file mode 100644
index 0000000000000..35d9722611159
Binary files /dev/null and b/content/ko/case-studies/homeoffice/homeoffice_logo.png differ
diff --git a/content/ko/case-studies/homeoffice/index.html b/content/ko/case-studies/homeoffice/index.html
new file mode 100644
index 0000000000000..589c7507d4839
--- /dev/null
+++ b/content/ko/case-studies/homeoffice/index.html
@@ -0,0 +1,4 @@
+---
+title: Home Office UK
+content_url: https://www.youtube.com/watch?v=F3iMkz_NSvU
+---
diff --git a/content/ko/case-studies/huawei/huawei_featured.png b/content/ko/case-studies/huawei/huawei_featured.png
new file mode 100644
index 0000000000000..22071b4691e38
Binary files /dev/null and b/content/ko/case-studies/huawei/huawei_featured.png differ
diff --git a/content/ko/case-studies/huawei/huawei_logo.png b/content/ko/case-studies/huawei/huawei_logo.png
new file mode 100644
index 0000000000000..94361a27eb5af
Binary files /dev/null and b/content/ko/case-studies/huawei/huawei_logo.png differ
diff --git a/content/ko/case-studies/huawei/index.html b/content/ko/case-studies/huawei/index.html
new file mode 100644
index 0000000000000..29de86f5c4ef4
--- /dev/null
+++ b/content/ko/case-studies/huawei/index.html
@@ -0,0 +1,101 @@
+---
+title: Huawei Case Study
+
+case_study_styles: true
+cid: caseStudies
+css: /css/style_huawei.css
+---
+
+
CASE STUDY:
Embracing Cloud Native as a User – and a Vendor
+
+
+
+
+ Company Huawei Location Shenzhen, China Industry Telecommunications Equipment
+
+
+
+
+
+
+
+
+
Challenge
+ A multinational company that’s the largest telecommunications equipment manufacturer in the world, Huawei has more than 180,000 employees. In order to support its fast business development around the globe, Huawei has eight data centers for its internal I.T. department, which have been running 800+ applications in 100K+ VMs to serve these 180,000 users. With the rapid increase of new applications, the cost and efficiency of management and deployment of VM-based apps all became critical challenges for business agility. "It’s very much a distributed system so we found that managing all of the tasks in a more consistent way is always a challenge," says Peixin Hou, the company’s Chief Software Architect and Community Director for Open Source. "We wanted to move into a more agile and decent practice."
+
+
+
+
Solution
+ After deciding to use container technology, Huawei began moving the internal I.T. department’s applications to run on Kubernetes. So far, about 30 percent of these applications have been transferred to cloud native.
+
+
+
Impact
+ "By the end of 2016, Huawei’s internal I.T. department managed more than 4,000 nodes with tens of thousands containers using a Kubernetes-based Platform as a Service (PaaS) solution," says Hou. "The global deployment cycles decreased from a week to minutes, and the efficiency of application delivery has been improved 10 fold." For the bottom line, he says, "We also see significant operating expense spending cut, in some circumstances 20-30 percent, which we think is very helpful for our business." Given the results Huawei has had internally – and the demand it is seeing externally – the company has also built the technologies into FusionStage™, the PaaS solution it offers its customers.
+
+
+
+
+
+
+
+
+ "If you’re a vendor, in order to convince your customer, you should use it yourself. Luckily because Huawei has a lot of employees, we can demonstrate the scale of cloud we can build using this technology." - Peixin Hou, chief software architect and community director for open source
+
+
+
+
+
+
+ Huawei’s Kubernetes journey began with one developer.
+ Over two years ago, one of the engineers employed by the networking and telecommunications giant became interested in Kubernetes, the technology for managing application containers across clusters of hosts, and started contributing to its open source community. As the technology developed and the community grew, he kept telling his managers about it.
+ And as fate would have it, at the same time, Huawei was looking for a better orchestration system for its internal enterprise I.T. department, which supports every business flow processing. "We have more than 180,000 employees worldwide, and a complicated internal procedure, so probably every week this department needs to develop some new applications," says Peixin Hou, Huawei’s Chief Software Architect and Community Director for Open Source. "Very often our I.T. departments need to launch tens of thousands of containers, with tasks running across thousands of nodes across the world. It’s very much a distributed system, so we found that managing all of the tasks in a more consistent way is always a challenge."
+ In the past, Huawei had used virtual machines to encapsulate applications, but "every time when we start a VM," Hou says, "whether because it’s a new service or because it was a service that was shut down because of some abnormal node functioning, it takes a lot of time." Huawei turned to containerization, so the timing was right to try Kubernetes. It took a year to adopt that engineer’s suggestion – the process "is not overnight," says Hou – but once in use, he says, "Kubernetes basically solved most of our problems. Before, the time of deployment took about a week, now it only takes minutes. The developers are happy. That department is also quite happy."
+ Hou sees great benefits to the company that come with using this technology: "Kubernetes brings agility, scale-out capability, and DevOps practice to the cloud-based applications," he says. "It provides us with the ability to customize the scheduling architecture, which makes possible the affinity between container tasks that gives greater efficiency. It supports multiple container formats. It has extensive support for various container networking solutions and container storage."
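+
+ The affinity between container tasks that Hou mentions is expressed directly in a pod spec. As an illustrative example (the labels and image are hypothetical), this asks the scheduler to place a worker on the same node as its cache:
+
+ ```yaml
+ apiVersion: v1
+ kind: Pod
+ metadata:
+   name: worker                            # hypothetical name
+ spec:
+   affinity:
+     podAffinity:
+       requiredDuringSchedulingIgnoredDuringExecution:
+       - labelSelector:
+           matchLabels:
+             app: cache                    # co-locate with pods labeled app=cache
+         topologyKey: kubernetes.io/hostname   # "same node" is the unit of co-location
+   containers:
+   - name: worker
+     image: registry.example.com/worker:1.0    # hypothetical image
+ ```
+
+ Swapping podAffinity for podAntiAffinity spreads replicas apart instead, which is the more common pattern when the goal is resilience rather than efficiency.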
+
+
+
+
+
+ "Kubernetes basically solved most of our problems. Before, the time of deployment took about a week, now it only takes minutes. The developers are happy. That department is also quite happy."
+
+
+
+
+
+ And not least of all, there’s an impact on the bottom line. Says Hou: "We also see significant operating expense spending cut, in some circumstances 20-30 percent, which is very helpful for our business."
+ Pleased with those initial results, and seeing a demand for cloud native technologies from its customers, Huawei doubled down on Kubernetes. In the spring of 2016, the company became not only a user but also a vendor.
+ "We built the Kubernetes technologies into our solutions," says Hou, referring to Huawei’s FusionStage™ PaaS offering. "Our customers, from very big telecommunications operators to banks, love the idea of cloud native. They like Kubernetes technology. But they need to spend a lot of time to decompose their applications to turn them into microservice architecture, and as a solution provider, we help them. We’ve started to work with some Chinese banks, and we see a lot of interest from our customers like China Mobile and Deutsche Telekom."
+ "If you’re just a user, you’re just a user," adds Hou. "But if you’re a vendor, in order to even convince your customers, you should use it yourself. Luckily because Huawei has a lot of employees, we can demonstrate the scale of cloud we can build using this technology. We provide customer wisdom." While Huawei has its own private cloud, many of its customers run cross-cloud applications using Huawei’s solutions. It’s a big selling point that most of the public cloud providers now support Kubernetes. "This makes the cross-cloud transition much easier than with other solutions," says Hou.
+
+
+
+
+
+ "Our customers, from very big telecommunications operators to banks, love the idea of cloud native. They like Kubernetes technology. But they need to spend a lot of time to decompose their applications to turn them into microservice architecture, and as a solution provider, we help them."
+
+
+
+
+
+ Within Huawei itself, once his team completes the transition of the internal business procedure department to Kubernetes, Hou is looking to convince more departments to move over to the cloud native development cycle and practice. "We have a lot of software developers, so we will provide them with our platform as a service solution, our own product," he says. "We would like to see significant cuts in their iteration cycle."
+ Having overseen the initial move to Kubernetes at Huawei, Hou has advice for other companies considering the technology: "When you start to design the architecture of your application, think about cloud native, think about microservice architecture from the beginning," he says. "I think you will benefit from that."
+ But if you already have legacy applications, "start from some microservice-friendly part of those applications first, parts that are relatively easy to be decomposed into simpler pieces and are relatively lightweight," Hou says. "Don’t think from day one that within how many days I want to move the whole architecture, or move everything into microservices. Don’t put that as a kind of target. You should do it in a gradual manner. And I would say for legacy applications, not every piece would be suitable for microservice architecture. No need to force it."
+ After all, as enthusiastic as Hou is about Kubernetes at Huawei, he estimates that "in the next 10 years, maybe 80 percent of the workload can be distributed, can be run on the cloud native environments. There’s still 20 percent that’s not, but it’s fine. If we can make 80 percent of our workload really be cloud native, to have agility, it’s a much better world at the end of the day."
+
+
+
+
+
+
+ "In the next 10 years, maybe 80 percent of the workload can be distributed, can be run on the cloud native environments. There’s still 20 percent that’s not, but it’s fine. If we can make 80 percent of our workload really be cloud native, to have agility, it’s a much better world at the end of the day."
+
+
+
+
+
+ In the nearer future, Hou is looking forward to new features that are being developed around Kubernetes, not least of all the ones that Huawei is contributing to. Huawei engineers have worked on the federation feature (which puts multiple Kubernetes clusters in a single framework to be managed seamlessly), scheduling, container networking and storage, and a just-announced technology called Container Ops, which is a DevOps pipeline engine. "This will put every DevOps job into a container," he explains. "And then this container mechanism is running using Kubernetes, but is also used to test Kubernetes. With that mechanism, we can make the containerized DevOps jobs be created, shared and managed much more easily than before."
+ Still, Hou sees this technology as only halfway to its full potential. First and foremost, he’d like to expand the scale it can orchestrate, which is important for supersized companies like Huawei – as well as some of its customers.
+ Hou proudly notes that two years after that first Huawei engineer became a contributor to and evangelist for Kubernetes, Huawei is now a top contributor to the community. "We’ve learned that the more you contribute to the community," he says, "the more you get back."
+
+
+
diff --git a/content/ko/case-studies/ibm/ibm_featured_logo.png b/content/ko/case-studies/ibm/ibm_featured_logo.png
new file mode 100644
index 0000000000000..b819876bf790a
Binary files /dev/null and b/content/ko/case-studies/ibm/ibm_featured_logo.png differ
diff --git a/content/ko/case-studies/ibm/ibm_featured_logo.svg b/content/ko/case-studies/ibm/ibm_featured_logo.svg
new file mode 100644
index 0000000000000..577d8e97d960d
--- /dev/null
+++ b/content/ko/case-studies/ibm/ibm_featured_logo.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/content/ko/case-studies/ibm/index.html b/content/ko/case-studies/ibm/index.html
new file mode 100644
index 0000000000000..4f6927416c1a7
--- /dev/null
+++ b/content/ko/case-studies/ibm/index.html
@@ -0,0 +1,111 @@
+---
+title: IBM Case Study
+
+linkTitle: IBM
+case_study_styles: true
+cid: caseStudies
+css: /css/style_case_studies.css
+logo: ibm_featured_logo.svg
+featured: true
+weight: 2
+quote: >
+ We see CNCF as a safe haven for cloud native open source, providing stability, longevity, and expected maintenance for member projects—no matter the originating vendor or project.
+---
+
+
+
CASE STUDY:
Building an Image Trust Service on Kubernetes with Notary and TUF
+
+
+
+
+ Company IBM Location Armonk, New York Industry Cloud Computing
+
+
+
+
+
+
+
Challenge
+ IBM Cloud offers public, private, and hybrid cloud functionality across a diverse set of runtimes from its OpenWhisk-based function as a service (FaaS) offering, managed Kubernetes and containers, to Cloud Foundry platform as a service (PaaS). These runtimes are combined with the power of the company’s enterprise technologies, such as MQ and DB2, its modern artificial intelligence (AI) Watson, and data analytics services. Users of IBM Cloud can exploit capabilities from more than 170 different cloud native services in its catalog, including IBM’s Weather Company API and data services. In the latter part of 2017, the IBM Cloud Container Registry team wanted to build out an image trust service.
+
+
Solution
+ The work on this new service culminated with its public availability in the IBM Cloud in February 2018. The image trust service, called Portieris, is fully based on the Cloud Native Computing Foundation (CNCF) open source project Notary, according to Michael Hough, a software developer with the IBM Cloud Container Registry team. Portieris is a Kubernetes admission controller for enforcing content trust. Users can create image security policies for each Kubernetes namespace, or at the cluster level, and enforce different levels of trust for different images. Portieris is a key part of IBM’s trust story, since it makes it possible for users to consume the company’s Notary offering from within their IBM Cloud Kubernetes Service (IKS) clusters: the Notary server runs in IBM’s cloud, and Portieris runs inside the IKS cluster. This enables users to have their IKS cluster verify that the image they’re loading containers from contains exactly what they expect it to, with Portieris applying that verification.
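+
+ As an illustration of the kind of policy Portieris enforces, a namespace-scoped ImagePolicy can require valid Notary signatures before a workload is admitted. This sketch follows the resource shape in the Portieris documentation of that era; the exact API group and fields may differ between versions, and the names and repository here are hypothetical:
+
+ ```yaml
+ apiVersion: securityenforcement.admission.cloud.ibm.com/v1beta1
+ kind: ImagePolicy
+ metadata:
+   name: signed-images-only                # hypothetical name
+   namespace: production                   # policies can be scoped per namespace
+ spec:
+   repositories:
+   - name: "registry.ng.bluemix.net/mynamespace/*"   # hypothetical repository
+     policy:
+       trust:
+         enabled: true                     # deploy only images with valid Notary signatures
+ ```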
+
+
+
+
+
+
+
Impact
+ IBM's intention in offering a managed Kubernetes container service and image registry is to provide a fully secure end-to-end platform for its enterprise customers. "Image signing is one key part of that offering, and our container registry team saw Notary as the de facto way to implement that capability in the current Docker and container ecosystem," Hough says. The company had not been offering image signing before, and Notary is the tool it used to implement that capability. "We had a multi-tenant Docker Registry with private image hosting," Hough says. "The Docker Registry uses hashes to ensure that image content is correct, and data is encrypted both in flight and at rest. But it does not provide any guarantees of who pushed an image. We used Notary to enable users to sign images in their private registry namespaces if they so choose."
+
+
+
+
+
+
+
+
+ "We see CNCF as a safe haven for cloud native open source, providing stability, longevity, and expected maintenance for member projects—no matter the originating vendor or project." - Michael Hough, a software developer with the IBM Container Registry team
+
+
+
+
+
Docker had already created the Notary project as an implementation of The Update Framework (TUF), and this implementation of TUF provided the capabilities for Docker Content Trust.
"After contribution to CNCF of both TUF and Notary, we perceived that it was becoming the de facto standard for image signing in the container ecosystem", says Michael Hough, a software developer with the IBM Cloud Container Registry team.
+
+The key reason for selecting Notary was that it was already compatible with the existing authentication stack IBM’s container registry was using. So was the design of TUF, which does not require the registry team to enter the business of key management. Both of these were "attractive design decisions that confirmed our choice of Notary," he says.
+
+The introduction of Notary to implement image signing capability in IBM Cloud encourages increased security across IBM's cloud platform, "where we expect it will include both the signing of official IBM images as well as expected use by security-conscious enterprise customers," Hough says. "When combined with security policy implementations, we expect an increased use of deployment policies in CI/CD pipelines that allow for fine-grained control of service deployment based on image signers."
+The availability of image signing "is a huge benefit to security-conscious customers who require this level of image provenance and security," Hough says. "With our IBM Cloud Kubernetes as-a-service offering and the admission controller we have made available, it allows both IBM services as well as customers of the IBM public cloud to use security policies to control service deployment."
+
+
+
+
+
+
+ "Image signing is one key part of our Kubernetes container service offering, and our container registry team saw Notary as the de facto way to implement that capability in the current Docker and container ecosystem"
- Michael Hough, a software developer with the IBM Cloud Container Registry team
+
+
+
+
+ Now that the Notary-implemented service is generally available in IBM’s public cloud as a component of its existing IBM Cloud Container Registry, it is deployed as a highly available service across five IBM Cloud regions. This high-availability deployment has three instances across two zones in each of the five regions, load balanced with failover support. "We have also deployed it with end-to-end TLS support through to our back-end IBM Cloudant persistence storage service," Hough says.
+
+ The IBM team has created and open sourced a Kubernetes admission controller called Portieris, which uses Notary signing information combined with customer-defined security policies to control image deployment into their cluster. "We are hoping to drive adoption of Portieris through its use of our Notary offering," Hough says.
+
+ IBM has been a key player in the creation and support of open source foundations, including CNCF. Todd Moore, IBM's vice president of Open Technology, is the current CNCF governing board chair and a number of IBMers are active across many of the CNCF member projects.
+
+
+
+
+
+
+
+ "With our IBM Cloud Kubernetes as-a-service offering and the admission controller we have made available, it allows both IBM services as well as customers of the IBM public cloud to use security policies to control service deployment."
- Michael Hough, a software developer with the IBM Cloud Container Registry team
+
+
+
+
+
+
+ "Given that, we see CNCF as a safe haven for cloud native open source, providing stability, longevity, and expected maintenance for member projects—no matter the originating vendor or project," Hough says. Because the entire cloud native world is a fast-moving area with many competing vendors and solutions, "we see the CNCF model as an arbiter of openness and fair play across the ecosystem," he says.
+
+With both TUF and Notary as part of CNCF, IBM expects there to be standardization around these capabilities beyond just de facto standards for signing and provenance. IBM is determined not simply to consume Notary but also to contribute to the open source project where applicable. "IBMers have contributed a CouchDB backend to support our use of IBM Cloudant as the persistent store, and are working on generalizing the pkcs11 provider to allow support of security hardware devices beyond YubiKey," Hough says.
+
+
+
+
+
+ "There are new projects addressing these challenges, including within CNCF. We will definitely be following these advancements with interest. We found the Notary community to be an active and friendly community open to changes, such as our addition of a CouchDB backend for persistent storage."
- Michael Hough, a software developer with the IBM Cloud Container Registry team
+
+What advice does Hough have for other companies that are looking to deploy Notary or a cloud native infrastructure?
+
+"While this is true for many areas of cloud native infrastructure software, we found that a high-availability, multi-region deployment of Notary requires a solid implementation to handle certificate management and rotation," he says. "There are new projects addressing these challenges, including within CNCF. We will definitely be following these advancements with interest. We found the Notary community to be an active and friendly community open to changes, such as our addition of a CouchDB backend for persistent storage."
+
+
+
+
diff --git a/content/ko/case-studies/ing/index.html b/content/ko/case-studies/ing/index.html
new file mode 100644
index 0000000000000..6e2648a455139
--- /dev/null
+++ b/content/ko/case-studies/ing/index.html
@@ -0,0 +1,99 @@
+---
+title: ING Case Study
+linkTitle: ING
+case_study_styles: true
+cid: caseStudies
+weight: 50
+featured: true
+css: /css/style_case_studies.css
+quote: >
+ The big cloud native promise to our business is the ability to go from idea to production within 48 hours. We are some years away from this, but that’s quite feasible to us.
+---
+
+
+
+
CASE STUDY:
Driving Banking Innovation with Cloud Native
+
+
+
+
+
+ Company ING Location Amsterdam, Netherlands
+ Industry Finance
+
+
+
+
+
+
+
Challenge
+ After undergoing an agile transformation, ING realized it needed a standardized platform to support the work their developers were doing. "Our DevOps teams got empowered to be autonomous," says Infrastructure Architect Thijs Ebbers. "It has benefits; you get all kinds of ideas. But a lot of teams are going to devise the same wheel. Teams started tinkering with Docker, Docker Swarm, Kubernetes, Mesos. Well, it’s not really useful for a company to have one hundred wheels, instead of one good wheel."
+
+
+
Solution
+ Using Kubernetes for container orchestration and Docker for containerization, the ING team began building an internal public cloud for its CI/CD pipeline and greenfield applications. The pipeline, which has been built on Mesos Marathon, will be migrated onto Kubernetes. The bank-account management app Yolt, serving the U.K. market (and soon France and Italy), is already live, hosted on a Kubernetes framework. At least two greenfield projects currently on the Kubernetes framework will be going into production later this year. By the end of 2018, the company plans to have converted a number of APIs used in the banking customer experience to cloud native APIs and host these on the Kubernetes-based platform.
+
+
+
+
+
+
+
Impact
+ "Cloud native technologies are helping our speed, from getting an application to test to acceptance to production," says Infrastructure Architect Onno Van der Voort. "If you walk around ING now, you see all these DevOps teams, doing stand-ups, demoing. They try to get new functionality out there really fast. We held a hackathon for one of our existing components and basically converted it to cloud native within 2.5 days, though of course the tail takes more time before code is fully production ready."
+
+
+
+
+
+
+ "The big cloud native promise to our business is the ability to go from idea to production within 48 hours. We are some years away from this, but that’s quite feasible to us."
+
— Thijs Ebbers, Infrastructure Architect, ING
+
+
+
+
+
ING has long embraced innovation in banking, launching the internet-based ING Direct in 1997.
In that same spirit, the company underwent an agile transformation a few years ago. "Our DevOps teams got empowered to be autonomous," says Infrastructure Architect Thijs Ebbers. "It has benefits; you get all kinds of ideas. But a lot of teams are going to devise the same wheel. Teams started tinkering with Docker, Docker Swarm, Kubernetes, Mesos. Well, it’s not really useful for a company to have one hundred wheels, instead of one good wheel."
+ Looking to standardize the deployment process within the company’s strict security guidelines, the team looked at several solutions and found that in the past year, "Kubernetes won the container management framework wars," says Ebbers. "We decided to standardize ING on a Kubernetes framework." Everything is run on premise due to banking regulations, he adds, but "we will be building an internal public cloud. We are trying to get on par with what public clouds are doing. That’s one of the reasons we got Kubernetes."
+ They also embraced Docker to address a major pain point in ING’s CI/CD pipeline. Before containerization, "Every development team had to order a VM, and it was quite a heavy delivery model for them," says Infrastructure Architect Onno Van der Voort. "Another use case for containerization is when the application travels through the pipeline, they fire up Docker containers to do test work against the applications and after they’ve done the work, the containers get killed again."
+
+
+
+
+
+ "We decided to standardize ING on a Kubernetes framework." Everything is run on premise due to banking regulations, he adds, but "we will be building an internal public cloud. We are trying to get on par with what public clouds are doing. That’s one of the reasons we got Kubernetes."
+
— Thijs Ebbers, Infrastructure Architect, ING
+
+
+
+
+ Because of industry regulations, applications are only allowed to go through the pipeline, where compliance is enforced, rather than be deployed directly into a container. "We have to run the complete platform of services we need, many routing from different places," says Van der Voort. "We need this Kubernetes framework for deploying the containers, with all those components, monitoring, logging. It’s complex." For that reason, ING has chosen to start on the OpenShift Origin Kubernetes distribution.
+ Already, "cloud native technologies are helping our speed, from getting an application to test to acceptance to production," says Van der Voort. "If you walk around ING now, you see all these DevOps teams, doing stand-ups, demoing. They try to get new functionality out there really fast. We held a hackathon for one of our existing components and basically converted it to cloud native within 2.5 days, though of course the tail takes more time before code is fully production ready."
+ The pipeline, which has been built on Mesos Marathon, will be migrated onto Kubernetes. Some legacy applications are also being rewritten as cloud native in order to run on the framework. At least two smaller greenfield projects built on Kubernetes will go into production this year. By the end of 2018, the company plans to have converted a number of APIs used in the banking customer experience to cloud native APIs and host these on the Kubernetes-based platform.
+
+
+
+
+
+ "We have to run the complete platform of services we need, many routing from different places. We need this Kubernetes framework for deploying the containers, with all those components, monitoring, logging. It’s complex."
— Onno Van der Voort, Infrastructure Architect, ING
+
+
+
+
+
+ The team, however, doesn’t see the bank’s back-end systems going onto the Kubernetes platform. "Our philosophy is it only makes sense to move things to cloud if they are cloud native," says Van der Voort. "If you have traditional architecture, build traditional patterns, it doesn’t hold any value to go to the cloud." Adds Cloud Platform Architect Alfonso Fernandez-Barandiaran: "ING has a strategy about where we will go, in order to improve our agility. So it’s not about how cool this technology is, it’s about finding the right technology and the right approach."
+ The Kubernetes framework will be hosting some greenfield projects that are high priority for ING: applications the company is developing in response to PSD2, the European Commission directive requiring more innovative online and mobile payments that went into effect at the beginning of 2018. For example, a bank-account management app called Yolt, serving the U.K. market (and soon France and Italy), was built on a Kubernetes platform and has gone into production. ING is also developing blockchain-enabled applications that will live on the Kubernetes platform. "We’ve been contacted by a lot of development teams that have ideas with what they want to do with containers," says Ebbers.
+
+
+
+
+
+Even with the particular requirements that come in banking, ING has managed to take a lead in technology and innovation. "Every time we have constraints, we look for maybe a better way that we can use this technology."
— Alfonso Fernandez-Barandiaran, Cloud Platform Architect, ING
+
+
+
+ Even with the particular requirements that come in banking, ING has managed to take a lead in technology and innovation. "Every time we have constraints, we look for maybe a better way that we can use this technology," says Fernandez-Barandiaran.
+ The results, after all, are worth the effort. "The big cloud native promise to our business is the ability to go from idea to production within 48 hours," says Ebbers. "That would require all these projects to be mature. We are some years away from this, but that’s quite feasible to us."
+
+
+
+
diff --git a/content/ko/case-studies/ing/ing_featured_logo.png b/content/ko/case-studies/ing/ing_featured_logo.png
new file mode 100644
index 0000000000000..f6d4489715aa4
Binary files /dev/null and b/content/ko/case-studies/ing/ing_featured_logo.png differ
diff --git a/content/ko/case-studies/jd/index.html b/content/ko/case-studies/jd/index.html
new file mode 100644
index 0000000000000..ee61da9d1438e
--- /dev/null
+++ b/content/ko/case-studies/jd/index.html
@@ -0,0 +1,4 @@
+---
+title: JD.COM
+content_url: https://kubernetes.io/blog/2017/02/inside-jd-com-shift-to-kubernetes-from-openstack
+---
\ No newline at end of file
diff --git a/content/ko/case-studies/jd/jd_logo.png b/content/ko/case-studies/jd/jd_logo.png
new file mode 100644
index 0000000000000..58ef32a3221e4
Binary files /dev/null and b/content/ko/case-studies/jd/jd_logo.png differ
diff --git a/content/ko/case-studies/liveperson/index.html b/content/ko/case-studies/liveperson/index.html
new file mode 100644
index 0000000000000..0cadb0f274073
--- /dev/null
+++ b/content/ko/case-studies/liveperson/index.html
@@ -0,0 +1,4 @@
+---
+title: LivePerson
+content_url: https://www.openstack.org/videos/video/running-kubernetes-on-openstack-at-liveperson
+---
\ No newline at end of file
diff --git a/content/ko/case-studies/liveperson/liveperson_logo.png b/content/ko/case-studies/liveperson/liveperson_logo.png
new file mode 100644
index 0000000000000..b7e63d94f72d6
Binary files /dev/null and b/content/ko/case-studies/liveperson/liveperson_logo.png differ
diff --git a/content/ko/case-studies/monzo/index.html b/content/ko/case-studies/monzo/index.html
new file mode 100644
index 0000000000000..99d4f35934145
--- /dev/null
+++ b/content/ko/case-studies/monzo/index.html
@@ -0,0 +1,4 @@
+---
+title: Monzo
+content_url: https://youtu.be/YkOY7DgXKyw
+---
\ No newline at end of file
diff --git a/content/ko/case-studies/monzo/monzo_logo.png b/content/ko/case-studies/monzo/monzo_logo.png
new file mode 100644
index 0000000000000..854409d17eabb
Binary files /dev/null and b/content/ko/case-studies/monzo/monzo_logo.png differ
diff --git a/content/ko/case-studies/naic/index.html b/content/ko/case-studies/naic/index.html
new file mode 100644
index 0000000000000..daf4201e5dd65
--- /dev/null
+++ b/content/ko/case-studies/naic/index.html
@@ -0,0 +1,116 @@
+---
+title: NAIC Case Study
+
+linkTitle: NAIC
+case_study_styles: true
+cid: caseStudies
+css: /css/style_case_studies.css
+logo: naic_featured_logo.png
+featured: true
+weight: 3
+quote: >
+ Our culture and technology transition is a strategy embraced by our top leaders. It has already proven successful by allowing us to accelerate our value pipeline by more than double while decreasing our costs by more than half.
+---
+
+
+
CASE STUDY:
A Culture and Technology Transition Enabled by Kubernetes
+
+
+
+
+ Company National Association of Insurance Commissioners (NAIC) Location Washington, DC Industry Regulatory
+
+
+
+
+
+
+
Challenge
+ The National Association of Insurance Commissioners (NAIC), the U.S. standard-setting and regulatory support organization, was looking for a way to deliver new services faster to provide more value for members and staff. It also needed greater agility to improve productivity internally.
+
+
Solution
+ Beginning in 2016, they started using Cloud Native Computing Foundation (CNCF) tools such as Prometheus. NAIC began hosting internal systems and development systems on Kubernetes at the beginning of 2018, as part of a broad move toward the public cloud. "Our culture and technology transition is a strategy embraced by our top leaders," says Dan Barker, Chief Enterprise Architect. "It has already proven successful by allowing us to accelerate our value pipeline by more than double while decreasing our costs by more than half. We are also seeing customer satisfaction increase as we add more and more applications to these new technologies."
+
+
+
+
+
+
+
+
+
Impact
+ Leveraging Kubernetes, "our development teams can create rapid prototypes far faster than they used to," Barker says. Applications running on Kubernetes are more resilient than those running in other environments. The deployment of open source solutions is helping influence company culture, as NAIC becomes a more open and transparent organization.
+
+ "We completed a small prototype in two days that would have previously taken at least a month," Barker says. Resiliency is currently measured in how much downtime systems have. "They’ve basically had none, and the occasional issue is remedied in minutes," he says.
+
+
+
+
+
+
+
+
+ "Our culture and technology transition is a strategy embraced by our top leaders. It has already proven successful by allowing us to accelerate our value pipeline by more than double while decreasing our costs by more than half. We are also seeing customer satisfaction increase as we add more and more applications to these new technologies." - Dan Barker, Chief Enterprise Architect, NAIC
+
+
+
+
+ NAIC—which was created and overseen by the chief insurance regulators from the 50 states, the District of Columbia and five U.S. territories—provides a means through which state insurance regulators establish standards and best practices, conduct peer reviews, and coordinate their regulatory oversight. Their staff supports these efforts and represents the collective views of regulators in the United States and internationally. NAIC members, together with the organization’s central resources, form the national system of state-based insurance regulation in the United States.
+The organization has been using the cloud for years, and wanted to find more ways to quickly deliver new services that provide more value for members and staff. They looked to Kubernetes for a solution. Within NAIC, several groups are leveraging Kubernetes, one being the Platform Engineering Team. "The team building out these tools are not only deploying and operating Kubernetes, but they’re also using them," Barker says. "In fact, we’re using GitLab to deploy Kubernetes with a pipeline using kops. This team was created from developers, operators, and quality engineers from across the company, so their jobs have changed quite a bit."
+In addition, NAIC is onboarding teams to the new platform, and those teams have seen a lot of change in how they work and what they can do. "They now have more power in creating their own infrastructure and deploying their own applications," Barker says. They also use pipelines to facilitate their currently manual processes. NAIC has consumers who are using GitLab heavily, and they’re starting to use Kubernetes to deploy simple applications that help their internal processes.
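+The GitLab-driven kops pipeline Barker describes isn’t shown in the case study, but a minimal sketch of what such a CI job could look like follows; the stage name, runner image, cluster name, and state-store bucket are illustrative assumptions, not NAIC’s actual configuration.
+
+# Hypothetical .gitlab-ci.yml fragment in the spirit of NAIC’s
+# GitLab-plus-kops deployment pipeline. All names are placeholders.
+stages:
+  - cluster
+
+update-cluster:
+  stage: cluster
+  image: example/ci-runner-with-kops:latest      # any image with kops installed; assumption
+  variables:
+    KOPS_STATE_STORE: s3://example-kops-state    # placeholder S3 state store
+    CLUSTER_NAME: k8s.example.com                # placeholder cluster name
+  script:
+    - kops replace -f cluster-spec.yaml --name "$CLUSTER_NAME" --force
+    - kops update cluster --name "$CLUSTER_NAME" --yes
+    - kops validate cluster --name "$CLUSTER_NAME" --wait 10m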
+
+
+
+
+
+
+ "In our experience, vendor lock-in and tooling that is highly specific results in less resilient technology with fewer minds working to solve problems and grow the community." - Dan Barker, Chief Enterprise Architect, NAIC
+
+
+
+
+ "We needed greater agility to enable our own productivity internally," he says. "We decided it was right for us to move everything to the public cloud [Amazon Web Services] to help with that process and be able to access many of the native tools that allows us to move faster by not needing to build everything."
+The NAIC also wanted to be cloud-agnostic, "and Kubernetes helps with this for our compute layer," Barker says. "Compute is pretty standard across the clouds, and now we can take advantage of any of them while getting all of the other features Kubernetes offers."
+The NAIC currently hosts internal systems and development systems on Kubernetes, and has already seen how impactful it can be. "Our development teams can create rapid prototypes in minutes instead of weeks," Barker says. "This recently happened with an internal tool that had no measurable wait time on the infrastructure. It was solely development bound. There is now a central shared resource that lives in AWS, which means it can grow as needed."
+The native integrations into Kubernetes at NAIC have made it easy to write code and have it running in minutes instead of weeks. Applications running on Kubernetes have also proven to be more resilient than those running in other environments. "We even have teams using this to create more internal tools to help with communication or automating some of their current tasks," Barker says.
+
+"We knew that Kubernetes had become the de facto standard for container orchestration," he says. "Two major factors for selecting this were the three major cloud vendors hosting their own versions and having it hosted in a neutral party as fully open source."
+
+As for other CNCF projects, NAIC is using Prometheus on a small scale and hopes to continue using it moving forward because of the seamless integration with Kubernetes. The Association is also considering gRPC as its internal communications standard, Envoy in conjunction with Istio for service mesh, OpenTracing and Jaeger for tracing aggregation, and Fluentd with its Elasticsearch cluster.
+
+
+
+
+
+ "We knew that Kubernetes had become the de facto standard for container orchestration. Two major factors for selecting this were the three major cloud vendors hosting their own versions and having it hosted in a neutral party as fully open source."
- Dan Barker, Chief Enterprise Architect, NAIC
+
+
+
+
+
+
+
+The open governance and broad industry participation in CNCF provided a comfort level with the technology, Barker says. "We also see it as helping to influence our own company culture," he says. "We’re moving to be a more open and transparent company, and we are encouraging our staff to get involved with the different working groups and codebases. We recently became CNCF members to help further our commitment to community contribution and transparency."
+Factors such as vendor-neutrality and cross-industry investment were important in the selection. "In our experience, vendor lock-in and tooling that is highly specific results in less resilient technology with fewer minds working to solve problems and grow the community," Barker says.
+NAIC is a largely Oracle shop, Barker says, and has been running mostly Java on JBoss. "However, we have years of history with other applications," he says. "Some of these have been migrated by completely rewriting the application, while others are just being modified slightly to fit into this new paradigm."
+Running on AWS cloud, the Association has not specifically taken a microservices approach. "We are moving to microservices where practical, but we haven’t found that it’s a necessity to operate them within Kubernetes," Barker says.
+All of its databases are currently running within public cloud services, but they have explored eventually running those in Kubernetes, as it makes sense. "We’re doing this to get more reuse from common components and to limit our failure domains to something more manageable and observable," Barker says.
+
+
+
+
+
+
+ "We have been able to move much faster at lower cost than we were able to in the past," Barker says. "We were able to complete one of our projects in a year, when the previous version took over two years. And the new project cost $500,000 while the original required $3 million, and with fewer defects. We are also able to push out new features much faster."
- Dan Barker, Chief Enterprise Architect, NAIC
+
+
+
+
+
+NAIC has seen a significant business impact from its efforts. "We have been able to move much faster at lower cost than we were able to in the past," Barker says. "We were able to complete one of our projects in a year, when the previous version took over two years. And the new project cost $500,000 while the original required $3 million, and with fewer defects. We are also able to push out new features much faster."
+He says the organization is moving toward continuous deployment "because the business case makes sense. The research is becoming very hard to argue with. We want to reduce our batch sizes and optimize on delivering value to customers and not feature count. This is requiring a larger cultural shift than just a technology shift."
+NAIC is "becoming more open and transparent, as well as more resilient to failure," Barker says. "Even our customers are wanting more and more of this and trying to figure out how they can work with us to accomplish our mutual goals faster. Members of the insurance industry have reached out so that we can better learn together and grow as an industry."
+
+
+
+
diff --git a/content/ko/case-studies/naic/naic_featured_logo.png b/content/ko/case-studies/naic/naic_featured_logo.png
new file mode 100644
index 0000000000000..f2497114bf40a
Binary files /dev/null and b/content/ko/case-studies/naic/naic_featured_logo.png differ
diff --git a/content/ko/case-studies/newyorktimes/index.html b/content/ko/case-studies/newyorktimes/index.html
new file mode 100644
index 0000000000000..c65b5fe88355f
--- /dev/null
+++ b/content/ko/case-studies/newyorktimes/index.html
@@ -0,0 +1,108 @@
+---
+title: New York Times Case Study
+case_study_styles: true
+cid: caseStudies
+css: /css/style_case_studies.css
+---
+
+
+
CASE STUDY:
The New York Times: From Print to the Web to Cloud Native
+
+
+
+
+
+
+ Company New York Times Location New York, N.Y.
+ Industry News Media
+
+
+
+
+
+
+
Challenge
+ When the company decided a few years ago to move out of its data centers, its first deployments on the public cloud were smaller, less critical applications managed on virtual machines. "We started building more and more tools, and at some point we realized that we were doing a disservice by treating Amazon as another data center," says Deep Kapadia, Executive Director, Engineering at The New York Times. Kapadia was tapped to lead a Delivery Engineering Team that would "design for the abstractions that cloud providers offer us."
+
+
+
Solution
+ The team decided to use Google Cloud Platform and its Kubernetes-as-a-service offering, GKE.
+
+
+
+
+
+
Impact
+ Speed of delivery increased. Some of the legacy VM-based deployments took 45 minutes; with Kubernetes, that time was "just a few seconds to a couple of minutes," says Engineering Manager Brian Balser. Adds Site Reliability Engineer Tony Li: "Teams that used to deploy on weekly schedules or had to coordinate schedules with the infrastructure team now deploy their updates independently, and can do it daily when necessary." Adopting Cloud Native Computing Foundation technologies allows for a more unified approach to deployment across the engineering staff, and portability for the company.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ "I think once you get over the initial hump, things get a lot easier and actually a lot faster." — Deep Kapadia, Executive Director, Engineering at The New York Times
+
+
+
+
+ Founded in 1851 and known as the newspaper of record, The New York Times is a digital pioneer: Its first website launched in 1996, before Google even existed. After deciding a few years ago to move out of its private data centers, including one located in the pricey real estate of Manhattan, the company recently took another step into the future by going cloud native.
+ At first, the infrastructure team "managed the virtual machines in the Amazon cloud, and they deployed more critical applications in our data centers and the less critical ones on AWS as an experiment," says Deep Kapadia, Executive Director, Engineering at The New York Times. "We started building more and more tools, and at some point we realized that we were doing a disservice by treating Amazon as another data center."
+ To get the most out of the cloud, Kapadia was tapped to lead a new Delivery Engineering Team that would "design for the abstractions that cloud providers offer us." In mid-2016, they began looking at the Google Cloud Platform and its Kubernetes-as-a-service offering, GKE.
+ At the time, says team member Tony Li, a Site Reliability Engineer, "We had some internal tooling that attempted to do what Kubernetes does for containers, but for VMs. We asked why are we building and maintaining these tools ourselves?"
+ In early 2017, the first production application—the nytimes.com mobile homepage—began running on Kubernetes, serving just 1% of the traffic. Today, almost 100% of the nytimes.com site’s end-user facing applications run on GCP, with the majority on Kubernetes.
+
+
+
+
+
+ "We had some internal tooling that attempted to do what Kubernetes does for containers, but for VMs. We asked why are we building and maintaining these tools ourselves?"
+
+
+
+
+
+ The team found that the speed of delivery was immediately impacted. "Deploying Docker images versus spinning up VMs was quite a lot faster," says Engineering Manager Brian Balser. Some of the legacy VM-based deployments took 45 minutes; with Kubernetes, that time was "just a few seconds to a couple of minutes."
+ The plan is to get as much as possible, not just the website, running on Kubernetes, and beyond that, moving toward serverless deployments. For instance, The New York Times crossword app was built on Google App Engine, which has been the main platform for the company’s experimentation with serverless. "The hardest part was getting the engineers over the hurdle of how little they had to do," Chief Technology Officer Nick Rockwell recently told The CTO Advisor. "Our experience has been very, very good. We have invested a lot of work into deploying apps on container services, and I’m really excited about experimenting with deploying those on App Engine Flex and AWS Fargate and seeing how that feels, because that’s a great migration path."
+ There are some exceptions to the move to cloud native, of course. "We have the print publishing business as well," says Kapadia. "A lot of that is definitely not going down the cloud-native path because they’re using vendor software and even special machinery that prints the physical paper. But even those teams are looking at things like App Engine and Kubernetes if they can."
+ Kapadia acknowledges that there was a steep learning curve for some engineers, but "I think once you get over the initial hump, things get a lot easier and actually a lot faster."
+
+
+
+
+
+ "Right now, every team is running a small Kubernetes cluster, but it would be nice if we could all live in a larger ecosystem," says Kapadia. "Then we can harness the power of things like service mesh proxies that can actually do a lot of instrumentation between microservices, or service-to-service orchestration. Those are the new things that we want to experiment with as we go forward."
+
+
+
+
+
+ As teams at The New York Times started sharing their own best practices with each other, "We’re no longer the bottleneck for figuring out certain things," Kapadia says. "Most of the infrastructure and systems were managed by a centralized function. We’ve sort of blown that up, partly because Google and Amazon have tools that allow us to do that. We provide teams with complete ownership of their Google Cloud Platform projects, and give them a set of sensible defaults or standards. We let them know, ‘If this works for you as is, great! If not, come talk to us and we’ll figure out how to make it work for you.’"
+ As a result, "It’s really allowed teams to move at a much more rapid pace than they were able to in the past," says Kapadia. Adds Li: "The use of GKE means each team can get their own compute cluster, reducing the number of individual instances they have to care about since developers can treat the cluster as a whole. Because the ticket-based workflow was removed from requesting resources and connections, developers can just call an API to get what they want. Teams that used to deploy on weekly schedules or had to coordinate schedules with the infrastructure team now deploy their updates independently, and can do it daily when necessary."
+ Another benefit to adopting Kubernetes: allowing for a more unified approach to deployment across the engineering staff. "Before, many teams were building their own tools for deployment," says Balser. With Kubernetes—as well as the other CNCF projects The New York Times uses, including Fluentd to collect logs for all of its AWS servers, gRPC for its Publishing Pipeline, Prometheus, and Envoy—"we can benefit from the advances that each of these technologies make, instead of trying to catch up."
+
+
+
+
+
+
+Li calls the Cloud Native Computing Foundation’s projects "a northern star that we can all look at and follow."
+
+
+
+
+ These open-source technologies have given the company more portability. "CNCF has enabled us to follow an industry standard," says Kapadia. "It allows us to think about whether we want to move away from our current service providers. Most of our applications are connected to Fluentd. If we wish to switch our logging provider from provider A to provider B we can do that. We’re running Kubernetes in GCP today, but if we want to run it in Amazon or Azure, we could potentially look into that as well."
+ Li calls the Cloud Native Computing Foundation’s projects "a northern star that we can all look at and follow." Led by that star, the team is looking ahead to a year of onboarding the remaining half of the 40 or so product engineering teams to extract even more value out of the technology. "Right now, every team is running a small Kubernetes cluster, but it would be nice if we could all live in a larger ecosystem," says Kapadia. "Then we can harness the power of things like service mesh proxies that can actually do a lot of instrumentation between microservices, or service-to-service orchestration. Those are the new things that we want to experiment with as we go forward."
+
+
+
diff --git a/content/ko/case-studies/newyorktimes/newyorktimes_featured.png b/content/ko/case-studies/newyorktimes/newyorktimes_featured.png
new file mode 100644
index 0000000000000..fad0927883a93
Binary files /dev/null and b/content/ko/case-studies/newyorktimes/newyorktimes_featured.png differ
diff --git a/content/ko/case-studies/newyorktimes/newyorktimes_logo.png b/content/ko/case-studies/newyorktimes/newyorktimes_logo.png
new file mode 100644
index 0000000000000..693a742c3ebcb
Binary files /dev/null and b/content/ko/case-studies/newyorktimes/newyorktimes_logo.png differ
diff --git a/content/ko/case-studies/nordstrom/index.html b/content/ko/case-studies/nordstrom/index.html
new file mode 100644
index 0000000000000..5385c2473d481
--- /dev/null
+++ b/content/ko/case-studies/nordstrom/index.html
@@ -0,0 +1,110 @@
+---
+title: Nordstrom Case Study
+case_study_styles: true
+cid: caseStudies
+css: /css/style_case_studies.css
+---
+
+
+
CASE STUDY:
Finding Millions in Potential Savings in a Tough Retail Climate
+
+
+
+
+
+
+
+ Company Nordstrom Location Seattle, Washington Industry Retail
+
+
+
+
+
+
+
Challenge
+ Nordstrom wanted to increase the efficiency and speed of its technology operations, which includes the Nordstrom.com e-commerce site. At the same time, Nordstrom Technology was looking for ways to tighten its technology operational costs.
+
+
+
Solution
+ After embracing a DevOps transformation and launching a continuous integration/continuous deployment (CI/CD) project four years ago, the company reduced its deployment time from three months to 30 minutes. But they wanted to go even faster across environments, so they began their cloud native journey, adopting Docker containers orchestrated with Kubernetes.
+
+
+
+
+
+
+
+
+
Impact
+ Nordstrom Technology developers using Kubernetes now deploy faster and can "just focus on writing applications," says Dhawal Patel, a senior engineer on the team building a Kubernetes enterprise platform for Nordstrom. Furthermore, the team has increased Ops efficiency, improving CPU utilization by 5x to 12x depending on the workload. "We run thousands of virtual machines (VMs), but aren’t effectively using all those resources," says Patel. "With Kubernetes, without even trying to make our cluster efficient, we are currently at a 10x increase."
+
+
+
+
+
+
+ "We are always looking for ways to optimize and provide more value through technology. With Kubernetes we are showcasing two types of efficiency that we can bring: Dev efficiency and Ops efficiency. It’s a win-win."
+ — Dhawal Patel, senior engineer at Nordstrom
+
+
+
+
+ When Dhawal Patel joined Nordstrom five years ago as an application developer for the retailer’s website, he realized there was an opportunity to help speed up development cycles.
+
+ In those early DevOps days, Nordstrom Technology still followed a traditional model of silo teams and functions. "As a developer, I was spending more time fixing environments than writing code and adding value to business," Patel says. "I was passionate about that—so I was given the opportunity to help fix it."
+
+ The company was eager to move faster, too, and in 2013 launched the first continuous integration/continuous deployment (CI/CD) project. That project was the first step in Nordstrom’s cloud native journey.
+
+ Dev and Ops team members built a CI/CD pipeline, working with the company’s servers on premise. The team chose Chef, and wrote cookbooks that automated virtual IP creation, servers, and load balancing. "After we completed the project, deployment went from three months to 30 minutes," says Patel. "We still had multiple environments—dev, test, staging, then production—so with each environment running the Chef cookbooks, it took 30 minutes. It was a huge achievement at that point."
+
But new environments still took too long to turn up, so the next step was working in the cloud. Today, Nordstrom Technology has built an enterprise platform that allows the company’s 1,500 developers to deploy applications running as Docker containers in the cloud, orchestrated with Kubernetes.
+
+
+
+
+
+ "We made a bet that Kubernetes was going to take off, informed by early indicators of community support and project velocity, so we rebuilt our system with Kubernetes at the core,"
+
+
+
+
+
+"The cloud provided faster access to resources, because it took weeks for us to get a virtual machine (VM) on premises," says Patel. "But now we can do the same thing in only five minutes."
+
+Nordstrom’s first foray into scheduling containers on a cluster was a homegrown system based on CoreOS fleet. They ran a few proof-of-concept projects with that system until Kubernetes 1.0 was released, when they made the switch. "We made a bet that Kubernetes was going to take off, informed by early indicators of community support and project velocity, so we rebuilt our system with Kubernetes at the core," says Marius Grigoriu, Sr. Manager of the Kubernetes team at Nordstrom.
+While Kubernetes is often thought of as a platform for microservices, the first application to launch on Kubernetes in a critical production role at Nordstrom was Jira. "It was not the ideal microservice we were hoping to get as our first application," Patel admits, "but the team that was working on it was really passionate about Docker and Kubernetes, and they wanted to try it out. They had their application running on premises, and wanted to move it to Kubernetes."
+
+The benefits were immediate for the teams that came on board. "Teams running on our Kubernetes cluster loved the fact that they had fewer issues to worry about. They didn’t need to manage infrastructure or operating systems," says Grigoriu. "Early adopters loved the declarative nature of Kubernetes. They loved the reduced surface area they had to deal with."
+
+
+
+
+
+ "Teams running on our Kubernetes cluster loved the fact that they had fewer issues to worry about. They didn’t need to manage infrastructure or operating systems," says Grigoriu. "Early adopters loved the declarative nature of Kubernetes. They loved the reduced surface area they had to deal with."
+
+
+
+
+
+ To support these early adopters, Patel’s team began growing the cluster and building production-grade services. "We integrated with Prometheus for monitoring, with a Grafana front end; we used Fluentd to push logs to Elasticsearch, so that gives us log aggregation," says Patel. The team also added dozens of open-source components, including CNCF projects, and has made contributions to Kubernetes, Terraform, and kube2iam.
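+As a rough illustration of the monitoring wire-up Patel describes, a minimal Prometheus scrape configuration that discovers pods through the Kubernetes API might look like the sketch below; the job name and annotation convention are common community defaults, not Nordstrom’s actual configuration.
+
+# Minimal Prometheus scrape config (sketch): discover pods via the
+# Kubernetes API and keep only those annotated prometheus.io/scrape=true.
+scrape_configs:
+  - job_name: kubernetes-pods
+    kubernetes_sd_configs:
+      - role: pod
+    relabel_configs:
+      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
+        action: keep
+        regex: "true"
+      - source_labels: [__meta_kubernetes_namespace]
+        target_label: namespace
+      - source_labels: [__meta_kubernetes_pod_name]
+        target_label: pod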
+
+There are now more than 60 development teams running Kubernetes in Nordstrom Technology, and as success stories have popped up, more teams have gotten on board. "Our initial customer base, the ones who were willing to try this out, are now going and evangelizing to the next set of users," says Patel. "One early adopter had Docker containers and he was not sure how to run it in production. We sat with him and within 15 minutes we deployed it in production. He thought it was amazing, and more people in his org started coming in."
+
+For Nordstrom Technology, going cloud native has vastly improved development and operational efficiency. The developers using Kubernetes now deploy faster and can focus on building value in their applications. One such team started with a 25-minute merge-to-deploy cycle that involved launching virtual machines in the cloud. Switching to Kubernetes was a 5x speedup in their process, improving their merge-to-deploy time to 5 minutes.
+
+
+
+
+ "With Kubernetes, without even trying to make our cluster efficient, we are currently at 40 percent CPU utilization—a 10x increase. we are running 2600+ customer pods that would have been 2600+ VMs if they had gone directly to the cloud. We are running them on 40 VMs now, so that’s a huge reduction in operational overhead."
+
+
+
+
+ Speed is great, and easily demonstrated, but perhaps the bigger impact lies in the operational efficiency. "We run thousands of VMs on AWS, and their overall average CPU utilization is about four percent," says Patel. "With Kubernetes, without even trying to make our cluster efficient, we are currently at 40 percent CPU utilization—a 10x increase. We are running 2600+ customer pods that would have been 2600+ VMs if they had gone directly to the cloud. We are running them on 40 VMs now, so that’s a huge reduction in operational overhead."
+
+ Nordstrom Technology is also exploring running Kubernetes on bare metal on premises. "If we can build an on-premises Kubernetes cluster," says Patel, "we could bring the power of cloud to provision resources fast on-premises. Then for the developer, their interface is Kubernetes; they might not even realize or care that their services are now deployed on premises because they’re only working with Kubernetes."
+ For that reason, Patel is eagerly following Kubernetes’ development of multi-cluster capabilities. "With cluster federation, we can have our on-premise as the primary cluster and the cloud as a secondary burstable cluster," he says. "So, when there is an anniversary sale or Black Friday sale, and we need more containers, we can go to the cloud."
+
+ That kind of possibility—as well as the impact that Grigoriu and Patel’s team has already delivered using Kubernetes—is what led Nordstrom on its cloud native journey in the first place. "The way the retail environment is today, we are trying to build responsiveness and flexibility where we can," says Grigoriu. "Kubernetes makes it easy to bring efficiency to both the Dev and Ops side of the equation. It’s a win-win."
+
+
+
diff --git a/content/ko/case-studies/nordstrom/nordstrom_featured_logo.png b/content/ko/case-studies/nordstrom/nordstrom_featured_logo.png
new file mode 100644
index 0000000000000..a557ffa82f12e
Binary files /dev/null and b/content/ko/case-studies/nordstrom/nordstrom_featured_logo.png differ
diff --git a/content/ko/case-studies/northwestern-mutual/index.html b/content/ko/case-studies/northwestern-mutual/index.html
new file mode 100644
index 0000000000000..dac0ef0d66d55
--- /dev/null
+++ b/content/ko/case-studies/northwestern-mutual/index.html
@@ -0,0 +1,99 @@
+---
+title: Northwestern Mutual Case Study
+case_study_styles: true
+cid: caseStudies
+css: /css/style_case_studies.css
+---
+
+
+
CASE STUDY:
Cloud Native at Northwestern Mutual
+
+
+
+
+
+
+
+ Company Northwestern Mutual Location Milwaukee, WI Industry Insurance and Financial Services
+
+
+
+
+
+
+
Challenge
+ In the spring of 2015, Northwestern Mutual acquired a fintech startup, LearnVest, and decided to take "Northwestern Mutual’s leading products and services and meld it with LearnVest’s digital experience and innovative financial planning platform," says Brad Williams, Director of Engineering for Client Experience, Northwestern Mutual. The company’s existing infrastructure had been optimized for batch workflows hosted on on-prem networks; deployments were very traditional, focused on following a process instead of providing deployment agility. "We had to build a platform that was elastically scalable, but also much more responsive, so we could quickly get data to the client website so our end-customers have the experience they expect," says Williams.
+
+
Solution
+ The platform team came up with a plan for using the public cloud (AWS), Docker containers, and Kubernetes for orchestration. "Kubernetes gave us that base framework so teams can be very autonomous in what they’re building and deliver very quickly and frequently," says Northwestern Mutual Cloud Native Engineer Frank Greco Jr. The team also built and open-sourced Kanali, a Kubernetes-native API management tool that uses OpenTracing, Jaeger, and gRPC.
+
+
+
+
+
+
+
Impact
+ Before, infrastructure deployments could take weeks; now, it is done in a matter of minutes. The number of deployments has increased dramatically, from about 24 a year to over 500 in just the first 10 months of 2017. Availability has also increased: There used to be a six-hour control window for commits every Sunday morning, as well as other periods of general maintenance, during which outages could happen. "Now we have eliminated the planned outage windows," says Bryan Pfremmer, App Platform Teams Manager, Northwestern Mutual. Kanali has had an impact on the bottom line. The vendor API management product that the company previously used required 23 servers, "dedicated, to only API management," says Pfremmer. "Now it’s all integrated in the existing stack and running as another deployment on Kubernetes. And that’s just one environment. Between the three that we had plus the test, that’s hard dollar savings."
+
+
+
+
+
+
+
+"In a large enterprise, you’re going to have people using Kubernetes, but then you’re also going to have people using WAS and .NET. You may not be at a point where your whole stack can be cloud native. What if you can take your API management tool and make it cloud native, but still proxy to legacy systems? Using different pieces that are cloud native, open source and Kubernetes native, you can do pretty innovative stuff." — Frank Greco Jr., Cloud Native Engineer at Northwestern Mutual
+
+
+
+
For more than 160 years, Northwestern Mutual has maintained its industry leadership in part by keeping a strong focus on risk management.
+ For many years, the company took a similar approach to managing its technology, and it has recently undergone a digital transformation to advance its digital strategy, including making a lot of noise in the cloud native world.
+In the spring of 2015, this insurance and financial services company acquired a fintech startup, LearnVest, and decided to take "Northwestern Mutual’s leading products and services and meld it with LearnVest’s digital experience and innovative financial planning platform," says Brad Williams, Director of Engineering for Client Experience, Northwestern Mutual. The company’s existing infrastructure had been optimized for batch workflows hosted in an on-premises data center; deployments were very traditional and had too many manual steps that were error-prone.
+In order to give the company’s 4.5 million clients the digital experience they’d come to expect, says Williams, "We had to build a platform that was elastically scalable, but also much more responsive, so we could quickly get data to the client website. We essentially said, 'You build the system that you think is necessary to support a new, modern-facing one.’ That’s why we departed from anything legacy."
+
+
+
+
+
+
+ "Kubernetes has definitely been the right choice for us. It gave us that base framework so teams can be autonomous in what they’re building and deliver very quickly and frequently."
+
+
+
+
+
+ Williams and the rest of the platform team decided that the first step would be to start moving from private data centers to AWS. With a new microservice architecture in mind—and the freedom to implement what was best for the organization—they began using Docker containers. After looking into the various container orchestration options, they went with Kubernetes, even though it was still in beta at the time. "There was some debate whether we should build something ourselves, or just leverage that product and evolve with it," says Northwestern Mutual Cloud Native Engineer Frank Greco Jr. "Kubernetes has definitely been the right choice for us. It gave us that base framework so teams can be autonomous in what they’re building and deliver very quickly and frequently."
+As early adopters, the team had to do a lot of work with Ansible scripts to stand up the cluster. "We had a lot of hard security requirements given the nature of our business," explains Bryan Pfremmer, App Platform Teams Manager, Northwestern Mutual. "We found ourselves running a configuration that very few other people ever tried." The client experience group was the first to use the new platform; today, a few hundred of the company’s 1,500 engineers are using it and more are eager to get on board.
+The results have been dramatic. Before, infrastructure deployments could take two weeks; now, they are done in a matter of minutes. With a focus on infrastructure automation and self-service, "You can take an app to production in that same day if you want to," says Pfremmer.
+
+
+
+
+
+
+"Now, developers have autonomy, they can use this whenever they want, however they want. It becomes more valuable the more instrumentation downstream that happens, as we mature in it."
+
+
+
+
+
+ The process used to be so cumbersome that minor bug releases would be bundled with feature releases. With the new streamlined system enabled by Kubernetes, the number of deployments has increased from about 24 a year to more than 500 in just the first 10 months of 2017. Availability has also been improved: There used to be a six-hour control window for commits every early Sunday morning, as well as other periods of general maintenance, during which outages could happen. "Now there’s no planned outage window," notes Pfremmer.
+Northwestern Mutual built that API management tool—called Kanali—and open sourced it in the summer of 2017. The team took on the project because it was a key capability for what they were building, and the prior solution worked in an "anti-cloud native way that was different than everything else we were doing," says Greco. Now API management is just another container deployed to Kubernetes along with a separate Jaeger deployment.
+Now the engineers using the Kubernetes deployment platform have the added benefit of visibility in production—and autonomy. Before, a centralized team would have to run a trace. "Now, developers have autonomy; they can use this whenever they want, however they want. It becomes more valuable the more instrumentation downstream that happens, as we mature in it," says Greco.
+
+
+
+
+
+
+ "We’re trying to make what we’re doing known so that we can find people who are like, 'Yeah, that’s interesting. I want to come do it!’"
+
+
+
+
+ But the team didn’t stop there. "In a large enterprise, you’re going to have people using Kubernetes, but then you’re also going to have people using WAS and .NET," says Greco. "You may not be at a point where your whole stack can be cloud native. What if you can take your API management tool and make it cloud native, but still proxy to legacy systems? Using different pieces that are cloud native, open source and Kubernetes native, you can do pretty innovative stuff."
+ As the team continues to improve its stack and share its Kubernetes best practices, it feels that Northwestern Mutual’s reputation as a technology-first company is evolving too. "No one would think a company that’s 160-plus years old is foraying this deep into the cloud and infrastructure stack," says Pfremmer. And they’re hoping that means they’ll be able to attract new talent. "We’re trying to make what we’re doing known so that we can find people who are like, 'Yeah, that’s interesting. I want to come do it!’"
+
+
+
+
+
diff --git a/content/ko/case-studies/northwestern-mutual/northwestern_featured_logo.png b/content/ko/case-studies/northwestern-mutual/northwestern_featured_logo.png
new file mode 100644
index 0000000000000..7c1422f32b86d
Binary files /dev/null and b/content/ko/case-studies/northwestern-mutual/northwestern_featured_logo.png differ
diff --git a/content/ko/case-studies/ocado/index.html b/content/ko/case-studies/ocado/index.html
new file mode 100644
index 0000000000000..6a930f945ce6b
--- /dev/null
+++ b/content/ko/case-studies/ocado/index.html
@@ -0,0 +1,99 @@
+---
+title: Ocado Case Study
+
+linkTitle: Ocado
+case_study_styles: true
+cid: caseStudies
+css: /css/style_case_studies.css
+logo: ocado_featured_logo.png
+featured: true
+weight: 4
+quote: >
+ People at Ocado Technology have been quite amazed. They ask, ‘Can we do this on a Dev cluster?’ and 10 minutes later we have rolled out something that is deployed across the cluster. The speed from idea to implementation to deployment is amazing.
+---
+
+
CASE STUDY:
Ocado: Running Grocery Warehouses with a Cloud Native Platform
+
+
+
+
+ Company Ocado Technology Location Hatfield, England Industry Grocery retail technology and platforms
+
+
+
+
+
+
+
Challenge
+ The world’s largest online-only grocery retailer, Ocado developed the Ocado Smart Platform to manage its own operations, from websites to warehouses, and is now licensing the technology to other retailers such as Kroger. To set up the first warehouses for the platform, Ocado shifted from virtual machines and Puppet infrastructure to Docker containers, using CoreOS’s fleet scheduler to provision all the services on its OpenStack-based private cloud on bare metal. As the Smart Platform grew and "fleet was going end-of-life," says Platform Engineer Mike Bryant, "we started looking for a more complete platform, with all of these disparate infrastructure services being brought together in one unified API."
+
Solution
+ The team decided to migrate from fleet to Kubernetes on Ocado’s private cloud. The Kubernetes stack currently uses kubeadm for bootstrapping, CNI with Weave Net for networking, Prometheus Operator for monitoring, Fluentd for logging, and OpenTracing for distributed tracing. The first app on Kubernetes, a business-critical service in the warehouses, went into production in the summer of 2017, with a mass migration continuing into 2018. Hundreds of Ocado engineers working on the Smart Platform are now deploying on Kubernetes.
+
+
+
+
+
+
+
Impact
+ With Kubernetes, "the speed from idea to implementation to deployment is amazing," says Bryant. "I’ve seen features go from development to production inside of a week now. In the old world, a new application deployment could easily take over a month." And because there are no longer restrictive deployment windows in the warehouses, the rate of deployments has gone from as few as two per week to dozens per week. Ocado has also achieved cost savings because Kubernetes gives the team the ability to have more fine-grained resource allocation. Says DevOps Team Leader Kevin McCormack: "We have more confidence in the resource allocation/separation features of Kubernetes, so we have been able to migrate from around 10 fleet clusters to one Kubernetes cluster." The team also uses Prometheus and Grafana to visualize resource allocation, and makes the data available to developers. "The increased visibility offered by Prometheus means developers are more aware of what they are using and how their use impacts others, especially since we now have one shared cluster," says McCormack. "I’d estimate that we use about 15-25% less hardware resources to host the same applications in Kubernetes in our test environments."
+
+
+
+
+
+
+
+ "People at Ocado Technology have been quite amazed. They ask, ‘Can we do this on a Dev cluster?’ and 10 minutes later we have rolled out something that is deployed across the cluster. The speed from idea to implementation to deployment is amazing." - Mike Bryant, Platform Engineer, Ocado
+
+
+
+
+
When it was founded in 2000, Ocado was an online-only grocery retailer in the U.K. In the years since, it has expanded from delivering produce to families to providing technology to other grocery retailers.
+The company began developing its Ocado Smart Platform to manage its own operations, from websites to warehouses, and is now licensing the technology to other grocery chains around the world, such as Kroger. To set up the first warehouses on the platform, Ocado shifted from virtual machines and Puppet infrastructure to Docker containers, using CoreOS’s fleet scheduler to provision all the services on its OpenStack-based private cloud on bare metal. As the Smart Platform grew, and "fleet was going end-of-life," says Platform Engineer Mike Bryant, "we started looking for a more complete platform, with all of these disparate infrastructure services being brought together in one unified API."
+Bryant had already been using Kubernetes with Code for Life, a children’s education project that’s part of Ocado’s charity arm. "We really liked it, so we started looking at it seriously for our production workloads," says Bryant. The team that managed fleet had researched orchestration solutions and landed on Kubernetes as well. "We were looking for a platform with wide adoption, and that was where the momentum was," says DevOps Team Leader Kevin McCormack. The two paths converged, and "We didn’t even go through any proof-of-concept stage. The Code for Life work served that purpose," says Bryant.
+
+
+
+
+
+ "We were looking for a platform with wide adoption, and that was where the momentum was, the two paths converged, and we didn’t even go through any proof-of-concept stage. The Code for Life work served that purpose,"
- Kevin McCormack, DevOps Team Leader, Ocado
+
+
+
+
+ In the summer of 2016, the team began migrating from fleet to Kubernetes on Ocado’s private cloud. The Kubernetes stack currently uses kubeadm for bootstrapping, CNI with Weave Net for networking, Prometheus Operator for monitoring, Fluentd for logging, and OpenTracing for distributed tracing.
+ The first app on Kubernetes, a business-critical service in the warehouses, went into production a year later. Once that app was running smoothly, a mass migration continued into 2018. Hundreds of Ocado engineers working on the Smart Platform are now deploying on Kubernetes, and the platform is live in Ocado’s warehouses, managing tens of thousands of orders a week. At full capacity, Ocado’s latest warehouse in Erith, southeast London, will deliver more than 200,000 orders per week, making it the world’s largest facility for online grocery.
+ There are about 150 microservices now running on Kubernetes, with multiple instances of many of them. "We’re not just deploying all these microservices at once. We’re deploying them all for one warehouse, and then they’re all being deployed again for the next warehouse, and again and again," says Bryant.
+ The move to Kubernetes was eye-opening for many people at Ocado Technology. "In the early days of putting the platform into our test infrastructure, the technical architect asked what network performance was like on Weave Net with encryption turned on," recalls Bryant. "So we found a Docker container for iPerf, wrote a daemon set, deployed it. A few moments later, we’ve deployed the entire thing across this cluster. He was pretty blown away by that."
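+A DaemonSet is a natural fit for that kind of cluster-wide test, because Kubernetes schedules one copy of the pod onto every node. A minimal sketch along those lines follows; the image and port are illustrative placeholders, not Ocado’s actual manifest.
+
+# Hypothetical DaemonSet in the spirit of the iPerf test: one benchmark
+# pod per node, so network performance can be sampled across the cluster.
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: iperf-server
+spec:
+  selector:
+    matchLabels:
+      app: iperf-server
+  template:
+    metadata:
+      labels:
+        app: iperf-server
+    spec:
+      containers:
+        - name: iperf
+          image: example/iperf3:latest   # placeholder benchmark image
+          args: ["-s"]                   # run iperf3 in server mode
+          ports:
+            - containerPort: 5201        # iperf3 default port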
+
+
+
+
+
+ "The unified API of Kubernetes means this is all in one place, and it’s one flow for approval and rollout. I’ve seen features go from development to production inside of a week now. In the old world, a new application deployment could easily take over a month."
- Mike Bryant, Platform Engineer, Ocado
+
+
+
+
+
+
+ Indeed, the impact has been profound. "Prior to containerization, we had quite restrictive deployment windows in our warehouses," says Bryant. "Moving to microservices, we’ve been able to deploy much more frequently. We’ve been able to move towards continuous delivery in a number of areas. In our older warehouse, new application deployments involve talking to a bunch of different teams for different levels of the stack: from VM provisioning, to storage, to load balancers, and so on. The unified API of Kubernetes means this is all in one place, and it’s one flow for approval and rollout. I’ve seen features go from development to production inside of a week now. In the old world, a new application deployment could easily take over a month."
+ The rate of deployment has gone from as few as two per week to dozens per week. "With Kubernetes, some of our development teams have been able to deploy their application to production on the new platform without us noticing," says Bryant, "which means they’re faster at doing what they need to do and we have less work."
+ Ocado has also achieved cost savings because Kubernetes gives the team the ability to have more fine-grained resource allocation. "That lets us shrink quite a lot of our deployments from being per-core VM deployments to having fractions of the core," says Bryant. Adds McCormack: "We have more confidence in the resource allocation/separation features of Kubernetes, so we have been able to migrate from around 10 fleet clusters to one Kubernetes cluster. This means we use our hardware better since if we have to always have two nodes of excess capacity available in case of node failures then we only need two extra instead of 20."
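+That fine-grained allocation is expressed in each container’s resource requests. A sketch of what such a stanza can look like is below; the figures are illustrative, not Ocado’s actual settings.
+
+# Illustrative container resources stanza: requesting 250m (a quarter of
+# a core) lets the scheduler pack several such pods onto one node instead
+# of dedicating a per-core VM to each service.
+resources:
+  requests:
+    cpu: 250m
+    memory: 256Mi
+  limits:
+    cpu: "1"
+    memory: 512Mi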
+
+
+
+
+
+ "CNCF have provided us with support of different technologies. We’ve been able to adopt those in a very easy fashion. We do like that CNCF is vendor agnostic. We’re not being asked to commit to this one way of doing things. The vast diversity of viewpoints in CNCF lead to better technology."
- Mike Bryant, Platform Engineer, Ocado
+
+
+
+
+
+ The team also uses Prometheus and Grafana to visualize resource allocation, and makes the data available to developers. "The increased visibility offered by Prometheus means developers are more aware of what they are using and how their use impacts others, especially since we now have one shared cluster," says McCormack. "I’d estimate that we use about 15-25% less hardware resource to host the same applications in Kubernetes in our test environments."
+ One of the broader benefits of cloud native, says Bryant, is the unified API. "We have one method of doing our deployments that covers the wide range of things we need to do, and we can extend the API," he says. In addition to using Prometheus Operator, the Ocado team has started writing its own operators, some of which have been open sourced. Plus, "CNCF has provided us with support of these different technologies. We’ve been able to adopt those in a very easy fashion. We do like that CNCF is vendor agnostic. We’re not being asked to commit to this one way of doing things. The vast diversity of viewpoints in the CNCF leads to better technology."
+ Ocado’s own technology, in the form of its Smart Platform, will soon be used around the world. And cloud native plays a crucial role in this global expansion. "I wouldn’t have wanted to try it without Kubernetes," says Bryant. "Kubernetes has made it so much nicer, especially to have that consistent way of deploying all of the applications, then taking the same thing and being able to replicate it. It’s very valuable."
+
+
+
diff --git a/content/ko/case-studies/ocado/ocado_featured_logo.png b/content/ko/case-studies/ocado/ocado_featured_logo.png
new file mode 100644
index 0000000000000..0c2ef19ec3b03
Binary files /dev/null and b/content/ko/case-studies/ocado/ocado_featured_logo.png differ
diff --git a/content/ko/case-studies/openAI/index.html b/content/ko/case-studies/openAI/index.html
new file mode 100644
index 0000000000000..4814ce6fdbe9e
--- /dev/null
+++ b/content/ko/case-studies/openAI/index.html
@@ -0,0 +1,99 @@
+---
+title: OpenAI Case Study
+case_study_styles: true
+cid: caseStudies
+css: /css/style_case_studies.css
+---
+
+
+
CASE STUDY:
Launching and Scaling Up Experiments, Made Simple
+
+
+
+
+
+
+ Company OpenAI Location San Francisco, California Industry Artificial Intelligence Research
+
+
+
+
+
+
+
Challenge
+ An artificial intelligence research lab, OpenAI needed infrastructure for deep learning that would allow experiments to be run either in the cloud or in its own data center, and to easily scale. Portability, speed, and cost were the main drivers.
+
+
Solution
+ OpenAI began running Kubernetes on top of AWS in 2016, and in early 2017 migrated to Azure. OpenAI runs key experiments in fields including robotics and gaming both in Azure and in its own data centers, depending on which cluster has free capacity. "We use Kubernetes mainly as a batch scheduling system and rely on our autoscaler to dynamically scale up and down our cluster," says Christopher Berner, Head of Infrastructure. "This lets us significantly reduce costs for idle nodes, while still providing low latency and rapid iteration."
+
+
+
+
+
Impact
+ The company has benefited from greater portability: "Because Kubernetes provides a consistent API, we can move our research experiments very easily between clusters," says Berner. Being able to use its own data centers when appropriate is "lowering costs and providing us access to hardware that we wouldn’t necessarily have access to in the cloud," he adds. "As long as the utilization is high, the costs are much lower there." Launching experiments also takes far less time: "One of our researchers who is working on a new distributed training system has been able to get his experiment running in two or three days. In a week or two he scaled it out to hundreds of GPUs. Previously, that would have easily been a couple of months of work."
+
+
+
+
+
+
+
+
+
+
+
+Check out "Building the Infrastructure that Powers the Future of AI" presented by Vicki Cheung, Member of Technical Staff & Jonas Schneider, Member of Technical Staff at OpenAI from KubeCon/CloudNativeCon Europe 2017.
+
+
+
+
+
+
From experiments in robotics to old-school video game play research, OpenAI’s work in artificial intelligence technology is meant to be shared.
+ With a mission to ensure powerful AI systems are safe, OpenAI cares deeply about open source—both benefiting from it and contributing safety technology to it. "The research that we do, we want to spread it as widely as possible so everyone can benefit," says OpenAI’s Head of Infrastructure Christopher Berner. The lab’s philosophy—as well as its particular needs—lent itself to embracing an open source, cloud native strategy for its deep learning infrastructure.
+ OpenAI started running Kubernetes on top of AWS in 2016, and a year later, migrated the Kubernetes clusters to Azure. "We probably use Kubernetes differently from a lot of people," says Berner. "We use it for batch scheduling and as a workload manager for the cluster. It’s a way of coordinating a large number of containers that are all connected together. We rely on our autoscaler to dynamically scale up and down our cluster. This lets us significantly reduce costs for idle nodes, while still providing low latency and rapid iteration."
+ In the past year, Berner has overseen the launch of several Kubernetes clusters in OpenAI’s own data centers. "We run them in a hybrid model where the control planes—the Kubernetes API servers, etcd and everything—are all in Azure, and then all of the Kubernetes nodes are in our own data center," says Berner. "The cloud is really convenient for managing etcd and all of the masters, and having backups and spinning up new nodes if anything breaks. This model allows us to take advantage of lower costs and have the availability of more specialized hardware in our own data center."
+
+
+
+
+
+
+ OpenAI’s experiments take advantage of Kubernetes’ benefits, including portability. "Because Kubernetes provides a consistent API, we can move our research experiments very easily between clusters..."
+
+
+
+
+
+ Different teams at OpenAI currently run a couple dozen projects. While the largest-scale workloads manage bare cloud VMs directly, most of OpenAI’s experiments take advantage of Kubernetes’ benefits, including portability. "Because Kubernetes provides a consistent API, we can move our research experiments very easily between clusters," says Berner. The on-prem clusters are generally "used for workloads where you need lots of GPUs, something like training an ImageNet model. Anything that’s CPU heavy, that’s run in the cloud. But we also have a number of teams that run their experiments both in Azure and in our own data centers, just depending on which cluster has free capacity, and that’s hugely valuable."
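+ A batch-scheduling setup like the one Berner describes might express a GPU experiment as a Kubernetes Job, with the autoscaler growing or shrinking the node pool as Jobs queue up. A hedged sketch (the image, labels, and sizes are hypothetical, not OpenAI's actual specs):
+```yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: imagenet-training      # hypothetical experiment name
+spec:
+  template:
+    spec:
+      restartPolicy: Never
+      nodeSelector:
+        hardware: gpu          # hypothetical label steering GPU-heavy work to GPU nodes
+      containers:
+      - name: trainer
+        image: registry.example.com/trainer:latest   # placeholder image
+        resources:
+          limits:
+            nvidia.com/gpu: 8  # requires the NVIDIA device plugin on the nodes
+```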
+ Berner has made the Kubernetes clusters available to all OpenAI teams to use if it’s a good fit. "I’ve worked a lot with our games team, which at the moment is doing research on classic console games," he says. "They had been running a bunch of their experiments on our dev servers, and they had been trying out Google Cloud, managing their own VMs. We got them to try out our first on-prem Kubernetes cluster, and that was really successful. They’ve now moved over completely to it, and it has allowed them to scale up their experiments by 10x, and do that without needing to invest significant engineering time to figure out how to manage more machines. A lot of people are now following the same path."
+
+
+
+
+
+"One of our researchers who is working on a new distributed training system has been able to get his experiment running in two or three days," says Berner. "In a week or two he scaled it out to hundreds of GPUs. Previously, that would have easily been a couple of months of work."
+
+
+
+
+
+ That path has been simplified by frameworks and tools that two of OpenAI’s teams have developed to handle interaction with Kubernetes. "You can just write some Python code, fill out a bit of configuration with exactly how many machines you need and which types, and then it will prepare all of those specifications and send them to the Kube cluster so that they get launched there," says Berner. "And it also provides a bit of extra monitoring and better tooling that’s designed specifically for these machine learning projects."
+ The impact that Kubernetes has had at OpenAI is impressive. With Kubernetes, the frameworks and tooling, including the autoscaler, in place, launching experiments takes far less time. "One of our researchers who is working on a new distributed training system has been able to get his experiment running in two or three days," says Berner. "In a week or two he scaled it out to hundreds of GPUs. Previously, that would have easily been a couple of months of work."
+ Plus, the flexibility they now have to use their on-prem Kubernetes cluster when appropriate is "lowering costs and providing us access to hardware that we wouldn’t necessarily have access to in the cloud," he says. "As long as the utilization is high, the costs are much lower in our data center. To an extent, you can also customize your hardware to exactly what you need."
+
+
+
+
+
+
+ "Research teams can now take advantage of the frameworks we’ve built on top of Kubernetes, which make it easy to launch experiments, scale them by 10x or 50x, and take little effort to manage." — CHRISTOPHER BERNER, HEAD OF INFRASTRUCTURE FOR OPENAI
+
+
+
+
+
+ OpenAI is also benefiting from other technologies in the CNCF cloud-native ecosystem. gRPC is used by many of its systems for communications between different services, and Prometheus is in place "as a debugging tool if things go wrong," says Berner. "We actually haven’t had any real problems in our Kubernetes clusters recently, so I don’t think anyone has looked at our Prometheus monitoring in a while. If something breaks, it will be there."
+ One of the things Berner continues to focus on is Kubernetes’ ability to scale, which is essential to deep learning experiments. OpenAI has been able to push one of its Kubernetes clusters on Azure up to more than 2,500 nodes. "I think we’ll probably hit the 5,000-machine number that Kubernetes has been tested at before too long," says Berner, adding, "We’re definitely hiring if you’re excited about working on these things!"
+
+
+
diff --git a/content/ko/case-studies/openAI/openai_featured.png b/content/ko/case-studies/openAI/openai_featured.png
new file mode 100644
index 0000000000000..b2b667c0bb13d
Binary files /dev/null and b/content/ko/case-studies/openAI/openai_featured.png differ
diff --git a/content/ko/case-studies/openAI/openai_logo.png b/content/ko/case-studies/openAI/openai_logo.png
new file mode 100644
index 0000000000000..a85a81ea063d0
Binary files /dev/null and b/content/ko/case-studies/openAI/openai_logo.png differ
diff --git a/content/ko/case-studies/peardeck/index.html b/content/ko/case-studies/peardeck/index.html
new file mode 100644
index 0000000000000..688754a6200f9
--- /dev/null
+++ b/content/ko/case-studies/peardeck/index.html
@@ -0,0 +1,111 @@
+---
+title: Pear Deck Case Study
+
+case_study_styles: true
+cid: caseStudies
+css: /css/style_peardeck.css
+---
+
+
+
CASE STUDY:
Infrastructure for a Growing EdTech Startup
+
+
+
+
+ Company Pear Deck Location Iowa City, Iowa Industry Educational Software
+
+
+
+
+
+
+
+
Challenge
+ The three-year-old startup provides a web app for teachers to interact with their students in the classroom. The JavaScript app was built on Google’s web app development platform Firebase, using Heroku. As the user base steadily grew, so did the development team. "We outgrew Heroku when we started wanting to have multiple services, and the deploying story got pretty horrendous. We were frustrated that we couldn’t have the developers quickly stage a version," says CEO Riley Eynon-Lynch. "Tracing and monitoring became basically impossible." On top of that, many of Pear Deck’s customers are behind government firewalls and connect through Firebase, not Pear Deck’s servers, making troubleshooting even more difficult.
+
+ The new cloud native stack immediately improved the development workflow, speeding up deployments. Prometheus gave Pear Deck "a lot of confidence, knowing that people are still logging into the app and using it all the time," says Eynon-Lynch. "The biggest impact is being able to work as a team on the configuration in git in a pull request, and the biggest confidence comes from the solidity of the abstractions and the trust that we have in Kubernetes actually making our yaml files a reality."
+
+
+
+
+
+
+ "We didn’t even realize how stressed out we were about our lack of insight into what was happening with the app. I’m really excited and have more and more confidence in the actual state of our application for our actual users, and not just what the CPU graphs are saying, because of Prometheus and Kubernetes."
+ — RILEY EYNON-LYNCH, CEO OF PEAR DECK
+
+
+
+
+
+
With the speed befitting a startup, Pear Deck delivered its first prototype to customers within three months of incorporating.
+As a former high school math teacher, CEO Riley Eynon-Lynch felt an urgency to provide a tech solution to classes where instructors struggle to interact with every student in a short amount of time. "Pear Deck is an app that students can use to interact with the teacher all at once," he says. "When the teacher asks a question, instead of just the kid at the front of the room answering again, everybody can answer every single question. It’s a huge fundamental shift in the messaging to the students about how much we care about them and how much they are a part of the classroom."
+Eynon-Lynch and his partners quickly built a JavaScript web app on Google’s web app development platform Firebase, and launched the minimum viable product [MVP] on Heroku "because it was fast and easy," he says. "We made everything as easy as we could."
+
+But once it launched, the user base began growing steadily at a rate of 30 percent a month. "Our Heroku bill was getting totally insane," Eynon-Lynch says. But even more crucially, as the company hired more developers to keep pace, "we outgrew Heroku. We wanted to have multiple services and the deploying story got pretty horrendous. We were frustrated that we couldn’t have the developers quickly stage a version. Tracing and monitoring became basically impossible."
+
+On top of that, many of Pear Deck’s customers are behind government firewalls and connect through Firebase, not Pear Deck’s servers, making troubleshooting even more difficult.
+
+The team began looking around for another solution, and finally decided in early 2016 to start moving the app from Heroku to Docker containers running on Google Kubernetes Engine, orchestrated by Kubernetes and monitored with Prometheus.
+
+
+
+
+
+ "When it became clear that Google Kubernetes Engine was going to have a lot of support from Google and be a fully-managed Kubernetes platform, it seemed very obvious to us that was the way to go," says Eynon-Lynch.
+
+
+
+
+
+ They had considered other options like Google’s App Engine (which they were already using for one service) and Amazon’s Elastic Compute Cloud (EC2), while experimenting with running one small, non-Internet-facing service on Kubernetes. "When it became clear that Google Kubernetes Engine was going to have a lot of support from Google and be a fully-managed Kubernetes platform, it seemed very obvious to us that was the way to go," says Eynon-Lynch. "We didn’t really consider Terraform and the other competitors because the abstractions offered by Kubernetes just jumped off the page to us."
+ Once the team started porting its Heroku apps into Kubernetes, which was "super easy," he says, the impact was immediate. "Before, to make a new version of the app meant going to Heroku and reconfiguring 10 new services, so basically no one was willing to do it, and we never staged things," he says. "Now we can deploy our exact same configuration in lots of different clusters in 30 seconds. We have a full setup that’s always running, and then any of our developers or designers can stage new versions with one command, including their recent changes. We stage all the time now, and everyone stopped talking about how cool it is because it’s become invisible how great it is."
+
+ Along with Kubernetes came Prometheus. "Until pretty recently we didn’t have any kind of visibility into aggregate server metrics or performance," says Eynon-Lynch. The team had tried to use Google Kubernetes Engine’s Stackdriver monitoring, but had problems making it work, and considered New Relic. When they started looking at Prometheus in the fall of 2016, "the fit between the abstractions in Prometheus and the way we think about how our system works was so clear and obvious," he says.
+ The integration with Kubernetes made setup easy. Once Prometheus was installed with Helm, "We started getting a graph of the health of all our Kubernetes nodes and pods immediately. I think we were pretty hooked at that point," Eynon-Lynch says. "Then we got our own custom instrumentation working in 15 minutes, and had an actively updated count of requests that we could do, rate on and get a sense of how many users are connected at a given point. And then it was another hour before we had alarms automatically showing up in our Slack channel. All that was in one afternoon. And it was an afternoon of gasping with delight, basically!"
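+ Alarms landing in a Slack channel are typically wired up through Alertmanager, which ships alongside Prometheus. A minimal sketch of such a receiver (the webhook URL and channel are placeholders, not Pear Deck's configuration):
+```yaml
+# alertmanager.yml
+route:
+  receiver: slack-notifications
+receivers:
+- name: slack-notifications
+  slack_configs:
+  - api_url: https://hooks.slack.com/services/T000/B000/XXXX   # placeholder webhook
+    channel: '#alerts'
+    send_resolved: true
+```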
+
+
+
+
+
+ "We started getting a graph of the health of all our Kubernetes nodes and pods immediately. I think we were pretty hooked at that point," Eynon-Lynch says. "Then we got our own custom instrumentation working in 15 minutes, and had an actively updated count of requests that we could do, rate on and get a sense of how many users are connected at a given point. And then it was another hour before we had alarms automatically showing up in our Slack channel. All that was in one afternoon. And it was an afternoon of gasping with delight, basically!"
+
+
+
+
+
+ With Pear Deck’s specific challenges—traffic through Firebase as well as government firewalls—Prometheus was a game-changer. "We didn’t even realize how stressed out we were about our lack of insight into what was happening with the app," Eynon-Lynch says. Before, when a customer reported that the app wasn’t working, the team had to investigate manually, with no way of knowing whether customers all over the world were affected, whether Firebase was down, and where.
+ To help solve that problem, the team wrote a script that pings Firebase from several different geographical locations, and then reports the responses to Prometheus in a histogram. "A huge impact that Prometheus had on us was just an amazing sigh of relief, of feeling like we knew what was happening," he says. "It took 45 minutes to implement [the Firebase alarm] because we knew that we had this trustworthy metrics platform in Prometheus. We weren’t going to have to figure out, ‘Where do we send these metrics? How do we aggregate the metrics? How do we understand them?’"
+ Plus, Prometheus has allowed Pear Deck to build alarms for business goals. One measures the rate of successful app loads and goes off if the day’s loads are less than 90 percent of the loads from seven days before. "We run a JavaScript app behind ridiculous firewalls and all kinds of crazy browser extensions messing with it—Chrome will push a feature that breaks some CSS that we’re using," Eynon-Lynch says. "So that gives us a lot of confidence, and we at least know that people are still logging into the app and using it all the time."
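+ An alarm like that maps naturally onto PromQL's offset modifier, which compares a series against itself seven days earlier. A hedged sketch of such a rule (the metric name is hypothetical):
+```yaml
+groups:
+- name: business-goals
+  rules:
+  - alert: AppLoadsBelowLastWeek
+    # fire when the past day's successful loads drop below 90 percent of the
+    # same window seven days earlier
+    expr: |
+      sum(increase(app_loads_success_total[1d]))
+        < 0.9 * sum(increase(app_loads_success_total[1d] offset 7d))
+    for: 30m
+    labels:
+      severity: warning
+    annotations:
+      summary: App loads are below 90% of the same period last week
+```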
+ Now, when a customer complains, and none of the alarms have gone off, the team can feel confident that it’s not a widespread problem. "Just to be sure, we can go and double check the graphs and say, ‘Yep, there’s currently 10,000 people connected to that Firebase node. It’s definitely working. Let’s investigate your network settings, customer,’" he says. "And we can pass that back off to our support reps instead of the whole development team freaking out that Firebase is down."
+ Pear Deck is also giving back to the community, building and open-sourcing a metrics aggregator that enables end-user monitoring in Prometheus. "We can measure, for example, the time to interactive-dom on the web clients," he says. "The users all report that to our aggregator, then the aggregator reports to Prometheus. So we can set an alarm for some client side errors."
+ Most of Pear Deck’s services have now been moved onto Kubernetes. And all of the team’s new code is going on Kubernetes. "Kubernetes lets us experiment with service configurations and stage them on a staging cluster all at once, and test different scenarios and talk about them as a development team looking at code, not just talking about the steps we would eventually take as humans," says Eynon-Lynch.
+
+
+
+
+
+ "A huge impact that Prometheus had on us was just an amazing sigh of relief, of feeling like we knew what was happening. It took 45 minutes to implement [the Firebase alarm] because we knew that we had this trustworthy metrics platform in Prometheus...in terms of the cloud, Kubernetes and Prometheus have so much to offer," he says.
+
+
+
+
+
+ Looking ahead, the team is planning to explore autoscaling on Kubernetes. With users all over the world but mostly in the United States, there are peaks and valleys in the traffic. One service that’s still on App Engine can get as many as 10,000 requests a second during the day but far fewer at night. "We pay for the same servers at night, so I understand there’s autoscaling that we can be taking advantage of," he says. "Implementing it is a big worry, exposing the rest of our Kubernetes cluster to us and maybe messing that up. But it’s definitely our intention to move everything over, because now none of the developers want to work on that app anymore because it’s such a pain to deploy it."
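+ The kind of autoscaling Eynon-Lynch describes is what a HorizontalPodAutoscaler provides: a floor for quiet nights and a ceiling for busy school days. A sketch under hypothetical names and thresholds:
+```yaml
+apiVersion: autoscaling/v1
+kind: HorizontalPodAutoscaler
+metadata:
+  name: classroom-api          # hypothetical name for the bursty service
+spec:
+  scaleTargetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: classroom-api
+  minReplicas: 2               # night-time floor
+  maxReplicas: 20              # daytime peak
+  targetCPUUtilizationPercentage: 70
+```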
+
+They’re also eager to explore the work that Kubernetes is doing with StatefulSets. "Right now all of the services we run in Kubernetes are stateless, and Google basically runs our databases for us and manages backups," Eynon-Lynch says. "But we’re interested in building our own web-socket solution that doesn’t have to be super stateful but will have maybe an hour’s worth of state on it."
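+ In Kubernetes terms, such a web-socket service would likely be a StatefulSet, which gives each replica a stable identity and, if needed, its own volume for that hour of state. A hypothetical sketch, not a design Pear Deck has committed to:
+```yaml
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: websocket-gateway      # hypothetical name
+spec:
+  serviceName: websocket-gateway
+  replicas: 3
+  selector:
+    matchLabels:
+      app: websocket-gateway
+  template:
+    metadata:
+      labels:
+        app: websocket-gateway
+    spec:
+      containers:
+      - name: gateway
+        image: registry.example.com/ws-gateway:dev   # placeholder image
+        ports:
+        - containerPort: 8080
+  volumeClaimTemplates:
+  - metadata:
+      name: state              # roughly an hour's worth of connection state
+    spec:
+      accessModes: ["ReadWriteOnce"]
+      resources:
+        requests:
+          storage: 1Gi
+```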
+
+That project will also involve Prometheus, for a dark launch of web socket connections. "We don’t know how reliable web socket connections behind all these horrible firewalls will be to our servers," he says. "We don’t know what work Firebase has done to make them more reliable. So I’m really looking forward to trying to get persistent connections with web sockets to our clients and have optional tools to understand if it’s working. That’s our next new adventure, into stateful servers."
+
+As for Prometheus, Eynon-Lynch thinks the company has only gotten started. "We haven’t instrumented all our important features, especially those that depend on third parties," he says. "We have to wait for those third parties to tell us they’re down, which sometimes they don’t do for a long time. So I’m really excited and have more and more confidence in the actual state of our application for our actual users, and not just what the CPU graphs are saying, because of Prometheus and Kubernetes."
+
+For a spry startup that’s continuing to grow rapidly—and yes, they’re hiring!—Pear Deck is notably satisfied with how its infrastructure has evolved in the cloud native ecosystem. "Usually I have some angsty thing where I want to get to the new, better technology," says Eynon-Lynch, "but in terms of the cloud, Kubernetes and Prometheus have so much to offer."
+
+
+
+
diff --git a/content/ko/case-studies/peardeck/peardeck_featured.png b/content/ko/case-studies/peardeck/peardeck_featured.png
new file mode 100644
index 0000000000000..ce87ee2d47f7b
Binary files /dev/null and b/content/ko/case-studies/peardeck/peardeck_featured.png differ
diff --git a/content/ko/case-studies/peardeck/peardeck_logo.png b/content/ko/case-studies/peardeck/peardeck_logo.png
new file mode 100644
index 0000000000000..c1b9772ec45a0
Binary files /dev/null and b/content/ko/case-studies/peardeck/peardeck_logo.png differ
diff --git a/content/ko/case-studies/pearson/index.html b/content/ko/case-studies/pearson/index.html
new file mode 100644
index 0000000000000..ddb567afb3d09
--- /dev/null
+++ b/content/ko/case-studies/pearson/index.html
@@ -0,0 +1,87 @@
+---
+title: Pearson Case Study
+linkTitle: Pearson
+case_study_styles: true
+cid: caseStudies
+css: /css/style_case_studies.css
+featured: false
+quote: >
+ We’re already seeing tremendous benefits with Kubernetes—improved engineering productivity, faster delivery of applications and a simplified infrastructure. But this is just the beginning. Kubernetes will help transform the way that educational content is delivered online.
+---
+
+
CASE STUDY:
Reinventing the World’s Largest Education Company With Kubernetes
+
+
+
+ Company Pearson Location Global
+ Industry Education
+
+
+
+
+
+
Challenge
+ A global education company serving 75 million learners, Pearson set a goal to more than double that number, to 200 million, by 2025. A key part of this growth is in digital learning experiences, and Pearson was having difficulty scaling and adapting to its growing online audience. It needed an infrastructure platform that could scale quickly and deliver products to market faster.
+
+
Solution
+ "To transform our infrastructure, we had to think beyond simply enabling automated provisioning," says Chris Jackson, Director for Cloud Platforms & SRE at Pearson. "We realized we had to build a platform that would allow Pearson developers to build, manage and deploy applications in a completely different way." The team chose Docker container technology and Kubernetes orchestration "because of its flexibility, ease of management and the way it would improve our engineers’ productivity."
+
+
+
+
Impact
+ With the platform, there have been substantial improvements in productivity and speed of delivery. "In some cases, we’ve gone from nine months to provision physical assets in a data center to just a few minutes to provision and get a new idea in front of a customer," says John Shirley, Lead Site Reliability Engineer for the Cloud Platform Team. Jackson estimates they’ve achieved 15-20% developer productivity savings. Before, outages were an issue during their busiest time of year, the back-to-school period. Now, there’s high confidence in their ability to meet aggressive customer SLAs.
+
+
+
+
+
+
+ "We’re already seeing tremendous benefits with Kubernetes—improved engineering productivity, faster delivery of applications and a simplified infrastructure. But this is just the beginning. Kubernetes will help transform the way that educational content is delivered online."
— Chris Jackson, Director for Cloud Platforms & SRE at Pearson
+
+
+
+
+ In 2015, Pearson was already serving 75 million learners as the world’s largest education company, offering curriculum and assessment tools for Pre-K through college and beyond. Understanding that innovating the digital education experience was the key to the future of all forms of education, the company set out to increase its reach to 200 million people by 2025.
+ That goal would require a transformation of its existing infrastructure, which was in data centers. In some cases, it took nine months to provision physical assets. In order to adapt to the demands of its growing online audience, Pearson needed an infrastructure platform that would be able to scale quickly and deliver business-critical products to market faster. "We had to think beyond simply enabling automated provisioning," says Chris Jackson, Director for Cloud Platforms & SRE at Pearson. "We realized we had to build a platform that would allow Pearson developers to build, manage and deploy applications in a completely different way."
+ With 400 development groups and diverse brands with varying business and technical needs, Pearson embraced Docker container technology so that each brand could experiment with building new types of content using their preferred technologies, and then deliver it using containers. Jackson chose Kubernetes orchestration "because of its flexibility, ease of management and the way it would improve our engineers’ productivity," he says.
+ The team adopted Kubernetes when it was still version 1.2 and is still going strong now on 1.7; it uses Terraform and Ansible to deploy onto basic AWS primitives. "We were trying to understand how we can create value for Pearson from this technology," says Ben Somogyi, Principal Architect for the Cloud Platforms. "It turned out that Kubernetes’ benefits are huge. We’re trying to help our applications development teams that use our platform go faster, so we filled that gap with a CI/CD pipeline that builds their images for them, standardizes them, patches everything up, allows them to deploy their different environments onto the cluster, and obfuscates the details of how difficult the work underneath the covers is."
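+ The Terraform-plus-Ansible pattern Somogyi mentions generally works like this: Terraform provisions the AWS primitives and emits an inventory, and an Ansible playbook configures the nodes. A hedged sketch of the pattern, not Pearson's actual automation (group names, variables, and the join mechanism are placeholders):
+```yaml
+# site.yml -- illustrative only
+- hosts: kube_nodes            # inventory group produced by Terraform (hypothetical)
+  become: true
+  tasks:
+  - name: Install the Kubernetes node packages
+    package:
+      name: [kubelet, kubeadm, kubectl]
+      state: present
+  - name: Join the node to the cluster
+    command: "kubeadm join {{ api_endpoint }} --token {{ join_token }}"
+    args:
+      creates: /etc/kubernetes/kubelet.conf   # makes the task idempotent
+```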
+
+
+
+
+ "Your internal customers need to feel like they are choosing the very best option for them. We are experiencing this first hand in the growth of adoption. We are seeing triple-digit, year-on-year growth of the service."
— Chris Jackson, Director for Cloud Platforms & SRE at Pearson
+
+
+
+
+ That work resulted in two tools for building and deploying applications in the cluster that Pearson has open sourced. "We’re an education company, so we want to share what we can," says Somogyi.
+ Now that development teams no longer have to worry about infrastructure, there have been substantial improvements in productivity and speed of delivery. "In some cases, we’ve gone from nine months to provision physical assets in a data center to just a few minutes to provision and to get a new idea in front of a customer," says John Shirley, Lead Site Reliability Engineer for the Cloud Platform Team.
+ According to Jackson, the Cloud Platforms team can "provision a new proof-of-concept environment for a development team in minutes, and then they can take that to production as quickly as they are able to. This is the value proposition of all major technology services, and we had to compete like one to become our developers’ preferred choice. Just because you work for the same company, you do not have the right to force people into a mediocre service. Your internal customers need to feel like they are choosing the very best option for them. We are experiencing this first hand in the growth of adoption. We are seeing triple-digit, year-on-year growth of the service."
+ Jackson estimates they’ve achieved a 15-20% boost in productivity for developer teams who adopt the platform. They also see a reduction in the number of customer-impacting incidents. Plus, says Jackson, "Teams who were previously limited to 1-2 releases per academic year can now ship code multiple times per day!"
+
+
+
+
+ "Teams who were previously limited to 1-2 releases per academic year can now ship code multiple times per day!"
— Chris Jackson, Director for Cloud Platforms & SRE at Pearson
+
+
+
+
+ Availability has also been positively impacted. The back-to-school period is the company’s busiest time of year, and "you have to keep applications up," says Somogyi. Before, this was a pain point for the legacy infrastructure. Now, for the applications that have been migrated to the Kubernetes platform, "We have 100% uptime. We’re not worried about 9s. There aren’t any. It’s 100%, which is pretty astonishing for us, compared to some of the existing platforms that have legacy challenges," says Shirley.
+
+ "You can’t even begin to put a price on how much that saves the company," Jackson explains. "A reduction in the number of support cases takes load out of our operations. The customer sentiment of having a reliable product drives customer retention and growth. It frees us to think about investing more into our digital transformation and taking a better quality of education to a global scale."
+
+ The platform itself is also being broken down, "so we can quickly release smaller pieces of the platform, like upgrading our Kubernetes or all the different modules that make up our platform," says Somogyi. "One of the big focuses in 2018 is this scheme of delivery to update the platform itself."
+
+ Guided by Pearson’s overarching goal of getting to 200 million users, the team has run internal tests of the platform’s scalability. "We had a challenge: 28 million requests within a 10-minute period," says Shirley. "And we demonstrated that we can hit that, with an acceptable latency. We saw that we could actually get that pretty readily, and we scaled up in just a few seconds, using open source tools entirely. Shout out to Locust for that one. So that’s amazing."
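+ Load generators like Locust are themselves a natural fit for the platform: workers run as ordinary pods and can be scaled out to whatever request rate a test calls for. A hypothetical sketch, not Pearson's test harness (the locustfile volume and master Service are omitted for brevity):
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: locust-worker
+spec:
+  replicas: 50                 # scale workers out to hit the target request rate
+  selector:
+    matchLabels:
+      app: locust-worker
+  template:
+    metadata:
+      labels:
+        app: locust-worker
+    spec:
+      containers:
+      - name: locust
+        image: locustio/locust
+        args: ["-f", "/tasks/locustfile.py", "--worker", "--master-host", "locust-master"]
+```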
+
+
+
+ "We have 100% uptime. We’re not worried about 9s. There aren’t any. It’s 100%, which is pretty astonishing for us, compared to some of the existing platforms that have legacy challenges. You can’t even begin to put a price on how much that saves the company."
— Benjamin Somogyi, Principal Systems Architect at Pearson
+
+
+ In just two years, "We’re already seeing tremendous benefits with Kubernetes—improved engineering productivity, faster delivery of applications and a simplified infrastructure," says Jackson. "But this is just the beginning. Kubernetes will help transform the way that educational content is delivered online."
+ So far, about 15 production products are running on the new platform, including Pearson’s new flagship digital education service, the Global Learning Platform. The Cloud Platform team continues to prepare, onboard and support customers that are a good fit for the platform. Some existing products will be refactored into 12-factor apps, while others are being developed so that they can live on the platform from the get-go. "There are challenges with bringing in new customers of course, because we have to help them to see a different way of developing, a different way of building," says Shirley.
+ But, he adds, "It is our corporate motto: Always Learning. We encourage those teams that haven’t started a cloud native journey, to see the future of technology, to learn, to explore. It will pique your interest. Keep learning."
+
+
diff --git a/content/ko/case-studies/pearson/pearson_featured.png b/content/ko/case-studies/pearson/pearson_featured.png
new file mode 100644
index 0000000000000..6f8ffec49e6ef
Binary files /dev/null and b/content/ko/case-studies/pearson/pearson_featured.png differ
diff --git a/content/ko/case-studies/pearson/pearson_logo.png b/content/ko/case-studies/pearson/pearson_logo.png
new file mode 100644
index 0000000000000..57e586f3ebd32
Binary files /dev/null and b/content/ko/case-studies/pearson/pearson_logo.png differ
diff --git a/content/ko/case-studies/philips/index.html b/content/ko/case-studies/philips/index.html
new file mode 100644
index 0000000000000..e45d41a776baa
--- /dev/null
+++ b/content/ko/case-studies/philips/index.html
@@ -0,0 +1,4 @@
+---
+title: Philips
+content_url: https://cloud.google.com/customers/philips/
+---
\ No newline at end of file
diff --git a/content/ko/case-studies/philips/philips_logo.png b/content/ko/case-studies/philips/philips_logo.png
new file mode 100644
index 0000000000000..9ba3421a61b81
Binary files /dev/null and b/content/ko/case-studies/philips/philips_logo.png differ
diff --git a/content/ko/case-studies/pinterest/index.html b/content/ko/case-studies/pinterest/index.html
new file mode 100644
index 0000000000000..56bf0ecbc5d4e
--- /dev/null
+++ b/content/ko/case-studies/pinterest/index.html
@@ -0,0 +1,109 @@
+---
+title: Pinterest Case Study
+linkTitle: Pinterest
+case_study_styles: true
+cid: caseStudies
+css: /css/style_case_studies.css
+featured: false
+weight: 30
+quote: >
+ We are in the position to run things at scale, in a public cloud environment, and test things out in a way that a lot of people might not be able to do.
+---
+
+
+
+
CASE STUDY:
Pinning Its Past, Present, and Future on Cloud Native
+
+
+
+
+
+
+ Company Pinterest Location San Francisco, California Industry Web and Mobile App
+
+
+
+
+
+
+
Challenge
+ After eight years in existence, Pinterest had grown to 1,000 microservices, with multiple layers of infrastructure and a diverse set of setup tools and platforms. In 2016 the company launched a roadmap toward a new compute platform, led by the vision of creating the fastest path from an idea to production, without making engineers worry about the underlying infrastructure.
+
+
+
+
Solution
+ The first phase involved moving services to Docker containers. Once these services went into production in early 2017, the team began looking at orchestration to help create efficiencies and manage them in a decentralized way. After an evaluation of various solutions, Pinterest went with Kubernetes.
+
+
+
+
+
+
Impact
+ "By moving to Kubernetes the team was able to build on-demand scaling and new failover policies, in addition to simplifying the overall deployment and management of a complicated piece of infrastructure such as Jenkins," says Micheal Benedict, Product Manager for the Cloud and the Data Infrastructure Group at Pinterest. "We not only saw reduced build times but also huge efficiency wins. For instance, the team reclaimed over 80 percent of capacity during non-peak hours. As a result, the Jenkins Kubernetes cluster now uses 30 percent less instance-hours per-day when compared to the previous static cluster."
+
+
+
+
+
+
+
+
+"So far it’s been good, especially the elasticity around how we can configure our Jenkins workloads on that Kubernetes shared cluster. That is the win we were pushing for."
— Micheal Benedict, Product Manager for the Cloud and the Data Infrastructure Group at Pinterest
+
+
+
+
+
+ Pinterest was born on the cloud—running on AWS since day one in 2010—but even cloud native companies can experience some growing pains. Since its launch, Pinterest has become a household name, with more than 200 million active monthly users and 100 billion objects saved. Underneath the hood, there are 1,000 microservices running and hundreds of thousands of data jobs.
+With such growth came layers of infrastructure and a diverse set of setup tools and platforms for the different workloads, resulting in an inconsistent and complex end-to-end developer experience, and ultimately reduced velocity in getting to production.
+So in 2016, the company launched a roadmap toward a new compute platform, led by the vision of having the fastest path from an idea to production, without making engineers worry about the underlying infrastructure.
+The first phase involved moving to Docker. "Pinterest has been heavily running on virtual machines, on EC2 instances directly, for the longest time," says Micheal Benedict, Product Manager for the Cloud and the Data Infrastructure Group. "To solve the problem around packaging software and not make engineers own portions of the fleet and those kinds of challenges, we standardized the packaging mechanism and then moved that to the container on top of the VM. Not many drastic changes. We didn’t want to boil the ocean at that point."
+
+
+
+
+
+ "Though Kubernetes lacked certain things we wanted, we realized that by the time we get to productionizing many of those things, we’ll be able to leverage what the community is doing."
— MICHEAL BENEDICT, PRODUCT MANAGER FOR THE CLOUD AND THE DATA INFRASTRUCTURE GROUP AT PINTEREST
+
+
+
+
+
+The first service that was migrated was the monolith API fleet that powers most of Pinterest. At the same time, Benedict’s infrastructure governance team built chargeback and capacity planning systems to analyze how the company uses its virtual machines on AWS. "It became clear that running on VMs is just not sustainable with what we’re doing," says Benedict. "A lot of resources were underutilized. There were efficiency efforts, which worked fine at a certain scale, but now you have to move to a more decentralized way of managing that. So orchestration was something we thought could help solve that piece."
+That led to the second phase of the roadmap. In July 2017, after an eight-week evaluation period, the team chose Kubernetes over other orchestration platforms. "Kubernetes lacked certain things at the time—for example, we wanted Spark on Kubernetes," says Benedict. "But we realized that the dev cycles we would put in to even try building that is well worth the outcome, both for Pinterest as well as the community. We’ve been in those conversations in the Big Data SIG. We realized that by the time we get to productionizing many of those things, we’ll be able to leverage what the community is doing."
+At the beginning of 2018, the team began onboarding its first use case into the Kubernetes system: Jenkins workloads. "Although we have builds happening during a certain period of the day, we always need to allocate peak capacity," says Benedict. "They don’t have any auto-scaling capabilities, so that capacity stays constant. It is difficult to speed up builds because ramping up takes more time. So given those kind of concerns, we thought that would be a perfect use case for us to work on."
+
+
+
+
+
+
+"So far it’s been good, especially the elasticity around how we can configure our Jenkins workloads on Kubernetes shared cluster. That is the win we were pushing for."
— MICHEAL BENEDICT, PRODUCT MANAGER FOR THE CLOUD AND THE DATA INFRASTRUCTURE GROUP AT PINTEREST
+
+
+
+
+
+ They ramped up the cluster, and working with a team of four people, got the Jenkins Kubernetes cluster ready for production. "We still have our static Jenkins cluster," says Benedict, "but on Kubernetes, we are doing similar builds, testing the entire pipeline, getting the artifact ready and just doing the comparison to see, how much time did it take to build over here. Is the SLA okay, is the artifact generated correct, are there issues there?"
+"So far it’s been good," he adds, "especially the elasticity around how we can configure our Jenkins workloads on Kubernetes shared cluster. That is the win we were pushing for."
+By the end of Q1 2018, the team successfully migrated Jenkins Master to run natively on Kubernetes and also collaborated on the Jenkins Kubernetes Plugin to manage the lifecycle of workers. "We’re currently building the entire Pinterest JVM stack (one of the larger monorepos at Pinterest which was recently bazelized) on this new cluster," says Benedict. "At peak, we run thousands of pods on a few hundred nodes. Overall, by moving to Kubernetes the team was able to build on-demand scaling and new failover policies, in addition to simplifying the overall deployment and management of a complicated piece of infrastructure such as Jenkins. We not only saw reduced build times but also huge efficiency wins. For instance, the team reclaimed over 80 percent of capacity during non-peak hours. As a result, the Jenkins Kubernetes cluster now uses 30 percent less instance-hours per-day when compared to the previous static cluster."
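+ With the Jenkins Kubernetes plugin, each build worker is just a pod the plugin creates on demand and tears down afterward, which is where the reclaimed off-peak capacity comes from. A hedged sketch of such an agent pod definition (the images and sizes are hypothetical, not Pinterest's):
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  labels:
+    jenkins: agent
+spec:
+  restartPolicy: Never
+  containers:
+  - name: jnlp                 # the plugin's agent container
+    image: jenkins/inbound-agent
+  - name: build
+    image: registry.example.com/jvm-build:latest   # placeholder build-toolchain image
+    command: ["sleep", "infinity"]
+    resources:
+      requests:
+        cpu: "2"
+        memory: 4Gi
+```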
+
+
+
+
+
+
+ "We are in the position to run things at scale, in a public cloud environment, and test things out in way that a lot of people might not be able to do."
— MICHEAL BENEDICT, PRODUCT MANAGER FOR THE CLOUD AND THE DATA INFRASTRUCTURE GROUP AT PINTEREST
+
+
+
+
+ Benedict points to a "pretty robust roadmap" going forward. In addition to the Pinterest big data team’s experiments with Spark on Kubernetes, the company collaborated with Amazon’s EKS team on an ENI/CNI plug-in.
+Once the Jenkins cluster is up and running out of dark mode, Benedict hopes to establish best practices, with governance primitives (including integration with the chargeback system) in place, before moving on to migrating the next service. "We have a healthy pipeline of use-cases to be on-boarded. After Jenkins, we want to enable support for Tensorflow and Apache Spark. At some point, we aim to move the company’s monolithic API service. If we move that and understand the complexity around that, it builds our confidence," says Benedict. "It sets us up for migration of all our other services."
+After years of being a cloud native pioneer, Pinterest is eager to share its ongoing journey. "We are in the position to run things at scale, in a public cloud environment, and test things out in a way that a lot of people might not be able to do," says Benedict. "We’re in a great position to contribute back some of those learnings."
+
+
+
+
+
+
diff --git a/content/ko/case-studies/pinterest/pinterest_feature.png b/content/ko/case-studies/pinterest/pinterest_feature.png
new file mode 100644
index 0000000000000..ea5d625789468
Binary files /dev/null and b/content/ko/case-studies/pinterest/pinterest_feature.png differ
diff --git a/content/ko/case-studies/pinterest/pinterest_logo.png b/content/ko/case-studies/pinterest/pinterest_logo.png
new file mode 100644
index 0000000000000..0f744e7828cc2
Binary files /dev/null and b/content/ko/case-studies/pinterest/pinterest_logo.png differ
diff --git a/content/ko/case-studies/pokemon-go/index.html b/content/ko/case-studies/pokemon-go/index.html
new file mode 100644
index 0000000000000..ed4e168019236
--- /dev/null
+++ b/content/ko/case-studies/pokemon-go/index.html
@@ -0,0 +1,4 @@
+---
+title: Pokemon GO
+content_url: https://cloudplatform.googleblog.com/2016/09/bringing-Pokemon-GO-to-life-on-Google-Cloud.html
+---
\ No newline at end of file
diff --git a/content/ko/case-studies/pokemon-go/pokemon_go_logo.png b/content/ko/case-studies/pokemon-go/pokemon_go_logo.png
new file mode 100644
index 0000000000000..3cf2b5c7ef63d
Binary files /dev/null and b/content/ko/case-studies/pokemon-go/pokemon_go_logo.png differ
diff --git a/content/ko/case-studies/samsung-sds/index.html b/content/ko/case-studies/samsung-sds/index.html
new file mode 100644
index 0000000000000..db4aa479abc1e
--- /dev/null
+++ b/content/ko/case-studies/samsung-sds/index.html
@@ -0,0 +1,4 @@
+---
+title: Samsung SDS
+content_url: http://www.nextplatform.com/2016/05/24/samsung-experts-put-kubernetes-paces/
+---
\ No newline at end of file
diff --git a/content/ko/case-studies/samsung-sds/sds_logo.png b/content/ko/case-studies/samsung-sds/sds_logo.png
new file mode 100644
index 0000000000000..0a172df65d7e1
Binary files /dev/null and b/content/ko/case-studies/samsung-sds/sds_logo.png differ
diff --git a/content/ko/case-studies/sap/index.html b/content/ko/case-studies/sap/index.html
new file mode 100644
index 0000000000000..856dc8be9d286
--- /dev/null
+++ b/content/ko/case-studies/sap/index.html
@@ -0,0 +1,4 @@
+---
+title: SAP
+content_url: https://youtu.be/4gyeixJLabo
+---
\ No newline at end of file
diff --git a/content/ko/case-studies/sap/sap_logo.png b/content/ko/case-studies/sap/sap_logo.png
new file mode 100644
index 0000000000000..681a3fe5cff8c
Binary files /dev/null and b/content/ko/case-studies/sap/sap_logo.png differ
diff --git a/content/ko/case-studies/sap/sap_small.png b/content/ko/case-studies/sap/sap_small.png
new file mode 100644
index 0000000000000..ada89de7595e5
Binary files /dev/null and b/content/ko/case-studies/sap/sap_small.png differ
diff --git a/content/ko/case-studies/slingtv/index.html b/content/ko/case-studies/slingtv/index.html
new file mode 100644
index 0000000000000..a11527c2d9429
--- /dev/null
+++ b/content/ko/case-studies/slingtv/index.html
@@ -0,0 +1,110 @@
+---
+title: SlingTV Case Study
+linkTitle: Sling TV
+case_study_styles: true
+cid: caseStudies
+css: /css/style_case_studies.css
+featured: true
+weight: 49
+quote: >
+ I would almost be so bold as to say that most of these applications that we are building now would not have been possible without the cloud native patterns and the flexibility that Kubernetes enables.
+---
+
+
+
+
CASE STUDY:
Sling TV: Marrying Kubernetes and AI to Enable Proper Web Scale
+
+
+
+
+
+
+ Company Sling TV Location Englewood, Colorado Industry Streaming television
+
+
+
+
+
+
+
Challenge
+ Launched by DISH Network in 2015, Sling TV experienced great customer growth from the beginning. After just a year, “we were going through some growing pains of some of the legacy systems and trying to find the right architecture to enable our future,” says Brad Linder, Sling TV’s Cloud Native & Big Data Evangelist. The company has particular challenges: “We take live TV and distribute it over the internet out to a user’s device that we do not control,” says Linder. “In a lot of ways, we are working in the Wild West: The internet is what it is going to be, and if a customer’s service does not work for whatever reason, they do not care why. They just want things to work. Those are the variables of the equation that we have to try to solve. We really have to try to enable optionality and good customer experience at web scale.”
+
+
+
+
Solution
+ Led by the belief that “the cloud native architectures and patterns really give us a lot of flexibility in meeting the needs of that sort of customer base,” Linder partnered with Rancher Labs to build Sling TV’s next-generation platform around Kubernetes. “We are going to need to enable a hybrid cloud strategy including multiple public clouds and an on-premise VMware multi data center environment to meet the needs of the business at some point, so getting that sort of abstraction was a real goal,” he says. “That is one of the biggest reasons why we picked Kubernetes.” The team launched its first applications on Kubernetes in Sling TV’s two internal data centers. The push to enable AWS as a data center option is underway and should be available by the end of 2018. The team has added Prometheus for monitoring and Jaeger for tracing, to work alongside the company’s existing tool sets: Zenoss, New Relic and ELK.
+
+
+
+
+
+
Impact
+ “We are getting to the place where we can one-click deploy an entire data center – the compute, network, Kubernetes, logging, monitoring and all the apps,” says Linder. “We have really enabled a platform thinking based approach to allowing applications to consume common tools. A new application can be onboarded in about an hour using common tooling and CI/CD processes. The gains on that side have been huge. Before, it took at least a few days to get things sorted for a new application to deploy. That does not consider the training of our operations staff to manage this new application. It is two or three orders of magnitude of savings in time and cost, and operationally it has given us the opportunity to let a core team of talented operations engineers manage common infrastructure and tooling to make our applications available at web scale.”
+
+
+
+
+
+
+
+
+ “I would almost be so bold as to say that most of these applications that we are building now would not have been possible without the cloud native patterns and the flexibility that Kubernetes enables.”
— Brad Linder, Cloud Native & Big Data Evangelist for Sling TV
+
+
+
+
+
+
The beauty of streaming television, like the service offered by Sling TV, is that you can watch it from any device you want, wherever you want.
Of course, from the provider side of things, that creates a particular set of challenges.
+“We take live TV and distribute it over the internet out to a user’s device that we do not control,” says Brad Linder, Sling TV’s Cloud Native & Big Data Evangelist. “In a lot of ways, we are working in the Wild West: The internet is what it is going to be, and if a customer’s service does not work for whatever reason, they do not care why. They just want things to work. Those are the variables of the equation that we have to try to solve. We really have to try to enable optionality and we have to do it at web scale.”
+Indeed, Sling TV experienced great customer growth from the moment of its launch by DISH Network in 2015. After just a year, “we were going through some growing pains of some of the legacy systems and trying to find the right architecture to enable our future,” says Linder. Tasked with building a next-generation web scale platform for the “personalized customer experience,” Linder has spent the past year bringing Kubernetes to Sling TV.
+Led by the belief that “the cloud native architectures and patterns really give us a lot of flexibility in meeting the needs of our customers,” Linder partnered with Rancher Labs to build the platform around Kubernetes. “They have really helped us get our head around how to use Kubernetes,” he says. “We needed the flexibility to enable our use case versus just a simple orchestrator. Enabling our future in a way that did not give us vendor lock-in was also a key part of our strategy. I think that is part of the Rancher value proposition.”
+
+
+
+
+
+
“We needed the flexibility to enable our use case versus just a simple orchestrator. Enabling our future in a way that did not give us vendor lock-in was also a key part of our strategy. I think that is part of the Rancher value proposition.”
— Brad Linder, Cloud Native & Big Data Evangelist for Sling TV
+
+
+
+
+
+One big reason he chose Kubernetes was getting a level of abstraction that would enable the company to “enable a hybrid cloud strategy including multiple public clouds and an on-premise VMware multi data center environment to meet the needs of the business,” he says. Another factor was how much the Kubernetes ecosystem has matured over the past couple of years. “We have spent a lot of time and energy around making logging, monitoring and alerting production ready to give us insights into applications’ well-being,” says Linder. The team has added Prometheus for monitoring and Jaeger for tracing, to work alongside the company’s existing tool sets: Zenoss, New Relic and ELK.
+With the emphasis on common tooling, “We are getting to the place where we can one-click deploy an entire data center – the compute, network, Kubernetes, logging, monitoring and all the apps,” says Linder. “We have really enabled a platform thinking based approach to allowing applications to consume common tools and services. A new application can be onboarded in about an hour using common tooling and CI/CD processes. The gains on that side have been huge. Before, it took at least a few days to get things sorted for a new application to deploy. That does not consider the training of our operations staff to manage this new application. It is two or three orders of magnitude of savings in time and cost, and operationally it has given us the opportunity to let a core team of talented operations engineers manage common infrastructure and tooling to make our applications available at web scale.”
+
+
+
+
+
+“We have to be able to react to changes and hiccups in the matrix. It is the foundation for our ability to deliver a high-quality service for our customers.”
— Brad Linder, Cloud Native & Big Data Evangelist for Sling TV
+
+
+
+
+
+
+ The team launched its first applications on Kubernetes in Sling TV’s two internal data centers in the early part of Q1 2018 and began to enable AWS as a data center option. The company plans to expand into other public clouds in the future.
+The first application that went into production is a web socket-based back-end notification service. “It allows back-end changes to trigger messages to our clients in the field without the polling,” says Linder. “We are talking about very high volumes of messages with this application. Without something like Kubernetes to be able to scale up and down, as well as just support that overall workload, that is pretty hard to do. I would almost be so bold as to say that most of these applications that we are building now would not have been possible without the cloud native patterns and the flexibility that Kubernetes enables.”
+ Linder oversees three teams working together on building the next-generation platform: a platform engineering team; an enterprise middleware services team; and a big data and analytics team. “We have really tried to bring everything together to be able to have a client application interact with a cloud native middleware layer. That middleware layer must run on a platform, consume platform services and then have logs and events monitored by an artificial agent to keep things running smoothly,” says Linder.
+
+
+
+
+
+
+ This undertaking is about “trying to marry Kubernetes with AI to enable web scale that just works.”
— BRAD LINDER, CLOUD NATIVE & BIG DATA EVANGELIST FOR SLING TV
+
+
+
+
+ Ultimately, this undertaking is about “trying to marry Kubernetes with AI to enable web scale that just works,” he adds. “We want the artificial agents and the big data platform using the actual logs and events coming out of the applications, Kubernetes, the infrastructure, backing services and changes to the environment to make decisions like, ‘Hey we need more capacity for this service so please add more nodes.’ From a platform perspective, if you are truly doing web scale stuff and you are not using AI and big data, in my opinion, you are going to implode under your own weight. It is not a question of if, it is when. If you are in a ‘millions of users’ sort of environment, that implosion is going to be catastrophic. We are on our way to this goal and have learned a lot along the way.”
+For Sling TV, moving to cloud native has been exactly what they needed. “We have to be able to react to changes and hiccups in the matrix,” says Linder. “It is the foundation for our ability to deliver a high-quality service for our customers. Building intelligent platforms, tools and clients in the field consuming those services has got to be part of all of this. In my eyes that is a big part of what cloud native is all about. It is taking these distributed, potentially unreliable entities and enabling a robust customer experience they expect.”
+
+
+
+
+
+
+
+
diff --git a/content/ko/case-studies/slingtv/slingtv_featured_logo.png b/content/ko/case-studies/slingtv/slingtv_featured_logo.png
new file mode 100644
index 0000000000000..b52143ee8b6c6
Binary files /dev/null and b/content/ko/case-studies/slingtv/slingtv_featured_logo.png differ
diff --git a/content/ko/case-studies/soundcloud/index.html b/content/ko/case-studies/soundcloud/index.html
new file mode 100644
index 0000000000000..50611ffd85cd1
--- /dev/null
+++ b/content/ko/case-studies/soundcloud/index.html
@@ -0,0 +1,4 @@
+---
+title: Soundcloud
+content_url: https://www.youtube.com/watch?v=5378N5iLb2Q
+---
\ No newline at end of file
diff --git a/content/ko/case-studies/soundcloud/soundcloud_logo.png b/content/ko/case-studies/soundcloud/soundcloud_logo.png
new file mode 100644
index 0000000000000..f8c12f05b5edf
Binary files /dev/null and b/content/ko/case-studies/soundcloud/soundcloud_logo.png differ
diff --git a/content/ko/case-studies/squarespace/index.html b/content/ko/case-studies/squarespace/index.html
new file mode 100644
index 0000000000000..58c5cae5cc288
--- /dev/null
+++ b/content/ko/case-studies/squarespace/index.html
@@ -0,0 +1,101 @@
+---
+title: Squarespace Case Study
+case_study_styles: true
+cid: caseStudies
+css: /css/style_case_studies.css
+---
+
+
+
CASE STUDY:
Squarespace: Gaining Productivity and Resilience with Kubernetes
+
+
+
+
+
+ Company Squarespace Location New York, N.Y. Industry Software as a Service, Website-Building Platform
+
+
+
+
+
+
+
Challenge
+ Moving from a monolith to microservices in 2014 "solved a problem on the development side, but it pushed that problem to the infrastructure team," says Kevin Lynch, Staff Engineer on the Site Reliability team at Squarespace. "The infrastructure deployment process on our 5,000 VM hosts was slowing everyone down."
+
+
Solution
+The team experimented with container orchestration platforms, and found that Kubernetes "answered all the questions that we had," says Lynch. The company began running Kubernetes in its data centers in 2016.
+
+
+
+
+
+
Impact
+Since Squarespace moved to Kubernetes, in conjunction with modernizing its networking stack, deployment time has been reduced by almost 85%. Before, their VM deployment would take half an hour; now, says Lynch, "someone can generate a templated application, deploy it within five minutes, and have actual instances containerized, running in our staging environment at that point." Because of that, "productivity time is the big cost saver," he adds. "When we started the Kubernetes project, we had probably a dozen microservices. Today there are twice that in the pipeline being actively worked on." Resilience has also been improved with Kubernetes: "If a node goes down, it’s rescheduled immediately and there’s no performance impact."
+
+
+
+
+
+
+
+
+
"Once you prove that Kubernetes solves one problem, everyone immediately starts solving other problems without you even having to evangelize it."
+ — Kevin Lynch, Staff Engineer on the Site Reliability team at Squarespace
+
+
+
+
+
Since it was started in a dorm room in 2003, Squarespace has made it simple for millions of people to create their own websites.
Behind the scenes, though, the company’s monolithic Java application was making things not so simple for its developers to keep improving the platform. So in 2014, the company decided to "go down the microservices path," says Kevin Lynch, staff engineer on Squarespace’s Site Reliability team. "But we were always deploying our applications in vCenter VMware VMs [in our own data centers]. Microservices solved a problem on the development side, but it pushed that problem to the Infrastructure team. The infrastructure deployment process on our 5,000 VM hosts was slowing everyone down."
+ After experimenting with another container orchestration platform and "breaking it in very painful ways," Lynch says, the team began experimenting with Kubernetes in mid-2016 and found that it "answered all the questions that we had." Deploying it in the data center rather than the public cloud was their biggest challenge, and at the time, not a lot of other companies were doing that. "We had to figure out how to deploy this in our infrastructure for ourselves, and we had to integrate it with our other applications," says Lynch.
+ At the same time, Squarespace’s Network Engineering team was modernizing its networking stack, switching from a traditional layer-two network to a layer-three spine-and-leaf network. "It mapped beautifully with what we wanted to do with Kubernetes," says Lynch. "It gives us the ability to have our servers communicate directly with the top-of-rack switches. We use Calico for CNI networking for Kubernetes, so we can announce all these individual Kubernetes pod IP addresses and have them integrate seamlessly with our other services that are still provisioned in the VMs."
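+ As a hedged illustration of the pattern Lynch describes, the sketch below shows what a Calico BGPPeer resource can look like when nodes peer with a top-of-rack switch; the address, ASN, and rack label are placeholders, not Squarespace's configuration.
+
```python
# Hedged illustration: a Calico BGPPeer resource (projectcalico.org/v3) that
# peers nodes in one rack with their top-of-rack switch, so pod IPs are
# announced into the L3 fabric. Address, ASN, and label are placeholders.
bgp_peer = {
    "apiVersion": "projectcalico.org/v3",
    "kind": "BGPPeer",
    "metadata": {"name": "rack1-tor"},
    "spec": {
        "peerIP": "192.0.2.1",              # ToR switch (placeholder address)
        "asNumber": 64512,                  # private ASN (placeholder)
        "nodeSelector": "rack == 'rack1'",  # nodes labeled rack=rack1
    },
}
# Applied with `calicoctl apply -f`, each matching node then advertises
# its pod routes directly, with no overlay between pods and VM services.
```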
+
+
+
+
+
+ After experimenting with another container orchestration platform and "breaking it in very painful ways," Lynch says, the team began experimenting with Kubernetes in mid-2016 and found that it "answered all the questions that we had."
+
+
+
+
+
+ Within a couple months, they had a stable cluster for their internal use, and began rolling out Kubernetes for production. They also added Zipkin and CNCF projects Prometheus and fluentd to their cloud native stack. "We switched to Kubernetes, a new world, and we revamped all our other tooling as well," says Lynch. "It allowed us to streamline our process, so we can now easily create an entire microservice project from templates, generate the code and deployment pipeline for that, generate the Docker file, and then immediately just ship a workable, deployable project to Kubernetes." Deployments across Dev/QA/Stage/Prod were also "simplified drastically," Lynch adds. "Now there is little configuration variation."
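+ For readers unfamiliar with what "ship a deployable project to Kubernetes" looks like in practice, here is a minimal sketch using the official Kubernetes Python client to build a Deployment from a template; the names and image are illustrative, and this is not Squarespace's internal tooling.
+
```python
# A minimal sketch (not Squarespace's actual tooling) of turning a project
# template into a Kubernetes Deployment with the official Python client.
from kubernetes import client, config

def deployment_from_template(name: str, image: str, replicas: int = 3):
    container = client.V1Container(
        name=name, image=image,
        ports=[client.V1ContainerPort(container_port=8080)])
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": name}),
        spec=client.V1PodSpec(containers=[container]))
    spec = client.V1DeploymentSpec(
        replicas=replicas,
        selector=client.V1LabelSelector(match_labels={"app": name}),
        template=template)
    return client.V1Deployment(api_version="apps/v1", kind="Deployment",
                               metadata=client.V1ObjectMeta(name=name),
                               spec=spec)

if __name__ == "__main__":
    config.load_kube_config()  # assumes a kubeconfig, e.g. a staging cluster
    client.AppsV1Api().create_namespaced_deployment(
        namespace="staging",
        body=deployment_from_template("hello-svc", "example/hello:1.0"))
```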
+
+ And the whole process takes only five minutes, an almost 85% reduction in time compared to their VM deployment. "From end to end that probably took half an hour, and that’s not accounting for the fact that an infrastructure engineer would be responsible for doing that, so there’s some business delay in there as well."
+
+ With faster deployments, "productivity time is the big cost saver," says Lynch. "We had a team that was implementing a new file storage service, and they just started integrating that with our storage back end without our involvement"—which wouldn’t have been possible before Kubernetes. He adds: "When we started the Kubernetes project, we had probably a dozen microservices. Today there are twice that in the pipeline being actively worked on."
+
+
+
+
+
+
+ "We switched to Kubernetes, a new world....It allowed us to streamline our process, so we can now easily create an entire microservice project from templates," Lynch says. And the whole process takes only five minutes, an almost 85% reduction in time compared to their VM deployment.
+
+
+
+
+
+ There’s also been a positive impact on the application’s resilience. "When we’re deploying VMs, we have to build tooling to ensure that a service is spread across racks appropriately and can withstand failure," he says. "Kubernetes just does it. If a node goes down, it’s rescheduled immediately and there’s no performance impact."
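+ The rack-spreading behavior Lynch contrasts with hand-built VM tooling is expressed declaratively in Kubernetes; the sketch below uses pod anti-affinity with a hypothetical rack label, since Squarespace's actual labels are not public.
+
```python
# Illustrative only: the "spread across racks" guarantee expressed as pod
# anti-affinity. "example.com/rack" is a hypothetical node label.
from kubernetes import client

anti_affinity = client.V1Affinity(
    pod_anti_affinity=client.V1PodAntiAffinity(
        required_during_scheduling_ignored_during_execution=[
            client.V1PodAffinityTerm(
                label_selector=client.V1LabelSelector(
                    match_labels={"app": "hello-svc"}),
                topology_key="example.com/rack")]))
# Assigning this to a pod spec's `affinity` field keeps replicas of
# hello-svc out of the same rack, so a rack failure costs one replica.
```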
+
+ Another big benefit is autoscaling. "It wasn’t really possible with the way we’ve been using VMware," says Lynch, "but now we can just add the appropriate autoscaling features via Kubernetes directly, and boom, it’s scaling up as demand increases. And it worked out of the box."
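+ Adding "the appropriate autoscaling features via Kubernetes directly" can be as small as creating a HorizontalPodAutoscaler; the sketch below does so with the official Python client against an assumed Deployment named hello-svc.
+
```python
# Hedged sketch: adding autoscaling "via Kubernetes directly" by creating a
# HorizontalPodAutoscaler (autoscaling/v1) for an assumed Deployment.
from kubernetes import client, config

config.load_kube_config()
hpa = client.V1HorizontalPodAutoscaler(
    api_version="autoscaling/v1", kind="HorizontalPodAutoscaler",
    metadata=client.V1ObjectMeta(name="hello-svc"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="hello-svc"),
        min_replicas=3,    # keep a floor for steady traffic
        max_replicas=20,   # scales up as demand increases
        target_cpu_utilization_percentage=70))
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="staging", body=hpa)
```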
+
+ For others starting out with Kubernetes, Lynch says his best advice is to "fail fast": "Once you’ve planned things out, just execute. Kubernetes has been really great for trying something out quickly and seeing if it works or not."
+
+
+
+
+
+
+ "When we’re deploying VMs, we have to build tooling to ensure that a service is spread across racks appropriately and can withstand failure," he says. "Kubernetes just does it. If a node goes down, it’s rescheduled immediately and there’s no performance impact."
+
+
+
+
+ Lynch and his team are planning to open source some of the tools they’ve developed to extend Kubernetes and use it as an API itself. The first tool injects dependent applications as containers in a pod. "When you ship an application, usually it comes along with a whole bunch of dependent applications that need to be shipped with that, for example, fluentd for logging," he explains. With this tool, the developer doesn’t need to worry about the configurations.
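+ Squarespace has not published this tool, but the general idea of injecting a dependent application such as fluentd into a pod can be sketched in a few lines; everything below is a hypothetical illustration, not their implementation.
+
```python
# Hypothetical illustration of sidecar injection: add a fluentd logging
# container (and its log volume) to a pod manifest so developers don't
# have to carry that configuration themselves.
import copy

FLUENTD_SIDECAR = {
    "name": "fluentd",
    "image": "fluent/fluentd:v1.16-1",  # placeholder tag
    "volumeMounts": [{"name": "varlog", "mountPath": "/var/log"}],
}

def inject_logging_sidecar(pod_manifest: dict) -> dict:
    """Return a copy of the pod manifest with the fluentd sidecar added."""
    out = copy.deepcopy(pod_manifest)
    out["spec"].setdefault("volumes", []).append(
        {"name": "varlog", "emptyDir": {}})
    out["spec"]["containers"].append(FLUENTD_SIDECAR)
    return out
```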
+
+ Going forward, all new services at Squarespace are going into Kubernetes, and the end goal is to convert everything it can. About a quarter of existing services have been migrated. "Our monolithic application is going to be the last one, just because it’s so big and complex," says Lynch. "But now I’m seeing other services get moved over, like the file storage service. Someone just did it and it worked—painlessly. So I believe if we tackle it, it’s probably going to be a lot easier than we fear. Maybe I should just take my own advice and fail fast!"
+
+
+
+
diff --git a/content/ko/case-studies/squarespace/squarespace_featured_logo.png b/content/ko/case-studies/squarespace/squarespace_featured_logo.png
new file mode 100644
index 0000000000000..551b6da32119d
Binary files /dev/null and b/content/ko/case-studies/squarespace/squarespace_featured_logo.png differ
diff --git a/content/ko/case-studies/wepay/index.html b/content/ko/case-studies/wepay/index.html
new file mode 100644
index 0000000000000..b8ce8201d5f5b
--- /dev/null
+++ b/content/ko/case-studies/wepay/index.html
@@ -0,0 +1,4 @@
+---
+title: WePay
+content_url: http://thenewstack.io/wepay-kubernetes-changed-business/
+---
\ No newline at end of file
diff --git a/content/ko/case-studies/wepay/wepay_logo.png b/content/ko/case-studies/wepay/wepay_logo.png
new file mode 100644
index 0000000000000..4e35dd8fd694a
Binary files /dev/null and b/content/ko/case-studies/wepay/wepay_logo.png differ
diff --git a/content/ko/case-studies/wikimedia/index.html b/content/ko/case-studies/wikimedia/index.html
new file mode 100644
index 0000000000000..abc74e3ee33fa
--- /dev/null
+++ b/content/ko/case-studies/wikimedia/index.html
@@ -0,0 +1,96 @@
+---
+title: Wikimedia Case Study
+
+class: gridPage
+cid: caseStudies
+---
+
+
+
Wikimedia Case Study
+
+
+
+
+
+
Using Kubernetes to Build Tools to Improve the World's Wikis
+
+ The non-profit Wikimedia Foundation operates some of the largest collaboratively edited reference projects in the world, including Wikipedia. To help users maintain and use wikis, it runs Wikimedia Tool Labs, a hosting environment for community developers working on tools and bots to help editors and other volunteers do their work, including reducing vandalism. The community around Wikimedia Tool Labs began forming nearly 10 years ago.
+
+
+
+
+ "Wikimedia Tool Labs is vital for making sure wikis all around the world work as well as they possibly can. Because it's grown organically for almost 10 years, it has become an extremely challenging environment and difficult to maintain. It's like a big ball of mud — you really can't see through it. With Kubernetes, we're simplifying the environment and making it easier for developers to build the tools that make wikis run better."
+
+
— Yuvi Panda, operations engineer at Wikimedia Foundation and Wikimedia Tool Labs
+
+
+
+
+
+
+
+
+
+
Challenges:
+
+
Simplify a complex, difficult-to-manage infrastructure
+
Allow developers to continue writing tools and bots using existing techniques
+
+
+
+
Why Kubernetes:
+
+
Wikimedia Tool Labs chose Kubernetes because it can mimic existing workflows, while reducing complexity
+
+
+
+
Approach:
+
+
Migrate old systems and a complex infrastructure to Kubernetes
+
+
+
+
Results:
+
+
20 percent of web tools that account for more than 40 percent of web traffic now run on Kubernetes
+
A 25-node cluster that keeps up with each new Kubernetes release
+
Thousands of lines of old code have been deleted, thanks to Kubernetes
+
+
+
+
+
+
+
+
+
+
Using Kubernetes to provide tools for maintaining wikis
+
+ Wikimedia Tool Labs is run by a staff of four-and-a-half paid employees and two volunteers. The infrastructure didn't make it easy or intuitive for developers to build bots and other tools to make wikis work more easily. Yuvi says, "It's incredibly chaotic. We have lots of Perl and Bash duct tape on top of it. Everything is super fragile."
+
+
+ To solve the problem, Wikimedia Tool Labs migrated parts of its infrastructure to Kubernetes, in preparation for eventually moving its entire system. Yuvi says Kubernetes greatly simplifies maintenance. The goal is to allow developers creating bots and other tools to use whatever development methods they want, but make it easier for Wikimedia Tool Labs to maintain the required infrastructure for hosting and sharing them.
+
+
+ "With Kubernetes, I've been able to remove a lot of our custom-made code, which makes everything easier to maintain. Our users' code also runs in a more stable way than previously," says Yuvi.
+
+
+
+
+
+
+
+
+
Simplifying infrastructure and keeping wikis running better
+
+ Wikimedia Tool Labs has seen great success with the initial Kubernetes deployment. Old code is being simplified and eliminated, contributing developers don't have to change the way they write their tools and bots, and those tools and bots run in a more stable fashion than they have in the past. The paid staff and volunteers are able to better keep up with fixing issues.
+
+
+ In the future, with a more complete migration to Kubernetes, Wikimedia Tool Labs expects to make it even easier to host and maintain the bots and tools that help run wikis across the world. The tool labs already hosts approximately 1,300 tools and bots from 800 volunteers, with many more being submitted every day. Twenty percent of the tool labs' web tools that account for more than 60 percent of web traffic now run on Kubernetes. The tool labs has a 25-node cluster that keeps up with each new Kubernetes release. Many existing web tools are migrating to Kubernetes.
+
+
+ "Our goal is to make sure that people all over the world can share knowledge as easily as possible. Kubernetes helps with that, by making it easier for wikis everywhere to have the tools they need to thrive," says Yuvi.
+
+
+
+
diff --git a/content/ko/case-studies/wikimedia/wikimedia_featured.png b/content/ko/case-studies/wikimedia/wikimedia_featured.png
new file mode 100644
index 0000000000000..7b1f89ac98490
Binary files /dev/null and b/content/ko/case-studies/wikimedia/wikimedia_featured.png differ
diff --git a/content/ko/case-studies/wikimedia/wikimedia_logo.png b/content/ko/case-studies/wikimedia/wikimedia_logo.png
new file mode 100644
index 0000000000000..3ad5b63034204
Binary files /dev/null and b/content/ko/case-studies/wikimedia/wikimedia_logo.png differ
diff --git a/content/ko/case-studies/wink/index.html b/content/ko/case-studies/wink/index.html
new file mode 100644
index 0000000000000..3f47f8c779d42
--- /dev/null
+++ b/content/ko/case-studies/wink/index.html
@@ -0,0 +1,109 @@
+---
+title: Wink Case Study
+
+case_study_styles: true
+cid: caseStudies
+css: /css/style_wink.css
+---
+
+
+
CASE STUDY:
+
Cloud-Native Infrastructure Keeps Your Smart Home Connected
+
+
+
+
+
+ Company Wink Location New York, N.Y. Industry Internet of Things Platform
+
+
+
+
+
+
+
+
+
Challenge
+ Building a low-latency, highly reliable infrastructure to serve communications between millions of connected smart-home devices and the company’s consumer hubs and mobile app, with an emphasis on horizontal scalability, the ability to encrypt everything quickly and connections that could be easily brought back up if anything went wrong.
+
+
Solution
+ Across-the-board use of a Kubernetes-Docker-CoreOS Container Linux stack.
+
+
+
+
Impact
+ "Two of the biggest American retailers [Home Depot and Walmart] are carrying and promoting the brand and the hardware,” Wink Head of Engineering Kit Klein says proudly – though he adds that "it really comes with a lot of pressure. It’s not a retail situation where you have a lot of tech enthusiasts. These are everyday people who want something that works and have no tolerance for technical excuses.” And that’s further testament to how much faith Klein has in the infrastructure that the Wink team has built. With 80 percent of Wink’s workload running on a unified stack of Kubernetes-Docker-CoreOS, the company has put itself in a position to continually innovate and improve its products and services. Committing to this technology, says Klein, "makes building on top of the infrastructure relatively easy.”
+
+
+
+
+
+
+
+
+ "It’s not proprietary, it’s totally open, it’s really portable. You can run all the workloads across different cloud providers. You can easily run a hybrid AWS or even bring in your own data center. That’s the benefit of having everything unified on one open source Kubernetes-Docker-CoreOS Container Linux stack. There are massive security benefits if you only have one Linux distro/machine image to validate. The benefits are enormous because you save money, and you save time.”
- KIT KLEIN, HEAD OF ENGINEERING, WINK
+
+
+
+
+
+
+
How many people does it take to turn on a light bulb?
+
+ Kit Klein whips out his phone to demonstrate. With a few swipes, the head of engineering at Wink pulls up the smart-home app created by the New York City-based company and taps the light button. "Honestly when you’re holding the phone and you’re hitting the light," he says, "by the time you feel the pressure of your finger on the screen, it’s on. It takes as long as the signal to travel to your brain."
+ Sure, it takes just one finger and less than 200 milliseconds to turn on the light – or lock a door or change a thermostat. But what allows Wink to help consumers manage their connected smart-home products with such speed and ease is a sophisticated, cloud native infrastructure that Klein and his team built and continue to develop using a unified stack of CoreOS, the open-source operating system designed for clustered deployments, and Kubernetes, an open-source platform for automating deployment, scaling, and operations of application containers across clusters of hosts, providing container-centric infrastructure. "When you have a big, complex network of interdependent microservices that need to be able to discover each other, and need to be horizontally scalable and tolerant to failure, that’s what this is really optimized for," says Klein. "A lot of people end up relying on proprietary services [offered by some big cloud providers] to do some of this stuff, but what you get by adopting CoreOS/Kubernetes is portability, to not be locked in to anyone. You can really make your own fate."
+ Indeed, Wink did. The company’s mission statement is to make the connected home accessible – that is, user-friendly for non-technical owners, affordable and perhaps most importantly, reliable. "If you can’t trust that when you hit the switch, you know a light is going to go on, or if you’re remote and you’re checking on your house and that information isn’t accurate, then the convenience of the system is lost," says Klein. "So that’s where the infrastructure comes in."
+ Wink was incubated within Quirky, a company that developed crowd-sourced inventions. The Wink app was first introduced in 2013, and at the time, it controlled only a few consumer products such as the PivotPower Strip that Quirky produced in collaboration with GE. As smart-home products proliferated, Wink was launched in 2014 in Home Depot stores nationwide. Its first project: a hub that could integrate with smart products from about a dozen brands like Honeywell and Chamberlain. The biggest challenge would be to build the infrastructure to serve all those communications between the hub and the products, with a focus on maximizing reliability and minimizing latency.
+ "When we originally started out, we were moving very fast trying to get the first product to market, the minimum viable product,” says Klein. "Lots of times you go down a path and end up having to backtrack and try different things. But in this particular case, we did a lot of the work up front, which led to us making a really sound decision to deploy it on CoreOS Container Linux. And that was very early in the life of it.”
+
+
+
+
+
+
+ "...what you get by adopting CoreOS/Kubernetes is portability, to not be locked in to anyone. You can really make your own fate.”
+
+
+
+
+
+ Concern number one: Wink’s products need to connect to consumer devices in people’s homes, behind a firewall. "You don’t have an end point like a URL, and you don’t even know what ports are open behind that firewall," Klein explains. "So you essentially need to have this thing wake up and talk to your system and then open real-time, bidirectional communication between the cloud and the device. And it’s really, really important that it’s persistent because you want to decrease as much as possible the overhead of sending a message – you never know when someone is going to turn on the lights."
+ With the earliest version of the Wink Hub, when you decided to turn your lights on or off, the request would be sent to the cloud and then executed. Subsequent updates to Wink’s software enabled local control, cutting latency down to about 10 milliseconds for many devices. But with the need for cloud-enabled integrations of an ever-growing ecosystem of smart home products, low-latency internet connectivity is still a critical consideration.
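+ To make the pattern concrete, here is a toy sketch of a hub dialing out through the firewall and holding a persistent, bidirectional connection open; the host, port, and wire protocol are invented for illustration and bear no relation to Wink's actual service.
+
```python
# Toy sketch: the hub dials *out* through the firewall and keeps one
# persistent connection open so the cloud can push commands at any time.
# Host, port, and protocol are invented for illustration.
import asyncio

async def hub_connection(host: str = "cloud.example.com", port: int = 8883):
    reader, writer = await asyncio.open_connection(host, port)  # outbound dial
    writer.write(b"HELLO hub-1234\n")  # identify this hub to the service
    await writer.drain()
    while True:  # stay connected; low per-message overhead
        command = await reader.readline()  # e.g. b"light-42 ON\n"
        if not command:
            break  # connection dropped; caller should reconnect with backoff
        print("executing:", command.decode().strip())

# asyncio.run(hub_connection())  # runs until the connection closes
```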
+
+
"You essentially need to have this thing wake up and talk to your system and then open real-time, bidirectional communication between the cloud and the device. And it’s really, really important that it’s persistent...you never know when someone is going to turn on the lights.”
+ In addition, Wink had other requirements: horizontal scalability, the ability to encrypt everything quickly, connections that could be easily brought back up if something went wrong. "Looking at this whole structure we started, we decided to make a secure socket-based service,” says Klein. "We’ve always used, I would say, some sort of clustering technology to deploy our services and so the decision we came to was, this thing is going to be containerized, running on Docker.”
+ At the time – just over two years ago – Docker wasn’t yet widely used, but as Klein points out, "it was certainly understood by the people who were on the frontier of technology. We started looking at potential technologies that existed. One of the limiting factors was that we needed to deploy multi-port non-http/https services. It wasn’t really appropriate for some of the early cluster technology. We liked the project a lot and we ended up using it on other stuff for a while, but initially it was too targeted toward http workloads.”
+ Once Wink’s backend engineering team decided on a Dockerized workload, they had to make decisions about the OS and the container orchestration platform. "Obviously you can’t just start the containers and hope everything goes well,” Klein says with a laugh. "You need to have a system that is helpful [in order] to manage where the workloads are being distributed out to. And when the container inevitably dies or something like that, to restart it, you have a load balancer. All sorts of housekeeping work is needed to have a robust infrastructure.”
+
+
+
+
+
+
+ "Obviously you can’t just start the containers and hope everything goes well,” Klein says with a laugh. "You need to have a system that is helpful [in order] to manage where the workloads are being distributed out to. And when the container inevitably dies or something like that, to restart it, you have a load balancer. All sorts of housekeeping work is needed to have a robust infrastructure.”
+
+
+
+
+
+ Wink considered building directly on a general purpose Linux distro like Ubuntu (which would have required installing tools to run a containerized workload) and cluster management systems like Mesos (which was targeted toward enterprises with larger teams/workloads), but ultimately set their sights on CoreOS Container Linux. "A container-optimized Linux distribution system was exactly what we needed," he says. "We didn’t have to futz around with trying to take something like a Linux distro and install everything. It’s got a built-in container orchestration system, which is Fleet, and an easy-to-use API. It’s not as feature-rich as some of the heavier solutions, but we realized that, at that moment, it was exactly what we needed."
+ Wink’s hub (along with a revamped app) was introduced in July 2014 with a short-term deployment, and within the first month, they had moved the service to the Dockerized CoreOS deployment. Since then, they’ve moved almost every other piece of their infrastructure – from third-party cloud-to-cloud integrations to their customer service and payment portals – onto CoreOS Container Linux clusters.
+ Using this setup did require some customization. "Fleet is really nice as a basic container orchestration system, but it doesn’t take care of routing, sharing configurations, secrets, et cetera, among instances of a service," Klein says. "All of those layers of functionality can be implemented, of course, but if you don’t want to spend a lot of time writing unit files manually – which of course nobody does – you need to create a tool to automate some of that, which we did."
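+ Wink's tool itself is not public, but generating fleet unit files from a template, rather than writing them by hand, can be sketched like this; the service name, image, and port are placeholders.
+
```python
# Placeholder sketch: render a fleet unit file from a template instead of
# writing unit files by hand. Name, image, and port are illustrative.
UNIT_TEMPLATE = """\
[Unit]
Description={name}
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f {name}
ExecStart=/usr/bin/docker run --name {name} -p {port}:{port} {image}
ExecStop=/usr/bin/docker stop {name}

[X-Fleet]
Conflicts={name}@*.service
"""

def render_unit(name: str, image: str, port: int) -> str:
    return UNIT_TEMPLATE.format(name=name, image=image, port=port)

print(render_unit("socket-svc", "example/socket-svc:1.0", 9000))
```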
+ Wink quickly embraced the Kubernetes container cluster manager when it was launched in 2015 and integrated with CoreOS core technology, and as promised, it ended up providing the features Wink wanted and had planned to build. "If not for Kubernetes, we likely would have taken the logic and library we implemented for the automation tool that we created, and would have used it in a higher level abstraction and tool that could be used by non-DevOps engineers from the command line to create and manage clusters," Klein says. "But Kubernetes made that totally unnecessary – and is written and maintained by people with a lot more experience in cluster management than us, so all the better." Now, an estimated 80 percent of Wink’s workload is run on Kubernetes on top of CoreOS Container Linux.
+
+
+
+
+
+
+ "Stay close to the development. Understand why decisions are being made. If you understand the intent behind the project, from the technological intent to a certain philosophical intent, then it helps you understand how to build your system in harmony with those systems as opposed to trying to work against it.”
+
+
+
+
+
+ Wink’s reasons for going all in are clear: "It’s not proprietary, it’s totally open, it’s really portable," Klein says. "You can run all the workloads across different cloud providers. You can easily run a hybrid AWS or even bring in your own data center. That’s the benefit of having everything unified on one Kubernetes-Docker-CoreOS Container Linux stack. There are massive security benefits if you only have one Linux distro to try to validate. The benefits are enormous because you save money, you save time."
+ Klein concedes that there are tradeoffs in every technology decision. "Cutting-edge technology is going to be scary for some people," he says. "In order to take advantage of this, you really have to keep up with the technology. You can’t treat it like it’s a black box. Stay close to the development. Understand why decisions are being made. If you understand the intent behind the project, from the technological intent to a certain philosophical intent, then it helps you understand how to build your system in harmony with those systems as opposed to trying to work against it."
+ Wink, which was acquired by Flex in 2015, now controls 2.3 million connected devices in households all over the country. What’s next for the company? A new version of the hub, Wink Hub 2, hit shelves last November and is being offered for the first time at Walmart stores in addition to Home Depot. "Two of the biggest American retailers are carrying and promoting the brand and the hardware," Klein says proudly – though he adds that "it really comes with a lot of pressure. It’s not a retail situation where you have a lot of tech enthusiasts. These are everyday people who want something that works and have no tolerance for technical excuses." And that’s further testament to how much faith Klein has in the infrastructure that the Wink team has built.
+ Wink’s engineering team has grown exponentially since its early days, and behind the scenes, Klein is most excited about the machine learning Wink is using. "We built [a system of] containerized small sections of the data pipeline that feed each other and can have multiple outputs," he says. "It’s like data pipelines as microservices." Again, Klein points to having a unified stack running on CoreOS Container Linux and Kubernetes as the primary driver for the innovations to come. "You’re not reinventing the wheel every time," he says. "You can just get down to work."
+
diff --git a/content/ko/case-studies/wink/wink_featured.png b/content/ko/case-studies/wink/wink_featured.png
new file mode 100644
index 0000000000000..3c01133bef701
Binary files /dev/null and b/content/ko/case-studies/wink/wink_featured.png differ
diff --git a/content/ko/case-studies/wink/wink_logo.png b/content/ko/case-studies/wink/wink_logo.png
new file mode 100644
index 0000000000000..ef2ee30bf3a46
Binary files /dev/null and b/content/ko/case-studies/wink/wink_logo.png differ
diff --git a/content/ko/case-studies/workiva/index.html b/content/ko/case-studies/workiva/index.html
new file mode 100644
index 0000000000000..95f323d5ae395
--- /dev/null
+++ b/content/ko/case-studies/workiva/index.html
@@ -0,0 +1,113 @@
+---
+title: Workiva Case Study
+linkTitle: Workiva
+case_study_styles: true
+cid: caseStudies
+css: /css/style_case_studies.css
+draft: true
+featured: true
+weight: 20
+quote: >
+ With OpenTracing, my team was able to look at a trace and make optimization suggestions to another team without ever looking at their code.
+---
+
+
+
CASE STUDY:
Using OpenTracing to Help Pinpoint the Bottlenecks
+
+
+
+
+
+
+ Company Workiva Location Ames, Iowa Industry Enterprise Software
+
+
+
+
+
+
+
Challenge
+ Workiva offers a cloud-based platform for managing and reporting business data. This SaaS product, Wdesk, is used by more than 70 percent of the Fortune 500 companies. As the company made the shift from a monolith to a more distributed, microservice-based system, "We had a number of people working on this, all on different teams, so we needed to identify what the issues were and where the bottlenecks were," says Senior Software Architect MacLeod Broad. With back-end code running on Google App Engine, Google Compute Engine, as well as Amazon Web Services, Workiva needed a tracing system that was agnostic of platform. While preparing one of the company’s first products utilizing AWS, which involved a "sync and link" feature that linked data from spreadsheets built in the new application with documents created in the old application on Workiva’s existing system, Broad’s team found an ideal use case for tracing: There were circular dependencies, and optimizations often turned out to be micro-optimizations that didn’t impact overall speed.
+
+
+
+
+
+
+
Solution
+ Broad’s team introduced the platform-agnostic distributed tracing system OpenTracing to help them pinpoint the bottlenecks.
+
+
Impact
+ Now used throughout the company, OpenTracing produced immediate results. Software Engineer Michael Davis reports: "Tracing has given us immediate, actionable insight into how to improve our service. Through a combination of seeing where each call spends its time, as well as which calls are most often used, we were able to reduce our average response time by 95 percent (from 600ms to 30ms) in a single fix."
+
+
+
+
+
+
+
+"With OpenTracing, my team was able to look at a trace and make optimization suggestions to another team without ever looking at their code." — MacLeod Broad, Senior Software Architect at Workiva
+
+
+
+
+
+
Last fall, MacLeod Broad’s platform team at Workiva was prepping one of the company’s first products utilizing Amazon Web Services when they ran into a roadblock.
+ Early on, Workiva’s backend had run mostly on Google App Engine. But things changed along the way as Workiva’s SaaS offering, Wdesk, a cloud-based platform for managing and reporting business data, grew its customer base to more than 70 percent of the Fortune 500 companies. "As customer needs grew and the product offering expanded, we started to leverage a wider offering of services such as Amazon Web Services as well as other Google Cloud Platform services, creating a multi-vendor environment."
+With this new product, there was a "sync and link" feature by which data "went through a whole host of services starting with the new spreadsheet system [Amazon Aurora] into what we called our linking system, and then pushed through http to our existing system, and then a number of calculations would go on, and the results would be transmitted back into the new system," says Broad. "We were trying to optimize that for speed. We thought we had made this great optimization and then it would turn out to be a micro optimization, which didn’t really affect the overall speed of things."
+The challenges faced by Broad’s team may sound familiar to other companies that have also made the shift from monoliths to more distributed, microservice-based systems. "We had a number of people working on this, all on different teams, so it was difficult to get our head around what the issues were and where the bottlenecks were," says Broad.
+ "Each service team was going through different iterations of their architecture and it was very hard to follow what was actually going on in each teams’ system," he adds. "We had circular dependencies where we’d have three or four different service teams unsure of where the issues really were, requiring a lot of back and forth communication. So we wasted a lot of time saying, ‘What part of this is slow? Which part of this is sometimes slow depending on the use case? Which part is degrading over time? Which part of this process is asynchronous so it doesn’t really matter if it’s long-running or not? What are we doing that’s redundant, and which part of this is buggy?’"
+
+
+
+
+
+
+ "A tracing system can at a glance explain an architecture, narrow down a performance bottleneck and zero in on it, and generally just help direct an investigation at a high level. Being able to do that at a glance is much faster than at a meeting or with three days of debugging, and it’s a lot faster than never figuring out the problem and just moving on." — MACLEOD BROAD, SENIOR SOFTWARE ARCHITECT AT WORKIVA
+
+
+
+
+
+Simply put, it was an ideal use case for tracing. "A tracing system can at a glance explain an architecture, narrow down a performance bottleneck and zero in on it, and generally just help direct an investigation at a high level," says Broad. "Being able to do that at a glance is much faster than at a meeting or with three days of debugging, and it’s a lot faster than never figuring out the problem and just moving on."
+With Workiva’s back-end code running on Google Compute Engine as well as App Engine and AWS, Broad knew that he needed a tracing system that was platform agnostic. "We were looking at different tracing solutions," he says, "and we decided that because it seemed to be a very evolving market, we didn’t want to get stuck with one vendor. So OpenTracing seemed like the cleanest way to avoid vendor lock-in on what backend we actually had to use."
+Once they introduced OpenTracing into this first use case, Broad says, "The trace made it super obvious where the bottlenecks were." Even though everyone had assumed it was Workiva’s existing code that was slowing things down, that wasn’t exactly the case. "It looked like the existing code was slow only because it was reaching out to our next-generation services, and they were taking a very long time to service all those requests," says Broad. "On the waterfall graph you can see the exact same work being done on every request when it was calling back in. So every service request would look the exact same for every response being paged out. And then it was just a no-brainer of, ‘Why is it doing all this work again?’"
+Using the insight OpenTracing gave them, "My team was able to look at a trace and make optimization suggestions to another team without ever looking at their code," says Broad. "The way we named our traces gave us insight whether it’s doing a SQL call or it’s making an RPC. And so it was really easy to say, ‘OK, we know that it’s going to page through all these requests. Do the work once and stuff it in cache.’ And we were done basically. All those calls became sub-second calls immediately."
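+As a minimal sketch of this style of instrumentation (span names that reveal whether an operation is a SQL call or an RPC), the following uses the opentracing Python package's no-op tracer; the operation names and structure are illustrative, not Workiva's code.
+
```python
# Hedged sketch of OpenTracing-style instrumentation with names that carry
# the operation type (SQL vs. RPC). Uses the opentracing package's no-op
# tracer; a real application would install a concrete tracer backend.
import opentracing

tracer = opentracing.tracer  # no-op unless a concrete tracer is installed

def fetch_linked_rows(page: int):
    with tracer.start_active_span("sql.select_linked_rows") as scope:
        scope.span.set_tag("page", page)
        return []  # placeholder for the real query

def sync_and_link(pages: int):
    with tracer.start_active_span("rpc.sync_and_link") as scope:
        scope.span.set_tag("pages", pages)
        for page in range(pages):
            fetch_linked_rows(page)  # shows up as a child span per page

sync_and_link(pages=3)
```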
+
+
+
+
+
+
+
+"We were looking at different tracing solutions and we decided that because it seemed to be a very evolving market, we didn’t want to get stuck with one vendor. So OpenTracing seemed like the cleanest way to avoid vendor lock-in on what backend we actually had to use." — MACLEOD BROAD, SENIOR SOFTWARE ARCHITECT AT WORKIVA
+
+
+
+
+
+ After the success of the first use case, everyone involved in the trial went back and fully instrumented their products. Tracing was added to a few more use cases. "We wanted to get through the initial implementation pains early without bringing the whole department along for the ride," says Broad. "Now, a lot of teams add it when they’re starting up a new service. We’re really pushing adoption now more than we were before."
+Some teams were won over quickly. "Tracing has given us immediate, actionable insight into how to improve our [Workspaces] service," says Software Engineer Michael Davis. "Through a combination of seeing where each call spends its time, as well as which calls are most often used, we were able to reduce our average response time by 95 percent (from 600ms to 30ms) in a single fix."
+Most of Workiva’s major products are now traced using OpenTracing, with data pushed into Google StackDriver. Even the products that aren’t fully traced have some components and libraries that are.
+Broad points out that because some of the engineers were working on App Engine and already had experience with the platform’s Appstats library for profiling performance, it didn’t take much to get them used to using OpenTracing. But others were a little more reluctant. "The biggest hindrance to adoption I think has been the concern about how much latency is introducing tracing [and StackDriver] going to cost," he says. "People are also very concerned about adding middleware to whatever they’re working on. Questions about passing the context around and how that’s done were common. A lot of our Go developers were fine with it, because they were already doing that in one form or another. Our Java developers were not super keen on doing that because they’d used other systems that didn’t require that."
+But the benefits clearly outweighed the concerns, and today, Workiva’s official policy is to use tracing.
+In fact, Broad believes that tracing naturally fits in with Workiva’s existing logging and metrics systems. "This was the way we presented it internally, and also the way we designed our use," he says. "Our traces are logged in the exact same mechanism as our app metric and logging data, and they get pushed the exact same way. So we treat all that data exactly the same when it’s being created and when it’s being recorded. We have one internal library that we use for logging, telemetry, analytics and tracing."
+
+
+
+
+
+
+ "Tracing has given us immediate, actionable insight into how to improve our [Workspaces] service. Through a combination of seeing where each call spends its time, as well as which calls are most often used, we were able to reduce our average response time by 95 percent (from 600ms to 30ms) in a single fix." — Michael Davis, Software Engineer, Workiva
+
+
+
+
+ For Workiva, OpenTracing has become an essential tool for zeroing in on optimizations and determining what’s actually a micro-optimization by observing usage patterns. "On some projects we often assume what the customer is doing, and we optimize for these crazy scale cases that we hit 1 percent of the time," says Broad. "It’s been really helpful to be able to say, ‘OK, we’re adding 100 milliseconds on every request that does X, and we only need to add that 100 milliseconds if it’s the worst of the worst case, which only happens one out of a thousand requests or one out of a million requests.’"
+Unlike many other companies, Workiva also traces the client side. "For us, the user experience is important—it doesn’t matter if the RPC takes 100 milliseconds if it still takes 5 seconds to do the rendering to show it in the browser," says Broad. "So for us, those client times are important. We trace it to see what parts of loading take a long time. We’re in the middle of working on a definition of what is ‘loaded.’ Is it when you have it, or when it’s rendered, or when you can interact with it? Those are things we’re planning to use tracing for to keep an eye on and to better understand."
+That also requires adjusting for differences in external and internal clocks. "Before time correcting, it was horrible; our traces were more misleading than anything," says Broad. "So we decided that we would return a timestamp on the response headers, and then have the client reorient its time based on that—not change its internal clock but just calculate the offset on the response time to when the client got it. And if you end up in an impossible situation where a client RPC spans 210 milliseconds but the time on the response time is outside of that window, then we have to reorient that."
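+The reorientation Broad describes amounts to simple offset arithmetic; a minimal sketch, assuming the server returns its timestamp in a response header, follows.
+
```python
# Minimal sketch of the client-side reorientation: compute an offset from a
# server timestamp carried in a response header and apply it to span times,
# without touching the client's actual clock. Values are illustrative.
def clock_offset(server_header_ts: float, client_recv_ts: float) -> float:
    """Seconds to add to client timestamps to line them up with server time."""
    return server_header_ts - client_recv_ts

offset = clock_offset(server_header_ts=1700000000.300,
                      client_recv_ts=1700000000.100)
client_span_start = 1700000000.050
print("corrected span start:", client_span_start + offset)
```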
+Broad is excited about the impact OpenTracing has already had on the company, and is also looking ahead to what else the technology can enable. One possibility is using tracing to update documentation in real time. "Keeping documentation up to date with reality is a big challenge," he says. "Say, we just ran a trace simulation or we just ran a smoke test on this new deploy, and the architecture doesn’t match the documentation. We can find whose responsibility it is and let them know and have them update it. That’s one of the places I’d like to get in the future with tracing."
+
+
+
+
diff --git a/content/ko/case-studies/workiva/workiva_featured_logo.png b/content/ko/case-studies/workiva/workiva_featured_logo.png
new file mode 100644
index 0000000000000..9998b471049e5
Binary files /dev/null and b/content/ko/case-studies/workiva/workiva_featured_logo.png differ
diff --git a/content/ko/case-studies/yahoo-japan/index.html b/content/ko/case-studies/yahoo-japan/index.html
new file mode 100644
index 0000000000000..724a41ae01558
--- /dev/null
+++ b/content/ko/case-studies/yahoo-japan/index.html
@@ -0,0 +1,4 @@
+---
+title: Yahoo! Japan
+content_url: https://kubernetes.io/blog/2016/10/kubernetes-and-openstack-at-yahoo-japan
+---
\ No newline at end of file
diff --git a/content/ko/case-studies/yahoo-japan/yahooJapan_logo.png b/content/ko/case-studies/yahoo-japan/yahooJapan_logo.png
new file mode 100644
index 0000000000000..3cbea39598954
Binary files /dev/null and b/content/ko/case-studies/yahoo-japan/yahooJapan_logo.png differ
diff --git a/content/ko/case-studies/ygrene/index.html b/content/ko/case-studies/ygrene/index.html
new file mode 100644
index 0000000000000..6f15b7fe9b73f
--- /dev/null
+++ b/content/ko/case-studies/ygrene/index.html
@@ -0,0 +1,111 @@
+---
+title: Ygrene Case Study
+
+linkTitle: Ygrene
+case_study_styles: true
+cid: caseStudies
+css: /css/style_case_studies.css
+logo: ygrene_featured_logo.png
+featured: true
+weight: 48
+quote: >
+ We had to change some practices and code, and the way things were built, but we were able to get our main systems onto Kubernetes in a month or so, and then into production within two months. That’s very fast for a finance company.
+---
+
+
+
CASE STUDY:
Ygrene: Using Cloud Native to Bring Security and Scalability to the Finance Industry
+
+
+
+
+
+
+ Company Ygrene Location Petaluma, Calif. Industry Clean energy financing
+
+
+
+
+
+
+
Challenge
+ A PACE (Property Assessed Clean Energy) financing company, Ygrene has funded more than $1 billion in loans since 2010. In order to approve and process those loans, "We have lots of data sources that are being aggregated, and we also have lots of systems that need to churn on that data," says Ygrene Development Manager Austin Adams. The company was utilizing massive servers, and "we just reached the limit of being able to scale them vertically. We had a really unstable system that became overwhelmed with requests just for doing background data processing in real time. The performance the users saw was very poor. We needed a solution that wouldn’t require us to make huge refactors to the code base." As a finance company, Ygrene also needed to ensure that they were shipping their applications securely.
+
+
+
+
Solution
+ Moving from an Engine Yard platform and Amazon Elastic Beanstalk, the Ygrene team embraced cloud native technologies and practices: Kubernetes to help scale out horizontally and distribute workloads, Notary to put in build-time controls and get trust on the Docker images being used with third-party dependencies, and Fluentd for "observing every part of our stack," all running on Amazon EC2 Spot.
+
+
+
+
+
+
Impact
+ Before, deployments typically took three to four hours, and two or three months’ worth of work would be deployed at low-traffic times every week or two weeks. Now, they take five minutes for Kubernetes, and an hour for the overall deploy with smoke testing. And "we’re able to deploy three or four times a week, with just one week’s or two days’ worth of work," Adams says. "We’re deploying during the work week, in the daytime and without any downtime. We had to ask for business approval to take the systems down, even in the middle of the night, because people could be doing loans. Now we can deploy, ship code, and migrate databases, all without taking the system down. The company gets new features without worrying that some business will be lost or delayed." Additionally, by using the kops project, Ygrene can now run its Kubernetes clusters with AWS EC2 Spot, at a tenth of the previous cost. These cloud native technologies have "changed the game for scalability, observability, and security—we’re adding new data sources that are very secure," says Adams. "Without Kubernetes, Notary, and Fluentd, we couldn’t tell our investors and team members that we knew what was going on."
+
+
+
+
+
+
+
+
+"CNCF projects are helping Ygrene determine the security and observability standards for the entire PACE industry. We’re an emerging finance industry, and without these projects, especially Kubernetes, we couldn’t be the industry leader that we are today."
— Austin Adams, Development Manager, Ygrene Energy Fund
+
+
+
+
+
+
In less than a decade, Ygrene has funded more than $1 billion in loans for renewable energy projects.
A PACE (Property Assessed Clean Energy) financing company, "We take the equity in a home or a commercial building, and use it to finance property improvements for anything that saves electricity, produces electricity, saves water, or reduces carbon emissions," says Development Manager Austin Adams.
+In order to approve those loans, the company processes an enormous amount of underwriting data. "We have tons of different points that we have to validate about the property, about the company, or about the person," Adams says. "So we have lots of data sources that are being aggregated, and we also have lots of systems that need to churn on that data in real time."
+By 2017, deployments and scalability had become pain points. The company was utilizing massive servers, and "we just reached the limit of being able to scale them vertically," he says. Migrating to AWS Elastic Beanstalk didn’t solve the problem: "The Scala services needed a lot of data from the main Ruby on Rails services and from different vendors, so they were asking for information from our Ruby services at a rate that those services couldn’t handle. We had lots of configuration misses with Elastic Beanstalk as well. It just came to a head, and we realized we had a really unstable system."
+
+
+
+
+
+ "CNCF has been an amazing incubator for so many projects. Now we look at its webpage regularly to find out if there are any new, awesome, high-quality projects we can implement into our stack. It’s actually become a hub for us for knowing what software we need to be looking at to make our systems more secure or more scalable."
— Austin Adams, Development Manager, Ygrene Energy Fund
+
+
+
+
+
+Adams, along with the rest of the team, set out to find a solution that would be transformational, but "wouldn’t require us to make huge refactors to the code base," he says. And as a finance company, Ygrene needed security as much as scalability. They found the answer by embracing cloud native technologies: Kubernetes to help scale out horizontally and distribute workloads, Notary to achieve reliable security at every level, and Fluentd for observability. "Kubernetes was where the community was going, and we wanted to be future proof," says Adams.
+With Kubernetes, the team was able to quickly containerize the Ygrene application with Docker. "We had to change some practices and code, and the way things were built," Adams says, "but we were able to get our main systems onto Kubernetes in a month or so, and then into production within two months. That’s very fast for a finance company."
+How? Cloud native has "changed the game for scalability, observability, and security—we’re adding new data sources that are very secure," says Adams. "Without Kubernetes, Notary, and Fluentd, we couldn’t tell our investors and team members that we knew what was going on."
+Notary, in particular, "has been a godsend," says Adams. "We need to know that our attack surface on third-party dependencies is low, or at least managed. We use it as a trust system and we also use it as a separation, so production images are signed by Notary, but some development images we don’t sign. That is to ensure that they can’t get into the production cluster. We’ve been using it in the test cluster to feel more secure about our builds."
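+One common way to realize "production images are signed, development images are not" is Docker Content Trust, which signs pushes via Notary; the sketch below is illustrative only, and the registry and tags are placeholders, not Ygrene's setup.
+
```python
# Illustrative only: push production images with Docker Content Trust
# enabled (signing via Notary), and development images without it.
# Registry and tags are placeholders.
import os
import subprocess

def push(image: str, production: bool) -> None:
    env = dict(os.environ)
    env["DOCKER_CONTENT_TRUST"] = "1" if production else "0"
    subprocess.run(["docker", "push", image], env=env, check=True)

push("registry.example.com/loans-api:1.4.2", production=True)        # signed
push("registry.example.com/loans-api:dev-latest", production=False)  # unsigned
```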
+
+
+
+
+
+
+
+"We had to change some practices and code, and the way things were built," Adams says, "but we were able to get our main systems onto Kubernetes in a month or so, and then into production within two months. That’s very fast for a finance company."
+
+
+
+
+
+ By using the kops project, Ygrene was able to move from Elastic Beanstalk to running its Kubernetes clusters on AWS EC2 Spot, at a tenth of the previous cost. "In order to scale before, we would need to up our instance sizes, incurring high cost for low value," says Adams. "Now with Kubernetes and kops, we are able to scale horizontally on Spot with multiple instance groups."
+That also helped them mitigate the risk that comes with running in the public cloud. "We figured out, essentially, that if we’re able to select instance classes using EC2 Spot that had an extremely low likelihood of interruption and zero history of interruption, and we’re willing to pay a price high enough, that we could virtually get the same guarantee using Kubernetes because we have enough nodes," says Software Engineer Zach Arnold, who led the migration to Kubernetes. "Now that we’ve re-architected these pieces of the application to not live on the same server, we can push out to many different servers and have a more stable deployment."
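+In kops, that approach maps to instance groups whose spec sets a maxPrice, which marks the group as Spot; the manifest sketched below uses placeholder names and values, not Ygrene's configuration.
+
```python
# Placeholder sketch of a kops InstanceGroup (kops.k8s.io/v1alpha2) for Spot
# capacity: setting spec.maxPrice marks the group as Spot. Names, sizes,
# instance type, and bid are illustrative.
spot_nodes = {
    "apiVersion": "kops.k8s.io/v1alpha2",
    "kind": "InstanceGroup",
    "metadata": {
        "name": "spot-nodes-a",
        "labels": {"kops.k8s.io/cluster": "prod.example.com"},
    },
    "spec": {
        "role": "Node",
        "machineType": "m5.xlarge",  # pick classes with low interruption history
        "minSize": 4,
        "maxSize": 12,
        "maxPrice": "0.25",          # Spot bid; omit for on-demand
        "subnets": ["us-east-1a"],
    },
}
# Typically written to a manifest and applied with
# `kops create -f` followed by `kops update cluster --yes`.
```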
+As a result, the team can now ship code any time of day. "That was risky because it could bring down your whole loan management software with it," says Arnold. "But we now can deploy safely and securely during the day."
+
+
+
+
+
+
+ "In order to scale before, we would need to up our instance sizes, incurring high cost for low value," says Adams. "Now with Kubernetes and kops, we are able to scale horizontally on Spot with multiple instance groups."
+
+
+
+
+ Before, deployments typically took three to four hours, and two or three months’ worth of work would be deployed at low-traffic times every week or two weeks. Now, they take five minutes for Kubernetes, and an hour for an overall deploy with smoke testing. And "we’re able to deploy three or four times a week, with just one week’s or two days’ worth of work," Adams says. "We’re deploying during the work week, in the daytime and without any downtime. We had to ask for business approval to take the systems down for 30 minutes to an hour, even in the middle of the night, because people could be doing loans. Now we can deploy, ship code, and migrate databases, all without taking the system down. The company gets new features without worrying that some business will be lost or delayed."
+Cloud native also affected how Ygrene’s 50+ developers and contractors work. Adams and Arnold spent considerable time "teaching people to think distributed out of the box," says Arnold. "We ended up picking what we call the Four S’s of Shipping: safely, securely, stably, and speedily." (For more on the security piece of it, see their article on their "continuous hacking" strategy.) As for the engineers, says Adams, "they have been able to advance as their software has advanced. I think that at the end of the day, the developers feel better about what they’re doing, and they also feel more connected to the modern software development community."
+Looking ahead, Adams is excited to explore more CNCF projects, including SPIFFE and SPIRE. "CNCF has been an amazing incubator for so many projects," he says. "Now we look at its webpage regularly to find out if there are any new, awesome, high-quality projects we can implement into our stack. It’s actually become a hub for us for knowing what software we need to be looking at to make our systems more secure or more scalable."
+
+
+
+
+
+
diff --git a/content/ko/case-studies/ygrene/ygrene_featured_logo.png b/content/ko/case-studies/ygrene/ygrene_featured_logo.png
new file mode 100644
index 0000000000000..d0d69114784c8
Binary files /dev/null and b/content/ko/case-studies/ygrene/ygrene_featured_logo.png differ
diff --git a/content/ko/case-studies/zalando/index.html b/content/ko/case-studies/zalando/index.html
new file mode 100644
index 0000000000000..49bad6ff9e596
--- /dev/null
+++ b/content/ko/case-studies/zalando/index.html
@@ -0,0 +1,101 @@
+---
+title: Zalando Case Study
+
+case_study_styles: true
+cid: caseStudies
+css: /css/style_zalando.css
+---
+
+
+
CASE STUDY:
Europe’s Leading Online Fashion Platform Gets Radical with Cloud Native
+
+
+
+
+
+ Company Zalando Location Berlin, Germany Industry Online Fashion
+
+
+
+
+
+
+
Challenge
+ Zalando, Europe’s leading online fashion platform, has experienced exponential growth since it was founded in 2008. In 2015, with plans to further expand its original e-commerce site to include new services and products, Zalando embarked on a radical transformation resulting in autonomous self-organizing teams. This change required an infrastructure that could scale with the growth of the engineering organization. Zalando’s technology department began rewriting its applications to be cloud-ready and started moving its infrastructure from on-premise data centers to the cloud. While orchestration wasn’t immediately considered, as teams migrated to Amazon Web Services (AWS): "We saw the pain teams were having with infrastructure and CloudFormation on AWS," says Henning Jacobs, Head of Developer Productivity. "There’s still too much operational overhead for the teams and compliance." To provide better support, cluster management was brought into play.
+
+
+
+
+
Solution
+ The company now runs its Docker containers on AWS using Kubernetes orchestration.
+
+
Impact
+ With the old infrastructure "it was difficult to properly embrace new technologies, and DevOps teams were considered to be a bottleneck," says Jacobs. "Now, with this cloud infrastructure, they have this packaging format, which can contain anything that runs on the Linux kernel. This makes a lot of people pretty happy. The engineers love autonomy."
+
+
+
+
+
+ "We envision all Zalando delivery teams running their containerized applications on a state-of-the-art, reliable and scalable cluster infrastructure provided by Kubernetes." - Henning Jacobs, Head of Developer Productivity at Zalando
+
+
+
+
+
When Henning Jacobs arrived at Zalando in 2010, the company was just two years old with 180 employees running an online store for European shoppers to buy fashion items.
+ "It started as a PHP e-commerce site which was easy to get started with, but was not scaling with the business' needs" says Jacobs, Head of Developer Productivity at Zalando.
+ At that time, the company began expanding beyond its German origins into other European markets. Fast-forward to today and Zalando now has more than 14,000 employees, 3.6 billion euros in revenue for 2016 and operates across 15 countries. "With growth in all dimensions, and constant scaling, it has been a once-in-a-lifetime experience," he says.
+ Not to mention a unique opportunity for an infrastructure specialist like Jacobs. Just after he joined, the company began rewriting all their applications in-house. "That was generally our strategy," he says. "For example, we started with our own logistics warehouses but at first you don’t know how to do logistics software, so you have some vendor software. And then we replaced it with our own because with off-the-shelf software you’re not competitive. You need to optimize these processes based on your specific business needs."
+ In parallel to rewriting their applications, Zalando had set a goal of expanding beyond basic e-commerce to a platform offering multi-tenancy, a dramatic increase in assortments and styles, same-day delivery and even your own personal online stylist.
+ The need to scale ultimately led the company on a cloud-native journey. As did its embrace of a microservices-based software architecture that gives engineering teams more autonomy and ownership of projects. "This move to the cloud was necessary because in the data center you couldn’t have autonomous teams. You have the same infrastructure and it was very homogeneous, so you could only run your Java or Python app," Jacobs says.
+
+
+
+
+
+ "This move to the cloud was necessary because in the data center you couldn’t have autonomous teams. You have the same infrastructure and it was very homogeneous, so you could only run your Java or Python app."
+
+
+
+
+
+ Zalando began moving its infrastructure from two on-premise data centers to the cloud, requiring the migration of older applications for cloud-readiness. "We decided to have a clean break," says Jacobs. "Our Amazon Web Services infrastructure was set up like so: Every team has its own AWS account, which is completely isolated, meaning there’s no ‘lift and shift.’ You basically have to rewrite your application to make it cloud-ready even down to the persistence layer. We bravely went back to the drawing board and redid everything, first choosing Docker as a common containerization, then building the infrastructure from there."
+ The company decided to hold off on orchestration at the beginning, but as teams were migrated to AWS, "we saw the pain teams were having with infrastructure and Cloud Formation on AWS," says Jacobs.
+ Zalando’s 200+ autonomous engineering teams decided what technologies to use and operated their own applications using their own AWS accounts. This setup proved to be a compliance challenge. Even with strict rules-of-play and automated compliance checks in place, engineering teams and IT-compliance were overburdened addressing compliance issues. "Violations appear for non-compliant behavior, which we detect when scanning the cloud infrastructure," says Jacobs. "Everything is possible and nothing enforced, so you have to live with violations (and resolve them) instead of preventing the error in the first place. This means overhead for teams—and overhead for compliance and operations. It also takes time to spin up new EC2 instances on AWS, which affects our deployment velocity."
+ The team realized they needed to "leverage the value you get from cluster management," says Jacobs. When they first looked at Platform as a Service (PaaS) options in 2015, the market was fragmented; but "now there seems to be a clear winner. It seemed like a good bet to go with Kubernetes."
+ The transition to Kubernetes started in 2016 during Zalando’s Hack Week, where participants deployed their projects to a Kubernetes cluster. From there, 60 members of the tech infrastructure department were onboarded, and then engineering teams were brought on one at a time. "We always start by talking with them and make sure everyone’s expectations are clear," says Jacobs. "Then we conduct some Kubernetes training, which is mostly training for our CI/CD setup, because the user interface for our users is primarily through the CI/CD system. But they have to know fundamental Kubernetes concepts and the API. This is followed by a weekly sync with each team to check their progress. Once they have something in production, we want to see if everything is fine on top of what we can improve."
+
+
+
+
+
+ Once Zalando began migrating applications to Kubernetes, the results were immediate. "Kubernetes is a cornerstone for our seamless end-to-end developer experience. We are able to ship ideas to production using a single consistent and declarative API," says Jacobs.
+
+
+
+
+
+ At the moment, Zalando is running an initial 40 Kubernetes clusters, with plans to scale for the foreseeable future.
+ Once Zalando began migrating applications to Kubernetes, the results were immediate. "Kubernetes is a cornerstone for our seamless end-to-end developer experience. We are able to ship ideas to production using a single consistent and declarative API," says Jacobs. "The self-healing infrastructure provides a frictionless experience with higher-level abstractions built upon low-level best practices. We envision all Zalando delivery teams will run their containerized applications on a state-of-the-art reliable and scalable cluster infrastructure provided by Kubernetes."
+ With the old on-premise infrastructure "it was difficult to properly embrace new technologies, and DevOps teams were considered to be a bottleneck," says Jacobs. "Now, with this cloud infrastructure, they have this packaging format, which can contain anything that runs in the Linux kernel. This makes a lot of people pretty happy. The engineers love the autonomy."
+ There were a few challenges in Zalando’s Kubernetes implementation. "We are a team of seven people providing clusters to different engineering teams, and our goal is to provide a rock-solid experience for all of them," says Jacobs. "We don’t want pet clusters. We don’t want to have to understand what workload they have; it should just work out of the box. With that in mind, cluster autoscaling is important. There are many different ways of doing cluster management, and this is not part of the core. So we created two components to provision clusters, have a registry for clusters, and to manage the whole cluster life cycle."
+ Jacobs’s team also worked to improve the Kubernetes-AWS integration.
+ Plus, "there are still a lot of best practices missing," says Jacobs. The team, for example, recently solved a pod security policy issue. "There was already a concept in Kubernetes but it wasn’t documented, so it was kind of tricky," he says. The large Kubernetes community was a big help to resolve the issue. To help other companies start down the same path, Jacobs compiled his team’s learnings in a document called Running Kubernetes in Production.
+
+
+
+
+
+
+ "The Kubernetes API allows us to run applications in a cloud provider-agnostic way, which gives us the freedom to revisit IaaS providers in the coming years... We expect the Kubernetes API to be the global standard for PaaS infrastructure and are excited about the continued journey."
+
+
+
+
+ In the end, Kubernetes made it possible for Zalando to introduce and maintain the new products the company envisioned to grow its platform. "The fashion advice product used Scala, and there were struggles to make this possible with our former infrastructure," says Jacobs. "It was a workaround, and that team needed more and more support from the platform team, just because they used different technologies. Now with Kubernetes, it’s autonomous. Whatever the workload is, that team can just go their way, and Kubernetes prevents other bottlenecks."
+ Looking ahead, Jacobs sees Zalando’s new infrastructure as a great enabler for other things the company has in the works, from its new logistics software, to a platform feature connecting brands, to products dreamed up by data scientists. "One vision is if you watch the next James Bond movie and see the suit he’s wearing, you should be able to automatically order it, and have it delivered to you within an hour," says Jacobs. "It’s about connecting the full fashion sphere. This is definitely not possible if you have a bottleneck with everyone running in the same data center and thus very restricted. You need infrastructure to scale each autonomous team’s idea."
+ For other companies considering this technology, Jacobs says he wouldn’t necessarily advise doing it exactly the same way Zalando did. "It’s okay to do so if you’re ready to fail at some things," he says. "You need to set the right expectations. Not everything will work. Rewriting apps and this type of organizational change can be disruptive. The first product we moved was critical. There were a lot of dependencies, and it took longer than expected. Maybe we should have started with something less complicated, less business critical, just to get our toes wet."
+ But once they got to the other side "it was clear for everyone that there’s no big alternative," Jacobs adds. "The Kubernetes API allows us to run applications in a cloud provider-agnostic way, which gives us the freedom to revisit IaaS providers in the coming years. Zalando Technology benefits from migrating to Kubernetes as we are able to leverage our existing knowledge to create an engineering platform offering flexibility and speed to our engineers while significantly reducing the operational overhead. We expect the Kubernetes API to be the global standard for PaaS infrastructure and are excited about the continued journey."
+
+
+
+
diff --git a/content/ko/case-studies/zalando/zalando_feature_logo.png b/content/ko/case-studies/zalando/zalando_feature_logo.png
new file mode 100644
index 0000000000000..ba6251050d15a
Binary files /dev/null and b/content/ko/case-studies/zalando/zalando_feature_logo.png differ
diff --git a/content/ko/case-studies/zulily/index.html b/content/ko/case-studies/zulily/index.html
new file mode 100644
index 0000000000000..a9e480ea97081
--- /dev/null
+++ b/content/ko/case-studies/zulily/index.html
@@ -0,0 +1,4 @@
+---
+title: Zulily
+content_url: https://www.youtube.com/embed/of45hYbkIZs
+---
\ No newline at end of file
diff --git a/content/ko/case-studies/zulily/zulily_featured.png b/content/ko/case-studies/zulily/zulily_featured.png
new file mode 100644
index 0000000000000..81179f36d293a
Binary files /dev/null and b/content/ko/case-studies/zulily/zulily_featured.png differ
diff --git a/content/ko/case-studies/zulily/zulily_logo.png b/content/ko/case-studies/zulily/zulily_logo.png
new file mode 100644
index 0000000000000..e144c7897b3bf
Binary files /dev/null and b/content/ko/case-studies/zulily/zulily_logo.png differ
diff --git a/content/ko/docs/_index.md b/content/ko/docs/_index.md
new file mode 100644
index 0000000000000..e3c462013096c
--- /dev/null
+++ b/content/ko/docs/_index.md
@@ -0,0 +1,3 @@
+---
+title: 문서
+---
diff --git a/content/ko/docs/concepts/overview/kubernetes-api.md b/content/ko/docs/concepts/overview/kubernetes-api.md
index bc84f00b6c225..05e9e55d0af2e 100644
--- a/content/ko/docs/concepts/overview/kubernetes-api.md
+++ b/content/ko/docs/concepts/overview/kubernetes-api.md
@@ -21,7 +21,6 @@ API에 원격 접속하는 방법은 [Controlling API Access doc](/docs/referenc
{{% /capture %}}
-{{< toc >}}
{{% capture body %}}
diff --git a/content/ko/docs/concepts/overview/working-with-objects/_index.md b/content/ko/docs/concepts/overview/working-with-objects/_index.md
new file mode 100644
index 0000000000000..a27acb856cd9a
--- /dev/null
+++ b/content/ko/docs/concepts/overview/working-with-objects/_index.md
@@ -0,0 +1,4 @@
+---
+title: "쿠버네티스 오브젝트로 작업하기"
+weight: 40
+---
diff --git a/content/ko/docs/concepts/overview/working-with-objects/kubernetes-objects.md b/content/ko/docs/concepts/overview/working-with-objects/kubernetes-objects.md
new file mode 100644
index 0000000000000..12b6f0fa18321
--- /dev/null
+++ b/content/ko/docs/concepts/overview/working-with-objects/kubernetes-objects.md
@@ -0,0 +1,69 @@
+---
+title: 쿠버네티스 오브젝트 이해하기
+content_template: templates/concept
+weight: 10
+---
+
+{{% capture overview %}}
+이 페이지에서는 쿠버네티스 오브젝트가 쿠버네티스 API에서 어떻게 표현되고, 그 오브젝트를 어떻게 `.yaml` 형식으로 표현할 수 있는지에 대해 설명한다.
+{{% /capture %}}
+
+{{% capture body %}}
+## 쿠버네티스 오브젝트 이해하기
+
+*쿠버네티스 오브젝트* 는 쿠버네티스 시스템에서 영속성을 가지는 개체이다. 쿠버네티스는 클러스터의 상태를 나타내기 위해 이 개체를 이용한다. 구체적으로 말하자면, 다음을 기술할 수 있다.
+
+* 어떤 컨테이너화된 애플리케이션이 동작 중인지 (그리고 어느 노드에서 동작 중인지)
+* 그 애플리케이션이 이용할 수 있는 리소스
+* 재구동 정책, 업그레이드, 내고장성과 같은 것에 대해 그 애플리케이션이 어떻게 동작해야 하는지에 대한 정책
+
+쿠버네티스 오브젝트는 하나의 "의도를 담은 레코드" 이다. 오브젝트를 생성하게 되면, 쿠버네티스 시스템은 그 오브젝트가 존재하도록 보장하기 위해 지속적으로 작동할 것이다. 오브젝트를 생성함으로써, 여러분이 클러스터의 워크로드를 어떤 형태로 보이고자 하는지를 쿠버네티스 시스템에 효과적으로 전하는 것이며, 이것이 바로 여러분의 클러스터에 대해 **의도한 상태** 가 된다.
+
+생성이든, 수정이든, 또는 삭제든 쿠버네티스 오브젝트를 동작시키려면, [Kubernetes API](/docs/concepts/overview/kubernetes-api/)를 이용해야 한다. 예를 들어, `kubectl` 커맨드-라인 인터페이스를 이용할 때, CLI는 여러분 대신 필요한 쿠버네티스 API를 호출해 준다. 또한, 여러분은 [Client Libraries](/docs/reference/using-api/client-libraries/) 중 하나를 이용하여 여러분만의 프로그램에서 쿠버네티스 API를 직접 이용할 수도 있다.
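+
+예를 들어, 다음은 `kubectl proxy`를 통해 쿠버네티스 API를 직접 호출해 보는 간단한 스케치이다. (포트 번호 `8080`은 설명을 위해 가정한 값이다.)
+
+```shell
+# 로컬에서 API 서버로 가는 프록시를 연다 (포트는 예시)
+$ kubectl proxy --port=8080 &
+
+# 쿠버네티스 API를 직접 호출해 본다
+$ curl http://localhost:8080/api/
+```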
+
+### 오브젝트 스펙(spec)과 상태(status)
+
+모든 쿠버네티스 오브젝트는 오브젝트의 구성을 결정해주는 두 개의 중첩된 오브젝트 필드를 포함하는데 오브젝트 *spec* 과 오브젝트 *status* 가 그것이다. 필히 제공되어야만 하는 *spec* 은, 여러분이 오브젝트가 가졌으면 하고 원하는 특징, 즉 *의도한 상태* 를 기술한다. *status* 는 오브젝트의 *실제 상태* 를 기술하고, 쿠버네티스 시스템에 의해 제공되고 업데이트 된다. 주어진 임의의 시간에, 쿠버네티스 컨트롤 플레인은 오브젝트의 실제 상태를 여러분이 제시한 의도한 상태에 일치시키기 위해 능동적으로 관리한다.
+
+
+예를 들어, 쿠버네티스 디플로이먼트는 클러스터에서 동작하는 애플리케이션을 표현해 줄 수 있는 오브젝트이다. 디플로이먼트를 생성할 때, 디플로이먼트 spec에 3개의 애플리케이션 레플리카가 동작되도록 설정할 수 있다. 쿠버네티스 시스템은 그 디플로이먼트 spec을 읽어 spec에 일치되도록 상태를 업데이트하여 3개의 의도한 애플리케이션 인스턴스를 구동시킨다. 만약, 그 인스턴스들 중 어느 하나가 (상태 변경에) 실패가 난다면, 쿠버네티스 시스템은 보정을 통해, 이 경우에는 인스턴스 대체를 착수하여, spec과 status 간의 차이에 대응한다.
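+
+예를 들어 다음과 같이 의도한 상태(spec)와 실제 상태(status)를 직접 비교해 볼 수 있다. (`nginx-deployment`라는 이름은 아래 예시 파일의 디플로이먼트를 가정한 것이다.)
+
+```shell
+# 의도한 상태: spec에 명시된 레플리카 수
+$ kubectl get deployment nginx-deployment -o jsonpath='{.spec.replicas}'
+
+# 실제 상태: status에 보고된 가용 레플리카 수
+$ kubectl get deployment nginx-deployment -o jsonpath='{.status.availableReplicas}'
+```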
+
+오브젝트 spec, status, 그리고 metadata에 대한 추가 정보는, [Kubernetes API Conventions](https://git.k8s.io/community/contributors/devel/api-conventions.md) 를 참조한다.
+
+### 쿠버네티스 오브젝트 기술하기
+
+쿠버네티스에서 오브젝트를 생성할 때, (이름과 같은)오브젝트에 대한 기본적인 정보와 더불어, 의도한 상태를 기술한 오브젝트 spec을 제시해 줘야만 한다. 오브젝트를 생성하기 위해(직접이든 또는 `kubectl`을 통해서든) 쿠버네티스 API를 이용할 때, API 요청은 요청 내용 안에 JSON 형식으로 정보를 포함시켜 줘야만 한다. **가장 자주, .yaml 파일로 `kubectl`에 정보를 제공해준다.** `kubectl` 은 API 요청이 이루어질 때, JSON 형식으로 정보를 변환시켜 준다.
+
+여기 쿠버네티스 디플로이먼트를 위한 요청 필드와 오브젝트 spec을 보여주는 `.yaml` 파일 예시가 있다.
+
+{{< codenew file="application/deployment.yaml" >}}
+
+위 예시와 같이 .yaml 파일을 이용하여 디플로이먼트를 생성하기 위한 하나의 방식으로는 `kubectl` 커맨드-라인 인터페이스에 인자값으로 `.yaml` 파일을 건네 [`kubectl create`](/docs/reference/generated/kubectl/kubectl-commands#create) 명령을 이용하는 것이다. 다음 예시와 같다.
+
+
+```shell
+$ kubectl create -f https://k8s.io/examples/application/deployment.yaml --record
+```
+
+그 출력 내용은 다음과 유사하다.
+
+
+```shell
+deployment.apps/nginx-deployment created
+```
+
+### 요구되는 필드
+
+생성하고자 하는 쿠버네티스 오브젝트에 대한 `.yaml` 파일 내, 다음 필드를 위한 값들을 설정해 줘야한다.
+
+* `apiVersion` - 이 오브젝트를 생성하기 위해 사용하고 있는 쿠버네티스 API 버전이 어떤 것인지
+* `kind` - 어떤 종류의 오브젝트를 생성하고자 하는지
+* `metadata` - `이름` 문자열, UID, 그리고 선택적인 `네임스페이스` 를 포함하여 오브젝트를 유일하게 구분지어 줄 데이터
+
+여러분은 또한 오브젝트 `spec` 필드를 제공해야만 할 것이다. 오브젝트 `spec`에 대한 정확한 포맷은 모든 쿠버네티스 오브젝트마다 다르고, 그 오브젝트 특유의 중첩된 필드를 포함한다. [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/) 는 쿠버네티스를 이용하여 생성할 수 있는 오브젝트에 대한 모든 spec 포맷을 살펴볼 수 있도록 해준다. 예를 들어, `파드` 오브젝트에 대한 `spec` 포맷은 [여기](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core)에서 확인할 수 있고, `디플로이먼트` 오브젝트에 대한 `spec` 포맷은 [여기](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#deploymentspec-v1-apps)에서 확인할 수 있다.
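+
+또한, 각 오브젝트의 `spec` 포맷은 `kubectl explain` 명령으로도 확인해 볼 수 있다. 아래는 간단한 사용 예시이다.
+
+```shell
+# 디플로이먼트 spec 필드에 대한 문서를 조회한다
+$ kubectl explain deployment.spec
+
+# 중첩된 필드는 점(.)으로 이어서 조회할 수 있다
+$ kubectl explain deployment.spec.template.spec.containers
+```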
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+* [Pod](/docs/concepts/workloads/pods/pod-overview/)와 같이, 가장 중요하고 기본적인 쿠버네티스 오브젝트에 대해 배운다.
+{{% /capture %}}
diff --git a/content/ko/docs/concepts/workloads/_index.md b/content/ko/docs/concepts/workloads/_index.md
new file mode 100644
index 0000000000000..bdb9770e5d4ea
--- /dev/null
+++ b/content/ko/docs/concepts/workloads/_index.md
@@ -0,0 +1,4 @@
+---
+title: "워크로드"
+weight: 60
+---
diff --git a/content/ko/docs/concepts/workloads/pods/_index.md b/content/ko/docs/concepts/workloads/pods/_index.md
new file mode 100644
index 0000000000000..35ded5ad4a19c
--- /dev/null
+++ b/content/ko/docs/concepts/workloads/pods/_index.md
@@ -0,0 +1,4 @@
+---
+title: "파드"
+weight: 10
+---
diff --git a/content/ko/docs/concepts/workloads/pods/pod-overview.md b/content/ko/docs/concepts/workloads/pods/pod-overview.md
new file mode 100644
index 0000000000000..06ace4c72db7e
--- /dev/null
+++ b/content/ko/docs/concepts/workloads/pods/pod-overview.md
@@ -0,0 +1,107 @@
+---
+title: 파드(Pod) 개요
+content_template: templates/concept
+weight: 10
+---
+
+{{% capture overview %}}
+이 페이지는 쿠버네티스 객체 모델 중 가장 작은 배포 가능한 객체인 `파드` 에 대한 개요를 제공한다.
+{{% /capture %}}
+
+
+{{% capture body %}}
+## 파드에 대해 이해하기
+
+*파드* 는 쿠버네티스의 기본 구성 요소이다. 쿠버네티스 객체 모델 중 만들고 배포할 수 있는 가장 작고 간단한 단위이다. 파드는 클러스터에서의 Running 프로세스를 나타낸다.
+
+파드는 애플리케이션 컨테이너(또는, 몇몇의 경우, 다중 컨테이너), 저장소 리소스, 특정 네트워크 IP 그리고, 컨테이너가 동작하기 위해 만들어진 옵션들을 캡슐화 한다.
+파드는 배포의 단위, 즉 *쿠버네티스에서의 애플리케이션 단일 인스턴스* 를 의미한다. 파드는 단일 컨테이너로 구성되어 있거나, 강하게 결합되어 리소스를 공유하는 소수의 컨테이너로 구성될 수 있다.
+
+> [Docker](https://www.docker.com)는 쿠버네티스 파드에서 사용되는 가장 대표적인 컨테이너 런타임이지만, 파드는 다른 컨테이너 런타임 역시 지원한다.
+
+
+쿠버네티스 클러스터 내부의 파드는 주로 두 가지 방법으로 사용된다.
+
+* **단일 컨테이너만 동작하는 파드**. "단일 컨테이너 당 한 개의 파드" 모델은 쿠버네티스 사용 사례 중 가장 흔하다. 이 경우, 한 개의 파드가 단일 컨테이너를 감싸고 있다고 생각할 수 있으며, 쿠버네티스는 컨테이너가 아닌 파드를 직접 관리한다고 볼 수 있다.
+
+* **함께 동작하는 작업이 필요한 다중 컨테이너가 동작하는 파드**. 아마 파드는 강하게 결합되어 있고 리소스 공유가 필요한 다중으로 함께 배치된 컨테이너로 구성되어 있을 것이다. 이렇게 함께 배치되어 설치된 컨테이너는 단일 결합 서비스 단위일 것이다. 예를 들면, 한 컨테이너는 공유 볼륨의 파일을 외부에 제공하고, 별도의 "사이드카" 컨테이너는 그 파일들을 갱신하거나 업데이트한다. 파드는 이 컨테이너와 저장소 리소스들을 한 개의 관리 가능한 요소로 묶는다.
+
+
+[쿠버네티스 블로그](http://blog.kubernetes.io)에는 파드 사용 사례의 몇 가지 추가적인 정보가 있다. 더 많은 정보를 위해서 아래 내용을 참조하길 바란다.
+
+* [분산 시스템 툴킷: 복합 컨테이너를 위한 패턴](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns)
+* [컨테이너 디자인 패턴](https://kubernetes.io/blog/2016/06/container-design-patterns)
+
+각각의 파드는 주어진 애플리케이션에서 단일 인스턴스로 동작을 하는 것을 말한다. 만약 애플리케이션을 수평적으로 스케일하기를 원하면(예를 들면, 다중 인스턴스로 동작하는 것), 각 인스턴스 당 한 개씩 다중 파드를 사용해야 한다. 쿠버네티스에서는, 일반적으로 이것을 _복제_ 라고 한다. 복제된 파드는 주로 컨트롤러라고 하는 추상화 개념의 그룹에 의해 만들어지고 관리된다. 더 많은 정보는 [파드와 컨트롤러](#pods-and-controllers)를 참고하길 바란다.
+
+
+
+## 어떻게 파드가 다중 컨테이너를 관리하는가
+
+파드는 결합도가 있는 단위의 서비스를 형성하는 다중 협력 프로세스(컨테이너)를 지원하도록 디자인되었다. 파드 내부의 컨테이너는 클러스터 내 동일한 물리적 또는 가상 머신에 자동으로 함께 배치되고 스케줄된다. 컨테이너는 리소스와 의존성을 공유하고, 서로 통신하고, 종료 시점과 방법을 조율할 수 있다.
+
+단일 파드 내부에서 함께 배치되고 관리되는 컨테이너 그룹은 상대적으로 심화된 사용 예시임에 유의하자. 컨테이너가 강하게 결합된 특별한 인스턴스의 경우에만 이 패턴을 사용하는게 좋다. 예를 들어, 공유 볼륨 내부 파일의 웹 서버 역할을 하는 컨테이너와 원격 소스로부터 그 파일들을 업데이트하는 분리된 "사이드카" 컨테이너가 있는 경우 아래 다이어그램의 모습일 것이다.
+
+
+{{< figure src="/images/docs/pod.svg" title="pod diagram" width="50%" >}}
+
+파드는 같은 파드 안에 속한 컨테이너에게 두 가지 공유 리소스를 제공한다. *네트워킹* 과 *저장소*.
+
+#### 네트워킹
+
+각각의 파드는 유일한 IP 주소를 할당 받는다. 한 파드 내부의 모든 컨테이너는 네트워크 네임스페이스와 IP 주소 및 네트워크 포트를 공유한다. *파드 안에 있는* 컨테이너는 다른 컨테이너와 `localhost`를 통해서 통신할 수 있다. 특정 파드 안에 있는 컨테이너가 *파드 밖의* 요소들과 통신하기 위해서는, 공유 네트워크 리소스(포트 등)를 어떻게 사용할지 조율해야 한다.
+
+#### 저장소
+
+파드는 공유 저장소 집합인 *볼륨* 을 명시할 수 있다. 파드 내부의 모든 컨테이너는 공유 볼륨에 접근할 수 있고, 그 컨테이너끼리 데이터를 공유하는 것을 허용한다. 또한 볼륨은 컨테이너가 재시작되어야 하는 상황에도 파드 안의 데이터가 영구적으로 유지될 수 있게 한다. 쿠버네티스가 어떻게 파드 안의 공유 저장소를 사용하는지 보려면 [볼륨](/docs/concepts/storage/volumes/)를 참고하길 바란다.
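+
+다음은 네트워킹과 저장소 공유를 보여주는 최소한의 스케치로, 공유 볼륨을 사용하는 웹 서버 컨테이너와 "사이드카" 컨테이너를 하나의 파드로 묶는다. (파드 이름과 이미지는 설명을 위해 가정한 것이다.)
+
+```shell
+$ cat <<EOF | kubectl create -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: web-with-sidecar    # 설명용으로 가정한 이름
+spec:
+  volumes:
+  - name: shared-data
+    emptyDir: {}
+  containers:
+  - name: web               # 공유 볼륨의 파일을 제공하는 컨테이너
+    image: nginx
+    volumeMounts:
+    - name: shared-data
+      mountPath: /usr/share/nginx/html
+  - name: sidecar           # 공유 볼륨의 파일을 주기적으로 갱신하는 컨테이너
+    image: busybox
+    command: ['sh', '-c', 'while true; do date > /pod-data/index.html; sleep 10; done']
+    volumeMounts:
+    - name: shared-data
+      mountPath: /pod-data
+EOF
+```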
+
+## 파드 작업
+
+쿠버네티스에서는 싱글톤 파드라 할지라도 개별 파드를 직접 만들 일이 거의 없을 것이다. 그 이유는 파드가 상대적으로 수명이 짧고 일시적이기 때문이다. 파드가 만들어지면(직접 만들거나, 컨트롤러에 의해서 간접적으로 만들어지거나), 그것은 클러스터의 노드에서 동작할 것이다. 파드는 프로세스가 종료되거나, 파드 객체가 삭제되거나, 파드가 리소스의 부족으로 인해 *제거되거나*, 노드에 장애가 생기지 않는 한 노드에 남아있는다.
+
+{{< note >}}
+**참고:** 파드 내부의 컨테이너가 재시작되는 것을 파드의 재시작과 혼동해서는 안된다. 파드 자체는 동작하는 것이 아니라, 컨테이너가 동작하는 환경이며 삭제될 때까지 유지된다.
+{{< /note >}}
+
+파드는 스스로 자신을 치료하지 않는다. 만약 파드가 스케줄링된 노드에 장애가 생기거나, 스케줄링 동작이 스스로 실패할 경우 파드는 삭제된다. 그와 비슷하게, 파드는 리소스 부족이나 노드 유지 관리 작업으로 제거되는 상황에서 살아남지 못할 것이다.
+쿠버네티스는 상대적으로 일시적인 파드 인스턴스를 관리하는 작업을 처리하는 *컨트롤러* 라고 하는 고수준의 추상적 개념을 사용한다. 즉, 파드를 직접 사용할 수도 있지만, 컨트롤러를 사용하여 파드를 관리하는 것이 쿠버네티스에서 훨씬 더 보편적이다. 쿠버네티스가 어떻게 컨트롤러를 사용하여 파드를 스케일하고 치료하는지 보려면 [파드와 컨트롤러](#pods-and-controllers)를 참고하길 바란다.
+
+### 파드와 컨트롤러
+
+컨트롤러는 다중 파드를 생성하고 관리해 주는데, 클러스터 범위 내에서의 레플리케이션 핸들링, 롤아웃 그리고 셀프힐링 기능을 제공한다. 예를 들어, 만약 노드가 고장 나면, 컨트롤러는 다른 노드에 파드를 스케줄링함으로써 자동으로 교체할 것이다.
+
+하나 또는 그 이상의 파드를 포함하는 컨트롤러의 몇 가지 예시는 다음과 같다.
+
+* [디플로이먼트](/docs/concepts/workloads/controllers/deployment/)
+* [스테이트풀 셋](/docs/concepts/workloads/controllers/statefulset/)
+* [데몬 셋](/docs/concepts/workloads/controllers/daemonset/)
+
+일반적으로, 컨트롤러는 사용자가 제공한 파드 템플릿을 사용하여 자신이 책임지는 파드들을 만든다.
+
+## 파드 템플릿
+파드 템플릿은 [레플리케이션 컨트롤러](/docs/concepts/workloads/controllers/replicationcontroller/), [잡](/docs/concepts/jobs/run-to-completion-finite-workloads/), [데몬 셋](/docs/concepts/workloads/controllers/daemonset/)과 같은 다른 객체에 포함되는 파드 명세이다. 컨트롤러는 파드 템플릿을 사용하여 실제 파드를 만든다.
+아래 예시는 메시지를 출력하는 컨테이너를 포함하는 파드에 대한 간단한 매니페스트이다.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: myapp-pod
+  labels:
+    app: myapp
+spec:
+  containers:
+  - name: myapp-container
+    image: busybox
+    command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
+```
+
+모든 레플리카의 현재 원하는 상태를 지정하는 대신, 파드 템플릿은 쿠키 틀과 같다. 쿠키가 한 번 잘리면, 그 쿠키는 쿠키 틀과 더 이상 관련이 없다. "양자 얽힘"이 없는 것이다. 그 이후 템플릿을 변경하거나 새로운 템플릿으로 바꿔도 이미 만들어진 파드에는 직접적인 영향이 없다. 마찬가지로, 레플리케이션 컨트롤러에 의해 만들어진 파드는 이후에 직접 업데이트될 수도 있다. 이것은 모든 컨테이너가 속해있는 파드에서 현재 원하는 상태를 명시하는 것과 의도적으로 대비가 된다. 이러한 접근은 시스템의 의미를 철저히 단순화하고 유연성을 증가시킨다.
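+
+위 매니페스트로 파드를 생성하고 확인해 보는 예시이다. (파일 이름 `myapp-pod.yaml`은 가정한 것이다.)
+
+```shell
+# 위 매니페스트를 myapp-pod.yaml로 저장했다고 가정한다
+$ kubectl create -f myapp-pod.yaml
+pod/myapp-pod created
+
+$ kubectl get pod myapp-pod
+```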
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+* 파드의 다른 동작들을 더 배워보자.
+ * [파드 종료](/docs/concepts/workloads/pods/pod/#termination-of-pods)
+ * 다른 파드 주제
+{{% /capture %}}
diff --git a/content/ko/docs/home/_index.md b/content/ko/docs/home/_index.md
index da2e4ffd38f61..542d38e7f3f95 100644
--- a/content/ko/docs/home/_index.md
+++ b/content/ko/docs/home/_index.md
@@ -6,7 +6,7 @@ cid: userJourneys
css: /css/style_user_journeys.css
js: /js/user-journeys/home.js, https://use.fontawesome.com/4bcc658a89.js
display_browse_numbers: true
-linkTitle: "문서"
+linkTitle: "홈"
main_menu: true
weight: 10
menu:
diff --git a/content/ko/docs/reference/_index.md b/content/ko/docs/reference/_index.md
new file mode 100644
index 0000000000000..899f39f4f374e
--- /dev/null
+++ b/content/ko/docs/reference/_index.md
@@ -0,0 +1,58 @@
+---
+title: 레퍼런스
+linkTitle: "레퍼런스"
+main_menu: true
+weight: 70
+content_template: templates/concept
+---
+
+{{% capture overview %}}
+
+쿠버네티스 문서의 본 섹션에서는 레퍼런스를 다룬다.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## API 레퍼런스
+
+* [Kubernetes API Overview](/docs/reference/using-api/api-overview/) - 쿠버네티스 API에 대한 개요
+* 쿠버네티스 API 버전
+ * [1.12](/docs/reference/generated/kubernetes-api/v1.12/)
+ * [1.11](/docs/reference/generated/kubernetes-api/v1.11/)
+ * [1.10](https://v1-10.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/)
+ * [1.9](https://v1-9.docs.kubernetes.io/docs/api-reference/v1.9/)
+ * [1.8](https://v1-8.docs.kubernetes.io/docs/api-reference/v1.8/)
+ * [1.7](https://v1-7.docs.kubernetes.io/docs/api-reference/v1.7/)
+
+## API 클라이언트 라이브러리
+
+프로그래밍 언어에서 쿠버네티스 API를 호출하기 위해서,
+[client libraries](/docs/reference/using-api/client-libraries/)를 사용할 수 있다.
+공식적으로 지원되는 클라이언트 라이브러리는 다음과 같다.
+
+- [Kubernetes Go client library](https://github.com/kubernetes/client-go/)
+- [Kubernetes Python client library](https://github.com/kubernetes-client/python)
+
+## CLI 레퍼런스
+
+* [kubectl](/docs/user-guide/kubectl-overview) - 명령어를 실행하거나 쿠버네티스 클러스터를 관리하기 위해 사용하는 주된 CLI 도구.
+ * [JSONPath](/docs/user-guide/jsonpath/) - kubectl에서 [JSONPath expressions](http://goessner.net/articles/JsonPath/)을 사용하기 위한 문법 가이드. (이 목록 아래의 예시 참고)
+* [kubeadm](/docs/admin/kubeadm/) - 안정적인 쿠버네티스 클러스터를 쉽게 프로비전하기 위한 CLI 도구.
+* [kubefed](/docs/admin/kubefed/) - 연합된(federated) 클러스터 관리를 도와주는 CLI 도구.
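+
+아래는 kubectl에서 JSONPath 표현식을 사용하는 간단한 예시 스케치이다.
+
+```shell
+# 모든 파드의 이름만 출력한다
+$ kubectl get pods -o jsonpath='{.items[*].metadata.name}'
+```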
+
+## 설정 레퍼런스
+
+* [kubelet](/docs/admin/kubelet/) - 각 노드에서 구동되는 주요한 *노드 에이전트*. kubelet은 PodSpecs 집합을 가지며 기술된 컨테이너가 구동되고 있는지, 정상 작동하는지를 보장한다.
+* [kube-apiserver](/docs/admin/kube-apiserver/) - 파드, 서비스, 레플리케이션 컨트롤러와 같은 API 오브젝트에 대한 검증과 구성을 수행하는 REST API.
+* [kube-controller-manager](/docs/admin/kube-controller-manager/) - 쿠버네티스에 탑재된 핵심 제어 루프를 포함하는 데몬.
+* [kube-proxy](/docs/admin/kube-proxy/) - 간단한 TCP/UDP 스트림 포워딩이나 백-엔드 집합에 걸쳐서 라운드-로빈 TCP/UDP 포워딩을 할 수 있다.
+* [kube-scheduler](/docs/admin/kube-scheduler/) - 가용성, 성능 및 용량을 관리하는 스케줄러.
+* [federation-apiserver](/docs/admin/federation-apiserver/) - 연합된 클러스터를 위한 API 서버.
+* [federation-controller-manager](/docs/admin/federation-controller-manager/) - 쿠버네티스 연합에 탑재된 핵심 제어 루프를 포함하는 데몬.
+
+## 설계 문서
+
+쿠버네티스 기능에 대한 설계 문서의 아카이브. [Kubernetes Architecture](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md)와 [Kubernetes Design Overview](https://git.k8s.io/community/contributors/design-proposals)가 좋은 출발점이다.
+
+{{% /capture %}}
diff --git a/content/ko/docs/setup/independent/troubleshooting-kubeadm.md b/content/ko/docs/setup/independent/troubleshooting-kubeadm.md
index 8a66795d50c69..1f5878efac3f8 100644
--- a/content/ko/docs/setup/independent/troubleshooting-kubeadm.md
+++ b/content/ko/docs/setup/independent/troubleshooting-kubeadm.md
@@ -57,7 +57,7 @@ This may be caused by a number of problems. The most common are:
There are two common ways to fix the cgroup driver problem:
- 1. Install docker again following instructions
+ 1. Install Docker again following instructions
[here](/docs/setup/independent/install-kubeadm/#installing-docker).
1. Change the kubelet config to match the Docker cgroup driver manually, you can refer to
[Configure cgroup driver used by kubelet on Master Node](/docs/setup/independent/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-master-node)
@@ -104,6 +104,10 @@ Right after `kubeadm init` there should not be any pods in these states.
likely that the Pod Network solution that you installed is somehow broken. You
might have to grant it more RBAC privileges or use a newer version. Please file
an issue in the Pod Network providers' issue tracker and get the issue triaged there.
+- If you install a version of Docker older than 1.12.1, remove the `MountFlags=slave` option
+ when booting `dockerd` with `systemd` and restart `docker`. You can see the MountFlags in `/usr/lib/systemd/system/docker.service`.
+ MountFlags can interfere with volumes mounted by Kubernetes, and put the Pods in `CrashLoopBackOff` state.
+ The error happens when Kubernetes does not find the `/var/run/secrets/kubernetes.io/serviceaccount` files.
## `coredns` (or `kube-dns`) is stuck in the `Pending` state
diff --git a/content/ko/docs/setup/multiple-zones.md b/content/ko/docs/setup/multiple-zones.md
index 50b8352fd9f6d..fcd0b4adcdbf6 100644
--- a/content/ko/docs/setup/multiple-zones.md
+++ b/content/ko/docs/setup/multiple-zones.md
@@ -54,7 +54,7 @@ There are some important limitations of the multizone support:
* We assume that the different zones are located close to each other in the
network, so we don't perform any zone-aware routing. In particular, traffic
-that goes via services might cross zones (even if pods in some pods backing that service
+that goes via services might cross zones (even if some pods backing that service
exist in the same zone as the client), and this may incur additional latency and cost.
* Volume zone-affinity will only work with a `PersistentVolume`, and will not
diff --git a/content/ko/docs/setup/pick-right-solution.md b/content/ko/docs/setup/pick-right-solution.md
index e8dc7cd394369..cb6e294417fd7 100644
--- a/content/ko/docs/setup/pick-right-solution.md
+++ b/content/ko/docs/setup/pick-right-solution.md
@@ -6,21 +6,20 @@ content_template: templates/concept
{{% capture overview %}}
-Kubernetes can run on various platforms: from your laptop, to VMs on a cloud provider, to a rack of
-bare metal servers. The effort required to set up a cluster varies from running a single command to
-crafting your own customized cluster. Use this guide to choose a solution that fits your needs.
+쿠버네티스는 랩탑부터 클라우드 공급자의 VM들, 베어메탈 서버 랙까지 다양한 플랫폼에서 작동 가능하다.
+클러스터 구성을 위해 필요한 노력은 하나의 단일 명령어를 실행시키는 수준에서 직접 자신만의 맞춤형 클러스터를 세밀하게 만드는 수준에 이르기까지 다양하다.
+알맞은 솔루션을 선택하기 위해서 이 가이드를 사용하자.
-If you just want to "kick the tires" on Kubernetes, use the [local Docker-based solutions](#local-machine-solutions).
+쿠버네티스를 시도해보기를 원한다면, [로컬 Docker 기반의 솔루션](#로컬-머신-솔루션)을 사용하자.
-When you are ready to scale up to more machines and higher availability, a [hosted solution](#hosted-solutions) is the easiest to create and maintain.
+더 많은 머신과 높은 가용성으로 확장할 준비가 되었다면, [호스트 된 솔루션](#호스트-된-솔루션)이 생성하고 유지하기에 가장 쉽다.
-[Turnkey cloud solutions](#turnkey-cloud-solutions) require only a few commands to create
-and cover a wide range of cloud providers. [On-Premises turnkey cloud solutions](#on-premises-turnkey-cloud-solutions) have the simplicity of the turnkey cloud solution combined with the security of your own private network.
+[턴키 클라우드 솔루션](#턴키-클라우드-솔루션)은 몇 개의 명령어만으로 클러스터를 생성할 수 있으며, 다양한 클라우드 공급자를 지원한다.
+[온-프레미스 턴키 클라우드 솔루션](#온-프레미스-턴키-클라우드-솔루션)은 프라이빗 네트워크의 보안과 결합된 턴키 클라우드 솔루션의 단순함을 가진다.
-If you already have a way to configure hosting resources, use [kubeadm](/docs/setup/independent/create-cluster-kubeadm/) to easily bring up a cluster with a single command per machine.
+호스팅한 자원을 구성하는 방법을 이미 가지고 있다면, 머신 당 단일 명령어로 클러스터를 만들어내기 위해서 [kubeadm](/docs/setup/independent/create-cluster-kubeadm/)을 사용하자.
-[Custom solutions](#custom-solutions) vary from step-by-step instructions to general advice for setting up
-a Kubernetes cluster from scratch.
+[사용자 지정 솔루션](#사용자-지정-솔루션)은 단계별 지침부터 쿠버네티스 클러스터를 처음부터 설정하기 위한 일반적인 조언까지 다양하다.
{{% /capture %}}
@@ -28,54 +27,53 @@ a Kubernetes cluster from scratch.
## 로컬 머신 솔루션
-* [Minikube](/docs/setup/minikube/) is the recommended method for creating a local, single-node Kubernetes cluster for development and testing. Setup is completely automated and doesn't require a cloud provider account.
+* [Minikube](/docs/setup/minikube/)는 개발과 테스트를 위한 단일 노드 쿠버네티스 클러스터를 로컬에 생성하기 위해 권장되는 방법이다. 설치는 완전히 자동화되어 있고, 클라우드 공급자 계정 정보가 필요하지 않다.
-* [microk8s](https://microk8s.io/) provides a single command installation of the latest Kubernetes release on a local machine for development and testing. Setup is quick, fast (~30 sec) and supports many plugins including Istio with a single command.
+* [microk8s](https://microk8s.io/)는 개발과 테스트를 위한 쿠버네티스 최신 버전을 단일 명령어로 로컬 머신에 설치할 수 있게 해준다. 설치는 신속하고 빠르며(~30초), 단일 명령어로 Istio를 포함한 많은 플러그인을 지원한다.
-* [IBM Cloud Private-CE (Community Edition)](https://github.com/IBM/deploy-ibm-cloud-private) can use VirtualBox on your machine to deploy Kubernetes to one or more VMs for development and test scenarios. Scales to full multi-node cluster.
+* [IBM Cloud Private-CE (Community Edition)](https://github.com/IBM/deploy-ibm-cloud-private)는 개발과 테스트 시나리오를 위해 1개 또는 더 많은 VM에 쿠버네티스를 배포하기 위해서 머신의 VirtualBox를 사용할 수 있다. 이는 전체 멀티 노드 클러스터로 확장할 수 있다.
-* [IBM Cloud Private-CE (Community Edition) on Linux Containers](https://github.com/HSBawa/icp-ce-on-linux-containers) is a Terraform/Packer/BASH based Infrastructure as Code (IaC) scripts to create a seven node (1 Boot, 1 Master, 1 Management, 1 Proxy and 3 Workers) LXD cluster on Linux Host.
+* [IBM Cloud Private-CE (Community Edition) on Linux Containers](https://github.com/HSBawa/icp-ce-on-linux-containers)는 Terraform/Packer/BASH 기반의 리눅스 호스트 상의 LXD 클러스터에 7개의 노드(부트 1개, 마스터 1개, 관리 1개, 프록시 1개 그리고 워커 3개)를 생성하기 위한 Infrastructure as Code (IaC) 스크립트이다.
-* [Kubeadm-dind](https://github.com/kubernetes-sigs/kubeadm-dind-cluster) is a multi-node (while minikube is single-node) Kubernetes cluster which only requires a docker daemon. It uses docker-in-docker technique to spawn the Kubernetes cluster.
+* [Kubeadm-dind](https://github.com/kubernetes-sigs/kubeadm-dind-cluster)는 하나의 docker 데몬이 필요한 멀티 노드 쿠버네티스 클러스터이다.(minikube는 단일 노드이다.) 클러스터 생성을 위해서 docker-in-docker 기술을 사용한다.
-* [Ubuntu on LXD](/docs/getting-started-guides/ubuntu/local/) supports a nine-instance deployment on localhost.
+* [Ubuntu on LXD](/docs/getting-started-guides/ubuntu/local/)는 로컬 호스트에서 9개의 인스턴스 배포를 지원한다.
## 호스트 된 솔루션
-* [AppsCode.com](https://appscode.com/products/cloud-deployment/) provides managed Kubernetes clusters for various public clouds, including AWS and Google Cloud Platform.
+* [AppsCode.com](https://appscode.com/products/cloud-deployment/)는 AWS와 Google Cloud Platform을 포함하여 다양한 퍼블릭 클라우드의 관리형 쿠버네티스 클러스터를 제공한다.
-* [APPUiO](https://appuio.ch) runs an OpenShift public cloud platform, supporting any Kubernetes workload. Additionally APPUiO offers Private Managed OpenShift Clusters, running on any public or private cloud.
+* [APPUiO](https://appuio.ch)는 쿠버네티스 워크로드를 지원하는 OpenShift 퍼블릭 클라우드 플랫폼을 구동한다. 게다가 APPUiO는 퍼블릭 또는 프라이빗 클라우드에서 구동 중인 프라이빗 OpenShift 클러스터 관리를 제공한다.
-* [Amazon Elastic Container Service for Kubernetes](https://aws.amazon.com/eks/) offers managed Kubernetes service.
+* [Amazon Elastic Container Service for Kubernetes](https://aws.amazon.com/eks/)는 관리형 쿠버네티스 서비스를 제공한다.
-* [Azure Kubernetes Service](https://azure.microsoft.com/services/container-service/) offers managed Kubernetes clusters.
+* [Azure Kubernetes Service](https://azure.microsoft.com/services/container-service/)는 관리형 쿠버네티스 클러스터를 제공한다.
-* [Giant Swarm](https://giantswarm.io/product/) offers managed Kubernetes clusters in their own datacenter, on-premises, or on public clouds.
+* [Giant Swarm](https://giantswarm.io/product/)은 온-프레미스 또는 퍼블릭 클라우드 데이터센터 내에서 관리형 쿠버네티스 클러스터를 제공한다.
-* [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) offers managed Kubernetes clusters.
+* [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/)은 관리형 쿠버네티스 클러스터를 제공한다.
-* [IBM Cloud Kubernetes Service](https://console.bluemix.net/docs/containers/container_index.html) offers managed Kubernetes clusters with isolation choice, operational tools, integrated security insight into images and containers, and integration with Watson, IoT, and data.
+* [IBM Cloud Kubernetes Service](https://console.bluemix.net/docs/containers/container_index.html)는 관리형 쿠버네티스 클러스터를 제공한다. 그와 함께 격리 종류, 운영 도구, 이미지와 컨테이너 통합된 보안 통찰력, Watson, IoT, 데이터와의 통합도 제공한다.
-* [Kubermatic](https://www.loodse.com) provides managed Kubernetes clusters for various public clouds, including AWS and Digital Ocean, as well as on-premises with OpenStack integration.
+* [Kubermatic](https://www.loodse.com)는 AWS와 Digital Ocean을 포함한 다양한 퍼블릭 클라우드뿐만 아니라 온-프레미스 상의 OpenStack 통합을 위한 관리형 쿠버네티스 클러스터를 제공한다.
-* [Kublr](https://kublr.com) offers enterprise-grade secure, scalable, highly reliable Kubernetes clusters on AWS, Azure, GCP, and on-premise. It includes out-of-the-box backup and disaster recovery, multi-cluster centralized logging and monitoring, and built-in alerting.
+* [Kublr](https://kublr.com)는 AWS, Azure, GCP 및 온-프레미스에서 기업 수준의 안전하고, 확장 가능하며, 신뢰성 높은 쿠버네티스 클러스터를 제공한다. 여기에는 즉시 사용 가능한 백업 및 재해 복구, 중앙 집중식 다중 클러스터 로깅 및 모니터링, 내장 경고 서비스가 포함된다.
-* [Madcore.Ai](https://madcore.ai) is devops-focused CLI tool for deploying Kubernetes infrastructure in AWS. Master, auto-scaling group nodes with spot-instances, ingress-ssl-lego, Heapster, and Grafana.
+* [Madcore.Ai](https://madcore.ai)는 AWS에서 쿠버네티스 인프라를 배포하기 위한 devops 중심의 CLI 도구이다. 또한 마스터, 스팟 인스턴스 그룹 노드 오토-스케일링, ingress-ssl-lego, Heapster, Grafana를 지원한다.
-* [OpenShift Dedicated](https://www.openshift.com/dedicated/) offers managed Kubernetes clusters powered by OpenShift.
+* [OpenShift Dedicated](https://www.openshift.com/dedicated/)는 OpenShift의 관리형 쿠버네티스 클러스터를 제공한다.
-* [OpenShift Online](https://www.openshift.com/features/) provides free hosted access for Kubernetes applications.
+* [OpenShift Online](https://www.openshift.com/features/)은 쿠버네티스 애플리케이션을 위해 호스트 된 무료 접근을 제공한다.
-* [Oracle Container Engine for Kubernetes](https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Concepts/contengoverview.htm) is a fully-managed, scalable, and highly available service that you can use to deploy your containerized applications to the cloud.
+* [Oracle Container Engine for Kubernetes](https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Concepts/contengoverview.htm)는 컨테이너 애플리케이션을 클라우드에 배포하는 데 사용할 수 있는 완벽하게 관리되고, 확장 가능하며, 가용성이 높은 서비스이다.
-* [Platform9](https://platform9.com/products/kubernetes/) offers managed Kubernetes on-premises or on any public cloud, and provides 24/7 health monitoring and alerting. (Kube2go, a web-UI driven Kubernetes cluster deployment service Platform9 released, has been integrated to Platform9 Sandbox.)
+* [Platform9](https://platform9.com/products/kubernetes/)는 온-프레미스 또는 모든 퍼블릭 클라우드에서 관리형 쿠버네티스를 제공하며, 24/7 상태 모니터링과 경고 서비스를 제공한다. (Platform9가 출시한 웹 UI 기반의 쿠버네티스 클러스터 배포 서비스인 Kube2go는 Platform9 Sandbox에 통합되었다.)
-* [Stackpoint.io](https://stackpoint.io) provides Kubernetes infrastructure automation and management for multiple public clouds.
+* [Stackpoint.io](https://stackpoint.io)는 다중 퍼블릭 클라우드에서 쿠버네티스 인프라 자동화 및 관리 기능을 제공한다.
## 턴키 클라우드 솔루션
-These solutions allow you to create Kubernetes clusters on a range of Cloud IaaS providers with only a
-few commands. These solutions are actively developed and have active community support.
+다음 솔루션들을 사용하면 다양한 클라우드 IaaS 공급자에서 몇 개의 명령어만으로 쿠버네티스 클러스터를 생성할 수 있다. 이 솔루션들은 활발히 개발되고 있으며 커뮤니티의 지원도 활발하다.
* [Agile Stacks](https://www.agilestacks.com/products/kubernetes)
* [Alibaba Cloud](/docs/setup/turnkey/alibaba-cloud/)
@@ -99,8 +97,8 @@ few commands. These solutions are actively developed and have active community s
* [Tectonic by CoreOS](https://coreos.com/tectonic)
## 온-프레미스 턴키 클라우드 솔루션
-These solutions allow you to create Kubernetes clusters on your internal, secure, cloud network with only a
-few commands.
+
+다음 솔루션들을 사용하면 몇 개의 명령어만으로 내부의 안전한 클라우드 네트워크에 쿠버네티스 클러스터를 생성할 수 있다.
* [Agile Stacks](https://www.agilestacks.com/products/kubernetes)
* [APPUiO](https://appuio.ch)
@@ -117,26 +115,21 @@ few commands.
## 사용자 지정 솔루션
-Kubernetes can run on a wide range of Cloud providers and bare-metal environments, and with many
-base operating systems.
+쿠버네티스는 넓은 범위의 클라우드 공급자와 베어메탈 환경에서, 그리고 많은 기반 운영 체제에서 동작할 수 있다.
-If you can find a guide below that matches your needs, use it. It may be a little out of date, but
-it will be easier than starting from scratch. If you do want to start from scratch, either because you
-have special requirements, or just because you want to understand what is underneath a Kubernetes
-cluster, try the [Getting Started from Scratch](/docs/setup/scratch/) guide.
+필요에 맞는 가이드를 아래에서 찾았다면, 그것을 사용하자. 약간 구식일 수도 있지만, 처음부터 시작하는 것보다 더 쉬울 것이다.
+특별한 요구사항이 있기 때문에, 또는 단지 쿠버네티스 클러스터의 아래에 무엇이 있는지를 이해하기 원하기 때문에
+처음부터 시작하기를 원한다면, [맨 처음부터 시작하기](/docs/setup/scratch/) 가이드를 시도하라.
-If you are interested in supporting Kubernetes on a new platform, see
-[Writing a Getting Started Guide](https://git.k8s.io/community/contributors/devel/writing-a-getting-started-guide.md).
+새로운 플랫폼에서 쿠버네티스 지원하는 것에 관심이 있다면, [Writing a Getting Started Guide](https://git.k8s.io/community/contributors/devel/writing-a-getting-started-guide.md) 가이드를 참고하라.
### 일반
-If you already have a way to configure hosting resources, use
-[kubeadm](/docs/setup/independent/create-cluster-kubeadm/) to easily bring up a cluster
-with a single command per machine.
+호스팅한 리소스를 구성하는 방법을 이미 알고 있다면, [kubeadm](/docs/setup/independent/create-cluster-kubeadm/)을 사용하여 머신당 단일 명령어로 쉽게 클러스터를 구축할 수 있다.
### 클라우드
-These solutions are combinations of cloud providers and operating systems not covered by the above solutions.
+다음 솔루션은 위의 솔루션에서 다루지 않는 클라우드 공급자와 운영체제의 조합이다.
* [CoreOS on AWS or GCE](/docs/setup/custom-cloud/coreos/)
* [Gardener](https://gardener.cloud/)
@@ -147,34 +140,35 @@ These solutions are combinations of cloud providers and operating systems not co
### 온-프레미스 VM
-* [CloudStack](/docs/setup/on-premises-vm/cloudstack/) (uses Ansible, CoreOS and flannel)
-* [Fedora (Multi Node)](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) (uses Fedora and flannel)
+* [CloudStack](/docs/setup/on-premises-vm/cloudstack/) (Ansible, CoreOS와 flannel를 사용)
+* [Fedora (Multi Node)](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) (Fedora와 flannel를 사용)
* [oVirt](/docs/setup/on-premises-vm/ovirt/)
-* [Vagrant](/docs/setup/custom-cloud/coreos/) (uses CoreOS and flannel)
-* [VMware](/docs/setup/custom-cloud/coreos/) (uses CoreOS and flannel)
+* [Vagrant](/docs/setup/custom-cloud/coreos/) (CoreOS와 flannel를 사용)
+* [VMware](/docs/setup/custom-cloud/coreos/) (CoreOS와 flannel를 사용)
* [VMware vSphere](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/)
-* [VMware vSphere, OpenStack, or Bare Metal](/docs/getting-started-guides/ubuntu/) (uses Juju, Ubuntu and flannel)
+* [VMware vSphere, OpenStack, or Bare Metal](/docs/getting-started-guides/ubuntu/) (Juju, Ubuntu와 flannel를 사용)
### 베어 메탈
* [CoreOS](/docs/setup/custom-cloud/coreos/)
+* [Digital Rebar](/docs/setup/on-premises-metal/krib/)
* [Fedora (Single Node)](/docs/getting-started-guides/fedora/fedora_manual_config/)
* [Fedora (Multi Node)](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/)
* [Kubernetes on Ubuntu](/docs/getting-started-guides/ubuntu/)
### 통합
-These solutions provide integration with third-party schedulers, resource managers, and/or lower level platforms.
+다음 솔루션들은 타사 스케줄러, 자원 관리자, 그리고/또는 낮은 레벨의 플랫폼과의 통합을 제공한다.
* [DCOS](/docs/setup/on-premises-vm/dcos/)
- * Community Edition DCOS uses AWS
- * Enterprise Edition DCOS supports cloud hosting, on-premises VMs, and bare metal
+ * Community Edition DCOS는 AWS를 사용한다.
+ * Enterprise Edition DCOS는 클라우드 호스팅, 온-프레미스 VM, 베어메탈을 지원한다.
## 솔루션 표
-Below is a table of all of the solutions listed above.
+아래는 위에 리스트 된 모든 솔루션의 표이다.
-IaaS Provider | Config. Mgmt. | OS | Networking | Docs | Support Level
+IaaS 공급자 | 구성 관리 | OS | 네트워킹 | 문서 | 지원 레벨
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ----------------------------
any | any | multi-support | any CNI | [docs](/docs/setup/independent/create-cluster-kubeadm/) | Project ([SIG-cluster-lifecycle](https://git.k8s.io/community/sig-cluster-lifecycle))
Google Kubernetes Engine | | | GCE | [docs](https://cloud.google.com/kubernetes-engine/docs/) | Commercial
@@ -219,29 +213,28 @@ any | [Gardener Cluster-Operator](https://kubernetes.io/blog/20
Alibaba Cloud Container Service For Kubernetes | ROS | CentOS | flannel/Terway | [docs](https://www.aliyun.com/product/containerservice) | Commercial
Agile Stacks | Terraform | CoreOS | multi-support | [docs](https://www.agilestacks.com/products/kubernetes) | Commercial
IBM Cloud Kubernetes Service | | Ubuntu | calico | [docs](https://console.bluemix.net/docs/containers/container_index.html) | Commercial
+Digital Rebar | kubeadm | any | metal | [docs](/docs/setup/on-premises-metal/krib/) | Community ([@digitalrebar](https://github.com/digitalrebar))
{{< note >}}
-**Note:** The above table is ordered by version test/used in nodes, followed by support level.
+**참고:** 위의 표는 노드에서 테스트/사용된 버전 순으로, 그 다음은 지원 레벨 순으로 정렬되어 있다.
{{< /note >}}
### 열 정의
-* **IaaS Provider** is the product or organization which provides the virtual or physical machines (nodes) that Kubernetes runs on.
-* **OS** is the base operating system of the nodes.
-* **Config. Mgmt.** is the configuration management system that helps install and maintain Kubernetes on the
+* **IaaS 공급자**는 쿠버네티스가 구동되는 가상 또는 물리적 머신(노드)를 제공하는 제품 또는 조직이다.
+* **OS**는 노드의 기본 운영 체제이다.
+* **구성 관리**는 노드에서 쿠버네티스를 설치하고 유지 관리하는 데 도움이 되는 구성 관리 시스템이다.
-nodes.
-* **Networking** is what implements the [networking model](/docs/concepts/cluster-administration/networking/). Those with networking type
- _none_ may not support more than a single node, or may support multiple VM nodes in a single physical node.
-* **Conformance** indicates whether a cluster created with this configuration has passed the project's conformance
- tests for supporting the API and base features of Kubernetes v1.0.0.
-* **Support Levels**
- * **Project**: Kubernetes committers regularly use this configuration, so it usually works with the latest release
- of Kubernetes.
- * **Commercial**: A commercial offering with its own support arrangements.
- * **Community**: Actively supported by community contributions. May not work with recent releases of Kubernetes.
- * **Inactive**: Not actively maintained. Not recommended for first-time Kubernetes users, and may be removed.
-* **Notes** has other relevant information, such as the version of Kubernetes used.
+* **네트워킹**은 [네트워킹 모델](/docs/concepts/cluster-administration/networking/)을 구현하는 것이다. 네트워크 유형이
+ _none_ 인 것은 단일 노드만 지원하거나, 단일 물리 노드 위의 여러 VM 노드를 지원할 수도 있다.
+* **적합성**은 명시된 구성으로 생성된 클러스터가 쿠버네티스 v1.0.0의 API 및 기본 기능을 지원하는지의 프로젝트 적합성 테스트 통과 여부를 나타낸다.
+* **지원 레벨**
+ * **프로젝트**: 쿠버네티스 커미터는 현재 구성을 정기적으로 사용하므로, 일반적으로 최신 쿠버네티스 릴리즈와 함께 동작한다.
+ * **상업용**: 자체 지원 계약을 가진 상업용 제품.
+ * **커뮤니티**: 커뮤니티 기여를 바탕으로 활발하게 지원. 쿠버네티스 최신 릴리즈에는 작동하지 않을 수도 있다.
+ * **비활성**: 현재 유지되지 않는다. 쿠버네티스 최초 사용자에게 권장하지 않으며, 삭제될 수도 있다.
+* **참고**는 사용된 쿠버네티스 버전과 같은 기타 관련 정보를 담고 있다.
diff --git a/content/ko/docs/tasks/_index.md b/content/ko/docs/tasks/_index.md
new file mode 100644
index 0000000000000..6e9fc1b326770
--- /dev/null
+++ b/content/ko/docs/tasks/_index.md
@@ -0,0 +1,87 @@
+---
+title: 태스크
+main_menu: true
+weight: 50
+content_template: templates/concept
+---
+
+{{< toc >}}
+
+{{% capture overview %}}
+
+This section of the Kubernetes documentation contains pages that
+show how to do individual tasks. A task page shows how to do a
+single thing, typically by giving a short sequence of steps.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Web UI (Dashboard)
+
+Deploy and access the Dashboard web user interface to help you manage and monitor containerized applications in a Kubernetes cluster.
+
+## Using the kubectl Command-line
+
+Install and setup the `kubectl` command-line tool used to directly manage Kubernetes clusters.
+
+## Configuring Pods and Containers
+
+Perform common configuration tasks for Pods and Containers.
+
+## Running Applications
+
+Perform common application management tasks, such as rolling updates, injecting information into pods, and horizontal Pod autoscaling.
+
+## Running Jobs
+
+Run Jobs using parallel processing.
+
+## Accessing Applications in a Cluster
+
+Configure load balancing, port forwarding, or setup firewall or DNS configurations to access applications in a cluster.
+
+## Monitoring, Logging, and Debugging
+
+Setup monitoring and logging to troubleshoot a cluster or debug a containerized application.
+
+## Accessing the Kubernetes API
+
+Learn various methods to directly access the Kubernetes API.
+
+## Using TLS
+
+Configure your application to trust and use the cluster root Certificate Authority (CA).
+
+## Administering a Cluster
+
+Learn common tasks for administering a cluster.
+
+## Administering Federation
+
+Configure components in a cluster federation.
+
+## Managing Stateful Applications
+
+Perform common tasks for managing Stateful applications, including scaling, deleting, and debugging StatefulSets.
+
+## Cluster Daemons
+
+Perform common tasks for managing a DaemonSet, such as performing a rolling update.
+
+## Managing GPUs
+
+Configure and schedule NVIDIA GPUs for use as a resource by nodes in a cluster.
+
+## Managing HugePages
+
+Configure and schedule huge pages as a schedulable resource in a cluster.
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+If you would like to write a task page, see
+[Creating a Documentation Pull Request](/docs/home/contribute/create-pull-request/).
+
+{{% /capture %}}
diff --git a/content/ko/docs/tasks/run-application/_index.md b/content/ko/docs/tasks/run-application/_index.md
new file mode 100644
index 0000000000000..203d2eb39c60c
--- /dev/null
+++ b/content/ko/docs/tasks/run-application/_index.md
@@ -0,0 +1,4 @@
+---
+title: "애플리케이션 실행"
+weight: 40
+---
diff --git a/content/ko/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/content/ko/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
new file mode 100644
index 0000000000000..b52ca2dac59fa
--- /dev/null
+++ b/content/ko/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
@@ -0,0 +1,375 @@
+---
+title: Horizontal Pod Autoscaler 연습
+content_template: templates/task
+weight: 100
+---
+
+{{% capture overview %}}
+
+Horizontal Pod Autoscaler는 CPU 사용량(또는 베타 지원인, 애플리케이션이 제공하는 다른 메트릭)을 관찰하여 레플리케이션 컨트롤러, 디플로이먼트 또는 레플리카 셋의 파드 개수를 자동으로 스케일한다.
+
+이 문서는 php-apache 서버를 대상으로 Horizontal Pod Autoscaler를 동작해보는 예제이다. Horizontal Pod Autoscaler 동작과 관련된 더 많은 정보를 위해서는 [Horizontal Pod Autoscaler 사용자 가이드](/docs/tasks/run-application/horizontal-pod-autoscale/)를 참고하기 바란다.
+
+{{% /capture %}}
+
+
+
+{{% capture prerequisites %}}
+
+이 예제는 버전 1.2 또는 이상의 쿠버네티스 클러스터와 kubectl을 필요로 한다.
+[메트릭-서버](https://github.com/kubernetes-incubator/metrics-server/) 모니터링을 클러스터에 배포하여 리소스 메트릭 API를 통해 메트릭을 제공해야 한다. Horizontal Pod Autoscaler가 메트릭을 수집할 때 해당 API를 사용한다. ([GCE 가이드](/docs/setup/turnkey/gce/)로 클러스터를 올리는 경우 메트릭-서버 모니터링은 디폴트로 활성화된다.)
+
+Horizontal Pod Autoscaler에 다양한 자원 메트릭을 적용하고자 하는 경우, 버전 1.6 또는 이상의 쿠버네티스 클러스터와 kubectl를 사용해야 한다. 또한, 사용자 정의 메트릭을 사용하기 위해서는, 클러스터가 사용자 정의 메트릭 API를 제공하는 API 서버와 통신할 수 있어야 한다. 마지막으로, 쿠버네티스 오브젝트와 관련이 없는 메트릭을 사용하는 경우 버전 1.10 또는 이상의 쿠버네티스 클러스터와 kubectl을 사용해야 하며, 외부 메트릭 API와 통신이 가능해야 한다. 자세한 사항은 [Horizontal Pod Autoscaler 사용자 가이드](/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics)를 참고하길 바란다.
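+
+참고로, 다음과 같이 리소스 메트릭 API가 클러스터에서 응답하는지 간단히 확인해 볼 수 있다. (메트릭-서버가 배포되어 있다고 가정한다.)
+
+```shell
+# 리소스 메트릭 API가 노드 메트릭을 반환하는지 확인한다
+$ kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
+```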
+
+{{% /capture %}}
+
+{{% capture steps %}}
+
+## php-apache 서버 구동 및 노출
+
+Horizontal Pod Autoscaler 시연을 위해 php-apache 이미지를 맞춤 제작한 Docker 이미지를 사용한다.
+Dockerfile은 다음과 같다.
+
+
+```
+FROM php:5-apache
+ADD index.php /var/www/html/index.php
+RUN chmod a+rx index.php
+```
+
+index.php는 CPU 과부하 연산을 수행한다.
+
+```
+<?php
+  $x = 0.0001;
+  for ($i = 0; $i <= 1000000; $i++) {
+    $x += sqrt($x);
+  }
+  echo "OK!";
+?>
+```
+
+첫 번째 단계로, 실행 중인 이미지의 디플로이먼트를 시작하고 서비스로 노출시킨다.
+
+```shell
+$ kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --expose --port=80
+service/php-apache created
+deployment.apps/php-apache created
+```
+
+## Horizontal Pod Autoscaler 생성
+
+이제 서비스가 동작중이므로, [kubectl autoscale](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/docs/user-guide/kubectl/kubectl_autoscale.md)를 사용하여 오토스케일러를 생성한다. 다음 명령어는 첫 번째 단계에서 만든 php-apache 디플로이먼트 파드의 개수를 1부터 10 사이로 유지하는 Horizontal Pod Autoscaler를 생성한다.
+간략히 얘기하면, HPA는 (디플로이먼트를 통한) 평균 CPU 사용량을 50%로 유지하기 위하여 레플리카의 개수를 늘리고 줄인다. ([kubectl run](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/docs/user-guide/kubectl/kubectl_run.md)으로 각 파드는 200 밀리코어까지 요청할 수 있고, 따라서 여기서 말하는 평균 CPU 사용은 100 밀리코어를 말한다.) 이에 대한 자세한 알고리즘은 [여기](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#autoscaling-algorithm)를 참고하기 바란다.
+
+```shell
+$ kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
+horizontalpodautoscaler.autoscaling/php-apache autoscaled
+```
+
+실행 중인 오토스케일러의 현재 상태를 확인해본다.
+```shell
+$ kubectl get hpa
+NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE
+php-apache Deployment/php-apache/scale 0% / 50% 1 10 1 18s
+
+```
+
+아직 서버로 어떠한 요청도 하지 않았기 때문에, 현재 CPU 소비는 0%임을 확인할 수 있다 (``CURRENT``은 디플로이먼트에 의해 제어되는 파드들의 평균을 나타낸다).
+
+## 부하 증가
+
+이번에는 부하가 증가함에 따라 오토스케일러가 어떻게 반응하는지를 살펴볼 것이다. 먼저 컨테이너를 하나 실행하고, php-apache 서비스에 무한루프의 쿼리를 전송한다(다른 터미널을 열어 수행하기 바란다).
+
+
+```shell
+$ kubectl run -i --tty load-generator --image=busybox /bin/sh
+
+Hit enter for command prompt
+
+$ while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done
+```
+
+실행 후, 약 1분 정도 후에 CPU 부하가 올라가는 것을 볼 수 있다.
+
+```shell
+$ kubectl get hpa
+NAME REFERENCE TARGET CURRENT MINPODS MAXPODS REPLICAS AGE
+php-apache Deployment/php-apache/scale 305% / 50% 305% 1 10 1 3m
+
+```
+
+CPU 소비가 305%까지 증가하였다. 결과적으로, 디플로이먼트의 레플리카 개수는 7개까지 증가하였다.
+
+```shell
+$ kubectl get deployment php-apache
+NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
+php-apache 7 7 7 7 19m
+```
+
+**참고** 때로는 레플리카의 개수를 안정화시키는데 몇 분이 걸릴 수 있다. 부하의 양은 환경에 따라 다르기 때문에, 최종 레플리카의 개수는 본 예제와 다를 수 있다.
+
+## 부하 중지
+
+본 예제를 마무리하기 위해 부하를 중단시킨다.
+`busybox` 컨테이너를 띄운 터미널에서, `Ctrl` + `C`로 부하 발생을 중단시킨다. 그런 다음 (몇 분 후에) 결과를 확인한다.
+
+```shell
+$ kubectl get hpa
+NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE
+php-apache Deployment/php-apache/scale 0% / 50% 1 10 1 11m
+
+$ kubectl get deployment php-apache
+NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
+php-apache 1 1 1 1 27m
+```
+
+CPU 사용량은 0으로 떨어졌고, HPA는 레플리카의 개수를 1로 낮췄다.
+
+{{< note >}}
+**참고** 레플리카 오토스케일링은 몇 분 정도 소요된다.
+{{< /note >}}
+
+{{% /capture %}}
+
+{{% capture discussion %}}
+
+## 다양한 메트릭 및 사용자 정의 메트릭을 기초로한 오토스케일링
+
+`php-apache` 디플로이먼트를 오토스케일링할 때 `autoscaling/v2beta2` API 버전을 사용하여 추가적인 메트릭을 제공할 수 있다.
+
+첫 번째로, `autoscaling/v2beta2` 형식으로 HorizontalPodAutoscaler YAML 파일을 생성한다.
+
+```shell
+$ kubectl get hpa.v2beta2.autoscaling -o yaml > /tmp/hpa-v2.yaml
+```
+
+에디터로 `/tmp/hpa-v2.yaml` 파일을 열면, 다음과 같은 YAML이 보일 것이다.
+
+```yaml
+apiVersion: autoscaling/v2beta2
+kind: HorizontalPodAutoscaler
+metadata:
+  name: php-apache
+  namespace: default
+spec:
+  scaleTargetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: php-apache
+  minReplicas: 1
+  maxReplicas: 10
+  metrics:
+  - type: Resource
+    resource:
+      name: cpu
+      target:
+        type: Utilization
+        averageUtilization: 50
+status:
+  observedGeneration: 1
+  lastScaleTime: <unknown>
+  currentReplicas: 1
+  desiredReplicas: 1
+  currentMetrics:
+  - type: Resource
+    resource:
+      name: cpu
+      current:
+        averageUtilization: 0
+        averageValue: 0
+```
+
+`targetCPUUtilizationPercentage` 필드가 `metrics` 배열로 대체되었다. CPU 사용량 메트릭은 *리소스 메트릭(resource metric)* 으로, 파드 컨테이너 자원의 백분율로 표현된다. CPU 외에 다른 메트릭을 지정할 수 있는데, 기본적으로 지원되는 다른 메트릭은 메모리뿐이다. 이 자원들은 클러스터마다 이름이 바뀌지 않으며, `metrics.k8s.io` API가 가용한 한 언제든지 사용할 수 있어야 한다.
+
+또한, `target` 타입을 `Utilization` 대신 `AverageValue`로, 그리고 `target.averageUtilization` 대신 `target.averageValue`를 설정하여 자원 메트릭을 퍼센트 대신 값으로 명시할 수 있다.
+
+사용자 정의 메트릭에는 *파드 메트릭* 과 *오브젝트 메트릭* 두 가지 종류가 있다. 이 메트릭은 클러스터에 특화된 이름을 가지고 있으며, 더 고급화된 클러스터 모니터링 설정이 필요하다.
+
+이러한 대체 메트릭 타입 중 첫 번째는 *파드 메트릭* 이다. 이 메트릭은 파드들을 설명하고, 파드들 간의 평균을 내며, 대상 값과 비교하여 레플리카 개수를 결정한다.
+
+이것들은 `AverageValue`의 `target`만을 지원한다는 것을 제외하면, 자원 메트릭과 매우 유사하게 동작한다.
+
+파드 메트릭은 이처럼 메트릭 블록을 사용하여 정의된다.
+
+
+```yaml
+type: Pods
+pods:
+  metric:
+    name: packets-per-second
+  target:
+    type: AverageValue
+    averageValue: 1k
+```
+
+두 번째 대체 메트릭 타입은 *오브젝트 메트릭* 이다. 이 메트릭은 파드를 기술하는 대신에 동일한 네임스페이스 내의 다른 오브젝트를 표현한다. 이 메트릭은 반드시 오브젝트로부터 가져올 필요는 없다. 단지 오브젝트를 기술할 뿐이다. 오브젝트 메트릭은 `Value`와 `AverageValue`의 `target` 타입을 지원한다. `Value`를 사용할 경우 대상은 API로부터 반환되는 메트릭과 직접 비교된다. `AverageValue`를 사용할 경우, 대상 값과 비교되기 이전에 사용자 정의 메트릭 API로부터 반환된 값은 파드의 개수로 나눠진다. 다음은 `requests-per-second` 메트릭을 YAML로 기술한 예제이다.
+
+```yaml
+type: Object
+object:
+  metric:
+    name: requests-per-second
+  describedObject:
+    apiVersion: extensions/v1beta1
+    kind: Ingress
+    name: main-route
+  target:
+    type: Value
+    value: 2k
+```
+
+이러한 메트릭 블록을 여러 개 제공하면, HorizontalPodAutoscaler는 각 메트릭을 차례로 고려한다. HorizontalPodAutoscaler는 각 메트릭에 대해 제안된 레플리카 개수를 계산하고, 그중 가장 높은 레플리카 개수를 선정한다.
+
+예를 들어, 네트워크 트래픽 메트릭을 수집하는 모니터링 시스템이 있는 경우, `kubectl edit` 명령어를 이용하여 다음과 같이 정의를 업데이트 할 수 있다.
+
+```yaml
+apiVersion: autoscaling/v2beta2
+kind: HorizontalPodAutoscaler
+metadata:
+  name: php-apache
+  namespace: default
+spec:
+  scaleTargetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: php-apache
+  minReplicas: 1
+  maxReplicas: 10
+  metrics:
+  - type: Resource
+    resource:
+      name: cpu
+      target:
+        type: Utilization
+        averageUtilization: 50
+  - type: Pods
+    pods:
+      metric:
+        name: packets-per-second
+      target:
+        type: AverageValue
+        averageValue: 1k
+  - type: Object
+    object:
+      metric:
+        name: requests-per-second
+      describedObject:
+        apiVersion: extensions/v1beta1
+        kind: Ingress
+        name: main-route
+      target:
+        type: Value
+        value: 10k
+status:
+  observedGeneration: 1
+  lastScaleTime: <unknown>
+  currentReplicas: 1
+  desiredReplicas: 1
+  currentMetrics:
+  - type: Resource
+    resource:
+      name: cpu
+      current:
+        averageUtilization: 0
+        averageValue: 0
+  - type: Object
+    object:
+      metric:
+        name: requests-per-second
+      describedObject:
+        apiVersion: extensions/v1beta1
+        kind: Ingress
+        name: main-route
+      current:
+        value: 10k
+```
+
+이후, HorizontalPodAutoscaler는 각 파드가 요청된 CPU의 약 50%를 사용하고, 초당 1000 패킷을 처리하고, 메인-루트 인그레스 뒤의 모든 파드들이 합쳐서 초당 10000 요청을 처리하도록 유지하려고 시도한다.
+
+### 보다 구체적인 메트릭을 기초로한 오토스케일링
+
+많은 메트릭 파이프라인에서는 이름 또는 _레이블(labels)_ 이라 불리는 추가 식별자 집합으로 메트릭을 기술할 수 있다. 그리고 모든 비 자원 메트릭 타입(파드, 오브젝트 그리고 아래 기술된 외부 타입)에 대해, 메트릭 파이프라인으로 전달되는 추가 레이블 셀렉터를 지정할 수 있다. 예를 들면, `verb` 레이블로 `http_requests` 메트릭을 수집하는 경우, 다음과 같이 메트릭 블록을 지정하여 GET 요청에 대해서만 크기를 조정할 수 있다.
+
+```yaml
+type: Object
+object:
+  metric:
+    name: http_requests
+    selector: {matchLabels: {verb: GET}}
+```
+
+이 셀렉터는 쿠버네티스의 레이블 셀렉터와 동일한 문법이다. 이름과 셀렉터가 여러 시리즈와 일치하는 경우, 모니터링 파이프라인이 해당 여러 시리즈를 단일 값으로 축소하는 방법을 결정한다. 셀렉터는 부가적인 속성이며, 대상 오브젝트(`Pods` 타입의 대상 파드, `Object` 타입으로 기술된 오브젝트)가 아닌 것을 기술하는 메트릭을 선택할 수 없다.
+
+### 쿠버네티스 오브젝트와 관련이 없는 메트릭을 기초로한 오토스케일링
+
+쿠버네티스 위에서 동작하는 애플리케이션은 쿠버네티스 클러스터의 어떤 오브젝트와도 관련이 없는 메트릭에 기반하여 오토스케일링을 할 수도 있다. 예로, 쿠버네티스 네임스페이스와 관련이 없는 서비스를 기초로한 메트릭을 들 수 있다. 쿠버네티스 버전 1.10 포함 이후 버전에서, *외부 메트릭*을 사용하여 이러한 유스케이스를 해결할 수 있다.
+
+외부 메트릭 사용시, 먼저 모니터링 시스템에 대한 이해가 있어야 한다. 이 설치는 사용자 정의 메트릭과 유사하다.
+외부 메트릭을 사용하면 모니터링 시스템의 사용 가능한 메트릭에 기반하여 클러스터를 오토스케일링 할 수 있다.
+위의 예제처럼 `name`과 `selector`를 갖는 `metric` 블록을 제공하고, `Object` 대신에 `External` 메트릭 타입을 사용한다.
+만일 여러 개의 시계열이 `metricSelector`와 일치하면, HorizontalPodAutoscaler가 값의 합을 사용한다.
+외부 메트릭들은 `Value`와 `AverageValue` 대상 타입을 모두 지원하고, `Object` 타입을 사용할 때와 똑같이 동작한다.
+예를 들면 애플리케이션이 호스팅된 대기열 서비스에서 작업을 처리하는 경우, 다음과 같이 HorizontalPodAutoscaler 매니페스트에 30개의 미해결 태스크 당 한 개의 워커를 지정하도록 추가할 수 있다.
+
+```yaml
+- type: External
+  external:
+    metric:
+      name: queue_messages_ready
+      selector: {matchLabels: {queue: worker_tasks}}
+    target:
+      type: AverageValue
+      averageValue: 30
+```
+
+가능하다면, 외부 메트릭 대신 사용자 정의 메트릭 대상 타입을 사용하길 권장한다. 왜냐하면, 클러스터 관리자가 사용자 정의 메트릭 API를 보안 관점에서 더 쉽게 보호할 수 있기 때문이다. 외부 메트릭 API는 잠재적으로 어떠한 메트릭에도 접근할 수 있기에, 클러스터 관리자는 API를 노출시킬 때 신중해야 한다.
+
+## 부록: Horizontal Pod Autoscaler 상태 조건
+
+HorizontalPodAutoscaler의 `autoscaling/v2beta2` 형식을 사용하면, HorizontalPodAutoscaler에서 쿠버네티스가 설정한 *상태 조건* 을 확인할 수 있다. 이 상태 조건들은 HorizontalPodAutoscaler가 스케일을 할 수 있는지, 어떤 방식으로든 제한되어 있는지 여부를 나타낸다.
+
+이 조건은 `status.conditions`에 나타난다. HorizontalPodAutoscaler에 영향을 주는 조건을 보기 위해 `kubectl describe hpa`를 사용할 수 있다.
+
+```shell
+$ kubectl describe hpa cm-test
+Name: cm-test
+Namespace: prom
+Labels: <none>
+Annotations: <none>
+CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000
+Reference: ReplicationController/cm-test
+Metrics: ( current / target )
+ "http_requests" on pods: 66m / 500m
+Min replicas: 1
+Max replicas: 4
+ReplicationController pods: 1 current / 1 desired
+Conditions:
+ Type Status Reason Message
+ ---- ------ ------ -------
+ AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale
+ ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_requests
+ ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range
+Events:
+```
+
+이 HorizontalPodAutoscaler의 경우, 여러 조건이 건강한 상태임을 볼 수 있다. 첫 번째 `AbleToScale`는 HPA가 스케일을 가져오고 업데이트할 수 있는지, 백 오프 관련 조건으로 스케일링이 방지되는지 여부를 나타낸다. 두 번째 `ScalingActive`는 HPA가 활성화되어 있는지(즉 대상 레플리카 개수가 0이 아닌지), 원하는 스케일을 계산할 수 있는지 여부를 나타낸다. 만약 `False`인 경우, 일반적으로 메트릭을 가져오는 데 문제가 있다. 마지막 조건인 `ScalingLimited`는 원하는 스케일이 HorizontalPodAutoscaler의 최대/최소값으로 제한되었음을 나타낸다. 이는 HorizontalPodAutoscaler의 최대/최소 레플리카 개수 제한을 올리거나 낮추고 싶을 수 있다는 표시이다.
+
+## 부록: 수량
+
+HorizontalPodAutoscaler와 메트릭 API에서 모든 메트릭은 쿠버네티스에서 사용하는 *수량* 숫자 표기법을 사용한다. 예를 들면, `10500m` 수량은 10진법 `10.5`으로 쓰인다. 메트릭 API들은 가능한 경우 접미사 없이 정수를 반환하며, 일반적으로 수량을 밀리단위로 반환한다. 10진수로 표현했을때, `1`과 `1500m` 또는 `1`과 `1.5` 로 메트릭 값을 나타낼 수 있다. 더 많은 정보를 위해서는 [수량에 관한 용어집](/docs/reference/glossary?core-object=true#term-quantity) 을 참고하기 바란다.
+
+## 부록: 다른 가능한 시나리오
+
+### 명시적으로 오토스케일러 만들기
+
+HorizontalPodAutoscaler를 생성하기 위해 `kubectl autoscale` 명령어를 사용하지 않고 명시적으로 다음 파일을 사용하여 만들 수 있다.
+
+{{< codenew file="application/hpa/php-apache.yaml" >}}
+
+다음 명령어를 실행하여 오토스케일러를 생성할 것이다.
+
+```shell
+$ kubectl create -f https://k8s.io/examples/application/hpa/php-apache.yaml
+horizontalpodautoscaler.autoscaling/php-apache created
+```
+
+{{% /capture %}}
diff --git a/content/ko/examples/application/deployment.yaml b/content/ko/examples/application/deployment.yaml
new file mode 100644
index 0000000000000..0f526b16c0ad2
--- /dev/null
+++ b/content/ko/examples/application/deployment.yaml
@@ -0,0 +1,19 @@
+apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
+kind: Deployment
+metadata:
+  name: nginx-deployment
+spec:
+  selector:
+    matchLabels:
+      app: nginx
+  replicas: 2 # tells deployment to run 2 pods matching the template
+  template:
+    metadata:
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - name: nginx
+        image: nginx:1.7.9
+        ports:
+        - containerPort: 80
diff --git a/content/ko/examples/application/hpa/php-apache.yaml b/content/ko/examples/application/hpa/php-apache.yaml
new file mode 100644
index 0000000000000..c73ae7d631b58
--- /dev/null
+++ b/content/ko/examples/application/hpa/php-apache.yaml
@@ -0,0 +1,13 @@
+apiVersion: autoscaling/v1
+kind: HorizontalPodAutoscaler
+metadata:
+  name: php-apache
+  namespace: default
+spec:
+  scaleTargetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: php-apache
+  minReplicas: 1
+  maxReplicas: 10
+  targetCPUUtilizationPercentage: 50