content/ko/case-studies/northwestern-mutual/index.html (+2 -2)
@@ -22,7 +22,7 @@ <h1> CASE STUDY:<img src="/images/northwestern_logo.png" style="margin-bottom:-1
<div class="cols">
<div class="col1">
<h2>Challenge</h2>
- In the spring of 2015, Northwestern Mutual acquired a fintech startup, LearnVest, and decided to take "Northwestern Mutual’s leading products and services and meld it with LearnVest’s digital experience and innovative financial planning platform," says Brad Williams, Director of Engineering for Client Experience, Northwestern Mutual. The company’s existing infrastructure had been optimized for batch workflows hosted on on-prem networks; deployments were very traditional, focused on following a process instead of providing deployment agility. "We had to build a platform that was elastically scalable, but also much more responsive, so we could quickly get data to the client website so our end-customers have the experience they expect," says Williams.
+ In the spring of 2015, Northwestern Mutual acquired a fintech startup, LearnVest, and decided to take "Northwestern Mutual’s leading products and services and meld it with LearnVest’s digital experience and innovative financial planning platform," says Brad Williams, Director of Engineering for Client Experience, Northwestern Mutual. The company’s existing infrastructure had been optimized for batch workflows hosted on on-prem networks; deployments were very traditional, focused on following a process instead of providing deployment agility. "We had to build a platform that was elastically scalable, but also much more responsive, so we could quickly get data to the client website so our end-customers have the experience they expect," says Williams.
<br>
<h2>Solution</h2>
The platform team came up with a plan for using the public cloud (AWS), Docker containers, and Kubernetes for orchestration. "Kubernetes gave us that base framework so teams can be very autonomous in what they’re building and deliver very quickly and frequently," says Northwestern Mutual Cloud Native Engineer Frank Greco Jr. The team also built and open-sourced <a href="https://github.com/northwesternmutual/kanali">Kanali</a>, a Kubernetes-native API management tool that uses OpenTracing, Jaeger, and gRPC.
@@ -63,7 +63,7 @@ <h2>For more than 160 years, Northwestern Mutual has maintained its industry lea
<div class="fullcol">
Williams and the rest of the platform team decided that the first step would be to start moving from private data centers to AWS. With a new microservice architecture in mind—and the freedom to implement what was best for the organization—they began using Docker containers. After looking into the various container orchestration options, they went with Kubernetes, even though it was still in beta at the time. "There was some debate whether we should build something ourselves, or just leverage that product and evolve with it," says Northwestern Mutual Cloud Native Engineer Frank Greco Jr. "Kubernetes has definitely been the right choice for us. It gave us that base framework so teams can be autonomous in what they’re building and deliver very quickly and frequently."<br><br>
As early adopters, the team had to do a lot of work with Ansible scripts to stand up the cluster. "We had a lot of hard security requirements given the nature of our business," explains Bryan Pfremmer, App Platform Teams Manager, Northwestern Mutual. "We found ourselves running a configuration that very few other people ever tried." The client experience group was the first to use the new platform; today, a few hundred of the company’s 1,500 engineers are using it and more are eager to get on board.
- The results have been dramatic. Before, infrastructure deployments could take two weeks; now they are done in a matter of minutes. With a focus on infrastructure automation and self-service, "You can take an app to production in that same day if you want to," says Pfremmer.
+ The results have been dramatic. Before, infrastructure deployments could take two weeks; now they are done in a matter of minutes. With a focus on infrastructure automation and self-service, "You can take an app to production in that same day if you want to," says Pfremmer.
content/ko/case-studies/ocado/index.html (+1 -1)
@@ -32,7 +32,7 @@ <h2>Solution</h2>
</div>
<div class="col2">
-
+
<h2>Impact</h2>
With Kubernetes, "the speed from idea to implementation to deployment is amazing," says Bryant. "I’ve seen features go from development to production inside of a week now. In the old world, a new application deployment could easily take over a month." And because there are no longer restrictive deployment windows in the warehouses, the rate of deployments has gone from as few as two per week to dozens per week. Ocado has also achieved cost savings because Kubernetes gives the team the ability to have more fine-grained resource allocation. Says DevOps Team Leader Kevin McCormack: "We have more confidence in the resource allocation/separation features of Kubernetes, so we have been able to migrate from around 10 fleet clusters to one Kubernetes cluster." The team also uses <a href="https://prometheus.io/">Prometheus</a> and <a href="https://grafana.com/">Grafana</a> to visualize resource allocation, and makes the data available to developers. "The increased visibility offered by Prometheus means developers are more aware of what they are using and how their use impacts others, especially since we now have one shared cluster," says McCormack. "I’d estimate that we use about 15-25% less hardware resources to host the same applications in Kubernetes in our test environments."
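The kind of per-team visibility McCormack describes can be read straight out of Prometheus. Below is a minimal sketch in Go using the Prometheus HTTP API client; the server address is a placeholder and the query is one plausible per-namespace memory breakdown built on the standard cAdvisor metric container_memory_working_set_bytes, not Ocado's actual dashboards.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/prometheus/client_golang/api"
	promv1 "github.com/prometheus/client_golang/api/prometheus/v1"
)

func main() {
	// The address is an assumption; point it at your own Prometheus server.
	client, err := api.NewClient(api.Config{Address: "http://prometheus.example:9090"})
	if err != nil {
		log.Fatal(err)
	}
	v1api := promv1.NewAPI(client)

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Working-set memory summed per namespace: the per-team breakdown that
	// makes shared-cluster usage visible to developers.
	query := `sum by (namespace) (container_memory_working_set_bytes{container!=""})`
	result, warnings, err := v1api.Query(ctx, query, time.Now())
	if err != nil {
		log.Fatal(err)
	}
	if len(warnings) > 0 {
		log.Printf("warnings: %v", warnings)
	}
	fmt.Println(result)
}
```

A query like this can also back a Grafana panel, which is the sort of shared-cluster view the quote refers to.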
content/ko/case-studies/workiva/index.html (+5 -5)
@@ -30,12 +30,12 @@ <h2>Challenge</h2>
<a href="https://www.workiva.com/">Workiva</a> offers a cloud-based platform for managing and reporting business data. This SaaS product, Wdesk, is used by more than 70 percent of the Fortune 500 companies. As the company made the shift from a monolith to a more distributed, microservice-based system, "We had a number of people working on this, all on different teams, so we needed to identify what the issues were and where the bottlenecks were," says Senior Software Architect MacLeod Broad. With back-end code running on Google App Engine, Google Compute Engine, as well as Amazon Web Services, Workiva needed a tracing system that was agnostic of platform. While preparing one of the company’s first products utilizing AWS, which involved a "sync and link" feature that linked data from spreadsheets built in the new application with documents created in the old application on Workiva’s existing system, Broad’s team found an ideal use case for tracing: There were circular dependencies, and optimizations often turned out to be micro-optimizations that didn’t impact overall speed.
<br>
-
+
</div>
<div class="col2">
<h2>Solution</h2>
- Broad’s team introduced the platform-agnostic distributed tracing system OpenTracing to help them pinpoint the bottlenecks.
+ Broad’s team introduced the platform-agnostic distributed tracing system OpenTracing to help them pinpoint the bottlenecks.
<br>
<h2>Impact</h2>
Now used throughout the company, OpenTracing produced immediate results. Software Engineer Michael Davis reports: "Tracing has given us immediate, actionable insight into how to improve our service. Through a combination of seeing where each call spends its time, as well as which calls are most often used, we were able to reduce our average response time by 95 percent (from 600ms to 30ms) in a single fix."
@@ -68,7 +68,7 @@ <h2>Last fall, MacLeod Broad’s platform team at Workiva was prepping one of th
</div>
<section class="section3">
<div class="fullcol">
-
+
Simply put, it was an ideal use case for tracing. "A tracing system can at a glance explain an architecture, narrow down a performance bottleneck and zero in on it, and generally just help direct an investigation at a high level," says Broad. "Being able to do that at a glance is much faster than at a meeting or with three days of debugging, and it’s a lot faster than never figuring out the problem and just moving on."<br><br>
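Concretely, the at-a-glance waterfall Broad describes is built out of spans, one per unit of work. A minimal, self-contained sketch using the opentracing-go API; the function and operation names (syncAndLink, load-spreadsheet, link-document) are invented for illustration and are not Workiva's code.

```go
package main

import (
	"context"
	"time"

	opentracing "github.com/opentracing/opentracing-go"
)

// syncAndLink stands in for one unit of work; the name is illustrative only.
func syncAndLink(ctx context.Context, docID string) {
	// StartSpanFromContext creates a child of whatever span is already in ctx,
	// so nested calls show up as nested bars on the trace waterfall.
	span, ctx := opentracing.StartSpanFromContext(ctx, "sync-and-link")
	defer span.Finish()
	span.SetTag("doc.id", docID)

	loadSpreadsheet(ctx, docID)
	linkDocument(ctx, docID)
}

func loadSpreadsheet(ctx context.Context, docID string) {
	span, _ := opentracing.StartSpanFromContext(ctx, "load-spreadsheet")
	defer span.Finish()
	time.Sleep(10 * time.Millisecond) // placeholder for real work
}

func linkDocument(ctx context.Context, docID string) {
	span, _ := opentracing.StartSpanFromContext(ctx, "link-document")
	defer span.Finish()
	time.Sleep(10 * time.Millisecond) // placeholder for real work
}

func main() {
	// With no tracer registered, opentracing-go falls back to a no-op
	// GlobalTracer; wire in a Jaeger or Stackdriver tracer to record real traces.
	syncAndLink(context.Background(), "example-doc")
}
```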
With Workiva’s back-end code running on <a href="https://cloud.google.com/compute/">Google Compute Engine</a> as well as App Engine and AWS, Broad knew that he needed a tracing system that was platform agnostic. "We were looking at different tracing solutions," he says, "and we decided that because it seemed to be a very evolving market, we didn’t want to get stuck with one vendor. So OpenTracing seemed like the cleanest way to avoid vendor lock-in on what backend we actually had to use."<br><br>
Once they introduced OpenTracing into this first use case, Broad says, "The trace made it super obvious where the bottlenecks were." Even though everyone had assumed it was Workiva’s existing code that was slowing things down, that wasn’t exactly the case. "It looked like the existing code was slow only because it was reaching out to our next-generation services, and they were taking a very long time to service all those requests," says Broad. "On the waterfall graph you can see the exact same work being done on every request when it was calling back in. So every service request would look the exact same for every response being paged out. And then it was just a no-brainer of, ‘Why is it doing all this work again?’"<br><br>
@@ -90,15 +90,15 @@ <h2>Last fall, MacLeod Broad’s platform team at Workiva was prepping one of th
Some teams were won over quickly. "Tracing has given us immediate, actionable insight into how to improve our [Workspaces] service," says Software Engineer Michael Davis. "Through a combination of seeing where each call spends its time, as well as which calls are most often used, we were able to reduce our average response time by 95 percent (from 600ms to 30ms) in a single fix." <br><br>
Most of Workiva’s major products are now traced using OpenTracing, with data pushed into <a href="https://cloud.google.com/stackdriver/">Google StackDriver</a>. Even the products that aren’t fully traced have some components and libraries that are. <br><br>
Broad points out that because some of the engineers were working on App Engine and already had experience with the platform’s Appstats library for profiling performance, it didn’t take much to get them used to using OpenTracing. But others were a little more reluctant. "The biggest hindrance to adoption I think has been the concern about how much latency is introducing tracing [and StackDriver] going to cost," he says. "People are also very concerned about adding middleware to whatever they’re working on. Questions about passing the context around and how that’s done were common. A lot of our Go developers were fine with it, because they were already doing that in one form or another. Our Java developers were not super keen on doing that because they’d used other systems that didn’t require that."<br><br>
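The context-passing question Broad raises usually comes down to a single piece of HTTP middleware. A hedged sketch using the opentracing-go API (not Workiva's internal library): extract any inbound span context from the request headers, start a server-side span, and attach it to the request context so downstream handlers can create child spans without touching tracer plumbing.

```go
package tracing

import (
	"net/http"

	opentracing "github.com/opentracing/opentracing-go"
	"github.com/opentracing/opentracing-go/ext"
)

// Middleware wraps an http.Handler so that every request carries a span in
// its context; handlers below it only ever deal with context.Context.
func Middleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		tracer := opentracing.GlobalTracer()

		// Pick up a parent span context from the incoming headers, if any;
		// a missing or malformed header simply starts a new trace.
		parent, _ := tracer.Extract(
			opentracing.HTTPHeaders,
			opentracing.HTTPHeadersCarrier(r.Header),
		)

		span := tracer.StartSpan(r.Method+" "+r.URL.Path, ext.RPCServerOption(parent))
		defer span.Finish()

		// Make the span available to downstream code via the request context.
		ctx := opentracing.ContextWithSpan(r.Context(), span)
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}
```

Wrapping the top-level router once is typically all the middleware a service needs; from there, the span-per-unit-of-work pattern in the earlier sketch takes over.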
- But the benefits clearly outweighed the concerns, and today, Workiva’s official policy is to use tracing.
+ But the benefits clearly outweighed the concerns, and today, Workiva’s official policy is to use tracing.
In fact, Broad believes that tracing naturally fits in with Workiva’s existing logging and metrics systems. "This was the way we presented it internally, and also the way we designed our use," he says. "Our traces are logged in the exact same mechanism as our app metric and logging data, and they get pushed the exact same way. So we treat all that data exactly the same when it’s being created and when it’s being recorded. We have one internal library that we use for logging, telemetry, analytics and tracing."
</div>
<div class="banner5">
<div class="banner5text">
- "Tracing has given us immediate, actionable insight into how to improve our [Workspaces] service. Through a combination of seeing where each call spends its time, as well as which calls are most often used, we were able to reduce our average response time by 95 percent (from 600ms to 30ms) in a single fix." <span style="font-size:14px;letter-spacing:0.12em;padding-top:20px;text-transform:uppercase"><br>— Michael Davis, Software Engineer, Workiva </span>
+ "Tracing has given us immediate, actionable insight into how to improve our [Workspaces] service. Through a combination of seeing where each call spends its time, as well as which calls are most often used, we were able to reduce our average response time by 95 percent (from 600ms to 30ms) in a single fix." <span style="font-size:14px;letter-spacing:0.12em;padding-top:20px;text-transform:uppercase"><br>— Michael Davis, Software Engineer, Workiva </span>
content/ko/docs/concepts/architecture/cloud-controller.md (+1 -1)
@@ -89,7 +89,7 @@ The CCM's key functionality is derived from the KCM. As mentioned in the previous section
The node controller contains the cloud-dependent functionality of the kubelet. Before the CCM was introduced, the kubelet was responsible for initializing a node with cloud-specific details such as IP addresses, region/zone labels, and instance type information. With the introduction of the CCM, this initialization work has moved from the kubelet to the CCM.
- In this new model, the kubelet initializes a node without cloud-specific information. However, the kubelet adds a taint to a newly created node so that it remains unschedulable until the CCM has initialized it with cloud-specific information. The CCM then removes this taint.
+ In this new model, the kubelet initializes a node without cloud-specific information. However, the kubelet adds a taint to a newly created node so that it remains unschedulable until the CCM has initialized it with cloud-specific information. The CCM then removes this taint.
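The taint referred to here is node.cloudprovider.kubernetes.io/uninitialized, which the kubelet applies when it starts with an external cloud provider and the cloud-controller-manager removes once node initialization is complete. As an illustration only (not part of this patch), a short client-go sketch that lists nodes still carrying the taint:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Load the local kubeconfig; adjust the path for your environment.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, node := range nodes.Items {
		for _, taint := range node.Spec.Taints {
			// The kubelet sets this taint when running with an external cloud
			// provider; the CCM removes it after initializing the node.
			if taint.Key == "node.cloudprovider.kubernetes.io/uninitialized" {
				fmt.Printf("%s is still waiting for cloud-controller-manager initialization\n", node.Name)
			}
		}
	}
}
```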