
Commit 22c8d2f

Author: Yushiro FURUKAWA (committed)
Remove trailing spaces from ko documents (kubernetes#16742)
1 parent 331cc6f commit 22c8d2f

94 files changed: +844 -844 lines changed


content/ko/case-studies/northwestern-mutual/index.html

+2 -2
@@ -22,7 +22,7 @@ <h1> CASE STUDY:<img src="/images/northwestern_logo.png" style="margin-bottom:-1
 <div class="cols">
 <div class="col1">
 <h2>Challenge</h2>
-In the spring of 2015, Northwestern Mutual acquired a fintech startup, LearnVest, and decided to take "Northwestern Mutual’s leading products and services and meld it with LearnVest’s digital experience and innovative financial planning platform," says Brad Williams, Director of Engineering for Client Experience, Northwestern Mutual. The company’s existing infrastructure had been optimized for batch workflows hosted on on-prem networks; deployments were very traditional, focused on following a process instead of providing deployment agility. "We had to build a platform that was elastically scalable, but also much more responsive, so we could quickly get data to the client website so our end-customers have the experience they expect," says Williams.
+In the spring of 2015, Northwestern Mutual acquired a fintech startup, LearnVest, and decided to take "Northwestern Mutual’s leading products and services and meld it with LearnVest’s digital experience and innovative financial planning platform," says Brad Williams, Director of Engineering for Client Experience, Northwestern Mutual. The company’s existing infrastructure had been optimized for batch workflows hosted on on-prem networks; deployments were very traditional, focused on following a process instead of providing deployment agility. "We had to build a platform that was elastically scalable, but also much more responsive, so we could quickly get data to the client website so our end-customers have the experience they expect," says Williams.
 <br>
 <h2>Solution</h2>
 The platform team came up with a plan for using the public cloud (AWS), Docker containers, and Kubernetes for orchestration. "Kubernetes gave us that base framework so teams can be very autonomous in what they’re building and deliver very quickly and frequently," says Northwestern Mutual Cloud Native Engineer Frank Greco Jr. The team also built and open-sourced <a href="https://github.com/northwesternmutual/kanali">Kanali</a>, a Kubernetes-native API management tool that uses OpenTracing, Jaeger, and gRPC.
@@ -63,7 +63,7 @@ <h2>For more than 160 years, Northwestern Mutual has maintained its industry lea
 <div class="fullcol">
 Williams and the rest of the platform team decided that the first step would be to start moving from private data centers to AWS. With a new microservice architecture in mind—and the freedom to implement what was best for the organization—they began using Docker containers. After looking into the various container orchestration options, they went with Kubernetes, even though it was still in beta at the time. "There was some debate whether we should build something ourselves, or just leverage that product and evolve with it," says Northwestern Mutual Cloud Native Engineer Frank Greco Jr. "Kubernetes has definitely been the right choice for us. It gave us that base framework so teams can be autonomous in what they’re building and deliver very quickly and frequently."<br><br>
 As early adopters, the team had to do a lot of work with Ansible scripts to stand up the cluster. "We had a lot of hard security requirements given the nature of our business," explains Bryan Pfremmer, App Platform Teams Manager, Northwestern Mutual. "We found ourselves running a configuration that very few other people ever tried." The client experience group was the first to use the new platform; today, a few hundred of the company’s 1,500 engineers are using it and more are eager to get on board.
-The results have been dramatic. Before, infrastructure deployments could take two weeks; now, it is done in a matter of minutes. Now with a focus on Infrastructure automation, and self-service, "You can take an app to production in that same day if you want to," says Pfremmer.
+The results have been dramatic. Before, infrastructure deployments could take two weeks; now, it is done in a matter of minutes. Now with a focus on Infrastructure automation, and self-service, "You can take an app to production in that same day if you want to," says Pfremmer.


 </div>

content/ko/case-studies/ocado/index.html

+1 -1
@@ -32,7 +32,7 @@ <h2>Solution</h2>
 </div>

 <div class="col2">
-
+

 <h2>Impact</h2>
 With Kubernetes, "the speed from idea to implementation to deployment is amazing," says Bryant. "I’ve seen features go from development to production inside of a week now. In the old world, a new application deployment could easily take over a month." And because there are no longer restrictive deployment windows in the warehouses, the rate of deployments has gone from as few as two per week to dozens per week. Ocado has also achieved cost savings because Kubernetes gives the team the ability to have more fine-grained resource allocation. Says DevOps Team Leader Kevin McCormack: "We have more confidence in the resource allocation/separation features of Kubernetes, so we have been able to migrate from around 10 fleet clusters to one Kubernetes cluster." The team also uses <a href="https://prometheus.io/">Prometheus</a> and <a href="https://grafana.com/">Grafana</a> to visualize resource allocation, and makes the data available to developers. "The increased visibility offered by Prometheus means developers are more aware of what they are using and how their use impacts others, especially since we now have one shared cluster," says McCormack. "I’d estimate that we use about 15-25% less hardware resources to host the same applications in Kubernetes in our test environments."
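
As context for the paragraph above (nothing this commit touches), resource-usage views like the ones McCormack describes are typically backed by queries against the Prometheus HTTP API; a minimal sketch, where the server URL and the available metrics and labels are assumptions that depend on how monitoring is deployed in a given cluster:

```shell
# Hypothetical Prometheus endpoint; the URL and the metric/label names depend
# on the cluster's monitoring stack (cAdvisor metrics are assumed here).
curl -s 'http://prometheus.example.local:9090/api/v1/query' \
  --data-urlencode 'query=sum(rate(container_cpu_usage_seconds_total[5m])) by (namespace)'
```

Grafana dashboards of the kind mentioned above are usually built from the same sort of query.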

content/ko/case-studies/workiva/index.html

+5 -5
@@ -30,12 +30,12 @@ <h2>Challenge</h2>
 <a href="https://www.workiva.com/">Workiva</a> offers a cloud-based platform for managing and reporting business data. This SaaS product, Wdesk, is used by more than 70 percent of the Fortune 500 companies. As the company made the shift from a monolith to a more distributed, microservice-based system, "We had a number of people working on this, all on different teams, so we needed to identify what the issues were and where the bottlenecks were," says Senior Software Architect MacLeod Broad. With back-end code running on Google App Engine, Google Compute Engine, as well as Amazon Web Services, Workiva needed a tracing system that was agnostic of platform. While preparing one of the company’s first products utilizing AWS, which involved a "sync and link" feature that linked data from spreadsheets built in the new application with documents created in the old application on Workiva’s existing system, Broad’s team found an ideal use case for tracing: There were circular dependencies, and optimizations often turned out to be micro-optimizations that didn’t impact overall speed.

 <br>
-
+
 </div>

 <div class="col2">
 <h2>Solution</h2>
-Broad’s team introduced the platform-agnostic distributed tracing system OpenTracing to help them pinpoint the bottlenecks.
+Broad’s team introduced the platform-agnostic distributed tracing system OpenTracing to help them pinpoint the bottlenecks.
 <br>
 <h2>Impact</h2>
 Now used throughout the company, OpenTracing produced immediate results. Software Engineer Michael Davis reports: "Tracing has given us immediate, actionable insight into how to improve our service. Through a combination of seeing where each call spends its time, as well as which calls are most often used, we were able to reduce our average response time by 95 percent (from 600ms to 30ms) in a single fix."
@@ -68,7 +68,7 @@ <h2>Last fall, MacLeod Broad’s platform team at Workiva was prepping one of th
 </div>
 <section class="section3">
 <div class="fullcol">
-
+
 Simply put, it was an ideal use case for tracing. "A tracing system can at a glance explain an architecture, narrow down a performance bottleneck and zero in on it, and generally just help direct an investigation at a high level," says Broad. "Being able to do that at a glance is much faster than at a meeting or with three days of debugging, and it’s a lot faster than never figuring out the problem and just moving on."<br><br>
 With Workiva’s back-end code running on <a href="https://cloud.google.com/compute/">Google Compute Engine</a> as well as App Engine and AWS, Broad knew that he needed a tracing system that was platform agnostic. "We were looking at different tracing solutions," he says, "and we decided that because it seemed to be a very evolving market, we didn’t want to get stuck with one vendor. So OpenTracing seemed like the cleanest way to avoid vendor lock-in on what backend we actually had to use."<br><br>
 Once they introduced OpenTracing into this first use case, Broad says, "The trace made it super obvious where the bottlenecks were." Even though everyone had assumed it was Workiva’s existing code that was slowing things down, that wasn’t exactly the case. "It looked like the existing code was slow only because it was reaching out to our next-generation services, and they were taking a very long time to service all those requests," says Broad. "On the waterfall graph you can see the exact same work being done on every request when it was calling back in. So every service request would look the exact same for every response being paged out. And then it was just a no-brainer of, ‘Why is it doing all this work again?’"<br><br>
@@ -90,15 +90,15 @@ <h2>Last fall, MacLeod Broad’s platform team at Workiva was prepping one of th
 Some teams were won over quickly. "Tracing has given us immediate, actionable insight into how to improve our [Workspaces] service," says Software Engineer Michael Davis. "Through a combination of seeing where each call spends its time, as well as which calls are most often used, we were able to reduce our average response time by 95 percent (from 600ms to 30ms) in a single fix." <br><br>
 Most of Workiva’s major products are now traced using OpenTracing, with data pushed into <a href="https://cloud.google.com/stackdriver/">Google StackDriver</a>. Even the products that aren’t fully traced have some components and libraries that are. <br><br>
 Broad points out that because some of the engineers were working on App Engine and already had experience with the platform’s Appstats library for profiling performance, it didn’t take much to get them used to using OpenTracing. But others were a little more reluctant. "The biggest hindrance to adoption I think has been the concern about how much latency is introducing tracing [and StackDriver] going to cost," he says. "People are also very concerned about adding middleware to whatever they’re working on. Questions about passing the context around and how that’s done were common. A lot of our Go developers were fine with it, because they were already doing that in one form or another. Our Java developers were not super keen on doing that because they’d used other systems that didn’t require that."<br><br>
-But the benefits clearly outweighed the concerns, and today, Workiva’s official policy is to use tracing."
+But the benefits clearly outweighed the concerns, and today, Workiva’s official policy is to use tracing."
 In fact, Broad believes that tracing naturally fits in with Workiva’s existing logging and metrics systems. "This was the way we presented it internally, and also the way we designed our use," he says. "Our traces are logged in the exact same mechanism as our app metric and logging data, and they get pushed the exact same way. So we treat all that data exactly the same when it’s being created and when it’s being recorded. We have one internal library that we use for logging, telemetry, analytics and tracing."


 </div>

 <div class="banner5">
 <div class="banner5text">
-"Tracing has given us immediate, actionable insight into how to improve our [Workspaces] service. Through a combination of seeing where each call spends its time, as well as which calls are most often used, we were able to reduce our average response time by 95 percent (from 600ms to 30ms) in&nbsp;a&nbsp;single&nbsp;fix." <span style="font-size:14px;letter-spacing:0.12em;padding-top:20px;text-transform:uppercase"><br>— Michael Davis, Software Engineer, Workiva </span>
+"Tracing has given us immediate, actionable insight into how to improve our [Workspaces] service. Through a combination of seeing where each call spends its time, as well as which calls are most often used, we were able to reduce our average response time by 95 percent (from 600ms to 30ms) in&nbsp;a&nbsp;single&nbsp;fix." <span style="font-size:14px;letter-spacing:0.12em;padding-top:20px;text-transform:uppercase"><br>— Michael Davis, Software Engineer, Workiva </span>
 </div>
 </div>

content/ko/docs/concepts/architecture/cloud-controller.md

+1 -1
@@ -89,7 +89,7 @@ The major functionality of the CCM is derived from the KCM. As mentioned in the previous section

 The node controller contains the cloud-dependent functionality of the kubelet. Before the CCM was introduced, the kubelet was responsible for initializing a node with cloud-specific details such as its IP address, region/zone labels, and instance type information. With the introduction of the CCM, this initialization work has moved from the kubelet into the CCM.

-In this new model, the kubelet initializes a node without cloud-specific information. However, it adds a taint to the newly created node so that the node is unschedulable until the CCM has initialized it with cloud-specific information, and then removes this taint.
+In this new model, the kubelet initializes a node without cloud-specific information. However, it adds a taint to the newly created node so that the node is unschedulable until the CCM has initialized it with cloud-specific information, and then removes this taint.

 ## Plugin mechanism

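As context for the hunk above (nothing in this commit adds it), the taint the kubelet applies in this model is `node.cloudprovider.kubernetes.io/uninitialized`. A minimal sketch of observing it, assuming a kubelet started with an external cloud provider and a node named `my-node`:

```shell
# Assumption: the kubelet runs with --cloud-provider=external, so it taints
# the node when it registers; the cloud controller manager removes the taint
# once cloud-specific initialization is done.
kubectl get node my-node -o jsonpath='{.spec.taints}'

# Illustrative output while the node is still uninitialized:
# [{"effect":"NoSchedule","key":"node.cloudprovider.kubernetes.io/uninitialized","value":"true"}]
```
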
content/ko/docs/concepts/architecture/master-node-communication.md

+3 -3
@@ -23,7 +23,7 @@ weight: 20
 In a typical deployment, the apiserver is configured to listen for remote connections on a secure HTTPS port (443),
 with one or more forms of client [authentication](/docs/reference/access-authn-authz/authentication/) enabled.
 In particular, if [anonymous requests](/docs/reference/access-authn-authz/authentication/#anonymous-requests)
-or [service account tokens](/docs/reference/access-authn-authz/authentication/#service-account-tokens)
+or [service account tokens](/docs/reference/access-authn-authz/authentication/#service-account-tokens)
 are allowed, one or more forms of [authorization](/docs/reference/access-authn-authz/authorization/) must be enabled.

 Nodes should be provisioned so that they can connect securely to the apiserver with valid client credentials
@@ -35,7 +35,7 @@ weight: 20
 Pods that wish to connect to the apiserver can do so securely by leveraging a service account, so that
 Kubernetes automatically injects the public root certificate and a valid bearer token
 into the pod when it is instantiated.
-The `kubernetes` service (in all namespaces) is configured with a virtual IP address that is
+The `kubernetes` service (in all namespaces) is configured with a virtual IP address that is
 redirected (via kube-proxy) to the HTTPS endpoint on the apiserver.

 The master components also communicate with the cluster apiserver over a secure port.
@@ -62,7 +62,7 @@ The connections from the apiserver to the kubelet are used for the following
 These connections terminate at the kubelet's HTTPS endpoint. By default,
 the apiserver does not verify the kubelet's serving certificate,
 which makes the connection subject to man-in-the-middle attacks and **unsafe**
-to run over untrusted and/or public networks.
+to run over untrusted and/or public networks.

 To verify this connection, use the `--kubelet-certificate-authority` flag to provide the apiserver
 with a root certificate bundle to use to verify the kubelet's serving certificate.
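
As context for the hunk above (not part of this commit), wiring up that verification amounts to pointing the apiserver at a CA bundle that signed the kubelet serving certificates. A minimal sketch, where the file paths are illustrative and most apiserver flags are omitted:

```shell
# Assumption: kubelet serving certificates are signed by the CA bundle at the
# path below; with --kubelet-certificate-authority set, the apiserver verifies
# the kubelet's serving certificate instead of skipping verification.
kube-apiserver \
  --kubelet-certificate-authority=/etc/kubernetes/pki/kubelet-ca.crt \
  --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt \
  --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
  # ...remaining apiserver flags omitted
```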

content/ko/docs/concepts/architecture/nodes.md

+3 -3
@@ -22,7 +22,7 @@ weight: 10
 * [Capacity and Allocatable](#capacity)
 * [Info](#info)

-The status and details of a node can be checked with the following command:
+The status and details of a node can be checked with the following command:
 ```shell
 kubectl describe node <insert-node-name-here>
 ```
@@ -122,7 +122,7 @@ If the status of the ready condition [kube-controller-manager](/docs/admin/kube-controll

 ### Node controller

-The node controller is a Kubernetes master component that manages various aspects of nodes.
+The node controller is a Kubernetes master component that manages various aspects of nodes.

 The node controller has multiple roles over a node's lifetime. The first is assigning a CIDR block to the node when it is registered (if CIDR assignment is enabled).

@@ -194,7 +194,7 @@ Pods created by the DaemonSet controller bypass the Kubernetes scheduler

 {{< feature-state state="alpha" >}}

-If the `TopologyManager`
+If the `TopologyManager`
 [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
 is enabled, the kubelet can use topology hints when making resource allocation decisions.

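As context for the hunk above (not part of this commit), turning on that alpha gate is a kubelet-side change; a minimal sketch, where the chosen policy is an assumption and the flags shown are not a complete kubelet command line:

```shell
# Assumption: a kubelet from the release where TopologyManager is an alpha
# feature gate. The gate must be enabled explicitly, and a policy other than
# the default "none" chosen for topology hints to affect allocation decisions.
# Accepted policies: none, best-effort, restricted, single-numa-node.
kubelet \
  --feature-gates=TopologyManager=true \
  --topology-manager-policy=best-effort
  # ...remaining kubelet flags omitted
```
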
content/ko/docs/concepts/cluster-administration/controller-metrics.md

+1 -1
@@ -14,7 +14,7 @@ weight: 100
 ## What are controller manager metrics

 Controller manager metrics provide important insight into the performance and health of the controller manager.
-The metrics include common Go language runtime metrics such as go_routine count, and
+The metrics include common Go language runtime metrics such as go_routine count, and
 controller-specific metrics, such as etcd request latency or cloud provider (AWS, GCE, OpenStack) API latency, that can be
 used to gauge the health of a cluster.

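As context for the hunk above (not part of this commit), these metrics are served in Prometheus text format from the controller manager's metrics endpoint; a minimal sketch, assuming the insecure localhost port that was still the default around this release (newer versions serve only an authenticated secure port, 10257):

```shell
# Assumption: kube-controller-manager still exposes /metrics on the insecure
# localhost port 10252; run this on the control-plane host.
curl -s http://localhost:10252/metrics | head -n 20

# The Go runtime metrics mentioned above appear as, for example:
# go_goroutines 1234
```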