+
+# Featured Articles
+Here are some selected articles. For more go-zero best practices, follow the official WeChat account for the latest updates.
+* [Understand the Cloud-Native go-zero Microservice Framework in One Article](https://mp.weixin.qq.com/s/gszj3-fwfcof5Tt2Th4dFA)
+* [Still Hand-Writing Microservices? Try go-zero's Automatic Microservice Generation](https://mp.weixin.qq.com/s/Qvi-g3obgD_FVJ7CK3O56w)
+* [The Simplest Way to Write a Go Dockerfile, Bar None!](https://mp.weixin.qq.com/s/VLBiIbZStKhb7uth1ndgQQ)
+* [Reducing Service Response Time with MapReduce](https://mp.weixin.qq.com/s/yxXAIK1eC_X22DH4ssZSag)
+* [Microservice Overload Protection: Principles and Practice](https://mp.weixin.qq.com/s/CWzf6CY2R12Xd-rIYVvdPQ)
+* [The Simplest Way to Write K8s Deployment Files, Bar None!](https://mp.weixin.qq.com/s/1GOMxlI8ocOL3U_I2TKPzQ)
+* [How go-zero Handles Massive Scheduled/Delayed Tasks](https://mp.weixin.qq.com/s/CiZ5SpuT-VN8V9wil8_iGg)
+* [How go-zero Withstands Traffic Surges (Part 1)](https://mp.weixin.qq.com/s/xnJIm3asMncBfbtXo22sZw)
+* [Adaptive Load Shedding Design for Services](https://mp.weixin.qq.com/s/cgjCL59e3CDWhsxzwkuKBg)
\ No newline at end of file
diff --git a/doc/zrpc.md b/go-zero.dev/cn/zrpc.md
similarity index 96%
rename from doc/zrpc.md
rename to go-zero.dev/cn/zrpc.md
index 3a9c0fce..6eae59f8 100644
--- a/doc/zrpc.md
+++ b/go-zero.dev/cn/zrpc.md
@@ -2,7 +2,7 @@
# zRPC: an Enterprise-Grade RPC Framework
-The recently popular open source project [go-zero](https://github.com/tal-tech/go-zero) is a full-featured microservice framework covering both Web and RPC protocols, with various engineering practices built in. Today we will analyze its RPC component, [zRPC](https://github.com/tal-tech/go-zero/tree/master/zrpc).
+The recently popular open source project [go-zero](https://github.com/zeromicro/go-zero) is a full-featured microservice framework covering both Web and RPC protocols, with various engineering practices built in. Today we will analyze its RPC component, [zRPC](https://github.com/zeromicro/go-zero/tree/master/zrpc).
zRPC is built on top of gRPC, with built-in modules such as service registration, load balancing and interceptors, plus microservice governance features such as adaptive load shedding, adaptive circuit breaking and rate limiting. It is a simple, easy-to-use, production-ready enterprise-grade RPC framework.
@@ -209,7 +209,7 @@ K: multiplier (Google SRE recommends 2)
The aggressiveness of circuit breaking can be tuned via K: decreasing K makes the adaptive circuit-breaking algorithm more aggressive, while increasing K makes it less aggressive.
-The [breaker interceptor](https://github.com/tal-tech/go-zero/blob/master/zrpc/internal/clientinterceptors/breakerinterceptor.go) is defined as follows:
+The [breaker interceptor](https://github.com/zeromicro/go-zero/blob/master/zrpc/internal/clientinterceptors/breakerinterceptor.go) is defined as follows:
```go
func BreakerInterceptor(ctx context.Context, method string, req, reply interface{},
@@ -281,7 +281,7 @@ func (b *googleBreaker) doReq(req func() error, fallback func(err error) error,
Service monitoring is an important means of understanding a service's current running state and its trends. Monitoring relies on collecting service metrics; collecting them with Prometheus is the mainstream industry solution, and zRPC also uses Prometheus for metric collection.
-The [prometheus interceptor](https://github.com/tal-tech/go-zero/blob/master/zrpc/internal/serverinterceptors/prometheusinterceptor.go) is defined as follows:
+The [prometheus interceptor](https://github.com/zeromicro/go-zero/blob/master/zrpc/internal/serverinterceptors/prometheusinterceptor.go) is defined as follows:
This interceptor collects the service's monitoring metrics, mainly the latency and call errors of RPC methods, using Prometheus's Histogram and Counter data types.
diff --git a/go-zero.dev/en/README.md b/go-zero.dev/en/README.md
new file mode 100644
index 00000000..5ca30ffb
--- /dev/null
+++ b/go-zero.dev/en/README.md
@@ -0,0 +1,223 @@
+
+
+# go-zero
+
+[![Go](https://github.com/zeromicro/go-zero/workflows/Go/badge.svg?branch=master)](https://github.com/zeromicro/go-zero/actions)
+[![codecov](https://codecov.io/gh/tal-tech/go-zero/branch/master/graph/badge.svg)](https://codecov.io/gh/tal-tech/go-zero)
+[![Go Report Card](https://goreportcard.com/badge/github.com/tal-tech/go-zero)](https://goreportcard.com/report/github.com/tal-tech/go-zero)
+[![Release](https://img.shields.io/github/v/release/tal-tech/go-zero.svg?style=flat-square)](https://github.com/zeromicro/go-zero)
+[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+
+## 0. what is go-zero
+
+go-zero is a web and rpc framework with many engineering practices built in. It was born to ensure the stability of busy services through resilience design, and has been serving sites with tens of millions of users for years.
+
+go-zero includes a simple API description syntax and a code generation tool called `goctl`. You can generate Go, iOS, Android, Kotlin, Dart, TypeScript, and JavaScript code from `.api` files with `goctl`.
+
+Advantages of go-zero:
+
+* improves the stability of services with tens of millions of daily active users
+* built-in chained timeout control, concurrency control, rate limiting, adaptive circuit breaking, and adaptive load shedding, with no configuration needed
+* built-in middlewares that can also be integrated into your own frameworks
+* simple API syntax, with one command generating code for several languages
+* automatic validation of request parameters from clients
+* plenty of built-in microservice management and concurrency toolkits
+
+
+
+## 1. Background of go-zero
+
+At the beginning of 2018, we decided to re-design our system, moving from a monolithic architecture built with Java and MongoDB to a microservice architecture. After research and comparison, we chose:
+
+* Golang based
+ * great performance
+ * simple syntax
+ * proven engineering efficiency
+ * extreme deployment experience
+ * less server resource consumption
+* Self-designed microservice architecture
+  * I have rich experience in designing microservice architectures
+  * easy to locate problems
+ * easy to extend the features
+
+## 2. Design considerations on go-zero
+
+In designing the microservice architecture, we expected to ensure stability as well as productivity. From the very beginning, we followed these design principles:
+
+* keep it simple
+* high availability
+* stable on high concurrency
+* easy to extend
+* resilience design, failure-oriented programming
+* try our best to be friendly to business logic development, encapsulating the complexity
+* one thing, one way
+
+After almost half a year, we finished the migration from the monolithic system to the microservice system and deployed it in August 2018. The new system guaranteed both business growth and system stability.
+
+## 3. The implementation and features of go-zero
+
+go-zero is a web and rpc framework that integrates lots of engineering practices. Its main features are listed below:
+
+* powerful tool included, less code to write
+* simple interfaces
+* fully compatible with net/http
+* middlewares are supported, easy to extend
+* high performance
+* failure-oriented programming, resilience design
+* builtin service discovery, load balancing
+* builtin concurrency control, adaptive circuit breaker, adaptive load shedding, auto trigger, auto recover
+* auto validation of API request parameters
+* chained timeout control
+* auto management of data caching
+* call tracing, metrics and monitoring
+* protection against high concurrency
+
+As shown below, go-zero protects the system with multiple layers and mechanisms:
+
+![Resilience](https://raw.githubusercontent.com/tal-tech/zero-doc/main/doc/images/resilience-en.png)
+
+## 4. Future development plans of go-zero
+
+* auto generate an API mock server, making client debugging easier
+* auto generate simple integration tests for the server side directly from the .api files
+
+## 5. Installation
+
+Run the following command under your project:
+
+```shell
+go get -u github.com/tal-tech/go-zero
+```
+
+## 6. Quick Start
+
+0. Full examples can be checked out below:
+
+ [Rapid development of microservice systems](https://github.com/tal-tech/zero-doc/blob/main/doc/shorturl-en.md)
+
+ [Rapid development of microservice systems - multiple RPCs](https://github.com/tal-tech/zero-doc/blob/main/doc/bookstore-en.md)
+
+1. install goctl
+
+   `goctl` can be read as `go control`. `goctl` means we are not controlled by code; instead, we control it. The `go` inside does not stand for `golang`. From the very beginning, I expected it to improve our productivity and make our lives easier.
+
+ ```shell
+ GO111MODULE=on go get -u github.com/tal-tech/go-zero/tools/goctl
+ ```
+
+   Make sure `goctl` is executable.
+
+2. Create the API file, e.g. greet.api. You can install the goctl plugin for VS Code, which supports the api syntax.
+
+ ```go
+ type Request struct {
+ Name string `path:"name,options=you|me"` // parameters are auto validated
+ }
+
+ type Response struct {
+ Message string `json:"message"`
+ }
+
+ service greet-api {
+ @handler GreetHandler
+ get /greet/from/:name(Request) returns (Response);
+ }
+ ```
+
+   The .api files can also be generated by goctl, like below:
+
+ ```shell
+ goctl api -o greet.api
+ ```
+
+3. generate the go server side code
+
+ ```shell
+ goctl api go -api greet.api -dir greet
+ ```
+
+ the generated files look like:
+
+ ```
+ ├── greet
+ │ ├── etc
+ │ │ └── greet-api.yaml // configuration file
+ │ ├── greet.go // main file
+ │ └── internal
+ │ ├── config
+ │ │ └── config.go // configuration definition
+ │ ├── handler
+ │ │ ├── greethandler.go // get/put/post/delete routes are defined here
+ │ │ └── routes.go // routes list
+ │ ├── logic
+ │ │ └── greetlogic.go // request logic can be written here
+ │ ├── svc
+ │ │ └── servicecontext.go // service context, mysql/redis can be passed in here
+ │ └── types
+ │ └── types.go // request/response defined here
+ └── greet.api // api description file
+ ```
+
+ the generated code can be run directly:
+
+ ```shell
+ cd greet
+ go mod init
+ go mod tidy
+ go run greet.go -f etc/greet-api.yaml
+ ```
+
+   By default, it listens on port 8888; this can be changed in the configuration file.
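
   As an illustration (the values below are assumptions for demonstration, not necessarily the generated defaults), the port can be changed in `etc/greet-api.yaml` like:

   ```yaml
   Name: greet-api
   Host: 0.0.0.0
   Port: 8080
   ```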
+
+ you can check it by curl:
+
+ ```shell
+ curl -i http://localhost:8888/greet/from/you
+ ```
+
+ the response looks like:
+
+ ```http
+ HTTP/1.1 200 OK
+ Date: Sun, 30 Aug 2020 15:32:35 GMT
+ Content-Length: 0
+ ```
+
+4. Write the business logic code
+
+   * the dependencies can be passed into the logic within servicecontext.go, like mysql, redis etc.
+ * add the logic code in logic package according to .api file
+
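   As a minimal sketch of step 4 (not the generated code — the real GreetLogic also embeds a context and logx.Logger; the type names here mirror internal/types/types.go), the business logic might look like:

   ```go
   package main

   import "fmt"

   // Request mirrors the struct generated into internal/types/types.go.
   type Request struct {
       Name string `path:"name,options=you|me"`
   }

   // Response mirrors the generated response struct.
   type Response struct {
       Message string `json:"message"`
   }

   // GreetLogic is a simplified stand-in for internal/logic/greetlogic.go.
   type GreetLogic struct{}

   // Greet implements the handler body for the /greet/from/:name route.
   func (l *GreetLogic) Greet(req Request) (Response, error) {
       return Response{Message: fmt.Sprintf("hello, %s", req.Name)}, nil
   }

   func main() {
       var l GreetLogic
       resp, _ := l.Greet(Request{Name: "you"})
       fmt.Println(resp.Message) // prints "hello, you"
   }
   ```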
+5. Generate code like Java, TypeScript, Dart, JavaScript etc. just from the api file
+
+ ```shell
+ goctl api java -api greet.api -dir greet
+ goctl api dart -api greet.api -dir greet
+ ...
+ ```
+
+## 7. Benchmark
+
+![benchmark](https://raw.githubusercontent.com/tal-tech/zero-doc/main/doc/images/benchmark.png)
+
+[Checkout the test code](https://github.com/smallnest/go-web-framework-benchmark)
+
+## 8. Documents (in progress)
+
+* [Rapid development of microservice systems](https://github.com/tal-tech/zero-doc/blob/main/doc/shorturl-en.md)
+* [Rapid development of microservice systems - multiple RPCs](https://github.com/tal-tech/zero-doc/blob/main/docs/zero/bookstore-en.md)
+* [Examples](https://github.com/zeromicro/zero-examples)
+
+## 9. Important notes
+
+* Use grpc 1.29.1, because the etcd lib doesn’t support later versions.
+
+ `google.golang.org/grpc v1.29.1`
+
+* For protobuf compatibility, use `protoc-gen-go@v1.3.2`.
+
+  `go get -u github.com/golang/protobuf/protoc-gen-go@v1.3.2`
+
+## 10. Chat group
+
+Join the chat via https://join.slack.com/t/go-zeroworkspace/shared_invite/zt-m39xssxc-kgIqERa7aVsujKNj~XuPKg
diff --git a/go-zero.dev/en/about-us.md b/go-zero.dev/en/about-us.md
new file mode 100644
index 00000000..b3860771
--- /dev/null
+++ b/go-zero.dev/en/about-us.md
@@ -0,0 +1,21 @@
+# About Us
+
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical or semantic errors, or unclear descriptions, please submit a [PR](doc-contibute.md)
+
+## Go-Zero
+go-zero is a web and rpc framework that integrates various engineering practices. Through its resilient design, it guarantees the stability of services under heavy concurrency, and it has been thoroughly battle-tested in production.
+
+go-zero contains a minimalist API definition and generation tool, goctl, which can generate Go, iOS, Android, Kotlin, Dart, TypeScript and JavaScript code from a defined api file with one click, and the generated code can run directly.
+
+## Go-Zero's Author
+[kevwan](https://github.com/kevwan)
+
+**kevwan** is the head of R&D for XiaoHeiBan and a senior technical expert at TAL. He has 14 years of R&D team management experience, 16 years of architecture design experience, and 20 years of hands-on engineering experience. He has been responsible for the architecture design of many large-scale projects, has co-founded several startups (with acquisitions), and has been a speaker at the Gopher China Conference and the Tencent Cloud Developer Conference.
+
+## Go-Zero Members
+As of February 2021, go-zero has 30 team developers and 50+ community contributors.
+
+## Go-Zero Community
+We currently have more than 3,000 community members. Here, you can discuss any go-zero technology, give feedback on issues, get the latest go-zero news, and learn from the technical experience shared by community experts every day.
+
diff --git a/go-zero.dev/en/api-coding.md b/go-zero.dev/en/api-coding.md
new file mode 100644
index 00000000..2215fe6c
--- /dev/null
+++ b/go-zero.dev/en/api-coding.md
@@ -0,0 +1,57 @@
+# API File Coding
+
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical or semantic errors, or unclear descriptions, please submit a [PR](doc-contibute.md)
+
+## Create file
+```shell
+$ vim service/user/cmd/api/user.api
+```
+```text
+type (
+ LoginReq {
+ Username string `json:"username"`
+ Password string `json:"password"`
+ }
+
+ LoginReply {
+ Id int64 `json:"id"`
+ Name string `json:"name"`
+ Gender string `json:"gender"`
+ AccessToken string `json:"accessToken"`
+ AccessExpire int64 `json:"accessExpire"`
+ RefreshAfter int64 `json:"refreshAfter"`
+ }
+)
+
+service user-api {
+ @handler login
+ post /user/login (LoginReq) returns (LoginReply)
+}
+```
+## Generate api service
+### By goctl executable file
+
+```shell
+$ cd book/service/user/cmd/api
+$ goctl api go -api user.api -dir .
+```
+```text
+Done.
+```
+
+### By Intellij Plugin
+
+Right-click on the `user.api` file, then click `New`->`Go Zero`->`Api Code` and select the target directory, i.e. where the api source code should be stored (by default, the directory where user.api is located). Click OK to finish.
+
+![ApiGeneration](https://zeromicro.github.io/go-zero-pages/resource/goctl-api.png)
+![ApiGenerationDirectorySelection](https://zeromicro.github.io/go-zero-pages/resource/goctl-api-select.png)
+
+### By Keymap
+
+Open user.api, enter the editing area, and use the shortcut `Command+N` (macOS) or `alt+insert` (Windows). Select `Api Code`, choose the target directory in the pop-up window, and click OK.
+
+# Guess you want
+* [API IDL](api-grammar.md)
+* [API Commands](goctl-api.md)
+* [API Directory Structure](api-dir.md)
\ No newline at end of file
diff --git a/go-zero.dev/en/api-config.md b/go-zero.dev/en/api-config.md
new file mode 100644
index 00000000..7959fd5f
--- /dev/null
+++ b/go-zero.dev/en/api-config.md
@@ -0,0 +1,112 @@
+# API configuration
+
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical or semantic errors, or unclear descriptions, please submit a [PR](doc-contibute.md)
+
+The api configuration controls various functions in the api service, including but not limited to the service listening address, port, environment configuration and log configuration. Let's take a simple configuration and see what the common api configuration items do.
+
+## Configuration instructions
+Looking at the yaml configuration, you will find many parameters that do not line up with the config struct. This is because many of the config fields are tagged with `optional` or `default`. For `optional` fields, you can decide whether to set them according to your needs. For `default` fields, if you think the default value is good enough, you don't need to set them; generally, the `default` values rarely need modification and can be considered best-practice values.
+
+### Config
+
+```go
+type Config struct{
+ rest.RestConf // rest api configuration
+ Auth struct { // jwt authentication configuration
+ AccessSecret string // jwt key
+ AccessExpire int64 // jwt expire, unit: second
+ }
+ Mysql struct { // database configuration, in addition to mysql, there may be other databases such as mongo
+        DataSource string // mysql datasource, which satisfies the format user:password@tcp(ip:port)/db?queries
+ }
+ CacheRedis cache.CacheConf // redis cache
+ UserRpc zrpc.RpcClientConf // rpc client configuration
+}
+```
+
+### rest.RestConf
+The basic configuration of the api service, including the listening address and port, certificate configuration, rate limiting, circuit breaking and timeout parameters. Expanding it, we can see:
+```go
+service.ServiceConf // service configuration
+Host string `json:",default=0.0.0.0"` // http listening ip, default 0.0.0.0
+Port int // http listening port, required
+CertFile string `json:",optional"` // https certificate file, optional
+KeyFile string `json:",optional"` // https private key file, optional
+Verbose bool `json:",optional"` // whether to print detailed http request log
+MaxConns int `json:",default=10000"` // http can accept the maximum number of requests at the same time (current limit), the default is 10000
+MaxBytes int64 `json:",default=1048576,range=[0:8388608]"` // the maximum request Content-Length http accepts, default 1048576; the configured value must be between 0 and 8388608
+// milliseconds
+Timeout int64 `json:",default=3000"` // timeout duration control, unit: milliseconds, default 3000
+CpuThreshold int64 `json:",default=900,range=[0:1000]"` // CPU usage threshold (in thousandths) that triggers load shedding, default 900, allowed range 0 to 1000
+Signature SignatureConf `json:",optional"` // signature configuration
+```
+
+### service.ServiceConf
+```go
+type ServiceConf struct {
+ Name string // service name
+ Log logx.LogConf // log configuration
+    Mode string `json:",default=pro,options=dev|test|pre|pro"` // service environment: dev-development, test-testing, pre-pre-release, pro-production
+    MetricsUrl string `json:",optional"` // metrics reporting endpoint address; this address needs to accept POST requests with a json body
+ Prometheus prometheus.Config `json:",optional"` // prometheus configuration
+}
+```
+
+### logx.LogConf
+```go
+type LogConf struct {
+ ServiceName string `json:",optional"` // service name
+    Mode string `json:",default=console,options=console|file|volume"` // log mode: console-output to console, file-output to a file on the current server (container), volume-output to a mounted docker volume
+ Path string `json:",default=logs"` // Log storage path
+ Level string `json:",default=info,options=info|error|severe"` // Log level
+ Compress bool `json:",optional"` // whether to enable gzip compression
+ KeepDays int `json:",optional"` // log retention days
+ StackCooldownMillis int `json:",default=100"` // log write interval
+}
+```
+
+### prometheus.Config
+```go
+type Config struct {
+ Host string `json:",optional"` // prometheus monitor host
+ Port int `json:",default=9101"` // prometheus listening port
+ Path string `json:",default=/metrics"` // report address
+}
+```
+
+### SignatureConf
+```go
+SignatureConf struct {
+    Strict bool `json:",default=false"` // whether strict mode is enabled; if so, PrivateKeys is required
+ Expiry time.Duration `json:",default=1h"` // Validity period, default is 1 hour
+ PrivateKeys []PrivateKeyConf // Signing key related configuration
+}
+```
+
+### PrivateKeyConf
+```go
+PrivateKeyConf struct {
+ Fingerprint string // Fingerprint configuration
+ KeyFile string // Key configuration
+}
+```
+
+### cache.CacheConf
+```go
+ClusterConf []NodeConf
+
+NodeConf struct {
+ redis.RedisConf
+ Weight int `json:",default=100"` // Weights
+}
+```
+
+### redis.RedisConf
+```go
+RedisConf struct {
+ Host string // redis address
+ Type string `json:",default=node,options=node|cluster"` // redis type
+ Pass string `json:",optional"` // redis password
+}
+```
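
As an illustrative sketch (all values below are placeholder assumptions, not required settings), a yaml file matching the Config struct above might look like:

```yaml
Name: user-api
Host: 0.0.0.0
Port: 8888
Auth:
  AccessSecret: your-access-secret
  AccessExpire: 86400
Mysql:
  DataSource: user:password@tcp(127.0.0.1:3306)/mydb?charset=utf8
CacheRedis:
  - Host: 127.0.0.1:6379
    Type: node
    Weight: 100
```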
diff --git a/go-zero.dev/en/api-dir.md b/go-zero.dev/en/api-dir.md
new file mode 100644
index 00000000..cdbe1f0d
--- /dev/null
+++ b/go-zero.dev/en/api-dir.md
@@ -0,0 +1,27 @@
+# API directory introduction
+
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical or semantic errors, or unclear descriptions, please submit a [PR](doc-contibute.md)
+
+```text
+.
+├── etc
+│ └── greet-api.yaml // yaml configuration file
+├── go.mod // go module file
+├── greet.api // api interface description language file
+├── greet.go // main function entry
+└── internal
+ ├── config
+ │ └── config.go // configuration declaration type
+ ├── handler // routing and handler forwarding
+ │ ├── greethandler.go
+ │ └── routes.go
+ ├── logic // business logic
+ │ └── greetlogic.go
+ ├── middleware // middleware file
+ │ └── greetmiddleware.go
+ ├── svc // the resource pool that logic depends on
+ │ └── servicecontext.go
+ └── types // The struct of request and response is automatically generated according to the api, and editing is not recommended
+ └── types.go
+```
\ No newline at end of file
diff --git a/go-zero.dev/en/api-grammar.md b/go-zero.dev/en/api-grammar.md
new file mode 100644
index 00000000..64676cf6
--- /dev/null
+++ b/go-zero.dev/en/api-grammar.md
@@ -0,0 +1,743 @@
+# API syntax
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical or semantic errors, or unclear descriptions, please submit a [PR](doc-contibute.md)
+
+## API IDL example
+
+```go
+/**
+ * api syntax example and syntax description
+ */
+
+// api syntax version
+syntax = "v1"
+
+// import literal
+import "foo.api"
+
+// import group
+import (
+ "bar.api"
+ "foo/bar.api"
+)
+info(
+ author: "anqiansong"
+ date: "2020-01-08"
+ desc: "api syntax example and syntax description"
+)
+
+// type literal
+
+type Foo{
+ Foo int `json:"foo"`
+}
+
+// type group
+
+type(
+ Bar{
+ Bar int `json:"bar"`
+ }
+)
+
+// service block
+@server(
+ jwt: Auth
+ group: foo
+)
+service foo-api{
+ @doc "foo"
+ @handler foo
+ post /foo (Foo) returns (Bar)
+}
+```
+
+## API syntax structure
+
+* syntax statement
+* import syntax block
+* info syntax block
+* type syntax block
+* service syntax block
+* hidden channel
+
+> [!TIP]
+> In the above grammatical structure, each syntax block can grammatically be declared anywhere in the .api file.
+> However, to improve readability, we suggest declaring them in the order above, because a strict mode may be used in the future to enforce the order of syntax blocks.
+
+### syntax statement
+
+syntax is a newly added syntax structure. Its introduction solves the following problems:
+
+* quickly locating problematic syntax structures for a given api version
+* performing syntax analysis per version
+* preventing major api syntax upgrades from breaking backward compatibility
+
+> [!WARNING]
+> The imported api must be consistent with the syntax version of the main api.
+
+**Grammar definition**
+
+```antlrv4
+'syntax'={checkVersion(p)}STRING
+```
+
+**Grammar description**
+
+syntax: Fixed token, marking the beginning of a syntax structure
+
+checkVersion: Custom go method to detect whether `STRING` is a legal version number. The current detection logic is that STRING must match the regular expression `(?m)"v[1-9][0-9]*"`.
+
+STRING: A string of English double quotes, such as "v1"
+
+An api grammar file can have at most one syntax statement. If there is no syntax statement, the default version is `v1`.
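
The version check described above can be sketched as follows (a sketch only — the exact go-zero implementation may differ; the regular expression follows the grammar description above):

```go
package main

import (
	"fmt"
	"regexp"
)

// versionRe follows the rule described above: a double-quoted value
// such as "v1" or "v2"; "v0", "V1" and unquoted values are rejected.
var versionRe = regexp.MustCompile(`^"v[1-9][0-9]*"$`)

// checkVersion reports whether s is a legal api syntax version literal.
func checkVersion(s string) bool {
	return versionRe.MatchString(s)
}

func main() {
	for _, v := range []string{`"v1"`, `"v2"`, `"v0"`, `"V1"`, "v1"} {
		fmt.Printf("%s -> %v\n", v, checkVersion(v))
	}
}
```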
+
+**Examples of correct syntax** ✅
+
+eg1: Irregular writing
+
+```api
+syntax="v1"
+```
+
+eg2: Standard writing (recommended)
+
+```api
+syntax = "v2"
+```
+
+**Examples of incorrect syntax** ❌
+
+eg1:
+
+```api
+syntax = "v0"
+```
+
+eg2:
+
+```api
+syntax = v1
+```
+
+eg3:
+
+```api
+syntax = "V1"
+```
+
+## Import syntax block
+
+As the business scale increases, more and more structures and services are defined in the api.
+Putting all the grammatical descriptions in a single api file is a problem that greatly increases the difficulty of reading and maintenance.
+The import syntax block helps us solve this: by splitting the api into files declared according to certain rules,
+we can reduce the difficulty of reading and maintenance.
+
+> [!WARNING]
+> Import here does not involve package declarations as in golang; it is just the inclusion of a file path. After parsing, all the declarations are gathered into one spec.Spec.
+> You cannot import the same path more than once, otherwise a parsing error occurs.
+
+**Grammar definition**
+
+```antlrv4
+'import' {checkImportValue(p)}STRING
+|'import' '(' ({checkImportValue(p)}STRING)+ ')'
+```
+
+**Grammar description**
+
+import: fixed token, marking the beginning of an import syntax
+
+checkImportValue: Custom go method to detect whether `STRING` is a legal file path. The current detection logic is that STRING must match the regular expression `(?m)"(?:[a-zA-Z0-9_/-])+\.api"`.
+
+STRING: A string of English double quotes, such as "foo.api"
+
+**Examples of correct syntax** ✅
+
+eg:
+
+```api
+import "foo.api"
+import "foo/bar.api"
+
+import(
+ "bar.api"
+ "foo/bar/foo.api"
+)
+```
+
+**Examples of incorrect syntax** ❌
+
+eg:
+
+```api
+import foo.api
+import "foo.txt"
+import (
+ bar.api
+ bar.api
+)
+```
+
+## info syntax block
+
+The info syntax block is a syntax body containing multiple key-value pairs.
+Its function is equivalent to a description of the api service. The parser maps it into spec.Spec as the meta element
+to be carried when translating into other languages (golang, java, etc.). If you only need a description of the current api,
+without considering translation into other languages, you can use simple multi-line comments or java-style documentation comments. For comment descriptions, please refer to the hidden channels section below.
+
+> [!WARNING]
+> Duplicate keys cannot be used, and each api file can have at most one info syntax block.
+
+**Grammar definition**
+
+```antlrv4
+'info' '(' (ID {checkKeyValue(p)}VALUE)+ ')'
+```
+
+**Grammar description**
+
+info: fixed token, marking the beginning of an info syntax block
+
+checkKeyValue: Custom go method to check whether `VALUE` is a legal value.
+
+VALUE: The value corresponding to the key. On a single line, it can be any character except `\r`, `\n` and `"`. For multi-line values, please wrap them with `""`; in fact, it is strongly recommended to wrap all values with `""`.
+
+**Examples of correct syntax** ✅
+
+eg1:Irregular writing
+
+```api
+info(
+foo: foo value
+bar:"bar value"
+ desc:"long long long long
+long long text"
+)
+```
+
+eg2:Standard writing (recommended)
+
+```api
+info(
+ foo: "foo value"
+ bar: "bar value"
+ desc: "long long long long long long text"
+)
+```
+
+**Examples of incorrect syntax** ❌
+
+eg1:No key-value
+
+```api
+info()
+```
+
+eg2:Does not contain colon
+
+```api
+info(
+ foo value
+)
+```
+
+eg3:key-value does not wrap
+
+```api
+info(foo:"value")
+```
+
+eg4:No key
+
+```api
+info(
+ : "value"
+)
+```
+
+eg5:Illegal key
+
+```api
+info(
+ 12: "value"
+)
+```
+
+eg6:Remove the old version of multi-line syntax
+
+```api
+info(
+ foo: >
+ some text
+ <
+)
+```
+
+## type syntax block
+
+In an api service, we need structures (classes) as carriers of the request body and the response body,
+so we need to declare some structures to accomplish this. The type syntax block evolved from golang's type declaration,
+and it retains some characteristics of golang types. The following golang features apply:
+
+* Keeps golang's built-in data types: `bool`, `int`, `int8`, `int16`, `int32`, `int64`, `uint`, `uint8`, `uint16`, `uint32`, `uint64`, `uintptr`, `float32`, `float64`, `complex64`, `complex128`, `string`, `byte`, `rune`
+* Compatible with golang struct style declaration
+* Keep golang keywords
+
+> [!WARNING]
+> * Does not support alias
+> * Does not support `time.Time` data type
+> * Structure name, field name, cannot be a golang keyword
+
+**Grammar definition**
+
+Since it is similar to golang, it will not be explained in detail. Please refer to the typeSpec definition in [ApiParser.g4](https://github.com/zeromicro/go-zero/blob/master/tools/goctl/api/parser/g4/ApiParser.g4) for the specific syntax definition.
+
+**Grammar description**
+
+Refer to golang writing
+
+**Examples of correct syntax** ✅
+
+eg1:Irregular writing
+
+```api
+type Foo struct{
+ Id int `path:"id"` // ①
+ Foo int `json:"foo"`
+}
+
+type Bar struct{
+ // Non-exported field
+ bar int `form:"bar"`
+}
+
+type(
+ // Non-derived structure
+ fooBar struct{
+ FooBar int
+ }
+)
+```
+
+eg2: Standard writing (recommended)
+
+```api
+type Foo{
+ Id int `path:"id"`
+ Foo int `json:"foo"`
+}
+
+type Bar{
+ Bar int `form:"bar"`
+}
+
+type(
+ FooBar{
+ FooBar int
+ }
+)
+```
+
+**Examples of incorrect syntax** ❌
+
+eg
+
+```api
+type Gender int // not support
+
+// Non-struct token
+type Foo structure{
+ CreateTime time.Time // Does not support time.Time
+}
+
+// golang keyword var
+type var{}
+
+type Foo{
+ // golang keyword interface
+ Foo interface
+}
+
+
+type Foo{
+ foo int
+ // The map key must have the built-in data type of golang
+ m map[Bar]string
+}
+```
+
+> [!NOTE] ①
+> The tag definition is the same as the json tag syntax in golang. In addition to the json tag, go-zero also provides some other tags to describe the fields,
+> See the table below for details.
+
+* tag table
+
+| tag key | Description | Provider | Effective Coverage | Example |
+| ------- | ----------- | -------- | ------------------ | ------- |
+| json | json serialization tag | golang | request, response | `json:"fooo"` |
+| path | Routing path, such as `/foo/:id` | go-zero | request | `path:"id"` |
+| form | Marks that the request body is a form (POST method) or a query (GET method, e.g. `/search?name=keyword`) | go-zero | request | `form:"name"` |
+
+* tag modifier
+
+Common parameter validation descriptions:
+
+| tag key | Description | Provider | Effective Coverage | Example |
+| ------- | ----------- | -------- | ------------------ | ------- |
+| optional | Defines the current field as an optional parameter | go-zero | request | `json:"name,optional"` |
+| options | Defines the enumeration values of the current field, separated by a vertical bar \| | go-zero | request | `json:"gender,options=male"` |
+| default | Defines the default value of the current field | go-zero | request | `json:"gender,default=male"` |
+| range | Defines the value range of the current field | go-zero | request | `json:"age,range=[0:120]"` |
+
+> [!TIP]
+> The tag modifier must be separated from the tag value by a comma.
+
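As a combined illustration (this request type and its fields are hypothetical, not taken from the grammar file), several tag modifiers can appear together:

```api
type CreateUserReq {
    Name   string `json:"name"`                       // required by default
    Gender string `json:"gender,options=male|female"` // restricted to enumerated values
    Age    int    `json:"age,optional,range=[0:120]"` // optional and bounded
}
```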
+## service syntax block
+
+The service syntax block is used to define api services, including service name, service metadata, middleware declaration, routing, handler, etc.
+
+> [!WARNING]
+> * The service name of the main api and the imported api must be the same, and there must be no ambiguity in the service name.
+> * Handler names cannot be repeated.
+> * Route names (request method + request path) cannot be repeated.
+> * The request body must be declared as a normal (non-pointer) struct. The response body is handled with some forward compatibility; please refer to the description below for details.
+
+**Grammar definition**
+
+```antlrv4
+serviceSpec: atServer? serviceApi;
+atServer: '@server' lp='(' kvLit+ rp=')';
+serviceApi: {match(p,"service")}serviceToken=ID serviceName lbrace='{' serviceRoute* rbrace='}';
+serviceRoute: atDoc? (atServer|atHandler) route;
+atDoc: '@doc' lp='('? ((kvLit+)|STRING) rp=')'?;
+atHandler: '@handler' ID;
+route: {checkHttpMethod(p)}httpMethod=ID path request=body? returnToken=ID? response=replybody?;
+body: lp='(' (ID)? rp=')';
+replybody: lp='(' dataType? rp=')';
+// kv
+kvLit: key=ID {checkKeyValue(p)}value=LINE_VALUE;
+
+serviceName: (ID '-'?)+;
+path: (('/' (ID ('-' ID)*))|('/:' (ID ('-' ID)?)))+;
+```
+
+**Grammar description**
+
+serviceSpec: Contains an optional syntax block `atServer` and `serviceApi` syntax block, which follow the sequence mode (the service must be written in order, otherwise it will be parsed incorrectly)
+
+atServer: Optional syntax block that defines server metadata as a key-value structure. `@server` marks the beginning of this server syntax block. It can be used to describe a serviceApi or route syntax block, and it has some special keys that need attention when describing different syntax blocks; see **atServerKey Key Description**.
+
+serviceApi: Contains one or more `serviceRoute` syntax blocks
+
+serviceRoute: Contains `atDoc`, handler and `route` in sequence mode
+
+atDoc: Optional syntax block, a key-value description of a route, which will be passed to the spec.Spec structure after parsing. If you don't care about passing it to spec.Spec, it is recommended to use a single-line comment instead.
+
+handler: A description of the handler layer of a route. The handler name can be specified through the `handler` key in atServer, or defined directly with the atHandler syntax block
+
+atHandler: The fixed token `@handler`, followed by a value matching the regular expression `[_a-zA-Z][a-zA-Z_-]`, used to declare a handler name
+
+route: Routing consists of `httpMethod`, `path`, optional `request`, optional `response`, and `httpMethod` must be lowercase.
+
+body: api request body grammar definition: an optional ID value wrapped in `()`
+
+replyBody: api response body grammar definition: a struct wrapped in `()`, or ~~an array (kept for forward compatibility and likely to be dropped in the future; it is strongly recommended to wrap the response in a struct instead of using an array directly as the response body)~~
+
+kvLit: Same as info key-value
+
+serviceName: One or more ID values joined by `-`
+
+path: The api request path must start with `/` or `/:` and must not end with `/`. The middle segments may contain a single ID or multiple IDs joined by `-`
+
+**atServerKey Key Description**
+
+When modifying the service
+
+|key|Description|Example|
+|---|---|---|
+|jwt|Declare that all routes under the current service require jwt authentication, and code containing jwt logic will be automatically generated|jwt: Auth|
+|group|Declare the current service or routing file group|group: login|
+|middleware|Declare the middleware that the current service needs to enable|middleware: AuthMiddleware|
+
+
+When modifying the route
+
+|key|Description|Example|
+|---|---|---|
+|handler|Declare a handler|-|
+
+
+**Examples of correct syntax** ✅
+
+eg1: Loose style
+
+```api
+@server(
+ jwt: Auth
+ group: foo
+ middleware: AuthMiddleware
+)
+service foo-api{
+ @doc(
+ summary: foo
+ )
+ @server(
+ handler: foo
+ )
+ // Non-exported body
+ post /foo/:id (foo) returns (bar)
+
+ @doc "bar"
+ @handler bar
+ post /bar returns ([]int)// Array is not recommended as response body
+
+ @handler fooBar
+ post /foo/bar (Foo) returns // You can omit 'returns'
+}
+```
+
+eg2: Standard style (recommended)
+
+```api
+@server(
+ jwt: Auth
+ group: foo
+ middleware: AuthMiddleware
+)
+service foo-api{
+ @doc "foo"
+ @handler foo
+ post /foo/:id (Foo) returns (Bar)
+}
+
+service foo-api{
+ @handler ping
+ get /ping
+
+ @doc "foo"
+ @handler bar
+ post /bar/:id (Foo)
+}
+
+```
+
+**Examples of incorrect syntax** ❌
+
+```api
+// Empty server syntax block is not supported
+@server(
+)
+// Empty service syntax block is not supported
+service foo-api{
+}
+
+service foo-api{
+ @doc kkkk // The short version of the doc must be enclosed in English double quotation marks
+ @handler foo
+ post /foo
+
+ @handler foo // Duplicate handler
+ post /bar
+
+ @handler fooBar
+ post /bar // Duplicate routing
+
+ // @handler and @doc are in the wrong order
+ @handler someHandler
+ @doc "some doc"
+ post /some/path
+
+ // handler is missing
+ post /some/path/:id
+
+ @handler reqTest
+ post /foo/req (*Foo) // Data types other than ordinary structures are not supported as the request body
+
+ @handler replyTest
+ post /foo/reply returns (*Foo) // Does not support data types other than ordinary structures and arrays (forward compatibility, later considered to be discarded) as response bodies
+}
+```
+
+## Hidden channel
+
+Hidden channels currently consist of whitespace, newlines, and comments. Here we only discuss comments, since whitespace and newlines are not currently used for anything.
+
+### Single line comment
+
+**Grammar definition**
+
+```antlrv4
+'//' ~[\r\n]*
+```
+
+**Grammar description**
+It can be known from the grammatical definition that single-line comments must start with `//`, and the content cannot contain newline characters
+
+**Examples of correct syntax** ✅
+
+```api
+// doc
+// comment
+```
+
+**Examples of incorrect syntax** ❌
+
+```api
+// break
+line comments
+```
+
+### Java-style documentation comments
+
+**Grammar definition**
+
+```antlrv4
+'/*' .*? '*/'
+```
+
+**Grammar description**
+
+It can be seen from the grammar definition that a documentation comment must start with `/*` and end with the first `*/`.
+
+**Examples of correct syntax** ✅
+
+```api
+/**
+ * java-style doc
+ */
+```
+
+**Examples of incorrect syntax** ❌
+
+```api
+/*
+ * java-style doc */
+ */
+```
+
+## Doc&Comment
+
+If you want to get the doc or comment of a certain element, how do you define it?
+
+**Doc**
+
+We stipulate that all comments (a single line or multiple lines) located between the line after the previous syntax block (non-hidden-channel content) and the first element of the current syntax block are the doc,
+and the original `//`, `/*`, `*/` marks are retained.
+
+**Comment**
+
+We stipulate that a comment block (a single line or multiple lines) starting on the same line as the last element of the current syntax block is the comment,
+and the original `//`, `/*`, `*/` marks are retained.
+
+Support for **Doc** and **Comment** by syntax block:
+
+|Syntax block|Parent Syntax Block|Doc|Comment|
+|---|---|---|---|
+|syntaxLit|api|✅|✅|
+|kvLit|infoSpec|✅|✅|
+|importLit|importSpec|✅|✅|
+|typeLit|api|✅|❌|
+|typeLit|typeBlock|✅|❌|
+|field|typeLit|✅|✅|
+|key-value|atServer|✅|✅|
+|atHandler|serviceRoute|✅|✅|
+|route|serviceRoute|✅|✅|
+
+The following example shows how doc and comment are written for the corresponding syntax blocks:
+
+```api
+// syntaxLit doc
+syntax = "v1" // syntaxLit comment
+
+info(
+ // kvLit doc
+ author: songmeizi // kvLit comment
+)
+
+// typeLit doc
+type Foo {}
+
+type(
+ // typeLit doc
+ Bar{}
+
+ FooBar{
+        // field doc
+        Name int // field comment
+ }
+)
+
+@server(
+ /**
+ * kvLit doc
+ * Enable jwt authentication
+ */
+ jwt: Auth /**kvLit comment*/
+)
+service foo-api{
+ // atHandler doc
+ @handler foo //atHandler comment
+
+ /*
+ * Route doc
+ * Post request
+ * Route path: foo
+ * Request body: Foo
+ * Response body: Foo
+ */
+ post /foo (Foo) returns (Foo) // route comment
+}
+```
diff --git a/go-zero.dev/en/bloom.md b/go-zero.dev/en/bloom.md
new file mode 100644
index 00000000..e7b17a2d
--- /dev/null
+++ b/go-zero.dev/en/bloom.md
@@ -0,0 +1,91 @@
+# bloom
+
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+The go-zero microservice framework provides many out-of-the-box tools.
+Good tools can not only improve the performance of the service,
+but also improve the robustness of the code to avoid errors,
+and realize the uniformity of the code style for others to read, etc.
+A series of articles will respectively introduce the use of tools in the go-zero framework and their implementation principles.
+
+## Bloom filter [bloom](https://github.com/zeromicro/go-zero/blob/master/core/bloom/bloom.go)
+When doing server-side development, you have probably heard of Bloom filters. A Bloom filter can judge whether an element is in a collection, with a certain false-positive rate and no straightforward support for deletion.
+Typical usage scenarios include preventing cache penetration (and malicious attacks), spam filtering, cache digests, and model detectors,
+as well as judging whether a row of data exists in order to reduce disk access and improve service performance.
+go-zero provides a simple Bloom filter wrapper in `core/bloom`; its basic usage is as follows.
+
+```go
+// Initialize redisBitSet
+store := redis.NewRedis("redis address", redis.NodeType)
+// Declare a bitSet with key "test_key" and a size of 1024 bits
+bitSet := newRedisBitSet(store, "test_key", 1024)
+// Determine whether the 0th bit exists
+isSetBefore, err := bitSet.check([]uint{0})
+
+// Set the 512th bit to 1
+err = bitSet.set([]uint{512})
+// Expires in 3600 seconds
+err = bitSet.expire(3600)
+
+// Delete the bitSet
+err = bitSet.del()
+```
+
+
+The above briefly introduced the use of the most basic redis bitset. The following is the actual bloom implementation.
+
+Hash an element to its bit positions
+
+```go
+// The element is hashed 14 times (const maps=14), and byte (0-13) is appended to the element each time, and then the hash is performed.
+// Take the modulo of locations[0-13], and finally return to locations.
+func (f *BloomFilter) getLocations(data []byte) []uint {
+ locations := make([]uint, maps)
+ for i := uint(0); i < maps; i++ {
+ hashValue := hash.Hash(append(data, byte(i)))
+ locations[i] = uint(hashValue % uint64(f.bits))
+ }
+
+ return locations
+}
+```
+
+
+Add elements to bloom
+```go
+// We can find that the add method uses the set methods of getLocations and bitSet.
+// We hash the elements into uint slices of length 14, and then perform the set operation and store them in the bitSet of redis.
+func (f *BloomFilter) Add(data []byte) error {
+ locations := f.getLocations(data)
+ err := f.bitSet.set(locations)
+ if err != nil {
+ return err
+ }
+ return nil
+}
+```
+
+
+Check if there is an element in bloom
+```go
+// We can find that the Exists method uses the check method of getLocations and bitSet
+// We hash the elements into uint slices of length 14, and then perform bitSet check verification, return true if it exists, false if it does not exist or if the check fails
+func (f *BloomFilter) Exists(data []byte) (bool, error) {
+ locations := f.getLocations(data)
+ isSet, err := f.bitSet.check(locations)
+ if err != nil {
+ return false, err
+ }
+ if !isSet {
+ return false, nil
+ }
+
+ return true, nil
+}
+```
+
+This section mainly introduces the `core.bloom` tool in the go-zero framework, which is very practical in actual projects. Good use of tools is very helpful to improve service performance and development efficiency. I hope this article can bring you some gains.
\ No newline at end of file
diff --git a/go-zero.dev/en/buiness-cache.md b/go-zero.dev/en/buiness-cache.md
new file mode 100644
index 00000000..19a239db
--- /dev/null
+++ b/go-zero.dev/en/buiness-cache.md
@@ -0,0 +1,152 @@
+# Business layer cache
+
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+In the previous article [Persistent Layer Cache](redis-cache.md), the db layer cache was introduced. In retrospect, the main design of the db layer cache can be summarized as follows:
+
+* The cache is only deleted but not updated
+* Only one row record is always stored, that is, the row record corresponding to the primary key
+* The unique index only caches the primary key value, not the row record directly (refer to the mysql index idea)
+* Anti-cache penetration design, one minute by default
+* Do not cache multi-line records
+
+## Preface
+
+In a large-scale business system, by adding a cache to the persistence layer, for most single-line record queries,
+it is believed that the cache can help the persistence layer reduce a lot of access pressure, but in actual business,
+data reading is not just a single-line record.
+In the face of many multi-line records, this will also cause a lot of access pressure on the persistence layer.
+In addition, it is unrealistic to rely solely on the persistence layer for high-concurrency scenarios such as flash sale systems and course selection systems.
+In this section, we introduce the cache design in go-zero practice-biz cache.
+
+## Examples of applicable scenarios
+
+* Course selection systems
+* Content social systems
+* Flash sale systems...
+
+Like these systems, we can add another layer of cache to the business layer to store key information in the system,
+such as the student selection information in the course selection system, the remaining number of courses in the course selection system,
+and the content information during a certain period of time in the content social system.
+
+Next, let's take an example of a content social system.
+
+In the content social system, we generally query a batch of content lists first,
+and then click on a piece of content to view the details.
+
+Before adding biz cache, the query flowchart of content information should be:
+
+![redis-cache-05](./resource/redis-cache-05.png)
+
+From the figure and the previous article [Persistence Layer Cache](redis-cache.md),
+we can see that the content list has no cache to rely on.
+If we add a layer of cache to the business layer to store the key information (or even the complete information) in the list,
+then access to multiple rows of records is no longer a problem, and this is what biz redis will do. Next,
+let’s take a look at the design plan, assuming that a single-line record in the content system contains the following fields.
+
+|Field Name|Field Type|Remarks|
+|---|---|---|
+|id|string|Content id|
+|title|string|Title|
+|content|string|Content|
+|createTime|time.Time|Create time|
+
+Our goal is to obtain a batch of content lists while avoiding the access pressure that listing content puts on the db.
+First, we use the redis sorted set data structure for storage. Depending on how much field information needs to be stored,
+there are two redis storage schemes:
+
+* Cache local information
+
+ ![biz-redis-02](./resource/biz-redis-02.svg)
+ The key field information (such as id, etc.) is compressed and stored according to certain rules.
+ For score, we use the `createTime` millisecond value (the time value is equal, not discussed here).
+ The advantage of this storage scheme is to save redis storage space.
+
+ On the other hand, the disadvantage is that the detailed content of the list needs to be checked back again (but this back check will use the row record cache of the persistence layer)
+
+* Cache complete information
+
+ ![biz-redis-01](./resource/biz-redis-01.svg)
+    All published content will be stored after being compressed according to certain rules. For the score,
+    we still use the `createTime` millisecond value. The advantage of this storage scheme is that business creates,
+    deletes, queries, and updates all go to redis, while the db layer at this point
+
+    no longer needs to consider the row record cache; the persistence layer only provides data backup and recovery.
+    On the other hand, its shortcomings are also obvious: the storage space and configuration requirements are higher, and the cost will increase.
+
+Sample code:
+```go
+type Content struct {
+ Id string `json:"id"`
+ Title string `json:"title"`
+ Content string `json:"content"`
+ CreateTime time.Time `json:"create_time"`
+}
+
+const bizContentCacheKey = `biz#content#cache`
+
+// AddContent provides content storage
+func AddContent(r redis.Redis, c *Content) error {
+ v := compress(c)
+ _, err := r.Zadd(bizContentCacheKey, c.CreateTime.UnixNano()/1e6, v)
+ return err
+}
+
+// DelContent provides content deletion
+func DelContent(r redis.Redis, c *Content) error {
+ v := compress(c)
+ _, err := r.Zrem(bizContentCacheKey, v)
+
+ return err
+}
+
+// Content compression
+func compress(c *Content) string {
+ // todo: do it yourself
+ var ret string
+ return ret
+}
+
+// Content decompression
+func unCompress(v string) *Content {
+ // todo: do it yourself
+ var ret Content
+ return &ret
+}
+
+// ListByRangeTime provides data query based on time period
+func ListByRangeTime(r redis.Redis, start, end time.Time) ([]*Content, error) {
+ kvs, err := r.ZrangebyscoreWithScores(bizContentCacheKey, start.UnixNano()/1e6, end.UnixNano()/1e6)
+ if err != nil {
+ return nil, err
+ }
+
+ var list []*Content
+ for _, kv := range kvs {
+ data:=unCompress(kv.Key)
+ list = append(list, data)
+ }
+
+ return list, nil
+}
+
+```
+
+In the above example, redis does not set an expiration time; we synchronize the add, delete, update,
+and query operations to redis. We consider this design appropriate because the content social system has a relatively high volume of list access requests.
+In addition, some data is not accessed that frequently: there may be a sudden surge of access within a certain period,
+after which the data is not visited again for a long time. For this kind of scenario,
+how should the cache be designed?
+In go-zero practice, there are two solutions to this problem:
+
+* Add an in-memory cache: used to store data that may see a sudden burst of access. A common scheme uses a map data structure,
+  which is relatively simple to implement but requires an extra timer to handle cache expiration. Another solution is the
+  [Cache](https://github.com/zeromicro/go-zero/blob/master/core/collection/cache.go) in the go-zero library, which is specifically
+  designed for in-memory cache management.
+* Use biz redis and set a reasonable expiration time
+
+# Summary
+The above two scenarios cover most multi-line record caching needs. For scenarios where the query volume of multi-line records is not large,
+there is no need to introduce biz redis right away: let the db take care of it first, and developers can decide whether biz redis
+needs to be introduced based on monitoring of the persistence layer and the service.
diff --git a/go-zero.dev/en/business-coding.md b/go-zero.dev/en/business-coding.md
new file mode 100644
index 00000000..433e8700
--- /dev/null
+++ b/go-zero.dev/en/business-coding.md
@@ -0,0 +1,128 @@
+# Business code
+
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+In the previous section, we wrote user.api based on the preliminary requirements to describe which services the user service provides to the outside. In this section, we continue from those steps
+and use business coding to show how go-zero is used in actual business.
+
+## Add Mysql configuration
+```shell
+$ vim service/user/cmd/api/internal/config/config.go
+```
+```go
+package config
+
+import (
+    "github.com/tal-tech/go-zero/core/stores/cache"
+    "github.com/tal-tech/go-zero/rest"
+)
+
+type Config struct {
+    rest.RestConf
+    Mysql struct {
+        DataSource string
+    }
+
+    CacheRedis cache.CacheConf
+}
+```
+
+## Improve yaml configuration
+```shell
+$ vim service/user/cmd/api/etc/user-api.yaml
+```
+```yaml
+Name: user-api
+Host: 0.0.0.0
+Port: 8888
+Mysql:
+ DataSource: $user:$password@tcp($url)/$db?charset=utf8mb4&parseTime=true&loc=Asia%2FShanghai
+CacheRedis:
+ - Host: $host
+ Pass: $pass
+ Type: node
+```
+
+> [!TIP]
+> $user: mysql database user
+>
+> $password: mysql database password
+>
+> $url: mysql database connection address
+>
+> $db: mysql database db name, that is, the database where the user table is located
+>
+> $host: Redis connection address Format: ip:port, such as: 127.0.0.1:6379
+>
+> $pass: redis password
+>
+> For more configuration information, please refer to [api configuration introduction](api-config.md)
+
+## Improve service dependencies
+```shell
+$ vim service/user/cmd/api/internal/svc/servicecontext.go
+```
+```go
+type ServiceContext struct {
+ Config config.Config
+ UserModel model.UserModel
+}
+
+func NewServiceContext(c config.Config) *ServiceContext {
+    conn := sqlx.NewMysql(c.Mysql.DataSource)
+    return &ServiceContext{
+        Config:    c,
+        UserModel: model.NewUserModel(conn, c.CacheRedis),
+    }
+ }
+}
+```
+## Fill in the login logic
+```shell
+$ vim service/user/cmd/api/internal/logic/loginlogic.go
+```
+
+```go
+func (l *LoginLogic) Login(req types.LoginReq) (*types.LoginReply, error) {
+ if len(strings.TrimSpace(req.Username)) == 0 || len(strings.TrimSpace(req.Password)) == 0 {
+ return nil, errors.New("Invalid parameter")
+ }
+
+ userInfo, err := l.svcCtx.UserModel.FindOneByNumber(req.Username)
+ switch err {
+ case nil:
+ case model.ErrNotFound:
+ return nil, errors.New("Username does not exist")
+ default:
+ return nil, err
+ }
+
+ if userInfo.Password != req.Password {
+ return nil, errors.New("User password is incorrect")
+ }
+
+ // ---start---
+ now := time.Now().Unix()
+ accessExpire := l.svcCtx.Config.Auth.AccessExpire
+ jwtToken, err := l.getJwtToken(l.svcCtx.Config.Auth.AccessSecret, now, l.svcCtx.Config.Auth.AccessExpire, userInfo.Id)
+ if err != nil {
+ return nil, err
+ }
+ // ---end---
+
+ return &types.LoginReply{
+ Id: userInfo.Id,
+ Name: userInfo.Name,
+ Gender: userInfo.Gender,
+ AccessToken: jwtToken,
+ AccessExpire: now + accessExpire,
+ RefreshAfter: now + accessExpire/2,
+ }, nil
+}
+```
+> [!TIP]
+> For the code implementation of [start]-[end] in the above code, please refer to the [Jwt Authentication](jwt.md) chapter
+
+# Guess you want
+* [API IDL](api-grammar.md)
+* [API Commands](goctl-api.md)
+* [API Directory Structure](api-dir.md)
+* [JWT](jwt.md)
+* [API Configuration](api-config.md)
\ No newline at end of file
diff --git a/go-zero.dev/en/business-dev.md b/go-zero.dev/en/business-dev.md
new file mode 100644
index 00000000..a2c31168
--- /dev/null
+++ b/go-zero.dev/en/business-dev.md
@@ -0,0 +1,63 @@
+# Business development
+
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+In this chapter, we use a simple example to demonstrate some basic functions in go-zero. This section will contain the following subsections:
+ * [Directory Structure](service-design.md)
+ * [Model Generation](model-gen.md)
+ * [API Coding](api-coding.md)
+ * [Business Coding](business-coding.md)
+ * [JWT](jwt.md)
+ * [Middleware](middleware.md)
+ * [RPC Implement & Call](rpc-call.md)
+ * [Error Handling](error-handle.md)
+
+## Demo project download
+Before going further into the documentation, you can take a look at the source code here; we will progressively demonstrate features based on this source code
+instead of starting completely from scratch. If you come from the [Quick Start](quick-start.md) chapter, this source code structure will be familiar to you.
+
+Click Here to download Demo project
+
+## Demonstration project description
+
+### Scenes
+The programmer Xiao Ming needs to borrow a copy of "Journey to the West". When there is no online library management system, he goes to the front desk of the library to consult with the librarian every day.
+* Xiao Ming: Hello, do you still have the book "Journey to the West" today?
+* Administrator: No more, let's check again tomorrow.
+
+One day later, Xiao Ming came to the library again and asked:
+* Xiao Ming: Hello, do you still have the book "Journey to the West" today?
+* Administrator: No, you can check again in two days.
+
+After many such rounds, Xiao Ming gained nothing and wasted a lot of time going back and forth, so he finally couldn't stand the backward library management system.
+He decided to build a book query system by himself.
+
+### Expected achievement
+* User login:
+ Rely on existing student system data to log in
+* Book search:
+ Search for books based on book keywords and query the remaining number of books.
+
+### System analysis
+
+#### Service design
+* user
+ * api: provides user login protocol
+ * rpc: for search service to access user data
+* search
+    * api: provides the book query protocol
+
+> [!TIP]
+> Although this tiny book borrowing query system is small and does not fully fit a real business scenario, the two functions above are already enough to demonstrate go-zero's api/rpc scenarios.
+> To support richer go-zero feature demonstrations later, related functions will be inserted into the documentation as the business requires. Here only one scenario is used for the introduction.
+>
+> NOTE: Please create the sql statement in the user into the db by yourself, see [prepare](prepare.md) for more preparation work
+>
+> Add some preset user data to the database for later use. For the sake of space, the demonstration project does not demonstrate the operation of inserting data in detail.
+
+
+# Reference preset data
+```sql
+INSERT INTO `user` (number,name,password,gender)values ('666','xiaoming','123456','male');
+```
\ No newline at end of file
diff --git a/go-zero.dev/en/ci-cd.md b/go-zero.dev/en/ci-cd.md
new file mode 100644
index 00000000..bb11ea50
--- /dev/null
+++ b/go-zero.dev/en/ci-cd.md
@@ -0,0 +1,60 @@
+# CI/CD
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+
+> In software engineering, CI/CD or CICD generally refers to the combined practices of continuous integration and either continuous delivery or continuous deployment.
+>
+> ——[Wikipedia](https://zh.wikipedia.org/wiki/CI/CD)
+
+
+![cd-cd](./resource/ci-cd.png)
+
+## What can CI do?
+
+> In modern application development, the goal is to have multiple developers working simultaneously on different features of the same app. However, if an organization is set up to merge all branching source code together on one day (known as “merge day”), the resulting work can be tedious, manual, and time-intensive. That’s because when a developer working in isolation makes a change to an application, there’s a chance it will conflict with different changes being simultaneously made by other developers. This problem can be further compounded if each developer has customized their own local integrated development environment (IDE), rather than the team agreeing on one cloud-based IDE.
+
+> ——[Continuous integration](https://www.redhat.com/en/topics/devops/what-is-ci-cd)
+
+From a conceptual point of view, CI/CD includes the deployment process. Here, we put deployment (CD) in a separate section, [Service Deployment](service-deployment.md);
+this section uses gitlab to do a simple CI (run unit tests) demonstration.
+
+## Gitlab CI
+Gitlab CI/CD is a built-in software development tool of Gitlab, providing
+* Continuous Integration (CI)
+* Continuous Delivery (CD)
+* Continuous deployment (CD)
+
+## Prepare
+* gitlab installation
+* git installation
+* gitlab runner installation
+
+## Enable Gitlab CI
+* Upload code
+  * Create a new repository `go-zero-demo` in gitlab
+  * Upload the local code to the `go-zero-demo` repository
+* Create a `.gitlab-ci.yaml` file in the project root directory. Through this file, a pipeline can be created, which will run whenever the code repository changes. A pipeline consists of one or more stages run in sequence,
+  and each stage can contain one or more jobs running in parallel.
+* Add CI content (for reference only)
+
+ ```yaml
+ stages:
+ - analysis
+
+ analysis:
+ stage: analysis
+ image: golang
+ script:
+ - go version && go env
+ - go test -short $(go list ./...) | grep -v "no test"
+ ```
+
+> [!TIP]
+> The above CI is a simple demonstration. For detailed gitlab CI, please refer to the official gitlab documentation for richer CI integration.
+
+
+# Reference
+* [CI/CD Wikipedia](https://zh.wikipedia.org/wiki/CI/CD)
+* [Continuous integration](https://www.redhat.com/en/topics/devops/what-is-ci-cd)
+* [Gitlab CI](https://docs.gitlab.com/ee/ci/)
\ No newline at end of file
diff --git a/go-zero.dev/en/coding-spec.md b/go-zero.dev/en/coding-spec.md
new file mode 100644
index 00000000..d4983cf6
--- /dev/null
+++ b/go-zero.dev/en/coding-spec.md
@@ -0,0 +1,46 @@
+# Coding Rules
+
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+## import
+* A single import is not recommended to be wrapped in parentheses
+* Import in the order: `standard library packages`, NEW LINE, `project packages`, NEW LINE, `third-party packages`
+ ```go
+ import (
+ "context"
+        "strings"
+
+ "greet/user/internal/config"
+
+ "google.golang.org/grpc"
+ )
+ ```
+
+## Function returns
+* Avoid returning objects by value; prefer pointer returns
+* Follow the principle: if a normal value is returned there must be no error, and if an error is returned there must be no normal value.
+
+## Error handling
+* An error must be handled, if it cannot be handled, it must be thrown.
+* Avoid underscore (_) receiving error
+
+## Function body coding
+* It is recommended that a block end with a blank line, such as if, for, etc.
+ ```go
+    func main() {
+        if x == 1 {
+            // do something
+        }
+
+        fmt.Println("xxx")
+    }
+ ```
+* Blank line before return
+ ```go
+    func getUser(id string) (string, error) {
+        ...
+
+        return "xx", nil
+    }
+ ```
\ No newline at end of file
diff --git a/go-zero.dev/en/concept-introduction.md b/go-zero.dev/en/concept-introduction.md
new file mode 100644
index 00000000..91ee8ddd
--- /dev/null
+++ b/go-zero.dev/en/concept-introduction.md
@@ -0,0 +1,46 @@
+# Concepts
+
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+
+## go-zero
+go-zero is a web and rpc framework with lots of built-in engineering practices. It was born to ensure the stability of busy services with resilience design, and has been serving sites with tens of millions of users for years.
+
+## goctl
+An auxiliary tool designed to improve engineering efficiency and reduce error rates for developers.
+
+## goctl plugins
+Refers to the peripheral binary resources centered on goctl, which can meet some personalized code generation requirements, such as the route-merging plugin `goctl-go-compact`,
+the `goctl-swagger` plugin for generating swagger documents, the `goctl-php` plugin for generating a php caller, etc.
+
+## intellij/vscode plugins
+Plug-ins built on goctl for the intellij series products and vscode, which replace goctl command line operations with UI actions.
+
+## api file
+An api file refers to a text file used to define and describe an api service. It ends with the .api suffix and contains IDL of the api syntax.
+
+## goctl environment
+The goctl environment is the preparation environment before using goctl, including:
+* golang environment
+* protoc
+* protoc-gen-go plugin
+* go module | gopath
+
+## go-zero-demo
+go-zero-demo is the parent repository containing all the source code in this documentation. All later demos are created as sub-projects under this project,
+so we need to create the parent repository `go-zero-demo` in advance; here it is placed in the home directory.
+
+```shell
+$ cd ~
+$ mkdir go-zero-demo && cd go-zero-demo
+$ go mod init go-zero-demo
+```
+
+
+# Reference
+* [go-zero](README.md)
+* [Goctl](goctl.md)
+* [Plugins](plugin-center.md)
+* [Tools](tool-center.md)
+* [API IDL](api-grammar.md)
\ No newline at end of file
diff --git a/go-zero.dev/en/config-introduction.md b/go-zero.dev/en/config-introduction.md
new file mode 100644
index 00000000..87eca118
--- /dev/null
+++ b/go-zero.dev/en/config-introduction.md
@@ -0,0 +1,8 @@
+# Configuration Introduction
+
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+Before officially using go-zero, let us first understand the configuration definitions of different service types in go-zero, and see what role each field in the configuration has. This section will contain the following subsections:
+* [API Configuration](api-config.md)
+* [RPC Configuration](rpc-config.md)
\ No newline at end of file
diff --git a/go-zero.dev/en/datacenter.md b/go-zero.dev/en/datacenter.md
new file mode 100644
index 00000000..78b080ea
--- /dev/null
+++ b/go-zero.dev/en/datacenter.md
@@ -0,0 +1,918 @@
+# How do I use go-zero to implement a Middle Ground System?
+
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+
+> Author: Jack Luo
+>
+> Original link:https://www.cnblogs.com/jackluo/p/14148518.html
+
+[TOC]
+
+I recently discovered that a rising-star microservice framework from TAL has appeared in the golang community.
+The name alone is exciting. Before this I had only played with go-micro, never actually in a project;
+I always regarded microservices and grpc as rather lofty things I had not really tried.
+The tooling this framework provides is genuinely easy to use: you only write the definitions and a comfortable
+file structure is generated, leaving you to focus on the business. A voting activity came up recently,
+and Middle Ground Systems have been quite popular in recent years, so I decided to give it a try.
+
+> SourceCode: [https://github.com/jackluo2012/datacenter](https://github.com/jackluo2012/datacenter)
+
+Let's talk about the idea of Middle Ground System architecture first:
+
+![](https://img2020.cnblogs.com/blog/203395/202012/203395-20201217094615171-335437652.jpg)
+
+The concept of a Middle Ground System, as I understand it anyway, is to unify all these apps behind one set of shared services.
+
+Let's talk about the user service first. A company now has many official accounts, mini programs, WeChat, Alipay, and other platforms, and every time we develop one we have to build a user login service again. To stop copy-pasting that code, we wondered whether we could have one independent user service: just tell it which platform you want to log in with (WeChat, for example), the client returns a code to the server, and the server takes that code to WeChat to fetch the user information. Anyway, everyone gets the idea.
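The code-for-user-info exchange above can be sketched in Go. This is only an illustration, not project code: the endpoint is WeChat's public jscode2session API, and the `code2SessionURL` helper is a hypothetical name; appid and secret would come from the public configuration service.

```go
package main

import (
	"fmt"
	"net/url"
)

// code2SessionURL builds the WeChat jscode2session request from the code the
// client returned to the server. appid/secret would come from the public
// configuration service.
func code2SessionURL(appid, secret, code string) string {
	q := url.Values{}
	q.Set("appid", appid)
	q.Set("secret", secret)
	q.Set("js_code", code)
	q.Set("grant_type", "authorization_code")
	return "https://api.weixin.qq.com/sns/jscode2session?" + q.Encode()
}

func main() {
	// The server would GET this URL and decode openid/session_key from the response.
	fmt.Println(code2SessionURL("wx123", "shh", "CODE"))
}
```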
+
+We decided to pull all of this into a public configuration service that stores the appid and appkey of WeChat, Alipay and the other platforms, as well as the payment appid and appkey, written once for everyone.
+
+---
+
+Finally, let's talk about implementation, the whole is a repo:
+
+- Gateway, we use: go-zero's Api service
+- Others are services, we use go-zero rpc service
+
+Look at the directory structure
+
+![](https://img2020.cnblogs.com/blog/203395/202012/203395-20201209110504600-317546535.png)
+
+Working alone, I spent about a week on it and ended up with the Middle Ground System above.
+
+## datacenter-api service
+
+
+Look at the official document first [https://zeromicro.github.io/go-zero/](https://zeromicro.github.io/go-zero/)
+
+Let's set up the gateway first:
+
+```shell
+➜ blogs mkdir datacenter && cd datacenter
+➜ datacenter go mod init datacenter
+go: creating new go.mod: module datacenter
+➜ datacenter
+```
+
+View the directory:
+
+
+```
+➜ datacenter tree
+.
+└── go.mod
+
+0 directories, 1 file
+```
+
+
+### Create api file
+
+
+```
+➜ datacenter goctl api -o datacenter.api
+Done.
+➜ datacenter tree
+.
+├── datacenter.api
+└── go.mod
+```
+
+
+### Define api service
+
+
+It covers the **Public Service**, **User Service**, and **Voting Activity Service** mentioned above:
+
+
+```
+info(
+ title: "demo"
+ desc: "demo"
+ author: "jackluo"
+ email: "net.webjoy@gmail.com"
+)
+
+// Get application information
+type Beid struct {
+ Beid int64 `json:"beid"`
+}
+type Token struct{
+ Token string `json:"token"`
+}
+type WxTicket struct{
+ Ticket string `json:"ticket"`
+}
+type Application struct {
+ Sname string `json:"Sname"`
+ Logo string `json:"logo"`
+ Isclose int64 `json:"isclose"`
+ Fullwebsite string `json:"fullwebsite"`
+}
+type SnsReq struct{
+ Beid
+ Ptyid int64 `json:"ptyid"` // Platform ID
+ BackUrl string `json:"back_url"` // Return address after login
+}
+type SnsResp struct{
+ Beid
+ Ptyid int64 `json:"ptyid"` // Platform ID
+ Appid string `json:"appid"` // sns Platform ID
+ Title string `json:"title"`
+ LoginUrl string `json:"login_url"` // WeChat login address
+}
+
+type WxShareResp struct {
+ Appid string `json:"appid"`
+ Timestamp int64 `json:"timestamp"`
+ Noncestr string `json:"noncestr"`
+ Signature string `json:"signature"`
+}
+
+@server(
+ group: common
+)
+service datacenter-api {
+ @doc(
+ summary: "Get information about the site"
+ )
+ @handler votesVerification
+ get /MP_verify_NT04cqknJe0em3mT.txt (SnsReq) returns (SnsResp)
+
+ @handler appInfo
+ get /common/appinfo (Beid) returns (Application)
+
+ @doc(
+ summary: "Get social attribute information of the site"
+ )
+ @handler snsInfo
+ post /common/snsinfo (SnsReq) returns (SnsResp)
+ // Get shared returns
+ @handler wxTicket
+ post /common/wx/ticket (SnsReq) returns (WxShareResp)
+
+}
+
+@server(
+ jwt: Auth
+ group: common
+)
+service datacenter-api {
+ @doc(
+ summary: "Qiniu upload credentials"
+ )
+ @handler qiuniuToken
+ post /common/qiuniu/token (Beid) returns (Token)
+}
+
+// Registration request
+type RegisterReq struct {
+ Mobile string `json:"mobile"`
+ Password string `json:"password"`
+ Smscode string `json:"smscode"`
+}
+// Login request
+type LoginReq struct{
+ Mobile string `json:"mobile"`
+ Type int64 `json:"type"` // 1. Password login, 2. SMS login
+ Password string `json:"password"`
+}
+// WeChat login
+type WxLoginReq struct {
+ Beid int64 `json:"beid"` // Application id
+ Code string `json:"code"` // WeChat AccessKey
+ Ptyid int64 `json:"ptyid"` // Platform ID
+}
+
+//Return user information
+type UserReply struct {
+ Auid int64 `json:"auid"`
+ Uid int64 `json:"uid"`
+ Beid int64 `json:"beid"` // Platform ID
+ Ptyid int64 `json:"ptyid"`
+ Username string `json:"username"`
+ Mobile string `json:"mobile"`
+ Nickname string `json:"nickname"`
+ Openid string `json:"openid"`
+ Avator string `json:"avator"`
+ JwtToken
+}
+
+type AppUser struct{
+ Uid int64 `json:"uid"`
+ Auid int64 `json:"auid"`
+ Beid int64 `json:"beid"`
+ Ptyid int64 `json:"ptyid"`
+ Nickname string `json:"nickname"`
+ Openid string `json:"openid"`
+ Avator string `json:"avator"`
+}
+
+type LoginAppUser struct{
+ Uid int64 `json:"uid"`
+ Auid int64 `json:"auid"`
+ Beid int64 `json:"beid"`
+ Ptyid int64 `json:"ptyid"`
+ Nickname string `json:"nickname"`
+ Openid string `json:"openid"`
+ Avator string `json:"avator"`
+ JwtToken
+}
+
+type JwtToken struct {
+ AccessToken string `json:"access_token,omitempty"`
+ AccessExpire int64 `json:"access_expire,omitempty"`
+ RefreshAfter int64 `json:"refresh_after,omitempty"`
+}
+
+type UserReq struct{
+ Auid int64 `json:"auid"`
+ Uid int64 `json:"uid"`
+ Beid int64 `json:"beid"`
+ Ptyid int64 `json:"ptyid"`
+}
+
+type Request {
+ Name string `path:"name,options=you|me"`
+}
+type Response {
+ Message string `json:"message"`
+}
+
+@server(
+ group: user
+)
+service user-api {
+ @handler ping
+ post /user/ping ()
+
+ @handler register
+ post /user/register (RegisterReq) returns (UserReply)
+
+ @handler login
+ post /user/login (LoginReq) returns (UserReply)
+
+ @handler wxlogin
+ post /user/wx/login (WxLoginReq) returns (LoginAppUser)
+
+ @handler code2Session
+ get /user/wx/login () returns (LoginAppUser)
+}
+@server(
+ jwt: Auth
+ group: user
+ middleware: Usercheck
+)
+service user-api {
+ @handler userInfo
+ get /user/dc/info (UserReq) returns (UserReply)
+}
+
+
+type Actid struct {
+ Actid int64 `json:"actid"`
+}
+
+type VoteReq struct {
+ Aeid int64 `json:"aeid"`
+ Actid
+}
+type VoteResp struct {
+ VoteReq
+ Votecount int64 `json:"votecount"`
+ Viewcount int64 `json:"viewcount"`
+}
+
+
+type ActivityResp struct {
+ Actid int64 `json:"actid"`
+ Title string `json:"title"`
+ Descr string `json:"descr"`
+ StartDate int64 `json:"start_date"`
+ EnrollDate int64 `json:"enroll_date"`
+ EndDate int64 `json:"end_date"`
+ Votecount int64 `json:"votecount"`
+ Viewcount int64 `json:"viewcount"`
+ Type int64 `json:"type"`
+ Num int64 `json:"num"`
+}
+
+type EnrollReq struct {
+ Actid
+ Name string `json:"name"`
+ Address string `json:"address"`
+ Images []string `json:"images"`
+ Descr string `json:"descr"`
+}
+
+type EnrollResp struct {
+ Actid
+ Aeid int64 `json:"aeid"`
+ Name string `json:"name"`
+ Address string `json:"address"`
+ Images []string `json:"images"`
+ Descr string `json:"descr"`
+ Votecount int64 `json:"votecount"`
+ Viewcount int64 `json:"viewcount"`
+
+}
+
+@server(
+ group: votes
+)
+service votes-api {
+ @doc(
+ summary: "Get activity information"
+ )
+ @handler activityInfo
+ get /votes/activity/info (Actid) returns (ActivityResp)
+ @doc(
+ summary: "Activity visit +1"
+ )
+ @handler activityIcrView
+ get /votes/activity/view (Actid) returns (ActivityResp)
+ @doc(
+ summary: "Get information about registered voting works"
+ )
+ @handler enrollInfo
+ get /votes/enroll/info (VoteReq) returns (EnrollResp)
+ @doc(
+ summary: "Get a list of registered works"
+ )
+ @handler enrollLists
+ get /votes/enroll/lists (Actid) returns(EnrollResp)
+}
+
+@server(
+ jwt: Auth
+ group: votes
+ middleware: Usercheck
+)
+service votes-api {
+ @doc(
+ summary: "vote"
+ )
+ @handler vote
+ post /votes/vote (VoteReq) returns (VoteResp)
+ @handler enroll
+ post /votes/enroll (EnrollReq) returns (EnrollResp)
+}
+```
+
+
+That is basically the API definition, with documentation, written out above.
+
+
+### Generate datacenter api service
+
+
+```
+➜ datacenter goctl api go -api datacenter.api -dir .
+Done.
+➜ datacenter tree
+.
+├── datacenter.api
+├── etc
+│ └── datacenter-api.yaml
+├── go.mod
+├── internal
+│ ├── config
+│ │ └── config.go
+│ ├── handler
+│ │ ├── common
+│ │ │ ├── appinfohandler.go
+│ │ │ ├── qiuniutokenhandler.go
+│ │ │ ├── snsinfohandler.go
+│ │ │ ├── votesverificationhandler.go
+│ │ │ └── wxtickethandler.go
+│ │ ├── routes.go
+│ │ ├── user
+│ │ │ ├── code2sessionhandler.go
+│ │ │ ├── loginhandler.go
+│ │ │ ├── pinghandler.go
+│ │ │ ├── registerhandler.go
+│ │ │ ├── userinfohandler.go
+│ │ │ └── wxloginhandler.go
+│ │ └── votes
+│ │ ├── activityicrviewhandler.go
+│ │ ├── activityinfohandler.go
+│ │ ├── enrollhandler.go
+│ │ ├── enrollinfohandler.go
+│ │ ├── enrolllistshandler.go
+│ │ └── votehandler.go
+│ ├── logic
+│ │ ├── common
+│ │ │ ├── appinfologic.go
+│ │ │ ├── qiuniutokenlogic.go
+│ │ │ ├── snsinfologic.go
+│ │ │ ├── votesverificationlogic.go
+│ │ │ └── wxticketlogic.go
+│ │ ├── user
+│ │ │ ├── code2sessionlogic.go
+│ │ │ ├── loginlogic.go
+│ │ │ ├── pinglogic.go
+│ │ │ ├── registerlogic.go
+│ │ │ ├── userinfologic.go
+│ │ │ └── wxloginlogic.go
+│ │ └── votes
+│ │ ├── activityicrviewlogic.go
+│ │ ├── activityinfologic.go
+│ │ ├── enrollinfologic.go
+│ │ ├── enrolllistslogic.go
+│ │ ├── enrolllogic.go
+│ │ └── votelogic.go
+│ ├── middleware
+│ │ └── usercheckmiddleware.go
+│ ├── svc
+│ │ └── servicecontext.go
+│ └── types
+│ └── types.go
+└── datacenter.go
+
+14 directories, 43 files
+```
+
+
+We open `etc/datacenter-api.yaml` and add the necessary configuration information
+
+
+```yaml
+Name: datacenter-api
+Log:
+ Mode: console
+Host: 0.0.0.0
+Port: 8857
+Auth:
+ AccessSecret: secret
+ AccessExpire: 86400
+CacheRedis:
+- Host: 127.0.0.1:6379
+ Pass: pass
+ Type: node
+UserRpc:
+ Etcd:
+ Hosts:
+ - 127.0.0.1:2379
+ Key: user.rpc
+CommonRpc:
+ Etcd:
+ Hosts:
+ - 127.0.0.1:2379
+ Key: common.rpc
+VotesRpc:
+ Etcd:
+ Hosts:
+ - 127.0.0.1:2379
+ Key: votes.rpc
+```
+
+
+I will declare `UserRpc`, `CommonRpc`, and `VotesRpc` above first, and then fill them in step by step.
+
+
+Let's write the `CommonRpc` service first.
+
+
+## CommonRpc service
+
+
+### New project directory
+
+
+```
+➜ datacenter mkdir -p common/rpc && cd common/rpc
+```
+
+
+We create it directly under the datacenter directory, because common may in the future provide not only rpc services but also api services, hence the extra rpc directory.
+
+
+### goctl create template
+
+
+```
+➜ rpc goctl rpc template -o=common.proto
+➜ rpc ls
+common.proto
+```
+
+
+Fill in the content:
+
+
+```protobuf
+➜ rpc cat common.proto
+syntax = "proto3";
+
+package common;
+
+option go_package = "common";
+
+message BaseAppReq{
+ int64 beid=1;
+}
+
+message BaseAppResp{
+ int64 beid=1;
+ string logo=2;
+ string sname=3;
+ int64 isclose=4;
+ string fullwebsite=5;
+}
+
+message AppConfigReq {
+ int64 beid=1;
+ int64 ptyid=2;
+}
+
+message AppConfigResp {
+ int64 id=1;
+ int64 beid=2;
+ int64 ptyid=3;
+ string appid=4;
+ string appsecret=5;
+ string title=6;
+}
+
+service Common {
+ rpc GetAppConfig(AppConfigReq) returns(AppConfigResp);
+ rpc GetBaseApp(BaseAppReq) returns(BaseAppResp);
+}
+```
+
+
+### goctl generates the rpc service
+
+
+```bash
+➜ rpc goctl rpc proto -src common.proto -dir .
+protoc -I=/Users/jackluo/works/blogs/datacenter/common/rpc common.proto --go_out=plugins=grpc:/Users/jackluo/works/blogs/datacenter/common/rpc/common
+Done.
+```
+
+
+```
+➜ rpc tree
+.
+├── common
+│ └── common.pb.go
+├── common.go
+├── common.proto
+├── commonclient
+│ └── common.go
+├── etc
+│ └── common.yaml
+└── internal
+    ├── config
+    │   └── config.go
+    ├── logic
+    │   ├── getappconfiglogic.go
+    │   └── getbaseapplogic.go
+    ├── server
+    │   └── commonserver.go
+    └── svc
+        └── servicecontext.go
+
+8 directories, 10 files
+```
+
+
+The directory structure and specification are all generated, so there is no need to worry about how to lay out or organize the project.
+
+
+Take a look at the configuration, where mysql, redis and other information can be set:
+
+
+```yaml
+Name: common.rpc
+ListenOn: 127.0.0.1:8081
+Mysql:
+ DataSource: root:admin@tcp(127.0.0.1:3306)/datacenter?charset=utf8&parseTime=true&loc=Asia%2FShanghai
+CacheRedis:
+- Host: 127.0.0.1:6379
+ Pass:
+ Type: node
+Etcd:
+ Hosts:
+ - 127.0.0.1:2379
+ Key: common.rpc
+```
+
+
+Let's add database services:
+
+
+```
+➜ rpc cd ..
+➜ common ls
+rpc
+➜ common pwd
+/Users/jackluo/works/blogs/datacenter/common
+➜ common goctl model mysql datasource -url="root:admin@tcp(127.0.0.1:3306)/datacenter" -table="base_app" -dir ./model -c
+Done.
+➜ common tree
+.
+├── model
+│ ├── baseappmodel.go
+│ └── vars.go
+└── rpc
+ ├── common
+ │ └── common.pb.go
+ ├── common.go
+ ├── common.proto
+ ├── commonclient
+ │ └── common.go
+ ├── etc
+ │ └── common.yaml
+ └── internal
+ ├── config
+ │ └── config.go
+ ├── logic
+ │ ├── getappconfiglogic.go
+ │ └── getbaseapplogic.go
+ ├── server
+ │ └── commonserver.go
+ └── svc
+ └── servicecontext.go
+
+10 directories, 12 files
+```
+
+
+With that the basic `rpc` service is done. Next we wire the rpc to the model and the api; the official documentation already covers this in detail, so here is just the code:
+
+
+```go
+➜ common cat rpc/internal/config/config.go
+package config
+
+import (
+ "github.com/tal-tech/go-zero/core/stores/cache"
+ "github.com/tal-tech/go-zero/zrpc"
+)
+
+type Config struct {
+ zrpc.RpcServerConf
+ Mysql struct {
+ DataSource string
+ }
+ CacheRedis cache.ClusterConf
+}
+```
+
+
+Modify in svc:
+
+
+```go
+➜ common cat rpc/internal/svc/servicecontext.go
+package svc
+
+import (
+ "datacenter/common/model"
+ "datacenter/common/rpc/internal/config"
+
+ "github.com/tal-tech/go-zero/core/stores/sqlx"
+)
+
+type ServiceContext struct {
+ c config.Config
+ AppConfigModel model.AppConfigModel
+ BaseAppModel model.BaseAppModel
+}
+
+func NewServiceContext(c config.Config) *ServiceContext {
+ conn := sqlx.NewMysql(c.Mysql.DataSource)
+ apm := model.NewAppConfigModel(conn, c.CacheRedis)
+ bam := model.NewBaseAppModel(conn, c.CacheRedis)
+ return &ServiceContext{
+ c: c,
+ AppConfigModel: apm,
+ BaseAppModel: bam,
+ }
+}
+```
+
+
+The above code has already associated `rpc` with the `model` database, we will now associate `rpc` with `api`:
+
+
+```go
+➜ datacenter cat internal/config/config.go
+
+package config
+
+import (
+ "github.com/tal-tech/go-zero/core/stores/cache"
+ "github.com/tal-tech/go-zero/rest"
+ "github.com/tal-tech/go-zero/zrpc"
+)
+
+type Config struct {
+ rest.RestConf
+
+ Auth struct {
+ AccessSecret string
+ AccessExpire int64
+ }
+ UserRpc zrpc.RpcClientConf
+ CommonRpc zrpc.RpcClientConf
+ VotesRpc zrpc.RpcClientConf
+
+ CacheRedis cache.ClusterConf
+}
+```
+
+
+Join the `svc` service:
+
+
+```go
+➜ datacenter cat internal/svc/servicecontext.go
+package svc
+
+import (
+ "context"
+ "datacenter/common/rpc/commonclient"
+ "datacenter/internal/config"
+ "datacenter/internal/middleware"
+ "datacenter/shared"
+ "datacenter/user/rpc/userclient"
+ "datacenter/votes/rpc/votesclient"
+ "fmt"
+ "net/http"
+ "time"
+
+ "github.com/tal-tech/go-zero/core/logx"
+ "github.com/tal-tech/go-zero/core/stores/cache"
+ "github.com/tal-tech/go-zero/core/stores/redis"
+ "github.com/tal-tech/go-zero/core/syncx"
+ "github.com/tal-tech/go-zero/rest"
+ "github.com/tal-tech/go-zero/zrpc"
+ "google.golang.org/grpc"
+)
+
+type ServiceContext struct {
+ Config config.Config
+ GreetMiddleware1 rest.Middleware
+ GreetMiddleware2 rest.Middleware
+ Usercheck rest.Middleware
+ UserRpc userclient.User // user service
+ CommonRpc commonclient.Common
+ VotesRpc votesclient.Votes
+ Cache cache.Cache
+ RedisConn *redis.Redis
+}
+
+func timeInterceptor(ctx context.Context, method string, req, reply interface{}, cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption) error {
+ stime := time.Now()
+ err := invoker(ctx, method, req, reply, cc, opts...)
+ if err != nil {
+ return err
+ }
+
+ fmt.Printf("timeout %s: %v\n", method, time.Now().Sub(stime))
+ return nil
+}
+func NewServiceContext(c config.Config) *ServiceContext {
+
+ ur := userclient.NewUser(zrpc.MustNewClient(c.UserRpc, zrpc.WithUnaryClientInterceptor(timeInterceptor)))
+ cr := commonclient.NewCommon(zrpc.MustNewClient(c.CommonRpc, zrpc.WithUnaryClientInterceptor(timeInterceptor)))
+ vr := votesclient.NewVotes(zrpc.MustNewClient(c.VotesRpc, zrpc.WithUnaryClientInterceptor(timeInterceptor)))
+ // cache
+ ca := cache.NewCache(c.CacheRedis, syncx.NewSharedCalls(), cache.NewCacheStat("dc"), shared.ErrNotFound)
+ rcon := redis.NewRedis(c.CacheRedis[0].Host, c.CacheRedis[0].Type, c.CacheRedis[0].Pass)
+ return &ServiceContext{
+ Config: c,
+ Usercheck: middleware.NewUserCheckMiddleware().Handle,
+ UserRpc: ur,
+ CommonRpc: cr,
+ VotesRpc: vr,
+ Cache: ca,
+ RedisConn: rcon,
+ }
+}
+```
+
+
+Now we can call it from the `logic` directory:
+
+
+```go
+cat internal/logic/common/appinfologic.go
+
+package logic
+
+import (
+ "context"
+
+ "datacenter/internal/svc"
+ "datacenter/internal/types"
+ "datacenter/shared"
+
+ "datacenter/common/model"
+ "datacenter/common/rpc/common"
+
+ "github.com/tal-tech/go-zero/core/logx"
+)
+
+type AppInfoLogic struct {
+ logx.Logger
+ ctx context.Context
+ svcCtx *svc.ServiceContext
+}
+
+func NewAppInfoLogic(ctx context.Context, svcCtx *svc.ServiceContext) AppInfoLogic {
+ return AppInfoLogic{
+ Logger: logx.WithContext(ctx),
+ ctx: ctx,
+ svcCtx: svcCtx,
+ }
+}
+
+func (l *AppInfoLogic) AppInfo(req types.Beid) (appconfig *common.BaseAppResp, err error) {
+
+	// initialize the named return value so the cache layer has a target to unmarshal into
+	appconfig = &common.BaseAppResp{}
+	err = l.svcCtx.Cache.GetCache(model.GetcacheBaseAppIdPrefix(req.Beid), appconfig)
+ if err != nil && err == shared.ErrNotFound {
+ appconfig, err = l.svcCtx.CommonRpc.GetBaseApp(l.ctx, &common.BaseAppReq{
+ Beid: req.Beid,
+ })
+ if err != nil {
+ return
+ }
+ err = l.svcCtx.Cache.SetCache(model.GetcacheBaseAppIdPrefix(req.Beid), appconfig)
+ }
+
+ return
+}
+```
+
+
+With that everything is wired up and little else needs to change. `UserRPC` and `VotesRPC` are similar, so I won't repeat them here.
+
+
+## Reviews
+
+
+`go-zero` really is a delight. Its `goctl` tool generates the entire code structure automatically, so we no longer fret over directory layout and organization, something that even years of architectural experience don't make easy. Conventions, concurrency, circuit breaking: you hardly touch them, and you can concentrate on implementing the business. The same goes for microservice plumbing such as service discovery; you don't have to care, because `go-zero` has already implemented it internally.
+
+
+I have written code for more than 10 years, mostly php, the better-known frameworks being laravel and thinkphp, which are basically modular monoliths. Implementing something like microservices on them is genuinely costly, but with go-zero, development is as simple as writing an api interface; service discovery and the rest need no attention at all, only the business does.
+
+
+A good language and framework, and the thinking beneath them, always aim at high efficiency and no overtime. I believe go-zero will improve the efficiency of you, your team, or your company. The go-zero author says they have a team dedicated to maintaining the framework; the purpose is obvious: to improve their own development efficiency, process, and standardization, which are exactly the criteria for productivity. When we run into a problem or a bug, the first thing to ask is not how to patch this bug, but whether something in our process is wrong and which step let the bug in. Finally, I believe `go-zero` can become the preferred framework for **microservice development**.
+
+
+Finally, talk about the pits encountered:
+
+
+- `grpc`
+
+
+
+This was my first time using `grpc`, and I ran into the problem that a field is not output at all when its value is empty:
+
+
+The conversion is handled by `jsonpb` in the official `grpc` libraries: it provides a marshaler struct that converts `protocol buffer` messages to JSON, and the conversion behavior can be configured per field.
+
+
+- Cross-domain issues
+
+
+
+I set it in `go-zero` and it seemed to have no effect; someone suggested setting it through nginx, but that didn't work either. In the end I simply forced everything onto one domain name, and will solve it properly when I have time.
+
+
+- `sqlx`
+
+
+
+go-zero's `sqlx` problem really took me a long time:
+
+
+> `time.Time` is a Go data structure while the database column is a timestamp. For example my field is deleted_at, with a database default of null. Inserting reported `Incorrect datetime value: '0000-00-00' for column 'deleted_at' at row 1`, and querying reported `deleted_at: unsupported Scan, storing driver.Value type <nil> into type *time.Time`.
+>
+> I decisively removed this field from the struct and added the `.omitempty` tag to the field, `db:".omitempty"`, which also seems to work.
+
+
+
+The second was `Conversion from collation utf8_general_ci into utf8mb4_unicode_ci`. The probable cause is that I like using emoji these days, and my mysql encoding could not represent them.
+
+
+- Database connection
+
+
+
+For `mysql` I went the classic route: modify the encoding in the configuration file, re-create the database with encoding `utf8mb4`, and set the collation to `utf8mb4_unicode_ci`.
+
+
+**That way all tables and string fields use this encoding. If you don't want all of them, you can set them individually; that's not the point here, since it's easy to do in Navicat with a few clicks.**
+
+
+Here comes the important point: Golang uses the `github.com/go-sql-driver/mysql` driver, and the `dsn` it connects to `mysql` with (I'm using gorm, so the dsn may differ a little from the native format, but no matter; just mind `charset` and `collation`) changes from
+`root:password@/name?parseTime=True&loc=Local&charset=utf8` to:
+`root:password@/name?parseTime=True&loc=Local&charset=utf8mb4&collation=utf8mb4_unicode_ci`
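The dsn change can be sketched as a tiny helper; `buildDSN` is a hypothetical name, but the query parameters are the ones shown above for the go-sql-driver/mysql dsn:

```go
package main

import (
	"fmt"
	"net/url"
)

// buildDSN assembles a mysql DSN with utf8mb4 so emoji survive the round trip.
func buildDSN(user, pass, host, db string) string {
	params := url.Values{}
	params.Set("parseTime", "True")
	params.Set("loc", "Local")
	params.Set("charset", "utf8mb4")
	params.Set("collation", "utf8mb4_unicode_ci")
	return fmt.Sprintf("%s:%s@tcp(%s)/%s?%s", user, pass, host, db, params.Encode())
}

func main() {
	fmt.Println(buildDSN("root", "password", "127.0.0.1:3306", "datacenter"))
}
```

Note that `url.Values.Encode` also percent-encodes values, which is why a timezone like `Asia/Shanghai` shows up as `Asia%2FShanghai` in the yaml earlier.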
diff --git a/go-zero.dev/en/dev-flow.md b/go-zero.dev/en/dev-flow.md
new file mode 100644
index 00000000..5aa5d506
--- /dev/null
+++ b/go-zero.dev/en/dev-flow.md
@@ -0,0 +1,28 @@
+# Development Flow
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+The development flow here is not the same concept as our actual business development flow; the definition here is limited to the use of go-zero, that is, the development details at the code level.
+
+## Development Flow
+* Goctl environment preparation [1]
+* Database Design
+* Business development
+* Create a project
+* Create service catalog
+* Create service type (api/rpc/rmq/job/script)
+* Write api and proto files
+* Code generation
+* Generate database access layer code model
+* Configuration config, yaml change
+* Resource dependency filling (ServiceContext)
+* Add middleware
+* Business code filling
+* Error handling
+
+> [!TIP]
+> [1] [goctl environment](concept-introduction.md)
+
+## Development Tools
+* Visual Studio Code
+* Goland (recommended)
\ No newline at end of file
diff --git a/go-zero.dev/en/dev-specification.md b/go-zero.dev/en/dev-specification.md
new file mode 100644
index 00000000..0802620f
--- /dev/null
+++ b/go-zero.dev/en/dev-specification.md
@@ -0,0 +1,27 @@
+# Development Rules
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+In actual business development, besides improving development efficiency, shortening development cycles, and keeping online services performant and highly available, good programming habits are also one of the basic qualities of a developer. In this chapter,
+
+we introduce the coding standards in go-zero. This chapter is optional and its content is for exchange and reference only. It covers the following subsections:
+
+* [Naming Rules](naming-spec.md)
+* [Route Rules](route-naming-spec.md)
+* [Coding Rules](coding-spec.md)
+
+## Three principles of development
+
+### Clarity
+The author quotes `Hal Abelson and Gerald Sussman`:
+> Programs must be written for people to read, and only incidentally for machines to execute
+
+### Simplicity
+> Simplicity is prerequisite for reliability
+
+`Edsger W. Dijkstra` holds that the prerequisite for reliability is simplicity. We have all run into this in real development: what is this code doing, what is it trying to accomplish? When developers cannot understand a piece of code, they cannot maintain it, which breeds complexity; the more complex a program, the harder it is to maintain, and the harder it is to maintain, the more complex it becomes. So the first thing to reach for when a program turns complicated is refactoring: redesign it and make it simple again.
+
+### Productivity
+In the go-zero team this topic is constantly emphasized. A developer's productivity is not how many lines of code you wrote or how many modules you finished, but how effectively you use limited time to get development done. Goctl was born precisely to increase productivity,
+so I very much agree with this development principle.
+
diff --git a/go-zero.dev/en/doc-contibute.md b/go-zero.dev/en/doc-contibute.md
new file mode 100644
index 00000000..bff5f63f
--- /dev/null
+++ b/go-zero.dev/en/doc-contibute.md
@@ -0,0 +1,54 @@
+# Document Contribute
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+## How to contribute documents?
+Click the "Edit this page" button at the top to open the corresponding file in the source repository, then submit your modified (or new) document as a pr.
+Once we receive the pr we review it, and as soon as the review passes the document is updated.
+
+![doc-edit](./resource/doc-edit.png)
+
+## What documents can I contribute?
+* Documentation errors
+* The documentation is not standardized and incomplete
+* Go-zero application practice and experience
+* Component Center
+
+## How soon will the document be updated after the document pr is passed?
+After a pr is accepted, github action automatically builds the gitbook and releases it, so you can view the updated document 1-2 minutes after the github action succeeds.
+
+## Documentation contribution notes
+* For corrections and improvements to an existing document, edit the original md file directly
+* The newly added component documents need to be typeset and easy to read, and the component documents need to be placed in the [Components](extended-reading.md) subdirectory
+* Go-zero application practice sharing can be directly placed in the [Development Practice](practise.md) subdirectory
+
+## Directory structure specification
+* The directory structure should not be too deep, preferably no more than 3 levels
+* Component documents need to be placed under [Component Center](component-center.md), such as
+ * [Development Practice](practise.md)
+ * [logx](logx.md)
+ * [bloom](bloom.md)
+ * [executors](executors.md)
+ * Your document directory name
+* Application practice needs to be attributed to [Development Practice](practise.md), such as
+ * [Development Practice](practise.md)
+ * [How do I use go-zero to implement a Middle Ground System](datacenter.md)
+ * [Stream data processing tool](stream.md)
+ * [Summary of online communication issues on October 3](online-exchange.md)
+ * Your document directory name
+
+## Development Practice Document Template
+ ```markdown
+ # Title
+
+ > Author:The author name
+ >
+ > Original link: The original link
+
+ some markdown content
+ ```
+
+# Guess you want
+* [Join Us](join-us.md)
+* [Github Pull request](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/proposing-changes-to-your-work-with-pull-requests)
+
diff --git a/go-zero.dev/en/error-handle.md b/go-zero.dev/en/error-handle.md
new file mode 100644
index 00000000..9f159afc
--- /dev/null
+++ b/go-zero.dev/en/error-handle.md
@@ -0,0 +1,179 @@
+# Error Handling
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+Error handling is an indispensable part of a service. In normal business development, any http status code outside the `2xx` range can be regarded as an http request error,
+accompanied by an error message in the response, but those messages come back as plain text. Besides that, I also define some business errors, and the common practice is to describe
+the business processing result through the two fields `code` and `msg`, in a json response body.
+
+## Business error response format
+* Business processing is normal
+ ```json
+ {
+ "code": 0,
+ "msg": "successful",
+ "data": {
+ ....
+ }
+ }
+ ```
+
+* Business processing exception
+ ```json
+ {
+ "code": 10001,
+ "msg": "something wrong"
+ }
+ ```
+
+## login of user api
+Previously, the login logic simply returned an error when the username did not exist. Let's log in with a nonexistent username and see the effect.
+
+```shell
+curl -X POST \
+ http://127.0.0.1:8888/user/login \
+ -H 'content-type: application/json' \
+ -d '{
+ "username":"1",
+ "password":"123456"
+}'
+```
+```text
+HTTP/1.1 400 Bad Request
+Content-Type: text/plain; charset=utf-8
+X-Content-Type-Options: nosniff
+Date: Tue, 09 Feb 2021 06:38:42 GMT
+Content-Length: 19
+
+Username does not exist
+```
+Next we will return it in json format.
+
+## Custom error
+* First add a `baseerror.go` file in common and fill in the code
+ ```shell
+ $ cd common
+ $ mkdir errorx&&cd errorx
+ $ vim baseerror.go
+ ```
+ ```go
+ package errorx
+
+ const defaultCode = 1001
+
+ type CodeError struct {
+ Code int `json:"code"`
+ Msg string `json:"msg"`
+ }
+
+ type CodeErrorResponse struct {
+ Code int `json:"code"`
+ Msg string `json:"msg"`
+ }
+
+ func NewCodeError(code int, msg string) error {
+ return &CodeError{Code: code, Msg: msg}
+ }
+
+ func NewDefaultError(msg string) error {
+ return NewCodeError(defaultCode, msg)
+ }
+
+ func (e *CodeError) Error() string {
+ return e.Msg
+ }
+
+ func (e *CodeError) Data() *CodeErrorResponse {
+ return &CodeErrorResponse{
+ Code: e.Code,
+ Msg: e.Msg,
+ }
+ }
+
+ ```
+
+* Replace errors in login logic with CodeError custom errors
+ ```go
+ if len(strings.TrimSpace(req.Username)) == 0 || len(strings.TrimSpace(req.Password)) == 0 {
+ return nil, errorx.NewDefaultError("Invalid parameter")
+ }
+
+ userInfo, err := l.svcCtx.UserModel.FindOneByNumber(req.Username)
+ switch err {
+ case nil:
+ case model.ErrNotFound:
+ return nil, errorx.NewDefaultError("Username does not exist")
+ default:
+ return nil, err
+ }
+
+ if userInfo.Password != req.Password {
+ return nil, errorx.NewDefaultError("User password is incorrect")
+ }
+
+ now := time.Now().Unix()
+ accessExpire := l.svcCtx.Config.Auth.AccessExpire
+ jwtToken, err := l.getJwtToken(l.svcCtx.Config.Auth.AccessSecret, now, l.svcCtx.Config.Auth.AccessExpire, userInfo.Id)
+ if err != nil {
+ return nil, err
+ }
+
+ return &types.LoginReply{
+ Id: userInfo.Id,
+ Name: userInfo.Name,
+ Gender: userInfo.Gender,
+ AccessToken: jwtToken,
+ AccessExpire: now + accessExpire,
+ RefreshAfter: now + accessExpire/2,
+ }, nil
+ ```
+
+* Use custom errors
+ ```shell
+ $ vim service/user/cmd/api/user.go
+ ```
+ ```go
+ func main() {
+ flag.Parse()
+
+ var c config.Config
+ conf.MustLoad(*configFile, &c)
+
+ ctx := svc.NewServiceContext(c)
+ server := rest.MustNewServer(c.RestConf)
+ defer server.Stop()
+
+ handler.RegisterHandlers(server, ctx)
+
+ // Custom error
+ httpx.SetErrorHandler(func(err error) (int, interface{}) {
+ switch e := err.(type) {
+ case *errorx.CodeError:
+ return http.StatusOK, e.Data()
+ default:
+ return http.StatusInternalServerError, nil
+ }
+ })
+
+ fmt.Printf("Starting server at %s:%d...\n", c.Host, c.Port)
+ server.Start()
+ }
+ ```
+* Restart the service and verify
+ ```shell
+ $ curl -i -X POST \
+ http://127.0.0.1:8888/user/login \
+ -H 'content-type: application/json' \
+ -d '{
+ "username":"1",
+ "password":"123456"
+ }'
+ ```
+ ```text
+ HTTP/1.1 200 OK
+ Content-Type: application/json
+ Date: Tue, 09 Feb 2021 06:47:29 GMT
+ Content-Length: 40
+
+ {"code":1001,"msg":"Username does not exist"}
+ ```
diff --git a/go-zero.dev/en/error.md b/go-zero.dev/en/error.md
new file mode 100644
index 00000000..68c11a75
--- /dev/null
+++ b/go-zero.dev/en/error.md
@@ -0,0 +1,48 @@
+# Error
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+## Error reporting on Windows
+```text
+A required privilege is not held by the client.
+```
+Solution: run goctl as administrator ("Run as administrator").
+
+## grpc error
+* Case 1
+ ```text
+ protoc-gen-go: unable to determine Go import path for "greet.proto"
+
+ Please specify either:
+ • a "go_package" option in the .proto source file, or
+ • a "M" argument on the command line.
+
+ See https://developers.google.com/protocol-buffers/docs/reference/go-generated#package for more information.
+
+ --go_out: protoc-gen-go: Plugin failed with status code 1.
+
+ ```
+ Solution:
+ ```text
+ go get -u github.com/golang/protobuf/protoc-gen-go@v1.3.2
+ ```
+
+## protoc-gen-go installation failed
+```text
+go get github.com/golang/protobuf/protoc-gen-go: module github.com/golang/protobuf/protoc-gen-go: Get "https://proxy.golang.org/github.com/golang/protobuf/protoc-gen-go/@v/list": dial tcp 216.58.200.49:443: i/o timeout
+```
+
+Please make sure `GOPROXY` has been set, see [Go Module Configuration](gomod-config.md) for GOPROXY setting
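For example, with Go 1.13+ you can set it via `go env -w`; the proxy address below is just one common mirror, use whichever fits your network:

```shell
go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct
# verify the setting
go env GOPROXY
```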
+
+## api service failed to start
+```text
+error: config file etc/user-api.yaml, error: type mismatch for field xx
+```
+
+Please confirm that the configuration items in the `user-api.yaml` configuration file have all been filled in, and if values are present, check whether the yaml file conforms to the yaml format.
+
+## command not found: goctl
+```
+command not found: goctl
+```
+Please make sure that goctl has been installed, and that it has been added to your PATH environment variable.
\ No newline at end of file
diff --git a/go-zero.dev/en/executors.md b/go-zero.dev/en/executors.md
new file mode 100644
index 00000000..21142091
--- /dev/null
+++ b/go-zero.dev/en/executors.md
@@ -0,0 +1,327 @@
+# executors
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+In `go-zero`, `executors` act as a task pool, buffering multiple tasks and processing them in batches, such as large-batch `clickhouse` inserts and batched `sql insert`s. You can also see `executors` in `go-queue` [in `queue`, `ChunkExecutor` is used to limit the byte size of submitted tasks].
+
+So when you have the following requirements, you can use this component:
+
+- Submit tasks in batches
+- Buffer part of tasks and submit lazily
+- Delay task submission
+
+
+
+Before explaining it in detail, let's give a rough overview:
+![c42c34e8d33d48ec8a63e56feeae882a](./resource/c42c34e8d33d48ec8a63e56feeae882a.png)
+## Interface design
+
+
+Under the `executors` package, there are the following `executors`:
+
+| Name | Trigger condition |
+| --- | --- |
+| `bulkexecutor` | Submits when `maxTasks` [the maximum number of tasks] is reached |
+| `chunkexecutor` | Submits when `maxChunkSize` [the maximum number of bytes] is reached |
+| `periodicalexecutor` | `basic executor` |
+| `delayexecutor` | Delays execution of the passed `fn()` |
+| `lessexecutor` | Executes the passed `fn()` at most once within a given interval |
+
+
+
+You will see that except for the special functions of `delay` and `less`, the other three are all combinations of `executor` + `container`:
+
+
+```go
+func NewBulkExecutor(execute Execute, opts ...BulkOption) *BulkExecutor {
+	// Functional options pattern: it appears in many places in go-zero where there are multiple configuration items
+	// https://halls-of-valhalla.org/beta/articles/functional-options-pattern-in-go,54/
+ options := newBulkOptions()
+ for _, opt := range opts {
+ opt(&options)
+ }
+	// 1. task container: [execute: the function that actually performs the work] [maxTasks: the execution threshold]
+ container := &bulkContainer{
+ execute: execute,
+ maxTasks: options.cachedTasks,
+ }
+	// 2. The underlying BulkExecutor relies on the PeriodicalExecutor
+ executor := &BulkExecutor{
+ executor: NewPeriodicalExecutor(options.flushInterval, container),
+ container: container,
+ }
+
+ return executor
+}
+```
+
+
+And this `container` is an `interface`:
+
+
+```go
+TaskContainer interface {
+ // Add task to container
+ AddTask(task interface{}) bool
+	// Actually executes the incoming execute func()
+ Execute(tasks interface{})
+	// When the threshold is reached, remove all tasks from the container and pass them through the channel to execute func()
+ RemoveAll() interface{}
+}
+```
+
+
+This shows the dependency between:
+
+
+- `bulkexecutor`:`periodicalexecutor` + `bulkContainer`
+- `chunkexecutor`:`periodicalexecutor` + `chunkContainer`
+
+
+> [!TIP]
+> So if you want to complete your own `executor`, you can implement these three interfaces of `container`, and then combine with `periodicalexecutor`.
+
+So back to the picture above: our focus is the `periodicalexecutor`. Let's see how it is designed.
+
+
+## How to use
+
+
+First look at how to use this component in business:
+
+There is a timed service to perform data synchronization from `mysql` to `clickhouse` at a fixed time every day:
+
+
+```go
+type DailyTask struct {
+ ckGroup *clickhousex.Cluster
+ insertExecutor *executors.BulkExecutor
+ mysqlConn sqlx.SqlConn
+}
+```
+
+
+Initialize `bulkExecutor`:
+
+```go
+func (dts *DailyTask) Init() {
+ // insertIntoCk() is the real insert execution function [requires developers to write specific business logic by themselves]
+ dts.insertExecutor = executors.NewBulkExecutor(
+ dts.insertIntoCk,
+ executors.WithBulkInterval(time.Second*3), // The container will automatically refresh the task to execute every 3s.
+ executors.WithBulkTasks(10240), // The maximum number of tasks for the container. Generally set to a power of 2
+ )
+}
+```
+
+> [!TIP]
+> An additional note: `clickhouse` is well suited to large batch insertion, because batched inserts are very fast and make full use of clickhouse.
+
+
+Main business logic preparation:
+
+
+```go
+func (dts *DailyTask) insertNewData(ch chan interface{}, sqlFromDb *model.Task) error {
+	for item := range ch {
+		r, ok := item.(*model.Task)
+		if !ok {
+			continue
+		}
+		r.Tag = sqlFromDb.Tag
+		r.TagId = sqlFromDb.Id
+		r.InsertId = genInsertId()
+		r.ToRedis = toRedis == constant.INCACHED
+		r.UpdateWay = sqlFromDb.UpdateWay
+		// 1. Add the task
+		if err := dts.insertExecutor.Add(r); err != nil {
+			logx.Error(err)
+		}
+	}
+	// 2. Flush the task container
+	dts.insertExecutor.Flush()
+	// 3. Wait for all tasks to finish
+	dts.insertExecutor.Wait()
+	return nil
+}
+```
+
+> [!TIP]
+> You may be wondering why `Flush(), Wait()` is needed, and I will analyze it through the source code later.
+
+There are 3 steps to use as a whole:
+
+
+- `Add()`: add a task
+- `Flush()`: flush the remaining tasks in the `container`
+- `Wait()`: wait for all tasks to complete
+
+
+
+## Source code analysis
+
+> [!TIP]
+> The main analysis here is `periodicalexecutor`, because the other two commonly used `executors` rely on it.
+
+
+
+### Initialization
+
+```go
+func New...(interval time.Duration, container TaskContainer) *PeriodicalExecutor {
+ executor := &PeriodicalExecutor{
+ commander: make(chan interface{}, 1),
+ interval: interval,
+ container: container,
+ confirmChan: make(chan lang.PlaceholderType),
+ newTicker: func(d time.Duration) timex.Ticker {
+ return timex.NewTicker(interval)
+ },
+ }
+ ...
+ return executor
+}
+```
+
+
+- `commander`: the channel through which `tasks` are passed
+- `container`: temporarily stores the tasks passed to `Add()`
+- `confirmChan`: blocks `Add()`; at the start of each execution round, `executeTasks()` releases the block
+- `ticker`: prevents `Add()` from blocking indefinitely, guaranteeing a periodic chance to flush the temporarily stored tasks in time
+
+
+
+### Add()
+After initialization, the first step in the business logic is to add a task to the `executor`:
+
+```go
+func (pe *PeriodicalExecutor) Add(task interface{}) {
+ if vals, ok := pe.addAndCheck(task); ok {
+ pe.commander <- vals
+ <-pe.confirmChan
+ }
+}
+
+func (pe *PeriodicalExecutor) addAndCheck(task interface{}) (interface{}, bool) {
+ pe.lock.Lock()
+ defer func() {
+ // default false
+ var start bool
+ if !pe.guarded {
+ // backgroundFlush() will reset guarded
+ pe.guarded = true
+ start = true
+ }
+ pe.lock.Unlock()
+		// backgroundFlush() inside the if runs when the first task is added: a background goroutine that flushes tasks
+ if start {
+ pe.backgroundFlush()
+ }
+ }()
+ // Control maxTask, >=maxTask will pop and return tasks in the container
+ if pe.container.AddTask(task) {
+ return pe.container.RemoveAll(), true
+ }
+
+ return nil, false
+}
+```
+
+In `addAndCheck()`, `AddTask()` enforces the maximum number of tasks. Once the limit is reached, `RemoveAll()` pops the tasks buffered in the `container` and passes them to the `commander`, where a goroutine loop reads them and then executes the tasks.
+
+### backgroundFlush()
+Start a background coroutine, and constantly refresh the tasks in the `container`:
+
+```go
+func (pe *PeriodicalExecutor) backgroundFlush() {
+ // Encapsulate go func(){}
+ threading.GoSafe(func() {
+ ticker := pe.newTicker(pe.interval)
+ defer ticker.Stop()
+
+ var commanded bool
+ last := timex.Now()
+ for {
+ select {
+ // Get []tasks from channel
+ case vals := <-pe.commander:
+ commanded = true
+				// Essentially: wg.Add(1)
+ pe.enterExecution()
+				// Release the block on Add(); the temporary store is also empty at this point, so new tasks can come in
+ pe.confirmChan <- lang.Placeholder
+ // Really execute task logic
+ pe.executeTasks(vals)
+ last = timex.Now()
+ case <-ticker.Chan():
+ if commanded {
+					// select picks ready cases at random: if both cases were ready and the case above already ran, reset the flag and skip this round
+					// https://draveness.me/golang/docs/part2-foundation/ch05-keyword/golang-select/
+ commanded = false
+ } else if pe.Flush() {
+ // The refresh is complete and the timer is cleared. The temporary storage area is empty, start the next timed refresh
+ last = timex.Now()
+				} else if timex.Since(last) > pe.interval*idleRound {
+					// maxTasks was never reached, Flush() returned false, and too much time has passed since `last`: exit this goroutine
+					// Only after guarded is reset here can a new backgroundFlush() background goroutine be started
+					pe.guarded = false
+					// Flush once more on the way out to avoid missing tasks
+					pe.Flush()
+					return
+ }
+ }
+ }
+ })
+}
+```
+
+Overall two processes:
+
+- `commander` receives the tasks passed by `RemoveAll()`, executes them, and releases the block so that `Add()` can continue
+- When the `ticker` fires, if the first flow did not run, it automatically calls `Flush()` and executes the tasks
+
+### Wait()
+In `backgroundFlush()`, a function is mentioned: `enterExecution()`:
+
+```go
+func (pe *PeriodicalExecutor) enterExecution() {
+ pe.wgBarrier.Guard(func() {
+ pe.waitGroup.Add(1)
+ })
+}
+
+func (pe *PeriodicalExecutor) Wait() {
+ pe.wgBarrier.Guard(func() {
+ pe.waitGroup.Wait()
+ })
+}
+```
+Pairing these up, you can see why `dts.insertExecutor.Wait()` is needed at the end: of course, you have to wait for all `goroutine tasks` to complete.
+
+## Thinking
+While reading the source code, I thought about some other design choices. Do you have similar questions:
+
+- Analyzing `executors`, you will find `lock`s in many places
+
+> [!TIP]
+> There are race conditions that `go test -race` can detect; locking is used to avoid them
+
+- Analyzing `confirmChan`, you will find it was only introduced in this [commit](https://github.com/zeromicro/go-zero/commit/9d9399ad1014c171cc9bd9c87f78b5d2ac238ce4). Why is it designed like this?
+
+> It used to be: `wg.Add(1)` was done inside `executeTasks()`; now it is: `wg.Add(1)` first, then release the `confirmChan` block.
+> If `executor func` blocks while `Add task` keeps going without blocking, execution may reach `Executor.Wait()` quickly; `wg.Wait()` would then run before `wg.Add(1)`, which causes a `panic`.
+
+For details, see the latest version of `TestPeriodicalExecutor_WaitFast()`; you can run it against the pre-commit version to reproduce the issue.
+
+## Summary
+There are a few more `executors` left unanalyzed; I leave those for you to explore in the source code.
+
+In short, the overall design:
+
+- Follow interface-oriented design
+- Flexible use of concurrent tools such as `channel` and `waitgroup`
+- The combination of execution unit + storage unit
+
+There are many useful component tools in `go-zero`. Good use of tools is very helpful to improve service performance and development efficiency. I hope this article can bring you some gains.
diff --git a/go-zero.dev/en/extended-reading.md b/go-zero.dev/en/extended-reading.md
new file mode 100644
index 00000000..ff491188
--- /dev/null
+++ b/go-zero.dev/en/extended-reading.md
@@ -0,0 +1,18 @@
+# Components
+
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+The component center will cover all the components in the [go-zero](https://github.com/zeromicro/go-zero) core folder,
+so it will be fairly large. This resource will be continuously updated, and everyone is welcome to contribute to the documentation. This section contains the following directories (in order of document update time):
+
+* [shorturl](shorturl-en.md)
+* [logx](logx.md)
+* [bloom](bloom.md)
+* [executors](executors.md)
+* [fx](fx.md)
+* [mysql](mysql.md)
+* [redis-lock](redis-lock.md)
+* [periodlimit](periodlimit.md)
+* [tokenlimit](tokenlimit.md)
+* [TimingWheel](timing-wheel.md)
diff --git a/go-zero.dev/en/framework-design.md b/go-zero.dev/en/framework-design.md
new file mode 100644
index 00000000..1fa8556b
--- /dev/null
+++ b/go-zero.dev/en/framework-design.md
@@ -0,0 +1,13 @@
+# Framework Design
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+![architechture](./resource/architechture.svg)
+
+This section explains the design of the go-zero framework from the perspectives of its design philosophy and the best-practice directory layout of go-zero services. It contains the following subsections:
+
+* [Go-Zero Design](go-zero-design.md)
+* [Go-Zero Features](go-zero-features.md)
+* [API IDL](api-grammar.md)
+* [API Directory Structure](api-dir.md)
+* [RPC Directory Structure](rpc-dir.md)
diff --git a/go-zero.dev/en/fx.md b/go-zero.dev/en/fx.md
new file mode 100644
index 00000000..56a7d3e1
--- /dev/null
+++ b/go-zero.dev/en/fx.md
@@ -0,0 +1,238 @@
+# fx
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+`fx` is a complete stream processing component.
+Like `MapReduce`, `fx` has a concurrent processing function: `Parallel(fn, options)`. But it is more than concurrent processing: with `From(chan)`, `Map(fn)`, `Filter(fn)`, `Reduce(fn)`, and so on, it reads from a data source into a stream, processes the stream data, and finally aggregates the stream data. A bit like Java's Lambda/Stream style: if you have a Java background, the basic design will look familiar.
+
+## Overall API
+Let's get an overview of how `fx` is constructed as a whole:
+![dc500acd526d40aabfe4f53cf5bd180a_tplv-k3u1fbpfcp-zoom-1.png](./resource/dc500acd526d40aabfe4f53cf5bd180a_tplv-k3u1fbpfcp-zoom-1.png)
+
+The marked part is the most important part of the entire `fx`:
+
+1. From APIs such as `From(fn)`, a data stream `Stream` is generated
+2. A collection of APIs for converting, aggregating, and evaluating `Stream`
+
+
+So list the currently supported `Stream API`:
+
+| API | Function |
+| --- | --- |
+| `Distinct(fn)` | Select a specific item type in fn and de-duplicate it |
+| `Filter(fn, option)` | fn specifies specific rules, and the `element` that meets the rules is passed to the next `stream` |
+| `Group(fn)` | According to fn, the elements in `stream` are divided into different groups |
+| `Head(num)` | Take out the first num elements in `stream` and generate a new `stream` |
+| `Map(fn, option)` | Convert each ele to another corresponding ele and pass it to the next `stream` |
+| `Merge()` | Combine all `ele` into one `slice` and generate a new `stream` |
+| `Reverse()` | Reverse the element in `stream`. [Use double pointer] |
+| `Sort(fn)` | Sort elements in `stream` according to fn |
+| `Tail(num)` | Take out the last num elements of `stream` to generate a new `stream`. [Using a doubly linked list] |
+| `Walk(fn, option)` | Apply fn to every element of `source`. Generate a new `stream` |
+
+
+APIs that no longer generate a new `stream`, performing the final evaluation instead:
+
+| API | Function |
+| --- | --- |
+| `ForAll(fn)` | Process `stream` according to fn, and no longer generate `stream` [evaluation operation] |
+| `ForEach(fn)` | Perform fn [evaluation operation] on all elements in `stream` |
+| `Parallel(fn, option)` | Concurrently apply the given fn and the given number of workers to each `element`[evaluation operation] |
+| `Reduce(fn)` | Directly process `stream` [evaluation operation] |
+| `Done()` | Do nothing, wait for all operations to complete |
+
+
+
+## How to use?
+
+```go
+result := make(map[string]string)
+fx.From(func(source chan<- interface{}) {
+ for _, item := range data {
+ source <- item
+ }
+}).Walk(func(item interface{}, pipe chan<- interface{}) {
+ each := item.(*model.ClassData)
+
+ class, err := l.rpcLogic.GetClassInfo()
+ if err != nil {
+ l.Errorf("get class %s failed: %s", each.ClassId, err.Error())
+ return
+ }
+
+ students, err := l.rpcLogic.GetUsersInfo(class.ClassId)
+ if err != nil {
+ l.Errorf("get students %s failed: %s", each.ClassId, err.Error())
+ return
+ }
+
+	pipe <- &classObj{
+		classId:    each.ClassId,
+		studentIds: students,
+	}
+}).ForEach(func(item interface{}) {
+ o := item.(*classObj)
+ result[o.classId] = o.studentIds
+})
+```
+
+
+1. `From()` generates `stream` from a `slice`
+2. `Walk()` receives a `stream`, transforms and reorganizes each `ele` in the stream, and generates a new `stream`
+3. Finally, the `stream` output (`fmt.Println`), storage (`map,slice`), and persistence (`db operation`) are performed by the `evaluation operation`
+
+
+
+## Briefly analyze
+
+Function naming in `fx` is semantic: developers only need to know what kind of transformation their business logic requires and call the matching function.
+
+
+So here is a brief analysis of a few more typical functions.
+
+### Walk()
+
+`Walk()` serves as the underlying implementation for multiple functions throughout `fx`, such as `Map()` and `Filter()`.
+
+So the essence is: `Walk()` is responsible for concurrently applying the passed function to each `ele` of the **input stream** and generating a new `stream`.
+
+Following the source code, it splits into two sub-functions: one with a custom `worker` count, and one with the default behavior of no worker limit:
+
+```go
+// Custom workers
+func (p Stream) walkLimited(fn WalkFunc, option *rxOptions) Stream {
+ pipe := make(chan interface{}, option.workers)
+
+ go func() {
+ var wg sync.WaitGroup
+ // channel<- If the set number of workers is reached, the channel is blocked, so as to control the number of concurrency.
+ // Simple goroutine pool
+ pool := make(chan lang.PlaceholderType, option.workers)
+
+ for {
+ // Each for loop will open a goroutine. If it reaches the number of workers, it blocks
+ pool <- lang.Placeholder
+ item, ok := <-p.source
+ if !ok {
+ <-pool
+ break
+ }
+ // Use WaitGroup to ensure the integrity of task completion
+ wg.Add(1)
+ threading.GoSafe(func() {
+ defer func() {
+ wg.Done()
+ <-pool
+ }()
+
+ fn(item, pipe)
+ })
+ }
+
+ wg.Wait()
+ close(pipe)
+ }()
+
+ return Range(pipe)
+}
+```
+
+
+- A `buffered channel` serves as a semaphore to limit the number of concurrent workers
+- A `waitgroup` ensures that all tasks complete
+
+The other variant, `walkUnlimited()`, also uses a `waitgroup` for completion tracking; since there is no custom concurrency limit, no extra `channel` is needed to throttle concurrency.
+
+
+### Tail()
+
+This one is introduced mainly because the `ring` is a doubly linked list, and its simple algorithm is quite interesting.
+
+```go
+func (p Stream) Tail(n int64) Stream {
+ source := make(chan interface{})
+
+ go func() {
+ ring := collection.NewRing(int(n))
+ // Sequence insertion, the order of the source is consistent with the order of the ring
+ for item := range p.source {
+ ring.Add(item)
+ }
+ // Take out all the items in the ring
+ for _, item := range ring.Take() {
+ source <- item
+ }
+ close(source)
+ }()
+
+ return Range(source)
+}
+```
+
+
+As for why `Tail()` can take out the last n elements of the source, I leave the details for you to work out. Here is my understanding:
+![f93c621571074e44a2d403aa25e7db6f_tplv-k3u1fbpfcp-zoom-1.png](./resource/f93c621571074e44a2d403aa25e7db6f_tplv-k3u1fbpfcp-zoom-1.png)
+
+> [!TIP]
+> Suppose there is the following scenario,`Tail(5)`
+> - `stream size` :7
+> - `ring size`:5
+
+
+
+Here you can unroll the circular linked list into a line [**loop-to-line**]: split along the axis of symmetry over the full length, fold the extra elements over, and the elements that follow are exactly the part `Tail(5)` needs.
+
+
+> [!TIP]
+> The graph is used here to make things clearer, but you should also look at the code and test the algorithm yourself.
+
+
+
+### Stream Transform Design
+
+
+Analyzing the entire `fx`, you will find that the overall design follows a design template:
+
+
+```go
+func (p Stream) Transform(fn func(item interface{}) interface{}) Stream {
+ // make channel
+ source := make(chan interface{})
+ // goroutine worker
+ go func() {
+ // transform
+ for item := range p.source {
+ ...
+ source <- item
+ ...
+ }
+ ...
+		// Close the channel when processing is done; downstream can still drain this stream. Prevents goroutine leaks
+ close(source)
+ }()
+ // channel -> stream
+ return Range(source)
+}
+```
+
+
+- A `channel` serves as the container of the stream
+- A `goroutine` is opened to convert and aggregate the `source`, sending results into the `channel`
+- When processing finishes, `close(outputStream)`
+- `channel -> stream`
+
+
+
+## Summary
+
+This concludes the basic introduction of `fx`. If you are interested in the other API implementations, you can follow the API list above and read them one by one.
+
+It is also recommended that you take a look at Java's `Stream` API to gain a deeper understanding of this style of stream calls.
+
+go-zero has many other useful component tools; using them well greatly helps improve service performance and development efficiency. I hope this article brings you some gains.
+
+
+## Reference
+- [go-zero](https://github.com/zeromicro/go-zero)
+- [Java Stream](https://colobu.com/2016/03/02/Java-Stream/)
+- [Stream API in Java 8](https://mp.weixin.qq.com/s/xa98C-QUHRUK0BhWLzI3XQ)
diff --git a/go-zero.dev/en/go-queue.md b/go-zero.dev/en/go-queue.md
new file mode 100644
index 00000000..e36956b4
--- /dev/null
+++ b/go-zero.dev/en/go-queue.md
@@ -0,0 +1,305 @@
+# Queue
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+In daily development we have many asynchronous, batch, timed, and delayed tasks to process. go-zero provides go-queue, which is recommended for these; go-queue itself is also built on go-zero. It has two modes:
+
+ - dq: depends on beanstalkd. Distributed and persistent, with delay and timing settings; jobs survive shutdown and restart, so messages are not lost. It is very simple to use, and go-queue uses redis setnx to ensure each message is consumed only once. Mainly used for daily tasks.
+ - kq: depends on Kafka. The well-known Kafka needs no further introduction; its usage scenario is mainly message queuing.
+
+ We mainly talk about dq here; kq is used the same way, only the underlying dependency differs. If you have not used beanstalkd before, google it first; it is quite easy to use.
+
+
+
+ etc/job.yaml : Configuration file
+
+ ```yaml
+ Name: job
+
+ Log:
+ ServiceName: job
+ Level: info
+
+ # dq depends on Beanstalks, redis, Beanstalks configuration, redis configuration
+ DqConf:
+ Beanstalks:
+ - Endpoint: 127.0.0.1:7771
+ Tube: tube1
+ - Endpoint: 127.0.0.1:7772
+ Tube: tube2
+ Redis:
+ Host: 127.0.0.1:6379
+ Type: node
+ ```
+
+
+
+ Internal/config/config.go: Parse dq corresponding `etc/*.yaml` configuration
+
+ ```go
+ /**
+ * @Description Configuration file
+ * @Author Mikael
+ * @Email 13247629622@163.com
+ * @Date 2021/1/18 12:05
+ * @Version 1.0
+ **/
+
+ package config
+
+ import (
+ "github.com/tal-tech/go-queue/dq"
+ "github.com/tal-tech/go-zero/core/service"
+
+ )
+
+ type Config struct {
+ service.ServiceConf
+ DqConf dq.DqConf
+ }
+
+ ```
+
+
+
+ Handler/router.go : Responsible for multi-task registration
+
+ ```go
+ /**
+ * @Description Register job
+ * @Author Mikael
+ * @Email 13247629622@163.com
+ * @Date 2021/1/18 12:05
+ * @Version 1.0
+ **/
+ package handler
+
+ import (
+ "context"
+ "github.com/tal-tech/go-zero/core/service"
+ "job/internal/logic"
+ "job/internal/svc"
+ )
+
+ func RegisterJob(serverCtx *svc.ServiceContext,group *service.ServiceGroup) {
+
+ group.Add(logic.NewProducerLogic(context.Background(),serverCtx))
+ group.Add(logic.NewConsumerLogic(context.Background(),serverCtx))
+
+ group.Start()
+
+ }
+ ```
+
+
+
+ ProducerLogic: One of the job business logic
+
+ ```go
+ /**
+ * @Description Producer task
+ * @Author Mikael
+ * @Email 13247629622@163.com
+ * @Date 2021/1/18 12:05
+ * @Version 1.0
+ **/
+ package logic
+
+ import (
+ "context"
+ "github.com/tal-tech/go-queue/dq"
+ "github.com/tal-tech/go-zero/core/logx"
+ "github.com/tal-tech/go-zero/core/threading"
+ "job/internal/svc"
+ "strconv"
+ "time"
+ )
+
+
+
+ type Producer struct {
+ ctx context.Context
+ svcCtx *svc.ServiceContext
+ logx.Logger
+ }
+
+ func NewProducerLogic(ctx context.Context, svcCtx *svc.ServiceContext) *Producer {
+ return &Producer{
+ ctx: ctx,
+ svcCtx: svcCtx,
+ Logger: logx.WithContext(ctx),
+ }
+ }
+
+ func (l *Producer)Start() {
+
+ logx.Infof("start Producer \n")
+ threading.GoSafe(func() {
+ producer := dq.NewProducer([]dq.Beanstalk{
+ {
+ Endpoint: "localhost:7771",
+ Tube: "tube1",
+ },
+ {
+ Endpoint: "localhost:7772",
+ Tube: "tube2",
+ },
+ })
+ for i := 1000; i < 1005; i++ {
+ _, err := producer.Delay([]byte(strconv.Itoa(i)), time.Second * 1)
+ if err != nil {
+ logx.Error(err)
+ }
+ }
+ })
+ }
+
+ func (l *Producer)Stop() {
+ logx.Infof("stop Producer \n")
+ }
+
+
+ ```
+
+ Another job business logic
+
+ ```go
+ /**
+ * @Description Consumer task
+ * @Author Mikael
+ * @Email 13247629622@163.com
+ * @Date 2021/1/18 12:05
+ * @Version 1.0
+ **/
+ package logic
+
+ import (
+ "context"
+ "github.com/tal-tech/go-zero/core/logx"
+ "github.com/tal-tech/go-zero/core/threading"
+ "job/internal/svc"
+ )
+
+ type Consumer struct {
+ ctx context.Context
+ svcCtx *svc.ServiceContext
+ logx.Logger
+ }
+
+ func NewConsumerLogic(ctx context.Context, svcCtx *svc.ServiceContext) *Consumer {
+ return &Consumer{
+ ctx: ctx,
+ svcCtx: svcCtx,
+ Logger: logx.WithContext(ctx),
+ }
+ }
+
+ func (l *Consumer)Start() {
+ logx.Infof("start consumer \n")
+
+ threading.GoSafe(func() {
+ l.svcCtx.Consumer.Consume(func(body []byte) {
+ logx.Infof("consumer job %s \n" ,string(body))
+ })
+ })
+ }
+
+ func (l *Consumer)Stop() {
+ logx.Infof("stop consumer \n")
+ }
+ ```
+
+
+
+ svc/servicecontext.go
+
+ ```go
+ /**
+ * @Description Configuration
+ * @Author Mikael
+ * @Email 13247629622@163.com
+ * @Date 2021/1/18 12:05
+ * @Version 1.0
+ **/
+ package svc
+
+ import (
+ "job/internal/config"
+ "github.com/tal-tech/go-queue/dq"
+ )
+
+ type ServiceContext struct {
+ Config config.Config
+ Consumer dq.Consumer
+ }
+
+ func NewServiceContext(c config.Config) *ServiceContext {
+ return &ServiceContext{
+ Config: c,
+ Consumer: dq.NewConsumer(c.DqConf),
+ }
+ }
+
+ ```
+
+
+
+ main.go startup file
+
+ ```go
+ /**
+ * @Description Startup file
+ * @Author Mikael
+ * @Email 13247629622@163.com
+ * @Date 2021/1/18 12:05
+ * @Version 1.0
+ **/
+ package main
+
+ import (
+ "flag"
+ "fmt"
+ "github.com/tal-tech/go-zero/core/conf"
+ "github.com/tal-tech/go-zero/core/logx"
+ "github.com/tal-tech/go-zero/core/service"
+ "job/internal/config"
+ "job/internal/handler"
+ "job/internal/svc"
+ "os"
+ "os/signal"
+ "syscall"
+ "time"
+ )
+
+
+ var configFile = flag.String("f", "etc/job.yaml", "the config file")
+
+ func main() {
+ flag.Parse()
+
+ var c config.Config
+ conf.MustLoad(*configFile, &c)
+ ctx := svc.NewServiceContext(c)
+
+ group := service.NewServiceGroup()
+ handler.RegisterJob(ctx,group)
+
+ 	ch := make(chan os.Signal, 1)
+ 	signal.Notify(ch, syscall.SIGHUP, syscall.SIGQUIT, syscall.SIGTERM, syscall.SIGINT)
+ 	for {
+ 		s := <-ch
+ 		logx.Infof("get a signal %s", s.String())
+ switch s {
+ case syscall.SIGQUIT, syscall.SIGTERM, syscall.SIGINT:
+ fmt.Printf("stop group")
+ group.Stop()
+ logx.Info("job exit")
+ time.Sleep(time.Second)
+ return
+ case syscall.SIGHUP:
+ default:
+ return
+ }
+ }
+ }
+ ```
diff --git a/go-zero.dev/en/go-zero-design.md b/go-zero.dev/en/go-zero-design.md
new file mode 100644
index 00000000..b7165cee
--- /dev/null
+++ b/go-zero.dev/en/go-zero-design.md
@@ -0,0 +1,14 @@
+# Go-Zero Design
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+When designing the microservice framework, we wanted to ensure the stability of microservices while also paying special attention to development efficiency. So from the very beginning, we set the following design guidelines:
+
+* Keep it simple, first principle
+* Resilient design, fault-oriented programming
+* Tools over conventions and documentation
+* High availability
+* High concurrency
+* Easy to extend
+* Friendly to business development, encapsulate complexity
+* One way to do one thing
\ No newline at end of file
diff --git a/go-zero.dev/en/go-zero-features.md b/go-zero.dev/en/go-zero-features.md
new file mode 100644
index 00000000..324b4bea
--- /dev/null
+++ b/go-zero.dev/en/go-zero-features.md
@@ -0,0 +1,23 @@
+# Go-Zero Features
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+go-zero is a web and rpc framework that integrates various engineering practices. It has the following main features:
+
+* Powerful tool support, as little code writing as possible
+* Minimalist interfaces
+* Fully compatible with net/http
+* Middleware support for easy extension
+* High performance
+* Fault-oriented programming, resilient design
+* Built-in service discovery and load balancing
+* Built-in rate limiting, circuit breaking and load shedding, automatically triggered and automatically recovered
+* Automatic API parameter validation
+* Timeout cascade control
+* Automatic cache control
+* Distributed tracing, metrics and alerting, etc.
+* High concurrency support, stably handling daily traffic peaks during the epidemic
+
+As shown in the figure below, we have ensured the high availability of the overall service from multiple levels:
+
+![resilience](https://gitee.com/kevwan/static/raw/master/doc/images/resilience.jpg)
\ No newline at end of file
diff --git a/go-zero.dev/en/goctl-api.md b/go-zero.dev/en/goctl-api.md
new file mode 100644
index 00000000..7e0f7add
--- /dev/null
+++ b/go-zero.dev/en/goctl-api.md
@@ -0,0 +1,71 @@
+# API Commands
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+goctl api is one of the core modules of goctl. It can generate an api service from an .api file with one click. If you just want to start a go-zero api demo project, you can finish developing an api service and run it without writing any code at all. In a traditional api project, we have to create directories at every level, write structs, define routes, and add logic files. Counted per protocol, this series of operations takes about 5 to 6 minutes before we actually start writing business logic, not including the various mistakes that may occur along the way. As the number of services and protocols grows, the time spent on this preparation grows proportionally.
+goctl api can do all of this work for you: no matter how many protocols you have, it takes less than 10 seconds to finish.
+
+> [!TIP]
+> Writing the structs and defining the routes is replaced by the api file, so overall it saves you the time of creating folders and adding various files and resource dependencies.
+
+## API command description
+```shell
+$ goctl api -h
+```
+```text
+NAME:
+ goctl api - generate api related files
+
+USAGE:
+ goctl api command [command options] [arguments...]
+
+COMMANDS:
+ new fast create api service
+ format format api files
+ validate validate api file
+ doc generate doc files
+ go generate go files for provided api in yaml file
+ java generate java files for provided api in api file
+ ts generate ts files for provided api in api file
+ dart generate dart files for provided api in api file
+ kt generate kotlin code for provided api file
+ plugin custom file generator
+
+OPTIONS:
+ -o value the output api file
+ --help, -h show help
+```
+
+As you can see above, goctl api contains many subcommands and flags for different purposes. Here we focus on the `go` subcommand, which generates golang api services. Let's look at its usage help via `goctl api go -h`:
+```shell
+$ goctl api go -h
+```
+```text
+NAME:
+ goctl api go - generate go files for provided api in yaml file
+
+USAGE:
+ goctl api go [command options] [arguments...]
+
+OPTIONS:
+ --dir value the target dir
+ --api value the api file
+ --style value the file naming format, see [https://github.com/zeromicro/go-zero/tree/master/tools/goctl/config/readme.md]
+```
+
+* --dir: Code output directory
+* --api: Specify the api source file
+* --style: Specify the file name style of the generated code file, see for details [https://github.com/zeromicro/go-zero/tree/master/tools/goctl/config/readme.md](https://github.com/zeromicro/go-zero/tree/master/tools/goctl/config/readme.md)
+
+## Usage example
+```shell
+$ goctl api go -api user.api -dir . -style gozero
+```
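For reference, a minimal `user.api` file that the command above could consume might look like the following; the route and field names here are hypothetical and only for illustration:

```text
type (
	UserReq {
		Id int64 `path:"id"`
	}

	UserReply {
		Id   int64  `json:"id"`
		Name string `json:"name"`
	}
)

service user-api {
	@handler GetUser
	get /user/:id (UserReq) returns (UserReply)
}
```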
+
+# Guess you like
+* [API IDL](api-grammar.md)
+* [API Directory Structure](api-dir.md)
\ No newline at end of file
diff --git a/go-zero.dev/en/goctl-commands.md b/go-zero.dev/en/goctl-commands.md
new file mode 100644
index 00000000..236ae3ca
--- /dev/null
+++ b/go-zero.dev/en/goctl-commands.md
@@ -0,0 +1,273 @@
+# goctl command list
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+
+![goctl](https://zeromicro.github.io/go-zero/en/resource/goctl-command.png)
+
+# goctl
+
+## api
+(api service related operations)
+
+### -o
+(Generate api file)
+
+- Example: goctl api -o user.api
+
+### new
+(Quickly create an api service)
+
+- Example: goctl api new user
+
+### format
+(api format, vscode use)
+
+- -dir
+ (Target directory)
+- -iu
+ (Whether to automatically update goctl)
+- -stdin
+ (Whether to read data from standard input)
+
+### validate
+(Verify that the api file is valid)
+
+- -api
+ (Specify the api file source)
+
+ - Example: goctl api validate -api user.api
+
+### doc
+(Generate doc markdown)
+
+- -dir
+ (Specify the directory)
+
+ - Example: goctl api doc -dir user
+
+### go
+(Generate golang api service)
+
+- -dir
+ (Specify the code storage directory)
+- -api
+ (Specify the api file source)
+- -force
+ (Whether to force overwrite existing files)
+- -style
+ (Specify the file name naming style, `gozero`: lowercase, `go_zero`: underscore, `GoZero`: camel case)
+
+### java
+(Generate access api service code-java language)
+
+- -dir
+ (Specify the code storage directory)
+- -api
+ (Specify the api file source)
+
+### ts
+(Generate access api service code-ts language)
+
+- -dir
+ (Specify the code storage directory)
+- -api
+ (Specify the api file source)
+- webapi
+- caller
+- unwrap
+
+### dart
+(Generate access api service code-dart language)
+
+- -dir
+ (Specify code storage target)
+- -api
+ (Specify the api file source)
+
+### kt
+(Generate access api service code-Kotlin language)
+
+- -dir
+ (Specify code storage target)
+- -api
+ (Specify the api file source)
+- -pkg
+ (Specify package name)
+
+### plugin
+
+- -plugin
+ Executable file
+- -dir
+ Code storage destination folder
+- -api
+ api source file
+- -style
+ File name formatting
+
+## template
+(Template operation)
+
+### init
+(Cache api/rpc/model template)
+
+- Example: goctl template init
+
+### clean
+(Clear the cached templates)
+
+- Example: goctl template clean
+
+### update
+(Update template)
+
+- -category,c
+ (Specify the group name that needs to be updated api|rpc|model)
+
+ - Example: goctl template update -c api
+
+### revert
+(Restore the specified template file)
+
+- -category,c
+ (Specify the group name that needs to be updated api|rpc|model)
+- -name,n
+ (Specify the template file name)
+
+## config
+(Configuration file generation)
+
+### -path,p
+(Specify the configuration file storage directory)
+
+- Example: goctl config -p user
+
+## docker
+(Generate Dockerfile)
+
+### -go
+(Specify the main function file)
+
+### -port
+(Specify the exposed port)
+
+## rpc (rpc service related operations)
+
+### new
+(Quickly generate an rpc service)
+
+- -idea
+  (Identifies whether the command comes from the idea plug-in, used for developing the idea plug-in; can be ignored when executing from the terminal [optional])
+- -style
+ (Specify the file name naming style, `gozero`: lowercase, `go_zero`: underscore, `GoZero`: camel case)
+
+### template
+(Create a proto template file)
+
+- -idea
+  (Identifies whether the command comes from the idea plug-in, used for developing the idea plug-in; can be ignored when executing from the terminal [optional])
+- -out,o
+ (Specify the code storage directory)
+
+### proto
+(Generate rpc service based on proto)
+
+- -src,s
+ (Specify the proto file source)
+- -proto_path,I
+ (Specify proto import to find the directory, protoc native commands, for specific usage, please refer to protoc -h to view)
+- -dir,d
+ (Specify the code storage directory)
+- -idea
+  (Identifies whether the command comes from the idea plug-in, used for developing the idea plug-in; can be ignored when executing from the terminal [optional])
+- -style
+ (Specify the file name naming style, `gozero`: lowercase, `go_zero`: underscore, `GoZero`: camel case)
+
+### model
+(Model layer code operation)
+
+- mysql
+ (Generate model code from mysql)
+
+ - ddl
+ (Specify the data source to generate model code for the ddl file)
+
+ - -src,s
+ (Specify the source of the sql file containing ddl, support wildcard matching)
+ - -dir,d
+ (Specify the code storage directory)
+ - -style
+ (Specify the file name naming style, `gozero`: lowercase, `go_zero`: underscore, `GoZero`: camel case)
+ - -cache,c
+ (Whether the generated code has redis cache logic, bool value)
+ - -idea
+      (Identifies whether the command comes from the idea plug-in, used for developing the idea plug-in; can be ignored when executing from the terminal [optional])
+
+ - datasource
+ (Specify the data source to generate model code from the datasource)
+
+ - -url
+ (Specify datasource)
+ - -table,t
+ (Specify the table name, support wildcards)
+ - -dir,d
+ (Specify the code storage directory)
+ - -style
+ (Specify the file name naming style, `gozero`: lowercase, `go_zero`: underscore, `GoZero`: camel case)
+ - -cache,c
+ (Whether the generated code has redis cache logic, bool value)
+ - -idea
+      (Identifies whether the command comes from the idea plug-in, used for developing the idea plug-in; can be ignored when executing from the terminal [optional])
+- mongo
+ (generate model code from mongo)
+
+  - -type,t
+    (specify Go Type name)
+  - -cache,c
+    (generate code with redis cache logic or not, bool value, default no)
+  - -dir,d
+    (specify the code generation directory)
+  - -style
+    (specify the file naming style, gozero: lowercase, go_zero: underscore, GoZero: camel case)
+
+## upgrade
+Update goctl to the latest version
+
+## kube
+Generate k8s deployment file
+
+### deploy
+
+
+- -name
+ service name
+- -namespace
+ k8s namespace
+- -image
+ docker image
+- -secret
+  Specify the k8s secret used to pull the image
+- -requestCpu
+ Specify the default allocation of cpu
+- -requestMem
+ Specify the default allocation of memory
+- -limitCpu
+ Specify the maximum allocation of cpu
+- -limitMem
+ Specify the maximum amount of memory allocated
+- -o
+ `deployment.yaml` output directory
+- -replicas
+ Specify the replicas
+- -revisions
+ Specify the number of release records to keep
+- -port
+ Specify service port
+- -nodePort
+ Specify the service's external exposure port
+- -minReplicas
+  Specify the minimum number of replicas
+- -maxReplicas
+  Specify the maximum number of replicas
+
diff --git a/go-zero.dev/en/goctl-install.md b/go-zero.dev/en/goctl-install.md
new file mode 100644
index 00000000..6900e77f
--- /dev/null
+++ b/go-zero.dev/en/goctl-install.md
@@ -0,0 +1,37 @@
+# Goctl Installation
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+## Foreword
+Goctl plays a very important role in go-zero project development. It effectively helps developers greatly improve development efficiency, reduce the error rate of code, and shorten the development cycle of business work. For more about goctl, please read [Goctl Introduction](goctl.md).
+
+We strongly recommend that you install it, because most of the follow-up demonstration examples use goctl.
+
+## Install(mac&linux)
+* download&install
+ ```shell
+ GO111MODULE=on GOPROXY=https://goproxy.cn/,direct go get -u github.com/tal-tech/go-zero/tools/goctl
+ ```
+* Environmental variable detection
+
+ The compiled binary file downloaded by `go get` is located in the `$GOPATH/bin` directory. Make sure that `$GOPATH/bin` has been added to the environment variable.
+ ```shell
+ $ sudo vim /etc/paths
+ ```
+ Add the following in the last line
+ ```text
+ $GOPATH/bin
+ ```
+ > [!TIP]
+ > `$GOPATH` is the filepath on your local machine
+
+* Installation result verification
+ ```shell
+ $ goctl -v
+ ```
+ ```text
+ goctl version 1.1.4 darwin/amd64
+ ```
+
+> [!TIP]
+> For windows users, please search for how to add environment variables by yourself.
diff --git a/go-zero.dev/en/goctl-model.md b/go-zero.dev/en/goctl-model.md
new file mode 100644
index 00000000..5117b5b7
--- /dev/null
+++ b/go-zero.dev/en/goctl-model.md
@@ -0,0 +1,376 @@
+# Model Commands
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+goctl model is one of the components of the goctl tool module in go-zero. It currently supports recognizing mysql ddl for model-layer code generation. Through the command line or the idea plug-in (to be supported soon), you can selectively generate the code with or without redis cache logic.
+
+## Quick start
+
+* Generated by ddl
+
+ ```shell
+ $ goctl model mysql ddl -src="./*.sql" -dir="./sql/model" -c
+ ```
+
+  CRUD code can be quickly generated after executing the above command.
+
+  ```text
+  model
+  ├── error.go
+  └── usermodel.go
+  ```
+
+* Generated by datasource
+
+ ```shell
+ $ goctl model mysql datasource -url="user:password@tcp(127.0.0.1:3306)/database" -table="*" -dir="./model"
+ ```
+
+* Example code
+ ```go
+ package model
+
+ import (
+ "database/sql"
+ "fmt"
+ "strings"
+ "time"
+
+ "github.com/tal-tech/go-zero/core/stores/cache"
+ "github.com/tal-tech/go-zero/core/stores/sqlc"
+ "github.com/tal-tech/go-zero/core/stores/sqlx"
+ "github.com/tal-tech/go-zero/core/stringx"
+ "github.com/tal-tech/go-zero/tools/goctl/model/sql/builderx"
+ )
+
+ var (
+ userFieldNames = builderx.RawFieldNames(&User{})
+ userRows = strings.Join(userFieldNames, ",")
+ userRowsExpectAutoSet = strings.Join(stringx.Remove(userFieldNames, "`id`", "`create_time`", "`update_time`"), ",")
+ userRowsWithPlaceHolder = strings.Join(stringx.Remove(userFieldNames, "`id`", "`create_time`", "`update_time`"), "=?,") + "=?"
+
+ cacheUserNamePrefix = "cache#User#name#"
+ cacheUserMobilePrefix = "cache#User#mobile#"
+ cacheUserIdPrefix = "cache#User#id#"
+ cacheUserPrefix = "cache#User#user#"
+ )
+
+ type (
+ UserModel interface {
+ Insert(data User) (sql.Result, error)
+ FindOne(id int64) (*User, error)
+ FindOneByUser(user string) (*User, error)
+ FindOneByName(name string) (*User, error)
+ FindOneByMobile(mobile string) (*User, error)
+ Update(data User) error
+ Delete(id int64) error
+ }
+
+ defaultUserModel struct {
+ sqlc.CachedConn
+ table string
+ }
+
+ User struct {
+ Id int64 `db:"id"`
+ User string `db:"user"` // user
+ Name string `db:"name"` // user name
+ Password string `db:"password"` // user password
+ Mobile string `db:"mobile"` // mobile
+ Gender string `db:"gender"` // male|female|secret
+ Nickname string `db:"nickname"` // nickname
+ CreateTime time.Time `db:"create_time"`
+ UpdateTime time.Time `db:"update_time"`
+ }
+ )
+
+ func NewUserModel(conn sqlx.SqlConn, c cache.CacheConf) UserModel {
+ return &defaultUserModel{
+ CachedConn: sqlc.NewConn(conn, c),
+ table: "`user`",
+ }
+ }
+
+ func (m *defaultUserModel) Insert(data User) (sql.Result, error) {
+ userNameKey := fmt.Sprintf("%s%v", cacheUserNamePrefix, data.Name)
+ userMobileKey := fmt.Sprintf("%s%v", cacheUserMobilePrefix, data.Mobile)
+ userKey := fmt.Sprintf("%s%v", cacheUserPrefix, data.User)
+ ret, err := m.Exec(func(conn sqlx.SqlConn) (result sql.Result, err error) {
+ query := fmt.Sprintf("insert into %s (%s) values (?, ?, ?, ?, ?, ?)", m.table, userRowsExpectAutoSet)
+ return conn.Exec(query, data.User, data.Name, data.Password, data.Mobile, data.Gender, data.Nickname)
+ }, userNameKey, userMobileKey, userKey)
+ return ret, err
+ }
+
+ func (m *defaultUserModel) FindOne(id int64) (*User, error) {
+ userIdKey := fmt.Sprintf("%s%v", cacheUserIdPrefix, id)
+ var resp User
+ err := m.QueryRow(&resp, userIdKey, func(conn sqlx.SqlConn, v interface{}) error {
+ query := fmt.Sprintf("select %s from %s where `id` = ? limit 1", userRows, m.table)
+ return conn.QueryRow(v, query, id)
+ })
+ switch err {
+ case nil:
+ return &resp, nil
+ case sqlc.ErrNotFound:
+ return nil, ErrNotFound
+ default:
+ return nil, err
+ }
+ }
+
+ func (m *defaultUserModel) FindOneByUser(user string) (*User, error) {
+ userKey := fmt.Sprintf("%s%v", cacheUserPrefix, user)
+ var resp User
+ err := m.QueryRowIndex(&resp, userKey, m.formatPrimary, func(conn sqlx.SqlConn, v interface{}) (i interface{}, e error) {
+ query := fmt.Sprintf("select %s from %s where `user` = ? limit 1", userRows, m.table)
+ if err := conn.QueryRow(&resp, query, user); err != nil {
+ return nil, err
+ }
+ return resp.Id, nil
+ }, m.queryPrimary)
+ switch err {
+ case nil:
+ return &resp, nil
+ case sqlc.ErrNotFound:
+ return nil, ErrNotFound
+ default:
+ return nil, err
+ }
+ }
+
+ func (m *defaultUserModel) FindOneByName(name string) (*User, error) {
+ userNameKey := fmt.Sprintf("%s%v", cacheUserNamePrefix, name)
+ var resp User
+ err := m.QueryRowIndex(&resp, userNameKey, m.formatPrimary, func(conn sqlx.SqlConn, v interface{}) (i interface{}, e error) {
+ query := fmt.Sprintf("select %s from %s where `name` = ? limit 1", userRows, m.table)
+ if err := conn.QueryRow(&resp, query, name); err != nil {
+ return nil, err
+ }
+ return resp.Id, nil
+ }, m.queryPrimary)
+ switch err {
+ case nil:
+ return &resp, nil
+ case sqlc.ErrNotFound:
+ return nil, ErrNotFound
+ default:
+ return nil, err
+ }
+ }
+
+ func (m *defaultUserModel) FindOneByMobile(mobile string) (*User, error) {
+ userMobileKey := fmt.Sprintf("%s%v", cacheUserMobilePrefix, mobile)
+ var resp User
+ err := m.QueryRowIndex(&resp, userMobileKey, m.formatPrimary, func(conn sqlx.SqlConn, v interface{}) (i interface{}, e error) {
+ query := fmt.Sprintf("select %s from %s where `mobile` = ? limit 1", userRows, m.table)
+ if err := conn.QueryRow(&resp, query, mobile); err != nil {
+ return nil, err
+ }
+ return resp.Id, nil
+ }, m.queryPrimary)
+ switch err {
+ case nil:
+ return &resp, nil
+ case sqlc.ErrNotFound:
+ return nil, ErrNotFound
+ default:
+ return nil, err
+ }
+ }
+
+ func (m *defaultUserModel) Update(data User) error {
+ userIdKey := fmt.Sprintf("%s%v", cacheUserIdPrefix, data.Id)
+ _, err := m.Exec(func(conn sqlx.SqlConn) (result sql.Result, err error) {
+ query := fmt.Sprintf("update %s set %s where `id` = ?", m.table, userRowsWithPlaceHolder)
+ return conn.Exec(query, data.User, data.Name, data.Password, data.Mobile, data.Gender, data.Nickname, data.Id)
+ }, userIdKey)
+ return err
+ }
+
+ func (m *defaultUserModel) Delete(id int64) error {
+ data, err := m.FindOne(id)
+ if err != nil {
+ return err
+ }
+
+ userNameKey := fmt.Sprintf("%s%v", cacheUserNamePrefix, data.Name)
+ userMobileKey := fmt.Sprintf("%s%v", cacheUserMobilePrefix, data.Mobile)
+ userIdKey := fmt.Sprintf("%s%v", cacheUserIdPrefix, id)
+ userKey := fmt.Sprintf("%s%v", cacheUserPrefix, data.User)
+ _, err = m.Exec(func(conn sqlx.SqlConn) (result sql.Result, err error) {
+ query := fmt.Sprintf("delete from %s where `id` = ?", m.table)
+ return conn.Exec(query, id)
+ }, userNameKey, userMobileKey, userIdKey, userKey)
+ return err
+ }
+
+ func (m *defaultUserModel) formatPrimary(primary interface{}) string {
+ return fmt.Sprintf("%s%v", cacheUserIdPrefix, primary)
+ }
+
+ func (m *defaultUserModel) queryPrimary(conn sqlx.SqlConn, v, primary interface{}) error {
+ query := fmt.Sprintf("select %s from %s where `id` = ? limit 1", userRows, m.table)
+ return conn.QueryRow(v, query, primary)
+ }
+
+ ```
+
+## Usage
+
+```text
+$ goctl model mysql -h
+```
+
+```text
+NAME:
+  goctl model mysql - generate mysql model
+
+USAGE:
+ goctl model mysql command [command options] [arguments...]
+
+COMMANDS:
+  ddl         generate mysql model from ddl
+  datasource  generate model from datasource
+
+OPTIONS:
+ --help, -h show help
+```
+
+## Generation rules
+
+* Default rule
+
+  By default, users often add createTime and updateTime fields when creating a table (case-insensitive, underscore naming style also recognized), with default values of `CURRENT_TIMESTAMP`, and updateTime supporting `ON UPDATE CURRENT_TIMESTAMP`. These two fields are excluded from the assignment scope of the generated `insert` and `update` code. Of course, it does not matter if you don't need these two fields.
+* With cache mode
+ * ddl
+
+ ```shell
+  $ goctl model mysql ddl -src={patterns} -dir={dir} -cache
+ ```
+
+ help
+
+ ```
+ NAME:
+ goctl model mysql ddl - generate mysql model from ddl
+
+ USAGE:
+ goctl model mysql ddl [command options] [arguments...]
+
+ OPTIONS:
+ --src value, -s value the path or path globbing patterns of the ddl
+ --dir value, -d value the target dir
+ --style value the file naming format, see [https://github.com/zeromicro/go-zero/tree/master/tools/goctl/config/readme.md]
+ --cache, -c generate code with cache [optional]
+ --idea for idea plugin [optional]
+ ```
+
+ * datasource
+
+ ```shell
+ $ goctl model mysql datasource -url={datasource} -table={patterns} -dir={dir} -cache=true
+ ```
+
+ help
+
+ ```text
+ NAME:
+ goctl model mysql datasource - generate model from datasource
+
+ USAGE:
+ goctl model mysql datasource [command options] [arguments...]
+
+ OPTIONS:
+ --url value the data source of database,like "root:password@tcp(127.0.0.1:3306)/database
+ --table value, -t value the table or table globbing patterns in the database
+ --cache, -c generate code with cache [optional]
+ --dir value, -d value the target dir
+ --style value the file naming format, see [https://github.com/zeromicro/go-zero/tree/master/tools/goctl/config/readme.md]
+ --idea for idea plugin [optional]
+ ```
+
+ > [!TIP]
+ > Goctl model mysql ddl/datasource has added a new `--style` parameter to mark the file naming style.
+
+  Currently only redis cache is supported. If you select cache mode, the generated `FindOne(ByXxx)` and `Delete` code will include cache logic. Only single-field indexes (except full-text indexes) are supported. For composite indexes, we believe by default that no cache is needed, and since such code is not general-purpose, it is not generated. For example, the `id`, `name`, and `mobile` fields of the user table in the example are all single-field indexes.
+
+* Without cache mode
+
+ * ddl
+
+ ```shell
+    $ goctl model mysql ddl -src={patterns} -dir={dir}
+ ```
+
+ * datasource
+
+ ```shell
+ $ goctl model mysql datasource -url={datasource} -table={patterns} -dir={dir}
+ ```
+
+
+The generated code contains only the basic CRUD structure.
+
+## Cache
+
+For the cache, I chose to present it in question-and-answer form, which I think describes the caching behavior in the model more clearly.
+
+* What information will be cached?
+
+  For the primary key cache, the entire struct is cached, while for single-field index caches (except full-text indexes), only the primary key value is cached.
+
+* Does the data update (`update`) operation clear the cache?
+
+  Yes, but only the primary key cache is cleared. Why? I won't go into details here.
+
+* Why not generate `updateByXxx` and `deleteByXxx` codes based on single index fields?
+
+  In theory there is no problem, but we believe that model-layer data operations are based on the entire struct, including queries. Querying only some fields is not recommended (though not objected to), otherwise our cache would be meaningless.
+
+* Why not support the code generation layer of `findPageLimit` and `findAll`?
+
+  At present, I think that apart from basic CRUD, other code is business-oriented, and it is better for developers to write it according to business needs.
+
+## Type conversion rules
+| mysql dataType | golang dataType | golang dataType(if null&&default null) |
+|----------------|-----------------|----------------------------------------|
+| bool | int64 | sql.NullInt64 |
+| boolean | int64 | sql.NullInt64 |
+| tinyint | int64 | sql.NullInt64 |
+| smallint | int64 | sql.NullInt64 |
+| mediumint | int64 | sql.NullInt64 |
+| int | int64 | sql.NullInt64 |
+| integer | int64 | sql.NullInt64 |
+| bigint | int64 | sql.NullInt64 |
+| float | float64 | sql.NullFloat64 |
+| double | float64 | sql.NullFloat64 |
+| decimal | float64 | sql.NullFloat64 |
+| date | time.Time | sql.NullTime |
+| datetime | time.Time | sql.NullTime |
+| timestamp | time.Time | sql.NullTime |
+| time | string | sql.NullString |
+| year | time.Time | sql.NullInt64 |
+| char | string | sql.NullString |
+| varchar | string | sql.NullString |
+| binary | string | sql.NullString |
+| varbinary | string | sql.NullString |
+| tinytext | string | sql.NullString |
+| text | string | sql.NullString |
+| mediumtext | string | sql.NullString |
+| longtext | string | sql.NullString |
+| enum | string | sql.NullString |
+| set | string | sql.NullString |
+| json | string | sql.NullString |
\ No newline at end of file
diff --git a/go-zero.dev/en/goctl-other.md b/go-zero.dev/en/goctl-other.md
new file mode 100644
index 00000000..e8249e4f
--- /dev/null
+++ b/go-zero.dev/en/goctl-other.md
@@ -0,0 +1,311 @@
+# More Commands
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+* goctl docker
+* goctl kube
+
+## goctl docker
+`goctl docker` can quickly generate a Dockerfile to help developers/operations and maintenance personnel speed up the deployment pace and reduce deployment complexity.
+
+### Prepare
+* docker install
+
+### Dockerfile note
+* Choose the smallest base image: for example `alpine`, the entire image is about 5MB
+* Set the image time zone
+```shell
+RUN apk add --no-cache tzdata
+ENV TZ Asia/Shanghai
+```
+
+### Multi-stage build
+* The first stage builds an executable file, ensuring that the build process is independent of the host
+* The second stage uses the output of the first stage as input to build the final minimal image
+
+### Dockerfile writing process
+* First install the goctl tool
+```shell
+$ GO111MODULE=on GOPROXY=https://goproxy.cn/,direct go get -u github.com/tal-tech/go-zero/tools/goctl
+```
+
+* Create a hello service under the greet project
+```shell
+$ goctl api new hello
+```
+
+The file structure is as follows:
+```text
+greet
+├── go.mod
+├── go.sum
+└── service
+ └── hello
+ ├── Dockerfile
+ ├── etc
+ │ └── hello-api.yaml
+ ├── hello.api
+ ├── hello.go
+ └── internal
+ ├── config
+ │ └── config.go
+ ├── handler
+ │ ├── hellohandler.go
+ │ └── routes.go
+ ├── logic
+ │ └── hellologic.go
+ ├── svc
+ │ └── servicecontext.go
+ └── types
+ └── types.go
+```
+* Generate a `Dockerfile` in the `hello` directory
+```shell
+$ goctl docker -go hello.go
+```
+Dockerfile:
+```shell
+ FROM golang:alpine AS builder
+ LABEL stage=gobuilder
+ ENV CGO_ENABLED 0
+ ENV GOOS linux
+ ENV GOPROXY https://goproxy.cn,direct
+ WORKDIR /build/zero
+ ADD go.mod .
+ ADD go.sum .
+ RUN go mod download
+ COPY . .
+ COPY service/hello/etc /app/etc
+ RUN go build -ldflags="-s -w" -o /app/hello service/hello/hello.go
+ FROM alpine
+ RUN apk update --no-cache
+ RUN apk add --no-cache ca-certificates
+ RUN apk add --no-cache tzdata
+ ENV TZ Asia/Shanghai
+ WORKDIR /app
+ COPY --from=builder /app/hello /app/hello
+ COPY --from=builder /app/etc /app/etc
+ CMD ["./hello", "-f", "etc/hello-api.yaml"]
+```
+* Build the image in the `greet` directory
+```shell
+$ docker build -t hello:v1 -f service/hello/Dockerfile .
+```
+
+* View the image
+```shell
+hello v1 5455f2eaea6b 7 minutes ago 18.1MB
+```
+
+It can be seen that the image size is about 18MB.
+* Start service
+```shell
+$ docker run --rm -it -p 8888:8888 hello:v1
+```
+* Test service
+```shell
+$ curl -i http://localhost:8888/from/you
+```
+```text
+HTTP/1.1 200 OK
+Content-Type: application/json
+Date: Thu, 10 Dec 2020 06:03:02 GMT
+Content-Length: 14
+{"message":""}
+```
+
+### goctl docker summary
+The goctl tool greatly simplifies the writing of Dockerfile files, provides best practices out of the box, and supports template customization.
+
+## goctl kube
+
+`goctl kube` provides the function of quickly generating a `k8s` deployment file, which can speed up the deployment progress of developers/operations and maintenance personnel and reduce deployment complexity.
+
+### Having trouble writing K8S deployment files?
+
+
+- `K8S yaml` has many parameters; do you have to write and check them by hand?
+- How to set the number of rollback versions to retain?
+- How to configure startup and liveness probes?
+- How to allocate and limit resources?
+- How to set the time zone? Otherwise, the printed logs use GMT
+- How to expose services for other services to call?
+- How to configure horizontal autoscaling based on CPU and memory usage?
+
+
+
+First, you need to know that these knowledge points exist; second, understanding them all is not easy; and finally, it is still easy to make mistakes every time you write the file!
+
+## Create service image
+For demonstration, here we take the `redis:6-alpine` image as an example.
+
+## Complete K8S deployment file writing process
+
+- First install the `goctl` tool
+
+```shell
+$ GO111MODULE=on GOPROXY=https://goproxy.cn/,direct go get -u github.com/tal-tech/go-zero/tools/goctl
+```
+
+- One-click generation of K8S deployment files
+
+```shell
+$ goctl kube deploy -name redis -namespace adhoc -image redis:6-alpine -o redis.yaml -port 6379
+```
+The generated `yaml` file is as follows:
+
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: redis
+ namespace: adhoc
+ labels:
+ app: redis
+spec:
+ replicas: 3
+ revisionHistoryLimit: 5
+ selector:
+ matchLabels:
+ app: redis
+ template:
+ metadata:
+ labels:
+ app: redis
+ spec:
+ containers:
+ - name: redis
+ image: redis:6-alpine
+ lifecycle:
+ preStop:
+ exec:
+ command: ["sh","-c","sleep 5"]
+ ports:
+ - containerPort: 6379
+ readinessProbe:
+ tcpSocket:
+ port: 6379
+ initialDelaySeconds: 5
+ periodSeconds: 10
+ livenessProbe:
+ tcpSocket:
+ port: 6379
+ initialDelaySeconds: 15
+ periodSeconds: 20
+ resources:
+ requests:
+ cpu: 500m
+ memory: 512Mi
+ limits:
+ cpu: 1000m
+ memory: 1024Mi
+ volumeMounts:
+ - name: timezone
+ mountPath: /etc/localtime
+ volumes:
+ - name: timezone
+ hostPath:
+ path: /usr/share/zoneinfo/Asia/Shanghai
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: redis-svc
+ namespace: adhoc
+spec:
+ ports:
+ - port: 6379
+ selector:
+ app: redis
+---
+apiVersion: autoscaling/v2beta1
+kind: HorizontalPodAutoscaler
+metadata:
+ name: redis-hpa-c
+ namespace: adhoc
+ labels:
+ app: redis-hpa-c
+spec:
+ scaleTargetRef:
+ apiVersion: apps/v1
+ kind: Deployment
+ name: redis
+ minReplicas: 3
+ maxReplicas: 10
+ metrics:
+ - type: Resource
+ resource:
+ name: cpu
+ targetAverageUtilization: 80
+---
+apiVersion: autoscaling/v2beta1
+kind: HorizontalPodAutoscaler
+metadata:
+ name: redis-hpa-m
+ namespace: adhoc
+ labels:
+ app: redis-hpa-m
+spec:
+ scaleTargetRef:
+ apiVersion: apps/v1
+ kind: Deployment
+ name: redis
+ minReplicas: 3
+ maxReplicas: 10
+ metrics:
+ - type: Resource
+ resource:
+ name: memory
+ targetAverageUtilization: 80
+```
+
+
+- Deploy the service. If the `adhoc` namespace does not exist, create it first with `kubectl create namespace adhoc`
+```
+$ kubectl apply -f redis.yaml
+deployment.apps/redis created
+service/redis-svc created
+horizontalpodautoscaler.autoscaling/redis-hpa-c created
+horizontalpodautoscaler.autoscaling/redis-hpa-m created
+```
+
+- View the service status
+```
+$ kubectl get all -n adhoc
+NAME READY STATUS RESTARTS AGE
+pod/redis-585bc66876-5ph26 1/1 Running 0 6m5s
+pod/redis-585bc66876-bfqxz 1/1 Running 0 6m5s
+pod/redis-585bc66876-vvfc9 1/1 Running 0 6m5s
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+service/redis-svc ClusterIP 172.24.15.8 6379/TCP 6m5s
+NAME READY UP-TO-DATE AVAILABLE AGE
+deployment.apps/redis 3/3 3 3 6m6s
+NAME DESIRED CURRENT READY AGE
+replicaset.apps/redis-585bc66876 3 3 3 6m6s
+NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
+horizontalpodautoscaler.autoscaling/redis-hpa-c Deployment/redis 0%/80% 3 10 3 6m6s
+horizontalpodautoscaler.autoscaling/redis-hpa-m Deployment/redis 0%/80% 3 10 3 6m6s
+```
+
+
+- Test service
+```
+$ kubectl run -i --tty --rm cli --image=redis:6-alpine -n adhoc -- sh
+/data # redis-cli -h redis-svc
+redis-svc:6379> set go-zero great
+OK
+redis-svc:6379> get go-zero
+"great"
+```
+### goctl kube summary
+The `goctl` tool greatly simplifies the writing of K8S yaml files, provides best practices out of the box, and supports template customization.
+
+# Guess you want
+* [Prepare](prepare.md)
+* [API Directory Structure](api-dir.md)
+* [API IDL](api-grammar.md)
+* [API Configuration](api-config.md)
+* [API Commands](goctl-api.md)
+* [Docker](https://www.docker.com)
+* [K8s](https://kubernetes.io/docs/home/)
\ No newline at end of file
diff --git a/go-zero.dev/en/goctl-plugin.md b/go-zero.dev/en/goctl-plugin.md
new file mode 100644
index 00000000..88ee98ef
--- /dev/null
+++ b/go-zero.dev/en/goctl-plugin.md
@@ -0,0 +1,64 @@
+# Plugin Commands
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+Goctl supports custom plugins for api files. So how do we customize a plugin? Let's first look at how a plugin is ultimately used, as in the example below.
+```shell
+$ goctl api plugin -p goctl-android="android -package com.tal" -api user.api -dir .
+```
+
+The above command can be broken down into the following steps:
+* goctl parsing api file
+* goctl passes the parsed structure ApiSpec and parameters to the goctl-android executable file
+* goctl-android customizes the generation logic according to the ApiSpec structure.
+
+The first part of the command, `goctl api plugin -p`, is fixed. `goctl-android="android -package com.tal"` is the plugin parameter, where `goctl-android` is the plugin executable and `android -package com.tal` are the plugin's custom arguments. `-api user.api -dir .` are common goctl parameters.
+## How to write a custom plugin?
+A very simple custom plugin demo is included in the go-zero framework. The code is as follows:
+```go
+package main
+
+import (
+ "fmt"
+
+ "github.com/tal-tech/go-zero/tools/goctl/plugin"
+)
+
+func main() {
+ plugin, err := plugin.NewPlugin()
+ if err != nil {
+ panic(err)
+ }
+ if plugin.Api != nil {
+ fmt.Printf("api: %+v \n", plugin.Api)
+ }
+ fmt.Printf("dir: %s \n", plugin.Dir)
+ fmt.Println("Enjoy anything you want.")
+}
+```
+
+`plugin, err := plugin.NewPlugin()` The function of this line of code is to parse the data passed from goctl, which contains the following parts:
+
+```go
+type Plugin struct {
+ Api *spec.ApiSpec
+ Style string
+ Dir string
+}
+```
+> [!TIP]
+> Api: defines the structure data of the api file
+>
+> Style: optional, it is used to control file naming conventions
+>
+> Dir: workDir
+
+
+A complete Android plugin demo project based on the plugin mechanism is available at:
+[https://github.com/zeromicro/goctl-android](https://github.com/zeromicro/goctl-android)
+
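+To make the flow above concrete, here is a minimal, self-contained sketch of what a plugin might do with the parsed spec. The `Route` and `ApiSpec` types below are simplified stand-ins for the real `spec.ApiSpec` structures, and `emitStubs` is a hypothetical helper for illustration, not part of goctl:
+
+```go
+package main
+
+import (
+	"fmt"
+	"strings"
+)
+
+// Route and ApiSpec are simplified stand-ins for the goctl spec types.
+type Route struct {
+	Method  string
+	Path    string
+	Handler string
+}
+
+type ApiSpec struct {
+	Routes []Route
+}
+
+// capitalize upper-cases the first letter of s.
+func capitalize(s string) string {
+	if s == "" {
+		return s
+	}
+	return strings.ToUpper(s[:1]) + s[1:]
+}
+
+// emitStubs turns each route into a target-language method stub, the kind
+// of transformation a plugin like goctl-android performs on the parsed spec.
+func emitStubs(api ApiSpec) []string {
+	var stubs []string
+	for _, r := range api.Routes {
+		name := capitalize(strings.ToLower(r.Method)) + r.Handler
+		stubs = append(stubs, fmt.Sprintf("func %s() // %s %s", name, r.Method, r.Path))
+	}
+	return stubs
+}
+
+func main() {
+	api := ApiSpec{Routes: []Route{{Method: "GET", Path: "/user/info", Handler: "Info"}}}
+	for _, s := range emitStubs(api) {
+		fmt.Println(s) // func GetInfo() // GET /user/info
+	}
+}
+```
+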
+# Guess you want
+* [API Directory Structure](api-dir.md)
+* [API IDL](api-grammar.md)
+* [API Configuration](api-config.md)
+* [API Commands](goctl-api.md)
\ No newline at end of file
diff --git a/go-zero.dev/en/goctl-rpc.md b/go-zero.dev/en/goctl-rpc.md
new file mode 100644
index 00000000..58317c9d
--- /dev/null
+++ b/go-zero.dev/en/goctl-rpc.md
@@ -0,0 +1,227 @@
+# RPC Commands
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+Goctl Rpc is the rpc service code generation module of the `goctl` scaffolding. It supports proto template generation and rpc service code generation. When generating code with this tool, you only need to focus on writing business logic instead of repetitive boilerplate code. This lets us concentrate on the business, speeding up development and reducing the code error rate.
+
+## Features
+
+* Simple and easy to use
+* Quickly improve development efficiency
+* Low error rate
+* Close to protoc
+
+
+## Quick start
+
+### Method 1: Quickly generate a greet service
+
+Generate it with the command `goctl rpc new ${serviceName}`.
+
+For example, to generate a `greet` rpc service:
+
+ ```Bash
+ goctl rpc new greet
+ ```
+
+The code structure after execution is as follows:
+
+ ```text
+.
+├── etc // yaml configuration file
+│ └── greet.yaml
+├── go.mod
+├── greet // pb.go folder①
+│ └── greet.pb.go
+├── greet.go // main entry
+├── greet.proto // proto source file
+├── greetclient // call logic ②
+│ └── greet.go
+└── internal
+ ├── config // yaml configuration corresponding entity
+ │ └── config.go
+ ├── logic //business code
+ │ └── pinglogic.go
+ ├── server // rpc server
+ │ └── greetserver.go
+ └── svc // dependent resources
+ └── servicecontext.go
+ ```
+
+> ① The name of the pb folder (in old versions the folder was fixed as pb) is taken from the value of `option go_package` in the proto file; its last path element is converted according to a certain format. If there is no such declaration, it is taken from the value of `package`. The approximate code is as follows:
+
+```go
+ if option.Name == "go_package" {
+ ret.GoPackage = option.Constant.Source
+ }
+ ...
+ if len(ret.GoPackage) == 0 {
+ ret.GoPackage = ret.Package.Name
+ }
+ ret.PbPackage = GoSanitized(filepath.Base(ret.GoPackage))
+ ...
+```
+> For GoSanitized method, please refer to google.golang.org/protobuf@v1.25.0/internal/strs/strings.go:71
+
+> ② The name of the call layer folder is taken from the name of the service in the proto file. If the service name equals the name of the pb folder, `client` is appended to the service name to distinguish the call layer from pb.
+
+```go
+if strings.ToLower(proto.Service.Name) == strings.ToLower(proto.GoPackage) {
+ callDir = filepath.Join(ctx.WorkDir, strings.ToLower(stringx.From(proto.Service.Name+"_client").ToCamel()))
+}
+```
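+
+The two naming rules above can be sketched together as a tiny program. This is an illustrative simplification (it skips protobuf's GoSanitized), not goctl's actual code:
+
+```go
+package main
+
+import (
+	"fmt"
+	"path"
+	"strings"
+)
+
+// pbPackage mimics, in simplified form, how the pb folder name is derived:
+// the last element of go_package, falling back to package when absent.
+func pbPackage(goPackage, pkg string) string {
+	if goPackage == "" {
+		goPackage = pkg
+	}
+	return strings.ToLower(path.Base(goPackage))
+}
+
+// callDir mimics the call-layer rule: when the service name collides with
+// the pb package name, "client" is appended to disambiguate.
+func callDir(serviceName, pbPkg string) string {
+	if strings.EqualFold(serviceName, pbPkg) {
+		return strings.ToLower(serviceName + "client")
+	}
+	return strings.ToLower(serviceName)
+}
+
+func main() {
+	fmt.Println(pbPackage("github.com/foo/greet", "greet")) // greet
+	fmt.Println(callDir("Greet", "greet"))                  // greetclient
+}
+```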
+
+For common problems with one-click rpc generation, see [Error](error.md).
+
+### Method 2: Generate an rpc service from a specified proto file
+
+* Generate proto template
+
+ ```Bash
+ goctl rpc template -o=user.proto
+ ```
+
+ ```protobuf
+ syntax = "proto3";
+
+ package remote;
+
+ option go_package = "remote";
+
+ message Request {
+ string username = 1;
+ string password = 2;
+ }
+
+ message Response {
+ string name = 1;
+ string gender = 2;
+ }
+
+ service User {
+ rpc Login(Request)returns(Response);
+ }
+ ```
+
+* Generate rpc service code
+
+ ```Bash
+ goctl rpc proto -src user.proto -dir .
+ ```
+
+## Prepare
+
+* A working go environment is installed
+* `protoc` and `protoc-gen-go` are installed, and the environment variables are set
+* For more questions, please see the precautions below
+
+## Usage
+
+### rpc service generation usage
+
+```Bash
+goctl rpc proto -h
+```
+
+```Bash
+NAME:
+ goctl rpc proto - generate rpc from proto
+
+USAGE:
+ goctl rpc proto [command options] [arguments...]
+
+OPTIONS:
+ --src value, -s value the file path of the proto source file
+ --proto_path value, -I value native command of protoc, specify the directory in which to search for imports. [optional]
+ --dir value, -d value the target path of the code
+ --style value the file naming format, see [https://github.com/zeromicro/go-zero/tree/master/tools/goctl/config/readme.md]
+ --idea whether the command execution environment is from idea plugin. [optional]
+```
+
+### Parameter description
+
+* --src: required, the proto data source; currently only a single proto file is supported
+* --proto_path: optional, protoc's native flag, used to specify where to search for proto imports. Multiple paths can be specified, such as `goctl rpc -I={path1} -I={path2} ...`; it can be left out when there is no import. The path of the current proto file does not need to be specified, it is already built in. For detailed usage of `-I`, please refer to `protoc -h`
+* --dir: optional, the target directory of the generated code, defaults to the directory where the proto file is located
+* --style: optional, the file naming format, see [https://github.com/zeromicro/go-zero/tree/master/tools/goctl/config/readme.md]
+* --idea: optional, whether the command is executed from the idea plugin; can be ignored for terminal execution
+
+
+### What developers need to do
+
+Focus on writing business code and hand the repetitive, business-unrelated work over to goctl. After generating the rpc service code, developers only need to modify:
+
+* Preparation of configuration files in the service (etc/xx.json, internal/config/config.go)
+* Writing business logic in the service (internal/logic/xxlogic.go)
+* Preparation of resource context in the service (internal/svc/servicecontext.go)
+
+
+### Precautions
+* Generating multiple proto files at the same time is not supported
+* Importing external dependency packages in proto files is not supported, and inline messages are not supported
+* Currently, the main file, shared files, and handler files will be forcibly overwritten; files that developers need to write manually will not be overwritten. Generated files carry this code header:
+
+ ```shell
+ // Code generated by goctl. DO NOT EDIT!
+ // Source: xxx.proto
+ ```
+
+ If a file contains the `DO NOT EDIT` marker, be careful not to write business code in it.
+
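+As a sketch, the overwrite rule above can be thought of as a simple header check. `isGenerated` below is an illustrative helper, not goctl's actual implementation:
+
+```go
+package main
+
+import (
+	"fmt"
+	"strings"
+)
+
+// isGenerated reports whether file content carries the goctl generated-code
+// header and is therefore safe to regenerate and overwrite.
+func isGenerated(content string) bool {
+	return strings.HasPrefix(content, "// Code generated by goctl. DO NOT EDIT!")
+}
+
+func main() {
+	generated := "// Code generated by goctl. DO NOT EDIT!\n// Source: user.proto\npackage user\n"
+	handWritten := "package logic\n"
+	fmt.Println(isGenerated(generated))   // true
+	fmt.Println(isGenerated(handWritten)) // false
+}
+```
+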
+## proto import
+* The requestType and returnType of an rpc must be defined in the main proto file. For messages, other proto files can be imported just like with protoc.
+
+proto example:
+
+### Wrong import
+```protobuf
+syntax = "proto3";
+
+package greet;
+
+option go_package = "greet";
+
+import "base/common.proto";
+
+message Request {
+ string ping = 1;
+}
+
+message Response {
+ string pong = 1;
+}
+
+service Greet {
+ rpc Ping(base.In) returns(base.Out);// request and return do not support import
+}
+
+```
+
+
+### Import correctly
+```protobuf
+syntax = "proto3";
+
+package greet;
+
+option go_package = "greet";
+
+import "base/common.proto";
+
+message Request {
+ base.In in = 1;
+}
+
+message Response {
+ base.Out out = 2;
+}
+
+service Greet {
+ rpc Ping(Request) returns(Response);
+}
+```
+
+# Guess you want
+* [RPC Directory Structure](rpc-dir.md)
+* [RPC Configuration](rpc-config.md)
+* [RPC Implement & Call](rpc-call.md)
\ No newline at end of file
diff --git a/go-zero.dev/en/goctl.md b/go-zero.dev/en/goctl.md
new file mode 100644
index 00000000..e2abd29e
--- /dev/null
+++ b/go-zero.dev/en/goctl.md
@@ -0,0 +1,71 @@
+# Goctl
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+goctl is a code generation tool under the go-zero microservice framework. Using goctl can significantly improve development efficiency and allow developers to focus their time on business development. Its functions include:
+
+- api service generation
+- rpc service generation
+- model code generation
+- template management
+
+This section will contain the following:
+
+* [Commands & Flags](goctl-commands.md)
+* [API Commands](goctl-api.md)
+* [RPC Commands](goctl-rpc.md)
+* [Model Commands](goctl-model.md)
+* [Plugin Commands](goctl-plugin.md)
+* [More Commands](goctl-other.md)
+
+## goctl?
+Many people pronounce `goctl` as `go-C-T-L`, which is incorrect. Think of it as `go control` and pronounce it `ɡō kənˈtrōl`.
+
+## View version information
+```shell
+$ goctl -v
+```
+
+If goctl is installed, it will output text information in the following format:
+
+```text
+goctl version ${version} ${os}/${arch}
+```
+
+For example output:
+```text
+goctl version 1.1.5 darwin/amd64
+```
+
+Version number description
+* version: goctl version number
+* os: Current operating system name
+* arch: Current system architecture name
+
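+As an illustration, the version string described above can be parsed with a few lines of Go. `parseGoctlVersion` is a hypothetical helper, not part of goctl:
+
+```go
+package main
+
+import (
+	"fmt"
+	"strings"
+)
+
+// parseGoctlVersion splits the `goctl -v` output described above
+// into version, os, and arch parts.
+func parseGoctlVersion(out string) (version, osName, arch string, ok bool) {
+	fields := strings.Fields(out)
+	if len(fields) != 4 || fields[0] != "goctl" || fields[1] != "version" {
+		return "", "", "", false
+	}
+	osArch := strings.SplitN(fields[3], "/", 2)
+	if len(osArch) != 2 {
+		return "", "", "", false
+	}
+	return fields[2], osArch[0], osArch[1], true
+}
+
+func main() {
+	v, osName, arch, ok := parseGoctlVersion("goctl version 1.1.5 darwin/amd64")
+	fmt.Println(v, osName, arch, ok) // 1.1.5 darwin amd64 true
+}
+```
+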
+## Install goctl
+
+### Method 1 (go get)
+
+```shell
+$ GO111MODULE=on GOPROXY=https://goproxy.cn/,direct go get -u github.com/tal-tech/go-zero/tools/goctl
+```
+
+This command installs the goctl tool into the `$GOPATH/bin` directory.
+
+### Method 2 (fork and build)
+
+Pull the source code from the go-zero repository `git@github.com:tal-tech/go-zero.git`, enter the `tools/goctl` directory to compile the goctl binary, and then add it to your environment variables.
+
+After the installation is complete, execute `goctl -v`. If the version information is output, the installation is successful, for example:
+
+```shell
+$ goctl -v
+
+goctl version 1.1.4 darwin/amd64
+```
+
+## FAQ
+```
+command not found: goctl
+```
+Please make sure that goctl has been installed, or whether goctl has been correctly added to the environment variables of the current shell.
diff --git a/go-zero.dev/en/golang-install.md b/go-zero.dev/en/golang-install.md
new file mode 100644
index 00000000..86ccb9a6
--- /dev/null
+++ b/go-zero.dev/en/golang-install.md
@@ -0,0 +1,55 @@
+# Golang Installation
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+## Forward
+To develop a golang program, setting up its environment is indispensable. Here we take Go 1.15 as an example.
+
+## Official document
+[https://golang.google.cn/doc/install](https://golang.google.cn/doc/install)
+
+## Install Go on macOS
+
+* Download and install [Go for Mac](https://dl.google.com/go/go1.15.1.darwin-amd64.pkg)
+* Verify the installation result
+ ```shell
+ $ go version
+ ```
+ ```text
+ go version go1.15.1 darwin/amd64
+ ```
+## Install Go on linux
+* Download [Go for Linux](https://golang.org/dl/go1.15.8.linux-amd64.tar.gz)
+* Unzip the compressed package to `/usr/local`
+ ```shell
+ $ tar -C /usr/local -xzf go1.15.8.linux-amd64.tar.gz
+ ```
+* Add `/usr/local/go/bin` to environment variables
+ ```shell
+ $ vim $HOME/.profile
+ ```
+ ```shell
+ export PATH=$PATH:/usr/local/go/bin
+ ```
+ ```shell
+ $ source $HOME/.profile
+ ```
+* Verify the installation result
+ ```shell
+ $ go version
+ ```
+ ```text
+ go version go1.15.8 linux/amd64
+ ```
+## Install Go on windows
+* Download and install [Go for Windows](https://golang.org/dl/go1.15.8.windows-amd64.msi)
+* Verify the installation result
+ ```shell
+ $ go version
+ ```
+ ```text
+ go version go1.15.8 windows/amd64
+ ```
+
+## More
+For more operating system installation, see [https://golang.org/dl/](https://golang.org/dl/)
diff --git a/go-zero.dev/en/gomod-config.md b/go-zero.dev/en/gomod-config.md
new file mode 100644
index 00000000..1b4c825e
--- /dev/null
+++ b/go-zero.dev/en/gomod-config.md
@@ -0,0 +1,39 @@
+# Go Module Configuration
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+## Introduction to Go Module
+> Modules are how Go manages dependencies.[1]
+
+That is, Go Module is a way for Golang to manage dependencies, similar to Maven in Java and Gradle in Android.
+
+## Module configuration
+* Check the status of `GO111MODULE`
+ ```shell
+ $ go env GO111MODULE
+ ```
+ ```text
+ on
+ ```
+* Turn on `GO111MODULE`, if it is already turned on (that is, execute `go env GO111MODULE` and the result is `on`), please skip it.
+ ```shell
+ $ go env -w GO111MODULE="on"
+ ```
+* Set up `GOPROXY`
+ ```shell
+ $ go env -w GOPROXY=https://goproxy.cn
+ ```
+* Set up `GOMODCACHE`
+
+ view `GOMODCACHE`
+ ```shell
+ $ go env GOMODCACHE
+ ```
+ If the directory is not empty and is not `/dev/null`, you can skip this step.
+ ```shell
+ $ go env -w GOMODCACHE=$GOPATH/pkg/mod
+ ```
+
+
+# Reference
+[1] [Go Modules Reference](https://golang.google.cn/ref/mod)
\ No newline at end of file
diff --git a/go-zero.dev/en/goreading.md b/go-zero.dev/en/goreading.md
new file mode 100644
index 00000000..77753948
--- /dev/null
+++ b/go-zero.dev/en/goreading.md
@@ -0,0 +1,9 @@
+# Go Night Reading
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+* [2020-08-16 XiaoHeiBan go-zero microservice framework architecture design](https://talkgo.org/t/topic/729)
+* [2020-10-03 go-zero microservice framework and online communication](https://talkgo.org/t/topic/1070)
+* [In-process shared calls to prevent cache breakdown](https://talkgo.org/t/topic/968)
+* [Implement JWT authentication based on go-zero](https://talkgo.org/t/topic/1114)
+* [Goodbye go-micro! Enterprise project migration go-zero strategy (1)](https://talkgo.org/t/topic/1607)
\ No newline at end of file
diff --git a/go-zero.dev/en/gotalk.md b/go-zero.dev/en/gotalk.md
new file mode 100644
index 00000000..c9d63c7f
--- /dev/null
+++ b/go-zero.dev/en/gotalk.md
@@ -0,0 +1,5 @@
+# OpenTalk
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+* [OpenTalk 4th - Go-Zero](https://www.bilibili.com/video/BV1Jy4y127Xu)
\ No newline at end of file
diff --git a/go-zero.dev/en/intellij.md b/go-zero.dev/en/intellij.md
new file mode 100644
index 00000000..e836bb72
--- /dev/null
+++ b/go-zero.dev/en/intellij.md
@@ -0,0 +1,116 @@
+# Intellij Plugin
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+## Go-Zero Plugin
+
+[](https://github.com/zeromicro/go-zero)
+[](https://github.com/zeromicro/goctl-intellij/blob/main/LICENSE)
+[](https://github.com/zeromicro/goctl-intellij/releases)
+[](https://github.com/zeromicro/goctl-intellij/actions)
+
+## Introduction
+A plug-in tool that supports go-zero api language structure syntax highlighting, detection, and quick generation of api, rpc, and model.
+
+
+## Idea version requirements
+* IntelliJ 2019.3+ (Ultimate or Community)
+* Goland 2019.3+
+* WebStorm 2019.3+
+* PhpStorm 2019.3+
+* PyCharm 2019.3+
+* RubyMine 2019.3+
+* CLion 2019.3+
+
+## Features
+* api syntax highlighting
+* api syntax and semantic detection
+* struct, route, handler repeated definition detection
+* type jump to the type declaration position
+* Support api, rpc, mode related menu options in the context menu
+* Code formatting (option+command+L)
+* Code hint
+
+## Install
+
+### Method 1
+Find the latest zip package in the GitHub releases, download it, and install it from the local disk. (No need to unzip)
+
+### Method 2
+In the plugin store, search for `Goctl` to install
+
+
+## Preview
+![preview](./resource/api-compare.png)
+
+## Create a new Api(Proto) file
+In the project area target folder `right click ->New-> New Api(Proto) File ->Empty File/Api(Proto) Template`, as shown in the figure:
+![preview](./resource/api-new.png)
+
+## Quickly generate api/rpc service
+In the target folder `right click->New->Go Zero -> Api Greet Service/Rpc Greet Service`
+
+![preview](./resource/service.png)
+
+## Api/Rpc/Model Code generation
+
+### Method 1 (Project Panel)
+
+Corresponding files (api, proto, sql) `right click->New->Go Zero-> Api/Rpc/Model Code`, as shown in the figure:
+
+![preview](./resource/project_generate_code.png)
+
+### Method 2 (Editor Panel)
+Corresponding files (api, proto, sql) `right click -> Generate-> Api/Rpc/Model Code`
+
+
+## Error message
+![context menu](./resource/alert.png)
+
+
+## Live Template
+Live Templates can speed up writing api files. For example, when we type the `main` keyword in a go file and press Enter on the suggestion, a template code snippet is inserted:
+```go
+func main(){
+
+}
+```
+You are probably more familiar with the picture below — this is where templates are defined.
+![context menu](./resource/go_live_template.png)
+
+Now let's look at how templates are used in the api grammar. Take a look at the effect of the service template:
+![context menu](./resource/live_template.gif)
+
+First, take a look at the areas where templates take effect in an api file (the psiTree element areas):
+![context menu](./resource/psiTree.png)
+
+#### Default template and effective scope
+| keyword | psiTree effective scope | description |
+| ---- | ---- | ---- |
+| @doc | ApiService |doc comment template|
+| doc | ApiService |doc comment template|
+| struct | Struct |struct declaration template|
+| info | ApiFile |info block template|
+| type | ApiFile |type group template|
+| handler | ApiService |handler name template|
+| get | ApiService |get method routing template|
+| head | ApiService |head method routing template|
+| post | ApiService |post method routing template|
+| put | ApiService |put method routing template|
+| delete | ApiService |delete method routing template|
+| connect | ApiService |connect method routing template|
+| options | ApiService |options method routing template|
+| trace | ApiService |trace method routing template|
+| service | ApiFile |service service block template|
+| json | Tag, Tag literal |tag template|
+| xml | Tag, Tag literal |tag template|
+| path | Tag, Tag literal |tag template|
+| form | Tag, Tag literal |tag template|
+
+For the content of each template, you can view the details in `Goland (macOS) -> Preferences -> Editor -> Live Templates -> Api|Api Tags`. For example, the json tag template content is:
+```go
+json:"$FIELD_NAME$"
+```
+![context menu](./resource/json_tag.png)
+
+
diff --git a/go-zero.dev/en/join-us.md b/go-zero.dev/en/join-us.md
new file mode 100644
index 00000000..1114556a
--- /dev/null
+++ b/go-zero.dev/en/join-us.md
@@ -0,0 +1,61 @@
+# Join Us
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+
+## Summary
+
+
+[go-zero](https://github.com/zeromicro/go-zero) is an open source project under the [MIT License](https://github.com/zeromicro/go-zero/blob/master/LICENSE). If you find bugs or want new features while using it, you can take part in contributing to go-zero. We welcome your active participation and will respond as soon as possible to your issues, pull requests, and so on.
+
+## Contribution form
+* [Pull Request](https://github.com/zeromicro/go-zero/pulls)
+* [Issue](https://github.com/zeromicro/go-zero/issues)
+
+## Contribution notes
+The code in go-zero pull requests needs to meet certain specifications:
+* For naming conventions, please read the [naming conventions](naming-spec.md)
+* Comments should mainly be in English
+* Describe the feature in the PR; the description needs to be clear and concise
+* Keep unit test coverage at 80%+
+
+## Pull Request(pr)
+* Go to the [go-zero](https://github.com/zeromicro/go-zero) project and fork a copy of it to your own GitHub repository.
+* Go back to your GitHub homepage and find the `xx/go-zero` project, where `xx` is your username, such as `anqiansong/go-zero`
+
+ ![fork](./resource/fork.png)
+* Clone the code to local
+
+ ![clone](./resource/clone.png)
+* Develop code and push to your own GitHub repository
+* Enter the go-zero project in your own GitHub, click on the `[Pull requests]` on the floating layer to enter the Compare page.
+
+ ![pr](./resource/new_pr.png)
+
+* For `base repository` choose `tal-tech/go-zero` `base:master`; for `head repository` choose `xx/go-zero` `compare:$branch`, where `$branch` is the branch you developed on, as shown in the figure:
+
+ ![pr](./resource/compare.png)
+
+* Click `[Create pull request]` to submit the pr
+* To confirm whether the pr was submitted successfully, open the [Pull requests](https://github.com/zeromicro/go-zero/pulls) page of [go-zero](https://github.com/zeromicro/go-zero); there should be a record of your submission, named after your development branch.
+
+ ![pr record](./resource/pr_record.png)
+
+## Issue
+In our community, many partners actively give feedback on problems they encounter while using go-zero.
+Since the community is large, even though we follow community activity in real time, feedback arrives at random:
+while our team is still working on one problem, other issues get reported and can easily be overlooked.
+To address everyone's problems one by one, we strongly recommend giving feedback through issues,
+including but not limited to bugs and expected new features. When we implement a new feature, we also track it in an issue,
+where you can follow the latest go-zero developments. Everyone is welcome to join the discussion actively.
+
+### How to issue
+* Click [here](https://github.com/zeromicro/go-zero/issues) to enter go-zero's issue page, or directly visit [https://github.com/zeromicro/go-zero/issues](https://github.com/zeromicro/go-zero/issues)
+* Click `[New issue]` in the upper right corner to create a new issue
+* Fill in the issue title and content
+* Click `[Submit new issue]` to submit the issue
+
+
+## Reference
+
+* [Github Pull request](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/proposing-changes-to-your-work-with-pull-requests)
\ No newline at end of file
diff --git a/go-zero.dev/en/jwt.md b/go-zero.dev/en/jwt.md
new file mode 100644
index 00000000..1b14e5a5
--- /dev/null
+++ b/go-zero.dev/en/jwt.md
@@ -0,0 +1,240 @@
+# JWT authentication
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+## Summary
+> JSON Web Token (JWT) is an open standard (RFC 7519) that defines a compact and independent method for securely transmitting information as JSON objects between parties. Since this information is digitally signed, it can be verified and trusted. The JWT can be signed using a secret (using the HMAC algorithm) or using a public/private key pair of RSA or ECDSA.
+
+## When should you use JSON Web Tokens?
+* Authorization: This is the most common scenario for using JWT. Once the user is logged in, each subsequent request will include the JWT, allowing the user to access routes, services, and resources that are permitted with that token. Single Sign On is a feature that widely uses JWT nowadays, because of its small overhead and its ability to be easily used across different domains.
+
+* Information exchange: JSON Web Tokens are a good way of securely transmitting information between parties. Because JWTs can be signed—for example, using public/private key pairs—you can be sure the senders are who they say they are. Additionally, as the signature is calculated using the header and the payload, you can also verify that the content hasn't been tampered with.
+
+## Why should we use JSON Web Tokens?
+As JSON is less verbose than XML, when it is encoded its size is also smaller, making JWT more compact than SAML. This makes JWT a good choice to be passed in HTML and HTTP environments.
+
+Security-wise, SWT can only be symmetrically signed by a shared secret using the HMAC algorithm. However, JWT and SAML tokens can use a public/private key pair in the form of a X.509 certificate for signing. Signing XML with XML Digital Signature without introducing obscure security holes is very difficult when compared to the simplicity of signing JSON.
+
+JSON parsers are common in most programming languages because they map directly to objects. Conversely, XML doesn't have a natural document-to-object mapping. This makes it easier to work with JWT than SAML assertions.
+
+Regarding usage, JWT is used at Internet scale. This highlights the ease of client-side processing of the JSON Web token on multiple platforms, especially mobile.
+
+> [!TIP]
+> All the above content quote from [jwt.io](https://jwt.io/introduction)
+
+## How to use jwt in go-zero
+Jwt authentication is generally used at the api layer. In this demonstration project, we generate a jwt token when the user logs in through the user api, and verify the jwt token when the user searches for books through the search api.
+
+### user api generates jwt token
+Following the [Business Coding](business-coding.md) chapter, we now complete the `getJwtToken` method left over from the previous section, i.e. the logic for generating the jwt token.
+
+#### Add configuration definition and yaml configuration items
+```shell
+$ vim service/user/cmd/api/internal/config/config.go
+```
+```go
+type Config struct {
+ rest.RestConf
+ Mysql struct{
+ DataSource string
+ }
+ CacheRedis cache.CacheConf
+ Auth struct {
+ AccessSecret string
+ AccessExpire int64
+ }
+}
+```
+```shell
+$ vim service/user/cmd/api/etc/user-api.yaml
+```
+```yaml
+Name: user-api
+Host: 0.0.0.0
+Port: 8888
+Mysql:
+ DataSource: $user:$password@tcp($url)/$db?charset=utf8mb4&parseTime=true&loc=Asia%2FShanghai
+CacheRedis:
+ - Host: $host
+ Pass: $pass
+ Type: node
+Auth:
+ AccessSecret: $AccessSecret
+ AccessExpire: $AccessExpire
+```
+
+> [!TIP]
+> $AccessSecret: the secret used to sign the jwt token; the easiest way to generate one is to use an uuid value.
+>
+> $AccessExpire: Jwt token validity period, unit: second
+>
+> For more configuration information, please refer to [API Configuration](api-config.md)
+
+```shell
+$ vim service/user/cmd/api/internal/logic/loginlogic.go
+```
+
+```go
+func (l *LoginLogic) getJwtToken(secretKey string, iat, seconds, userId int64) (string, error) {
+ claims := make(jwt.MapClaims)
+ claims["exp"] = iat + seconds
+ claims["iat"] = iat
+ claims["userId"] = userId
+ token := jwt.New(jwt.SigningMethodHS256)
+ token.Claims = claims
+ return token.SignedString([]byte(secretKey))
+}
+```
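
For reference, `SignedString` above comes from the jwt library; for HS256 its mechanics are simple enough to sketch with only the standard library. The following is an illustrative reconstruction (base64url-encode header and claims, then HMAC-SHA256 sign), not the library's actual code:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// b64 is base64url without padding, as required by the JWT spec (RFC 7515).
func b64(data []byte) string {
	return base64.RawURLEncoding.EncodeToString(data)
}

// signHS256 builds a compact JWT the same way token.SignedString does for HS256.
func signHS256(claims map[string]interface{}, secret []byte) (string, error) {
	header, err := json.Marshal(map[string]string{"alg": "HS256", "typ": "JWT"})
	if err != nil {
		return "", err
	}
	payload, err := json.Marshal(claims)
	if err != nil {
		return "", err
	}
	signingInput := b64(header) + "." + b64(payload)
	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(signingInput))
	return signingInput + "." + b64(mac.Sum(nil)), nil
}

func main() {
	token, _ := signHS256(map[string]interface{}{
		"exp":    int64(1612867074),
		"iat":    int64(1612780674),
		"userId": int64(1),
	}, []byte("my-secret"))
	fmt.Println(token)
}
```

Decoding the first two dot-separated segments of any `accessToken` produced by the login endpoint shows exactly this header-and-claims layout.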
+
+### search.api uses jwt token authentication
+#### Write search.api file
+```shell
+$ vim service/search/cmd/api/search.api
+```
+```text
+type (
+ SearchReq {
+ Name string `form:"name"`
+ }
+
+ SearchReply {
+ Name string `json:"name"`
+ Count int `json:"count"`
+ }
+)
+
+@server(
+ jwt: Auth
+)
+service search-api {
+ @handler search
+ get /search/do (SearchReq) returns (SearchReply)
+}
+
+service search-api {
+ @handler ping
+ get /search/ping
+}
+```
+
+> [!TIP]
+> `jwt: Auth`: Enable jwt authentication
+>
+> If the routing requires JWT authentication, you need to declare this syntax flag above the service, such as `/search/do` above
+>
+> Routes that do not require jwt authentication do not need to be declared, such as `/search/ping` above
+>
+> For more syntax, please read [API IDL](api-grammar.md)
+
+
+#### Generate code
+As described above, there are three ways to generate code, so I won’t go into details here.
+
+
+#### Add yaml configuration items
+```shell
+$ vim service/search/cmd/api/etc/search-api.yaml
+```
+```yaml
+Name: search-api
+Host: 0.0.0.0
+Port: 8889
+Auth:
+ AccessSecret: $AccessSecret
+ AccessExpire: $AccessExpire
+
+```
+
+> [!TIP]
+> $AccessSecret: This value must be consistent with the one declared in the user api.
+>
+> $AccessExpire: Validity period
+>
+> Modify the port here to avoid conflicts with user api port 8888
+
+### Verify jwt token
+* Start the user api service and log in
+ ```shell
+ $ cd service/user/cmd/api
+ $ go run user.go -f etc/user-api.yaml
+ ```
+ ```text
+ Starting server at 0.0.0.0:8888...
+ ```
+ ```shell
+ $ curl -i -X POST \
+ http://127.0.0.1:8888/user/login \
+ -H 'content-type: application/json' \
+ -d '{
+ "username":"666",
+ "password":"123456"
+ }'
+ ```
+ ```text
+ HTTP/1.1 200 OK
+ Content-Type: application/json
+ Date: Mon, 08 Feb 2021 10:37:54 GMT
+ Content-Length: 251
+
+ {"id":1,"name":"xiaoming","gender":"male","accessToken":"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2MTI4NjcwNzQsImlhdCI6MTYxMjc4MDY3NCwidXNlcklkIjoxfQ.JKa83g9BlEW84IiCXFGwP2aSd0xF3tMnxrOzVebbt80","accessExpire":1612867074,"refreshAfter":1612823874}
+ ```
+* Start the search api service and call `/search/do` to verify whether jwt authentication passes
+ ```shell
+ $ go run search.go -f etc/search-api.yaml
+ ```
+ ```text
+ Starting server at 0.0.0.0:8889...
+ ```
+  First, let's call it without a jwt token and see the result:
+ ```shell
+ $ curl -i -X GET \
+ 'http://127.0.0.1:8889/search/do?name=%E8%A5%BF%E6%B8%B8%E8%AE%B0'
+ ```
+ ```text
+ HTTP/1.1 401 Unauthorized
+ Date: Mon, 08 Feb 2021 10:41:57 GMT
+ Content-Length: 0
+ ```
+  Obviously, jwt authentication failed and a 401 statusCode was returned. Next, let's retry with a jwt token (the `accessToken` returned by the user login):
+ ```shell
+ $ curl -i -X GET \
+ 'http://127.0.0.1:8889/search/do?name=%E8%A5%BF%E6%B8%B8%E8%AE%B0' \
+ -H 'authorization: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2MTI4NjcwNzQsImlhdCI6MTYxMjc4MDY3NCwidXNlcklkIjoxfQ.JKa83g9BlEW84IiCXFGwP2aSd0xF3tMnxrOzVebbt80'
+ ```
+ ```text
+ HTTP/1.1 200 OK
+ Content-Type: application/json
+ Date: Mon, 08 Feb 2021 10:44:45 GMT
+ Content-Length: 21
+
+ {"name":"","count":0}
+ ```
+
+ > [!TIP]
+  > If the service fails to start, please check [Error](error.md)
+
+
+At this point, the demonstration of jwt, from generation to use, is complete. Jwt token authentication is already encapsulated in go-zero; you only need to declare it in the api file when defining the service.
+
+### Get the information carried in the jwt token
+After go-zero parses the jwt token, the kv pairs passed in when the token was generated are placed intact into the Context of http.Request, so we can read the values we want back through the Context.
+
+```shell
+$ vim /service/search/cmd/api/internal/logic/searchlogic.go
+```
+Add a log to output the userId parsed from jwt.
+```go
+func (l *SearchLogic) Search(req types.SearchReq) (*types.SearchReply, error) {
+    logx.Infof("userId: %v", l.ctx.Value("userId")) // The key here must match the key passed in when generating the jwt token
+ return &types.SearchReply{}, nil
+}
+```
+Output
+```text
+{"@timestamp":"2021-02-09T10:29:09.399+08","level":"info","content":"userId: 1"}
+```
+
+# Further Reading
+* [JWT](https://jwt.io/)
+* [API Configuration](api-config.md)
+* [API IDL](api-grammar.md)
diff --git a/go-zero.dev/en/learning-resource.md b/go-zero.dev/en/learning-resource.md
new file mode 100644
index 00000000..c96e62f2
--- /dev/null
+++ b/go-zero.dev/en/learning-resource.md
@@ -0,0 +1,8 @@
+# Learning Resources
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+go-zero's learning resource channels are updated from time to time. The channels currently included are:
+* [Wechat](wechat.md)
+* [Night](goreading.md)
+* [OpenTalk](gotalk.md)
\ No newline at end of file
diff --git a/go-zero.dev/en/log-collection.md b/go-zero.dev/en/log-collection.md
new file mode 100644
index 00000000..ee819092
--- /dev/null
+++ b/go-zero.dev/en/log-collection.md
@@ -0,0 +1,144 @@
+# Log Collection
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+In order to ensure stable operation of the business and to spot unhealthy service risks early, log collection helps us observe the current health of the service.
+In traditional business development, when not many machines are deployed, we usually log in to the server directly to view and debug logs. But as the business grows and services keep being split,
+the maintenance cost of the services becomes more and more complicated. In a distributed system there are many more server machines, and a service is spread across different servers. When problems occur,
+we can't use the traditional method of logging in to each server for log investigation and debugging; the complexity can be imagined.
+
+![log-flow](./resource/log-flow.png)
+
+> [!TIP]
+> If it is a simple single-service system, or the service scale is small, this log collection stack is not recommended; it would be counterproductive.
+
+## Prepare
+* kafka
+* elasticsearch
+* kibana
+* filebeat, Log-Pilot (k8s)
+* go-stash
+
+## Filebeat
+```shell
+$ vim xx/filebeat.yaml
+```
+
+```yaml
+filebeat.inputs:
+- type: log
+ enabled: true
+ # Turn on json parsing
+ json.keys_under_root: true
+ json.add_error_key: true
+ # Log file path
+ paths:
+ - /var/log/order/*.log
+
+setup.template.settings:
+ index.number_of_shards: 1
+
+# Define kafka topic field
+fields:
+ log_topic: log-collection
+
+# Export to kafka
+output.kafka:
+ hosts: ["127.0.0.1:9092"]
+ topic: '%{[fields.log_topic]}'
+ partition.round_robin:
+ reachable_only: false
+ required_acks: 1
+ keep_alive: 10s
+
+# ================================= Processors =================================
+processors:
+ - decode_json_fields:
+ fields: ['@timestamp','level','content','trace','span','duration']
+ target: ""
+```
+
+> [!TIP]
+> xx is the path where filebeat.yaml is located
+
+## go-stash configuration
+* Create a new `config.yaml` file
+* Add configuration content
+
+```shell
+$ vim config.yaml
+```
+
+```yaml
+Clusters:
+- Input:
+ Kafka:
+ Name: go-stash
+ Log:
+ Mode: file
+ Brokers:
+ - "127.0.0.1:9092"
+ Topics:
+ - log-collection
+ Group: stash
+ Conns: 3
+ Consumers: 10
+ Processors: 60
+ MinBytes: 1048576
+ MaxBytes: 10485760
+ Offset: first
+ Filters:
+ - Action: drop
+ Conditions:
+ - Key: status
+ Value: "503"
+ Type: contains
+ - Key: type
+ Value: "app"
+ Type: match
+ Op: and
+ - Action: remove_field
+ Fields:
+ - source
+ - _score
+ - "@metadata"
+ - agent
+ - ecs
+ - input
+ - log
+ - fields
+ Output:
+ ElasticSearch:
+ Hosts:
+ - "http://127.0.0.1:9200"
+ Index: "go-stash-{{yyyy.MM.dd}}"
+ MaxChunkBytes: 5242880
+ GracePeriod: 10s
+ Compress: false
+ TimeZone: UTC
+```
+
+## Start services (start in order)
+* Start kafka
+* Start elasticsearch
+* Start kibana
+* Start go-stash
+* Start filebeat
+* Start the order-api service and its dependent services (order-api service in the go-zero-demo project)
+
+## Visit kibana
+Open http://127.0.0.1:5601 in your browser.
+![log](./resource/log.png)
+
+> [!TIP]
+> Here we only demonstrate collecting the logs generated by logx in the service; collecting nginx logs works the same way.
+
+
+# Reference
+* [kafka](http://kafka.apache.org/)
+* [elasticsearch](https://www.elastic.co/cn/elasticsearch/)
+* [kibana](https://www.elastic.co/cn/kibana)
+* [filebeat](https://www.elastic.co/cn/beats/filebeat)
+* [go-stash](https://github.com/tal-tech/go-stash)
+* [filebeat guide](https://www.elastic.co/guide/en/beats/filebeat/current/index.html)
diff --git a/go-zero.dev/en/logx.md b/go-zero.dev/en/logx.md
new file mode 100644
index 00000000..6b8d7ad4
--- /dev/null
+++ b/go-zero.dev/en/logx.md
@@ -0,0 +1,186 @@
+# logx
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+## Example
+
+```go
+var c logx.LogConf
+// Initialize the configuration from the yaml file
+conf.MustLoad("config.yaml", &c)
+
+// logx is initialized according to the configuration
+logx.MustSetup(c)
+
+logx.Info("This is info!")
+logx.Infof("This is %s!", "info")
+
+logx.Error("This is error!")
+logx.Errorf("this is %s!", "error")
+
+logx.Close()
+```
+
+## Initialization
+logx has many configurable items; you can refer to the definition in logx.LogConf.
+
+```go
+logx.MustSetup(c)
+```
+This performs the initial configuration. If it is not performed, all configurations use their default values.
+
+## Level
+The print log levels supported by logx are:
+- info
+- error
+- severe
+- fatal
+- slow
+- stat
+
+You can use the corresponding method to print out the log of the corresponding level.
+At the same time, in order to facilitate debugging and online use, the log printing level can be dynamically adjusted. The level can be set through **logx.SetLevel(uint32)** or through configuration initialization. The currently supported parameters are:
+
+```go
+const (
+ // Print all levels of logs
+    InfoLevel = iota
+ // Print errors, slows, stacks logs
+ ErrorLevel
+    // Only print severe level logs
+ SevereLevel
+)
+```
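
To illustrate how these constants gate output, here is a stdlib sketch of the idea behind level filtering (illustrative only, not logx's actual implementation):

```go
package main

import "fmt"

// Mirror of the level constants above, for illustration only.
const (
	InfoLevel uint32 = iota
	ErrorLevel
	SevereLevel
)

// shouldLog reports whether a message at msgLevel is emitted
// when the logger is configured at cfgLevel.
func shouldLog(cfgLevel, msgLevel uint32) bool {
	return msgLevel >= cfgLevel
}

func main() {
	fmt.Println(shouldLog(ErrorLevel, InfoLevel))   // false: info is suppressed
	fmt.Println(shouldLog(ErrorLevel, SevereLevel)) // true: severe still prints
}
```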
+
+## Log mode
+At present there are two main log printing modes: file output and console output. The recommended approach: when deploying with k8s, docker, etc., output logs to the console and use a log collector to ship them to es for log analysis. For direct deployment, the file output mode can be used; logx automatically creates log files for the 5 corresponding levels in the specified directory to save the logs.
+
+```bash
+.
+├── access.log
+├── error.log
+├── severe.log
+├── slow.log
+└── stat.log
+```
+
+At the same time, the files are split by calendar day. When the configured number of retention days is exceeded, old log files are automatically deleted or packaged, among other operations.
+
+## Disable log
+If you don't need log printing, you can use **logx.Close()** to close the log output. Note that when log output is disabled, it cannot be opened again. For details, please refer to the implementation of **logx.RotateLogger** and **logx.DailyRotateRule**.
+
+## Close log
+Because logx uses asynchronous log output, if the log is not closed normally, some logs may be lost. The log output must be turned off where the program exits:
+```go
+logx.Close()
+```
+Log configuration and shutdown related operations have already been done in most places such as rest and zrpc in the framework, so users don't need to care.
+At the same time, note that when the log output is turned off, the log cannot be printed again.
+
+Recommended writing:
+```go
+import "github.com/tal-tech/go-zero/core/proc"
+
+// grace close log
+proc.AddShutdownListener(func() {
+ logx.Close()
+})
+```
+
+## Duration
+When we print the log, we may need to print the time-consuming situation, we can use **logx.WithDuration(time.Duration)**, refer to the following example:
+
+```go
+startTime := timex.Now()
+// Database query
+rows, err := conn.Query(q, args...)
+duration := timex.Since(startTime)
+if duration > slowThreshold {
+ logx.WithDuration(duration).Slowf("[SQL] query: slowcall - %s", stmt)
+} else {
+ logx.WithDuration(duration).Infof("sql query: %s", stmt)
+}
+```
+
+
+Will output the following format:
+
+```json
+{"@timestamp":"2020-09-12T01:22:55.552+08","level":"info","duration":"3.0ms","content":"sql query:..."}
+{"@timestamp":"2020-09-12T01:22:55.552+08","level":"slow","duration":"500ms","content":"[SQL] query: slowcall - ..."}
+```
+
+In this way, it is easy to collect statistics about slow sql related information.
+
+## TraceLog
+tracingEntry is customized for tracing log output: it prints the traceId and spanId information in the context. Combined with our **rest** and **zrpc**, it is easy to print linked trace logs through the whole call chain. The example is as follows:
+
+```go
+logx.WithContext(ctx).Info("This is info!")
+```
+
+
+## SysLog
+
+Some applications may use the system log package for printing. logx encapsulates it with the same API, making it easy to collect system logs into logx as well.
+
+```go
+logx.CollectSysLog()
+```
+
+
+
+
+# Log configuration
+**LogConf** defines the basic configuration required by the logging system.
+
+The complete definition is as follows:
+
+```go
+type LogConf struct {
+ ServiceName string `json:",optional"`
+ Mode string `json:",default=console,options=console|file|volume"`
+ Path string `json:",default=logs"`
+ Level string `json:",default=info,options=info|error|severe"`
+ Compress bool `json:",optional"`
+ KeepDays int `json:",optional"`
+ StackCooldownMillis int `json:",default=100"`
+}
+```
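
In a service's yaml file, these fields typically sit under a `Log` key (assuming the service config embeds logx.LogConf, as go-zero's rest/zrpc configs do; the values below are illustrative, not required settings):

```yaml
Log:
  ServiceName: search-api
  Mode: file
  Path: logs
  Level: info
  Compress: true
  KeepDays: 7
  StackCooldownMillis: 100
```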
+
+
+## Mode
+**Mode** defines the log printing method. The default mode is **console**, which will print to the console.
+
+The currently supported modes are as follows:
+
+- console
+ - Print to the console
+- file
+ - Print to access.log, error.log, stat.log and other files in the specified path
+- volume
+  - Print to storage mounted into the pod in k8s. Because multiple pods could otherwise overwrite the same file, the volume mode identifies each pod and writes a separate log file per pod.
+
+## Path
+**Path** defines the output path of the file log, the default value is **logs**.
+
+## Level
+**Level** defines the log printing level, and the default value is **info**.
+The currently supported levels are as follows:
+
+- info
+- error
+- severe
+
+
+
+## Compress
+**Compress** defines whether the log needs to be compressed, the default value is **false**. When Mode is file mode, the file will finally be packaged and compressed into a .gz file.
+
+
+## KeepDays
+**KeepDays** defines the maximum number of days to keep logs. The default value is 0, which means that old logs will not be deleted. When Mode is file mode, if the maximum retention days are exceeded, the old log files will be deleted.
+
+
+## StackCooldownMillis
+**StackCooldownMillis** defines the log output interval, the default is 100 milliseconds.
diff --git a/go-zero.dev/en/micro-service.md b/go-zero.dev/en/micro-service.md
new file mode 100644
index 00000000..e24a8186
--- /dev/null
+++ b/go-zero.dev/en/micro-service.md
@@ -0,0 +1,291 @@
+# Microservice
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+In the previous article, we demonstrated how to quickly create a monolithic service. Next, let's demonstrate how to quickly create a microservice.
+In this section, the api part follows the same creation logic as the monolithic service, except that a monolithic service has no inter-service communication,
+while the api service in a microservice carries additional rpc call configuration.
+
+## Forward
+This section briefly demonstrates an `order service` calling a `user service`. The demo code only conveys the idea; some steps are not listed one by one.
+
+## Scenario summary
+Suppose we are developing a mall project, and the developer Xiao Ming is responsible for the development of the user module (user) and the order module (order). Let's split these two modules into two microservices.①
+
+> [!NOTE]
+> ①: The splitting of microservices is also a science, and we will not discuss the details of how to split microservices here.
+
+## Demonstration function goal
+* Order service (order) provides a query interface
+* User service (user) provides a method for order service to obtain user information
+
+## Service design analysis
+From the scenario summary we know that order directly faces the user and serves data over the http protocol, and that order needs some basic user data internally. Since our services adopt a microservice architecture design,
+the two services (user, order) must exchange data. Data exchange between services is inter-service communication, and choosing a reasonable communication protocol is something a developer must consider.
+Communication can be done through http, rpc and other methods; here we choose rpc to implement inter-service communication. The scenario above should already illustrate "what is the role of the rpc service?".
+Of course, far more design analysis than this precedes developing a service; we will not describe it in detail here. From the above, we need:
+* user rpc
+* order api
+
+two services to initially implement this small demo.
+
+## Create mall project
+```shell
+$ cd ~/go-zero-demo
+$ mkdir mall && cd mall
+```
+
+## Create user rpc service
+
+* new user rpc
+ ```shell
+ $ cd ~/go-zero-demo/mall
+  $ mkdir -p user/rpc && cd user/rpc
+ ```
+
+* Add `user.proto` file, add `getUser` method
+
+ ```shell
+  $ vim ~/go-zero-demo/mall/user/rpc/user.proto
+ ```
+
+ ```protobuf
+ syntax = "proto3";
+
+ package user;
+
+ option go_package = "user";
+
+ message IdRequest {
+ string id = 1;
+ }
+
+ message UserResponse {
+ string id = 1;
+ string name = 2;
+ string gender = 3;
+ }
+
+ service User {
+ rpc getUser(IdRequest) returns(UserResponse);
+ }
+ ```
+* Generate code
+
+ ```shell
+ $ cd ~/go-zero-demo/mall/user/rpc
+ $ goctl rpc proto -src user.proto -dir .
+  [goctl version <= 1.2.1] protoc -I=/Users/xx/mall/user user.proto --goctl_out=plugins=grpc:/Users/xx/mall/user/user
+  [goctl version > 1.2.1] protoc -I=/Users/xx/mall/user user.proto --go_out=plugins=grpc:/Users/xx/mall/user/user
+ Done.
+ ```
+
+> [!TIP]
+> If the installed `protoc-gen-go` version is greater than 1.4.0, it is recommended to add `go_package` to the proto file
+
+
+* Fill in business logic
+
+ ```shell
+ $ vim internal/logic/getuserlogic.go
+ ```
+ ```go
+ package logic
+
+ import (
+ "context"
+
+ "go-zero-demo/mall/user/internal/svc"
+ "go-zero-demo/mall/user/user"
+
+ "github.com/tal-tech/go-zero/core/logx"
+ )
+
+ type GetUserLogic struct {
+ ctx context.Context
+ svcCtx *svc.ServiceContext
+ logx.Logger
+ }
+
+ func NewGetUserLogic(ctx context.Context, svcCtx *svc.ServiceContext) *GetUserLogic {
+ return &GetUserLogic{
+ ctx: ctx,
+ svcCtx: svcCtx,
+ Logger: logx.WithContext(ctx),
+ }
+ }
+
+ func (l *GetUserLogic) GetUser(in *user.IdRequest) (*user.UserResponse, error) {
+ return &user.UserResponse{
+ Id: "1",
+ Name: "test",
+ }, nil
+ }
+ ```
+
+## Create order api service
+* Create an `order api` service
+
+ ```shell
+ $ cd ~/go-zero-demo/mall
+  $ mkdir -p order/api && cd order/api
+ ```
+
+* Add api file
+ ```shell
+ $ vim order.api
+ ```
+ ```go
+ type(
+ OrderReq {
+ Id string `path:"id"`
+ }
+
+ OrderReply {
+ Id string `json:"id"`
+ Name string `json:"name"`
+ }
+ )
+ service order {
+ @handler getOrder
+ get /api/order/get/:id (OrderReq) returns (OrderReply)
+ }
+ ```
+* Generate `order` service
+ ```shell
+ $ goctl api go -api order.api -dir .
+ Done.
+ ```
+* Add user rpc configuration
+
+ ```shell
+ $ vim internal/config/config.go
+ ```
+ ```go
+ package config
+
+ import "github.com/tal-tech/go-zero/rest"
+ import "github.com/tal-tech/go-zero/zrpc"
+
+ type Config struct {
+ rest.RestConf
+ UserRpc zrpc.RpcClientConf
+ }
+ ```
+* Add yaml configuration
+
+ ```shell
+ $ vim etc/order.yaml
+ ```
+ ```yaml
+ Name: order
+ Host: 0.0.0.0
+ Port: 8888
+ UserRpc:
+ Etcd:
+ Hosts:
+ - 127.0.0.1:2379
+ Key: user.rpc
+ ```
+* Improve the service dependencies
+
+ ```shell
+ $ vim internal/svc/servicecontext.go
+ ```
+ ```go
+ package svc
+
+ import (
+ "go-zero-demo/mall/order/api/internal/config"
+ "go-zero-demo/mall/user/rpc/userclient"
+
+ "github.com/tal-tech/go-zero/zrpc"
+ )
+
+ type ServiceContext struct {
+ Config config.Config
+ UserRpc userclient.User
+ }
+
+ func NewServiceContext(c config.Config) *ServiceContext {
+ return &ServiceContext{
+ Config: c,
+ UserRpc: userclient.NewUser(zrpc.MustNewClient(c.UserRpc)),
+ }
+ }
+ ```
+
+* Add order demo logic
+
+ Add business logic to `getorderlogic`
+ ```shell
+ $ vim ~/go-zero-demo/mall/order/api/internal/logic/getorderlogic.go
+ ```
+ ```go
+ user, err := l.svcCtx.UserRpc.GetUser(l.ctx, &userclient.IdRequest{
+ Id: "1",
+ })
+ if err != nil {
+ return nil, err
+ }
+
+ if user.Name != "test" {
+ return nil, errors.New("User does not exist")
+ }
+
+ return &types.OrderReply{
+ Id: req.Id,
+ Name: "test order",
+ }, nil
+ ```
+
+## Start the service and verify
+* Start etcd
+ ```shell
+ $ etcd
+ ```
+* Start user rpc
+ ```shell
+ $ go run user.go -f etc/user.yaml
+ ```
+ ```text
+ Starting rpc server at 127.0.0.1:8080...
+ ```
+
+* Start order api
+ ```shell
+ $ go run order.go -f etc/order.yaml
+ ```
+ ```text
+ Starting server at 0.0.0.0:8888...
+ ```
+* Access order api
+ ```shell
+ curl -i -X GET \
+ http://localhost:8888/api/order/get/1
+ ```
+
+ ```text
+ HTTP/1.1 200 OK
+ Content-Type: application/json
+ Date: Sun, 07 Feb 2021 03:45:05 GMT
+ Content-Length: 30
+
+ {"id":"1","name":"test order"}
+ ```
+
+> [!TIP]
+> The api syntax used in the demo, rpc generation, and goctl installation, usage and environment are not covered in detail in this quick start; detailed documents will follow. You can also click the links below to jump to the corresponding documents.
+
+# Source code
+[mall source code](https://github.com/zeromicro/go-zero-demo/tree/master/mall)
+
+# Further Reading
+* [Goctl](goctl.md)
+* [API Directory Structure](api-dir.md)
+* [API IDL](api-grammar.md)
+* [API Configuration](api-config.md)
+* [Middleware](middleware.md)
+* [RPC Directory Structure](rpc-dir.md)
+* [RPC Configuration](rpc-config.md)
+* [RPC Implement & Call](rpc-call.md)
diff --git a/go-zero.dev/en/middleware.md b/go-zero.dev/en/middleware.md
new file mode 100644
index 00000000..f3abea01
--- /dev/null
+++ b/go-zero.dev/en/middleware.md
@@ -0,0 +1,127 @@
+# Middleware
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+In the previous section, we demonstrated how to use jwt authentication. I believe you have mastered the basic use of jwt. In this section, let’s take a look at how to use api service middleware.
+
+## Middleware classification
+In go-zero, middleware can be divided into routing middleware and global middleware. Routing middleware applies only to specific routes: like `jwt`, it is declared in a `@server` block, and routes not placed under that block do not use it.
+Global middleware applies to the entire service.
+
+## Middleware use
+Here we take the `search` service as an example to demonstrate the use of middleware
+
+### Routing middleware
+* Rewrite the `search.api` file and add the `middleware` declaration
+ ```shell
+ $ cd service/search/cmd/api
+ $ vim search.api
+ ```
+ ```text
+ type SearchReq struct {}
+
+ type SearchReply struct {}
+
+ @server(
+ jwt: Auth
+ middleware: Example // Routing middleware declaration
+ )
+ service search-api {
+ @handler search
+ get /search/do (SearchReq) returns (SearchReply)
+ }
+ ```
+* Regenerate the api code
+ ```shell
+ $ goctl api go -api search.api -dir .
+ ```
+ ```text
+ etc/search-api.yaml exists, ignored generation
+ internal/config/config.go exists, ignored generation
+ search.go exists, ignored generation
+ internal/svc/servicecontext.go exists, ignored generation
+ internal/handler/searchhandler.go exists, ignored generation
+ internal/handler/pinghandler.go exists, ignored generation
+ internal/logic/searchlogic.go exists, ignored generation
+ internal/logic/pinglogic.go exists, ignored generation
+ Done.
+ ```
+  After generation completes, there is an additional `middleware` directory under the `internal` directory containing the middleware files; the middleware implementation logic is written here.
+* Improve resource dependency `ServiceContext`
+ ```shell
+ $ vim service/search/cmd/api/internal/svc/servicecontext.go
+ ```
+ ```go
+ type ServiceContext struct {
+ Config config.Config
+ Example rest.Middleware
+ }
+
+ func NewServiceContext(c config.Config) *ServiceContext {
+ return &ServiceContext{
+ Config: c,
+ Example: middleware.NewExampleMiddleware().Handle,
+ }
+ }
+ ```
+* Write middleware logic
+  Here we add only one log line with the content `example middle`. If the running service outputs `example middle`, the middleware is working.
+
+ ```shell
+ $ vim service/search/cmd/api/internal/middleware/examplemiddleware.go
+ ```
+ ```go
+  package middleware
+
+  import (
+      "net/http"
+
+      "github.com/tal-tech/go-zero/core/logx"
+  )
+
+  type ExampleMiddleware struct {
+  }
+
+  func NewExampleMiddleware() *ExampleMiddleware {
+      return &ExampleMiddleware{}
+  }
+
+  func (m *ExampleMiddleware) Handle(next http.HandlerFunc) http.HandlerFunc {
+      return func(w http.ResponseWriter, r *http.Request) {
+          // Print a log line so we can verify the middleware is invoked
+          logx.Info("example middle")
+
+          // Pass through to the next handler
+          next(w, r)
+      }
+  }
+ ```
+* Start service verification
+ ```text
+ {"@timestamp":"2021-02-09T11:32:57.931+08","level":"info","content":"example middle"}
+ ```
+
+### Global middleware
+call `rest.Server.Use`
+```go
+func main() {
+ flag.Parse()
+
+ var c config.Config
+ conf.MustLoad(*configFile, &c)
+
+ ctx := svc.NewServiceContext(c)
+ server := rest.MustNewServer(c.RestConf)
+ defer server.Stop()
+
+ // Global middleware
+ server.Use(func(next http.HandlerFunc) http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ logx.Info("global middleware")
+ next(w, r)
+ }
+ })
+ handler.RegisterHandlers(server, ctx)
+
+ fmt.Printf("Starting server at %s:%d...\n", c.Host, c.Port)
+ server.Start()
+}
+```
+```text
+{"@timestamp":"2021-02-09T11:50:15.388+08","level":"info","content":"global middleware"}
+```
\ No newline at end of file
diff --git a/go-zero.dev/en/model-gen.md b/go-zero.dev/en/model-gen.md
new file mode 100644
index 00000000..0837d399
--- /dev/null
+++ b/go-zero.dev/en/model-gen.md
@@ -0,0 +1,59 @@
+# Model Generation
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+
+First, after downloading the [demo project](https://go-zero.dev/en/resource/book.zip), we will use the user's model to demonstrate the code generation.
+
+## Forward
+Model is the bridge for services to access the persistent data layer. The persistent data of a business often lives in databases such as mysql and mongo. We all know that operating a database is nothing more than CRUD,
+and these tasks also take up part of the development time. I once wrote 40 model files while building one business; depending on the complexity of the business requirements, each model file took almost
+10 minutes on average. For 40 files that is 400 minutes, almost a day's workload, while the goctl tool can complete that 400 minutes of work in 10 seconds.
+
+## Prepare
+Enter the demo project `book`, find the `user.sql` file under `user/model`, and execute the table creation statement in your own database.
+
+## Code generation (with cache)
+### The way one (ddl)
+Enter the `service/user/model` directory and execute the command
+```shell
+$ cd service/user/model
+$ goctl model mysql ddl -src user.sql -dir . -c
+```
+```text
+Done.
+```
+
+### The way two (datasource)
+```shell
+$ goctl model mysql datasource -url="$datasource" -table="user" -c -dir .
+```
+```text
+Done.
+```
+> [!TIP]
+> `$datasource` is the database connection address
+
+### The way three (intellij plugin)
+In Goland, right-click `user.sql`, then click `New`->`Go Zero`->`Model Code` to generate, or open the `user.sql` file,
+enter the editing area, use the shortcut `Command+N` (macOS) or `Alt+Insert` (Windows), and select `Model Code`.
+
+![model generation](https://zeromicro.github.io/go-zero-pages/resource/intellij-model.png)
+
+> [!TIP]
+> The intellij plug-in generation needs to install the goctl plug-in, see [intellij plugin](intellij.md) for details
+
+## Verify the generated model file
+View the directory tree:
+```shell
+$ tree
+```
+```text
+.
+├── user.sql
+├── usermodel.go
+└── vars.go
+```
+
+# Further Reading
+[Model Commands](goctl-model.md)
diff --git a/go-zero.dev/en/monolithic-service.md b/go-zero.dev/en/monolithic-service.md
new file mode 100644
index 00000000..f3641d20
--- /dev/null
+++ b/go-zero.dev/en/monolithic-service.md
@@ -0,0 +1,94 @@
+# Monolithic Service
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+## Forward
+Since go-zero integrates web/rpc, some friends in the community will ask me whether go-zero is positioned as a microservice framework.
+The answer is no. Although go-zero integrates many functions, you can use any one of them independently, or you can develop a single service.
+
+Not every service must adopt a microservice architecture design. On this point, you can watch the fourth episode of the author (kevin)'s [OpenTalk](https://www.bilibili.com/video/BV1Jy4y127Xu), which explains it in detail.
+
+## Create greet service
+```shell
+$ cd ~/go-zero-demo
+$ goctl api new greet
+Done.
+```
+
+Take a look at the structure of the `greet` service
+```shell
+$ cd greet
+$ tree
+```
+```text
+.
+├── etc
+│ └── greet-api.yaml
+├── go.mod
+├── greet.api
+├── greet.go
+└── internal
+ ├── config
+ │ └── config.go
+ ├── handler
+ │ ├── greethandler.go
+ │ └── routes.go
+ ├── logic
+ │ └── greetlogic.go
+ ├── svc
+ │ └── servicecontext.go
+ └── types
+ └── types.go
+```
+It can be observed from the above directory structure that although the `greet` service is small, it is complete in every part. Next, we can write business code in `greetlogic.go`.
+
+## Write logic
+```shell
+$ vim ~/go-zero-demo/greet/internal/logic/greetlogic.go
+```
+```go
+func (l *GreetLogic) Greet(req types.Request) (*types.Response, error) {
+ return &types.Response{
+ Message: "Hello go-zero",
+ }, nil
+}
+```
+
+## Start and access the service
+
+* Start service
+ ```shell
+  $ cd ~/go-zero-demo/greet
+ $ go run greet.go -f etc/greet-api.yaml
+ ```
+ ```text
+ Starting server at 0.0.0.0:8888...
+ ```
+
+* Access service
+ ```shell
+ $ curl -i -X GET \
+ http://localhost:8888/from/you
+ ```
+
+ ```text
+ HTTP/1.1 200 OK
+ Content-Type: application/json
+ Date: Sun, 07 Feb 2021 04:31:25 GMT
+ Content-Length: 27
+
+ {"message":"Hello go-zero"}
+ ```
+
+# Source code
+[greet source code](https://github.com/zeromicro/go-zero-demo/tree/master/greet)
+
+# Guess you want
+* [Goctl](goctl.md)
+* [API Directory Structure](api-dir.md)
+* [API IDL](api-grammar.md)
+* [API Configuration](api-config.md)
+* [Middleware](middleware.md)
+
+
+
diff --git a/go-zero.dev/en/mysql.md b/go-zero.dev/en/mysql.md
new file mode 100644
index 00000000..1a41b363
--- /dev/null
+++ b/go-zero.dev/en/mysql.md
@@ -0,0 +1,182 @@
+# Mysql
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+`go-zero` provides a simpler API for operating `mysql`.
+
+> [!TIP]
+> Note that `stores/mysql` is not positioned as an `orm` framework. If you need to reverse-generate `model` layer code from `sql/scheme` -> `model/struct`, you can use [goctl model](https://go-zero.dev/cn/goctl-model.html), which is an excellent feature.
+
+
+
+## Features
+
+- Provides a more developer-friendly API than the native one
+- Automatically assigns query fields to the `struct` (`queryField -> struct`)
+- Batch inserts via `bulkinserter`
+- Built-in circuit breaker
+- The API has been continuously tested by several production services
+- Supports `partial assignment`: a strict one-to-one mapping to the `struct` is not forced
+
+
+
+## Connection
+Let's briefly explain with an example how to create a model connected to `mysql`:
+```go
+// 1. Quickly connect to a mysql
+// datasource: mysql dsn
+heraMysql := sqlx.NewMysql(datasource)
+
+// 2. Call in the `servicecontext`, understand the logic layer call of the model upper layer
+model.NewMysqlModel(heraMysql, tablename),
+
+// 3. model layer mysql operation
+func NewMysqlModel(conn sqlx.SqlConn, table string) *MysqlModel {
+    // 4. Create a batch inserter [mysql executor]
+    // conn: mysql connection; insertsql: mysql insert sql
+    bulkInserter, err := sqlx.NewBulkInserter(conn, insertsql)
+    if err != nil {
+        logx.Error("init bulkInserter failed")
+        panic("init bulkInserter failed")
+    }
+    return &MysqlModel{conn: conn, table: table, Bulk: bulkInserter}
+}
+```
+
+
+## CRUD
+
+Prepare a `User` model
+```go
+var userBuilderQueryRows = strings.Join(builderx.FieldNames(&User{}), ",")
+
+type User struct {
+ Avatar string `db:"avatar"`
+ UserName string `db:"user_name"`
+ Sex int `db:"sex"`
+ MobilePhone string `db:"mobile_phone"`
+}
+```
+Here, `userBuilderQueryRows` uses the `struct -> [field...]` conversion provided by `go-zero`. Developers can use this directly as a template.
+### insert
+```go
+// An actual insert model layer operation
+func (um *UserModel) Insert(user *User) (int64, error) {
+    // note: the number of placeholders must match the number of inserted fields
+    const insertsql = `insert into ` + um.table + ` (` + userBuilderQueryRows + `) values(?, ?, ?, ?)`
+    // insert op
+    res, err := um.conn.Exec(insertsql, user.Avatar, user.UserName, user.Sex, user.MobilePhone)
+    if err != nil {
+        logx.Errorf("insert User Model err, err=%v", err)
+        return -1, err
+    }
+    id, err := res.LastInsertId()
+    if err != nil {
+        logx.Errorf("insert User Model parse id err, err=%v", err)
+        return -1, err
+    }
+    return id, nil
+}
+```
+
+- Splice the `insertsql` statement
+- Pass `insertsql` and the `struct fields` corresponding to the placeholders -> `conn.Exec(insertsql, field...)`
+
+
+> [!WARNING]
+> `conn.Exec(sql, args...)`: `args...` needs to correspond to the placeholder in `sql`. Otherwise, there will be problems with assignment exceptions.
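The correspondence between placeholders and arguments can be checked mechanically before calling `Exec`. A minimal sketch, assuming the helper below (it is hypothetical, not part of go-zero):

```go
package main

import (
	"fmt"
	"strings"
)

// placeholdersMatch reports whether the number of `?` placeholders in the
// statement equals the number of arguments that would be passed to Exec.
func placeholdersMatch(sql string, args ...interface{}) bool {
	return strings.Count(sql, "?") == len(args)
}

func main() {
	const insertsql = `insert into user (avatar, user_name, sex, mobile_phone) values(?, ?, ?, ?)`
	fmt.Println(placeholdersMatch(insertsql, "a.png", "alice", 1, "13000000000")) // true
}
```

A mismatch here is exactly the "assignment exception" the warning above describes.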
+
+
+`go-zero` uniformly abstracts all `mysql` modification operations as `Exec()`, so `insert/update/delete` work essentially the same way. Developers can try the remaining two operations following the `insert` process above.
+
+
+### query
+
+
+You only need to pass in the `querysql` and the `model` struct to get back an assigned `model`; no manual assignment by the developer is needed.
+```go
+func (um *UserModel) FindOne(uid int64) (*User, error) {
+ var user User
+ const querysql = `select `+userBuilderQueryRows+` from `+um.table+` where id=? limit 1`
+ err := um.conn.QueryRow(&user, querysql, uid)
+ if err != nil {
+ logx.Errorf("userId.findOne error, id=%d, err=%s", uid, err.Error())
+ if err == sqlx.ErrNotFound {
+ return nil, ErrNotFound
+ }
+ return nil, err
+ }
+ return &user, nil
+}
+```
+
+- Declare `model struct`, splicing `querysql`
+- `conn.QueryRow(&model, querysql, args...)`: `args...` corresponds to the placeholder in `querysql`.
+
+
+
+> [!WARNING]
+> The first parameter of `QueryRow()` must be a pointer (`Ptr`), because reflection is used under the hood to assign values to the `struct`.
+
+The above queries a single record. If you need to query multiple records, you can use `conn.QueryRows()`:
+```go
+func (um *UserModel) FindBySex(sex int) ([]*User, error) {
+    users := make([]*User, 0)
+    const querysql = `select ` + userBuilderQueryRows + ` from ` + um.table + ` where sex=?`
+    err := um.conn.QueryRows(&users, querysql, sex)
+    if err != nil {
+        logx.Errorf("usersSex.find error, sex=%d, err=%s", sex, err.Error())
+        if err == sqlx.ErrNotFound {
+            return nil, ErrNotFound
+        }
+        return nil, err
+    }
+    return users, nil
+}
+```
+The difference from `QueryRow()` is that the `model` must be a `Slice`, because multiple rows are queried and multiple `model`s are assigned. Note, however, that the first parameter must still be a pointer (`Ptr`).
+
+### querypartial
+
+
+In terms of use, it is no different from `QueryRow()` above, which reflects the highly abstract design of `go-zero`.
+
+
+The differences:
+
+- `QueryRow()`: `len(querysql fields) == len(struct fields)`, in strict one-to-one correspondence
+- `QueryRowPartial()`: `len(querysql fields) <= len(struct fields)`
+
+
+
+numA: Number of database fields; numB: the number of defined `struct` attributes.
+If `numA < numB`, the extra `struct` attributes are simply left unassigned.
+
+# Naming Rules
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+Every programming language field has its own naming conventions. Good naming can:
+* Reduce the cost of reading code
+* Reduce maintenance difficulty
+* Reduce code complexity
+
+## Specification suggestion
+In actual development, many developers move from one language to another. After switching,
+we all tend to keep the programming habits of the old language. My suggestion is that, although some conventions of different languages may be the same,
+we had better get familiar with the official demos to gradually adapt to the programming conventions of the current language, rather than directly migrating the conventions of the original language.
+
+## Naming guidelines
+* When the distance between a variable's definition and its last use is short, a short name looks better.
+* Variable names should describe their content, not their type
+* Constant names should describe their value, not how the value is used
+* In loops and branches such as `for` and `if`, single-letter names are fine; for parameters and return values, words are recommended
+* Name methods, interfaces, types and packages with words
+* The package name is part of every exported name; make good use of it
+* Use a consistent naming style
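A tiny illustration of these guidelines; the code is hypothetical and exists only to show the style:

```go
package main

import "fmt"

// maxRetries describes its value, not how the value is used.
const maxRetries = 3

// userCount describes its content; the short-lived loop index uses a
// single letter, per the guideline for loops and branches.
func userCount(names []string) int {
	n := 0
	for i := 0; i < len(names); i++ {
		if names[i] != "" {
			n++
		}
	}
	return n
}

func main() {
	fmt.Println(userCount([]string{"alice", "", "bob"})) // 2
}
```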
+
+## File naming guidelines
+* All lowercase
+* Avoid underscores (_) except for unit test
+* The file name should not be too long
+
+## Variable naming convention reference
+* Lowercase first letter
+* Camel case naming
+* The name should reveal its meaning; avoid using pinyin in place of English
+* Underscores (_) are not recommended
+* Numbers are not recommended
+
+**Scope of application**
+* Local variables
+* Function input and output parameters
+
+## Function and constant naming convention
+* Camel case naming
+* The first letter of an exported name must be uppercase
+* The first letter of an unexported name must be lowercase
+* Avoid combining all uppercase letters with underscores (_)
+
+
+> [!TIP]
+> If it is a go-zero code contribution, you must strictly follow this naming convention
+
+
+# Reference
+* [Practical Go: Real world advice for writing maintainable Go programs](https://dave.cheney.net/practical-go/presentations/gophercon-singapore-2019.html#_simplicity)
\ No newline at end of file
diff --git a/go-zero.dev/en/online-exchange.md b/go-zero.dev/en/online-exchange.md
new file mode 100644
index 00000000..35271a26
--- /dev/null
+++ b/go-zero.dev/en/online-exchange.md
@@ -0,0 +1,156 @@
+# Summary of online communication issues on October 3, 2020
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+- Go-zero applicable scenarios
+ - I hope to talk about the application scenarios and the advantages of each scenario
+ - Highly concurrent microservice system
+ - Support tens of millions of daily activities, millions of QPS
+ - Complete microservice governance capabilities
+ - Support custom middleware
+ - Well managed database and cache
+ - Effectively isolate faults
+ - Monolithic system with low concurrency
+ - This kind of system can directly use the api layer without rpc service
+ - Use scenarios and use cases of each function
+ - Rate limiting
+ - Circuit breaking
+ - Load shedding
+ - Timeout
+ - Observability
+- The actual experience of go-zero
+ - The service is stable
+ - Front-end and back-end interface consistency, one api file can generate front-end and back-end code
+ - Less specification and less code means less bugs
+ - Eliminate api documents, greatly reducing communication costs
+ - The code structure is completely consistent, easy to maintain and take over
+- Project structure of microservices, CICD processing of monorepo
+
+```
+ bookstore
+ ├── api
+ │ ├── etc
+ │ └── internal
+ │ ├── config
+ │ ├── handler
+ │ ├── logic
+ │ ├── svc
+ │ └── types
+ └── rpc
+ ├── add
+ │ ├── adder
+ │ ├── etc
+ │ ├── internal
+ │ │ ├── config
+ │ │ ├── logic
+ │ │ ├── server
+ │ │ └── svc
+ │ └── pb
+ ├── check
+ │ ├── checker
+ │ ├── etc
+ │ ├── internal
+ │ │ ├── config
+ │ │ ├── logic
+ │ │ ├── server
+ │ │ └── svc
+ │ └── pb
+ └── model
+```
+
+The CI of the mono repo is done through gitlab, and the CD uses jenkins.
+CI is as strict as possible, e.g. `-race`, and uses tools such as sonar.
+CD covers development, testing, pre-release, grayscale and production clusters.
+If a release is in grayscale at 6 p.m. and no fault occurs, it is automatically synchronized to the production cluster at 10 the next morning.
+The production cluster is divided into multiple k8s clusters, which effectively guards against single-cluster failure: just remove the failed cluster directly, and cluster upgrades are also easier.
+- How to deploy and how to monitor?
+ - The full amount of K8S is automatically packaged into a docker image through jenkins, and the tag is packaged according to the time, so that you can see which day of the image is at a glance
+ - As mentioned above, pre-release -> grayscale -> formal
+ - Prometheus+ self-built dashboard service
+ - Detect service and request exceptions based on logs
+- If you plan to refactor your business with the go-zero framework, how do you keep the online business stable and let users switch without noticing? Also, how should services be divided?
+ - Gradually replace, from outside to inside, add a proxy to proofread, you can switch after proofreading a week
+ - If there is a database reconstruction, you need to do a good job of synchronizing the old and the new
+ - Service division is based on business, following the principle of coarse to fine, avoiding one api and one microservice
+ - Data splitting is particularly important for microservices. The upper layer is easy to split, and the data is difficult to split. As far as possible, ensure that the data is split according to the business
+- Service discovery
+ - Service discovery etcd key design
+ - Service key + timestamp, the probability of timestamp conflict in the number of service processes is extremely low, ignore it
+ - etcd service discovery and management, exception capture and exception handling
+ - Why k8s also uses etcd for service discovery, because the refresh of dns is delayed, resulting in a large number of failures in rolling updates, and etcd can achieve completely lossless updates
+ - The etcd cluster is directly deployed in the k8s cluster, because there are multiple formal clusters, clusters are single-pointed and registered to avoid confusion
+ - Automatically detect and refresh for etcd abnormalities or leader switching. When etcd has abnormalities that cannot be recovered, the service list will not be refreshed to ensure that the services are still available
+- Cache design and use cases
+ - Distributed multiple redis clusters, dozens of largest online clusters provide caching services for the same service
+ - Seamless expansion and contraction
+ - There is no cache without expiration time to avoid a large amount of infrequently used data occupying resources, the default is one week
+ - Cache penetration, no data will be cached for one minute for a short period of time to avoid the system crashing due to interface brushing or a large number of non-existent data requests
+ - Cache breakdown, a process will only refresh the same data once, avoiding a large number of hot data being loaded at the same time
+ - Cache avalanche, automatically jitter the cache expiration time, with a standard deviation of 5%, so that the expiration time of a week is distributed within 16 hours, effectively preventing avalanches
+ - Our online database has a cache, otherwise it will not be able to support massive concurrency
+ - Automatic cache management has been built into go-zero, and code can be automatically generated through goctl
+- Can you explain the design ideas of middleware and interceptor?
+
+ - Onion model
+ - This middleware processes, such as current limiting, fusing, etc., and then decides whether to call next
+ - next call
+ - Process the return result of the next call
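The onion model described above can be sketched with plain function wrapping; the names below are hypothetical, not go-zero's actual middleware API:

```go
package main

import "fmt"

type Handler func(req string) string

type Middleware func(next Handler) Handler

// logging wraps next: it pre-processes the request (this is where rate
// limiting or breaker checks would decide whether to call next at all),
// calls next, then post-processes the result.
func logging(next Handler) Handler {
	return func(req string) string {
		fmt.Println("before:", req)
		resp := next(req)
		fmt.Println("after:", resp)
		return resp
	}
}

func main() {
	var h Handler = func(req string) string { return "handled " + req }
	h = logging(h) // each layer of the onion wraps the next
	fmt.Println(h("ping"))
}
```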
+- How to implement the transaction processing of microservices, the design and implementation of gozero distributed transactions, and what good middleware recommendations are there?
+ - 2PC, two-phase submission
+ - TCC, Try-Confirm-Cancel
+ - Message queue, maximum attempt
+ - Manual compensation
+- How to design better multi-level goroutine exception capture?
+ - Microservice system request exceptions should be isolated, and a single exception request should not crash the entire process
+ - go-zero comes with RunSafe/GoSafe to prevent a single abnormal request from causing the process to crash
+ - Monitoring needs to keep up to prevent abnormal excess without knowing it
+ - The contradiction between fail fast and fault isolation
+- Generation and use of k8s configuration (gateway, service, slb)
+ - K8s yaml file is automatically generated internally, which is too dependent on configuration and not open source
+ - I plan to add a k8s configuration template to the bookstore example
+ - slb->nginx->nodeport->api gateway->rpc service
+- Gateway rate limiting, circuit breaking and load shedding
+ - There are two types of rate limiting: concurrency control and distributed rate limiting
+ - Concurrency control is used to prevent instantaneous excessive requests and protect the system from being overwhelmed
+ - Distributed rate limiting is used to configure different quotas for different services
+ - Circuit breaking protects dependent services: when a service has a large number of exceptions, the caller should protect it so that it has a chance to return to normal, which also achieves the fail-fast effect
+ - Load shedding protects the current process from exhausting its resources and falling into complete unavailability, serving as many requests as possible up to the maximum load that can be carried
+ - Load shedding complements k8s scaling: k8s scales out in minutes, while go-zero sheds load in seconds
+- Introduce useful components in core, such as timingwheel, etc., and talk about design ideas
+ - Bloom filter
+ - In-process cache
+ - RollingWindow
+ - TimingWheel
+ - Various executors
+ - fx package, map/reduce/filter/sort/group/distinct/head/tail...
+ - Consistent hash implementation
+ - Distributed current limiting implementation
+ - mapreduce, with cancel ability
+ - There are a lot of concurrency tools in the syncx package
+- How to quickly add support for another rpc protocol, switch cross-machine discovery to direct local calls, and turn off the complex filter and load balancing functions
+ - go-zero is closely tied to grpc, and supporting protocols other than grpc was not considered at the beginning of the design
+ - If you want to add one, you can only fork and modify it
+ - For direct machine-to-machine calls you can use the direct scheme
+ - Why remove filter and load balancing? If you really want to, fork and change it, but there is no need
+- The design and implementation ideas of log and monitoring and link tracking, it is best to have a rough diagram
+ - Log and monitoring We use prometheus, customize the dashboard service, bundle and submit data (every minute)
+ - Link tracking can see the calling relationship and automatically record the trace log
+![](https://lh5.googleusercontent.com/PBRdYmRs22xEH1gjNkQnoHuB5WFBva10oKCm61A6G23xvi28u95Bwq-qTc_WVV-PihzAHyLpAKkBtbtzK8v9Kjtrp3YBZqGiTSXhHJHwf7CAv5K9AqBSc1CZuV0u3URCDVP8r1RD0PY#align=left&display=inline&height=658&margin=%5Bobject%20Object%5D&originHeight=658&originWidth=1294&status=done&style=none&width=1294)
+- Is there any pooling technique useful for the go-zero framework? If so, in which core code can you refer to
+ - Generally do not need to optimize in advance, over-optimization is a taboo
+ - Core/syncx/pool.go defines a general pooling technology with expiration time
+- What performance testing methods or frameworks does go-zero use? Is there reference code? Talk about the ideas and experience
+ - go benchmark
+ - Stress testing can be scaled up according to the estimated ratio by using existing business log samples
+ - The pressure test must be pressured until the system cannot be carried, see where the first bottleneck is, and then pressure again after the change, and cycle
+- Talk about the abstract experience and experience of the code
+ - Don’t repeat yourself
+ - You may not need it. Before, business developers often asked me if I could add this function or that function. I usually ask the deep-level purpose carefully. In many cases, I find that this function is redundant, and it is the best practice to not need it.
+ - Martin Fowler proposed the principle of abstracting after three occurrences. Sometimes some colleagues will ask me to add a function to the framework. After I think about it, I often answer this. You write it in the business layer first. If there is a need in other places, you will tell me again, and it will appear three times. I will consider integrating into the framework
+ - A file should only do one thing as much as possible, each file should be controlled within 200 lines as much as possible, and a function should be controlled within 50 lines as much as possible, so that you can see the entire function without scrolling
+ - Need the ability to abstract and refine, think more, often look back and think about the previous architecture or implementation
+- Will you publish a book on the go-zero framework from design to practice? What is the future development plan of the framework?
+ - There is no book publishing plan, and a good framework is the most important
+ - Continue to focus on engineering efficiency
+ - Improve service governance capabilities
+ - Help business development land as quickly as possible
diff --git a/go-zero.dev/en/periodlimit.md b/go-zero.dev/en/periodlimit.md
new file mode 100644
index 00000000..4f0fd426
--- /dev/null
+++ b/go-zero.dev/en/periodlimit.md
@@ -0,0 +1,128 @@
+# periodlimit
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+Whether in a monolithic service or in microservices, the API interfaces provided to the front end have an upper limit on access. When the access frequency or concurrency exceeds what they can tolerate, we must consider rate limiting to keep the interfaces available, or at least degraded but usable. That is, the interface also needs a fuse installed to prevent the system from being paralyzed by unexpected request pressure.
+
+
+This article will introduce `periodlimit`.
+## Usage
+```go
+const (
+ seconds = 1
+ total = 100
+ quota = 5
+)
+// New limiter
+l := NewPeriodLimit(seconds, quota, redis.NewRedis(s.Addr(), redis.NodeType), "periodlimit")
+
+// take source
+code, err := l.Take("first")
+if err != nil {
+ logx.Error(err)
+ return true
+}
+
+// switch val => process request
+switch code {
+ case limit.OverQuota:
+ logx.Errorf("OverQuota key: %v", key)
+ return false
+ case limit.Allowed:
+ logx.Infof("AllowedQuota key: %v", key)
+ return true
+ case limit.HitQuota:
+ logx.Errorf("HitQuota key: %v", key)
+ // todo: maybe we need to let users know they hit the quota
+ return false
+ default:
+ logx.Errorf("DefaultQuota key: %v", key)
+ // unknown response, we just let the sms go
+ return true
+}
+```
+## periodlimit
+
+
+`go-zero` adopts a **sliding window** counting method to calculate the number of accesses to the same resource within a period of time. If it exceeds the specified `limit`, access is denied. Of course, if you are accessing different resources within a period of time, the amount of access to each resource does not exceed the `limit`. In this case, a large number of requests are allowed to come in.
+
+
+In a distributed system, there are multiple microservices to provide services. So when instantaneous traffic accesses the same resource at the same time, how to make the counter count normally in the distributed system? At the same time, when computing resources are accessed, multiple calculations may be involved. How to ensure the atomicity of calculations?
+
+
+- `go-zero` counts resource visits with the help of `incrby` of `redis`
+- Use `lua script` to do the whole window calculation to ensure the atomicity of calculation
+
+
+
+Let's take a look at several key attributes controlled by `lua script`:
+
+| **argument** | **mean** |
+| --- | --- |
+| KEYS[1] | The key identifying the limited resource |
+| ARGV[1] | limit => the maximum number of requests allowed in the window; can be set to the QPS |
+| ARGV[2] | window size => the sliding window, simulated with a ttl |
+
+```lua
+-- to be compatible with aliyun redis,
+-- we cannot use `local key = KEYS[1]` to reuse the key
+local limit = tonumber(ARGV[1])
+local window = tonumber(ARGV[2])
+-- incrby key 1 => key visits++
+local current = redis.call("INCRBY", KEYS[1], 1)
+-- If it is the first visit, set the expiration time => TTL = window size
+-- Because it only limits the number of visits for a period
+if current == 1 then
+ redis.call("expire", KEYS[1], window)
+ return 1
+elseif current < limit then
+ return 1
+elseif current == limit then
+ return 2
+else
+ return 0
+end
+```
+The above `return code` is returned to the caller, and the caller decides how to handle the subsequent request:
+
+| **return code** | **tag** | call code | **mean** |
+| --- | --- | --- | --- |
+| 0 | OverQuota | 3 | **over limit** |
+| 1 | Allowed | 1 | **in limit** |
+| 2 | HitQuota | 2 | **hit limit** |
+
+The following picture describes the process of request entry and the subsequent situation when the request triggers `limit`:
+![image.png](https://cdn.nlark.com/yuque/0/2020/png/261626/1605430483430-92415ed3-e88f-487d-8fd6-8c58a9abe334.png#align=left&display=inline&height=524&margin=%5Bobject%20Object%5D&name=image.png&originHeight=524&originWidth=1051&size=90836&status=done&style=none&width=1051)
+![image.png](https://cdn.nlark.com/yuque/0/2020/png/261626/1605495120249-f6b05ac2-7090-47b0-a3c0-da50df6206dd.png#align=left&display=inline&height=557&margin=%5Bobject%20Object%5D&name=image.png&originHeight=557&originWidth=456&size=53785&status=done&style=none&width=456)
+## Subsequent processing
+
+
+If a large batch of requests comes in at some point, `periodlimit` may reach the `limit` threshold in a short time while the time window is still far from over. How to handle the subsequent requests then becomes a problem.
+
+
+`periodlimit` itself does not handle them: it just returns a `code` and leaves the handling of subsequent requests to the developer.
+
+
+1. Do nothing: simply reject the excess requests
+2. If these requests need to be processed, developers can buffer them with an `mq` to ease the pressure
+3. Use `tokenlimit` to allow a temporary traffic burst
+
+
+
+So in the next article, we will talk about `tokenlimit`
+
+
+## Summary
+The `periodlimit` current limiting scheme in `go-zero` is based on `redis` counters. By calling `redis lua script`, it guarantees the atomicity of the counting process and guarantees that the counting is normal under distributed conditions. However, this scheme has disadvantages because it needs to record all behavior records within the time window. If this amount is particularly large, memory consumption will become very serious.
+
+
+## Reference
+
+- [go-zero periodlimit](https://github.com/zeromicro/go-zero/blob/master/core/limit/periodlimit.go)
+- [Distributed service current limit actual combat, has already lined up the pits for you](https://www.infoq.cn/article/Qg2tX8fyw5Vt-f3HH673)
+- [tokenlimit](tokenlimit.md)
+
+
+
+
+
diff --git a/go-zero.dev/en/plugin-center.md b/go-zero.dev/en/plugin-center.md
new file mode 100644
index 00000000..1e7c37bf
--- /dev/null
+++ b/go-zero.dev/en/plugin-center.md
@@ -0,0 +1,19 @@
+# Plugins
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+The goctl api provides the plugin command to support extending api functionality. When the built-in features of goctl api do not meet your needs,
+or you need to customize goctl api's generation, plugins are well suited for developers to help themselves. See
+[goctl plugin](goctl-plugin.md) for details
+
+## Plugin resources
+* [goctl-go-compact](https://github.com/zeromicro/goctl-go-compact)
+  Merges goctl's default one-file-per-route handlers into a single file
+* [goctl-swagger](https://github.com/zeromicro/goctl-swagger)
+  Generates swagger documents from api files
+* [goctl-php](https://github.com/zeromicro/goctl-php)
+  A goctl-based plugin that generates php server-side http request handling code
+
+# Guess you want
+* [Plugin Commands](goctl-plugin.md)
+* [API IDL](api-grammar.md)
\ No newline at end of file
diff --git a/go-zero.dev/en/practise.md b/go-zero.dev/en/practise.md
new file mode 100644
index 00000000..5962d43c
--- /dev/null
+++ b/go-zero.dev/en/practise.md
@@ -0,0 +1,10 @@
+# User Practices
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+* [Persistent layer cache](redis-cache.md)
+* [Business layer cache](buiness-cache.md)
+* [Queue](go-queue.md)
+* [Middle Ground System](datacenter.md)
+* [Stream Handler](stream.md)
+* [Online Exchange](online-exchange.md)
\ No newline at end of file
diff --git a/go-zero.dev/en/prepare-other.md b/go-zero.dev/en/prepare-other.md
new file mode 100644
index 00000000..7af8830d
--- /dev/null
+++ b/go-zero.dev/en/prepare-other.md
@@ -0,0 +1,12 @@
+# Other
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+Earlier we prepared the Go environment, Go Module configuration, Goctl, and the protoc & protoc-gen-go installation — environments every developer must prepare during the development phase. The environments below are optional,
+since they generally live on the server (installation and operation will be done by ops), but to follow the subsequent **demonstrations** I suggest you install them locally, because most of our demo environments are local.
+The following only lists the necessary preparations without detailed introductions.
+
+## Other environment
+* [etcd](https://etcd.io/docs/current/rfc/v3api/)
+* [redis](https://redis.io/)
+* [mysql](https://www.mysql.com/)
\ No newline at end of file
diff --git a/go-zero.dev/en/prepare.md b/go-zero.dev/en/prepare.md
new file mode 100644
index 00000000..383d9945
--- /dev/null
+++ b/go-zero.dev/en/prepare.md
@@ -0,0 +1,12 @@
+# Prepare
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+Before officially entering the actual development, we need to do some preparations, such as: installation of the Go environment, installation of tools used for grpc code generation,
+Installation of the necessary tool Goctl, Golang environment configuration, etc., this section will contain the following subsections:
+
+* [Golang Installation](golang-install.md)
+* [Go Module Configuration](gomod-config.md)
+* [Goctl Installation](goctl-install.md)
+* [protoc & protoc-gen-go Installation](protoc-install.md)
+* [More](prepare-other.md)
\ No newline at end of file
diff --git a/go-zero.dev/en/project-dev.md b/go-zero.dev/en/project-dev.md
new file mode 100644
index 00000000..1f359fdf
--- /dev/null
+++ b/go-zero.dev/en/project-dev.md
@@ -0,0 +1,35 @@
+# Project Development
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+In the previous chapters, we introduced go-zero from the angles of concepts, background, and quick start. By now, I believe you already have some understanding of go-zero.
+From here on, we will walk through the entire process from environment preparation to service deployment. To make sure everyone thoroughly understands the go-zero development flow, prepare your patience and read on.
+This chapter contains the following subsections:
+* [Prepare](prepare.md)
+* [Golang Installation](golang-install.md)
+* [Go Module Configuration](gomod-config.md)
+* [Goctl Installation](goctl-install.md)
+* [protoc & protoc-gen-go Installation](protoc-install.md)
+* [More](prepare-other.md)
+* [Development Rules](dev-specification.md)
+ * [Naming Rules](naming-spec.md)
+ * [Route Rules](route-naming-spec.md)
+ * [Coding Rules](coding-spec.md)
+* [Development Flow](dev-flow.md)
+* [Configuration Introduction](config-introduction.md)
+ * [API Configuration](api-config.md)
+ * [RPC Configuration](rpc-config.md)
+* [Business Development](business-dev.md)
+ * [Directory Structure](service-design.md)
+ * [Model Generation](model-gen.md)
+ * [API Coding](api-coding.md)
+ * [Business Coding](business-coding.md)
+ * [JWT](jwt.md)
+ * [Middleware](middleware.md)
+ * [RPC Implement & Call](rpc-call.md)
+ * [Error Handling](error-handle.md)
+* [CI/CD](ci-cd.md)
+* [Service Deployment](service-deployment.md)
+* [Log Collection](log-collection.md)
+* [Trace](trace.md)
+* [Monitor](service-monitor.md)
\ No newline at end of file
diff --git a/go-zero.dev/en/protoc-install.md b/go-zero.dev/en/protoc-install.md
new file mode 100644
index 00000000..ee74000b
--- /dev/null
+++ b/go-zero.dev/en/protoc-install.md
@@ -0,0 +1,57 @@
+# protoc & protoc-gen-go Installation
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+## Forward
+protoc is a tool written in C++ that compiles proto files into code in a specified language. In go-zero microservices, we use grpc for communication between services, and writing grpc services requires protoc together with protoc-gen-go, the plugin that generates the Go rpc stub code.
+
+Demonstration environment of this document
+* mac OS
+* protoc 3.14.0
+
+## protoc installation
+
+* Enter the [protobuf release](https://github.com/protocolbuffers/protobuf/releases) page and select the compressed package file suitable for your operating system
+* Unzip `protoc-3.14.0-osx-x86_64.zip` and enter `protoc-3.14.0-osx-x86_64`
+ ```shell
+ $ cd protoc-3.14.0-osx-x86_64/bin
+ ```
+* Move the extracted `protoc` binary to any directory in your `PATH`, such as `$GOPATH/bin`. It is not recommended to put it directly alongside the system's own binaries.
+ ```shell
+ $ mv protoc $GOPATH/bin
+ ```
+ > [!TIP]
+ > $GOPATH is the actual folder address of your local machine
+* Verify the installation result
+ ```shell
+ $ protoc --version
+ ```
+ ```shell
+ libprotoc 3.14.0
+ ```
+## protoc-gen-* installation
+With goctl versions later than 1.2.1, there is no need to install the `protoc-gen-go` plugin: since that version, goctl itself acts as a `protoc` plugin, and goctl automatically
+creates a symbolic link `protoc-gen-goctl` pointing to `goctl`. pb.go is then generated according to the following logic:
+1. Check whether the `protoc-gen-goctl` plugin exists in the environment variables; if so, skip to step 3
+2. Check whether the `protoc-gen-go` plugin exists in the environment variables; if not, the generation process ends
+3. Generate pb.go with the plugin found
+
+> [!TIP]
+>
+> Windows may report the error `A required privilege is not held by the client.`; this is because goctl needs to be run as administrator under Windows.
+* Download and install `protoc-gen-go`
+
+ If the goctl version is already 1.2.1 or later, you can ignore this step.
+
+ ```shell
+ $ go get -u github.com/golang/protobuf/protoc-gen-go@v1.3.2
+ ```
+ ```text
+ go: found github.com/golang/protobuf/protoc-gen-go in github.com/golang/protobuf v1.4.3
+ go: google.golang.org/protobuf upgrade => v1.25.0
+ ```
+* Move protoc-gen-go to any directory in your `PATH`, such as `$GOPATH/bin`. Since `go get` already places the binary in `$GOPATH/bin`, it is usually enough to make sure `$GOPATH/bin` is in your `PATH`.
+
+> [!WARNING]
+> If the protoc-gen-go installation fails, please read [Error](error.md)
diff --git a/go-zero.dev/en/quick-start.md b/go-zero.dev/en/quick-start.md
new file mode 100644
index 00000000..555f0892
--- /dev/null
+++ b/go-zero.dev/en/quick-start.md
@@ -0,0 +1,8 @@
+# Quick Start
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+This section quickly starts api and rpc services to give you a big-picture view of a project developed with go-zero; the details are covered in later chapters. If you have already set up the environment and tools as described in [prepare](prepare.md), follow the sections below to get started:
+
+* [monolithic service](monolithic-service.md)
+* [micro service](micro-service.md)
diff --git a/go-zero.dev/en/redis-cache.md b/go-zero.dev/en/redis-cache.md
new file mode 100644
index 00000000..37462fc6
--- /dev/null
+++ b/go-zero.dev/en/redis-cache.md
@@ -0,0 +1,271 @@
+# Persistence layer cache
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+## Principles of Cache Design
+
+We only delete the cache without updating it. Once the data in the DB is modified, we will directly delete the corresponding cache instead of updating it.
+
+Let's analyze in which order the cache should be deleted.
+
+* Delete the cache first, then update the DB
+
+![redis-cache-01](./resource/redis-cache-01.png)
+
+Consider two concurrent requests: A needs to update the data and deletes the cache first; then B reads the data, finds nothing in the cache, loads the old value from the DB, and writes it back to the cache; only after that does A update the DB. From then on, the cache holds dirty data until it expires or another update request comes in. As shown in the figure:
+
+![redis-cache-02](./resource/redis-cache-02.png)
+
+* Update the DB first, then delete the cache
+
+ ![redis-cache-03](./resource/redis-cache-03.png)
+
+A updates the DB first, and then B reads the data; at this moment the old data is returned, which is as if A's update had not happened yet, so eventual consistency is acceptable. Then A deletes the cache, and all subsequent requests get the latest data, as shown in the figure:
+![redis-cache-04](./resource/redis-cache-04.png)
+
+Let's take another look at the normal request flow:
+
+* The first request to update the DB and delete the cache
+* The second request to read the cache, if there is no data, read the data from the DB and write it back to the cache
+* All subsequent read requests can be read directly from the cache
+ ![redis-cache-05](./resource/redis-cache-05.png)
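The flow above can be sketched with an in-memory map standing in for the DB and the cache; all names here are illustrative, not go-zero's API:

```go
package main

import (
	"fmt"
	"sync"
)

// naive stand-ins for the DB and the cache
var (
	db    = map[string]string{"user:1": "alice"}
	cache = sync.Map{}
)

// update follows "update the DB first, then delete the cache"
func update(key, val string) {
	db[key] = val
	cache.Delete(key)
}

// read follows "read the cache; on a miss, load from the DB and write back"
func read(key string) string {
	if v, ok := cache.Load(key); ok {
		return v.(string)
	}
	v := db[key]
	cache.Store(key, v)
	return v
}

func main() {
	fmt.Println(read("user:1")) // miss: loads "alice" from the DB, fills the cache
	update("user:1", "bob")     // writes the DB, invalidates the cache
	fmt.Println(read("user:1")) // miss again: returns the fresh "bob"
}
```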
+
+Now let's look at the DB query side, assuming a row record has seven columns, A through G:
+
+* A request queries only some of the columns, such as ABC, CDE, or EFG, as shown in the figure
+ ![redis-cache-06](./resource/redis-cache-06.png)
+
+* Query a single complete row record, as shown in the figure
+ ![redis-cache-07](./resource/redis-cache-07.png)
+
+* Query part or all of the columns of multiple rows, as shown in the figure
+ ![redis-cache-08](./resource/redis-cache-08.png)
+
+For the above three cases: first, we do not cache partial-column queries, because partial results cannot be cached reliably; once cached, when the data is updated it is impossible to locate which cached entries need to be deleted. Second, for multi-row queries, we build the mapping from query conditions to primary keys in the business layer as the actual scenario requires. Third, for queries of a single complete row record, go-zero has complete built-in cache management. So the core principle is: **go-zero only caches complete row records**.
+
+Let's introduce in detail the cache processing methods of the three built-in scenarios in go-zero:
+
+* Cache based on primary key
+ ```text
+ PRIMARY KEY (`id`)
+ ```
+
+This kind of cache is the easiest to handle: simply use the primary key as the redis key to cache the row record.
+
+* Cache based on unique index
+ ![redis-cache-09](./resource/redis-cache-09.webp)
+
+For index-based cache design, I borrowed from the design of database indexes. In database design, when you query data through an index, the engine first looks up the primary key in the index-to-primary-key tree, and then uses the primary key to fetch the row record; this indirection layer solves the mapping from index to row record. The same principle applies to go-zero's cache design.
+
+Index-based cache is divided into single-column unique index and multi-column unique index:
+
+For go-zero, single-column and multi-column indexes only differ in how the cache key is generated; the control logic behind them is the same. go-zero's built-in cache management keeps data consistency under control, and also has built-in protection against cache breakdown, penetration, and avalanche (these were discussed in detail in the GopherChina conference talk; see the GopherChina video).
+
+In addition, go-zero has built-in cache access and access hit rate statistics, as shown below:
+
+```text
+dbcache(sqlc) - qpm: 5057, hit_ratio: 99.7%, hit: 5044, miss: 13, db_fails: 0
+```
+
+* The single-column unique index is as follows:
+ ```text
+ UNIQUE KEY `product_idx` (`product`)
+ ```
+
+* The multi-column unique index is as follows:
+ ```text
+ UNIQUE KEY `vendor_product_idx` (`vendor`, `product`)
+ ```
+## Cache code interpretation
+
+### 1. Cache logic based on the primary key
+![redis-cache-10](./resource/redis-cache-10.png)
+
+The specific implementation code is as follows:
+```go
+func (cc CachedConn) QueryRow(v interface{}, key string, query QueryFn) error {
+ return cc.cache.Take(v, key, func(v interface{}) error {
+ return query(cc.db, v)
+ })
+}
+```
+
+The `Take` method first fetches the data from the cache via the `key`; on a hit it returns directly, and on a miss it uses the `query` method to read the complete row record from the `DB`, writes it back to the cache, and then returns the data. The whole logic is simple and easy to understand.
+
+Let's take a look at the implementation of `Take` in detail:
+```go
+func (c cacheNode) Take(v interface{}, key string, query func(v interface{}) error) error {
+ return c.doTake(v, key, query, func(v interface{}) error {
+ return c.SetCache(key, v)
+ })
+}
+```
+
+The logic of `Take` is as follows:
+
+* Use key to find data from cache
+* If found, return the data
+* If you can't find it, use the query method to read the data
+* After reading it, call c.SetCache(key, v) to set the cache
+
+The code and explanation of `doTake` are as follows:
+```go
+// v - The data object that needs to be read
+// key - Cache key
+// query - Method used to read complete data from DB
+// cacheVal - Method used to write cache
+func (c cacheNode) doTake(v interface{}, key string, query func(v interface{}) error,
+ cacheVal func(v interface{}) error) error {
+ // Use barriers to prevent cache breakdown and ensure that there is only one request in a process to load the data corresponding to the key
+ val, fresh, err := c.barrier.DoEx(key, func() (interface{}, error) {
+ // Read data from the cache
+ if err := c.doGetCache(key, v); err != nil {
+ // If it is a placeholder that was put in beforehand (to prevent cache penetration), then the default errNotFound is returned
+ // If it is an unknown error, then return directly, because we can't give up the cache error and directly send all requests to the DB,
+ // This will kill the DB in a high concurrency scenario
+ if err == errPlaceholder {
+ return nil, c.errNotFound
+ } else if err != c.errNotFound {
+ // why we just return the error instead of query from db,
+ // because we don't allow the disaster pass to the DBs.
+ // fail fast, in case we bring down the dbs.
+ return nil, err
+ }
+
+ // request DB
+ // If the returned error is errNotFound, then we need to set a placeholder in the cache to prevent the cache from penetrating
+ if err = query(v); err == c.errNotFound {
+ if err = c.setCacheWithNotFound(key); err != nil {
+ logx.Error(err)
+ }
+
+ return nil, c.errNotFound
+ } else if err != nil {
+ // Statistics DB failed
+ c.stat.IncrementDbFails()
+ return nil, err
+ }
+
+ // Write data to cache
+ if err = cacheVal(v); err != nil {
+ logx.Error(err)
+ }
+ }
+
+ // Return json serialized data
+ return jsonx.Marshal(v)
+ })
+ if err != nil {
+ return err
+ }
+ if fresh {
+ return nil
+ }
+
+ // got the result from previous ongoing query
+ c.stat.IncrementTotal()
+ c.stat.IncrementHit()
+
+ // Write data to the incoming v object
+ return jsonx.Unmarshal(val.([]byte), v)
+}
+```
+
+### 2. Cache logic based on unique index
+Because this part is more complicated, I marked the corresponding code blocks and logic in different colors. `block 2` is essentially the same as the primary-key-based cache, so here I will mainly explain the logic of `block 1`.
+![redis-cache-11](./resource/redis-cache-11.webp)
+
+The block 1 part of the code block is divided into two cases:
+
+* The primary key can be found in the cache through the index. In this case, the primary key goes straight into the logic of `block 2`, and the rest is the same as the primary-key-based caching logic above.
+
+* The primary key cannot be found in the cache through the index
+ * Query the complete row record from the DB through the index, if there is an error, return
+ * After the complete row record is found, the cache of the primary key to the complete row record and the cache of the index to the primary key will be written to `redis` at the same time
+ * Return the required row data
+
+```go
+// v - the data object that needs to be read
+// key - cache key generated from the index
+// keyer - generates the primary-key cache key from the primary key
+// indexQuery - reads the complete data from the DB using the index; must return the primary key
+// primaryQuery - reads the complete data from the DB using the primary key
+func (cc CachedConn) QueryRowIndex(v interface{}, key string, keyer func(primary interface{}) string,
+ indexQuery IndexQueryFn, primaryQuery PrimaryQueryFn) error {
+ var primaryKey interface{}
+ var found bool
+
+ // First query the cache through the index to see if there is a cache from the index to the primary key
+ if err := cc.cache.TakeWithExpire(&primaryKey, key, func(val interface{}, expire time.Duration) (err error) {
+ // If there is no cache of the index to the primary key, then the complete data is queried through the index
+ primaryKey, err = indexQuery(cc.db, v)
+ if err != nil {
+ return
+ }
+
+ // The complete data is queried through the index, set to “found” and used directly later, no need to read data from the cache anymore
+ found = true
+ // Save the mapping from the primary key to the complete data in the cache. The TakeWithExpire method has saved the mapping from the index to the primary key in the cache.
+ return cc.cache.SetCacheWithExpire(keyer(primaryKey), v, expire+cacheSafeGapBetweenIndexAndPrimary)
+ }); err != nil {
+ return err
+ }
+
+ // The data has been found through the index, just return directly
+ if found {
+ return nil
+ }
+
+ // Read data from the cache through the primary key, if the cache is not available, read from the DB through the primaryQuery method and write back to the cache and then return the data
+ return cc.cache.Take(v, keyer(primaryKey), func(v interface{}) error {
+ return primaryQuery(cc.db, v, primaryKey)
+ })
+}
+```
+
+Let's look at a practical example
+```go
+func (m *defaultUserModel) FindOneByUser(user string) (*User, error) {
+ var resp User
+ // Generate index-based keys
+ indexKey := fmt.Sprintf("%s%v", cacheUserPrefix, user)
+
+ err := m.QueryRowIndex(&resp, indexKey,
+ // Generate a complete data cache key based on the primary key
+ func(primary interface{}) string {
+ return fmt.Sprintf("user#%v", primary)
+ },
+ // Index-based DB query method
+ func(conn sqlx.SqlConn, v interface{}) (i interface{}, e error) {
+ query := fmt.Sprintf("select %s from %s where user = ? limit 1", userRows, m.table)
+ if err := conn.QueryRow(&resp, query, user); err != nil {
+ return nil, err
+ }
+ return resp.Id, nil
+ },
+ // DB query method based on primary key
+ func(conn sqlx.SqlConn, v, primary interface{}) error {
+ query := fmt.Sprintf("select %s from %s where id = ?", userRows, m.table)
+ return conn.QueryRow(&resp, query, primary)
+ })
+
+	// Error handling: check whether the returned error is sqlc.ErrNotFound; if so, return the ErrNotFound defined in this package instead,
+	// so callers cannot tell whether a cache is used, and the underlying dependency stays isolated
+ switch err {
+ case nil:
+ return &resp, nil
+ case sqlc.ErrNotFound:
+ return nil, ErrNotFound
+ default:
+ return nil, err
+ }
+}
+```
+
+All the automatic cache management code above can be generated by [goctl](goctl.md); our team's internal `CRUD` and cache code is basically all generated by [goctl](goctl.md), which saves a lot of development time. Cache code is also very error-prone: even with solid coding experience, it is hard to write correctly every time. So we recommend using the automatic cache code generation tool whenever possible to avoid mistakes.
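For example, a cache-enabled model can be generated from a DDL file with goctl's `-c` flag (the paths here are placeholders):

```shell
# generate model code with cache logic from a DDL file
$ goctl model mysql ddl -src user.sql -dir ./model -c
```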
+
+# Guess you like
+* [The fourth phase-how to design go-zero cache in OpenTalk](https://www.bilibili.com/video/BV1Jy4y127Xu)
+* [Goctl](goctl.md)
\ No newline at end of file
diff --git a/go-zero.dev/en/redis-lock.md b/go-zero.dev/en/redis-lock.md
new file mode 100644
index 00000000..ac13b37b
--- /dev/null
+++ b/go-zero.dev/en/redis-lock.md
@@ -0,0 +1,141 @@
+# redis lock
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+Since it is a lock, the first use that comes to mind is: **preventing duplicate submissions, so that only one request takes effect at a time**.
+
+
+Since it is `redis`, it must be exclusive and also have some common features of locks:
+
+
+- High performance
+- No deadlock
+- No lock failure after the node is down
+
+
+
+In `go-zero`, redis `set key value nx` ensures the write succeeds only when the key does not exist, and `px` makes the key expire automatically after the timeout. In the worst case, the key is deleted automatically when it expires, so no deadlock can occur.
+
+
+## example
+
+
+```go
+redisLockKey := fmt.Sprintf("%v%v", redisTpl, headId)
+// 1. New redislock
+redisLock := redis.NewRedisLock(redisConn, redisLockKey)
+// 2. Optional operation, set the redislock expiration time
+redisLock.SetExpire(redisLockExpireSeconds)
+if ok, err := redisLock.Acquire(); !ok || err != nil {
+ return nil, errors.New("another user is currently operating, please try again later")
+}
+defer func() {
+ recover()
+ redisLock.Release()
+}()
+```
+
+
+It is just like using `sync.Mutex`: lock, perform your business operation, and unlock.
+
+
+## Acquire the lock
+
+
+```go
+lockCommand = `if redis.call("GET", KEYS[1]) == ARGV[1] then
+ redis.call("SET", KEYS[1], ARGV[1], "PX", ARGV[2])
+ return "OK"
+else
+ return redis.call("SET", KEYS[1], ARGV[1], "NX", "PX", ARGV[2])
+end`
+
+func (rl *RedisLock) Acquire() (bool, error) {
+ seconds := atomic.LoadUint32(&rl.seconds)
+ // execute luascript
+ resp, err := rl.store.Eval(lockCommand, []string{rl.key}, []string{
+ rl.id, strconv.Itoa(int(seconds)*millisPerSecond + tolerance)})
+ if err == red.Nil {
+ return false, nil
+ } else if err != nil {
+ logx.Errorf("Error on acquiring lock for %s, %s", rl.key, err.Error())
+ return false, err
+ } else if resp == nil {
+ return false, nil
+ }
+
+ reply, ok := resp.(string)
+ if ok && reply == "OK" {
+ return true, nil
+ } else {
+ logx.Errorf("Unknown reply when acquiring lock for %s: %v", rl.key, resp)
+ return false, nil
+ }
+}
+```
+
+
+First, a few `redis` command options; the following are options of the `set` command:
+
+
+- `ex seconds` : set the key's expiration time, in seconds
+- `px milliseconds` : set the key's expiration time, in milliseconds
+- `nx` : set the value only when the key does not exist
+- `xx` : set the value only when the key already exists
+
+
+
+The input parameters involved in `lua script`:
+
+
+
+| args | example | description |
+| --- | --- | --- |
+| KEYS[1] | key$20201026 | redis key |
+| ARGV[1] | lmnopqrstuvwxyzABCD | Unique ID: random string |
+| ARGV[2] | 30000 | The lock's expiration time, in milliseconds |
+
+
+
+Now the key points of the code:
+
+
+1. The `Lua` script guarantees atomicity (multiple operations are executed as one operation in Redis, i.e. as a single command)
+1. It uses `set key value px milliseconds nx`
+1. The `value` is unique
+1. When locking, it first checks whether the `key`'s `value` matches what was set before, and if so only refreshes the expiration time
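Point 3 matters because the unique `value` is what allows a holder to release only its own lock. A sketch of generating such an id (go-zero generates a random string internally; this is just an illustration):

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// newLockID returns a random hex string used as the lock's unique value,
// so that Release can verify ownership before deleting the key.
func newLockID() string {
	buf := make([]byte, 16)
	if _, err := rand.Read(buf); err != nil {
		panic(err) // a crypto/rand failure is not recoverable here
	}
	return hex.EncodeToString(buf)
}

func main() {
	a, b := newLockID(), newLockID()
	fmt.Println(a)
	fmt.Println(a != b) // two holders never share a value
}
```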
+
+
+
+## Release lock
+
+
+```go
+delCommand = `if redis.call("GET", KEYS[1]) == ARGV[1] then
+ return redis.call("DEL", KEYS[1])
+else
+ return 0
+end`
+
+func (rl *RedisLock) Release() (bool, error) {
+ resp, err := rl.store.Eval(delCommand, []string{rl.key}, []string{rl.id})
+ if err != nil {
+ return false, err
+ }
+
+ if reply, ok := resp.(int64); !ok {
+ return false, nil
+ } else {
+ return reply == 1, nil
+ }
+}
+```
+
+
+You only need to pay attention to one point when releasing the lock:
+
+
+**Do not release someone else's lock. Do not release someone else's lock. Do not release someone else's lock.**
+
+
+Therefore, you must first check that `get(key) == value` (the unique id set by this holder), and only `delete` the key if it matches.
diff --git a/go-zero.dev/en/resource/3aefec98-56eb-45a6-a4b2-9adbdf4d63c0.png b/go-zero.dev/en/resource/3aefec98-56eb-45a6-a4b2-9adbdf4d63c0.png
new file mode 100644
index 00000000..603f8a97
Binary files /dev/null and b/go-zero.dev/en/resource/3aefec98-56eb-45a6-a4b2-9adbdf4d63c0.png differ
diff --git a/go-zero.dev/en/resource/3bbddc1ebb79455da91dfcf3da6bc72f_tplv-k3u1fbpfcp-zoom-1.image.png b/go-zero.dev/en/resource/3bbddc1ebb79455da91dfcf3da6bc72f_tplv-k3u1fbpfcp-zoom-1.image.png
new file mode 100644
index 00000000..f530f572
Binary files /dev/null and b/go-zero.dev/en/resource/3bbddc1ebb79455da91dfcf3da6bc72f_tplv-k3u1fbpfcp-zoom-1.image.png differ
diff --git a/go-zero.dev/en/resource/76108cc071154e2faa66eada81857fb0_tplv-k3u1fbpfcp-zoom-1.image.png b/go-zero.dev/en/resource/76108cc071154e2faa66eada81857fb0_tplv-k3u1fbpfcp-zoom-1.image.png
new file mode 100644
index 00000000..42bdbd1c
Binary files /dev/null and b/go-zero.dev/en/resource/76108cc071154e2faa66eada81857fb0_tplv-k3u1fbpfcp-zoom-1.image.png differ
diff --git a/go-zero.dev/en/resource/7715f4b6-8739-41ac-8c8c-04d187172e9d.png b/go-zero.dev/en/resource/7715f4b6-8739-41ac-8c8c-04d187172e9d.png
new file mode 100644
index 00000000..4513c5ac
Binary files /dev/null and b/go-zero.dev/en/resource/7715f4b6-8739-41ac-8c8c-04d187172e9d.png differ
diff --git a/go-zero.dev/en/resource/7e0fd2b8-d4c1-4130-a216-a7d3d4301116.png b/go-zero.dev/en/resource/7e0fd2b8-d4c1-4130-a216-a7d3d4301116.png
new file mode 100644
index 00000000..37e6fe93
Binary files /dev/null and b/go-zero.dev/en/resource/7e0fd2b8-d4c1-4130-a216-a7d3d4301116.png differ
diff --git a/go-zero.dev/en/resource/alert.png b/go-zero.dev/en/resource/alert.png
new file mode 100644
index 00000000..f24580d3
Binary files /dev/null and b/go-zero.dev/en/resource/alert.png differ
diff --git a/go-zero.dev/en/resource/api-compare.png b/go-zero.dev/en/resource/api-compare.png
new file mode 100644
index 00000000..eb083a0a
Binary files /dev/null and b/go-zero.dev/en/resource/api-compare.png differ
diff --git a/go-zero.dev/en/resource/api-new.png b/go-zero.dev/en/resource/api-new.png
new file mode 100644
index 00000000..469a8cd0
Binary files /dev/null and b/go-zero.dev/en/resource/api-new.png differ
diff --git a/go-zero.dev/en/resource/architechture.svg b/go-zero.dev/en/resource/architechture.svg
new file mode 100644
index 00000000..161e1262
--- /dev/null
+++ b/go-zero.dev/en/resource/architechture.svg
@@ -0,0 +1,16 @@
+
\ No newline at end of file
diff --git a/go-zero.dev/en/resource/author.jpeg b/go-zero.dev/en/resource/author.jpeg
new file mode 100644
index 00000000..c566910a
Binary files /dev/null and b/go-zero.dev/en/resource/author.jpeg differ
diff --git a/go-zero.dev/en/resource/b97bf7df-1781-436e-bf04-f1dd90c60537.png b/go-zero.dev/en/resource/b97bf7df-1781-436e-bf04-f1dd90c60537.png
new file mode 100644
index 00000000..4e4455a3
Binary files /dev/null and b/go-zero.dev/en/resource/b97bf7df-1781-436e-bf04-f1dd90c60537.png differ
diff --git a/go-zero.dev/en/resource/biz-redis-01.svg b/go-zero.dev/en/resource/biz-redis-01.svg
new file mode 100644
index 00000000..95e80f43
--- /dev/null
+++ b/go-zero.dev/en/resource/biz-redis-01.svg
@@ -0,0 +1,16 @@
+
\ No newline at end of file
diff --git a/go-zero.dev/en/resource/biz-redis-02.svg b/go-zero.dev/en/resource/biz-redis-02.svg
new file mode 100644
index 00000000..acf355e1
--- /dev/null
+++ b/go-zero.dev/en/resource/biz-redis-02.svg
@@ -0,0 +1,16 @@
+
\ No newline at end of file
diff --git a/go-zero.dev/en/resource/book.zip b/go-zero.dev/en/resource/book.zip
new file mode 100644
index 00000000..a62c7bcd
Binary files /dev/null and b/go-zero.dev/en/resource/book.zip differ
diff --git a/go-zero.dev/en/resource/c42c34e8d33d48ec8a63e56feeae882a.png b/go-zero.dev/en/resource/c42c34e8d33d48ec8a63e56feeae882a.png
new file mode 100644
index 00000000..1fdebc4b
Binary files /dev/null and b/go-zero.dev/en/resource/c42c34e8d33d48ec8a63e56feeae882a.png differ
diff --git a/go-zero.dev/en/resource/ci-cd.png b/go-zero.dev/en/resource/ci-cd.png
new file mode 100644
index 00000000..ee16b01e
Binary files /dev/null and b/go-zero.dev/en/resource/ci-cd.png differ
diff --git a/go-zero.dev/en/resource/clone.png b/go-zero.dev/en/resource/clone.png
new file mode 100644
index 00000000..e463d566
Binary files /dev/null and b/go-zero.dev/en/resource/clone.png differ
diff --git a/go-zero.dev/en/resource/compare.png b/go-zero.dev/en/resource/compare.png
new file mode 100644
index 00000000..42216ae3
Binary files /dev/null and b/go-zero.dev/en/resource/compare.png differ
diff --git a/go-zero.dev/en/resource/dc500acd526d40aabfe4f53cf5bd180a_tplv-k3u1fbpfcp-zoom-1.png b/go-zero.dev/en/resource/dc500acd526d40aabfe4f53cf5bd180a_tplv-k3u1fbpfcp-zoom-1.png
new file mode 100644
index 00000000..716ea11d
Binary files /dev/null and b/go-zero.dev/en/resource/dc500acd526d40aabfe4f53cf5bd180a_tplv-k3u1fbpfcp-zoom-1.png differ
diff --git a/go-zero.dev/en/resource/doc-edit.png b/go-zero.dev/en/resource/doc-edit.png
new file mode 100644
index 00000000..14b9eb10
Binary files /dev/null and b/go-zero.dev/en/resource/doc-edit.png differ
diff --git a/go-zero.dev/en/resource/docker_env.png b/go-zero.dev/en/resource/docker_env.png
new file mode 100644
index 00000000..5ac04759
Binary files /dev/null and b/go-zero.dev/en/resource/docker_env.png differ
diff --git a/go-zero.dev/en/resource/f93c621571074e44a2d403aa25e7db6f_tplv-k3u1fbpfcp-zoom-1.png b/go-zero.dev/en/resource/f93c621571074e44a2d403aa25e7db6f_tplv-k3u1fbpfcp-zoom-1.png
new file mode 100644
index 00000000..afcde0fb
Binary files /dev/null and b/go-zero.dev/en/resource/f93c621571074e44a2d403aa25e7db6f_tplv-k3u1fbpfcp-zoom-1.png differ
diff --git a/go-zero.dev/en/resource/fork.png b/go-zero.dev/en/resource/fork.png
new file mode 100644
index 00000000..44521bb3
Binary files /dev/null and b/go-zero.dev/en/resource/fork.png differ
diff --git a/go-zero.dev/en/resource/fx_log.png b/go-zero.dev/en/resource/fx_log.png
new file mode 100644
index 00000000..45ca95e6
Binary files /dev/null and b/go-zero.dev/en/resource/fx_log.png differ
diff --git a/go-zero.dev/en/resource/gitlab-git-url.png b/go-zero.dev/en/resource/gitlab-git-url.png
new file mode 100644
index 00000000..b2ed854a
Binary files /dev/null and b/go-zero.dev/en/resource/gitlab-git-url.png differ
diff --git a/go-zero.dev/en/resource/go-zero-logo.png b/go-zero.dev/en/resource/go-zero-logo.png
new file mode 100644
index 00000000..a0ec1cd5
Binary files /dev/null and b/go-zero.dev/en/resource/go-zero-logo.png differ
diff --git a/go-zero.dev/en/resource/go-zero-practise.png b/go-zero.dev/en/resource/go-zero-practise.png
new file mode 100755
index 00000000..0be4da53
Binary files /dev/null and b/go-zero.dev/en/resource/go-zero-practise.png differ
diff --git a/go-zero.dev/en/resource/go_live_template.png b/go-zero.dev/en/resource/go_live_template.png
new file mode 100644
index 00000000..28110974
Binary files /dev/null and b/go-zero.dev/en/resource/go_live_template.png differ
diff --git a/go-zero.dev/en/resource/goctl-api-select.png b/go-zero.dev/en/resource/goctl-api-select.png
new file mode 100644
index 00000000..d852489f
Binary files /dev/null and b/go-zero.dev/en/resource/goctl-api-select.png differ
diff --git a/go-zero.dev/en/resource/goctl-api.png b/go-zero.dev/en/resource/goctl-api.png
new file mode 100644
index 00000000..83a5cf48
Binary files /dev/null and b/go-zero.dev/en/resource/goctl-api.png differ
diff --git a/go-zero.dev/en/resource/goctl-command.png b/go-zero.dev/en/resource/goctl-command.png
new file mode 100644
index 00000000..6f38d223
Binary files /dev/null and b/go-zero.dev/en/resource/goctl-command.png differ
diff --git a/go-zero.dev/en/resource/grafana-app.png b/go-zero.dev/en/resource/grafana-app.png
new file mode 100644
index 00000000..aa97e4d5
Binary files /dev/null and b/go-zero.dev/en/resource/grafana-app.png differ
diff --git a/go-zero.dev/en/resource/grafana-panel.png b/go-zero.dev/en/resource/grafana-panel.png
new file mode 100644
index 00000000..b82430c5
Binary files /dev/null and b/go-zero.dev/en/resource/grafana-panel.png differ
diff --git a/go-zero.dev/en/resource/grafana-qps.png b/go-zero.dev/en/resource/grafana-qps.png
new file mode 100644
index 00000000..14a86dd5
Binary files /dev/null and b/go-zero.dev/en/resource/grafana-qps.png differ
diff --git a/go-zero.dev/en/resource/grafana.png b/go-zero.dev/en/resource/grafana.png
new file mode 100644
index 00000000..0c648728
Binary files /dev/null and b/go-zero.dev/en/resource/grafana.png differ
diff --git a/go-zero.dev/en/resource/handler.gif b/go-zero.dev/en/resource/handler.gif
new file mode 100644
index 00000000..fd1c1c38
Binary files /dev/null and b/go-zero.dev/en/resource/handler.gif differ
diff --git a/go-zero.dev/en/resource/info.gif b/go-zero.dev/en/resource/info.gif
new file mode 100644
index 00000000..f4c26bf5
Binary files /dev/null and b/go-zero.dev/en/resource/info.gif differ
diff --git a/go-zero.dev/en/resource/intellij-model.png b/go-zero.dev/en/resource/intellij-model.png
new file mode 100644
index 00000000..66dd40ad
Binary files /dev/null and b/go-zero.dev/en/resource/intellij-model.png differ
diff --git a/go-zero.dev/en/resource/jenkins-add-credentials.png b/go-zero.dev/en/resource/jenkins-add-credentials.png
new file mode 100644
index 00000000..e5dc389d
Binary files /dev/null and b/go-zero.dev/en/resource/jenkins-add-credentials.png differ
diff --git a/go-zero.dev/en/resource/jenkins-build-with-parameters.png b/go-zero.dev/en/resource/jenkins-build-with-parameters.png
new file mode 100644
index 00000000..14684389
Binary files /dev/null and b/go-zero.dev/en/resource/jenkins-build-with-parameters.png differ
diff --git a/go-zero.dev/en/resource/jenkins-choice.png b/go-zero.dev/en/resource/jenkins-choice.png
new file mode 100644
index 00000000..c86e722a
Binary files /dev/null and b/go-zero.dev/en/resource/jenkins-choice.png differ
diff --git a/go-zero.dev/en/resource/jenkins-configure.png b/go-zero.dev/en/resource/jenkins-configure.png
new file mode 100644
index 00000000..69a0b430
Binary files /dev/null and b/go-zero.dev/en/resource/jenkins-configure.png differ
diff --git a/go-zero.dev/en/resource/jenkins-credentials-id.png b/go-zero.dev/en/resource/jenkins-credentials-id.png
new file mode 100644
index 00000000..2f633628
Binary files /dev/null and b/go-zero.dev/en/resource/jenkins-credentials-id.png differ
diff --git a/go-zero.dev/en/resource/jenkins-credentials.png b/go-zero.dev/en/resource/jenkins-credentials.png
new file mode 100644
index 00000000..372b9262
Binary files /dev/null and b/go-zero.dev/en/resource/jenkins-credentials.png differ
diff --git a/go-zero.dev/en/resource/jenkins-git.png b/go-zero.dev/en/resource/jenkins-git.png
new file mode 100644
index 00000000..950fc5b6
Binary files /dev/null and b/go-zero.dev/en/resource/jenkins-git.png differ
diff --git a/go-zero.dev/en/resource/jenkins-new-item.png b/go-zero.dev/en/resource/jenkins-new-item.png
new file mode 100644
index 00000000..4ad817b6
Binary files /dev/null and b/go-zero.dev/en/resource/jenkins-new-item.png differ
diff --git a/go-zero.dev/en/resource/json_tag.png b/go-zero.dev/en/resource/json_tag.png
new file mode 100644
index 00000000..e8f44c9c
Binary files /dev/null and b/go-zero.dev/en/resource/json_tag.png differ
diff --git a/go-zero.dev/en/resource/jump.gif b/go-zero.dev/en/resource/jump.gif
new file mode 100644
index 00000000..8581aae5
Binary files /dev/null and b/go-zero.dev/en/resource/jump.gif differ
diff --git a/go-zero.dev/en/resource/k8s-01.png b/go-zero.dev/en/resource/k8s-01.png
new file mode 100644
index 00000000..9192926a
Binary files /dev/null and b/go-zero.dev/en/resource/k8s-01.png differ
diff --git a/go-zero.dev/en/resource/k8s-02.png b/go-zero.dev/en/resource/k8s-02.png
new file mode 100644
index 00000000..2ec67d6d
Binary files /dev/null and b/go-zero.dev/en/resource/k8s-02.png differ
diff --git a/go-zero.dev/en/resource/k8s-03.png b/go-zero.dev/en/resource/k8s-03.png
new file mode 100644
index 00000000..e672db3a
Binary files /dev/null and b/go-zero.dev/en/resource/k8s-03.png differ
diff --git a/go-zero.dev/en/resource/live_template.gif b/go-zero.dev/en/resource/live_template.gif
new file mode 100644
index 00000000..dac3499e
Binary files /dev/null and b/go-zero.dev/en/resource/live_template.gif differ
diff --git a/go-zero.dev/en/resource/log-flow.png b/go-zero.dev/en/resource/log-flow.png
new file mode 100644
index 00000000..2421ddce
Binary files /dev/null and b/go-zero.dev/en/resource/log-flow.png differ
diff --git a/go-zero.dev/en/resource/log.png b/go-zero.dev/en/resource/log.png
new file mode 100644
index 00000000..9c751d7e
Binary files /dev/null and b/go-zero.dev/en/resource/log.png differ
diff --git a/go-zero.dev/en/resource/logo.png b/go-zero.dev/en/resource/logo.png
new file mode 100644
index 00000000..16798ee5
Binary files /dev/null and b/go-zero.dev/en/resource/logo.png differ
diff --git a/go-zero.dev/en/resource/new_pr.png b/go-zero.dev/en/resource/new_pr.png
new file mode 100644
index 00000000..02d38b9f
Binary files /dev/null and b/go-zero.dev/en/resource/new_pr.png differ
diff --git a/go-zero.dev/en/resource/pipeline.png b/go-zero.dev/en/resource/pipeline.png
new file mode 100644
index 00000000..eb51eaec
Binary files /dev/null and b/go-zero.dev/en/resource/pipeline.png differ
diff --git a/go-zero.dev/en/resource/pr_record.png b/go-zero.dev/en/resource/pr_record.png
new file mode 100644
index 00000000..8b0e4937
Binary files /dev/null and b/go-zero.dev/en/resource/pr_record.png differ
diff --git a/go-zero.dev/en/resource/project_generate_code.png b/go-zero.dev/en/resource/project_generate_code.png
new file mode 100644
index 00000000..a637403b
Binary files /dev/null and b/go-zero.dev/en/resource/project_generate_code.png differ
diff --git a/go-zero.dev/en/resource/prometheus-flow.png b/go-zero.dev/en/resource/prometheus-flow.png
new file mode 100644
index 00000000..ce6a97af
Binary files /dev/null and b/go-zero.dev/en/resource/prometheus-flow.png differ
diff --git a/go-zero.dev/en/resource/prometheus-graph.webp b/go-zero.dev/en/resource/prometheus-graph.webp
new file mode 100644
index 00000000..d283fc5a
Binary files /dev/null and b/go-zero.dev/en/resource/prometheus-graph.webp differ
diff --git a/go-zero.dev/en/resource/prometheus-start.png b/go-zero.dev/en/resource/prometheus-start.png
new file mode 100644
index 00000000..ee0bbefa
Binary files /dev/null and b/go-zero.dev/en/resource/prometheus-start.png differ
diff --git a/go-zero.dev/en/resource/psiTree.png b/go-zero.dev/en/resource/psiTree.png
new file mode 100644
index 00000000..06af982a
Binary files /dev/null and b/go-zero.dev/en/resource/psiTree.png differ
diff --git a/go-zero.dev/en/resource/redis-cache-01.png b/go-zero.dev/en/resource/redis-cache-01.png
new file mode 100644
index 00000000..f07a1133
Binary files /dev/null and b/go-zero.dev/en/resource/redis-cache-01.png differ
diff --git a/go-zero.dev/en/resource/redis-cache-02.png b/go-zero.dev/en/resource/redis-cache-02.png
new file mode 100644
index 00000000..ba8f2fb0
Binary files /dev/null and b/go-zero.dev/en/resource/redis-cache-02.png differ
diff --git a/go-zero.dev/en/resource/redis-cache-03.png b/go-zero.dev/en/resource/redis-cache-03.png
new file mode 100644
index 00000000..8b2449a8
Binary files /dev/null and b/go-zero.dev/en/resource/redis-cache-03.png differ
diff --git a/go-zero.dev/en/resource/redis-cache-04.png b/go-zero.dev/en/resource/redis-cache-04.png
new file mode 100644
index 00000000..38b3f0f4
Binary files /dev/null and b/go-zero.dev/en/resource/redis-cache-04.png differ
diff --git a/go-zero.dev/en/resource/redis-cache-05.png b/go-zero.dev/en/resource/redis-cache-05.png
new file mode 100644
index 00000000..4a743ec7
Binary files /dev/null and b/go-zero.dev/en/resource/redis-cache-05.png differ
diff --git a/go-zero.dev/en/resource/redis-cache-06.png b/go-zero.dev/en/resource/redis-cache-06.png
new file mode 100644
index 00000000..ec9cf88f
Binary files /dev/null and b/go-zero.dev/en/resource/redis-cache-06.png differ
diff --git a/go-zero.dev/en/resource/redis-cache-07.png b/go-zero.dev/en/resource/redis-cache-07.png
new file mode 100644
index 00000000..c84d292d
Binary files /dev/null and b/go-zero.dev/en/resource/redis-cache-07.png differ
diff --git a/go-zero.dev/en/resource/redis-cache-08.png b/go-zero.dev/en/resource/redis-cache-08.png
new file mode 100644
index 00000000..039816e4
Binary files /dev/null and b/go-zero.dev/en/resource/redis-cache-08.png differ
diff --git a/go-zero.dev/en/resource/redis-cache-09.webp b/go-zero.dev/en/resource/redis-cache-09.webp
new file mode 100644
index 00000000..e13122b6
Binary files /dev/null and b/go-zero.dev/en/resource/redis-cache-09.webp differ
diff --git a/go-zero.dev/en/resource/redis-cache-10.png b/go-zero.dev/en/resource/redis-cache-10.png
new file mode 100644
index 00000000..ec8e69a3
Binary files /dev/null and b/go-zero.dev/en/resource/redis-cache-10.png differ
diff --git a/go-zero.dev/en/resource/redis-cache-11.webp b/go-zero.dev/en/resource/redis-cache-11.webp
new file mode 100644
index 00000000..5476baf0
Binary files /dev/null and b/go-zero.dev/en/resource/redis-cache-11.webp differ
diff --git a/go-zero.dev/en/resource/service.gif b/go-zero.dev/en/resource/service.gif
new file mode 100644
index 00000000..dbf7f613
Binary files /dev/null and b/go-zero.dev/en/resource/service.gif differ
diff --git a/go-zero.dev/en/resource/service.png b/go-zero.dev/en/resource/service.png
new file mode 100644
index 00000000..09b51d7f
Binary files /dev/null and b/go-zero.dev/en/resource/service.png differ
diff --git a/go-zero.dev/en/resource/ssh-add-key.png b/go-zero.dev/en/resource/ssh-add-key.png
new file mode 100644
index 00000000..70635b7e
Binary files /dev/null and b/go-zero.dev/en/resource/ssh-add-key.png differ
diff --git a/go-zero.dev/en/resource/type.gif b/go-zero.dev/en/resource/type.gif
new file mode 100644
index 00000000..e4d4d7a1
Binary files /dev/null and b/go-zero.dev/en/resource/type.gif differ
diff --git a/go-zero.dev/en/resource/user-pipeline-script.png b/go-zero.dev/en/resource/user-pipeline-script.png
new file mode 100644
index 00000000..57ede338
Binary files /dev/null and b/go-zero.dev/en/resource/user-pipeline-script.png differ
diff --git a/go-zero.dev/en/route-naming-spec.md b/go-zero.dev/en/route-naming-spec.md
new file mode 100644
index 00000000..7c3b4407
--- /dev/null
+++ b/go-zero.dev/en/route-naming-spec.md
@@ -0,0 +1,13 @@
+# Route Rules
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+* Spinal-case (kebab-case) naming is recommended
+* Use combinations of lowercase words and hyphens (-)
+* What you see is what you get
+
+```go
+/user/get-info
+/user/get/info
+/user/password/change/:id
+```
\ No newline at end of file
diff --git a/go-zero.dev/en/rpc-call.md b/go-zero.dev/en/rpc-call.md
new file mode 100644
index 00000000..d9997867
--- /dev/null
+++ b/go-zero.dev/en/rpc-call.md
@@ -0,0 +1,256 @@
+# RPC Implement & Call
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+In a large system, data must be transferred between multiple subsystems (services), which requires a communication method: you can choose plain http, or an rpc service.
+In go-zero, we use zrpc for communication between services, which is built on top of grpc.
+
+## Scenario
+Earlier we completed the interface protocols for user login, book search, etc., but the book search performed no user verification. If the current user is a non-existent user, we do not allow him to view book information.
+From the above we know that the user service needs to provide a method for obtaining user information for use by the search service, so we need to create a user rpc service and provide a getUser method.
+
+## Writing the rpc service
+
+* Write the proto file
+ ```shell
+ $ vim service/user/cmd/rpc/user.proto
+ ```
+ ```protobuf
+ syntax = "proto3";
+
+ package user;
+
+ option go_package = "user";
+
+ message IdReq{
+ int64 id = 1;
+ }
+
+ message UserInfoReply{
+ int64 id = 1;
+ string name = 2;
+ string number = 3;
+ string gender = 4;
+ }
+
+ service user {
+ rpc getUser(IdReq) returns(UserInfoReply);
+ }
+ ```
+* Generate rpc service code
+ ```shell
+ $ cd service/user/cmd/rpc
+ $ goctl rpc proto -src user.proto -dir .
+ ```
+
+> [!TIP]
+> If the installed version of `protoc-gen-go` is greater than 1.4.0, it is recommended to add `go_package` to the proto file
+
+* Add configuration and improve yaml configuration items
+ ```shell
+ $ vim service/user/cmd/rpc/internal/config/config.go
+ ```
+ ```go
+ type Config struct {
+ zrpc.RpcServerConf
+ Mysql struct {
+ DataSource string
+ }
+ CacheRedis cache.CacheConf
+ }
+ ```
+ ```shell
+ $ vim /service/user/cmd/rpc/etc/user.yaml
+ ```
+ ```yaml
+ Name: user.rpc
+ ListenOn: 127.0.0.1:8080
+ Etcd:
+ Hosts:
+ - $etcdHost
+ Key: user.rpc
+ Mysql:
+ DataSource: $user:$password@tcp($url)/$db?charset=utf8mb4&parseTime=true&loc=Asia%2FShanghai
+ CacheRedis:
+ - Host: $host
+ Pass: $pass
+ Type: node
+ ```
+ > [!TIP]
+ > $user: mysql database user
+ >
+ > $password: mysql database password
+ >
+ > $url: mysql database connection address
+ >
+ > $db: mysql database db name, that is, the database where the user table is located
+ >
+ > $host: Redis connection address Format: ip:port, such as: 127.0.0.1:6379
+ >
+ > $pass: redis password
+ >
+ > $etcdHost: etcd connection address, format: ip:port, such as: 127.0.0.1:2379
+ >
+ > For more configuration information, please refer to [rpc configuration introduction](rpc-config.md)
+
+* Add resource dependency
+ ```shell
+ $ vim service/user/cmd/rpc/internal/svc/servicecontext.go
+ ```
+ ```go
+ type ServiceContext struct {
+ Config config.Config
+ UserModel model.UserModel
+ }
+
+ func NewServiceContext(c config.Config) *ServiceContext {
+ conn := sqlx.NewMysql(c.Mysql.DataSource)
+ return &ServiceContext{
+ Config: c,
+ UserModel: model.NewUserModel(conn, c.CacheRedis),
+ }
+ }
+ ```
+* Add rpc logic
+ ```shell
+  $ vim service/user/cmd/rpc/internal/logic/getuserlogic.go
+ ```
+ ```go
+ func (l *GetUserLogic) GetUser(in *user.IdReq) (*user.UserInfoReply, error) {
+ one, err := l.svcCtx.UserModel.FindOne(in.Id)
+ if err != nil {
+ return nil, err
+ }
+
+ return &user.UserInfoReply{
+ Id: one.Id,
+ Name: one.Name,
+ Number: one.Number,
+ Gender: one.Gender,
+ }, nil
+ }
+ ```
+
+## Use rpc
+Next we call user rpc in the search service
+
+* Add UserRpc configuration and yaml configuration items
+ ```shell
+ $ vim service/search/cmd/api/internal/config/config.go
+ ```
+ ```go
+ type Config struct {
+ rest.RestConf
+ Auth struct {
+ AccessSecret string
+ AccessExpire int64
+ }
+ UserRpc zrpc.RpcClientConf
+ }
+ ```
+ ```shell
+ $ vim service/search/cmd/api/etc/search-api.yaml
+ ```
+ ```yaml
+ Name: search-api
+ Host: 0.0.0.0
+ Port: 8889
+ Auth:
+ AccessSecret: $AccessSecret
+ AccessExpire: $AccessExpire
+ UserRpc:
+ Etcd:
+ Hosts:
+ - $etcdHost
+ Key: user.rpc
+ ```
+ > [!TIP]
+ > $AccessSecret: This value must be consistent with the one declared in the user api.
+ >
+ > $AccessExpire: Valid period
+ >
+ > $etcdHost: etcd connection address
+ >
+ > The `Key` in etcd must be consistent with the Key in the user rpc service configuration
+* Add dependency
+ ```shell
+ $ vim service/search/cmd/api/internal/svc/servicecontext.go
+ ```
+ ```go
+ type ServiceContext struct {
+ Config config.Config
+ Example rest.Middleware
+ UserRpc userclient.User
+ }
+
+ func NewServiceContext(c config.Config) *ServiceContext {
+ return &ServiceContext{
+ Config: c,
+ Example: middleware.NewExampleMiddleware().Handle,
+ UserRpc: userclient.NewUser(zrpc.MustNewClient(c.UserRpc)),
+ }
+ }
+ ```
+* Supplementary logic
+ ```shell
+ $ vim /service/search/cmd/api/internal/logic/searchlogic.go
+ ```
+ ```go
+ func (l *SearchLogic) Search(req types.SearchReq) (*types.SearchReply, error) {
+ userIdNumber := json.Number(fmt.Sprintf("%v", l.ctx.Value("userId")))
+ logx.Infof("userId: %s", userIdNumber)
+ userId, err := userIdNumber.Int64()
+ if err != nil {
+ return nil, err
+ }
+
+ // use user rpc
+ _, err = l.svcCtx.UserRpc.GetUser(l.ctx, &userclient.IdReq{
+ Id: userId,
+ })
+ if err != nil {
+ return nil, err
+ }
+
+ return &types.SearchReply{
+ Name: req.Name,
+ Count: 100,
+ }, nil
+ }
+ ```
+## Start and verify the service
+* Start etcd, redis, mysql
+* Start user rpc
+ ```shell
+ $ cd /service/user/cmd/rpc
+ $ go run user.go -f etc/user.yaml
+ ```
+ ```text
+ Starting rpc server at 127.0.0.1:8080...
+ ```
+* Start search api
+```shell
+$ cd service/search/cmd/api
+$ go run search.go -f etc/search-api.yaml
+```
+
+* Verify the service
+ ```shell
+ $ curl -i -X GET \
+ 'http://127.0.0.1:8889/search/do?name=%E8%A5%BF%E6%B8%B8%E8%AE%B0' \
+ -H 'authorization: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2MTI4NjcwNzQsImlhdCI6MTYxMjc4MDY3NCwidXNlcklkIjoxfQ.JKa83g9BlEW84IiCXFGwP2aSd0xF3tMnxrOzVebbt80'
+ ```
+ ```text
+ HTTP/1.1 200 OK
+  Content-Type: application/json
+ Date: Tue, 09 Feb 2021 06:05:52 GMT
+ Content-Length: 32
+
+ {"name":"xiyouji","count":100}
+ ```
+
+# Guess you want
+* [RPC Configuration](rpc-config.md)
+* [RPC Directory Structure](rpc-dir.md)
+* [RPC Commands](goctl-rpc.md)
diff --git a/go-zero.dev/en/rpc-config.md b/go-zero.dev/en/rpc-config.md
new file mode 100644
index 00000000..865ac746
--- /dev/null
+++ b/go-zero.dev/en/rpc-config.md
@@ -0,0 +1,55 @@
+# RPC Configuration
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+
+The rpc configuration controls various features of an rpc service, including but not limited to the listening address, etcd configuration, timeouts, circuit breaker configuration, etc. Below we use a common rpc service configuration to illustrate.
+
+## Configuration instructions
+```go
+Config struct {
+ zrpc.RpcServerConf
+    CacheRedis cache.CacheConf // Redis cache configuration, see the api configuration instructions for details; not repeated here
+    Mysql struct { // mysql database access configuration, see the api configuration instructions for details; not repeated here
+ DataSource string
+ }
+}
+```
+
+### zrpc.RpcServerConf
+```go
+RpcServerConf struct {
+    service.ServiceConf // basic service configuration, see the api configuration instructions for details; not repeated here
+ ListenOn string // rpc listening address and port, such as: 127.0.0.1:8888
+ Etcd discov.EtcdConf `json:",optional"` // etcd related configuration
+ Auth bool `json:",optional"` // Whether to enable Auth, if yes, Redis is required
+ Redis redis.RedisKeyConf `json:",optional"` // Auth verification
+    StrictControl bool `json:",optional"` // Whether to use strict mode; if so, errors are treated as Auth failures, otherwise they can be regarded as successful
+ // pending forever is not allowed
+ // never set it to 0, if zero, the underlying will set to 2s automatically
+ Timeout int64 `json:",default=2000"` // Timeout control, unit: milliseconds
+ CpuThreshold int64 `json:",default=900,range=[0:1000]"` // CPU load reduction threshold, the default is 900, the allowable setting range is 0 to 1000
+}
+```
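+
+Putting these fields together, a complete rpc server configuration in yaml might look like this (a sketch only; all values are illustrative):
+
+```yaml
+Name: user.rpc
+ListenOn: 127.0.0.1:8080
+Etcd:
+  Hosts:
+    - 127.0.0.1:2379
+  Key: user.rpc
+Auth: true
+Redis:
+  Host: 127.0.0.1:6379
+  Type: node
+  Pass: yourpass
+  Key: rpc:auth:user
+StrictControl: false
+Timeout: 2000
+CpuThreshold: 900
+```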
+
+### discov.EtcdConf
+```go
+type EtcdConf struct {
+ Hosts []string // etcd host array
+ Key string // rpc registration key
+}
+```
+
+### redis.RedisKeyConf
+```go
+RedisConf struct {
+ Host string // redis host
+ Type string `json:",default=node,options=node|cluster"` // redis type
+ Pass string `json:",optional"` // redis password
+}
+
+RedisKeyConf struct {
+ RedisConf
+ Key string `json:",optional"` // Verification key
+}
+```
diff --git a/go-zero.dev/en/rpc-dir.md b/go-zero.dev/en/rpc-dir.md
new file mode 100644
index 00000000..6b0c2ca8
--- /dev/null
+++ b/go-zero.dev/en/rpc-dir.md
@@ -0,0 +1,51 @@
+# RPC Directory Structure
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+```text
+.
+├── etc // yaml configuration file
+│ └── greet.yaml
+├── go.mod
+├── greet // pb.go folder①
+│ └── greet.pb.go
+├── greet.go // main entry
+├── greet.proto // proto source file
+├── greetclient // call logic ②
+│ └── greet.go
+└── internal
+ ├── config // yaml configuration corresponding entity
+ │ └── config.go
+ ├── logic // Business code
+ │ └── pinglogic.go
+ ├── server // rpc server
+ │ └── greetserver.go
+ └── svc // Dependent resources
+ └── servicecontext.go
+```
+
+> [!TIP]
+> ① The name of the pb folder (the old version folder is fixed as pb) is taken from the value of option go_package in the proto file. The last level is converted according to a certain format. If there is no such declaration, it is taken from the value of package. The approximate code is as follows:
+
+```go
+ if option.Name == "go_package" {
+ ret.GoPackage = option.Constant.Source
+ }
+ ...
+ if len(ret.GoPackage) == 0 {
+ ret.GoPackage = ret.Package.Name
+ }
+ ret.PbPackage = GoSanitized(filepath.Base(ret.GoPackage))
+ ...
+```
+> [!TIP]
+> For GoSanitized method, please refer to google.golang.org/protobuf@v1.25.0/internal/strs/strings.go:71
+
+> [!TIP]
+> ② The name of the call layer folder is taken from the service name in the proto. If the service name is equal to the name of the pb folder, `client` will be appended to the service name to distinguish the pb package from the call package.
+
+```go
+if strings.ToLower(proto.Service.Name) == strings.ToLower(proto.GoPackage) {
+ callDir = filepath.Join(ctx.WorkDir, strings.ToLower(stringx.From(proto.Service.Name+"_client").ToCamel()))
+}
+```
\ No newline at end of file
diff --git a/go-zero.dev/en/service-deployment.md b/go-zero.dev/en/service-deployment.md
new file mode 100644
index 00000000..6d30e123
--- /dev/null
+++ b/go-zero.dev/en/service-deployment.md
@@ -0,0 +1,243 @@
+# Service Deployment
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+This section uses jenkins to demonstrate a simple service deployment to k8s.
+
+## Prepare
+* k8s cluster installation
+* gitlab environment installation
+* jenkins environment installation
+* redis&mysql&nginx&etcd installation
+* [goctl install](goctl-install.md)
+
+> [!TIP]
+> Ensure that goctl is installed on each node of k8s
+>
+> Please google for the installation of the above environment, and I will not introduce it here.
+
+## Service deployment
+### 1、Prepare the gitlab code repository
+
+#### 1.1、Add SSH Key
+
+Enter gitlab, click on the user center, find `Settings`, find the `SSH Keys` tab on the left
+![ssh key](./resource/ssh-add-key.png)
+
+* 1、View the public key on the machine where jenkins is located
+
+```shell
+$ cat ~/.ssh/id_rsa.pub
+```
+
+* 2、If it does not exist, generate one; if it already exists, skip to step 3
+
+```shell
+$ ssh-keygen -t rsa -b 2048 -C "email@example.com"
+```
+
+> "email@example.com" can be replaced with your own email address
+>
+After the key is generated, repeat step 1
+
+* 3、Add the public key to gitlab
+
+#### 1.2、Upload the code to the gitlab repository
+Create a new project `go-zero-demo` and upload the code. The details are not described here.
+
+### 2、jenkins
+
+#### 2.1、Add credentials
+
+* View the private key of the machine where Jenkins is located, which corresponds to the previous gitlab public key
+
+```shell
+$ cat id_rsa
+```
+
+* Enter jenkins, click on `Manage Jenkins`-> `Manage Credentials`
+ ![credentials](./resource/jenkins-credentials.png)
+
+* Go to the `Global Credentials` page and add a credential. `Username` is an identifier; when you add the pipeline later, this identifier tells you that the credential represents gitlab access. `Private Key` is the private key obtained above
+ ![jenkins-add-credentials](./resource/jenkins-add-credentials.png)
+
+#### 2.2、 Add global variables
+Enter `Manage Jenkins`->`Configure System`, scroll down to the `Global Properties` entry, and add the docker private registry information: the `docker username`, `docker user password`, and `docker private registry address` shown in the figure
+![docker_server](./resource/docker_env.png)
+
+> [!TIP]
+>
+> `docker_user` your docker username
+>
+> `docker_pass` your docker user password
+>
+> `docker_server` your docker server
+>
+> I use a private registry here. If you don't use a private registry provided by a cloud vendor, you can build one yourself. I won't go into details here; you can google it yourself.
+
+#### 2.3、Configure git
+Go to `Manage Jenkins`->`Global Tool Configuration`, find the Git entry, and fill in the path of the git executable on the machine where jenkins is located. If it is missing, you need to install the Git plugin in the jenkins plugin management.
+![jenkins-git](./resource/jenkins-git.png)
+
+
+![jenkins-configure](./resource/jenkins-configure.png)
+#### 2.4、 Add a pipeline
+
+> The pipeline is used to build the project: pulling code from gitlab -> generating the Dockerfile -> deploying to k8s are all done in this step. This is a demo environment; to ensure a smooth deployment process,
+> jenkins needs to be installed on one of the nodes of the k8s cluster. I installed it on the master here.
+
+* Get the credential id: go to the credentials page and find the credential id whose Username is `gitlab`
+ ![jenkins-credentials-id](./resource/jenkins-credentials-id.png)
+
+* Go to the jenkins homepage, click on `New Item`, the name is `user`
+ ![jenkins-add-item](./resource/jenkins-new-item.png)
+
+* View project git address
+ ![gitlab-git-url](./resource/gitlab-git-url.png)
+
+* Add a service type Choice Parameter: check `This project is parameterized` in `General`,
+  click `Add parameter` and select `Choice Parameter`. Add the choice values (api, rpc) and the variable (type) that receives the chosen value as shown in the figure; it will be used in the Pipeline script later.
+ ![jenkins-choice-parameter](./resource/jenkins-choice.png)
+
+* Configure `user`: on the `user` configuration page, scroll down to `Pipeline script` and fill in the script content
+
+```text
+pipeline {
+ agent any
+ parameters {
+ gitParameter name: 'branch',
+ type: 'PT_BRANCH',
+ branchFilter: 'origin/(.*)',
+ defaultValue: 'master',
+ selectedValue: 'DEFAULT',
+ sortMode: 'ASCENDING_SMART',
+ description: 'Select the branch'
+ }
+
+ stages {
+ stage('service info') {
+ steps {
+ sh 'echo branch: $branch'
+ sh 'echo build service type:${JOB_NAME}-$type'
+ }
+ }
+
+
+ stage('check out') {
+ steps {
+ checkout([$class: 'GitSCM',
+ branches: [[name: '$branch']],
+ doGenerateSubmoduleConfigurations: false,
+ extensions: [],
+ submoduleCfg: [],
+ userRemoteConfigs: [[credentialsId: '${credentialsId}', url: '${gitUrl}']]])
+ }
+ }
+
+ stage('get commit_id') {
+ steps {
+ echo 'get commit_id'
+ git credentialsId: '${credentialsId}', url: '${gitUrl}'
+ script {
+ env.commit_id = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
+ }
+ }
+ }
+
+
+ stage('goctl version detection') {
+ steps{
+ sh '/usr/local/bin/goctl -v'
+ }
+ }
+
+ stage('Dockerfile Build') {
+ steps{
+ sh '/usr/local/bin/goctl docker -go service/${JOB_NAME}/${type}/${JOB_NAME}.go'
+ script{
+ env.image = sh(returnStdout: true, script: 'echo ${JOB_NAME}-${type}:${commit_id}').trim()
+ }
+ sh 'echo image:${image}'
+ sh 'docker build -t ${image} .'
+ }
+ }
+
+        stage('Push to the image registry') {
+ steps{
+ sh '/root/dockerlogin.sh'
+ sh 'docker tag ${image} ${dockerServer}/${image}'
+ sh 'docker push ${dockerServer}/${image}'
+ }
+ }
+
+ stage('Deploy to k8s') {
+ steps{
+ script{
+ env.deployYaml = sh(returnStdout: true, script: 'echo ${JOB_NAME}-${type}-deploy.yaml').trim()
+ env.port=sh(returnStdout: true, script: '/root/port.sh ${JOB_NAME}-${type}').trim()
+ }
+
+ sh 'echo ${port}'
+
+ sh 'rm -f ${deployYaml}'
+ sh '/usr/local/bin/goctl kube deploy -secret dockersecret -replicas 2 -nodePort 3${port} -requestCpu 200 -requestMem 50 -limitCpu 300 -limitMem 100 -name ${JOB_NAME}-${type} -namespace hey-go-zero -image ${dockerServer}/${image} -o ${deployYaml} -port ${port}'
+ sh '/usr/bin/kubectl apply -f ${deployYaml}'
+ }
+ }
+
+ stage('Clean') {
+ steps{
+ sh 'docker rmi -f ${image}'
+ sh 'docker rmi -f ${dockerServer}/${image}'
+ cleanWs notFailBuild: true
+ }
+ }
+ }
+}
+```
+
+> [!TIP]
+> ${credentialsId} should be replaced with your specific credential value, i.e. the string from the [Add credentials] step; ${gitUrl} should be replaced with the git repository address of your code. Other variables of the form ${xxx} do not need to be modified; keep them as they are.
+> ![user-pipepine-script](./resource/user-pipeline-script.png)
+
+### port.sh
+The content of port.sh
+```shell
+case $1 in
+"user-api") echo 1000
+;;
+"user-rpc") echo 1001
+;;
+"course-api") echo 1002
+;;
+"course-rpc") echo 1003
+;;
+"selection-api") echo 1004
+esac
+```
+
+The content of dockerlogin.sh
+
+```shell
+#!/bin/bash
+# variable names match the Jenkins global variables; hyphens are invalid in shell variable names
+docker login --username=$docker_user --password=$docker_pass $docker_server
+```
+
+* $docker_user: docker login username
+* $docker_pass: docker login user password
+* $docker_server: docker private registry address
+
+## View pipeline
+![build with parameters](./resource/jenkins-build-with-parameters.png)
+![build with parameters](./resource/pipeline.png)
+
+## View k8s service
+![k8s01](./resource/k8s-01.png)
+
+# Guess you want
+* [Goctl Installation](goctl-install.md)
+* [k8s](https://kubernetes.io/)
+* [docker](https://www.docker.com/)
+* [jenkins](https://www.jenkins.io/zh/doc/book/installing/)
+* [jenkins pipeline](https://www.jenkins.io/zh/doc/pipeline/tour/hello-world/)
+* [nginx](http://nginx.org/en/docs/)
+* [etcd](https://etcd.io/docs/current/)
\ No newline at end of file
diff --git a/go-zero.dev/en/service-design.md b/go-zero.dev/en/service-design.md
new file mode 100644
index 00000000..33501752
--- /dev/null
+++ b/go-zero.dev/en/service-design.md
@@ -0,0 +1,92 @@
+# Directory Structure
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+Directory splitting means splitting directories in line with go-zero best practices, which is closely related to how microservices are split. In our team's best practice,
+we split a system into multiple subsystems according to horizontal business divisions, and each subsystem has its own independent persistent storage and cache.
+For example, a shopping mall system consists of a user system (user), a product management system (product), an order system (order), a shopping cart system (cart), a settlement center system (pay), an after-sale system (afterSale), and so on.
+## System structure analysis
+In the mall system mentioned above, each subsystem provides services to the outside world (http) and also provides data access interfaces (rpc) to other subsystems. Therefore, each subsystem can be split into a service that exposes two ways to access the system: api and rpc.
+The system above is therefore divided into the following directory structure:
+
+```text
+.
+├── afterSale
+│ ├── api
+│ └── rpc
+├── cart
+│ ├── api
+│ └── rpc
+├── order
+│ ├── api
+│ └── rpc
+├── pay
+│ ├── api
+│ └── rpc
+├── product
+│ ├── api
+│ └── rpc
+└── user
+ ├── api
+ └── rpc
+```
+
+## rpc call chain suggestion
+When designing the system, try to keep calls between services one-way along the chain rather than cyclic. For example, the order service calls the user service, but the user service does not call the order service in turn.
+Otherwise, when one of the services fails to start, they will affect each other and enter an infinite loop: the order service thinks the failure is caused by the user service, while the user service thinks it is caused by the order service. If a large number of services form a mutual call chain,
+you need to consider whether the service split is reasonable.
+
+## Directory structure of common service types
+Among the above services, only api/rpc services are listed. In addition, there may be other service types under one service, such as rmq (message processing system), cron (timed task system), script (scripts), etc.
+Therefore, a service may contain the following directory structure:
+
+```text
+user
+  ├── api // http access service, implements business requirements
+  ├── cronjob // Timed tasks, e.g. scheduled data update services
+  ├── rmq // Message processing system: mq and dq, handles high-concurrency and delayed message services
+  ├── rpc // rpc service, provides basic data access to other subsystems
+  └── script // Scripts, handle temporary operational requirements and repair temporary data
+```
+
+## Example of complete project directory structure
+```text
+mall // project name
+├── common // common libraries
+│   ├── randx
+│   └── stringx
+├── go.mod
+├── go.sum
+└── service // directory for services
+    ├── afterSale
+    │   ├── api
+    │   ├── model
+    │   └── rpc
+    ├── cart
+    │   ├── api
+    │   ├── model
+    │   └── rpc
+    ├── order
+    │   ├── api
+    │   ├── model
+    │   └── rpc
+    ├── pay
+    │   ├── api
+    │   ├── model
+    │   └── rpc
+    ├── product
+    │   ├── api
+    │   ├── model
+    │   └── rpc
+    └── user
+        ├── api
+        ├── cronjob
+        ├── model
+        ├── rmq
+        ├── rpc
+        └── script
+```
+
+# Guess you want
+* [API Directory Structure](api-dir.md)
diff --git a/go-zero.dev/en/service-monitor.md b/go-zero.dev/en/service-monitor.md
new file mode 100644
index 00000000..f6caee4f
--- /dev/null
+++ b/go-zero.dev/en/service-monitor.md
@@ -0,0 +1,106 @@
+# Monitor
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+In microservice governance, service monitoring is also a very important link. To know whether a service is working normally, it needs to be monitored from multiple dimensions, such as:
+* mysql indicators
+* mongo indicators
+* redis indicators
+* request logs
+* service metric statistics
+* service health checks
+* ...
+
+Monitoring is a very large topic, and this section only uses `service indicator monitoring` as an example.
+
+## Microservice indicator monitoring based on prometheus
+
+After the service is online, we often need to monitor the service so that we can find the problem early and make targeted optimization. The monitoring can be divided into various forms, such as log monitoring, call chain monitoring, indicator monitoring, and so on. Through indicator monitoring, the changing trend of service indicators can be clearly observed, and the operating status of the service can be understood, which plays a very important role in ensuring the stability of the service.
+
+Prometheus is an open source system monitoring and alerting tool. It supports a powerful query language, PromQL, which allows users to select and aggregate time series data in real time. Time series data is actively pulled by the server over HTTP, or pushed through an intermediate gateway, and monitoring targets can be obtained from static configuration files or via service discovery.
+
+## Prometheus architecture
+
+The overall architecture and ecosystem components of Prometheus are shown in the following figure:
+![prometheus-flow](./resource/prometheus-flow.png)
+
+Prometheus Server pulls monitoring metrics directly from the monitoring targets, or indirectly through a push gateway. It stores all scraped samples locally and runs a set of rules over this data to aggregate and record new time series or generate alerts. The monitoring data can then be visualized through Grafana or other tools.
+
+## go-zero service indicator monitoring based on prometheus
+
+The go-zero framework integrates prometheus-based service metric monitoring. Below we use go-zero's official shorturl example to demonstrate how service metrics are collected and monitored:
+* First install Prometheus; refer to the official documentation for the installation steps
+* go-zero does not enable prometheus monitoring by default. Enabling it is simple: just add the following configuration to the shorturl-api.yaml file. Host is the listen address of the metrics endpoint and is required; Port defaults to 9091 if not set; Path is the path used to pull metrics and defaults to /metrics
+ ```yaml
+ Prometheus:
+ Host: 127.0.0.1
+ Port: 9091
+ Path: /metrics
+ ```
+
+* Edit the prometheus configuration file prometheus.yml, add the following configuration, and create targets.json
+ ```yaml
+ - job_name: 'file_ds'
+ file_sd_configs:
+ - files:
+ - targets.json
+ ```
+* Edit the targets.json file, where targets is the metrics address configured for shorturl, and add several default labels
+ ```json
+ [
+ {
+ "targets": ["127.0.0.1:9091"],
+ "labels": {
+ "job": "shorturl-api",
+ "app": "shorturl-api",
+ "env": "test",
+ "instance": "127.0.0.1:8888"
+ }
+ }
+ ]
+ ```
+* Start the prometheus service, listening on port 9090 by default
+ ```shell
+ $ prometheus --config.file=prometheus.yml
+ ```
+* Enter `http://127.0.0.1:9090/` in the browser, and then click `Status` -> `Targets` to see the job whose status is Up, and the default label we configured can be seen in the Labels column
+![prometheus-start](./resource/prometheus-start.png)
+ Through the above steps, we have completed the Prometheus configuration for collecting metrics from the shorturl service. For simplicity, we configured everything manually; in a real production environment, monitoring targets are generally configured by periodically updating configuration files or via service discovery. Space is limited, so this is not covered here; interested readers can consult the relevant documentation on their own.
+
+## Types of indicators monitored by go-zero
+
+go-zero currently adds monitoring of request metrics to the http middleware and rpc interceptor.
+
+The monitoring mainly covers two dimensions: request duration and request errors. Request duration uses the Histogram metric type, defining multiple Buckets to make quantile statistics easy; request errors use the Counter type. In addition, a path label is added to the http metrics and a method label to the rpc metrics for fine-grained monitoring.
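As an illustration of why the Histogram type suits quantile statistics, here is a dependency-free sketch of the cumulative bucket counting a Histogram performs. The bucket bounds below are made up for the example and are not go-zero's defaults:

```go
package main

import "fmt"

// histogram mimics the cumulative bucket counting a Prometheus Histogram
// performs: each observation increments every bucket whose upper bound it
// does not exceed, plus an implicit +Inf bucket.
type histogram struct {
	bounds []float64 // bucket upper bounds, in ms
	counts []uint64  // cumulative count per bucket
	inf    uint64    // +Inf bucket, equals total observations
}

func newHistogram(bounds []float64) *histogram {
	return &histogram{bounds: bounds, counts: make([]uint64, len(bounds))}
}

func (h *histogram) observe(ms float64) {
	for i, b := range h.bounds {
		if ms <= b {
			h.counts[i]++
		}
	}
	h.inf++
}

func main() {
	h := newHistogram([]float64{5, 10, 50, 100})
	for _, d := range []float64{3, 7, 12, 80, 200} {
		h.observe(d)
	}
	// cumulative counts are what PromQL's histogram_quantile works from
	fmt.Println(h.counts, h.inf) // [1 2 3 4] 5
}
```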
+Next, demonstrate how to view monitoring indicators:
+
+First execute the following command multiple times on the command line
+
+```shell
+$ curl -i "http://localhost:8888/shorten?url=http://www.xiaoheiban.cn"
+```
+Open Prometheus and switch to the Graph interface, and enter the {path="/shorten"} command in the input box to view the monitoring indicators, as shown below:
+![prometheus-graph](./resource/prometheus-graph.webp)
+
+We use PromQL to filter the metrics whose path is /shorten; the results show the metric names and values. The code label in the http_server_requests_code_total metric is the http status code (200 indicates a successful request), and http_server_requests_duration_ms_bucket counts the results of each bucket separately. You can also see that all the metrics carry the default labels we configured.
+The Console tab mainly displays the query results, while the Graph tab provides a simple graphical display. In production, Grafana is generally used for visualization.
+
+## grafana dashboard
+
+Grafana is a powerful visualization tool that supports multiple data sources such as Prometheus, Elasticsearch, and Graphite. Installation is simple; refer to the official documentation. Grafana listens on port 3000 by default; after installation, visit http://localhost:3000/ in the browser. The default account and password are both admin.
+
+The following demonstrates how to draw the visual interface based on the above indicators:
+Click on the left sidebar `Configuration`->`Data Source`->`Add data source` to add a data source, where the HTTP URL is the address of the data source
+![grafana](./resource/grafana.png)
+
+Click on the left sidebar to add dashboard, and then add Variables to facilitate filtering for different tags, such as adding app variables to filter different services
+![grafana-app](./resource/grafana-app.png)
+
+Enter the dashboard and click Add panel in the upper right corner to add a panel to count the qps of the interface in the path dimension
+![grafana-app](./resource/grafana-qps.png)
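The qps panel can be backed by a PromQL expression along these lines. The metric name is derived from the go-zero histogram shown earlier, and the `app`/`path` labels come from our targets.json and the framework's http metrics; verify the exact names against your own /metrics output:

```
sum(rate(http_server_requests_duration_ms_count{app="shorturl-api"}[5m])) by (path)
```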
+
+The final effect is shown below. Different services can be filtered by service name. The panel shows the trend of qps with path /shorten.
+![grafana-app](./resource/grafana-panel.png)
+
+## Summary
+
+The above demonstrates the basic flow of go-zero service metric monitoring based on prometheus + grafana. In production, monitoring and analysis can be done along different dimensions according to the actual scenario. go-zero's monitoring metrics currently focus on http and rpc, which is clearly not enough for overall service monitoring: container resource monitoring, monitoring of dependencies such as mysql and redis, and custom metrics are still missing, and go-zero will continue to improve in this area. Hope this article helps you.
\ No newline at end of file
diff --git a/go-zero.dev/en/shorturl-en.md b/go-zero.dev/en/shorturl-en.md
new file mode 100644
index 00000000..fb67d770
--- /dev/null
+++ b/go-zero.dev/en/shorturl-en.md
@@ -0,0 +1,543 @@
+# Rapid development of microservices
+
+English | [简体中文](../cn/shorturl.md)
+
+## 0. Why is building microservices so difficult
+
+To build a well-working microservice, we need a lot of knowledge from different aspects.
+
+* basic functionalities
+ 1. concurrency control and rate limiting, to avoid being brought down by unexpected inbound traffic
+ 2. service discovery, to make sure new or terminated nodes are detected asap
+ 3. load balancing, to balance the traffic based on the throughput of nodes
+ 4. timeout control, to avoid nodes continuing to process timed-out requests
+ 5. circuit breaking, load shedding, fail fast, to protect failing nodes so they can recover asap
+
+* advanced functionalities
+ 1. authorization, make sure users can only access their own data
+ 2. tracing, to understand the whole system and locate the specific problem quickly
+ 3. logging, collects data and helps to backtrace problems
+ 4. observability, no metrics, no optimization
+
+For any point listed above, a long article would be needed to cover the theory and the implementation. For us developers, it's very difficult to understand all the concepts and make them happen in our systems, even though we can use frameworks that have served busy sites well. [go-zero](https://github.com/zeromicro/go-zero) was born for this purpose, especially for cloud-native microservice systems.
+
+As well, we always adhere to the idea that **prefer tools over conventions and documents**. We hope to reduce the boilerplate code as much as possible, and let developers focus on developing the business related code. For this purpose, we developed the tool `goctl`.
+
+Let’s take the shorturl microservice as a quick example to demonstrate how to quickly create microservices by using [go-zero](https://github.com/zeromicro/go-zero). After finishing this tutorial, you’ll find that it’s so easy to write microservices!
+
+## 1. What is a shorturl service
+
+A shorturl service converts a long url into a short one, using a well-designed algorithm.
+
+Writing this shorturl service demonstrates the complete flow of creating a microservice with go-zero. The algorithms and implementation details are deliberately simplified, so this shorturl service is not suitable for production use.
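As a taste of how simple the algorithm will be, the rpc logic written later in this tutorial derives the key by taking the first 6 hex characters of an MD5 digest. A standalone sketch of that idea (collision handling omitted, which is one reason it is not production-ready):

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
)

// shortenKey derives a 6-character key from a URL, the same idea the
// transform rpc service uses later (md5 hex, first 6 chars). Different
// urls can collide, which a real service would have to handle.
func shortenKey(url string) string {
	sum := md5.Sum([]byte(url))
	return hex.EncodeToString(sum[:])[:6]
}

func main() {
	fmt.Println(shortenKey("http://www.xiaoheiban.cn")) // f35b2a, as in section 10
}
```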
+
+## 2. Architecture of shorturl microservice
+
+
+
+* In this tutorial, I only use one rpc service, transform, for demonstration. This is not to say an API Gateway can only call one RPC service; it's just for simplicity here.
+* In production, we should try our best to isolate the data belonging to each service; each service should only use its own database.
+
+## 3. goctl generated code overview
+
+All modules with green background are generated, and will be enabled when necessary. The modules with red background are handwritten code, which is typically business logic code.
+
+* API Gateway
+
+
+
+* RPC
+
+
+
+* model
+
+
+
+And now, let’s walk through the complete flow of quickly creating a microservice with go-zero.
+
+## 4. Get started
+
+* install etcd, mysql, redis
+
+* install protoc-gen-go
+
+ ```
+ go get -u github.com/golang/protobuf/protoc-gen-go@v1.3.2
+ ```
+
+* install goctl
+
+ ```shell
+ GO111MODULE=on go get -u github.com/tal-tech/go-zero/tools/goctl
+ ```
+
+* create the working dir `shorturl` and `shorturl/api`
+
+* in `shorturl` dir, execute `go mod init shorturl` to initialize `go.mod`
+
+## 5. Write code for API Gateway
+
+* use goctl to generate `api/shorturl.api`
+
+ ```shell
+ goctl api -o shorturl.api
+ ```
+
+ for simplicity, the leading `info` block is removed, and the code looks like:
+
+ ```go
+ type (
+ expandReq {
+ shorten string `form:"shorten"`
+ }
+
+ expandResp {
+ url string `json:"url"`
+ }
+ )
+
+ type (
+ shortenReq {
+ url string `form:"url"`
+ }
+
+ shortenResp {
+ shorten string `json:"shorten"`
+ }
+ )
+
+ service shorturl-api {
+ @server(
+ handler: ShortenHandler
+ )
+ get /shorten(shortenReq) returns(shortenResp)
+
+ @server(
+ handler: ExpandHandler
+ )
+ get /expand(expandReq) returns(expandResp)
+ }
+ ```
+
+ the usage of the `type` keyword is the same as in go; `service` is used to define get/post/head/delete api requests, described below:
+
+ * `service shorturl-api {` defines the service name
+ * `@server` defines the properties that used in server side
+ * `handler` defines the handler name
+ * `get /shorten(shortenReq) returns(shortenResp)` defines this is a GET request, the request parameters, and the response parameters
+
+* generate the code for API Gateway by using goctl
+
+ ```shell
+ goctl api go -api shorturl.api -dir .
+ ```
+
+ the generated file structure looks like:
+
+ ```Plain Text
+ .
+ ├── api
+ │ ├── etc
+ │ │ └── shorturl-api.yaml // configuration file
+ │ ├── internal
+ │ │ ├── config
+ │ │ │ └── config.go // configuration definition
+ │ │ ├── handler
+ │ │ │ ├── expandhandler.go // implements expandHandler
+ │ │ │ ├── routes.go // routes definition
+ │ │ │ └── shortenhandler.go // implements shortenHandler
+ │ │ ├── logic
+ │ │ │ ├── expandlogic.go // implements ExpandLogic
+ │ │ │ └── shortenlogic.go // implements ShortenLogic
+ │ │ ├── svc
+ │ │ │ └── servicecontext.go // defines ServiceContext
+ │ │ └── types
+ │ │ └── types.go // defines request/response
+ │ ├── shorturl.api
+ │ └── shorturl.go // main entrance
+ ├── go.mod
+ └── go.sum
+ ```
+
+* start API Gateway service, listens on port 8888 by default
+
+ ```shell
+ go run shorturl.go -f etc/shorturl-api.yaml
+ ```
+
+* test API Gateway service
+
+ ```shell
+ curl -i "http://localhost:8888/shorten?url=http://www.xiaoheiban.cn"
+ ```
+
+ response like:
+
+ ```http
+ HTTP/1.1 200 OK
+ Content-Type: application/json
+ Date: Thu, 27 Aug 2020 14:31:39 GMT
+ Content-Length: 15
+
+ {"shortUrl":""}
+ ```
+
+ You can see that the API Gateway service did nothing but return a zero value. Let’s implement the business logic in the rpc service.
+
+* you can modify `internal/svc/servicecontext.go` to pass dependencies if needed
+
+* implement logic in package `internal/logic`
+
+* you can use goctl to generate code for clients based on the .api file
+
+* till now, client engineers can already work against the api without waiting for the server-side implementation
+
+## 6. Write code for transform rpc service
+
+- under directory `shorturl` create dir `rpc`
+
+* under directory `rpc/transform` create `transform.proto` file
+
+ ```shell
+ goctl rpc template -o transform.proto
+ ```
+
+ edit the file and make the code looks like:
+
+ ```protobuf
+ syntax = "proto3";
+
+ package transform;
+
+ message expandReq {
+ string shorten = 1;
+ }
+
+ message expandResp {
+ string url = 1;
+ }
+
+ message shortenReq {
+ string url = 1;
+ }
+
+ message shortenResp {
+ string shorten = 1;
+ }
+
+ service transformer {
+ rpc expand(expandReq) returns(expandResp);
+ rpc shorten(shortenReq) returns(shortenResp);
+ }
+ ```
+
+* use goctl to generate the rpc code, execute the following command in `rpc/transform`
+
+ ```shell
+ goctl rpc proto -src transform.proto -dir .
+ ```
+
+ the generated file structure looks like:
+
+ ```Plain Text
+ rpc/transform
+ ├── etc
+ │ └── transform.yaml // configuration file
+ ├── internal
+ │ ├── config
+ │ │ └── config.go // configuration definition
+ │ ├── logic
+ │ │ ├── expandlogic.go // implements expand logic
+ │ │ └── shortenlogic.go // implements shorten logic
+ │ ├── server
+ │ │ └── transformerserver.go // rpc handler
+ │ └── svc
+ │ └── servicecontext.go // defines service context, like dependencies
+ ├── pb
+ │ └── transform.pb.go
+ ├── transform.go // rpc main entrance
+ ├── transform.proto
+ └── transformer
+ ├── transformer.go // defines how rpc clients call this service
+ ├── transformer_mock.go // mock file, for test purpose
+ └── types.go // request/response definition
+ ```
+
+ just run it, looks like:
+
+ ```shell
+ $ go run transform.go -f etc/transform.yaml
+ Starting rpc server at 127.0.0.1:8080...
+ ```
+
+ you can change the listening port in file `etc/transform.yaml`.
+
+## 7. Modify API Gateway to call transform rpc service
+
+* modify the configuration file `shorturl-api.yaml`, add the following:
+
+ ```yaml
+ Transform:
+ Etcd:
+ Hosts:
+ - localhost:2379
+ Key: transform.rpc
+ ```
+
+ automatically discover the transform service by using etcd.
+
+* modify the file `internal/config/config.go`, add dependency on transform service:
+
+ ```go
+ type Config struct {
+ rest.RestConf
+ Transform zrpc.RpcClientConf // manual code
+ }
+ ```
+
+* modify the file `internal/svc/servicecontext.go`, like below:
+
+ ```go
+ type ServiceContext struct {
+ Config config.Config
+ Transformer transformer.Transformer // manual code
+ }
+
+ func NewServiceContext(c config.Config) *ServiceContext {
+ return &ServiceContext{
+ Config: c,
+ Transformer: transformer.NewTransformer(zrpc.MustNewClient(c.Transform)), // manual code
+ }
+ }
+ ```
+
+ passing the dependencies among services within ServiceContext.
+
+* modify the method `Expand` in the file `internal/logic/expandlogic.go`, looks like:
+
+ ```go
+ func (l *ExpandLogic) Expand(req types.ExpandReq) (*types.ExpandResp, error) {
+ // manual code start
+ resp, err := l.svcCtx.Transformer.Expand(l.ctx, &transformer.ExpandReq{
+ Shorten: req.Shorten,
+ })
+ if err != nil {
+ return nil, err
+ }
+
+ return &types.ExpandResp{
+ Url: resp.Url,
+ }, nil
+ // manual code stop
+ }
+ ```
+
+ by calling the method `Expand` of `transformer` to restore the shortened url.
+
+* modify the file `internal/logic/shortenlogic.go`, looks like:
+
+ ```go
+ func (l *ShortenLogic) Shorten(req types.ShortenReq) (*types.ShortenResp, error) {
+ // manual code start
+ resp, err := l.svcCtx.Transformer.Shorten(l.ctx, &transformer.ShortenReq{
+ Url: req.Url,
+ })
+ if err != nil {
+ return nil, err
+ }
+
+ return &types.ShortenResp{
+ Shorten: resp.Shorten,
+ }, nil
+ // manual code stop
+ }
+ ```
+
+ by calling the method `Shorten` of `transformer` to shorten the url.
+
+Till now, we’ve done the modification of API Gateway. All the manually added code is marked.
+
+## 8. Define the database schema, generate the code for CRUD+cache
+
+* under shorturl, create the directory `rpc/transform/model`: `mkdir -p rpc/transform/model`
+
+* under the directory `rpc/transform/model`, create a file called `shorturl.sql` with the following contents:
+
+ ```sql
+ CREATE TABLE `shorturl`
+ (
+ `shorten` varchar(255) NOT NULL COMMENT 'shorten key',
+ `url` varchar(255) NOT NULL COMMENT 'original url',
+ PRIMARY KEY(`shorten`)
+ ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
+ ```
+
+* create DB and table
+
+ ```sql
+ create database gozero;
+ ```
+
+ ```sql
+ source shorturl.sql;
+ ```
+
+* under the directory `rpc/transform/model` execute the following command to generate CRUD+cache code, `-c` means using `redis cache`
+
+ ```shell
+ goctl model mysql ddl -c -src shorturl.sql -dir .
+ ```
+
+ you can also generate the code from the database url by using `datasource` subcommand instead of `ddl`
+
+ the generated file structure looks like:
+
+ ```Plain Text
+ rpc/transform/model
+ ├── shorturl.sql
+ ├── shorturlmodel.go // CRUD+cache code
+ └── vars.go // const and var definition
+ ```
+
+## 9. Modify shorten/expand rpc to call crud+cache
+
+* modify `rpc/transform/etc/transform.yaml`, add the following:
+
+ ```yaml
+ DataSource: root:@tcp(localhost:3306)/gozero
+ Table: shorturl
+ Cache:
+ - Host: localhost:6379
+ ```
+
+ you can use multiple redis as cache. redis node and cluster are both supported.
+
+* modify `rpc/transform/internal/config/config.go`, like below:
+
+ ```go
+ type Config struct {
+ zrpc.RpcServerConf
+ DataSource string // manual code
+ Table string // manual code
+ Cache cache.CacheConf // manual code
+ }
+ ```
+
+ added the configuration for mysql and redis cache.
+
+* modify `rpc/transform/internal/svc/servicecontext.go`, like below:
+
+ ```go
+ type ServiceContext struct {
+ c config.Config
+ Model model.ShorturlModel // manual code
+ }
+
+ func NewServiceContext(c config.Config) *ServiceContext {
+ return &ServiceContext{
+ c: c,
+ Model: model.NewShorturlModel(sqlx.NewMysql(c.DataSource), c.Cache, c.Table), // manual code
+ }
+ }
+ ```
+
+* modify `rpc/transform/internal/logic/expandlogic.go`, like below:
+
+ ```go
+ func (l *ExpandLogic) Expand(in *transform.ExpandReq) (*transform.ExpandResp, error) {
+ // manual code start
+ res, err := l.svcCtx.Model.FindOne(in.Shorten)
+ if err != nil {
+ return nil, err
+ }
+
+ return &transform.ExpandResp{
+ Url: res.Url,
+ }, nil
+ // manual code stop
+ }
+ ```
+
+* modify `rpc/shorten/internal/logic/shortenlogic.go`, looks like:
+
+ ```go
+ func (l *ShortenLogic) Shorten(in *transform.ShortenReq) (*transform.ShortenResp, error) {
+ // manual code start, generates shorturl
+ key := hash.Md5Hex([]byte(in.Url))[:6]
+ _, err := l.svcCtx.Model.Insert(model.Shorturl{
+ Shorten: key,
+ Url: in.Url,
+ })
+ if err != nil {
+ return nil, err
+ }
+
+ return &transform.ShortenResp{
+ Shorten: key,
+ }, nil
+ // manual code stop
+ }
+ ```
+
+ till now, we have finished modifying the code; all the modified code is marked.
+
+## 10. Call shorten and expand services
+
+* call shorten api
+
+ ```shell
+ curl -i "http://localhost:8888/shorten?url=http://www.xiaoheiban.cn"
+ ```
+
+ response like:
+
+ ```http
+ HTTP/1.1 200 OK
+ Content-Type: application/json
+ Date: Sat, 29 Aug 2020 10:49:49 GMT
+ Content-Length: 21
+
+ {"shorten":"f35b2a"}
+ ```
+
+* call expand api
+
+ ```shell
+ curl -i "http://localhost:8888/expand?shorten=f35b2a"
+ ```
+
+ response like:
+
+ ```http
+ HTTP/1.1 200 OK
+ Content-Type: application/json
+ Date: Sat, 29 Aug 2020 10:51:53 GMT
+ Content-Length: 34
+
+ {"url":"http://www.xiaoheiban.cn"}
+ ```
+
+## 11. Benchmark
+
+Because benchmarking the write requests depends on the write throughput of mysql, we only benchmarked the expand api. We read the data from mysql and cache it in redis. I chose 100 hot keys hardcoded in shorten.lua to generate the benchmark.
+
+![Benchmark](images/shorturl-benchmark.png)
+
+as shown above, in my MacBook Pro, the QPS is like 30K+.
+
+## 12. Full code
+
+[https://github.com/zeromicro/zero-examples/tree/main/shorturl](https://github.com/zeromicro/zero-examples/tree/main/shorturl)
+
+## 13. Conclusion
+
+We always adhere to **prefer tools over conventions and documents**.
+
+go-zero is not only a framework, but also a tool to simplify and standardize the building of microservice systems.
+
+We not only keep the framework simple, but also encapsulate the complexity into the framework. Developers are freed from writing difficult boilerplate code, which gives us rapid development and fewer failures.
+
+The code generated by goctl includes lots of microservice components, like concurrency control, adaptive circuit breaker, adaptive load shedding, and auto cache control, making it easy to handle busy sites.
+
+If you have any ideas that can help us to improve the productivity, tell me any time! 👏
diff --git a/go-zero.dev/en/source.md b/go-zero.dev/en/source.md
new file mode 100644
index 00000000..1f1d75d4
--- /dev/null
+++ b/go-zero.dev/en/source.md
@@ -0,0 +1,5 @@
+# Source Code
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+* [demo](https://github.com/zeromicro/go-zero-demo)
\ No newline at end of file
diff --git a/go-zero.dev/en/stream.md b/go-zero.dev/en/stream.md
new file mode 100644
index 00000000..1184c529
--- /dev/null
+++ b/go-zero.dev/en/stream.md
@@ -0,0 +1,367 @@
+# Stream processing
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+Stream processing is a programming paradigm in which, given a data sequence (the stream source), a series of operations (functions) is applied to each element in the stream. Stream processing tools can also significantly improve development efficiency, letting programmers write effective, clean, and concise code.
+
+Streaming data processing is very common in daily work. For example, in business development we often record many business logs. These logs are usually sent to Kafka first, and a Job then consumes Kafka and writes them to elasticsearch. Along the way the logs are often processed, such as filtering invalid logs, doing some calculations, and recombining logs. The schematic diagram is as follows:
+![fx_log.png](./resource/fx_log.png)
+### fx
+[go-zero](https://github.com/zeromicro/go-zero) is a full-featured microservice framework with many very useful tools built in, including the streaming data processing tool [fx](https://github.com/zeromicro/go-zero/tree/master/core/fx). Let’s use a simple example to understand the tool:
+```go
+package main
+
+import (
+ "fmt"
+ "os"
+ "os/signal"
+ "syscall"
+ "time"
+
+ "github.com/tal-tech/go-zero/core/fx"
+)
+
+func main() {
+ ch := make(chan int)
+
+ go inputStream(ch)
+ go outputStream(ch)
+
+ c := make(chan os.Signal, 1)
+ signal.Notify(c, syscall.SIGTERM, syscall.SIGINT)
+ <-c
+}
+
+func inputStream(ch chan int) {
+ count := 0
+ for {
+ ch <- count
+ time.Sleep(time.Millisecond * 500)
+ count++
+ }
+}
+
+func outputStream(ch chan int) {
+ fx.From(func(source chan<- interface{}) {
+ for c := range ch {
+ source <- c
+ }
+ }).Walk(func(item interface{}, pipe chan<- interface{}) {
+ count := item.(int)
+ pipe <- count
+ }).Filter(func(item interface{}) bool {
+ itemInt := item.(int)
+ if itemInt%2 == 0 {
+ return true
+ }
+ return false
+ }).ForEach(func(item interface{}) {
+ fmt.Println(item)
+ })
+}
+```
+
+
+The inputStream function simulates generating stream data, and the outputStream function simulates processing it. The From function is the input of the stream; the Walk function acts on each item concurrently; the Filter function keeps the items for which it returns true and discards those for which it returns false; the ForEach function traverses and outputs each item.
+
+
+### Intermediate operations of streaming data processing
+
+
+There may be many intermediate operations in the processing of a stream, and each intermediate operation can act on the stream. Like workers on an assembly line, each worker operates on a part and hands the processed new part down the line; in the same way, each intermediate operation of stream processing returns a new stream when it completes.
+![7715f4b6-8739-41ac-8c8c-04d187172e9d.png](./resource/7715f4b6-8739-41ac-8c8c-04d187172e9d.png)
+Intermediate operations of fx stream processing:
+
+| Operation function | Features | Input |
+| --- | --- | --- |
+| Distinct | Remove duplicate items | KeyFunc, returns the key used for deduplication |
+| Filter | Filter out items that do not meet the condition | FilterFunc, Option controls concurrency |
+| Group | Group items | KeyFunc, group by key |
+| Head | Take the first n items and return a new stream | int64, number to keep |
+| Map | Object conversion | MapFunc, Option controls concurrency |
+| Merge | Merge items into a slice and generate a new stream | |
+| Reverse | Reverse the items | |
+| Sort | Sort the items | LessFunc implements the sorting logic |
+| Tail | Like Head, but the last n items form a new stream | int64, number to keep |
+| Walk | Act on each item | WalkFunc, Option controls concurrency |
+
+
+
+The following figure shows each step and the result of each step:
+
+
+![3aefec98-56eb-45a6-a4b2-9adbdf4d63c0.png](./resource/3aefec98-56eb-45a6-a4b2-9adbdf4d63c0.png)
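The assembly-line picture maps directly onto Go channels. Here is a dependency-free sketch (names are illustrative, not fx's API) of two chained stages, each reading the previous stage's channel and returning a new one, just as every fx intermediate operation returns a new Stream:

```go
package main

import "fmt"

// gen emits the source items into a channel and closes it when done.
func gen(nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for _, n := range nums {
			out <- n
		}
	}()
	return out
}

// square is one "worker" on the line: read, transform, write a new stream.
func square(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for n := range in {
			out <- n * n
		}
	}()
	return out
}

// keepEven is the next worker: it only forwards items that pass the filter.
func keepEven(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for n := range in {
			if n%2 == 0 {
				out <- n
			}
		}
	}()
	return out
}

func main() {
	for n := range keepEven(square(gen(1, 2, 3, 4, 5))) {
		fmt.Println(n) // prints 4, then 16
	}
}
```

Chaining more stages is just more function composition; fx wraps exactly this pattern behind its Stream type and adds concurrency control on top.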
+
+
+### Usage and principle analysis
+
+
+#### From
+
+
+Construct a stream through the From function and return the Stream, and the stream data is stored through the channel:
+
+
+```go
+// Example
+s := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 0}
+fx.From(func(source chan<- interface{}) {
+ for _, v := range s {
+ source <- v
+ }
+})
+
+// Source Code
+func From(generate GenerateFunc) Stream {
+ source := make(chan interface{})
+
+ threading.GoSafe(func() {
+ defer close(source)
+ generate(source)
+ })
+
+ return Range(source)
+}
+```
+
+
+#### Filter
+
+
+The Filter function provides the function of filtering items, FilterFunc defines the filtering logic true to retain the item, and false to not retain:
+
+
+```go
+// Example: Keep even numbers
+s := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 0}
+fx.From(func(source chan<- interface{}) {
+ for _, v := range s {
+ source <- v
+ }
+}).Filter(func(item interface{}) bool {
+ if item.(int)%2 == 0 {
+ return true
+ }
+ return false
+})
+
+// Source Code
+func (p Stream) Filter(fn FilterFunc, opts ...Option) Stream {
+ return p.Walk(func(item interface{}, pipe chan<- interface{}) {
+ // Execute the filter function true to retain, false to discard
+ if fn(item) {
+ pipe <- item
+ }
+ }, opts...)
+}
+```
+
+
+#### Group
+
+
+Group groups the stream data. The key of the group needs to be defined. After the data is grouped, it is stored in the channel as slices:
+
+
+```go
+// Example: group by whether the first character is "g" or "p"; anything else goes into a third group
+ss := []string{"golang", "google", "php", "python", "java", "c++"}
+fx.From(func(source chan<- interface{}) {
+    for _, s := range ss {
+        source <- s
+    }
+}).Group(func(item interface{}) interface{} {
+    if strings.HasPrefix(item.(string), "g") {
+        return "g"
+    } else if strings.HasPrefix(item.(string), "p") {
+        return "p"
+    }
+    return ""
+}).ForEach(func(item interface{}) {
+    fmt.Println(item)
+})
+
+// Source Code
+func (p Stream) Group(fn KeyFunc) Stream {
+ // Define group storage map
+ groups := make(map[interface{}][]interface{})
+ for item := range p.source {
+ // User-defined group key
+ key := fn(item)
+ // Group the same key into a group
+ groups[key] = append(groups[key], item)
+ }
+
+ source := make(chan interface{})
+ go func() {
+ for _, group := range groups {
+ // A group of data with the same key is written to the channel
+ source <- group
+ }
+ close(source)
+ }()
+
+ return Range(source)
+}
+```
+
+
+#### Reverse
+
+
+Reverse reverses the elements in the stream:
+
+
+![7e0fd2b8-d4c1-4130-a216-a7d3d4301116.png](./resource/7e0fd2b8-d4c1-4130-a216-a7d3d4301116.png)
+
+
+```go
+// Example
+fx.Just(1, 2, 3, 4, 5).Reverse().ForEach(func(item interface{}) {
+ fmt.Println(item)
+})
+
+// Source Code
+func (p Stream) Reverse() Stream {
+ var items []interface{}
+ // Get the data in the stream
+ for item := range p.source {
+ items = append(items, item)
+ }
+ // Reversal algorithm
+ for i := len(items)/2 - 1; i >= 0; i-- {
+ opp := len(items) - 1 - i
+ items[i], items[opp] = items[opp], items[i]
+ }
+
+ // Write stream
+ return Just(items...)
+}
+```
+
+
+#### Distinct
+
+
+Distinct de-duplicates elements in the stream. De-duplication is commonly used in business development, for example to de-duplicate user IDs:
+
+
+```go
+// Example
+fx.Just(1, 2, 2, 2, 3, 3, 4, 5, 6).Distinct(func(item interface{}) interface{} {
+ return item
+}).ForEach(func(item interface{}) {
+ fmt.Println(item)
+})
+// Output: 1,2,3,4,5,6
+
+// Source Code
+func (p Stream) Distinct(fn KeyFunc) Stream {
+ source := make(chan interface{})
+
+ threading.GoSafe(func() {
+ defer close(source)
+ // Deduplication is performed by key, and only one of the same key is kept
+ keys := make(map[interface{}]lang.PlaceholderType)
+ for item := range p.source {
+ key := fn(item)
+ // The key is not retained if it exists
+ if _, ok := keys[key]; !ok {
+ source <- item
+ keys[key] = lang.Placeholder
+ }
+ }
+ })
+
+ return Range(source)
+}
+```
+
+
+#### Walk
+
+
+The Walk function acts on each item in the stream concurrently. The number of workers can be set through WithWorkers; the default is 16 and the minimum is 1. If unlimitedWorkers is set to true, the number of workers is unlimited, but the number of items buffered in the output stream is still limited by defaultWorkers. In WalkFunc, users decide what is written to the downstream stream: each input item may produce zero, one, or multiple output elements:
+
+
+```go
+// Example
+fx.Just("aaa", "bbb", "ccc").Walk(func(item interface{}, pipe chan<- interface{}) {
+ newItem := strings.ToUpper(item.(string))
+ pipe <- newItem
+}).ForEach(func(item interface{}) {
+ fmt.Println(item)
+})
+
+// Source Code
+func (p Stream) walkLimited(fn WalkFunc, option *rxOptions) Stream {
+ pipe := make(chan interface{}, option.workers)
+
+ go func() {
+ var wg sync.WaitGroup
+ pool := make(chan lang.PlaceholderType, option.workers)
+
+ for {
+ // Control the number of concurrent
+ pool <- lang.Placeholder
+ item, ok := <-p.source
+ if !ok {
+ <-pool
+ break
+ }
+
+ wg.Add(1)
+ go func() {
+ defer func() {
+ wg.Done()
+ <-pool
+ }()
+ // Acting on every element
+ fn(item, pipe)
+ }()
+ }
+
+ // Wait for processing to complete
+ wg.Wait()
+ close(pipe)
+ }()
+
+ return Range(pipe)
+}
+```
+
+
+### Concurrent processing
+
+
+In addition to stream data processing, the fx tool also provides function parallelism. In microservices, implementing a feature often requires calling multiple services; processing these dependencies concurrently can effectively reduce the overall latency and improve service performance.
+
+
+![b97bf7df-1781-436e-bf04-f1dd90c60537.png](./resource/b97bf7df-1781-436e-bf04-f1dd90c60537.png)
+
+
+```go
+fx.Parallel(func() {
+ userRPC()
+}, func() {
+ accountRPC()
+}, func() {
+ orderRPC()
+})
+```
+
+
+Note that fx.Parallel performs the dependencies in parallel without returning errors. If you need an error return, or want a failed dependency to end all the requests immediately, use the [MapReduce](https://gocn.vip/topics/10941) tool instead.
+
+
+### Summary
+
+
+This article introduced the basic concepts of stream processing and the stream processing tool fx in go-zero. Stream processing scenarios are common in real production; we hope this article gives you some inspiration and helps you handle them better in your work.
+
+
+
+
+
+
diff --git a/go-zero.dev/en/summary.md b/go-zero.dev/en/summary.md
new file mode 100644
index 00000000..d0114eb4
--- /dev/null
+++ b/go-zero.dev/en/summary.md
@@ -0,0 +1,84 @@
+# Summary
+
+* [Introduction](README.md)
+* [About Us](about-us.md)
+* [Join Us](join-us.md)
+* [Concepts](concept-introduction.md)
+* [Quick Start](quick-start.md)
+ * [Monolithic Service](monolithic-service.md)
+ * [Micro Service](micro-service.md)
+* [Framework Design](framework-design.md)
+ * [Go-Zero Design](go-zero-design.md)
+ * [Go-Zero Features](go-zero-features.md)
+ * [API IDL](api-grammar.md)
+ * [API Directory Structure](api-dir.md)
+ * [RPC Directory Structure](rpc-dir.md)
+* [Project Development](project-dev.md)
+ * [Prepare](prepare.md)
+ * [Golang Installation](golang-install.md)
+ * [Go Module Configuration](gomod-config.md)
+ * [Goctl Installation](goctl-install.md)
+ * [protoc & protoc-gen-go Installation](protoc-install.md)
+ * [More](prepare-other.md)
+ * [Development Rules](dev-specification.md)
+ * [Naming Rules](naming-spec.md)
+ * [Route Rules](route-naming-spec.md)
+ * [Coding Rules](coding-spec.md)
+ * [Development Flow](dev-flow.md)
+ * [Configuration Introduction](config-introduction.md)
+ * [API Configuration](api-config.md)
+ * [RPC Configuration](rpc-config.md)
+ * [Business Development](business-dev.md)
+ * [Directory Structure](service-design.md)
+ * [Model Generation](model-gen.md)
+ * [API Coding](api-coding.md)
+ * [Business Coding](business-coding.md)
+ * [JWT](jwt.md)
+ * [Middleware](middleware.md)
+ * [RPC Implement & Call](rpc-call.md)
+ * [Error Handling](error-handle.md)
+ * [CI/CD](ci-cd.md)
+ * [Service Deployment](service-deployment.md)
+ * [Log Collection](log-collection.md)
+ * [Trace](trace.md)
+ * [Monitor](service-monitor.md)
+* [Goctl](goctl.md)
+ * [Commands & Flags](goctl-commands.md)
+ * [API Commands](goctl-api.md)
+ * [RPC Commands](goctl-rpc.md)
+ * [Model Commands](goctl-model.md)
+ * [Plugin Commands](goctl-plugin.md)
+ * [More Commands](goctl-other.md)
+* [Template](template-manage.md)
+ * [Command](template-cmd.md)
+ * [Custom](template.md)
+* [Extended](extended-reading.md)
+ * [logx](logx.md)
+ * [bloom](bloom.md)
+ * [executors](executors.md)
+ * [fx](fx.md)
+ * [mysql](mysql.md)
+ * [redis-lock](redis-lock.md)
+ * [periodlimit](periodlimit.md)
+ * [tokenlimit](tokenlimit.md)
+ * [TimingWheel](timing-wheel.md)
+* [Tools](tool-center.md)
+ * [Intellij Plugin](intellij.md)
+ * [VSCode Plugin](vscode.md)
+* [Plugins](plugin-center.md)
+* [Learning Resources](learning-resource.md)
+ * [Wechat](wechat.md)
+ * [Night](goreading.md)
+ * [OpenTalk](gotalk.md)
+* [User Practise](practise.md)
+ * [Persistent layer cache](redis-cache.md)
+ * [Business layer cache](buiness-cache.md)
+ * [Queue](go-queue.md)
+ * [Middle Ground System](datacenter.md)
+ * [Stream Handler](stream.md)
+ * [Online Exchange](online-exchange.md)
+* [Contributor](contributor.md)
+* [Document Contribute](doc-contibute.md)
+* [Error](error.md)
+* [Source Code](source.md)
+
diff --git a/go-zero.dev/en/template-cmd.md b/go-zero.dev/en/template-cmd.md
new file mode 100644
index 00000000..ec7429b7
--- /dev/null
+++ b/go-zero.dev/en/template-cmd.md
@@ -0,0 +1,113 @@
+# Template Operation
+
+Templates are the basis of data-driven generation: all code generation (rest api, rpc, model, docker, kube) relies on templates.
+By default, the generator uses the built-in in-memory templates. Developers who need to customize a template must first dump it to disk and modify it; the next code generation will then load the templates from the specified path.
+
+## Help
+```text
+NAME:
+ goctl template - template operation
+
+USAGE:
+ goctl template command [command options] [arguments...]
+
+COMMANDS:
+ init initialize the all templates(force update)
+ clean clean the all cache templates
+ update update template of the target category to the latest
+ revert revert the target template to the latest
+
+OPTIONS:
+ --help, -h show help
+```
+
+## Init
+```text
+NAME:
+ goctl template init - initialize the all templates(force update)
+
+USAGE:
+ goctl template init [command options] [arguments...]
+
+OPTIONS:
+ --home value the goctl home path of the template
+```
+
+## Clean
+```text
+NAME:
+ goctl template clean - clean the all cache templates
+
+USAGE:
+ goctl template clean [command options] [arguments...]
+
+OPTIONS:
+ --home value the goctl home path of the template
+```
+
+## Update
+```text
+NAME:
+ goctl template update - update template of the target category to the latest
+
+USAGE:
+ goctl template update [command options] [arguments...]
+
+OPTIONS:
+ --category value, -c value the category of template, enum [api,rpc,model,docker,kube]
+ --home value the goctl home path of the template
+```
+
+## Revert
+```text
+NAME:
+ goctl template revert - revert the target template to the latest
+
+USAGE:
+ goctl template revert [command options] [arguments...]
+
+OPTIONS:
+ --category value, -c value the category of template, enum [api,rpc,model,docker,kube]
+ --name value, -n value the target file name of template
+ --home value the goctl home path of the template
+```
+
+> [!TIP]
+>
+> `--home` Specify the template storage path
+
+## Template loading
+
+You can specify the folder where the templates are located with `--home` during code generation. The commands that already support specifying the template directory are:
+
+- `goctl api go` Details can be found in `goctl api go --help` for help
+- `goctl docker` Details can be viewed with `goctl docker --help`
+- `goctl kube` Details can be viewed with `goctl kube --help`
+- `goctl rpc new` Details can be viewed with `goctl rpc new --help`
+- `goctl rpc proto` Details can be viewed with `goctl rpc proto --help`
+- `goctl model mysql ddl` Details can be viewed with `goctl model mysql ddl --help`
+- `goctl model mysql datasource` Details can be viewed with `goctl model mysql datasource --help`
+- `goctl model postgresql datasource` Details can be viewed with `goctl model mysql datasource --help`
+- `goctl model mongo` Details can be viewed with `goctl model mongo --help`
+
+The default (when `--home` is not specified) is to read from the `$HOME/.goctl` directory.
+
+## Example
+* Initialize the template to the specified `$HOME/template` directory
+```text
+$ goctl template init --home $HOME/template
+```
+
+```text
+Templates are generated in /Users/anqiansong/template, edit on your risk!
+```
+
+* Greet rpc generation using `$HOME/template` template
+```text
+$ goctl rpc new greet --home $HOME/template
+```
+
+```text
+Done
+```
\ No newline at end of file
diff --git a/go-zero.dev/en/template-manage.md b/go-zero.dev/en/template-manage.md
new file mode 100644
index 00000000..acfd5251
--- /dev/null
+++ b/go-zero.dev/en/template-manage.md
@@ -0,0 +1,4 @@
+# Template
+
+- [Command](template-cmd.md)
+- [Custom](template.md)
\ No newline at end of file
diff --git a/go-zero.dev/en/template.md b/go-zero.dev/en/template.md
new file mode 100644
index 00000000..7e122e31
--- /dev/null
+++ b/go-zero.dev/en/template.md
@@ -0,0 +1,160 @@
+# Template Modification
+
+## Scenario
+Implement a uniformly formatted body accordingly, in the following format.
+```json
+{
+ "code": 0,
+ "msg": "OK",
+ "data": {}// ①
+}
+```
+
+① The actual response data
+
+> [!TIP]
+> The code generated by `go-zero` does not wrap the response body like this by default
+
+## Preparation
+We write a `Response` method in a `response` package under the project whose `module` is `greet`, with a directory tree similar to the following:
+```text
+greet
+├── response
+│ └── response.go
+└── xxx...
+```
+
+The code is as follows
+```go
+package response
+
+import (
+ "net/http"
+
+ "github.com/tal-tech/go-zero/rest/httpx"
+)
+
+type Body struct {
+ Code int `json:"code"`
+ Msg string `json:"msg"`
+ Data interface{} `json:"data,omitempty"`
+}
+
+func Response(w http.ResponseWriter, resp interface{}, err error) {
+ var body Body
+ if err != nil {
+ body.Code = -1
+ body.Msg = err.Error()
+ } else {
+ body.Msg = "OK"
+ body.Data = resp
+ }
+ httpx.OkJson(w, body)
+}
+```
+
+## Modify the `handler` template
+```shell
+$ vim ~/.goctl/api/handler.tpl
+```
+
+Replace the template with the following
+```go
+package handler
+
+import (
+ "net/http"
+ "greet/response"// ①
+
+ {{.ImportPackages}}
+)
+
+func {{.HandlerName}}(ctx *svc.ServiceContext) http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ {{if .HasRequest}}var req types.{{.RequestType}}
+ if err := httpx.Parse(r, &req); err != nil {
+ httpx.Error(w, err)
+ return
+ }{{end}}
+
+ l := logic.New{{.LogicType}}(r.Context(), ctx)
+ {{if .HasResp}}resp, {{end}}err := l.{{.Call}}({{if .HasRequest}}req{{end}})
+		{{if .HasResp}}response.Response(w, resp, err){{else}}response.Response(w, nil, err){{end}} // ②
+
+ }
+}
+```
+
+① Replace with your real `response` package name, for reference only
+
+② Customized template content
+
+> [!TIP]
+>
+> 1.If there is no local `~/.goctl/api/handler.tpl` file, you can initialize it with the template initialization command `goctl template init`
+
+## Comparison
+* Before
+```go
+func GreetHandler(ctx *svc.ServiceContext) http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ var req types.Request
+ if err := httpx.Parse(r, &req); err != nil {
+ httpx.Error(w, err)
+ return
+ }
+
+ l := logic.NewGreetLogic(r.Context(), ctx)
+ resp, err := l.Greet(req)
+ // The following content will be replaced by custom templates
+ if err != nil {
+ httpx.Error(w, err)
+ } else {
+ httpx.OkJson(w, resp)
+ }
+ }
+}
+```
+* After
+```go
+func GreetHandler(ctx *svc.ServiceContext) http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ var req types.Request
+ if err := httpx.Parse(r, &req); err != nil {
+ httpx.Error(w, err)
+ return
+ }
+
+ l := logic.NewGreetLogic(r.Context(), ctx)
+ resp, err := l.Greet(req)
+		response.Response(w, resp, err)
+ }
+}
+```
+
+## Comparison of response body
+
+* Before
+```json
+{
+ "message": "Hello go-zero!"
+}
+```
+
+* After
+```json
+{
+ "code": 0,
+ "msg": "OK",
+ "data": {
+ "message": "Hello go-zero!"
+ }
+}
+```
+
+## Summary
+This document only describes the template customization process for the http response example. Other template customization scenarios include:
+* adding kmq to the model layer
+* making the model generation options take effect on the generated model instances
+* customizing the http response format
+* ...
\ No newline at end of file
diff --git a/go-zero.dev/en/timing-wheel.md b/go-zero.dev/en/timing-wheel.md
new file mode 100644
index 00000000..6b4ec3b3
--- /dev/null
+++ b/go-zero.dev/en/timing-wheel.md
@@ -0,0 +1,321 @@
+# TimingWheel
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+This article introduces **delayed operations** in `go-zero`. For delayed operations, two options are available:
+
+
+1. `Timer`: a timer maintains a priority queue, executes tasks when they are due, and stores the pending tasks in a map
+2. The `timingWheel` in `collection`: maintains an array of slots for task groups, each slot holding a doubly linked list of tasks; on every tick, the timer executes the tasks in one slot.
+
+
+
+Scheme 2 reduces task maintenance from a priority queue's `O(log n)` inserts and deletes to a doubly linked list's `O(1)`, and executing the tasks due at a time point only requires traversing that slot's list, `O(N)`.
+
+
+## timingWheel in cache
+
+
+First, let's look at the use of TimingWheel in the cache of collection:
+
+
+```go
+timingWheel, err := NewTimingWheel(time.Second, slots, func(k, v interface{}) {
+ key, ok := k.(string)
+ if !ok {
+ return
+ }
+ cache.Del(key)
+})
+if err != nil {
+ return nil, err
+}
+
+cache.timingWheel = timingWheel
+```
+
+
+This is the initialization of `cache`, which also initializes a `timingWheel` to handle key expiration. The parameters represent, in order:
+
+
+- `interval`: Time division scale
+- `numSlots`: time slots
+- `execute`: execute a function at a point in time
+
+
+
+The function executed in the `cache` **deletes the expired key**, and the expiration is driven by the `timingWheel` as it advances time.
+
+
+**Next, let's walk through how the cache uses the timingWheel.**
+
+
+### Initialization
+
+
+```go
+func newTimingWheelWithClock(interval time.Duration, numSlots int, execute Execute, ticker timex.Ticker) (
+ *TimingWheel, error) {
+ tw := &TimingWheel{
+ interval: interval, // Single time grid time interval
+ ticker: ticker, // Timer, do time push, advance by interval
+ slots: make([]*list.List, numSlots), // Time wheel
+ timers: NewSafeMap(), // Store the map of task{key, value} [parameters needed to execute execute]
+ tickedPos: numSlots - 1, // at previous virtual circle
+ execute: execute, // Execution function
+ numSlots: numSlots, // Initialize slots num
+ setChannel: make(chan timingEntry), // The following channels are used for task delivery
+ moveChannel: make(chan baseEntry),
+ removeChannel: make(chan interface{}),
+ drainChannel: make(chan func(key, value interface{})),
+ stopChannel: make(chan lang.PlaceholderType),
+ }
+ // Prepare all the lists stored in the slot
+ tw.initSlots()
+ // Open asynchronous coroutine, use channel for task communication and delivery
+ go tw.run()
+
+ return tw, nil
+}
+```
+
+
+![76108cc071154e2faa66eada81857fb0~tplv-k3u1fbpfcp-zoom-1.image.png](./resource/76108cc071154e2faa66eada81857fb0_tplv-k3u1fbpfcp-zoom-1.image.png)
+
+
+The above is a more intuitive display of the **"time wheel"** of the `timingWheel`, and the details of the advancement will be explained later around this picture.
+
+
+`go tw.run()` opens a coroutine for time promotion:
+
+
+```go
+func (tw *TimingWheel) run() {
+ for {
+ select {
+ // Timer do time push -> scanAndRunTasks()
+ case <-tw.ticker.Chan():
+ tw.onTick()
+ // add task will enter task into setChannel
+ case task := <-tw.setChannel:
+ tw.setTask(&task)
+ ...
+ }
+ }
+}
+```
+
+
+As you can see, the `timer` starts running at initialization time: it ticks every `interval`, and the underlying loop keeps taking tasks from the `list` in each `slot` and handing them to `execute`.
+
+
+![3bbddc1ebb79455da91dfcf3da6bc72f~tplv-k3u1fbpfcp-zoom-1.image.png](./resource/3bbddc1ebb79455da91dfcf3da6bc72f_tplv-k3u1fbpfcp-zoom-1.image.png)
+
+
+### Task Operation
+
+
+The next step is to set the `cache key`:
+
+
+```go
+func (c *Cache) Set(key string, value interface{}) {
+ c.lock.Lock()
+ _, ok := c.data[key]
+ c.data[key] = value
+ c.lruCache.add(key)
+ c.lock.Unlock()
+
+ expiry := c.unstableExpiry.AroundDuration(c.expire)
+ if ok {
+ c.timingWheel.MoveTimer(key, expiry)
+ } else {
+ c.timingWheel.SetTimer(key, value, expiry)
+ }
+}
+```
+
+
+1. First look at whether this key exists in the `data map`
+1. If it exists, update `expire` -> `MoveTimer()`
+1. Set the key for the first time -> `SetTimer()`
+
+
+
+So the usage of `timingWheel` is clear: developers can `add` or `update` tasks according to their needs.
+
+
+Meanwhile, following the source code, we find that both `SetTimer()` and `MoveTimer()` deliver tasks to channels, and the goroutine started in `run()` keeps taking the task operations out of these channels.
+
+
+> `SetTimer() -> setTask()`:
+> - task does not exist: `getPosition -> pushBack to list -> setPosition`
+> - task exists: `get from timers -> moveTask()`
+>
+> `MoveTimer() -> moveTask()`
+
+
+
+From the above call chain, there is a function that will be called: `moveTask()`
+
+
+```go
+func (tw *TimingWheel) moveTask(task baseEntry) {
+ // timers: Map => Get [positionEntry「pos, task」] by key
+ val, ok := tw.timers.Get(task.key)
+ if !ok {
+ return
+ }
+
+ timer := val.(*positionEntry)
+	// If the delay is less than one time grid interval, there is no smaller scale, so the task is executed immediately
+ if task.delay < tw.interval {
+ threading.GoSafe(func() {
+ tw.execute(timer.item.key, timer.item.value)
+ })
+ return
+ }
+	// If the delay > interval, calculate the new pos and circle in the time wheel from the delay
+ pos, circle := tw.getPositionAndCircle(task.delay)
+ if pos >= timer.pos {
+ timer.item.circle = circle
+ // Move offset before and after recording. To re-enter the team for later process
+ timer.item.diff = pos - timer.pos
+ } else if circle > 0 {
+ // Move to the next layer and convert circle to part of diff
+ circle--
+ timer.item.circle = circle
+ // Because it is an array, add numSlots [that is equivalent to going to the next level]
+ timer.item.diff = tw.numSlots + pos - timer.pos
+ } else {
+ // If the offset is advanced, the task is still in the first layer at this time
+ // Mark to delete the old task, and re-enter the team, waiting to be executed
+ timer.item.removed = true
+ newItem := &timingEntry{
+ baseEntry: task,
+ value: timer.item.value,
+ }
+ tw.slots[pos].PushBack(newItem)
+ tw.setTimerPosition(pos, newItem)
+ }
+}
+```
+
+
+The above process has the following situations:
+
+
+- `delay < interval`: the delay is smaller than one time grid, so the task is executed immediately
+- `newPos >= oldPos`: update the `circle` and record the forward offset in `diff`, so the task can be re-queued later
+- `newCircle > 0`: drop to the next layer and convert the circle into part of `diff`, so `diff += numSlots`
+- otherwise the delay was shortened: mark the old task as removed, re-add the task to the list, and wait for the next loop to execute it
+
+
+
+### Execute
+
+
+In the previous initialization, the timer in `run()` kept advancing, and the process of advancing was mainly to pass the tasks in the list to the executed `execute func`. Let's start with the execution of the timer:
+
+
+```go
+// The timer fires every interval
+func (tw *TimingWheel) onTick() {
+ // Update the current execution tick position every time it is executed
+ tw.tickedPos = (tw.tickedPos + 1) % tw.numSlots
+ // Get the doubly linked list of stored tasks in the tick position at this time
+ l := tw.slots[tw.tickedPos]
+ tw.scanAndRunTasks(l)
+}
+```
+
+
+Next is how to execute `execute`:
+
+
+```go
+func (tw *TimingWheel) scanAndRunTasks(l *list.List) {
+ // Store the task{key, value} that needs to be executed at present [parameters required by execute, which are passed to execute in turn]
+ var tasks []timingTask
+
+ for e := l.Front(); e != nil; {
+ task := e.Value.(*timingEntry)
+ // Mark the deletion, do the real deletion in scan "Delete the map data"
+ if task.removed {
+ next := e.Next()
+ l.Remove(e)
+ tw.timers.Del(task.key)
+ e = next
+ continue
+ } else if task.circle > 0 {
+ // The current execution point has expired, but it is not at the first level at the same time, so now that the current level has been completed, it will drop to the next level
+ // But did not modify pos
+ task.circle--
+ e = e.Next()
+ continue
+ } else if task.diff > 0 {
+ // Because the diff has been marked before, you need to enter the queue again
+ next := e.Next()
+ l.Remove(e)
+ pos := (tw.tickedPos + task.diff) % tw.numSlots
+ tw.slots[pos].PushBack(task)
+ tw.setTimerPosition(pos, task)
+ task.diff = 0
+ e = next
+ continue
+ }
+ // The above cases are all cases that cannot be executed, and those that can be executed will be added to tasks
+ tasks = append(tasks, timingTask{
+ key: task.key,
+ value: task.value,
+ })
+ next := e.Next()
+ l.Remove(e)
+ tw.timers.Del(task.key)
+ e = next
+ }
+ // for range tasks, and then execute each task->execute
+ tw.runTasks(tasks)
+}
+```
+
+
+The specific branches are explained in the comments. Read it together with the previous `moveTask()`: the decrement of `circle` and the calculation of `diff` are the keys that link the two functions.
+
+
+As for the calculation of `diff`, the calculation of `pos, circle` is involved:
+
+
+```go
+// interval: 4min, d: 60min, numSlots: 16, tickedPos = 15
+// steps = 15, pos = 14, circle = 0
+func (tw *TimingWheel) getPositionAndCircle(d time.Duration) (pos int, circle int) {
+ steps := int(d / tw.interval)
+ pos = (tw.tickedPos + steps) % tw.numSlots
+ circle = (steps - 1) / tw.numSlots
+ return
+}
+```
+
+
+The above process can be simplified as follows (taking the initial `tickedPos = numSlots - 1`):
+
+```go
+steps = d / interval
+pos = (steps - 1) % numSlots
+circle = (steps - 1) / numSlots
+```
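A runnable version of this calculation, using the sample values from the comment in the source above:

```go
package main

import "fmt"

// getPositionAndCircle mirrors the calculation shown above, with
// tickedPos passed in explicitly.
func getPositionAndCircle(steps, numSlots, tickedPos int) (pos, circle int) {
	pos = (tickedPos + steps) % numSlots
	circle = (steps - 1) / numSlots
	return
}

func main() {
	// interval: 4min, d: 60min -> steps = 15; numSlots: 16, tickedPos: 15
	pos, circle := getPositionAndCircle(15, 16, 15)
	fmt.Println(pos, circle) // 14 0
}
```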
+
+
+
+## Summary
+
+
+The `timingWheel` is driven by its timer: as time advances, the tasks in the doubly linked `list` of the **current time grid** are taken out and passed to `execute`.
+
+
+In terms of time layering, the wheel has `circle` layers, so the same `numSlots` can be reused continuously: as the timer keeps looping, tasks in upper layers gradually drop to lower layers and are executed once they reach the first layer.
+
+
+There are many useful component tools in `go-zero`. Good use of tools is of great help to improve service performance and development efficiency. I hope this article can bring you some gains.
diff --git a/go-zero.dev/en/tokenlimit.md b/go-zero.dev/en/tokenlimit.md
new file mode 100644
index 00000000..71e52086
--- /dev/null
+++ b/go-zero.dev/en/tokenlimit.md
@@ -0,0 +1,155 @@
+# tokenlimit
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+This section introduces the token bucket limiter (tokenlimit) and its basic usage.
+
+## Usage
+
+```go
+const (
+ burst = 100
+ rate = 100
+ seconds = 5
+)
+
+store := redis.NewRedis("localhost:6379", "node", "")
+fmt.Println(store.Ping())
+// New tokenLimiter
+limiter := limit.NewTokenLimiter(rate, burst, store, "rate-test")
+timer := time.NewTimer(time.Second * seconds)
+quit := make(chan struct{})
+defer timer.Stop()
+go func() {
+ <-timer.C
+ close(quit)
+}()
+
+var allowed, denied int32
+var wait sync.WaitGroup
+for i := 0; i < runtime.NumCPU(); i++ {
+ wait.Add(1)
+ go func() {
+ for {
+ select {
+ case <-quit:
+ wait.Done()
+ return
+ default:
+ if limiter.Allow() {
+ atomic.AddInt32(&allowed, 1)
+ } else {
+ atomic.AddInt32(&denied, 1)
+ }
+ }
+ }
+ }()
+}
+
+wait.Wait()
+fmt.Printf("allowed: %d, denied: %d, qps: %d\n", allowed, denied, (allowed+denied)/seconds)
+```
+
+
+## tokenlimit
+
+On the whole, the token bucket production logic is as follows:
+- The average sending rate configured by the user is r, then a token is added to the bucket every 1/r second;
+- Assume that at most b tokens can be stored in the bucket. If the token bucket is full when the token arrives, then the token will be discarded;
+- When the traffic enters at the rate v, the token is taken from the bucket at the rate v, the traffic that gets the token passes, and the traffic that does not get the token does not pass, and the fuse logic is executed;
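The production rules above can be sketched as plain arithmetic. This is a simplified, single-process model for illustration; the real implementation keeps this state in redis:

```go
package main

import "fmt"

// refill adds tokens produced during elapsedSeconds at the given rate,
// discarding any tokens beyond the bucket capacity.
func refill(tokens, rate, capacity, elapsedSeconds float64) float64 {
	filled := tokens + elapsedSeconds*rate
	if filled > capacity {
		return capacity // bucket full: excess tokens are discarded
	}
	return filled
}

// take consumes n tokens if enough are available, reporting whether
// the request is allowed.
func take(tokens, n float64) (float64, bool) {
	if tokens < n {
		return tokens, false // not enough tokens: reject
	}
	return tokens - n, true
}

func main() {
	tokens := refill(0, 100, 100, 0.5) // 50 tokens after half a second at rate 100/s
	tokens, allowed := take(tokens, 1)
	fmt.Println(tokens, allowed) // 49 true
}
```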
+
+
+
+`go-zero` uses a `lua script` in both of its rate limiters, relying on redis to implement distributed rate limiting; the `lua script` also makes the token production and read operations atomic.
+
+Let's take a look at several key attributes controlled by `lua script`:
+
+| argument | mean |
+| --- | --- |
+| ARGV[1] | rate 「How many tokens are generated per second」 |
+| ARGV[2] | burst 「Maximum token bucket」 |
+| ARGV[3] | now_time「Current timestamp」 |
+| ARGV[4] | get token nums 「The number of tokens that the developer needs to obtain」 |
+| KEYS[1] | The token key representing the resource |
+| KEYS[2] | The key representing the refresh time |
+
+
+
+```lua
+-- Returns whether the expected number of tokens can be acquired
+
+local rate = tonumber(ARGV[1])
+local capacity = tonumber(ARGV[2])
+local now = tonumber(ARGV[3])
+local requested = tonumber(ARGV[4])
+
+-- fill_time:How long does it take to fill the token_bucket
+local fill_time = capacity/rate
+-- Round down the fill time
+local ttl = math.floor(fill_time*2)
+
+-- Get the number of remaining tokens in the current token_bucket
+-- If it is the first time to enter, set the number of token_bucket to the maximum value of the token bucket
+local last_tokens = tonumber(redis.call("get", KEYS[1]))
+if last_tokens == nil then
+ last_tokens = capacity
+end
+
+-- The time when the token_bucket was last updated
+local last_refreshed = tonumber(redis.call("get", KEYS[2]))
+if last_refreshed == nil then
+ last_refreshed = 0
+end
+
+local delta = math.max(0, now-last_refreshed)
+-- Calculate the number of new tokens based on the span between the current time and the last update time, and the rate of token production
+-- If it exceeds max_burst, excess tokens produced will be discarded
+local filled_tokens = math.min(capacity, last_tokens+(delta*rate))
+local allowed = filled_tokens >= requested
+local new_tokens = filled_tokens
+if allowed then
+ new_tokens = filled_tokens - requested
+end
+
+-- Update the new token number and update time
+redis.call("setex", KEYS[1], ttl, new_tokens)
+redis.call("setex", KEYS[2], ttl, now)
+
+return allowed
+```
+
+
+As can be seen above, the `lua script` only involves token operations, ensuring that tokens are produced and read reasonably.
+
+
+## Function analysis
+
+
+![](https://cdn.nlark.com/yuque/0/2020/png/261626/1606107337223-7756ecdf-acb6-48c2-9ff5-959de01a1a03.png#align=left&display=inline&height=896&margin=%5Bobject%20Object%5D&originHeight=896&originWidth=2038&status=done&style=none&width=2038)
+
+
+As seen from the above flow:
+
+
+1. There are multiple guarantee mechanisms to ensure that rate limiting is enforced.
+1. If the `redis limiter` fails, the in-process `rate limiter` takes over.
+1. A retry mechanism ensures that the `redis limiter` runs as normally as possible.
+
+
+
+## Summary
+
+
+The `tokenlimit` rate limiting scheme in `go-zero` is suitable for instantaneous traffic bursts, since real requests do not arrive at a constant rate. The token bucket pre-produces tokens, so requests are not throttled the moment they arrive; only when traffic exceeds a certain level is consumption carried out at the predetermined rate.
+
+
+However, token production cannot be adjusted dynamically according to the current traffic, which is not flexible enough and leaves room for further optimization. In addition, the [Token bucket WIKI](https://en.wikipedia.org/wiki/Token_bucket) mentions hierarchical token buckets, which divide traffic into different queues according to bandwidth.
+
+
+## Reference
+
+- [go-zero tokenlimit](https://github.com/zeromicro/go-zero/blob/master/core/limit/tokenlimit.go)
+- [Redis Rate](https://github.com/go-redis/redis_rate)
+
+
+
diff --git a/go-zero.dev/en/tool-center.md b/go-zero.dev/en/tool-center.md
new file mode 100644
index 00000000..20cf6d26
--- /dev/null
+++ b/go-zero.dev/en/tool-center.md
@@ -0,0 +1,8 @@
+# Tools
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+go-zero provides many tools to improve engineering efficiency, such as api and rpc generation. On top of that, however, writing api files can feel clumsy
+because of the lack of syntax highlighting, code hints, template generation, and so on. This section shows how go-zero solves these problems, and contains the following subsections:
+* [Intellij plugin](intellij.md)
+* [VSCode plugin](vscode.md)
\ No newline at end of file
diff --git a/go-zero.dev/en/trace.md b/go-zero.dev/en/trace.md
new file mode 100644
index 00000000..ea7a79f0
--- /dev/null
+++ b/go-zero.dev/en/trace.md
@@ -0,0 +1,188 @@
+# Trace
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical and semantic errors, and the document description is not clear, please [PR](doc-contibute.md)
+
+
+## Foreword
+
+In the microservice architecture, the call chain may be very long, from `http` to `rpc`, and from `rpc` to `http`. Developers want to know the call status and performance of each link, the best solution is **full link tracking**.
+
+The tracking method is to generate its own `spanID` at the beginning of a request, and pass it down along the entire request link. We use this `spanID` to view the status of the entire link and performance issues.
+
+Let's take a look at the link implementation of `go-zero`.
+
+## Code structure
+
+- [spancontext](https://github.com/zeromicro/go-zero/blob/master/core/trace/spancontext.go): stores the context information of the trace (traceid, spanid, or any other content to be passed along)
+- [span](https://github.com/zeromicro/go-zero/blob/master/core/trace/span.go): an operation in the trace, storing its timing and some information
+- [propagator](https://github.com/zeromicro/go-zero/blob/master/core/trace/propagator.go): the `trace` propagation operations downstream (extract, inject)
+- [noop](https://github.com/zeromicro/go-zero/blob/master/core/trace/noop.go): an empty `tracer` implementation
+
+![](https://static.gocn.vip/photo/2020/2f244477-4ed3-4ad1-8003-ff82cbe2f8a0.png?x-oss-process=image/resize,w_1920)
+
+## Concept
+
+### SpanContext
+
+Before introducing `span`, first introduce `context`. SpanContext saves the context information of distributed tracing, including Trace id, Span id and other content that needs to be passed downstream. The implementation of OpenTracing needs to pass the SpanContext through a certain protocol to associate the Span in different processes to the same Trace. For HTTP requests, SpanContext is generally passed using HTTP headers.
+
+Below is the `spanContext` implemented by `go-zero` by default
+
+```go
+type spanContext struct {
+ traceId string // TraceID represents the globally unique ID of tracer
+ spanId string // SpanId indicates the unique ID of a span in a single trace, which is unique in the trace
+}
+```
+
+At the same time, developers can also implement the interface methods provided by `SpanContext` to realize their own contextual information transfer:
+
+```go
+type SpanContext interface {
+ TraceId() string // get TraceId
+ SpanId() string // get SpanId
+ Visit(fn func(key, val string) bool) // walk the TraceId/SpanId key-value pairs, e.g. to copy them into a carrier
+}
+```
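
As a sketch of how that interface might be satisfied, here is a minimal standalone implementation whose `Visit` walks the pairs a propagator would copy into a carrier. The header key names `X-Trace-ID`/`X-Span-ID` are assumptions for illustration, not go-zero's actual constants.

```go
package main

import "fmt"

// spanContext is an illustrative implementation of the SpanContext
// interface above; it is a standalone sketch, not go-zero's actual code.
type spanContext struct {
	traceId string
	spanId  string
}

func (sc spanContext) TraceId() string { return sc.traceId }
func (sc spanContext) SpanId() string  { return sc.spanId }

// Visit walks the key/value pairs of the context so that a propagator
// can copy them into a carrier such as HTTP headers. Returning false
// from fn stops the walk early.
func (sc spanContext) Visit(fn func(key, val string) bool) {
	// Assumed header keys for illustration only.
	if !fn("X-Trace-ID", sc.traceId) {
		return
	}
	fn("X-Span-ID", sc.spanId)
}

func main() {
	sc := spanContext{traceId: "a1b2c3", spanId: "0.1"}
	pairs := map[string]string{}
	sc.Visit(func(key, val string) bool {
		pairs[key] = val
		return true
	})
	fmt.Println(pairs["X-Trace-ID"], pairs["X-Span-ID"]) // a1b2c3 0.1
}
```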
+
+### Span
+
+A REST call, a database operation, etc. can each be a `span`. `span` is the smallest tracing unit of distributed tracing; a trace is composed of multiple spans. A span records the following information:
+
+```go
+type Span struct {
+ ctx spanContext
+ serviceName string
+ operationName string
+ startTime time.Time
+ flag string
+ children int
+}
+```
+
+As the struct definition shows, in a microservice a `span` represents one complete sub-call: `startTime` marks when the call started, `spanContext` carries its unique identifiers, and `children` counts the child spans forked from it.
+
+## Example application
+
+In `go-zero`, tracing for http and rpc is already integrated as built-in middleware. Let's look at [http](https://github.com/zeromicro/go-zero/blob/master/rest/handler/tracinghandler.go) and [rpc](https://github.com/zeromicro/go-zero/blob/master/zrpc/internal/clientinterceptors/tracinginterceptor.go) to see how `tracing` is used:
+
+### HTTP
+
+```go
+func TracingHandler(next http.Handler) http.Handler {
+ return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+ // **1**
+ carrier, err := trace.Extract(trace.HttpFormat, r.Header)
+ // ErrInvalidCarrier means no trace id was set in http header
+ if err != nil && err != trace.ErrInvalidCarrier {
+ logx.Error(err)
+ }
+
+ // **2**
+ ctx, span := trace.StartServerSpan(r.Context(), carrier, sysx.Hostname(), r.RequestURI)
+ defer span.Finish()
+ // **5**
+ r = r.WithContext(ctx)
+
+ next.ServeHTTP(w, r)
+ })
+}
+
+func StartServerSpan(ctx context.Context, carrier Carrier, serviceName, operationName string) (
+ context.Context, tracespec.Trace) {
+ span := newServerSpan(carrier, serviceName, operationName)
+ // **4**
+ return context.WithValue(ctx, tracespec.TracingKey, span), span
+}
+
+func newServerSpan(carrier Carrier, serviceName, operationName string) tracespec.Trace {
+ // **3**
+ traceId := stringx.TakeWithPriority(func() string {
+ if carrier != nil {
+ return carrier.Get(traceIdKey)
+ }
+ return ""
+ }, func() string {
+ return stringx.RandId()
+ })
+ spanId := stringx.TakeWithPriority(func() string {
+ if carrier != nil {
+ return carrier.Get(spanIdKey)
+ }
+ return ""
+ }, func() string {
+ return initSpanId
+ })
+
+ return &Span{
+ ctx: spanContext{
+ traceId: traceId,
+ spanId: spanId,
+ },
+ serviceName: serviceName,
+ operationName: operationName,
+ startTime: timex.Time(),
+ // mark this span as a server span
+ flag: serverFlag,
+ }
+}
+```
+
+1. Extract the carrier from the header to get the traceId and other information
+1. Start a new span and encapsulate **"traceId, spanId"** into the context
+1. Obtain the traceId and spanId from the carrier (i.e. the header)
+    - check whether they are set in the header
+    - if not set, generate them randomly and return
+1. Generate a new ctx from the `request`'s context, encapsulate the span into it, and return
+1. Replace the current `request`'s context with the ctx above
+
+![](https://static.gocn.vip/photo/2020/a30daba2-ad12-477c-8ce5-131ef1cc3e76.png?x-oss-process=image/resize,w_1920)
+
+In this way, the information of the `span` is passed to the downstream service along with the `request`.
+
+### RPC
+
+RPC has both a `client` and a `server`, so tracing also has `clientTracing` and `serverTracing`. The logic of `serverTracing` is basically the same as the http one above, so let's look at how `clientTracing` works:
+
+```go
+func TracingInterceptor(ctx context.Context, method string, req, reply interface{},
+ cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption) error {
+ // open clientSpan
+ ctx, span := trace.StartClientSpan(ctx, cc.Target(), method)
+ defer span.Finish()
+
+ var pairs []string
+ span.Visit(func(key, val string) bool {
+ pairs = append(pairs, key, val)
+ return true
+ })
+ // **3** append the span's key/value pairs to the outgoing gRPC metadata in ctx
+ ctx = metadata.AppendToOutgoingContext(ctx, pairs...)
+
+ return invoker(ctx, method, req, reply, cc, opts...)
+}
+
+func StartClientSpan(ctx context.Context, serviceName, operationName string) (context.Context, tracespec.Trace) {
+ // **1**
+ if span, ok := ctx.Value(tracespec.TracingKey).(*Span); ok {
+ // **2**
+ return span.Fork(ctx, serviceName, operationName)
+ }
+
+ return ctx, emptyNoopSpan
+}
+```
+
+1. Get the span context passed down from upstream
+1. Fork a child span from it; the child inherits the parent span's traceId
+1. Write the span's key/value pairs into the outgoing ctx metadata so they flow to the next service downstream
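
Step 2 — forking a child span from the parent — can be sketched as follows. This is a standalone illustration: the `parentSpanId.childIndex` naming scheme and the `Fork` signature are assumptions for the sketch, not necessarily go-zero's exact implementation.

```go
package main

import "fmt"

// span is a cut-down version used only to illustrate forking.
type span struct {
	traceId  string
	spanId   string
	children int
}

// Fork creates a client span for an outgoing call: it keeps the parent's
// traceId (so the whole chain shares one trace) and derives a new spanId
// from the parent's spanId plus a per-parent child counter.
func (s *span) Fork() *span {
	s.children++
	return &span{
		traceId: s.traceId,
		spanId:  fmt.Sprintf("%s.%d", s.spanId, s.children),
	}
}

func main() {
	root := &span{traceId: "a1b2c3", spanId: "0"}
	first := root.Fork()
	second := root.Fork()
	fmt.Println(first.spanId, second.spanId) // 0.1 0.2
}
```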
+
+## Summary
+
+`go-zero` obtains the traceID of the chain by intercepting the request, assigns a root Span at the entry of the middleware, and forks child Spans in subsequent operations. Each span has its own unique identifier; after `Finish`, spans can be collected into a link tracing system, and developers can query the entire call chain by traceID through tools such as ELK.
+
+At the same time, `go-zero` does not ship a complete `trace` solution. Developers can wrap go-zero's existing `span` structure, build their own reporting system, and integrate with link tracing tools such as `jaeger` and `zipkin`.
+
+## Reference
+
+- [go-zero trace](https://github.com/zeromicro/go-zero/tree/master/core/trace)
\ No newline at end of file
diff --git a/go-zero.dev/en/vscode.md b/go-zero.dev/en/vscode.md
new file mode 100644
index 00000000..e04ec329
--- /dev/null
+++ b/go-zero.dev/en/vscode.md
@@ -0,0 +1,42 @@
+# vs code plugin
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical or semantic errors, or unclear descriptions, please submit a [PR](doc-contibute.md)
+
+The plug-in can be installed on Visual Studio Code 1.46.0+. First, make sure your Visual Studio Code version meets the requirement and that the goctl command-line tool has been installed. If Visual Studio Code is not installed yet, install and open it, navigate to the Extensions pane, search for goctl, and install this extension (publisher ID: "xiaoxin-technology.goctl").
+
+For more about Visual Studio Code extensions, please refer to [here](https://code.visualstudio.com/docs/editor/extension-gallery).
+
+## Features
+
+* Syntax highlighting
+* Jump to definition/reference
+* Code formatting
+* Code block hint
+
+### Syntax highlighting
+
+### Jump to definition/reference
+
+![jump](./resource/jump.gif)
+
+### Code formatting
+
+Formatting invokes the goctl command-line formatting tool. Before use, please make sure goctl has been added to `$PATH` and has executable permission.
+
+### Code block hint
+
+#### info block
+
+![info](./resource/info.gif)
+
+#### type block
+
+![type](./resource/type.gif)
+
+#### service block
+
+![type](./resource/service.gif)
+
+#### handler block
+
+![type](./resource/handler.gif)
diff --git a/go-zero.dev/en/wechat.md b/go-zero.dev/en/wechat.md
new file mode 100644
index 00000000..45cbcfc0
--- /dev/null
+++ b/go-zero.dev/en/wechat.md
@@ -0,0 +1,31 @@
+# Wechat
+> [!TIP]
+> This document is machine-translated by Google. If you find grammatical or semantic errors, or unclear descriptions, please submit a [PR](doc-contibute.md)
+
+
+"Microservice Practice" is the official WeChat account of go-zero. It publishes the latest go-zero best practices and syncs the latest go-zero technology and news from Go night reading, Go open source, GopherChina, the Tencent Cloud Developer Conference, and other channels.
+
+