[CodeStyle] trim trailing whitespace in .md and .rst (PaddlePaddle#45990)

* [CodeStyle] trim trailing whitespace in .md and .rst

* empty commit, test=document_fix
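The change itself is mechanical. As a hedged sketch (the exact script this PR used is not shown on this page), trailing whitespace can be stripped with a `sed` substitution; the file name `demo.md` here is just an invented sample:

```shell
# Create a sample file containing trailing spaces and a trailing tab.
printf 'title  \n\nbody\t\n' > demo.md
# Strip trailing whitespace in place (GNU sed; on macOS/BSD use: sed -i '' ...).
sed -i 's/[[:space:]]\{1,\}$//' demo.md
# Every line now ends cleanly, with no blanks before the newline.
cat demo.md
```

Applied across a checkout, the same substitution covers every tracked Markdown and reST file with `git ls-files '*.md' '*.rst' | xargs sed -i 's/[[:space:]]\{1,\}$//'`.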
SigureMo authored Sep 14, 2022
1 parent 1349584 commit 3404ff6
Showing 21 changed files with 67 additions and 67 deletions.
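A fix like this is usually locked in by a formatting hook so whitespace cannot creep back. As an illustrative configuration (an assumption, not necessarily what this repository uses), the standard `pre-commit-hooks` project ships a `trailing-whitespace` hook that can be limited to `.md` and `.rst` sources:

```yaml
# .pre-commit-config.yaml — illustrative sketch, not taken from this repository
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.3.0              # pin a released tag of the hooks repo
    hooks:
      - id: trailing-whitespace
        files: \.(md|rst)$   # only check Markdown and reST sources
```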
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
Original file line number Diff line number Diff line change
@@ -1,6 +1,6 @@
# Contribute Code

You are welcome to contribute to project PaddlePaddle. To contribute to PaddlePaddle, you have to agree with the
[PaddlePaddle Contributor License Agreement](https://gist.github.com/XiaoguangHu01/75018ad8e11af13df97070dd18ae6808).

We sincerely appreciate your contribution. This document explains our workflow and work style.
14 changes: 7 additions & 7 deletions README.md
@@ -1,7 +1,7 @@
<p align="center">
<img align="center" src="doc/imgs/logo.png" width="1600">
</p>

--------------------------------------------------------------------------------

English | [简体中文](./README_cn.md)
@@ -52,13 +52,13 @@ Now our developers can acquire Tesla V100 online computing resources for free. I
- **High-Performance Inference Engines for Comprehensive Deployment Environments**

PaddlePaddle is not only compatible with models trained in third-party open-source frameworks, but also offers complete inference products for various production scenarios. Our inference product line includes [Paddle Inference](https://paddle-inference.readthedocs.io/en/master/guides/introduction/index_intro.html): a native inference library for high-performance server and cloud inference; [Paddle Serving](https://github.com/PaddlePaddle/Serving): a service-oriented framework suitable for distributed and pipeline productions; [Paddle Lite](https://github.com/PaddlePaddle/Paddle-Lite): an ultra-lightweight inference engine for mobile and IoT environments; [Paddle.js](https://www.paddlepaddle.org.cn/paddle/paddlejs): a frontend inference engine for browsers and mini-apps. Furthermore, through extensive optimization for the leading hardware in each scenario, Paddle inference engines outperform most of the other mainstream frameworks.


- **Industry-Oriented Models and Libraries with Open Source Repositories**

PaddlePaddle includes and maintains more than 100 mainstream models that have been practiced and polished for a long time in the industry. Some of these models have won major prizes from key international competitions. Meanwhile, PaddlePaddle offers more than 200 pre-trained models (some of them with source code) to facilitate the rapid development of industrial applications.
[Click here to learn more](https://github.com/PaddlePaddle/models)


## Documentation

@@ -71,7 +71,7 @@ We provide [English](https://www.paddlepaddle.org.cn/documentation/docs/en/guide

- [Practice](https://www.paddlepaddle.org.cn/documentation/docs/zh/tutorial/index_cn.html)

So far you have become familiar with Fluid, and the next step should be building a more efficient model or inventing your original Operator.

- [API Reference](https://www.paddlepaddle.org.cn/documentation/docs/en/api/index_en.html)

@@ -86,11 +86,11 @@ We provide [English](https://www.paddlepaddle.org.cn/documentation/docs/en/guide
- [Github Issues](https://github.com/PaddlePaddle/Paddle/issues): bug reports, feature requests, install issues, usage issues, etc.
- QQ discussion group: 441226485 (PaddlePaddle).
- [Forums](https://aistudio.baidu.com/paddle/forum): discuss implementations, research, etc.

## Courses

- [Server Deployments](https://aistudio.baidu.com/aistudio/course/introduce/19084): Courses introducing high-performance server deployments via local and remote services.
- [Edge Deployments](https://aistudio.baidu.com/aistudio/course/introduce/22690): Courses introducing edge deployments from mobile and IoT to web and applets.

## Copyright and License
PaddlePaddle is provided under the [Apache-2.0 license](LICENSE).
12 changes: 6 additions & 6 deletions README_cn.md
@@ -39,13 +39,13 @@ PaddlePaddle users can claim **free Tesla V100 online computing resources** to train models
- **An Industrial-Grade Deep Learning Framework Convenient for Development**

The PaddlePaddle deep learning framework adopts a network-construction paradigm based on programming logic, which is easier for ordinary developers to pick up and better matches their development habits. It supports both declarative and imperative programming, combining development flexibility with high performance. Network architectures can be designed automatically, with model quality surpassing that of human experts.

- **Support for Training Ultra-Large-Scale Deep Learning Models**

PaddlePaddle has made breakthroughs in ultra-large-scale deep learning model training, delivering an open-source large-scale training platform that supports hundreds of billions of features, trillions of parameters, and hundreds of nodes. It tackles online learning for ultra-large-scale models and enables real-time updates of trillion-parameter models.
[Learn more](https://github.com/PaddlePaddle/Fleet)

- **High-Performance Inference and Deployment Tools for Multiple Clients and Platforms**

@@ -66,14 +66,14 @@ PaddlePaddle users can claim **free Tesla V100 online computing resources** to train models
- [User Guides](https://www.paddlepaddle.org.cn/documentation/docs/zh/guides/index_cn.html)

Perhaps you would like to start learning PaddlePaddle from the basics of deep learning

- [Practice](https://www.paddlepaddle.org.cn/documentation/docs/zh/tutorial/index_cn.html)

- [API Reference](https://www.paddlepaddle.org.cn/documentation/docs/zh/api/index_cn.html)

The new APIs enable programs with less and cleaner code

- [Contribution Guide](https://www.paddlepaddle.org.cn/documentation/docs/zh/guides/08_contribution/index_cn.html)

@@ -84,7 +84,7 @@ PaddlePaddle users can claim **free Tesla V100 online computing resources** to train models
- You are welcome to submit questions, reports, and suggestions via [Github Issues](https://github.com/PaddlePaddle/Paddle/issues)
- QQ discussion group: 441226485 (PaddlePaddle)
- [Forums](https://aistudio.baidu.com/paddle/forum): you are welcome to share problems and experiences encountered while using PaddlePaddle and help build a friendly forum atmosphere

## Courses

- [Server Deployments](https://aistudio.baidu.com/aistudio/course/introduce/19084): a detailed hands-on introduction to high-performance server-side deployment, covering local and service-oriented Serving deployment
2 changes: 1 addition & 1 deletion SECURITY.md
@@ -48,7 +48,7 @@ We will indicate the bug fix in the release of PaddlePaddle, and publish the vul

### What is a vulnerability?

In the process of running computation graphs in PaddlePaddle, models can perform arbitrary computations, including reading and writing files, communicating over the network, etc. This may cause memory exhaustion, deadlock, and other problems that lead to unexpected behavior of PaddlePaddle. We consider such behavior a security vulnerability only if it is outside the intention of the operation involved.



2 changes: 1 addition & 1 deletion doc/README.md
@@ -1,6 +1,6 @@
# For Readers and Developers

Thanks for reading the PaddlePaddle documentation.

Since **September 17th, 2018**, the **0.15.0 and develop** documentation source has been moved to [FluidDoc Repo](https://github.com/PaddlePaddle/FluidDoc) and updated there.

2 changes: 1 addition & 1 deletion paddle/fluid/distributed/ps/service/README.md
@@ -1,6 +1,6 @@
# Directory Overview

* PSServer
* PSClient
* PsService
* Communicator
6 changes: 3 additions & 3 deletions paddle/fluid/inference/analysis/README.md
@@ -1,7 +1,7 @@
# Inference Analysis

The `inference/analysis` module is used to analyze and optimize the inference program. It borrows some philosophy from `LLVM/analysis` and makes the various optimization features pluggable so they can co-exist in a pipeline.

We borrowed some concepts from LLVM, such as
@@ -31,14 +31,14 @@ each pass will generate unified debug information or visualization for better de
## Supported Passes

### `FluidToDataFlowGraphPass`
Transforms the fluid `ProgramDesc` to a `DataFlowGraph` to give an abstract representation for all the middle passes; this should be the first pass of the pipeline.

### `DataFlowGraphToFluidPass`
Generates a final `ProgramDesc` from a data flow graph; this should be the last pass of the pipeline.

### `TensorRTSubgraphNodeMarkPass`
Marks the `Node`s that are supported by TensorRT; this pass generates a visualization file which can be used for debugging.

### `TensorRTSubGraphPass`
2 changes: 1 addition & 1 deletion paddle/fluid/inference/api/README.md
@@ -9,7 +9,7 @@ You can easily deploy a model trained by Paddle following the steps as below:

## The APIs

All the released APIs are located in the `paddle_inference_api.h` header file.
The stable APIs are wrapped by `namespace paddle`, the unstable APIs are protected by `namespace paddle::contrib`.

## Write some codes
10 changes: 5 additions & 5 deletions paddle/fluid/inference/api/demo_ci/README.md
@@ -2,11 +2,11 @@

There are several demos:

- simple_on_word2vec:
- The C++ code is in `simple_on_word2vec.cc`.
- It is suitable for the word2vec model.
- vis_demo:
- The C++ code is in `vis_demo.cc`.
- It is suitable for three models: mobilenet, se_resnext50 and ocr.
- Input data format:
- Each line contains a single record
@@ -15,7 +15,7 @@ There are several demos:
<space split floats as data>\t<space split ints as shape>
```
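To make the record format above concrete, here is a hypothetical one-line record (values invented) whose six floats match a `1 2 3` shape — space-separated floats, a tab, then space-separated ints:

```shell
# Write a hypothetical sample record: six floats, a tab, then the shape "1 2 3".
printf '0.1 0.2 0.3 0.4 0.5 0.6\t1 2 3\n' > record.txt
# Field 1 (before the tab) holds the data; field 2 holds the shape.
cat record.txt
```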
To build and execute the demos, simply run
```
./run.sh $PADDLE_ROOT $TURN_ON_MKL $TEST_GPU_CPU
```
8 changes: 4 additions & 4 deletions paddle/fluid/inference/api/high_level_api.md
@@ -6,7 +6,7 @@ The APIs are described in `paddle_inference_api.h`, just one header file, and tw
## PaddleTensor
We provide the `PaddleTensor` data structure to give a general tensor interface.

The definition is

```c++
struct PaddleTensor {
@@ -17,8 +17,8 @@ struct PaddleTensor {
};
```
The data is stored in a continuous memory `PaddleBuf`, and a `PaddleDType` specifies the tensor's data type.
The `name` field is used to specify the name of an input variable,
which is important when there are multiple inputs and you need to distinguish which variable to set.
## engine
@@ -38,7 +38,7 @@ enum class PaddleEngineKind {
```

## PaddlePredictor and how to create one
The main interface is `PaddlePredictor`; it provides the following methods

- `bool Run(const std::vector<PaddleTensor>& inputs, std::vector<PaddleTensor>* output_data)`
- takes inputs and outputs `output_data`.
2 changes: 1 addition & 1 deletion paddle/fluid/inference/api/high_level_api_cn.md
@@ -5,7 +5,7 @@
The inference library contains:

- The header file `paddle_inference_api.h`, which defines all the interfaces
- The library files `libpaddle_inference.so/.a` (Linux/Mac) and `libpaddle_inference.lib/paddle_inference.dll` (Windows)

Below are detailed introductions to some API concepts

@@ -126,7 +126,7 @@ MODEL_NAME=googlenet, mobilenetv1, mobilenetv2, resnet101, resnet50, vgg16, vgg1
* ## Prepare dataset

* Download and preprocess the full Pascal VOC2007 test set.

```bash
cd /PATH/TO/PADDLE
python paddle/fluid/inference/tests/api/full_pascalvoc_test_preprocess.py
4 changes: 2 additions & 2 deletions paddle/fluid/inference/tests/infer_ut/README.md
@@ -9,7 +9,7 @@ There are several model tests currently:
- test_resnet50_quant.cc
- test_yolov3.cc

To build and execute tests on Linux, simply run
```
./run.sh $PADDLE_ROOT $TURN_ON_MKL $TEST_GPU_CPU $DATA_DIR
```
@@ -24,7 +24,7 @@ busybox bash ./run.sh $PADDLE_ROOT $TURN_ON_MKL $TEST_GPU_CPU $DATA_DIR
- `$TEST_GPU_CPU`: test both GPU/CPU mode or only CPU mode
- `$DATA_DIR`: download data path

Currently only four kinds of tests are supported, controlled by the `--gtest_filter` argument; the test suite name should match one of the following.
- `TEST(gpu_tester_*, test_name)`
- `TEST(cpu_tester_*, test_name)`
- `TEST(mkldnn_tester_*, test_name)`
8 changes: 4 additions & 4 deletions paddle/fluid/operators/jit/README.en.md
@@ -4,7 +4,7 @@ JIT(Just In Time) Kernel contains actually generated code and some other impleme
Each implementation has its own condition to use, defined in `CanBeUsed`.
They are combined together to get the best performance of one single independent function.
They could be some very simple functions like vector multiply, or some complicated functions like LSTM.
They can also be composed with other existing jit kernels to build up a complex function.
Currently it is only supported on CPU.

## Contents
@@ -38,14 +38,14 @@ All basic definitions of jit kernels are addressed in `paddle/fluid/operators/

- `refer`: Each kernel must have one reference implementation on CPU, and it should only focus on correctness and should not depend on any third-party libraries.
- `gen`: The code generated should be kept here. They should be designed focusing on the best performance, which depends on Xbyak.
- `more`: All other implementations should be kept in this folder, with one directory corresponding to one library kind or method kind, such as mkl, mkldnn, openblas or intrinsic code. Each implementation should have its own advantage.

## How to use

We provide these methods to obtain the functions:
- `GetAllCandidateFuncs`. It returns all the supported implementations, all of which produce the same result. You can run a runtime benchmark to choose which one should actually be used.
- `GetDefaultBestFunc`. It returns one default function pointer, tuned offline with some general configurations and attributes. This should cover most situations.
- `KernelFuncs::Cache()`. It gets the default function and caches it for the next call with the same attribute.
- `GetReferFunc`. It gets only the reference code on CPU; all the other implementations share the same logic as this reference code.

And here are some examples:
@@ -86,7 +86,7 @@ All kernels are included in `paddle/fluid/operators/jit/kernels.h`, which is aut

1. Add `your_key` at `KernelType`.
2. Add your new `KernelTuple` which must include `your_key`. It should be a combination of the data type, attribute type and function type. You can refer `SeqPoolTuple`.
3. Add the reference function of `your_key`.
Note:
- This should run on CPU and must not depend on any third party.
- Add `USE_JITKERNEL_REFER(your_key)` in `refer/CmakeLists.txt` to make sure this code can be used.
6 changes: 3 additions & 3 deletions paddle/scripts/README.md
@@ -15,7 +15,7 @@ PaddlePaddle applications directly in docker or on Kubernetes clusters.

To achieve this, we maintain a dockerhub repo: https://hub.docker.com/r/paddlepaddle/paddle
which provides pre-built environment images to build PaddlePaddle and generate corresponding `whl`
binaries. (**We strongly recommend building PaddlePaddle in our pre-specified Docker environment.**)

## Development Workflow

@@ -52,8 +52,8 @@ cd Paddle
After the build finishes, you can get the output `whl` package under
`build/python/dist`.

This command will download the most recent dev image from docker hub, start a container in the background and then run the build script `/paddle/paddle/scripts/paddle_build.sh build` in the container.
The container mounts the source directory on the host into `/paddle`.
When it writes to `/paddle/build` in the container, it actually writes to `$PWD/build` on the host.

### Build Options
6 changes: 3 additions & 3 deletions paddle/scripts/musl_build/README.md
@@ -7,7 +7,7 @@ Paddle can be built for linux-musl such as alpine, and be used in libos-like SG

# Build Automatically
1. clone paddle source from github

```bash
git clone https://github.com/PaddlePaddle/Paddle.git
```
@@ -50,7 +50,7 @@ mkdir -p build && cd build
ls ./output/*.whl
```

# Build Manually

1. start up the building docker, and enter the shell in the container
```bash
@@ -88,7 +88,7 @@ make -j8
# Scripts
1. **build_docker.sh**
the script that builds the compiling docker image. It uses Alpine Linux 3.10 as the musl linux build environment and tries to install all the compiling tools, development packages, and python requirements for paddle musl compiling.

environment variables:
- PYTHON_VERSION: the version of python used for image building, default=3.7.
- WITH_PRUNE_DAYS: prune old docker images, with days limitation.
10 changes: 5 additions & 5 deletions python/paddle/README.rst
@@ -48,7 +48,7 @@ We provide users with four installation methods, which are pip, conda, docker an

- **pip or pip3 version 9.0.1+ (64 bit)**



#### <a id="Commands to install">Commands to install</a>

Expand Down Expand Up @@ -115,12 +115,12 @@ If you want to install witch conda or docker or pip,please see commands to insta

PaddlePaddle is not only compatible with other open-source frameworks for model training, but also works well across ubiquitous deployment environments, varying from platforms to devices. More specifically, PaddlePaddle accelerates the inference procedure with the fastest speed-up. Note that a recent breakthrough in inference speed has been made by PaddlePaddle on Huawei's Kirin NPU, through hardware/software co-optimization.
[Click here to learn more](https://github.com/PaddlePaddle/Paddle-Lite)

- **Industry-Oriented Models and Libraries with Open Source Repositories**

PaddlePaddle includes and maintains more than 100 mainstream models that have been practiced and polished for a long time in the industry. Some of these models have won major prizes from key international competitions. Meanwhile, PaddlePaddle offers more than 200 pre-trained models (some of them with source code) to facilitate the rapid development of industrial applications.
[Click here to learn more](https://github.com/PaddlePaddle/models)


## Documentation

@@ -135,10 +135,10 @@ We provide [English](http://www.paddlepaddle.org.cn/documentation/docs/en/1.8/be
- [User Guides](https://www.paddlepaddle.org.cn/documentation/docs/en/user_guides/index_en.html)

You might have gotten the hang of the Beginner's Guide, and wish to model practical problems and build your original networks.

- [Advanced User Guides](https://www.paddlepaddle.org.cn/documentation/docs/en/advanced_guide/index_en.html)

So far you have become familiar with Fluid, and the next step should be building a more efficient model or inventing your original Operator.


- [API Reference](https://www.paddlepaddle.org.cn/documentation/docs/en/api/index_en.html)