
【PaddlePaddle Hackathon 2】89. Add an example of efficiently combining Taichi and PaddlePaddle #2

Open
TCChenlong opened this issue Mar 8, 2022 · 5 comments

Comments

@TCChenlong

TCChenlong commented Mar 8, 2022

(This ISSUE is a task ISSUE for the second PaddlePaddle Hackathon. For more details, see the 【PaddlePaddle Hackathon 2】 task overview.)

【Task description】

  • Task title: add an example of efficiently combining Taichi and PaddlePaddle.

  • Technical tags: deep learning frameworks, high-performance computing.

  • Difficulty: easy.

  • Description: pick an op that PaddlePaddle does not yet support, write a parallel implementation of that op with Taichi, and demonstrate the result in an example where PaddlePaddle and Taichi interoperate.

    • Note that this is an open-ended task and builds on Task 1.
  • Related implementation: if you need a differentiable op, refer to Taichi's test cases for interoperating with torch autograd.

【Submission】

  • A design document, submitted as a PR to the rfcs/Taichi directory of PaddlePaddle/community.

  • Submit the PR to your own public repo and share the repo link;

  • The repo must contain detailed steps for running the example, along with the necessary code walkthrough and background knowledge.

【Technical requirements】

  • Familiarity with Taichi and PaddlePaddle;

  • Proficiency in C++ and Python.

【Q&A】

  • If you have any questions about the task during development, feel free to leave a comment under this ISSUE for discussion.
  • For questions shared by many participants, regular Q&A sessions will be organized during the event; please watch the official website and the QQ group for announcements.
@0xzhang

0xzhang commented May 11, 2022

The least-squares algorithm is a classic algorithm that PaddlePaddle doesn't directly support for now. Before I finished the first task I didn't have a proper idea; now I'm planning to implement a Taichi-based lstsq example. As in the reference below, adding a benchmark would be even better. Is this an appropriate idea?

https://jekel.me/2019/Compare-lstsq-performance-in-Python/

PyTorch, TensorFlow and NumPy all support linalg.lstsq.

https://www.netlib.org/lapack/lug/node27.html
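For reference, the lstsq routine those libraries expose is essentially a one-liner. A minimal NumPy sketch with my own toy data (not taken from the benchmark above):

```python
import numpy as np

# Toy data: points lying exactly on y = 2x + 1, so the recovered
# coefficients are easy to check.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0

# Design matrix [x | 1] for fitting slope and intercept.
A = np.stack([x, np.ones_like(x)], axis=1)
coef, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
print(coef)  # approximately [2. 1.]
```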

@yuanming-hu
Member

I feel like Taichi may not be the optimal tool for lstsq, which is mainly dense regular matrix operations.

Perhaps it's a good idea to use Taichi and its AutoDiff to implement some weird operators that are not provided by PaddlePaddle? Good examples would be NCReLU (ref1, ref2)/MixedConv (ref). Without Taichi, users will need to implement these kernels using CUDA and bind them to PaddlePaddle through C++/Pybind11. With Taichi you can implement everything in pure "Python" with a small amount of code.

(cc @ailzhang who may also be interested in this topic :-))
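For concreteness, and assuming NCReLU here means the negative-concatenated ReLU from the DiracNets line of work, i.e. concat(max(x, 0), min(x, 0)) along the channel axis, its semantics can be sketched in plain NumPy; in a Taichi port, the element-wise work would move into an @ti.kernel:

```python
import numpy as np

def ncrelu(x, axis=1):
    # Concatenate the positive part and the negative part of x,
    # doubling the size along `axis`; no information is discarded.
    return np.concatenate([np.maximum(x, 0.0), np.minimum(x, 0.0)], axis=axis)

x = np.array([[1.0, -2.0, 3.0]])
print(ncrelu(x))  # [[ 1.  0.  3.  0. -2.  0.]]
```

Because the output only rearranges entries of x, the backward pass is a cheap gather, the kind of kernel an AutoDiff system can derive automatically.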

@0xzhang

0xzhang commented May 11, 2022

Thanks for your reply!

It must be the influence of my work background: I have experience implementing various mathematical operations on supercomputers, but I don't know enough about deep learning, so within that scope I didn't have a good idea for this task.

Following your suggestion, I've also grasped a key point. Basic mathematical operators such as lstsq have already been implemented in libraries like NumPy, so Taichi doesn't need to reinvent the wheel. The real value Taichi creates is that developers can use it to implement a variety of customized new algorithms much more easily: based on the Taichi framework, Python-like code is translated into high-performance, portable machine code.

Salut, productivity and performance!

@yuanming-hu
Member

The real value that Taichi creates is that developers can use Taichi to implement a variety of customized new algorithms.

Exactly! Taichi is very suitable for writing complex kernels that are hard to compose using basic linear algebra components :-)
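A toy illustration of such a kernel (my own example, not from the thread): a particle-to-grid scatter-add, which is a one-line loop with atomic adds but awkward to phrase as dense matrix products:

```python
import numpy as np

def p2g(positions, weights, n_cells):
    # Each particle in [0, 1) deposits its weight into the grid cell
    # it falls in. In a Taichi version this loop would be a parallel
    # @ti.kernel and the += an atomic add.
    grid = np.zeros(n_cells)
    for p, w in zip(positions, weights):
        grid[int(p * n_cells)] += w
    return grid

# Cells 1 and 9 each accumulate a total weight of 3.0.
print(p2g([0.1, 0.15, 0.9], [1.0, 2.0, 3.0], n_cells=10))
```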

@0xzhang

0xzhang commented May 12, 2022

Thanks! I see. However, given my lack of experience in deep learning, someone else may be better suited to this task.
