One-Year Roadmap #2398
Comments
For the point of polishing the autodiff system: I think the kernel simplicity rule comes from the "reverse replay" (if you want to call it that) of autodiff, if I remember the DiffTaichi paper correctly. With the kernel simplicity rule, each inner iteration in the reverse-replay phase has a concrete compute graph that does not depend on anything in the outer-loop iterations (since the rule does not allow us to put anything in the outer-loop body).

```python
import taichi as ti

@ti.kernel
def broadcast_add_bad(array0: ti.template(), array1: ti.template(), matrix: ti.template()):
    for i in array0:
        num0 = array0[i]  # load in the outer-loop body: violates kernel simplicity
        for j in array1:
            num1 = array1[j]
            matrix[i, j] = num0 + num1

@ti.kernel
def broadcast_add_good(array0: ti.template(), array1: ti.template(), matrix: ti.template()):
    for i in array0:
        for j in array1:
            num0 = array0[i]  # both loads live in the innermost loop
            num1 = array1[j]
            matrix[i, j] = num0 + num1
```

Suppose we have two 10-element arrays, so there are 100 iterations in total. In the bad example, each group of 10 inner-loop iterations is bundled (through `num0`) with one of the 10 outer-loop iterations. If you want to relax or even remove the kernel simplicity rule, you have to record more information in the tape: for this example, the tape would need to record which 10 inner-loop iterations are bundled with which outer-loop iteration.

I don't know if this will help or if you already know it, but you can check out Julia Flux's Zygote, which does source-to-source AD. The link to its paper is here. Flux can handle AD on complex control flow in Julia programs, and if you consider a kernel instance as a thread of a "Julia program", then maybe you can transfer the idea of Zygote to Autodiff.
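Going back to the broadcast example above, here is a minimal, hedged sketch of running the rule-compliant kernel under Taichi's tape-based autodiff. The field shapes, fill values, and the `reduce_sum` loss kernel are my own illustration, not something from this thread; `ti.Tape` was the API at the time of this issue (newer releases expose it as `ti.ad.Tape`).

```python
import taichi as ti

ti.init(arch=ti.cpu)

n = 10
array0 = ti.field(ti.f32, shape=n, needs_grad=True)
array1 = ti.field(ti.f32, shape=n, needs_grad=True)
matrix = ti.field(ti.f32, shape=(n, n), needs_grad=True)
loss = ti.field(ti.f32, shape=(), needs_grad=True)

# Same kernel as the "good" example above, repeated so this snippet runs standalone.
@ti.kernel
def broadcast_add_good(array0: ti.template(), array1: ti.template(), matrix: ti.template()):
    for i in array0:
        for j in array1:
            num0 = array0[i]
            num1 = array1[j]
            matrix[i, j] = num0 + num1

@ti.kernel
def reduce_sum(matrix: ti.template(), loss: ti.template()):
    # Illustrative loss: sum of all matrix entries.
    for i, j in matrix:
        loss[None] += matrix[i, j]

array0.fill(1.0)
array1.fill(2.0)

# The tape records the forward kernel launches and replays them in reverse
# order to accumulate gradients into the corresponding .grad fields.
with ti.Tape(loss=loss):
    broadcast_add_good(array0, array1, matrix)
    reduce_sum(matrix, loss)

# Each array0[i] feeds n entries of matrix, so d(loss)/d(array0[i]) == n.
print(array0.grad.to_numpy())
```

Because `broadcast_add_good` keeps every load inside the innermost loop, each replayed inner iteration is self-contained, which is exactly what the kernel simplicity rule buys during the reverse pass.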
Hi,
We are sharing this one-year roadmap to let you know which features are planned or are actively being worked on. Hopefully some of these sound interesting/exciting! Let us know what you think so that we can adjust accordingly. Thanks!
New Features

Backends
- `pointer` on Metal (SNodeType=pointer not supported on Metal #1740)
- Decouple `Program` from each backend

Performance
- Speed up `import taichi`
- The `ti` CLI should only import `taichi` when necessary (see the CLI sketch after this list)

Documentation
- Documentation for the core classes (`Program`, `Kernel`)

Cleanups
- Clean up the `Expr`, `Expression` classes and the Python `Expr` class.
- `ti.field` should return either an `SNode` instance or a dedicated field class (currently it returns an `Expr`).
- Clean up `ti.Matrix` and `ti.Vector`
- Clean up the `taichi` package hierarchy (Clean up python import statements #2223)
- Clean up the symbols exported from `libtaichi_core.so` and remove the linker script
- Remove the `TI_NAMESPACE_{BEGIN, END}` macros

Release
- Separate the `taichi` and `taichilib` release (Split taichi C++ library into separate python package. #2351)
- Make the `taichi` package platform-independent ([PyPI] [Blender] Make a platform independent wheel package by download-on-fly? #1987)

CI/Productivity
- Move CI scripts into `ci/`
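Regarding the `ti` CLI item in the Performance section: below is a hedged sketch of the deferred-import pattern such an item implies. The command name and structure are hypothetical and not the actual `ti` CLI code; the only point is that `import taichi` happens inside the sub-command that needs it, so lightweight invocations stay fast.

```python
# Hypothetical CLI entry point; the real `ti` CLI is organized differently.
import argparse

def cmd_run_demo(args: argparse.Namespace) -> None:
    # The heavy import is deferred until a command actually needs taichi,
    # so e.g. `ti --help` does not pay the full import cost.
    import taichi as ti
    ti.init(arch=ti.cpu)
    print("taichi version:", ti.__version__)

def main(argv=None) -> None:
    parser = argparse.ArgumentParser(prog="ti")
    sub = parser.add_subparsers(dest="command")
    sub.add_parser("demo", help="run a demo that requires taichi")
    args = parser.parse_args(argv)
    if args.command == "demo":
        cmd_run_demo(args)
    else:
        parser.print_help()

if __name__ == "__main__":
    main()
```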
In addition, the items below are also on our radar, but we haven't thought about them too much yet: