[Texture support][Part 2] Add opencl adreno target, topi schedules, and relay op strategies #7687

Closed · wants to merge 15 commits

Conversation

csullivan (Contributor) commented Mar 18, 2021

This PR introduces the opencl --device=adreno target and the corresponding relay op strategies. The conv2d schedules introduced here pack the weights (OIHW4o) and activations (NCHW4c) with vector length 4 to support lowering to RGBA texture memory.
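For reference, a minimal sketch of compiling a conv2d through this target. The module, shapes, and padding below are illustrative only; float16 is used because FP32 is not yet supported (see the comment further down):

```python
import tvm
from tvm import relay

# Illustrative float16 conv2d; the shapes are arbitrary.
data = relay.var("data", shape=(1, 32, 56, 56), dtype="float16")
weight = relay.var("weight", shape=(64, 32, 3, 3), dtype="float16")
conv = relay.nn.conv2d(data, weight, kernel_size=(3, 3), padding=(1, 1))
mod = tvm.IRModule.from_expr(relay.Function([data, weight], conv))

# The target introduced by this PR; the registered relay strategies
# then select the texture-friendly NCHW4c/OIHW4o conv2d schedules.
target = tvm.target.Target("opencl --device=adreno")
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target)
```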

AutoTVM support

  • AutoTVM doesn't currently capture the runtime context of extracted tasks. Without information about the runtime buffer scopes, code generation during tuning occurs only on flat buffers (not textures). For now, we utilize a cache_read("texture") stage when tuning to explore the performance benefit of texture memory. We believe that with a sufficient number of iterations per trial, the copy to texture prior to running the main compute kernel (which results from the cache_read) contributes a constant cost over the search and therefore should not greatly impact the tuning results.

  • Note that the cache_read is not needed when using the graph_runtime, which supports passing in external texture buffers (see: [Texture support][Part 3] Support storage scope tag in graph runtime codegen, planning, runtime and compile engine #7688). Therefore, in the schedules one will observe if autotvm.GLOBAL_SCOPE.in_tuning: blocks that guard the scheduling related to adding a cache_read-to-texture stage (see the sketch after this list).

  • The schedules can be simplified once either 1) AutoTVM tuning supports capturing this runtime information during task extraction (removing the need for a cache_read to texture), or 2) texture lowering in tir.TextureFlatten fully supports cache_read cancellation (forwarding external buffers through the cache_read). Cancellation is currently supported except in cases with padding, where an extra texture-to-texture copy results.
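As a sketch of the in_tuning guard described above (the stage and tensor names here are illustrative, not the exact ones used in this PR's schedules):

```python
from tvm import autotvm

def _schedule_conv2d_texture(s, conv):
    """Illustrative fragment of a texture-aware conv2d schedule."""
    pad_data, kernel = s[conv].op.input_tensors
    if autotvm.GLOBAL_SCOPE.in_tuning:
        # During tuning only: stage reads through texture memory so
        # the measured kernels exercise texture loads. At runtime the
        # graph_runtime passes external texture buffers instead.
        AT = s.cache_read(pad_data, "texture", [conv])
        WT = s.cache_read(kernel, "texture", [conv])
        # ... then compute_at / bind the copy stages as usual
```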

See RFC here: https://discuss.tvm.apache.org/t/rfc-texture-memory-support/9467

elvin-n (Contributor) commented Apr 28, 2021

Need to add FP32 support. Currently, if we do not convert convolutions to FP16 and try to compile for Adreno, we get the following error: AssertionError: No float32 input/output tensor support is currently provided for Adreno GPU
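Until FP32 is supported, one workaround is to downcast the model before compiling. A minimal sketch, assuming the ToMixedPrecision pass (available in more recent TVM) and an existing float32 module `mod`:

```python
from tvm import relay

# Assumption: `mod` is a float32 relay.IRModule built elsewhere.
mod = relay.transform.InferType()(mod)
# Downcast eligible ops to float16 before targeting Adreno.
mod = relay.transform.ToMixedPrecision("float16")(mod)
```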

jroesch added the needs-triage and status: need review labels on Jan 19, 2022
csullivan (Contributor, Author)

Closing as duplicated by #11161.

csullivan closed this May 13, 2022