Hello, thank you for sharing this great work!

I'm new to Timeloop and currently working through the tutorial exercises, and I have a question about adjusting operand precision. In exercise 06-mapper-convlayer-eyeriss, I want to compare the power consumption of two scenarios:

1. Load FP32 weights from DRAM into the weight scratchpad, then perform a 4b-weight × 4b-input convolution.
2. Load INT8 weights from DRAM into the weight scratchpad, then perform a 4b-weight × 4b-input convolution.

I know there is a `datawidth` attribute at each level of the architecture hierarchy, but I'm not entirely sure how to use it for what I intend. Do I just need to set the `datawidth` attributes to DRAM (32), weight scratchpad (32), input scratchpad (32), MAC unit (4) for case 1, and DRAM (8), weight scratchpad (8), input scratchpad (8), MAC unit (4) for case 2?
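For concreteness, here is a sketch of how I imagine the `datawidth` settings for case 1 would look in the architecture YAML. The component and class names loosely follow the eyeriss-like tutorial files, but please treat them as illustrative rather than verified, and most other attributes are omitted:

```yaml
# Hypothetical sketch for case 1 (FP32 storage, 4-bit MAC).
# Names follow the eyeriss-like tutorial architecture loosely;
# classes and omitted attributes are placeholders.
architecture:
  version: 0.3
  subtree:
    - name: system
      local:
        - name: DRAM
          class: DRAM
          attributes:
            datawidth: 32        # FP32 weights stored in DRAM
      subtree:
        - name: eyeriss
          local:
            - name: weights_spad
              class: smartbuffer_RF
              attributes:
                datawidth: 32    # weights kept at full width on-chip
            - name: ifmap_spad
              class: smartbuffer_RF
              attributes:
                datawidth: 32
            - name: mac
              class: intmac
              attributes:
                datawidth: 4     # 4b x 4b multiply
```

(Case 2 would be the same sketch with the DRAM and scratchpad `datawidth` values set to 8.) Is this the intended way to model a narrower compute precision than the storage precision?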
a2jinhee changed the title from "Question) Is there a way to control the weight precision in CONV tutorial" to "Question on how to change operand precision." on Nov 3, 2024