
quantized int8 storage and operation #230

Closed

nihui opened this issue Jan 4, 2018 · 5 comments

Comments

@nihui
Member

nihui commented Jan 4, 2018

No description provided.

@nihui
Member Author

nihui commented Jan 5, 2018

  • int8/uint8 storage support in Mat
  • Quantize / Dequantize layer (see the sketch after this list)
  • Integer convolution with bias
  • ReLU6 layer (implemented as Clip)
  • TensorFlow quantized MobileNet model
  • ARM NEON optimization
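
A minimal sketch of the scale-based symmetric int8 scheme this checklist describes. The helper names and the per-blob float scale are illustrative assumptions, not the actual ncnn Quantize/Dequantize layer API:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// float -> int8: q = round(x * scale), clamped to [-127, 127]
// (hypothetical helper, mirroring what a Quantize layer would do)
std::vector<int8_t> quantize(const std::vector<float>& in, float scale)
{
    std::vector<int8_t> out(in.size());
    for (size_t i = 0; i < in.size(); i++)
    {
        int v = (int)std::round(in[i] * scale);
        out[i] = (int8_t)std::min(127, std::max(-127, v));
    }
    return out;
}

// int8 -> float: x = q / scale
// (hypothetical helper, mirroring what a Dequantize layer would do;
//  a convolution bias would be added back here, in float)
std::vector<float> dequantize(const std::vector<int8_t>& in, float scale)
{
    std::vector<float> out(in.size());
    for (size_t i = 0; i < in.size(); i++)
        out[i] = in[i] / scale;
    return out;
}
```

The ReLU6 item then reduces to a Clip with bounds [0, 6] applied to the dequantized output.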

@yanghanxy

yanghanxy commented Jun 11, 2018

This project looks like a good reference for quantizing a NN model in an embedded environment:
https://github.com/ARM-software/ML-KWS-for-MCU/blob/master/Deployment/Quant_guide.md

@blueardour

Hi, I wonder whether the computation in each layer actually uses a fixed-point data type. As far as I know, Ristretto implements its dynamic fixed-point data type by simulation: it merely converts the data to fixed point and assigns it back as float, so the real computation is still carried out in float.
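
For contrast, a short sketch of the distinction being asked about, with hypothetical helper names: Ristretto-style simulated quantization rounds values to the fixed-point grid but keeps float storage and float arithmetic, whereas a true int8 path stores int8 operands and accumulates in int32:

```cpp
#include <cmath>
#include <cstdint>

// Ristretto-style simulation: snap to the quantization grid, but keep
// float storage, so downstream multiplies still run in float.
float fake_quantize(float x, float scale)
{
    return std::round(x * scale) / scale;
}

// True int8 path: int8 operands, int32 accumulator, no float involved
// until the final dequantize.
int32_t int8_dot(const int8_t* a, const int8_t* b, int n)
{
    int32_t acc = 0;
    for (int i = 0; i < n; i++)
        acc += (int32_t)a[i] * (int32_t)b[i];
    return acc;
}
```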

@nihui
Member Author

nihui commented Jul 26, 2018

POC implementation #487

@nihui
Member Author

nihui commented Aug 1, 2018

framework int8 a169cec

x86 int8 4be27a0

armv7 int8 e34aa77
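
As a rough illustration of what the armv7 commit targets (not the actual ncnn kernel), an int8 dot product can be vectorized with NEON intrinsics, widening int8 products to int16 and accumulating pairwise into int32 lanes:

```cpp
#include <arm_neon.h>
#include <cstdint>

// Hypothetical NEON int8 dot product; illustrative only.
int32_t int8_dot_neon(const int8_t* a, const int8_t* b, int n)
{
    int32x4_t acc = vdupq_n_s32(0);
    int i = 0;
    for (; i + 8 <= n; i += 8)
    {
        int8x8_t va = vld1_s8(a + i);
        int8x8_t vb = vld1_s8(b + i);
        int16x8_t prod = vmull_s8(va, vb); // widen int8 * int8 -> int16
        acc = vpadalq_s16(acc, prod);      // pairwise add into int32 lanes
    }
    // horizontal sum of the 4 int32 lanes, then scalar tail
    int32_t sum = vgetq_lane_s32(acc, 0) + vgetq_lane_s32(acc, 1)
                + vgetq_lane_s32(acc, 2) + vgetq_lane_s32(acc, 3);
    for (; i < n; i++)
        sum += (int32_t)a[i] * (int32_t)b[i];
    return sum;
}
```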

@nihui nihui closed this as completed Aug 1, 2018