
Book

  • AI Superpowers: China, Silicon Valley, and the New World Order. Kai-Fu Lee. 2018.

Overview papers

  • Can Deep Learning Revolutionize Mobile Sensing? Nicholas D. Lane and Petko Georgiev. HotMobile 2015. [pdf]
  • A First Look at Deep Learning Apps on Smartphones. Mengwei Xu et al. WWW 2019. [pdf]
  • AI Benchmark: Running Deep Neural Networks on Android Smartphones. Andrey Ignatov et al. ECCV Workshops 2018. [pdf]
  • AI Benchmark: All About Deep Learning on Smartphones in 2019. Andrey Ignatov et al. arXiv 2019. [pdf]
  • Performance Analysis and Characterization of Training Deep Learning Models on Mobile Devices. Liu et al. arXiv 2019. [pdf]
  • Exploring the Capabilities of Mobile Devices in Supporting Deep Learning. Chen et al. HPDC 2019. [pdf]

On-device Deep Learning and Natural Language Processing

  • On the Robustness of Projection Neural Networks For Efficient Text Representation: An Empirical Study. Chinnadhurai Sankar et al. arXiv 2019. [pdf]
  • ProSeqo: Projection Sequence Networks for On-Device Text Classification. Zornitsa Kozareva and Sujith Ravi. EMNLP 2019. [pdf]
  • PRADO: Projection Attention Networks for Document Classification On-Device. Prabhu Kaliamoorthi et al. EMNLP 2019. [pdf]
  • On-device Structured and Context Partitioned Projection Networks. Sujith Ravi and Zornitsa Kozareva. ACL 2019. [pdf]
  • Efficient On-Device Models using Neural Projections. Sujith Ravi. ICML 2019. [pdf]
  • Transferable Neural Projection Representations. Chinnadhurai Sankar et al. NAACL 2019. [pdf]
  • Self-Governing Neural Networks for On-Device Short Text Classification. Sujith Ravi and Zornitsa Kozareva. EMNLP 2018. [pdf]
  • ProjectionNet: Learning Efficient On-Device Deep Networks Using Neural Projections. Sujith Ravi. arXiv 2017. [pdf] (This method learns a simple projection-based network that efficiently encodes intermediate representations (i.e., hidden units) and the operations involved, rather than compressing the network weights. The methods below instead exploit redundancy in the weights, grouping connections via low-rank decomposition or hashing tricks; a minimal sketch follows this list.):
    • [Binarization strategies for networks] BinaryNet: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1. Courbariaux et al. arXiv 2016. [pdf]
    • [Reduced numerical precision] Low precision arithmetic for deep learning. Courbariaux et al. ICLR Workshop 2015. [pdf]
    • [Vector quantization] Compressing Deep Convolutional Networks using Vector Quantization. Gong et al. ICLR 2015. [pdf]
    • [Model distillation] Distilling the Knowledge in a Neural Network. Hinton et al. arXiv 2015. [pdf]
    • [Weight sharing] Compressing Neural Networks with the Hashing Trick. Chen et al. ICML 2015. [pdf]
    • [Weight sharing] Predicting Parameters in Deep Learning. Denil et al. NIPS 2013. [pdf]
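
To make the projection idea concrete, below is a minimal, hypothetical sketch of an LSH-projection text model in the spirit of ProjectionNet and the self-governing networks above: text is mapped on the fly to a short bit vector by fixed random hyperplanes, so the deployable model needs no large embedding table. The character n-gram featurization, dimensions, and function names here are illustrative assumptions, not the papers' exact formulation.

```python
import zlib
import numpy as np

# Illustrative sketch of an LSH-projection text model (ProjectionNet/SGNN style).
# Assumption: character n-gram count features and fixed random hyperplanes;
# the papers use more elaborate, jointly trained projection functions.

FEAT_DIM = 1024   # hashed n-gram feature space
NUM_BITS = 80     # T: projection bits fed to the tiny on-device model

def ngram_features(text: str, n: int = 3) -> np.ndarray:
    """Hash character n-grams into a fixed-size count vector (no vocabulary)."""
    v = np.zeros(FEAT_DIM)
    for i in range(max(len(text) - n + 1, 1)):
        v[zlib.crc32(text[i:i + n].encode()) % FEAT_DIM] += 1.0
    return v

rng = np.random.default_rng(0)
# Fixed random hyperplanes, shared across all inputs; nothing is stored per token.
planes = rng.standard_normal((NUM_BITS, FEAT_DIM))

def project(text: str) -> np.ndarray:
    """Binary LSH projection: sign of dot products with the random hyperplanes."""
    return (planes @ ngram_features(text) > 0).astype(np.float32)

# The deployable model is then a tiny network over the NUM_BITS-dim projection,
# e.g. a single dense layer instead of a large embedding matrix:
W = rng.standard_normal((2, NUM_BITS)) * 0.01  # untrained 2-class toy weights

def predict(text: str) -> int:
    return int(np.argmax(W @ project(text)))

print(predict("running late, be there soon"))  # toy output from untrained W
```

Because the projection is computed on the fly rather than looked up, memory is dominated by the tiny dense layer, which is what makes this family of models practical on-device.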
  • Smart Reply: Automated Response Suggestion for Email. Anjuli Kannan et al. KDD 2016. [pdf]
    • Large Scale Distributed Semi-Supervised Learning Using Streaming Approximation. Sujith Ravi and Qiming Diao. AISTATS 2016. [pdf]
    • Revisiting the Predictability of Language: Response Completion in Social Media. Bo Pang and Sujith Ravi. EMNLP 2012. [pdf]