


ZiweiSong96/Deep-Compressive-Offloading

 
 


Deep-Compressive-Offloading

Deep Compressive Offloading: Speeding Up Neural Network Inference by Trading Edge Computation for Network Latency
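The repository itself documents no usage, but the title states the core trade-off: spend extra computation compressing an intermediate feature map so that less time is spent moving it over the network to the edge server. A minimal back-of-the-envelope sketch of that trade-off is below; the function names, compression ratio, bandwidth, and codec overhead are all hypothetical placeholders for illustration, not values or code from this repository or the paper.

```python
# Hypothetical illustration of the offloading trade-off named in the title:
# compressing the intermediate feature map shrinks transfer time at the
# cost of extra encode/decode work. All numbers are made-up placeholders.

def transfer_time_s(num_bytes, bandwidth_bps):
    """Time to ship a payload over a link at the given bandwidth."""
    return num_bytes * 8 / bandwidth_bps

def offload_latency_s(feature_bytes, compression_ratio,
                      codec_overhead_s, bandwidth_bps):
    """Latency of compressive offloading: compress, transmit, decompress."""
    compressed_bytes = feature_bytes / compression_ratio
    return codec_overhead_s + transfer_time_s(compressed_bytes, bandwidth_bps)

# Raw offloading: send a 1 MB uncompressed feature map over a 10 Mbps link.
raw = transfer_time_s(1_000_000, 10_000_000)
# Compressive offloading: 20x smaller payload plus 5 ms of codec overhead.
compressed = offload_latency_s(1_000_000, 20, 0.005, 10_000_000)

print(f"raw transfer:        {raw:.3f} s")   # 0.800 s
print(f"compressive offload: {compressed:.3f} s")   # 0.045 s
```

Whenever the codec overhead is smaller than the transfer time saved, compressive offloading wins; on fast links or with heavy codecs the inequality can flip, which is why the trade is against network latency specifically.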



Languages

  • Python 100.0%