Neural networks are both computationally and memory intensive, which makes them difficult to deploy on embedded systems with limited hardware resources; high accuracy, fast inference, and small model size are hard to achieve at the same time. This project aims to design the smallest, fastest neural network for mobile that still reaches satisfactory ImageNet accuracy, targeting: latency < 200 ms on Android or iOS, top-5 ImageNet accuracy > 81%, and model size < 10 MB.
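The three targets above can be encoded as a small sanity-check helper to run against each candidate model's measured metrics. This is a hypothetical sketch (the function name, thresholds-as-constants, and metric units are assumptions, not part of any project infrastructure):

```python
# Project targets, taken directly from the goal statement:
# latency < 200 ms, top-5 accuracy > 81%, model size < 10 MB.
TARGET_LATENCY_MS = 200.0
TARGET_TOP5_ACC = 0.81
TARGET_SIZE_MB = 10.0

def meets_targets(latency_ms: float, top5_acc: float, size_mb: float) -> bool:
    """Return True only if all three project targets are satisfied."""
    return (latency_ms < TARGET_LATENCY_MS
            and top5_acc > TARGET_TOP5_ACC
            and size_mb < TARGET_SIZE_MB)

# Example: 150 ms, 82% top-5, 8 MB meets every target;
# 250 ms fails on latency alone.
print(meets_targets(150.0, 0.82, 8.0))
print(meets_targets(250.0, 0.82, 8.0))
```

Latency here would come from an on-device benchmark run, not from desktop timing, since the targets are stated for Android/iOS hardware.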
Complete one task every week. You may manage your own schedule; just make sure you deliver
results on time.
Soft Deadline: every Saturday 23:45 UTC+8 (you earn a 10% bonus if you deliver
results before the soft deadline)
Hard Deadline: every Sunday 23:45 UTC+8
The final score is the peer-review score (30%) plus the project host score (70%).
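The weighting can be sketched as a one-line formula. Note one assumption: the rules above do not say whether the 10% soft-deadline bonus multiplies the week's combined score or is applied some other way, so the multiplicative reading used here is a guess:

```python
def weekly_score(peer_review: float, host: float,
                 before_soft_deadline: bool = False) -> float:
    """Combine scores as 30% peer review + 70% project host.

    Assumption: the 10% early-delivery bonus multiplies the week's
    combined score (the stated rules do not pin this down).
    """
    base = 0.3 * peer_review + 0.7 * host
    return base * 1.1 if before_soft_deadline else base

# 0.3 * 80 + 0.7 * 90 = 87; with the early bonus, 87 * 1.1 = 95.7
print(weekly_score(80, 90))
print(weekly_score(80, 90, before_soft_deadline=True))
```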