SMLL: Using 200MB of Neural Network to Save 400 Bytes (frankchiarulli.com)
f_devd 26 minutes ago [-]
Having worked on compression algos, any NN is just way too slow for (de-)compression. A potential use for them is coarse prior estimation in something like rANS, but even then the overhead cost would need to be carefully weighed against something like Markov chains, since the relative cost is just so large.
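To illustrate the cheap alternative f_devd mentions: a hedged sketch (not from the thread) of an order-1 Markov chain over bytes. Counting (previous, next) byte pairs yields the conditional probabilities P(next | previous) that an entropy coder such as rANS would consume, at a tiny fraction of the cost of running a neural network per symbol.

```python
# Sketch only: an order-1 Markov chain as a cheap symbol-probability model,
# of the kind contrasted above with a neural-network prior.
from collections import defaultdict


def markov_probs(data: bytes) -> dict:
    """Estimate P(next_byte | prev_byte) from pair counts."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, cur in zip(data, data[1:]):
        counts[prev][cur] += 1
    # Normalize each row of counts into conditional probabilities.
    return {
        prev: {cur: n / sum(row.values()) for cur, n in row.items()}
        for prev, row in counts.items()
    }


model = markov_probs(b"abababac")
# After 'a', 'b' occurred 3 of 4 times, so P(b | a) = 0.75.
```

An entropy coder would then assign roughly -log2(P) bits to each symbol; the model updates with a dictionary increment per byte, versus a full forward pass for an NN prior.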
msephton 11 hours ago [-]
No mention of decompression speed and validation, or did I miss something?
savalione 11 hours ago [-]
It's in the post: Benchmarks -> Speed

tl;dr: SMLL is approximately 10,000x slower than Gzip
