[HN Gopher] Only Train Once: A One-Shot Neural Network Training ...
___________________________________________________________________
Only Train Once: A One-Shot Neural Network Training and Pruning
Framework
Author : azhenley
Score : 49 points
Date : 2021-07-16 17:15 UTC (5 hours ago)
(HTM) web link (arxiv.org)
(TXT) w3m dump (arxiv.org)
| medymed wrote:
| As a hobbyist, I've wondered if the need for umpteen epochs just
| leads many nets to memorize datasets, especially when the
| performance jumps a lot from one epoch to another without much
| change during batches. It's kind of disconcerting for those of us
| who don't have millions of source images to train with.
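|
| A minimal PyTorch sketch of what that memorization looks like
| (model, data, and hyperparameters are throwaway placeholders, not
| anything from the paper): train a small MLP on random labels and
| watch train accuracy climb across epochs while held-out accuracy
| stays near chance.
|
|     import torch
|     from torch import nn
|     from torch.utils.data import DataLoader, TensorDataset
|
|     torch.manual_seed(0)
|     # Tiny synthetic stand-in for a "too small" dataset: 256
|     # samples with random labels, so the only way to fit them
|     # is to memorize them.
|     X = torch.randn(256, 32)
|     y = torch.randint(0, 10, (256,))
|     train_ds = TensorDataset(X[:192], y[:192])
|     val_ds = TensorDataset(X[192:], y[192:])
|     train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)
|     val_dl = DataLoader(val_ds, batch_size=32)
|
|     model = nn.Sequential(nn.Linear(32, 256), nn.ReLU(),
|                           nn.Linear(256, 10))
|     opt = torch.optim.Adam(model.parameters(), lr=1e-3)
|     loss_fn = nn.CrossEntropyLoss()
|
|     def accuracy(dl):
|         model.eval()
|         correct = total = 0
|         with torch.no_grad():
|             for xb, yb in dl:
|                 correct += (model(xb).argmax(1) == yb).sum().item()
|                 total += len(yb)
|         return correct / total
|
|     for epoch in range(50):
|         model.train()
|         for xb, yb in train_dl:
|             opt.zero_grad()
|             loss_fn(model(xb), yb).backward()
|             opt.step()
|         # Train accuracy far above held-out accuracy on random
|         # labels is memorization, not generalization.
|         print(f"epoch {epoch:2d}  train {accuracy(train_dl):.2f}"
|               f"  val {accuracy(val_dl):.2f}")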
| wxnx wrote:
| I think the evidence is pretty much in on that -- namely, yes,
| if your data is too small, a reasonably large neural net
| (i.e. basically any computer vision model from the last 3-4
| years) is perfectly capable of memorizing the training images.
|
| The relative success of training-data extraction attacks on
| trained nets supports that this happens in practice too.
|
| As it stands, generalization performance always has to be
| evaluated empirically.
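|
| The simplest version of such an attack is a loss-threshold
| membership-inference check; a sketch (assuming a trained `model`
| plus `train_dl`/`val_dl` loaders like the ones in the snippet
| above):
|
|     import torch
|     from torch import nn
|
|     def membership_advantage(model, train_dl, val_dl):
|         """Crude memorization score: how much more often low-loss
|         examples come from the training set than from held-out
|         data. ~0 means little leakage; larger means training
|         examples are systematically lower-loss than unseen ones."""
|         loss_fn = nn.CrossEntropyLoss(reduction="none")
|
|         def losses(dl):
|             model.eval()
|             out = []
|             with torch.no_grad():
|                 for xb, yb in dl:
|                     out.append(loss_fn(model(xb), yb))
|             return torch.cat(out)
|
|         train_l, val_l = losses(train_dl), losses(val_dl)
|         thresh = val_l.median()
|         tpr = (train_l < thresh).float().mean()  # members flagged
|         fpr = (val_l < thresh).float().mean()    # ~0.5 by design
|         return (tpr - fpr).item()
|
|     # e.g. print(membership_advantage(model, train_dl, val_dl))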
| haolez wrote:
| This could be very useful for adaptive AIs in gaming.
| osipov wrote:
| No code on GitHub. Not credible.
___________________________________________________________________
(page generated 2021-07-16 23:01 UTC)