[HN Gopher] Imaginary Numbers Protect AI from Real Threats
___________________________________________________________________
Imaginary Numbers Protect AI from Real Threats
Author : sizzle
Score : 14 points
Date : 2021-09-08 18:49 UTC (4 hours ago)
(HTM) web link (pratt.duke.edu)
(TXT) w3m dump (pratt.duke.edu)
| zardo wrote:
| > By including just two complex-valued layers among hundreds if
| not thousands of training iterations
|
| Confused by the comparison of layers to training iterations. How
| many layers are in the model?
| im3w1l wrote:
| Yeah, the whole article seems to have been written by someone
| with a very poor understanding.
| owlbite wrote:
| The actual paper is here:
| http://proceedings.mlr.press/v139/yeats21a/yeats21a.pdf
|
| The summary seems to be that adversarial attacks can be
| resisted by regularization such that the basin of attraction
| for each class is larger, making it harder to subtly nudge
| inference from class A to class B. The complex representation
| makes the regularization more effective.
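|
| For concreteness, gradient regularization of this flavor is
| often implemented as an input-gradient penalty. A minimal
| PyTorch sketch (my own illustration, not the paper's exact
| objective):
|
|   import torch
|   import torch.nn.functional as F
|
|   def loss_with_grad_penalty(model, x, y, lam=0.1):
|       # cross-entropy plus a penalty on the norm of the loss
|       # gradient w.r.t. the input: a flatter loss surface
|       # widens each class's basin of attraction
|       x = x.clone().requires_grad_(True)
|       ce = F.cross_entropy(model(x), y)
|       (g,) = torch.autograd.grad(ce, x, create_graph=True)
|       return ce + lam * g.pow(2).sum()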
|
| The mechanism is essentially this: encode a real activation x
| as the pair {sin(x), cos(x)} (i.e. the complex number e^{ix})
| and decode by taking the absolute value. A constraint on the
| descent direction then falls out:
|
|   (step in activation descent direction)^2
|     + (step in regularization direction)^2 = 1
|
| so a large step modifies either the activation or the
| regularization, not both, and the result is a more robust
| training direction.
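|
| A quick numpy check of the geometry (again a toy of mine, not
| the paper's layer): the derivative of the circle encoding
| always has unit length and is orthogonal to the encoding
| itself, which is where that squared-step budget comes from.
|
|   import numpy as np
|
|   x = np.random.randn(5)  # real activations
|   z = np.stack([np.sin(x), np.cos(x)], axis=-1)    # encoding
|   dz = np.stack([np.cos(x), -np.sin(x)], axis=-1)  # d/dx
|
|   print(np.linalg.norm(dz, axis=-1))   # all 1.0: unit step
|   print(np.einsum('ij,ij->i', z, dz))  # all ~0: orthogonal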
| Frost1x wrote:
| Clearly, it's n + 2*i^4 layerations.
___________________________________________________________________
(page generated 2021-09-08 23:01 UTC)