[HN Gopher] Breaking into the black box of artificial intelligence
___________________________________________________________________
Breaking into the black box of artificial intelligence
Author : rntn
Score : 32 points
Date : 2022-05-15 20:05 UTC (2 hours ago)
(HTM) web link (www.nature.com)
(TXT) w3m dump (www.nature.com)
| SemanticStrengh wrote:
| It's not that it is a black box that can somehow be un-black-
| boxed. Neural networks are inherently messy things with
| contrived, complex, partial, or ad-hoc representations.
| burtonator wrote:
| Also, if you look at each layer as just a vector, how the heck
| do you describe that so that it's easy to understand?
|
| I think this might be perpetually difficult to diagnose.
|
| Maybe there could be a tool that could show WHY a decision was
| made, but I'm not sure you could identify bias in deep networks
| beforehand.
| ewuhic wrote:
| Isn't the "why" the weights of the model themselves?
| sdenton4 wrote:
| People want brief explanations of reasons for outputs, and
| a 10 GB pile of weights isn't really what they mean.
|
| Human explanations, meanwhile, are often invented to fit
| evidence to personal biases and beliefs, and are thus
| typically deeply flawed. But we're more OK with humans
| making suspect decisions than with ML, in many cases.
| version_five wrote:
| Yeah, when people want to see, e.g., an image classification
| model explain the different features it saw in the image and
| the weights it assigned them in making its decision (this is
| an example in the article), they are asking for something that
| isn't what the model does.
|
| ML models have tacit knowledge in a sense: you can't tractably
| write down a process for what they do. That's not to say you
| can't describe the situations in which a model works.
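|
| The closest you can get in practice is post-hoc attribution,
| which is an approximation layered on top of the model rather
| than a readout of its reasoning. A minimal sketch of gradient *
| input saliency, assuming a PyTorch image classifier (the model
| and input below are stand-ins, not anything from the article):
|
|   import torch
|   from torchvision.models import resnet18
|
|   # Stand-in classifier and input; any differentiable model
|   # works the same way.
|   model = resnet18().eval()
|   x = torch.rand(1, 3, 224, 224, requires_grad=True)
|
|   logits = model(x)
|   score = logits[0, logits.argmax()]  # top-class logit
|   score.backward()                    # backprop it to the pixels
|
|   # Gradient * input: a crude per-pixel "importance" heatmap,
|   # not an account of what the model actually "saw".
|   saliency = (x.grad * x).detach().sum(dim=1).abs().squeeze()
|
| The heatmap is useful for spotting obvious artifacts, but it is
| a diagnostic bolted onto the model, not the model explaining
| itself.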
| drdeca wrote:
| Are you familiar with the "circuits" thread on distill.pub?
| (https://distill.pub/2020/circuits/)
|
| Messy and complex, yes, but not altogether immune to analysis.
|
| And, if the training data is diverse enough, it appears that
| individual neurons can reflect things we find meaningful,
| while being expressed in terms of neurons in previous layers
| that are also semantically meaningful, in a way we can
| comprehend.
|
| Of course, the amount of time and effort needed to collectively
| understand the entirety of such a network (to the point that a
| similar network could be built by people choosing weights by
| hand (not copying the specific numbers, only the interpreted
| meanings/reasons behind the numbers), producing something not
| too much worse than the trained network) would be gargantuan,
| and I suspect it might require multiple generations, possibly
| even many.
|
| But, I don't think it is impossible?
|
| (Presumably it will never happen, because doing the whole
| thing would not come anywhere close to being worth it, but
| still.)
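|
| On a small scale the basic move in that line of work is cheap
| to reproduce: pick a channel, record its activations over a
| dataset, and look at the inputs that excite it most. A rough
| sketch with a PyTorch forward hook (the model, layer, channel,
| and batches below are arbitrary placeholders):
|
|   import torch
|   from torchvision.models import resnet18
|
|   model = resnet18().eval()      # stand-in for any conv net
|   layer = model.layer3[0].conv1  # unit family to inspect
|   channel = 7                    # arbitrary channel index
|
|   acts = []
|   handle = layer.register_forward_hook(
|       lambda mod, inp, out: acts.append(
|           out[:, channel].mean(dim=(1, 2))))
|
|   # `batches` stands in for a DataLoader over real images.
|   batches = [torch.rand(8, 3, 224, 224) for _ in range(4)]
|   with torch.no_grad():
|       for batch in batches:
|           model(batch)
|   handle.remove()
|
|   scores = torch.cat(acts)             # one score per image
|   top_inputs = scores.topk(5).indices  # most-activating inputs
|
| Whether those top inputs form a human-meaningful cluster is
| exactly the question the circuits work digs into.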
| oneoff786 wrote:
| I feel like articles like this are always... behind. SHAP
| isn't a perfect tool by any means, but it would catch the
| low-hanging fruit like the "R" example.
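|
| Something along these lines, as a minimal sketch assuming the
| shap package and a sklearn-style tabular model (the dataset
| and model here are placeholders, not whatever the article
| used):
|
|   import shap
|   from sklearn.datasets import load_breast_cancer
|   from sklearn.ensemble import GradientBoostingClassifier
|
|   # Stand-in tabular model and data; the workflow is the point.
|   X, y = load_breast_cancer(return_X_y=True, as_frame=True)
|   model = GradientBoostingClassifier().fit(X, y)
|
|   explainer = shap.TreeExplainer(model)
|   shap_values = explainer.shap_values(X)
|
|   # Global summary: a spurious feature the model leans on (an
|   # "R"-style artifact) tends to show up near the top here.
|   shap.summary_plot(shap_values, X)
|
| It won't explain the model, but it flags which inputs the
| predictions hinge on, which is usually enough to spot that
| kind of problem.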
| killjoywashere wrote:
| Yeah, I skimmed it for about a minute and now I want my minute
| back.
| burtonator wrote:
| The R issue is a data cleansing problem too. Data cleansing is
| something that not many people talk about because it isn't
| exciting work.
___________________________________________________________________
(page generated 2022-05-15 23:00 UTC)