[HN Gopher] A.I. Is Solving the Wrong Problem
___________________________________________________________________
A.I. Is Solving the Wrong Problem
Author : mbellotti
Score : 89 points
Date : 2021-05-27 01:31 UTC (21 hours ago)
(HTM) web link (onezero.medium.com)
(TXT) w3m dump (onezero.medium.com)
| dragontamer wrote:
| Hmmm...
|
| >> Jeff Bezos's Amazon operated on extremely tight margins and
| was not profitable
|
| https://www.sec.gov/Archives/edgar/data/1018724/000119312509...
|
| Amazon made $645 million net profit in 2008, $476 million net
| profit in 2007, and $190 million in 2006.
|
| Where did this myth of "Amazon doesn't make profits" come from?
| Why are people seemingly unable to check the publicly available
| historical 10-K filings and fact-check themselves before making
| statements like this?
| lkbm wrote:
| Yeah, the 2008 number is wrong, but the meme comes from
| earlier. Its first profitable quarter was Q4 2001, three years
| after IPO, and its first profitable year was 2003, six years
| after IPO.[0] This seems like a long time, especially for the
| late nineties/early 2000s. (Though tbh, IPO three years after
| founding feels early to me too.)
|
| Additionally, I seem to recall that they talked this up. Not
| "we're working on becoming profitable" but instead "We plan to
| continue losing money for several years. Deal with it."
|
| I believe that prior to 2016, any profitable years were pretty
| much entirely thanks to Q4, and the profits were small relative
| to the company's size[1]. A profitable year is good, but three
| quarters of losses each year will stand out. Sure, they're
| retail, but they're also tech. Sky-high margins are expected
| year-round.
|
| [0] https://en.wikipedia.org/wiki/History_of_Amazon
|
| [1] https://qz.com/1925043/the-days-of-amazons-profit-
| struggles-...
| da_chicken wrote:
| From what I remember, Amazon's strategy early on was to take
| as much revenue as it could and invest it back into itself.
| It _intentionally_ ran in the red to try to grow faster.
| gowld wrote:
| Amazon was unprofitable because they poured all their
| operating profits into growth projects, not because they
| were subsidizing operations with investment.
|
| Like many retailers, the business is seasonal and Q4 has more
| shopping. This is modeled as part of the business. We don't
| say lawn care businesses are unstable because they do most
| work in the summer.
| kimi wrote:
| Medium - please don't.
| vallas wrote:
| A paywall is such a terrible model for spreading ideas.
| OrderlyTiamat wrote:
| This author makes broad, sweeping claims, supporting them with
| numerous references that (in all instances that I checked)
| actually counter their argument. I'm not even sure the author
| knows _what subject they want to talk about_, never mind what
| argument to present.
| digitcatphd wrote:
| Yep... didn't read past the headline of the article.
| truth_ wrote:
| Yes. "AI" (read Deep Learning/LogReg/SVM models) do indeed
| perform better given more data. I can vouch for this myself.
| And there was also a paper regarding this.
|
| Everyone is an expert in AI now!
| plutonorm wrote:
| I waded through half of it hoping a coherent point would
| emerge before heading over to hn comments to confirm my
| suspicions. It's a mash-up of a couple of different pop
| opinions on the state of ML without any real insight.
| dnautics wrote:
| At least one of them is true. The effort spent "cleaning
| data" (which, from her description, I think means
| "connecting pipes") is underestimated. I work for a company
| that deploys an ML model that works on a "lowest common
| denominator" between multiple different downstream SaaSes,
| and this week I have been struggling with a field that
| should be an integer but is a string in one SaaS provider
| (and there are entries that are not parsable as an integer).
| I can't simply treat it as a string because the ML model is
| not expecting one.
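|
| Roughly the kind of shim this turns into, as a minimal sketch
| (the field name and helper below are made up for illustration):
|
|     # Hypothetical normalization for a field one SaaS sends as an
|     # int and another as a string, sometimes unparsable.
|     from typing import Any, Optional
|
|     def coerce_item_count(raw: Any) -> Optional[int]:
|         """Return the field as an int, or None if unrecoverable."""
|         if isinstance(raw, int):
|             return raw
|         if isinstance(raw, str):
|             try:
|                 return int(raw.strip())
|             except ValueError:
|                 return None  # e.g. "N/A", "three", ""
|         return None
|
|     def normalize_record(record: dict) -> dict:
|         out = dict(record)
|         out["item_count"] = coerce_item_count(record.get("item_count"))
|         return out
|
| Records that come back with None then need an explicit policy
| (drop, impute, or flag) before they ever reach the model.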
| conjectures wrote:
| Whatever it is, it should be antifragile though.
| richk449 wrote:
| > After decades of investment, oversight, and standards
| development, we are not closer to total situational awareness
| through a computerized brain than we were in the 1970s.
|
| Hard to see how that could be true. In just about any field,
| computers today provide much better situational awareness than
| was possible in 1970.
|
| The article makes the usual complaints about self driving cars:
|
| > Despite $16 billion in investment from the heavy hitters of
| Silicon Valley, we are decades away from self-driving cars.
|
| Yet cars are much more intelligent today than they were in the
| 1970s. And we are not decades away from self-driving cars - Waymo
| runs self driving cars today in very specific locations.
|
| Wondering if this article was written by GPT-3.
| zepto wrote:
| > Waymo runs self driving cars today in very _specific
| locations_.
|
| Specific locations where the streets are practically tracks.
| grp000 wrote:
| Phoenix is laid out on a grid, but you can hardly call the
| streets "tracks".
| fungiblecog wrote:
| Self-driving cars are much, much better, but they are not
| "intelligent" in any sense of that word.
| Scarblac wrote:
| "Intelligence" is anything that humans can do but we don't
| know how to make computers do. Once computers can do it, it's
| "mere computation".
| vagrantJin wrote:
| I doubt we'd call locomotion and reaction to sensory
| feedback intelligent. Even single cell organisms are well
| and truly capable of that.
| dtech wrote:
| What self-driving cars do is closer to something an
| animal like a deer does, not something a bacterium does.
| And you'd generally say a deer uses some intelligence
| while moving.
| Scarblac wrote:
| So can single cell organisms drive cars? Because
| otherwise I don't see your point.
| [deleted]
| solipsism wrote:
| For your personal, idiosyncratic definition of "intelligent".
| fungiblecog wrote:
| In the sense that an intelligent thing can react sensibly
| to situations outside of those explicitly known about in
| advance
| rcxdude wrote:
| Waymo has some excellent examples of their car reacting
| to strange unforeseen situations appropriately. You're
| going to need to be more specific.
| js8 wrote:
| > Wondering if this article was written by GPT-3.
|
| FYI, it was written by a woman. I looked up her book, Kill It
| with Fire, and as a mainframer I have to say it seems pretty
| interesting.
|
| I think what she alludes to in this essay, though, is more
| that AI cannot solve the socioeconomic problems of humans. And
| even humans seem to struggle with those.
|
| Whenever I read stories where the metrics became the targets,
| and the like, I am reminded of Varoufakis' book Economic
| Indeterminacy. He doesn't give any answers there, but there is
| this "strange loop" in rationalism that nobody really
| understands.
|
| I also think that AI might be the wrong target, because you need
| to understand the problem before you can solve it, and once
| humans understand the problem, they don't need AI anymore; they
| just code the solution as an algorithm. On the other hand, if
| humans don't fully understand the problem, it's extremely
| difficult (except in artificial circumstances like games) to
| explain to the AI what the problem is, so that it would arrive
| at a "reasonable" solution (and avoid, at the very least,
| killing all humans).
| soco wrote:
| If we can't understand the problem, will we be able to
| understand the solution presented by that AI? Or would we just
| apply it, blindly trusting the unfathomable reasoning the AI
| used? Do we have an AI whose decision tree can be grasped
| by humans?
| lokischild wrote:
| It seems inevitable to me that the moment AI capacity
| seriously surpasses human capacity (as a whole) in any
| specific topic, it becomes an oracle. I hear of efforts to
| translate machine decision-making into human-understandable
| terms, but if it is a question of raw intelligence, it will
| quickly become impossible to understand.
| js8 wrote:
| This is a topic of Lem's novel/essay
| https://en.wikipedia.org/wiki/Golem_XIV.
|
| And to be honest, I wondered about that with GPT-3. Maybe
| it could give a more profound answer to a given prompt,
| but it chose not to, since it found the prompt to be too
| silly, and it responded in kind. Just as adults are
| able to entertain a child's imaginary universe.
|
| So even to explain that we want a "serious" answer might
| be difficult.
| maest wrote:
| It's interesting to see how super advanced ML-based
| engines have changed the field of chess in recent years -
| top engines are way better than top humans, so, in a
| sense, act as oracles.
|
| Top players routinely use them in preparation, to study
| new lines, get hints about what moves make sense in a
| position and also to generate new ideas or tweak the
| principles they apply in the game. The engines don't
| explain their reasoning, but provide something closer to
| the "correct" move in any given position. It's up to the
| humans to do the legwork and understand _why_ the
| recommended move is strong.
|
| Clearly, chess is not real life, but the impact of these
| oracle engines has been broadly positive (with the
| exception of using engines to cheat in online play).
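|
| For the curious, a minimal sketch of that kind of oracle query,
| using the python-chess library and a UCI engine such as
| Stockfish (the binary path here is an assumption):
|
|     # Ask the engine for its evaluation and preferred line in a
|     # position, the way a player preparing an opening might.
|     import chess
|     import chess.engine
|
|     engine = chess.engine.SimpleEngine.popen_uci("/usr/bin/stockfish")
|     board = chess.Board()  # starting position; set a FEN to study a line
|
|     info = engine.analyse(board, chess.engine.Limit(depth=20))
|     print("eval:", info["score"].white())   # score from White's view
|     print("line:", [m.uci() for m in info["pv"][:5]])
|
|     engine.quit()
|
| The engine reports a score and a principal variation, but the
| "why" behind the move is still left to the human.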
| amelius wrote:
| > I think what she alludes to in this essay, though, is more
| like that AI cannot solve socioeconomic problems of humans.
|
| Yes, the question is: do we want AI to solve first world
| problems, or real problems?
| Workaccount2 wrote:
| I remember in 2015 when we were decades away from a computer
| topping a Go champion...
| paulcole wrote:
| Let's say you and I were going to race each other by walking.
| You start on the east side of Los Angeles and I'm in Santa
| Monica.
|
| Does your lead mean you're in a much better position than me?
| What if the finish line is in Amsterdam?
|
| That's how I see AI (particularly self-driving tech) today.
| Yes, technically there have been advancements, but we don't
| even know today whether it's possible to get to the finish
| line.
| richk449 wrote:
| > Let's say you and I were going to race each other by
| walking. You start on the east side of Los Angeles and I'm in
| Santa Monica. Does your lead mean you're in a much better
| position than me? What if the finish line is in Amsterdam?
|
| I can't figure out how to parse this. You refer to my
| starting location as a "lead", but then ask if it means I am
| in a better position - that is the definition of "lead". I
| think your point is that we are so far from what is needed
| that it is hard to know if we are even moving in the right
| direction.
|
| Which is a weird argument. My brother drove me in his Nissan
| Rogue today, which does automatic lane following. You don't
| have to steer your car, or use the gas or brake for many
| driving conditions. That is unambiguously an improvement over
| full manual control.
| icoder wrote:
| It's an improvement in what it does now, but I think the
| point is it is not _necessarily_ bringing the end goal
| closer. Like a side track that dead-ends at some point.
|
| No matter how much faster horses might have become through
| selective breeding, that did not bring 100 km/h travel
| closer.
| martin_a wrote:
| With regards to autonomous driving, improvements like
| this absolutely bring us closer to the end goal.
|
| Each improvement in adaptive cruise controls, lane
| following/holding assistants and any other partly-
| autonomous assistance system does its part in acquiring
| experience and technology for "the end goal".
| paulcole wrote:
| You missed the word "much". Is a lead of one step a much
| better position? If it's a 2-step race, then for sure. If
| it's a 2,000-step race, almost certainly not. If we're not
| sure how long the race is, then who can say.
| Robotbeat wrote:
| If by "finish line" you mean super-human cognition in every
| single sense, then sure (although it is possible). That might
| be a century away while AI has nonetheless been stupendously
| successful in several areas.
| Barrin92 wrote:
| > _Hard to see how that could be true. In just about any field,
| computers today provide much better situational awareness than
| was possible in 1970._
|
| You sure about that?
|
| https://twitter.com/hatr/status/1361756449802768387?s=20
|
| > _Waymo runs self driving cars today in very specific
| locations_
|
| Ernst Dickmanns had autonomous cars on the road in very
| specific locations in the 1980s.
|
| https://youtu.be/_HbVWm7wdmE
| dtech wrote:
| I don't see how this is relevant. Computers in the 1970s had
| no situational awareness about people interviewing for jobs.
| So yes, that software might be crap, but it still has
| infinitely more awareness.
| Barrin92 wrote:
| What even is "infinitely more awareness"? Is that an actual
| metric? Computers today have exactly as much awareness as
| they had in the 1970s, situational or otherwise, which is
| none. The algorithm in question does not know what a
| bookshelf is, does not know what a job interview is and it
| does not know how the two relate. It correlates a bunch of
| pixels and creates the illusion of having awareness, but
| this is an anthropomorphization and nothing more.
| TchoBeer wrote:
| How would we know if it had "actual awareness" and not
| "the illusion of awareness"
| 317070 wrote:
| How do you know it is nothing more? Why wouldn't
| awareness emerge from a large amount of correlation?
| croes wrote:
| So you are saying
|
| Computer 1970: 0 situational awareness
|
| Computer nowadays: 0 * infinity = undefined situational
| awareness.
|
| Pretty useless.
| krallistic wrote:
| It's easy to point at the bookshelf example and say "Haha AI
| is stupid", but it's actually quite impressive. One could
| easily argue that most human interviewers have similar biases,
| and that it can detect such complex signals (books, glasses,
| etc.) IS impressive.
|
| The problem in this case is the data and/or wrong objectives,
| but the "AI" here has a lot of awareness, just of the "wrong"
| signals.
| rightbyte wrote:
| That project has way too little coverage. It is like the 70s
| automatic hamburger joint.
|
| https://m.youtube.com/watch?v=FmXLqImT1wE
|
| I recall some Ford(?) project which guided the cars by rail
| that was quite old.
| gjm11 wrote:
| How much would you bet that _human_ interviewers' opinions
| of a candidate after a video interview wouldn't be affected
| if they were visibly in a room full of books? Or if they wore
| glasses, or had a painting hanging on the wall, or the
| various other things the researchers found made a difference
| to the AI's assessment?
|
| To be clear, I am super-skeptical about the ability of AI
| systems to do a good job of judging an interviewee's
| personality from a short video clip. But (1) this seems
| obviously to be a _really hard_ problem, and one that couldn't
| even have been attempted in 1970, and (2) I am also pretty
| skeptical about the ability of human interviewers to do it.
| csmpltn wrote:
| > "But (1) this seems obviously to be a really hard
| problem, and one that couldn't even have been attempted in
| 1970"
|
| Apollo 11 landed on the moon July 20th 1969. You think that
| didn't take a degree of "AI" and "ML"? Or maybe we just had
| a different name for these things back then...
|
| It's been 52 years since then.
|
| We're simply focusing on the wrong problems.
| klmadfejno wrote:
| > You sure about that?
|
| A single dumb human posts a singular dumb AI example to show
| that all AI is dumb, and fails to recognize the irony.
| mjburgess wrote:
| > Yet cars are much more intelligent today than they were in
| the 1970s
|
| Therein lies the problem. Your definition of intelligence
| presumes that it is a simple quantitative scale, measuring,
| I guess, something like "system complexity".
|
| The relevant sense here, in which no progress has been made, is
| qualitative -- ie., it is a distinct property. And this
| property has not been acquired.
|
| What is the property? It is dynamical, not formal. It is more
| like gravity (, pregnancy) than it is like addition.
|
| It is the ability many animals have to adapt to the shifting
| and challenging environments in which they are embedded.
|
| That type of adaption is not formal: it is not adaption in the
| sense of "updating a weight parameter". Rather, it is the cells
| of their bodies coordinating themselves _differently_, and thus
| their tissues, and thus their organs, and thus their whole
| brain-body system. Both from a top-down command ("I want
| to run now, and so my cells...") and from the bottom up ("my
| cells..., so I...").
|
| What enables animals to be fully embedded in their physical
| environment, to cope and adapt to its radical shifts, is this
| capacity. The type of "crossword puzzle" "intelligence" we
| obsess with is entirely derivative of this more basic --and
| vastly more powerful -- intelligence.
|
| Cognition is just a semi-formal process, parasitic on the
| body's intelligence, whose role is simply to notice when that
| intelligence fails and to problem-solve.
|
| We have, at best, merely the architecture of this formal
| reasoning. But there is still _nothing_ for it to reason about.
| And in this sense, computer science has made no progress -- and
| indeed, cannot. It is not a formal problem.
| Jabbles wrote:
| Are you suggesting that there is a binary property
| "intelligent or not"? In which case, what would progress even
| look like?
| mjburgess wrote:
| Indeed, a bit like, "made of metal, or not".
|
| And I think it looks more or less like, "organic-in-
| relevant-way or not".
|
| "Relevant" here means a type of organic-physiological
| adaptability which is able to operate on incredibly short
| time scales: ie., you're able to physically adapt your body
| as your environment changes at the second, minute, hour,
| day, ... decade, timescales.
|
| _Typing_ is a type of organic adaption which takes years
| to complete. As are essentially all of our skills.
|
| With digital machines and robots we're _maybe_ able to
| emulate our cognitive processes _as they consider our body-
| environment relationship_. But we have not emulated, at
| all, our ability to _have_ this type of body-environment
| relationship.
| ithkuil wrote:
| A dog is not made of metal, but I have yet to see a dog
| that drives better than a Waymo self-driving car.
| corford wrote:
| A waymo self-driving car is not made of flesh and bones
| but I have yet to see a waymo self-driving car that
| performs better than a dog at: rescuing humans, detecting
| cancers and disease, shepherding and playing with other
| animals, protecting family from threats...
| klmadfejno wrote:
| > rescuing humans
|
| Vague and undefined, mostly a hardware problem, not an
| intelligence problem
|
| > detecting cancers and disease
|
| AI is a strong tool for detecting many cancers, and in
| some areas of research, just using the same olfactory
| data as dogs: https://journals.plos.org/plosone/article?i
| d=10.1371/journal...
|
| > shepherding
|
| Again, not so much an intelligence problem, but a
| hardware problem. Nonetheless
|
| https://www.theguardian.com/world/2021/apr/09/stress-
| test-au...
|
| > protecting family from threats
|
| In terms of ability to recognize and classify a threat,
| AI is clearly superior here. Dogs have terrible false
| positive rates.
| ithkuil wrote:
| My point was just a crude attempt to dispel the myth that
| flesh has anything to do with efficiency at some task.
| What matters is design, and the immense research power
| that nature has through eons of natural selection does an
| impressive job at the things it has been "trained for"
| (hence your example).
| mjburgess wrote:
| A dog's environment isn't roads. And no car has been
| designed for a dog to drive.
|
| Were such a car to exist, it is clear the dog would win
| in very very many environments (almost all). As would a
| mouse, let alone a dog.
|
| That it _may_ be possible to rig a human environment to
| be replete with so many symbols (road signs, etc.) that
| an incredibly dumb automated system can follow them is
| hardly here-nor-there.
|
| Personally, I don't even think that will be possible.
| Self-driving cars _may_ work on highways and motorways; I
| don't see there being any in cities. Not for centuries.
|
| (Absent pretty big engineering projects to make cities so
| overly sign'd that a non-intelligent automated system
| could navigate them. Consider, eg., existing automated
| trains & train networks.)
| klmadfejno wrote:
| > Were such a car to exist, it is clear the dog would win
| in very very many environments (almost all). As would a
| mouse, let alone a dog.
|
| This seems incredibly unlikely. AI vastly outperforms
| 99.99% of humans on various video games, and 100% on many
| others. I'll bet on a well-trained ML model over a dog
| every time.
|
| > That it may be possible to rig a human environment to
| be replete with so many symbols (road signs, etc.) that
| an incredibly dumb automated system can follow them is
| hardly here-nor-there.
|
| We already have above average human performance with just
| normal road signs, and could also simply use digital
| information.
|
| > Self-driving cars may work on highways and motorways; I
| don't see there being any in cities. Not for centuries.
|
| Goalpost shifting
| inglor_cz wrote:
| "centuries"
|
| Centuries is a long, long time in science and technology.
| It is 2021. If we take "centuries" to mean two centuries,
| railways with steam engines were not yet a thing in 1821.
| Cars without horses much less so.
|
| None of us can predict the state of computer science in
| 2221.
| ithkuil wrote:
| IMHO the biggest problem is the moral problem; even once
| the tech achieves better reliability in general (as
| compared to human drivers, who are quite crappy but we're
| all used to them), the cases where it fails will be so
| spectacular and cause so much outrage, because we're ill-
| equipped to deal with situations where there is nobody to
| blame: we always try to find somebody to hold
| responsible. When there is none, we make them up (deities
| and whatnot).
|
| When natural disasters strike, people feel plenty of
| emotions, including anger. Often that anger, though, cannot
| be directed at anybody in particular. ("God isn't easily
| sued.")
|
| When machines misbehave, being by definition human made,
| it's harder to accept it "as just the way the world
| works".
| Retric wrote:
| FYI. Humans _are_ made of metal. Iron, Calcium, and
| Sodium are all essential metals.
| soco wrote:
| You have a valid point here. But it might be that for
| practical reasons the progress in that direction won't be
| needed. Like, brute forcing it might be enough to reach a
| level higher than we can grasp. And if we can't grasp it,
| it's all Greek to us anyway...
| mjburgess wrote:
| The problem with brute-forcing is that it requires data from
| the future.
|
| Physics can be solved with statistics, but only from God's
| POV.
|
| The future, absent information about it, is too open to
| solve by merely constraining models by past example cases.
| klmadfejno wrote:
| And yet computers continue to perform tasks that were talked
| about for years as something uniquely human / intelligence
| driven. This is a nice philosophical debate, but in practice
| I think it falls flat.
| mjburgess wrote:
| I don't see a single case of that. Rather, in every case
| the goalposts were moved.
|
| Can a computer _play_ chess? No.
|
| They _search_ through many permutations of board states and
| in a very dumb way merely select the decision path that
| leads to a winning one.
|
| That was never the challenge. The challenge was having them
| _play_ chess; i.e., no tricks, no shortcuts. Really evaluate
| the present board state, and actually choose a move.
|
| And likewise everything else. A rock beats a child at
| finding the path to the bottom of a hill.
|
| A rock "outperforms" the child. The challenge was never,
| literally, getting to the bottom of the hill: that's dumb.
| The challenge was matching the child's ability to do that
| _anywhere_ via exploration, curiosity, planning,
| coordination, and everything else.
|
| If you reduce intelligence to merely completing a highly
| specific task then there is always a shortcut, which uses
| no intelligence, to solving that task. The ability to build
| tools which use these shortcuts was never in doubt: we have
| done that for millennia.
| Falling3 wrote:
| > Can a computer play chess? No.
|
| > They search through many permutations of board states
| and in a very dumb way merely select the decision path
| that leads to a winning one.
|
| This is a perfect example of moving the goal posts. The
| objective was never to simulate a human playing chess.
| mjburgess wrote:
| The objective was to build an intelligent machine. Games
| were chosen as they, in humans, require intelligence.
|
| They thought that AI would come out of building systems
| that can replace humans: sure, but only insofar as you
| preserve the use of intelligence.
|
| If you replace it with a shortcut, you haven't built an
| intelligent machine.
| klmadfejno wrote:
| The objective was to make a machine that could beat
| anybody at chess. Nobody on the AlphaZero team believes
| AlphaZero is an example of general AI. Teaching a system
| to understand a complex system is a necessary
| subcomponent of general intelligence.
| andrewprock wrote:
| The moving of the goal posts is being done by those who
| claim that chess algorithms are in any sense intelligent.
|
| The same holds for artificial intelligence broadly. It is
| as far from intelligence as a can opener.
| klmadfejno wrote:
| If a lookup table can predict human decisions with high
| accuracy given access to its senses and feelings, then
| either a human is just another can opener or intelligence
| isn't real.
| TchoBeer wrote:
| At what point would you be willing to concede that an
| algorithm is actually intelligent? What would be required
| for that?
| andrewprock wrote:
| The definitions of algorithm and intelligence are mutually
| exclusive. Maybe you are asking a different question?
|
| "When will we discover an algorithm for intelligence?"
| richk449 wrote:
| If we understand something, we can describe it with an
| algorithm.
|
| If algorithms for intelligence are by definition
| impossible, then understanding intelligence is by
| definition impossible.
|
| So true intelligence must be something beyond our current
| understanding of the world (beyond algorithmic
| description). To me, this feels like the god of the gaps
| applied to intelligence.
| andrewprock wrote:
| Understanding does not imply computability. The halting
| problem is the classic example of this.
| klmadfejno wrote:
| > They search through many permutation of board states
| and in a very dumb way merely select the decision path
| that leads to a winning one.
|
| > That was never the challenge. The challenge was having
| them play chess; ie., no tricks, no shortcuts. Really
| evaluate the present board state, and actually choose a
| move.
|
| Uh-huh. And how exactly do you play chess? Do you not,
| perhaps, think about future states resultant from your
| next move?
|
| Also, AlphaZero, with its ability to do a tree search
| entirely removed, achieves an Elo rating of greater than
| 3,000 in chess, which isn't even the intended design of
| the algorithm.
|
| A rock will frequently fail to get to the bottom of a
| hill due to local minima vs. global minima. A child
| will too sometimes.
| mjburgess wrote:
| > Uh-huh. And how exactly do you play chess? Do you not,
| perhaps, think about future states resultant from your
| next move?
|
| Not quite. You'd need to look into how people play chess.
| It has vastly more to do with _present_ positioning and
| making high-quality evaluations of present board
| configuration.
|
| > A rock will frequently fail to get to the bottom of a
| hill due to local minima
|
| Indeed. And what is a system which merely falls into a
| dataset?
|
| A NN is just a system for remembering a dataset and
| interpolating a line between its points.
|
| If you replace a tree search with a database of billions
| of examples, are you actually solving the problem you
| were asked to solve?
|
| Only if you thought the goal was literally to win the
| game; or to find the route to the bottom of the hill.
| That was never the challenge -- we all know there are
| shortcuts to merely winning.
|
| Intelligence is in how you win, not that you have.
| klmadfejno wrote:
| > Not quite. You'd need to look into how people play
| chess. It has vastly more to do with present positioning
| and making high-quality evaluations of present board
| configuration.
|
| That is what AlphaZero does when you remove tree search.
|
| > A NN is just a system for remembering a dataset and
| interpolating a line between its points.
|
| Interpolating a line between points == making inferences
| on new situations based on past experience.
|
| > If you replace a tree search with a database of
| billions of examples, are you actually solving the
| problem you were asked to solve?
|
| The NN still performs well on positions it hasn't seen
| before. It's not a database. The fact that the NN learned
| from billions of examples is irrelevant. Age limits
| aside, a human could have billions of examples of
| experience as well.
|
| > A NN is just a system for remembering a dataset and
| interpolating a line between its points.
|
| So are human brains. That is the very nature of how
| decisions are made.
|
| > Only if you thought the goal was literally to win the
| game; or to find the route to the bottom of the hill.
| That was never the challenge
|
| So then why did you bring it up as an example other than
| to move goal posts yet again? I can build a bot to
| explore new areas too. Probably better than humans can.
| Any novel perspective that a human brings, is, by
| definition, learned elsewhere, just like a bot.
|
| > Intelligence is in how you win, not that you have.
|
| Sure, and being a dumbass is in how you convince yourself
| you're superior when you lose every game. There are many
| open challenges in AI. Making systems better at learning
| quickly and generalizing context is a very hard problem.
| But at the same time, intellectual tasks are being not
| only automated, but vastly improved by AI in many areas.
| Moving goalposts on what was clearly thought labor in the
| past is just handwaving philosophy to blind yourself to
| something real and actively happening. The DOTA bots
| don't adapt to unfamiliar strategies by their opponents,
| and yet, they're still good at DOTA.
| omgwtfbyobbq wrote:
| I think intelligence is more generally how an agent
| optimizes to be successful, objectively and subjectively,
| across a wide variety of different situations.
| richk449 wrote:
| Let's say that you have the ability to know the state of
| every neuron, and the interconnect map between them, at
| all times. You watch a chess player make a move,
| determine what is going on, and define the process the
| brain follows as an algorithm. Now that you have an
| algorithm, you have a very powerful piece of silicon
| execute the algorithm. Does that piece of silicon have
| intelligence? You would probably say no, since simply
| executing a pre-defined algorithm is a shortcut.
| Intelligence means the ability to develop the algorithm
| intrinsically in your head.
|
| So fine, we take a step back. Instead of tracing all the
| neurons as they determine a chess move, we trace all the
| neurons as they start, from a baby, and learn to see and
| to understand spatial temporal behavior and as they
| understand other independent entities that can think like
| they do and as they learn chess and how to make a move.
| Then we encode all of that into algorithms and run it on
| silicon. Is that intelligence? To me, it sounds like it
| is just a shortcut - we figured out what a brain does,
| reduced it to algorithms, and ran those algorithms on a
| computer.
|
| What if we go back further and replay evolution? Is that
| a shortcut?
|
| To be fair, you did claim that the ability to adapt and
| make tools is what distinguishes real intelligence. But I
| wonder if ten years from now, you will be saying that a
| tool-making computer is just a shortcut.
| api wrote:
| "After decades of investment, oversight, and standards
| development, the space program is not closer to light speed
| travel than it was in the 1970s."
| charcircuit wrote:
| >People don't make better decisions when given more data, so why
| do we assume A.I. will?
|
| Because humans aren't computers. Computers are much better at
| processing large amounts of data than humans are.
|
| >we are decades away from self-driving cars
|
| Self-driving cars already exist. In college I had a lab where
| everyone had to program essentially a miniature car with sensors
| on it to drive around by itself. Making a car drive by itself is
| not a hard thing to accomplish.
|
| >the largest social media companies still rely heavily on armies
| of human beings to scrub the most horrific content off their
| platforms.
|
| This content is often subjective. It's impossible for a computer
| to always make the correct subjective choice, so humans will
| always be necessary.
| abpavel wrote:
| > On a warm day in 2008, Silicon Valley's titans-in-the-making
| found themselves packed around a bulky, blond-wood conference
| room table.
|
| The author has read The New Yorker a lot. Some captivating
| details, made irrelevant at the end of the paragraph.
| incrudible wrote:
| I can't imagine anyone actually reading these articles in 2021.
| I'm pretty sure most people buy the New Yorker to put it on the
| coffee table, because of the decorative covers.
| gverrilla wrote:
| This style is very funny indeed. Sometimes it's used in my
| language (pt-br) as well in some articles (probably because they
| bought the piece from Reuters or something and translated it).
| It's very strong in writers like Gay Talese, who take it so
| seriously they even DRESS the style LMAO
| MattGaiser wrote:
| >People don't make better decisions when given more data, so why
| do we assume A.I. will?
|
| How much of this is just because it says something they do not
| want to hear or because there are incentives to not consider it?
| Animats wrote:
| _People don't make better decisions when given more data, so
| why do we assume A.I. will?_
|
| It's recognized that many machine learning systems today need
| very large amounts of training data, far more than humans
| facing the same task. That's a property of the current brute-
| force approaches, where you often go from no prior knowledge to
| some specific classification in one step. This often works
| better than previous approaches involving feature extraction as
| an intermediate step, so it gets used.
|
| This is probably an intermediate phase until someone has the
| next big idea in AI.
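|
| A rough sketch of that contrast, assuming scikit-learn and
| made-up stand-in data (the dataset and pipeline choices below
| are illustrative only):
|
|     import numpy as np
|     from sklearn.pipeline import make_pipeline
|     from sklearn.decomposition import PCA
|     from sklearn.linear_model import LogisticRegression
|     from sklearn.neural_network import MLPClassifier
|
|     rng = np.random.default_rng(0)
|     X = rng.normal(size=(1000, 784))             # stand-in for raw images
|     y = (X[:, :10].sum(axis=1) > 0).astype(int)  # toy labels
|
|     # Older style: explicit feature extraction, then a simple model.
|     feature_based = make_pipeline(PCA(n_components=30),
|                                   LogisticRegression(max_iter=1000))
|     feature_based.fit(X, y)
|
|     # Current brute-force style: raw input to label in one learned
|     # step, which tends to need far more data than this toy task has.
|     end_to_end = MLPClassifier(hidden_layer_sizes=(128,), max_iter=50)
|     end_to_end.fit(X, y)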
| bserge wrote:
| Every single adult person has a 20+ year learning lead on any
| machine, so it's a bit unfair to say humans can do the same
| task with less data.
|
| And computers are already better at tasks involving a lot of
| math, which is the main reason they've become commonplace.
| mrbungie wrote:
| Yep, 20+ years of learning based on 10,000+ years of wisdom
| taught from generation to generation. People take both
| things for granted when comparing humans with "AIs".
| pdkl95 wrote:
| > more data
|
| https://www.youtube.com/watch?v=sTWD0j4tec4
|
| Negativeland has never been more on-topic.
| mark_l_watson wrote:
| I read the whole article and thought it was worth my time. I
| liked the broad strokes of the goal of antifragile AI.
|
| I have been thinking of hybrid AI systems since I retired from
| managing a deep learning team a few years ago. My intuition is
| that hybrid AI systems will be much more expensive to build but
| should in general be more resilient, kind of like old-fashioned
| multi-agent systems with a control mechanism to decide which
| agent to use.
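|
| A toy sketch of that control-mechanism idea (the agent names
| and the routing rule below are invented for illustration, not
| taken from any particular system):
|
|     from typing import Callable, Dict
|
|     def rule_agent(case: dict) -> str:
|         # Hand-written rules: conservative but predictable.
|         return "deny" if case.get("amount", 0) > 10_000 else "allow"
|
|     def ml_agent(case: dict) -> str:
|         # Stand-in for a learned model's decision function.
|         score = 0.1 * case.get("amount", 0) - 5 * case.get("tenure", 0)
|         return "deny" if score > 100 else "allow"
|
|     AGENTS: Dict[str, Callable[[dict], str]] = {
|         "rules": rule_agent,
|         "ml": ml_agent,
|     }
|
|     def controller(case: dict) -> str:
|         # Fall back to hand-written rules outside the region the
|         # ML agent was trained on; otherwise trust the model.
|         name = "rules" if case.get("amount", 0) > 50_000 else "ml"
|         return AGENTS[name](case)
|
|     print(controller({"amount": 80_000, "tenure": 2}))  # routed to rules
|     print(controller({"amount": 3_000, "tenure": 7}))   # routed to ml
|
| The expense is in building and validating each agent plus the
| routing logic, but a bad answer from one agent no longer sinks
| the whole system.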
| aazaa wrote:
| Login required. What is the problem AI is trying to solve, and
| what is the right problem, according to the author?
| jokoon wrote:
| Unless science studies:
|
| * analysis of trained neural networks so they're not just black
| boxes.
|
| * arrangement of real neurons in actual brains of ants, mice,
| flies and other small animals.
|
| * some philosophical questioning of how consciousness,
| intelligence, and awareness emerge, including a good definition
| and an account of how the brain is able to distinguish causality
| from correlation.
|
| * some actual collaboration between psychology AND neurology to
| connect the dots between cognition and how an actual brain
| achieves it.
|
| Unless there are more efforts towards those things, machine
| learning will just be "advanced statistical methods", and
| programming experts will keep overselling their tools. "Mimicking
| neural networks" is just fancy advertising for a simple graph
| algorithm.
| jjcon wrote:
| 1, 2, and 4 are already occurring, so I'm not sure what you're
| on about. The third is completely irrelevant and seems fairly
| pseudoscientific; leave that to philosophers, we're not trying
| to create souls.
|
| > advanced statistical methods
|
| Furthermore, plenty of methods in machine learning, including
| some methods of training neural nets, are completely
| astatistical in nature. Unless you want to grow the definition
| of statistics to be so large as to consider all of maths and
| every science as 'statistics', these will rightly remain
| distinct fields of study (though they do overlap, just as
| stats overlaps with most sciences).
| unlikelymordant wrote:
| It says I need to install the app to read this article. Is there
| some other way of reading it?
| svantana wrote:
| Browser incognito mode usually does the trick. You never know
| with these "intelligent" websites though :)
| [deleted]
| andreyk wrote:
| Super confusing article.
|
| Title aside (which is silly since AI is a toolset for solving a
| variety of problems), it is just so poorly written that it's not
| until more than halfway through it that I think I see its main
| points (that present-day A.I. systems are too dependent on
| 'clean' data, and some nebulous discussion of how AI contributes
| to decision-making in organizations). And the main point wrt data
| quality is rather silly in itself, because plenty of research is
| done on learning techniques that take into account adversaries or
| bad data. And all the discussion wrt how AI should be used to
| improve decision making is just super vague and makes it seem
| like the author has little understanding of what AI is and how it
| is actually used.
| wombatmobile wrote:
| > Super confusing article.
|
| Yeah I stopped reading at this point:
|
| > Facebook's moderation policies, for example, allow images of
| anuses to be photoshopped on celebrities but not a pic of the
| celebrity's actual anus.
| [deleted]
| lincpa wrote:
| Explainable AI System uses the law model and Warehouse/Workshop
| model
|
| TL;DR
|
| An explainable AI system must be constructed in the following way
| to achieve the best results.
|
| The rule-based AI expert system is used as the logical reasoning
| framework, and the rule base (statute law) is used as the basis
| for interpretation.
|
| The dynamic rules (case law) generated by machine learning run in
| the rule-based AI expert system.
|
| Dynamic rules (case law) generated by machine learning cannot
| violate the rule base (statutory law).
|
| If the dynamic rules (case law) generated by machine learning
| conflict with the rule base (statute law), the system must
| archive them and submit them to humans to decide whether to
| modify the rule base (statute law) and whether to adopt the
| dynamic rules (case law).
|
| In short, if you want an AI system to be a reasonable,
| responsible and explainable AI system, you must take up the
| weapon of law :-)
|
| https://github.com/linpengcheng/PurefunctionPipelineDataflow...
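|
| A minimal sketch of that scheme, with toy rule and case names
| invented for illustration (not taken from the linked repo):
|
|     from dataclasses import dataclass
|     from typing import Callable, List
|
|     @dataclass
|     class Rule:
|         name: str
|         applies: Callable[[dict], bool]  # does the rule match this case?
|         decision: str                    # what it decides when it matches
|
|     # "Statute law": the fixed, human-authored rule base.
|     statute_rules = [
|         Rule("no_loan_to_minors", lambda c: c["age"] < 18, "deny"),
|     ]
|
|     # "Case law": rules produced by an ML step (hard-coded here).
|     learned_rules = [
|         Rule("low_income", lambda c: c["income"] < 10_000, "deny"),
|         Rule("minor_approval", lambda c: c["age"] < 18, "approve"),
|     ]
|
|     review_queue: List[Rule] = []  # archived for human review
|
|     def decide(case: dict) -> str:
|         # Statute rules always win; a learned rule that contradicts
|         # them on this case is archived and escalated, not applied.
|         hits = [r for r in statute_rules if r.applies(case)]
|         if hits:
|             for r in learned_rules:
|                 if r.applies(case) and r.decision != hits[0].decision:
|                     review_queue.append(r)
|             return hits[0].decision
|         for r in learned_rules:
|             if r.applies(case):
|                 return r.decision
|         return "escalate_to_human"
|
|     print(decide({"age": 16, "income": 50_000}))  # "deny"
|     print(review_queue[0].name)                   # "minor_approval"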
___________________________________________________________________
(page generated 2021-05-27 23:03 UTC)