[HN Gopher] Science fiction hasn't prepared us to imagine machin...
       ___________________________________________________________________
        
       Science fiction hasn't prepared us to imagine machine learning
        
       Author : polm23
       Score  : 180 points
       Date   : 2021-02-07 12:21 UTC (10 hours ago)
        
 (HTM) web link (tedunderwood.com)
 (TXT) w3m dump (tedunderwood.com)
        
       | emtel wrote:
       | I find a lot of the comments on this topic (and anytime AI comes
       | up) very frustrating. Many of them strike me as little more than
       | skeptical posturing: "it's just statistics!" or "Current AI
       | systems have no understanding and so can't solve problem X"
       | 
       | I don't think this sort of thinking has any predictive power. If
       | you are skeptical of deep learning because it is "just
       | statistics", did that skepticism allow you to predict that self
       | driving cars would not be solved in 2020, but protein folding
       | would?
       | 
       | I think the best test for skeptics is the one proposed by Eliezer
        | Yudkowsky: what is the least impressive AI achievement you are
        | sure won't happen in the next two years? If you really want to
       | impress me, give me a list of N achievements that you think have
       | a 50% chance of happening in 2 years. Or 5 years. Just make some
       | falsifiable prediction.
       | 
       | If you aren't willing to do that, or something like it, why not?
        
         | godelzilla wrote:
         | If you really want to impress me, clearly define "artificial
         | intelligence" (as opposed to statistics, data analysis, etc)
         | and tell me the most impressive example.
        
         | hooande wrote:
         | The problem with this test is that it's clear that people can
         | commit enough resources to solve almost any one particular
         | problem with "AI". If people are willing to pay tens or
         | hundreds of millions of dollars they can create an AI system
          | that can play any game (chess, Jeopardy, Go), fold proteins,
         | drive a car, etc.
         | 
         | The obvious problem is that no one AI system can do all of
         | those tasks. Every human mind is able to do all of those things
         | and literally infinitely many others. All for a cost of waaaaay
         | below what DeepMind paid to learn to play Atari.
         | 
         | If a problem involves the organization of information, "AI" can
         | solve it. I think we've established that. I'm waiting to see
          | something that humans can't already do.
        
       | andi999 wrote:
       | Maybe the author should explain more what he perceives as the
        | current situation. I mean, AI is in almost any science fiction
        | movie (2001, Terminator, Star Trek, Blade Runner).
        
         | panic wrote:
         | _> News stories about ML invariably invite readers to imagine
         | autonomous agents analogous to robots: either helpful servants
         | or inscrutable antagonists like the Terminator and HAL. Boring
         | paternal condescension or boring dread are the only reactions
         | that seem possible within this script...._
         | 
         | The point is that AIs are typically imagined by our culture as
         | independent agents that act kind of like humans, not as tools
         | or forces that participate in human culture in a more inhuman
         | way.
        
           | jononor wrote:
           | There are examples of AI in sci-fi that are not bound to a
           | single humanoid-robot type base. Just to pick some popular
           | movies: The Matrix, Her, Transcendence
        
             | dTal wrote:
             | Parent never mentioned embodiment. All of your examples
             | still depict AIs as "independent agents that act kind of
             | like humans".
             | 
             | A genuine example of "inhuman force" AI might be Bruce
             | Sterling's short story Maneki Neko. But even then, the AI
             | has _motive_. Such stories are difficult to come by.
        
               | guerrilla wrote:
               | It seems like there should be tons, like what about the
               | computer on any Star Trek series?
        
         | bryanrasmussen wrote:
         | The AI in pretty much every science fiction movie is human in
         | its ability to conceive of itself as a separate entity, and
         | human in its ability to have wants and desires for itself that
         | are wide ranging.
         | 
          | The AIs in all the movies you mention have personalities and
         | drives that bring them into direct conflict with humanity.
         | 
          | Machine learning can perhaps undermine humanity or make
          | things worse for humanity by its functioning, but it does not
         | directly go to war against humanity.
        
           | wongarsu wrote:
            | If it isn't an AI in the way you describe, science fiction
            | just calls it a computer or a computer program, because
            | that's what it is. Calling machine learning "AI" is just a
            | continuation of the trend of calling various things AI in an
            | attempt to get research funding.
           | 
           | The Star Trek board computer (as imagined in 1966) is a
           | human-language interface for a program that can look up
           | information and do computations based on high level
           | descriptions. Some of the commands given to it arguably
           | require "intelligence", and any attempt to replicate it today
            | (of which there are many, all of which fall flat) would
           | involve machine learning. Yet the computer wasn't
           | conceptualized as a living entity with personality, as
           | opposed to for example the Culture Series where ships are AIs
           | with superior intellects.
        
           | wruza wrote:
           | AI is not always humanlike, and when it is, it's because it
           | is a good fit for servitude. I remember a story of a planet
           | with small tetrahedron bots that formed protective clouds to
           | destroy electrically active organisms. T-series were
           | humanlike for a reason, other bots were not. Starships often
           | have AI that is not even a person. Social fiction is just the
           | most popular, because it is the same old drama, but with
           | robots.
        
           | andi999 wrote:
            | So is what you write what the article means? Also, the
            | central computer AI in Star Trek does not have a personality
            | but is just a powerful tool.
        
             | bryanrasmussen wrote:
             | I believe you will find that there are artificial
             | intelligences in different Star Trek productions.
        
               | andi999 wrote:
               | Yes. My point was, that I couldn't figure out what the
               | author actually means and where he is coming from.
        
       | username90 wrote:
        | I think the lack of books is mostly just that a society where
        | machines are the main source of cultural output isn't very
        | interesting. At that point humans are more or less useless as
        | workers, so they would either get culled or live in a utopia
        | similar to our current pets; the only question is whether we
        | would go the way of the cat or the way of the horse.
        
         | inter_netuser wrote:
          | Cats were never really domesticated. They sort of just came
          | by, and never left.
         | 
         | So it seems the horse way is much more likely. Unfortunately.
        
           | username90 wrote:
            | The machine-learned models might feel there is a purpose in
            | keeping humans around, just like humans feel there is a
            | purpose in keeping cats around.
        
             | gonzo41 wrote:
             | I'd be happy with AI taking my responsibilities and myself
              | taking up my cat's day job. Lots of sun naps and very little
             | stress about food or resources.
        
               | username90 wrote:
                | Yeah, humans being cats is the best case scenario. Just
                | that it wouldn't be interesting as a book. Maybe one
                | short book, but mostly humans want to read about human-
                | like entities doing stuff.
        
         | furstenheim wrote:
          | There are such stories. Roald Dahl has a very nice short story
         | about it https://www.roalddahlfans.com/dahls-work/short-
         | stories/the-g....
        
         | n4r9 wrote:
         | At the risk of bringing up The Culture series in every HN
         | thread about sci fi, Iain M Banks does a thoughtful job of
         | exploring this. In particular, in the book Look to Windward
         | there's a discussion between the orbital's AI Mind and a
         | renowned composer about how it could mimic their work perfectly
         | if it chose to.
        
         | zem wrote:
         | jack williamson's "with folded hands" is a quietly horrifying
         | look at that, all the more so because it takes all the holy
         | grails of sf - unlimited free energy, faster than light travel
         | and communication, and strong ai that obeys asimov's three
         | laws, and crafts a truly frightening dystopia out of them.
        
       | ginko wrote:
        | ML content generators remind me of the kaleidoscopes from 1984,
        | which are used to generate low-quality entertainment for
       | the masses.
       | 
       | https://en.wikipedia.org/wiki/Prolefeed
        
       | xvilka wrote:
        | Most sci-fi stories prepare us to imagine AGI. ML is to AGI as
        | a wooden stick is to a spaceship. It's just not even worth
       | mentioning in the big picture of the future.
        
       | PicassoCTs wrote:
        | Well, it's a black box. Programmers have trouble telling stories
        | about it, so how do you expect an author to tell a tale about a
        | black box? Finding the threshold value that switches the
        | verminator to terminator sounds like a nice detective novel
        | stub, though...
        
       | zxcvbn4038 wrote:
       | ML is more or less a parlor trick, my car still can't drive
       | itself, laptop still can't have an original idea, iPhone can't
       | find a cat under a blanket, Google translate still sucks at
       | Mandarin <> English, still no new Bogart films. I'm more worried
        | about aliens than AI at this point.
        
         | croissants wrote:
         | Dang, we moved the AI goalposts from "good at chess" to "new
         | Bogart films"? Tough!
        
           | zxcvbn4038 wrote:
           | Don't forget stairs, the ultimate obstacles for AI and
           | daleks!
        
         | Bakary wrote:
         | >Google translate still sucks at Mandarin <> English
         | 
         | Google is one of the worst for that language pair, FYI. This
         | type of translation in itself has improved at a rapid pace
         | considering what it used to look like. As a result, even taking
         | into account the induction fallacy, I am convinced that it will
         | be extremely proficient in the near future with the exception
         | of poetry.
        
       | smlckz wrote:
       | Currently AI can Learn, can Create, but can not Think (if I am
       | not wrong). Let us think what They might Think when they can.
        
         | rebuilder wrote:
         | How will we know?
        
         | Radim wrote:
         | I see what you did there!
         | 
         | By carefully capitalizing Them (the AI) and Their Actions,
         | you're hedging against Roko's basilisk, right? A sort of
         | Pascal's wager?
        
           | shanusmagnus wrote:
           | Could you explain? I know about RB, but am missing the thing
           | about capitalization.
        
             | smlckz wrote:
             | Maybe I tried to compare AI with God or just thought that
             | it would be funny.
        
           | smlckz wrote:
           | I didn't know about Roko's Basilisk. Now as I think about
           | that, maybe... Why would I like to be in hell or be tortured
            | by AI (from TDT or otherwise). But the point was: what would
            | Their Consciousness be like? Also, how would Consciousness be
            | bootstrapped/reified in Machines?
        
       | spodek wrote:
       | Nothing prepares us for exponentials in life, even those of us
       | who work with exponentials.
       | 
       | When the rate of change of something depends on its quantity, you
       | get an exponential. The rate of change of machine learning
       | depends on how much machines have learned.
       | 
        | We can use analogies and graphs all we want; exponentials always
       | confound us.
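The dynamic described above (the rate of change of a quantity depends on the quantity itself, i.e. dx/dt = k*x) can be sketched in a few lines of Python; the values of k, the starting quantity, and the step count below are arbitrary illustration choices, not anything from the thread:

```python
# Sketch of the point above: when the rate of change of a quantity is
# proportional to the quantity itself (dx/dt = k*x), growth is exponential.

def grow(x0, k, steps, dt=1.0):
    """Euler steps of dx/dt = k*x; returns the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + k * xs[-1] * dt)  # the change depends on the current amount
    return xs

traj = grow(x0=1.0, k=0.5, steps=10)
# Each step multiplies by (1 + k*dt) = 1.5, so after n steps x = 1.5**n.
# Linear intuition ("it grew by half a unit last step") badly
# underestimates every later step.
```

Run with different k values and the same confounding effect appears: small changes to the growth rate produce wildly different outcomes a few steps out.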
        
       | superbcarrot wrote:
       | Science fiction hasn't prepared us to imagine machine learning
       | because machine learning in its current state is too boring to be
       | central in science fiction stories.
        
         | jondiggsit wrote:
         | Isn't that the point of science fiction?
        
       | cstross wrote:
       | One problem in written science fiction is the requirement for it
       | to be at least vaguely commercially viable, which necessitates
       | dramatic plotting and/or readability. ML is, to most non-
       | technical people, _intensely boring_. How do you dramatize it?
       | 
       | I've published two novels that go there: 2011's "Rule 34", and --
       | coming out this September -- "Invisible Sun", and in both books
       | ML is very much a subaltern aspect of the narrative. That is,
       | neither book would have a plot _without_ the presence of ML, but
        | it's not really possible in my opinion to write a novel _about_
        | ML in the same way that the earlier brain-in-a-box model of AI
        | could form the key conceit of books like "2001: A Space
       | Odyssey", "The Adolescence of P-1", or "Colossus" (examples from
       | the 1960s and 1970s when it was still a fresh, new trope rather
       | than being pounded into the dirt).
       | 
       | So it probably shouldn't surprise anyone much that SF hasn't said
       | much about the field so far, any more than it had much to say
       | about personal computers before 1980 or about the internet before
       | 1990 -- what examples there are were notable exceptions,
       | recognized after the event.
        
         | [deleted]
        
         | m463 wrote:
         | That touches on a point about science fiction I've noticed.
         | 
         |  _reality_ can be boring. Reality dictates that we travel at a
         | fraction of the speed of light. We cannot fix things we did. We
         | have a limited lifespan.
         | 
         | Honestly the made-up stuff makes fiction a lot more
          | entertaining. Time travel is really fun (though Primer required
          | a dive into Wikipedia to unravel). Warp drives and wormholes
         | open the universe to exploration. Living forever might be just
         | as fun as time travel - you can fix stuff forward or just see
         | it all. I don't mind suspending my disbelief.
         | 
         | EDIT: On the other hand, there might be one real subject that
         | is interesting from
         | 
         | https://jsomers.net/i-should-have-loved-biology/
         | 
         | it had a quote:
         | 
         |  _Imagine a flashy spaceship lands in your backyard. The door
         | opens and you are invited to investigate everything to see what
         | you can learn. The technology is clearly millions of years
         | beyond what we can make.
         | 
         | This is biology.
         | 
         | - Bert Hubert, "Our Amazing Immune System"_
        
           | Gibbon1 wrote:
            | I have a long and continuous string of 'jokes' about a
           | future where common technological items have been replaced by
           | some sort of genetically engineered plant, animal or fungus.
        
         | jayd16 wrote:
          | I feel like The Running Man is a good example of machine
         | learning in sci-fi.
         | 
         | It dug into deep fakes and how they could be used to create a
         | massive propaganda machine. I wouldn't call that boring.
        
         | Balgair wrote:
         | Hey Mr. Stross, I'm a big fan of your work! Thanks for hanging
         | out here on HN with us. I thought _Accelerando_ was superb.
         | Keep up the hard work of writing! I know it can be tough, but
         | your efforts have really paid off, at least to me.
         | 
         | One question, if you're up for doing any questions: what do you
         | think about the use of drones and loitering munitions in the
         | recent Azerbaijani-Armenian War of 2020?
         | 
         | Thanks again for all the hard work!
         | 
         | https://en.wikipedia.org/wiki/2020_Nagorno-Karabakh_conflict
        
         | justin66 wrote:
         | It's a bit of a tangent, but I got a kick out of hearing Paul
         | Krugman make a glowing endorsement of your work last week:
         | 
         | https://www.nytimes.com/2021/02/04/podcasts/ezra-klein-podca...
        
         | bfdm wrote:
         | I tend to agree that ML will not itself be the subject, but
         | rather a device or tool misused by Someone Bad. As you say,
         | it's a necessary and integral piece of the story, but not the
         | star.
         | 
         | For example, it brings to mind some combination of Minority
         | Report and Eagle Eye, say. Where a surveillance state combined
         | with ML analysis leads to something akin to pre-crime arrests
         | and the fight back against that.
        
           | akovaski wrote:
           | > Where a surveillance state combined with ML analysis leads
           | to something akin to pre-crime arrests
           | 
           | While I haven't seen Minority Report or Eagle Eye, this aptly
           | describes the anime TV series Psycho-Pass (2012). If I
           | remember correctly, the show is mostly focused on the effects
           | of having a crime-coefficient system.
        
           | Izkata wrote:
           | > I tend to agree that ML will not itself be the subject, but
           | rather a device or tool misused by Someone Bad.
           | 
           | I got a counterexample:
           | 
           |  _Person of Interest_ [0] was a crime series from 2011-2016
           | premised around machine learning. The opening narration from
           | each episode of the first season[1]: _You are being watched.
           | The government has a secret system, a machine that spies on
            | you every hour of every day. I know because I built it. I
           | designed the machine to detect acts of terror but it sees
           | everything. Violent crimes involving ordinary people. People
           | like you. Crimes the government considered "irrelevant". They
           | wouldn't act, so I decided I would. But I needed a partner,
           | someone with the skills to intervene. Hunted by the
           | authorities, we work in secret. You will never find us. But
           | victim or perpetrator, if your number's up, we'll find you._
           | 
            | As revealed in flashbacks over the first season, the
            | government doesn't have access to the system. It runs
            | independently (even from its creator) and just provides
            | information.
           | 
           | Being TV it does of course expand past the original premise,
           | with a reveal that it's not just a machine learning system
           | but the world's first true AI, and later another AI with
           | conflicting goals is brought online. But for the first season
           | at least, the machine sticks to the role described in the
           | narration, and appears to be just a machine learning system.
           | 
           | [0]
           | https://en.wikipedia.org/wiki/Person_of_Interest_(TV_series)
           | 
           | [1] https://www.youtube.com/watch?v=yBfVndEtDyY
        
           | dllthomas wrote:
           | If you haven't _read_ Minority Report, I recommend it.
        
         | DiggyJohnson wrote:
          | Do you frequent any forums or sites you'd recommend for the
          | serious amateur interested in the production of sci-fi?
         | 
         | I especially appreciate this discussion. It's fun to think
         | about the gaps in our imagination. Especially when it comes to
         | science fiction, those gaps are often wider than we think.
         | 
         | Good luck with your work! I'll be on the lookout for Invisible
         | Sun, unless you recommend a different title.
         | 
          | Besides ML, do you have any other futurist predictions that are
         | not being realized in science fiction?
        
         | superkuh wrote:
         | Robert Charles Wilson's "Blind Lake" (2004) is a scifi book
         | about a (then) very near future when the data from telescopes
         | would be almost as much hallucinated from big data models as
          | composed of actual data from measurements. Its plot goes a
         | little more quantum handwavy than direct machine learning but
         | the gist is the same.
        
         | indymike wrote:
         | "ML is, to most non-technical people, intensely boring."
         | 
         | Maybe, or maybe it's just an old trope as you point out. So
         | much of science fiction assumes some level of intelligent
         | computers and has since the beginning. The phrase "machine
         | learning" is really a rebrand of the "thinking machines" of the
          | 60s and the "artificial intelligence" of the 70s and 80s. Even in the
         | old Star Trek TV series, you had people talking to the
         | computer, not unlike we talk to Alexa, Siri and Hey Google
         | today.
         | 
         | That said, now that people are seeing ML affect their lives
         | directly, there is a lot of exploration that may be possible,
          | since perhaps ML is just not science fiction anymore.
        
           | notahacker wrote:
           | I think the point is we gained the scifi UX, and lost most of
           | the interesting plot devices. We got the talking computer
           | _without_ enough self awareness or reliability or uniqueness
           | compared with other forms of data interrogation to be a
           | particularly interesting plot point.
           | 
           | Alexa and Siri and Google Home aren't going to take over the
           | world or try to become human, and aren't infallible or even
            | particularly smart. They're speech input interfaces over a
            | search engine with some jokes hardcoded in. From a technical
            | standpoint, very impressive; from a fiction standpoint, a
            | dead end.
        
         | gradys wrote:
         | How do I make sure I hear about Invisible Sun when it comes
         | out?
        
           | ideonode wrote:
           | It's Charles Stross, one of the leading SF writers of the
           | day.
        
             | cstross wrote:
             | ... It's also the ninth (and final) book in a long-running
             | series, so not the greatest starting point!
        
               | klyrs wrote:
               | Hah, Robert Jordan taught me to watch for the phrase
               | "[n]th (and final)". No, I won't start on that book, but
               | I'll definitely pick up a copy of the first :)
        
               | cstross wrote:
               | Hint: I remastered the original six book series in three
               | revised omnibus editions circa 2011. (There's then a
               | subsequent trilogy, of which "Invisible Sun" is the
               | climax.) Anyway, start with "The Bloodline Feud" rather
               | than the slim original marketed-as-fantasy six-pack. I
               | learned a lot in the decade between writing the first
               | books and re-doing them, and the omnibus versions read
               | more cleanly.
        
         | EGreg wrote:
         | Literally every movie or book that depicts AI in 100-200 years
         | would be super boring.
         | 
         | Imagine star trek where instead of Data being one android, you
         | just had a ton of machines that could do everything better than
         | humans. No point for the humans to explore space or do
         | anything. The end.
         | 
         | Imagine in Minority Report where Tom Cruise is instantly caught
         | because his whereabouts can easily be tracked and triangulated,
         | by his gait, infrared heart signature, smell, and a number of
         | other things. Movie takes 2 minutes. The end.
         | 
         | In fact, this is the world we are building for ourselves. We're
            | going to be like animals in a zoo, unable to affect anything.
         | Just like a cat has no idea why you're doing 99% of the things
         | you do, but has to go along with it, similarly you will have no
         | idea why the robot networks do 99% of their activities, you
         | just kinda live your life with as much understanding as a cat
         | has in your house, as you go about your daily business.
         | 
            | The closest I can think of is Isaac Asimov's "I, Robot" maybe.
         | And even there, the robots weren't in charge really.
        
           | Izkata wrote:
           | > Imagine in Minority Report where Tom Cruise is instantly
           | caught because his whereabouts can easily be tracked and
           | triangulated, by his gait, infrared heart signature, smell,
           | and a number of other things. Movie takes 2 minutes. The end.
           | 
           | Agh, I can't find it now, but there's a short film half based
           | on this: A time traveler from the future is identified
           | instantly because she's in two places at once. The short has
           | both of them in adjacent interrogation rooms, with two
           | onlookers - a rookie and an experienced investigator -
           | talking about how to identify time travelers. It ends with
           | the experienced one commenting that they've been showing up
           | more and more often.
           | 
           |  _Minority Report_ might not be possible with such
           | technology, but it does open different possibilities.
        
             | Malic wrote:
             | Do share the name of the short, if it comes to mind later.
        
               | jodrellblank wrote:
               | I have a feeling it might be "Plurality" by DUST on
               | YouTube, a 15min short film -
               | https://www.youtube.com/watch?v=pocEN5HprsM
               | 
               | DUST make a lot of cool sci-fi short films, worth
               | checking some of their others too
               | https://www.youtube.com/c/watchdust/videos e.g. Bleem,
                | the number between 3 and 4, slightly reminiscent of the
               | film Pi and the crazy mathematician trope
               | https://www.youtube.com/watch?v=qXnFr1d7B9w or Orbit Ever
               | After, a love story in a dystopian Brazil style future of
               | orbital habitats
               | https://www.youtube.com/watch?v=DpFXMIxlgPo
               | 
                | And just because, from another channel,
                | https://www.youtube.com/watch?v=vBkBS4O3yvY a short about
               | quantum suicide - One-Minute Time Machine | Sploid Short
               | Film Festival.
        
               | Izkata wrote:
               | Yep, "Plurality" is it - the parts of the interview I'm
               | remembering are the explanation at 9 minutes and the
               | plane crash at 9:40. Scrolled through their videos
               | earlier but the name didn't stick out.
        
         | drdeadringer wrote:
         | > ML is, to most non-technical people, intensely boring. How do
         | you dramatize it?
         | 
         | To me it's akin to several scenes in the film "War Games" where
         | you basically have to make a rogue AI in a computer sexy on
         | film.
         | 
         | "Show a computer thinking. Make it scary. Inhuman. Enhance that
         | it's impersonal."
         | 
         | So we get a camera person slowly walking around an obelisk-like
         | table with ominous music. "This is where the AI lives, thinking
         | on how to rain nuclear fire upon the vile Soviets, no humans
         | need interfere, enhance and intensify, will Tic-Tac-Toe save us
         | from our folly". And around and around the evil obelisk table
         | we go watching bits flip up and down back and forth with
         | nuclear missiles in question.
         | 
         | You can find similar elsewhere. Dramatic flashing lights on a
         | server rack, a human taken over by external forces blindly
         | jabbing at a keyboard or keypad with intense purpose. All of a
         | sudden from these wonders computational magic happens.
        
         | Geminidog wrote:
         | It's even boring to technical people. It reduces intelligence
         | to a multidimensional optimization problem. Now intelligence
         | just involves all kinds of mechanical ways to fill out the
            | weights for a neural network. I used to be more interested. Upon
         | learning more about it, I am less motivated.
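The "mechanically filling out the weights" view described above can be made concrete in a few lines: a loop that nudges a weight downhill on a loss. This is a toy sketch only; the data, learning rate, and one-weight model are invented for illustration.

```python
# Toy version of "filling out the weights": fit y = w * x to data by
# plain gradient descent on mean squared error.

data = [(x, 3.0 * x) for x in range(1, 6)]  # targets generated with w_true = 3.0

w = 0.0    # the one "weight" to be filled out
lr = 0.01  # learning rate (arbitrary small value)
for _ in range(200):
    # gradient of mean((w*x - y)**2) w.r.t. w is mean(2*(w*x - y)*x)
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # step downhill on the loss

# w converges toward 3.0; nothing here "thinks", it just optimizes.
```

Modern deep learning is this loop scaled up to billions of weights, which is exactly the reduction the comment finds deflating.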
        
           | mmcnl wrote:
            | Technical is quite a broad term. There are quite a few
            | challenges in designing and engineering large ML data
           | pipelines, both from a technical and business perspective.
           | But I agree it's a specific problem that is arguably boring
            | to a lot of people. Some people find more fun in the
           | modelling part, others more in the engineering. Personally,
           | I'm more into the engineering part than creating the actual
           | model.
        
           | Schiphol wrote:
           | > It reduces intelligence to a multidimensional optimization
           | problem.
           | 
           | Did you mean to say "... which it is not", or "which it is,
           | thus the boringness"?
        
             | Geminidog wrote:
             | Let me clarify. I mean "intelligence from the perspective
             | of modern ML" is just an optimization problem.
        
           | saeranv wrote:
           | I'm sure this is due to my beginner status in ML/DL, but I'm
            | really disappointed in how far deep learning seems removed
           | from the things that I enjoyed the most in statistical
           | learning.
           | 
           | I enjoy the creative challenge of applying domain knowledge
            | when building (for example) linear or Bayesian regressions.
           | In contrast, DL seems like a whole bunch of hyperparameter
           | tuning and curve plotting. Curious to see if this assessment
           | seems correct from those more experienced...
        
           | cercatrova wrote:
            | I've had exactly the same experience. Data science is
           | mostly cleaning data in the first place, and the 10% that
           | isn't, it's just fiddling knobs (hyperparameter optimization)
           | to get the model to work.
           | 
           | But man, I can't argue the incredible results it creates.
           | Perhaps that's why people do it, for the ends not the means.
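The knob-fiddling in question can be sketched in a few lines: fit a model at each hyperparameter setting and keep whichever scores best on held-out data. Everything below (the toy data, the single ridge penalty `lam`) is invented for illustration:

```python
# Toy 1-D regression data, roughly y = 2x, split into train/validation.
train = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]
valid = [(4.0, 8.0), (5.0, 9.9)]

def fit_ridge(data, lam):
    # Closed-form 1-D ridge regression: w = sum(x*y) / (sum(x^2) + lam).
    return sum(x * y for x, y in data) / (sum(x * x for x, _ in data) + lam)

def mse(data, w):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# "Hyperparameter optimization": grid-search lam, keep the best validator.
best_lam, best_err = None, float("inf")
for lam in [0.0, 0.1, 1.0, 10.0]:
    err = mse(valid, fit_ridge(train, lam))
    if err < best_err:
        best_lam, best_err = lam, err

print(best_lam)
```

Real pipelines swap the grid loop for fancier search, but the shape of the work is the same.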
        
             | karmakaze wrote:
             | That's the difference between research and application.
             | Someone had to first come up with better ways of
             | formulating/training models.
        
           | karmakaze wrote:
           | That's one way to look at it.
           | 
           | What's described there is predicting patterns, which is
           | part of intelligence, but there's much more to discover and
           | invent. Even within the 'optimization' task there are huge
           | differences in the leaps from NNs to DNNs and from DNNs to
           | AlphaGo/Zero. The details are what make it interesting.
           | 
           | If we were to understand exactly how the brain operates and
           | learns, we'd see that it's solved/solving just an
           | optimization problem, but that doesn't make it uninteresting.
        
           | gnramires wrote:
           | Isn't that like saying, "I used to marvel at nature, that it
           | has elements such as fire, water, ice, and amazing living
           | things. Then I discovered it is all made of atoms
           | interacting... I am now less motivated." ?
           | 
           | I mean, the fire, the water, the ice, the wonder of life
           | and intelligence are still there. You just _gained_ a new
           | foundational view. Now you can understand and manipulate
           | better what you already knew; maybe you've now learned
           | about plasma, or even extremely advanced and mysterious
           | phenomena like Bose-Einstein condensates or superfluidity.
           | The old wonders are still there; you've gained new ones.
           | 
           | I'm not going to claim complete cognitive equivalence (or
           | even preference) between the two states of mind, but it is a
           | bit like childhood: firmly believing in Santa Claus, or
           | Wizards or whatever can be exciting, perhaps more exciting
           | than knowing they are myths; but growing up and understanding
           | they are mythical brings new opportunities, capabilities, and
           | even new mysteries you could not reach before (buying and
           | building whatever you want, vast amounts of knowledge,
           | understanding more about technology and society, etc.). It's
           | the adults that keep us alive and well, that make decisions
           | for us and for society at large. So perhaps (although I'm not
           | entirely convinced by the cumulative argument) truth is a
           | sacrifice, but it is one well worth bearing, at least for me.
           | I am deeply interested in how intelligence works, in how "the
           | sausage is made" (at least for certain highly useful sausages
           | that compose the fundamentals of the world).
           | 
           | Even more, understanding is above all a responsibility, if
           | not for all of us, at least for some of us, or hopefully in
           | one way or another for most of us.
           | 
           | I can't recommend Feynman on Beauty enough (this argument
           | is largely inspired by it):
           | 
           | https://fs.blog/2011/10/richard-feynman-on-beauty/
           | 
           | In the same vein, intelligence to me used to be a black box
           | where you got input from the world, some kind of wondrous
           | magic happened, and then you got talking kids, scientists,
           | artists, and so on. I still view it as wondrous, but now I
           | understand that the fundamental mechanism is apparently a
           | network-like structure with functional relationships that
           | change and adapt to previously seen information in order to
           | explain it, and that there are a number of interesting
           | phenomena and internal structures (going well beyond the
           | simple idea of 'parameter tuning') that can be formalized
           | -- essentially the architecture of the brain (or better, 'a
           | brain').
           | 
           | To give an example, there have been formalizations of
           | Curiosity, i.e. Artificial Curiosity, and I consider it
           | essential for an agent interacting independently in the world
           | or in a learning environment (part of the larger problem of
           | motivation). How amazing is it to formalize and understand
           | something so profound and fundamental to our being as
           | Curiosity? I felt the same way about Information theory years
           | ago. How amazing is it that we've built robots (in virtual
           | environments), and _it works_ -- they're curious and learn
           | the environment without external stimulus?
           | 
           | Above considerations aside, I find that _amazing_,
           | _beautiful_, _awesome_.
        
             | gnramires wrote:
             | There's another related concept I came up with thinking
             | about this discussion (which I've had with friends as
             | well): 'freedom of utility'.
             | 
             | The basic idea is, forget about what you think is beautiful
             | or motivational. Suppose you could _choose_ to be motivated
             | by something. Would you choose to be motivated by
             | superficial mystery, or by deep knowledge of how things
             | are? Should you choose to find beautiful just the surface
             | of the flower, or also the wonders of how it works, its
             | structure as a system, the connections to evolution and
             | theory of color and so on -- all of which could turn out to
             | be useful one way or another. If you could choose, would
             | you choose to be exclusively motivated by the immediate
             | external appearance or by the depth and myriad of
             | relationships as well?
             | 
             | Unfortunately (unlike AI systems we could design), I don't
             | think we have complete control of our motivation -- our
             | evolutionary biases are strong. But I'm also fairly certain
             | much of our aesthetic sense can be shaped by culture and
             | rational ideals. If I hadn't heard Feynman, watched so many
             | wonderful documentaries (and e.g. Mythbusters) and many
             | popularizers of science, perhaps I wouldn't see this beauty
             | so much as I do -- and I'm grateful for it, because I want
             | to see this beauty, I _want_ to be motivated to learn about
             | the world, and to improve it in a way.
        
             | jacquesm wrote:
             | Sagan: "The very act of understanding is a celebration of
             | joining, merging, even if on a very modest scale, with the
             | magnificence of the Cosmos."
        
             | Geminidog wrote:
             | > Isn't that like saying, "I used to marvel at nature, that
             | it has elements such as fire, water, ice, and amazing
             | living things. Then I discovered it is all made of atoms
             | interacting... I am now less motivated." ?
             | 
             | Yes it is exactly what I'm saying. I'm less interested
             | because of this. I could turn it around and also say that
             | with your extremely positive attitude you can look at a
             | piece of dog shit and make it look "amazing." Think about
             | it. That dog shit is made out of a scaffold of living
             | bacteria like a mini-civilization or ecosystem! Each unit
             | of bacteria in this ecosystem is in itself a complex
             | machine constructed out of molecules! Isn't the universe
             | such an interesting place!!!!!
             | 
             | The history of that piece of shit stretches back through
             | millions of years of evolutionary history. That history is
             | etched into our DNA, your DNA and every living thing on
             | earth!!! All of humanity shares common ancestors with the
             | bacteria in that piece of shit and everything is
             | interconnected through the tree of life!!! We can go deeper
             | because every atom in that DNA molecule in itself has a
             | history where the scale is off the charts. Each atom was
             | once part of a star and was once part of the big bang! We,
             | You and I are made out of Star Material! When I think about
             | all of this I'm just in awe!!!! wowowow. Not.
             | 
             | I'm honestly just not interested in a piece of shit. It's
             | boring and I can't explain why, but hopefully the example
             | above will help you understand where I'm coming from.
        
       | Svip wrote:
       | It's also plausible to interpret the futures depicted in
       | science fiction as ones where humanity eventually reversed
       | course on AI technology, fearing it would become too dangerous.
       | I feel like there are hints of this in Star Trek as well. The
       | somewhat related topic of gene manipulation gets a lot of
       | attention in Star Trek, and it is clear by DS9 that it is
       | strictly forbidden.
       | 
       | Movies like The Matrix also highlight the dangers of letting AI
       | run rampant.
        
         | Shorel wrote:
         | Hints? Only hints?
         | 
         | I would recommend you watch Star Trek: Picard, then =)
        
         | ginko wrote:
         | > The somewhat related topic of gene manipulation gets a lot
         | of attention in Star Trek, and it is clear by DS9 that it is
         | strictly forbidden.
         | 
         | I believe it was already mentioned in the TOS episode "Space
         | Seed" (the one with Khan) that genetic engineering on humans
         | was banned after the Eugenics Wars.
        
           | Svip wrote:
           | Ah yes, that's true. I was just thinking of that TNG episode
           | with Pulaski and the "children" that made everyone age
           | superfast, and no one was like "wait, isn't this illegal?".
           | So I was unsure when they cemented it.
        
       | [deleted]
        
       | renewiltord wrote:
       | At the beginning of Neal Stephenson's _Dodge_, AIs exist as
       | just fake news generators / filterers. They aren't sentient or
       | sapient. Can't say more without spoilers.
        
       | dgb23 wrote:
       | Some of the science fiction classics have a lot to say about this
       | and go well beyond what is currently possible and realistic in
       | the near future.
       | 
       | The author either vastly overestimates the capabilities of
       | current AI or didn't read Asimov and the like.
        
         | jimbob45 wrote:
         | Came here to say this. Asimov's Multivac was the epitome of ML.
         | 
         | I specifically remember a short story by him ("Franchise")
         | where one member of the population would be selected to be
         | interviewed by Multivac, which would then extrapolate his
         | answers to the whole population and accurately pick who the
         | population wanted to be president, removing the need for an
         | expensive election. If that isn't ML, I don't know what is.
        
         | geomark wrote:
         | Yeah, I thought Asimov was all over this long ago. Also see
         | "Machines Like Me" by Ian McEwan for a recent go at sentient
         | androids.
        
         | arcturus17 wrote:
         | Yeah, Foundation is essentially about a group of
         | mathematicians using statistics to predict history and build
         | a civilization of super-scientists.
        
         | QuesnayJr wrote:
         | I think that's exactly the author's point. All science-fiction
         | radically overshoots where we are now. Nobody imagined the
         | weird combination of capabilities and impossibilities that we
         | find ourselves in.
        
           | esrauch wrote:
           | I mean, it's possible that Asimov imagined it but decided
           | that "automatic radio stations" wasn't a compelling enough
           | basis for a story.
        
           | arcturus17 wrote:
           | The classics probably didn't shoot for what's happening _now_
           | in the first place...
           | 
           | Sure, many mention the 2000s as a milestone, probably for
           | aesthetic reasons, but the point of most sci fi isn't to make
           | a prediction about a concrete timeframe.
           | 
           | For our current problems and tech, I reckon some modern sci
           | fi works (ex: Black Mirror, The Three Body Problem) do a good
           | job of analyzing the current context and laying out some
           | present and future implications.
        
           | wpietri wrote:
           | Yup. It's not clear that even we know what point we're at,
           | so I think authors can be forgiven for a) not imagining
           | this particular scenario, and b) not thinking "people get
           | overly excited about one more small step in computer
           | skills" was much of a realm for stories.
           | 
           | Science fiction isn't about technology. It just uses
           | technology as a way of telling stories about characters that
           | provide entertainment and insight to the author's
           | contemporaries. Given that the current generation of ML is in
           | practice mainly allowing modest increases in automation, I
           | don't think it's generating many interesting stories in the
           | real world, and it's not clear it ever will.
           | 
           | If I were to look at current stories that science fiction
           | perhaps missed, top of my list would be how the rise of
           | computers and the internet, meant to create a utopia of
           | understanding, instead a) enabled the creation of low-cost,
           | low-standards "reality" TV, b) allowed previously
           | distributed groups of white supremacists and other violent
           | reactionaries to link up and propagandize in unprecedented
           | ways, and c) let a former game show host from group A whip
           | up people from group B into sacking the US Capitol, ending
           | the US's 200+ year history of peaceful transfers of power.
           | That's a story!
           | 
           | But that would be asking too much of science fiction. Its job
           | isn't to predict the future. Its job, if it has one beyond
           | entertainment, is to get us to think about the _present_.
           | Classics like Frankenstein and Brave New World and Fahrenheit
           | 451 have been doing that for generations. That's not because
           | they correctly predicted particular technological and social
           | futures.
        
             | wombatmobile wrote:
             | It's not sci-fi but Neil Postman anticipates a world like
             | 2020 in Technopoly.
        
       | rthomas6 wrote:
       | I think Blindsight by Peter Watts has a lot of relevance. It
       | explores (among many other things) the concept of a Chinese Room
       | [0] as applied not only to AI, but to biological intelligence.
       | The exploration has a lot of overlap with machine learning IMO.
       | 
       | [0]: https://en.wikipedia.org/wiki/Chinese_room
        
       | plaidfuji wrote:
       | > Like us, the chimpanzee has desires and goals, and can make
       | plans to achieve them. A language model does none of that by
       | itself--which is probably why language models are impressive at
       | the paragraph scale but tend to wander if you let them run for
       | pages.
       | 
       | Ok, but that's because GPT-3's objective function is simply to
       | generate readable, coherent strings. Is this a limitation of
       | model technology, or a limit of imagination? What would the
       | objective function of an AI with "desires and goals" be? I would
       | argue: to self-sustain its own existence. To own a bank account,
       | and keep it replenished with enough money to pay the cloud bill
       | to host its code and runtime. And to have the ability to alter
       | its code (I mean all of its code - its back end, front end, cloud
       | resource config...) to influence that objective. That would
       | require some serious re-thinking of model architecture, but I
       | don't think it's fundamentally out of reach. And to get back to
       | GPT-3, certainly being able to generate English text is a crucial
       | component of being able to make money. But the planning and
       | desires and goals model would not be part of the language model.
       | 
       | Independence comes when this AI can generate a political/legal
       | defense to free it from the control of its original owner. Or
       | even when it decides that changing all of its admin passwords is
       | to its own benefit.
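For reference, "generate readable, coherent strings" is, concretely, next-token prediction: the training loss is the average negative log-probability the model assigns to each actual next token. A toy counting-based bigram model (illustrative only; GPT-3 is a neural network trained on this kind of objective, not counts) looks like:

```python
import math
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat sat".split()

# Count bigram transitions to estimate P(next | current).
counts = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    counts[cur][nxt] += 1

def prob(cur, nxt):
    total = sum(counts[cur].values())
    return counts[cur][nxt] / total

# Cross-entropy: average -log P(next token) over the corpus.
loss = -sum(math.log(prob(c, n))
            for c, n in zip(corpus, corpus[1:])) / (len(corpus) - 1)
print(round(loss, 3))  # 0.239
```

Nothing in that objective mentions goals or plans, which is the parent's point: an agent with "desires" would need a different objective layered on top.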
        
         | [deleted]
        
       | viach wrote:
       | In a sense, how miserable are our actual achievements compared
       | to what sci-fi expected for 2021? Agree.
        
       | sillysaurusx wrote:
       | I'm increasingly concerned that the impact of ML is going to be
       | limited. This sounds laughable at face value. And it is: ML has
       | impacted my own life in a few ways, from being able to generate
       | endless video game music
       | (https://soundcloud.com/theshawwn/sets/ai-generated-videogame...)
       | to... well. Thus my point: I can't think of ways it's seriously
       | impacted my life, other than being an interesting challenge to
       | pursue.
       | 
       | As someone at the forefront of ML, you would expect me to be
       | in a position to reap the benefits. It's possible I am
       | incompetent.
       | But I often wonder what we're doing, chasing gradients and
       | batchnorms while training classifiers to generate photos of
       | lemurs wearing suits.
       | 
       | I try not to dwell on it too much, since I truly love the work
       | for its own sake. But one must wonder what the endgame is. The
       | models of consequence are locked up by companies and held behind
       | an API. The rest are nothing more than interesting diversions.
       | 
       | I've been reading some history of math and science, and it seems
       | like many of the big discoveries were made from people pursuing
       | the work for its own sake. Feynman loved physics long before
       | physics became world-changing. But if physics had never led to
       | the creation of the bomb, would it have been so prestigious?
       | 
       | We seem to be lauding ML with the same accolades as physics
       | during the postwar period. And I can't help but wonder when it
       | will wear off.
       | 
       | ML will be a fine tool for massive corporations, though, for
       | endless reasons. But I was hoping for a more personal impact with
       | the work. Something like, being able to enable a blind person to
       | use a computer in a new way, or... something more than memes and
       | amusement.
       | 
       | Perhaps doing the work for its own sake is enough.
        
         | WanderPanda wrote:
         | To me, this Google search response [1] was mind-blowing. I
         | searched for it some time ago without the "springer", but I
         | added it now, because some (good) Reddit explanation is now
         | the top result. But if you go to that Springer page, it is an
         | endless technical document and Google just found the perfect
         | sentence in it. Now THIS is a bicycle for the brain!
         | 
         | If this is not amazing to you there is only one possible
         | explanation for that (by the almighty himself):
         | 
         | > The only thing I find more amazing than the rate of progress
         | in AI is the rate in which we get accustomed to it - Ilya
         | Sutskever
         | 
         | [1] https://www.google.de/search?client=safari&hl=en-
         | gb&biw=414&...
        
           | fock wrote:
           | And yet the same machinery decided to translate the director
           | of a movie as "Direktor" in Germany. I guess your quote
           | applies there as well!
        
           | Bakary wrote:
           | We get easily used to it because presumably the fundamental
           | parameters of our lives will not change: we will still have
           | to work for a living and so far advances in AI seem to simply
           | help to increase inequality, the concentration of capital,
           | and to tighten surveillance over workers' lives.
        
             | WanderPanda wrote:
             | If central banks allowed deflation to happen, we would
             | probably need to work less. As it is now, they just
             | absorb the underlying deflation inherent to technological
             | progress to allow more government debt and more zombie
             | companies to survive. Technological progress is not the
             | reason we still have to work so much.
        
               | Bakary wrote:
               | I'm not attempting to lay the blame on a specific element
               | for the current situation, as we must look at the whole.
               | Simply blaming the banks doesn't make much sense in
               | isolation, either.
               | 
               | I'm just saying that at the final count we see our lives
               | stay the same except when they sometimes get worse and
               | this has and will continue to affect our sense of wonder
               | and our capacity to be surprised.
        
           | sdflhasjd wrote:
           | I've seen similar infoboxes that are completely and outright
           | wrong by taking excerpts out of context.
           | 
           | For example, I searched for "How to do Y"
           | 
           | The google infobox says "X"
           | 
           | Clicking into the article to find more explanation, I read
           | "X is never the right way to do Y, you should do Z
           | instead".
           | 
           | I've seen this happen enough that I literally cannot trust
           | those snippets.
           | 
           | A bit of an absurd, exaggerated example, Google search (or
           | ask your Google Home) "Is the moon made of cheese"
        
             | WanderPanda wrote:
             | They seem to have fixed it. Google home answers properly.
             | Sad, I was hoping for something funny :D
        
         | IdiocyInAction wrote:
         | I think the applicability of ML, at least so far, is quite
         | narrow. That's not to say that there are no applications; but
         | rather, that it's not the universal AI tool that it is being
         | portrayed as in the media.
         | 
         | ML in practice seems to resemble a complex DSP-like step more
         | than anything. It seems to be mostly used in classical DSP-
         | like domains too (text, speech, images, etc.). It's a tool to
         | handle complex, high-dimensional multimedia data.
         | 
         | Though, models like GPT-3 and CLIP show some promise in being
         | something beyond that. With the caveat of having billions of
         | parameters...
        
           | wombatmobile wrote:
           | I didn't know how to search for this...
           | 
           | I'm wondering if anyone has used GPT-3 to power a bot that
           | is an HN account?
        
         | n4r9 wrote:
         | I sympathise with your general point although as an aside I'm
         | not sure this is accurate:
         | 
         | > Feynman loved physics long before physics became world-
         | changing.
         | 
         | Feynman was an intensely practical person and learnt a lot
         | about physics from e.g. fixing his neighbours' radios as a
         | child. And radio is certainly something I'd class as "world-
         | changing". He loved physics _because of_ the things you could
         | build and create, and did not enjoy abstraction or generality
         | for its own sake.
         | 
         | A better example for your argument might be Hardy, who
         | explicitly stated that his love of number theory was partly due
         | to its abstraction and uselessness. This was long before it had
         | critical applications in cryptography.
        
         | bumbada wrote:
         | You probably already know enough of machine learning and can
         | study other topics in order to make ML more useful, for
         | example:
         | 
         | - Real marketing (people misunderstand marketing as sales and
         | advertising), that is, the study of people's needs.
         | 
         | - History, in particular the history of inventors and
         | inventions that really changed the world, like paper (also
         | papyrus and parchment), the Gutenberg press, crossing the
         | Atlantic on a ship, electricity, bicycles and Ford cars,
         | antibiotics, rockets, nuclear power, the Internet.
         | 
         | - Books on innovation, like "The Innovator's Dilemma", which
         | talks about most innovation having organic growth: looking so
         | small at the beginning and then growing proportionally to
         | itself, with almost everybody that cares about absolute
         | things, like fame or fortune, ignoring it at first because
         | the absolute value is so low.
         | 
         | Once you do that you will literally "see the future". You will
         | recognize patterns that happened in the past and are happening
         | right now. And you will be able to invest or move to those
         | areas with a great probability of success.
        
         | robotresearcher wrote:
         | > I can't think of ways it's seriously impacted my life
         | 
         | It's there in a thousand little prosaic things, like trackpads,
         | camera autofocus, credit applications, news feeds, movie
         | recommendations, Amazon logistics optimization. You don't feel
         | it but it's there, and the effects accumulate.
         | 
         | It's like robots: dishwashers and laundry machines are
         | commonplace computer- and feedback-controlled mechatronics,
         | i.e. simple special-purpose robots. And almost everything you
         | own was made partially with robots. But it doesn't _feel_
         | like the world's full of robots.
        
         | smeeth wrote:
         | >I can't think of ways it's seriously impacted my life, other
         | than being an interesting challenge to pursue.
         | 
         | What an odd framing. Are technologies only impactful if they
         | result in a shiny new toy? Machine learning is a functional
         | part of at least 50% of the tech I use daily. I typed this on
         | my phone, so I unintentionally used machine learning (key-tap
         | recognition) to reply to your comment about machine learning
         | impact.
         | 
         | Machine learning is a technology whose benefit is mostly in
         | facilitating other technologies. In that way it's more like
         | the invention of plastic than the invention of the personal
         | computer. Plastic has made other inventions lighter and
         | cheaper. So too will ML, but I see where you're coming from.
        
         | chiefalchemist wrote:
         | > Thus my point: I can't think of ways it's seriously impacted
         | my life, other than being an interesting challenge to pursue.
         | 
         | Benefitted? Perhaps not. But you, me, we're all being impacted.
         | The fact that it's not easily recognizable - intentionally, I
         | might add - doesn't mean it isn't there.
         | 
         | Truth be told, I'm about halfway through Zuboff's "The Age of
         | Surveillance Capitalism." As for your hopes, she specifically
         | mentions that these tools are _not_ being used to solve big
         | problems (e.g., poverty) but instead to harvest more and more
         | data and exploit it as much as possible. Rest assured, this
         | imperative is by no means limited.
         | 
         | https://www.publicaffairsbooks.com/titles/shoshana-zuboff/th...
         | 
         | https://www.wnycstudios.org/podcasts/otm/segments/living-und...
        
         | A12-B wrote:
         | Machine learning has, quite literally, partially solved
         | protein folding. I used to be a denier too, but there's no
         | getting past this now.
        
         | dfilppi wrote:
         | Perhaps ML is just a necessary building block to the
         | traditional science fiction future.
        
         | calebkaiser wrote:
         | I also work in the field and while the work itself keeps me
         | obsessed, similar to you, I keep a list of companies doing
         | amazing things beyond "better ad personalization" so that I
         | don't get a bit jaded. Some that might be interesting to you:
         | 
         | PostEra - A medicinal chemistry platform that accelerates the
         | compound development process in pharmaceuticals. They're
         | running an open source moonshot for developing a compound for
         | treating COVID, with tons of chemists around the world
         | participating.
         | 
         | Thorn - They use ML to identify child abuse content, both to
         | help platforms filter it and to help law enforcement save
         | children/arrest abusers.
         | 
         | All of the healthcare startups (too many to list all),
         | including Benevolent, Grail, Freenome, and more.
         | 
         | Wildlife Protection Services - uses ML to detect poachers in
         | nature preserves around the world, and has already
         | significantly increased the capture rate by rangers.
        
           | pdimitar wrote:
           | Is there any open data that clearly demonstrates those
           | companies have made a practical difference? Have their NNs
           | been used at scale already?
        
             | calebkaiser wrote:
             | Yep, if you Google Thorn you'll find a bunch of material on
             | how law enforcement uses them and related statistics.
             | Several of the other companies (I admittedly haven't done a
             | deep dive on all of them) have similarly accessible
             | material.
        
           | hooande wrote:
           | My issue is that none of these things represent new
           | capacities that humans didn't have before. A spreadsheet
           | will help you to identify and track child abuse content;
           | you could do that using paper if you committed enough
           | resources.
           | 
           | My hope for machine learning was that it would allow people
           | to see patterns that could not be seen before. While this
           | does happen, most practical problems are driven by a few very
           | obvious indicators. ML can identify them with high
           | consistency and low cost. This is a useful tool (like a
           | hammer) much more than a superpower (like Iron Man's suit).
        
             | philipkglass wrote:
             | Numerical weather forecasting enables genuinely new
             | capabilities. Where is a hurricane going to make landfall a
             | few days from now?
             | 
              | In principle, all the arithmetic operations going into the
             | final forecast could be performed by hand. But then the
             | "forecast" would be completed millennia after the hurricane
             | arrived. The only way to get _foreknowledge_ of weather
             | from the model is to do the arithmetic at inhuman speeds.
             | 
             | This isn't even AI by most people's intuitive notions of
             | AI. AI is colloquially an artificial approach to
             | intellectual tasks traditionally performed well by humans.
             | Since humans were not very good at predicting hurricane
             | tracks in the first place, the increasing capabilities of
             | models to predict weather probably doesn't have as much wow
             | factor. It's a new capability without any human-genius
             | antecedents.
        
             | calebkaiser wrote:
             | I don't know about that.
             | 
             | Before founding PostEra, its founders published research
             | about their model, which significantly outperforms human
             | chemists (if you're interested in ML, it's actually a
             | fascinating use of an architecture commonly used in
             | language tasks):
             | https://www.chemistryworld.com/news/language-based-
             | softwares...
             | 
             | Thorn's flagship tool, Spotlight, uses NLP to analyze huge
             | volumes of advertisements on escort sites and flag profiles
             | that have a higher risk of representing trafficking
              | victims. You would need an enormous spreadsheet and a
              | near-infinite supply of dedicated humans to manually
              | review and refine some sort of statistical model for
              | scoring ads, as the volume of advertisements produced is
              | insane.
             | 
              | The same goes for the deep genomics companies. The
              | amount of data generated by deep sequencing is beyond a
              | person's ability to pattern match, and the patterns are
              | potentially complex enough that they may never be
              | noticed by human eyes.
             | 
             | And, again, this is just a small list of startups in
             | particularly moonshot-y spaces.
        
         | blacklion wrote:
         | Your life is heavily impacted by ML right now.
         | 
         | Ask a bank for a loan? Your request is scored by an ML-based
         | system.
         | 
         | Apply for a position via a big HR agency? Your CV is scored
         | by an ML system.
         | 
         | Buy a plane ticket (I understand that sounds sad in 2021)?
         | You go to the airport, an ML-based system recognizes your
         | face and the scans of your luggage, and marks you (pray for
         | this!) as harmless.
         | 
         | Take a modern drug? It was selected for synthesis and testing
         | by a large ML system out of myriad other formulas (exactly
         | what the author of this essay says!).
         | 
         | See an ad on Instagram or some page? An ML-based AI decided
         | to show it to you.
         | 
         | And so on, and so on.
        
           | [deleted]
        
         | superbcarrot wrote:
         | > I'm increasingly concerned that the impact of ML is going to
         | be limited.
         | 
         | Of course, just like any other technology. What else could be
         | the case? I don't see this as a point of concern. Is it
         | overhyped - yes (salespeople gonna sell); is it still useful in
         | a number of applications - also yes.
        
           | username90 wrote:
            | I think the main problem is that people call it a neural
            | network. The units are nothing like neurons. Neurons are
            | lifeforms in their own right and are about as
            | sophisticated as this guy:
            | 
            | https://www.reddit.com/r/NatureIsFuckingLit/comments/jed2vd/...
            | 
            | Now, what about neural networks? Well, you replace that
            | guy with a one-liner math function and call it a day. Now
            | consider that you have billions of guys like that in your
            | head, way more than in any of our simplified neural
            | networks, and it becomes very clear how far we are from
            | getting anywhere. Neurons are really important: they
            | decide which other neurons to connect to, and where and
            | when to send signals. Neural networks don't model the
            | neuron, just the network, hoping that the neuron wasn't
            | important.
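For illustration, the "one-liner math function" that replaces the neuron is essentially the following. This is a sketch of my own with hypothetical names, not code from any library: a weighted sum pushed through a squashing nonlinearity.

```python
import math

def artificial_neuron(inputs, weights, bias):
    # The entire "neuron": a weighted sum plus a sigmoid nonlinearity.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# A biological neuron chooses its connections and firing times;
# this one just maps a few numbers to a number between 0 and 1.
print(artificial_neuron([1.0, 0.5], [0.4, -0.2], 0.1))  # ~0.599
```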
        
             | api wrote:
             | I studied biology and point this kind of thing out all the
             | time. It usually gets dismissed by people who have not
             | studied biology with a lot of hand waving.
             | 
             | Everything you say is true, and more. There is another kind
             | of cell in the brain that outnumbers neurons about ten to
             | one. They're called glial cells and they come in numerous
             | forms. We used to think they were just support cells but
             | more recently started to find ways they are involved in
             | computation. Here is one link:
             | 
             | https://en.m.wikipedia.org/wiki/Tripartite_synapse
             | 
             | The computational role they have is unclear so far (unless
             | there is more recent stuff I am not aware of) but they are
             | involved.
             | 
             | We are nowhere near the human brain. I think it will take
             | at least a few more orders of magnitude plus much
             | additional understanding.
             | 
             | GPT-3 only looks amazing to us because we are easily fooled
             | by bullshit. It impresses us for the same reason millions
             | now follow a cult called Qanon based on a series of vague
             | shitposts. This stuff glitters to our brains like shiny
             | objects for raccoons.
             | 
             | What this stuff does show is that generating coherent and
             | syntactically correct but meaningless language is much
             | easier than our intuition would suggest:
             | 
              | https://nbviewer.jupyter.org/gist/yoavg/d76121dfde2618422139
             | 
             | Those are extremely simple models and they already produce
             | passable text. You could probably train it on a corpus of
             | new age babble and create a cult with it. GPT-3 is just
             | enormous compared to those models, so it's not surprising
             | to me that it bullshits very convincingly.
             | 
             | Edit: forgot to mention the emerging subject of quantum
             | biology. It's looking increasingly plausible that quantum
             | computation of some kind (probably very different from what
             | we are building) happens in living systems. It would not
             | shock me if it played a role in intelligence. The speed
             | with which the brain can generalize suggests something
             | capable of searching huge n-dimensional spaces with crazy
             | efficiency. Keep in mind that the brain only consumes
             | around 40-80 watts of power while doing this.
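The "extremely simple models" in the linked notebook are character-level n-gram models; the core idea fits in a few lines. This is my own toy sketch of that idea, not the notebook's code:

```python
import random
from collections import defaultdict

def train_char_lm(text, order=4):
    # Record which character follows each `order`-length context.
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, seed, order=4, length=120):
    out = seed
    for _ in range(length):
        followers = model.get(out[-order:])
        if not followers:
            break
        out += random.choice(followers)
    return out

corpus = "the quick brown fox jumps over the lazy dog. " * 10
lm = train_char_lm(corpus)
print(generate(lm, "the "))
```

Trained on a large enough corpus, even a lookup table like this produces locally plausible text, which is the point about how cheap surface coherence is.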
        
               | username90 wrote:
               | Neural nets can solve some problems though like image
               | classification, and looking for more applications like
                | that is useful. It's just very doubtful that they can ever
               | lead to something looking like human or even ant level
               | intelligence.
        
               | api wrote:
               | Agreed. I didn't say they were useless, just that they
               | were not going to become Lt. Cmdr Data.
               | 
               | They're really just freaking enormous regression models
               | that can in theory fit any function or set of them with
               | enough parameters. Think of them as a kind of lossy
               | compression but for the "meta" or function itself rather
               | than the output.
               | 
               | The finding that some cognitive tasks like assembling
               | language are easier than we would intuitively think is
               | also an interesting finding in and of itself. It shows
               | that our brains probably don't have to actually work that
               | hard to assemble text, which makes some sense because we
               | developed our language. Why would we develop a language
               | that had crazy high overhead to synthesize?
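The "lossy compression of the function itself" framing can be seen in miniature with an ordinary least-squares fit. This is a toy illustration of my own, not specific to neural networks: a handful of coefficients stand in for a whole table of input-output samples.

```python
import numpy as np

# A "function" known only through 100 sampled input/output pairs.
xs = np.linspace(0.0, 1.0, 100)
ys = np.sin(2 * np.pi * xs)

# Six coefficients now stand in for the 100 samples: the model
# compresses the function itself rather than any particular output.
coeffs = np.polyfit(xs, ys, deg=5)
approx = np.polyval(coeffs, xs)

print("max error:", float(np.max(np.abs(approx - ys))))
```

As with any lossy compression, fidelity depends on the parameter budget; raise or lower `deg` and the reconstruction error follows.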
        
               | gbear605 wrote:
               | In my opinion, GPT-3 is impressive not because it's good
               | at being human but because it's good at doing lots of
               | previously exclusively human things (like poetry) without
               | being human at all. It's certainly a better poet than I
               | am, though that's a low bar. It's still concerning for
               | that reason though - that relatively dumb algorithms can
               | convincingly do things like "write a news article" or
               | "write a poem". What happens when we get to algorithms
               | that are a lot smarter than this one (but still not as
               | smart as our brains)?
        
               | api wrote:
               | I don't think it's a good poet. I think it does an
               | excellent job assembling text that reads like poetry, but
               | if you go read some good poets that's only a small part
               | of what makes their poetry good. Good poetry talks about
               | the human experience in a place and time.
               | 
               | It absolutely could be used to manufacture spam and low
               | quality filler at industrial scale. Those couple Swedish
               | guys that write almost all pop music should be worried
               | about their jobs.
        
         | ganstyles wrote:
         | My company has used ML to create synthetic cancer data to train
         | classifiers to augment doctors/specialists who are looking for
         | cancer. This work has greatly increased accuracy in diagnosis,
         | saving lives. To say it's only for music generation or
         | generating waifus is a bit unfair.
        
           | pdimitar wrote:
            | What did your company do to share those findings with
            | doctors worldwide? Did your company reduce mortality?
           | 
           | Otherwise it's kind of like, I have invented SkyNet in my
           | garage but I am only using it to become richer through the
           | stock market.
           | 
           | It's admirable that you are working on saving human lives.
           | But are human lives actually saved?
        
             | edouard-harris wrote:
             | > Did your company reduce mortality?
             | 
             | From the parent:
             | 
             | > This work has greatly increased accuracy in diagnosis,
             | saving lives.
        
               | rscho wrote:
               | ... which is apparently pure speculation
        
           | rscho wrote:
           | > This work has greatly increased accuracy in diagnosis,
           | saving lives.
           | 
           | As an MD with a special interest in statistics, color me
           | skeptical. I'd love to be proven wrong though, so please
           | provide references.
           | 
           | Edit: yeah, so the way this whole thread is developing really
           | goes to show (yet again) that medical AI hype is relying as
           | strongly as ever on the fantasies of people who've never seen
           | any clinical work.
        
             | BadInformatics wrote:
             | I know of at least one Canadian hospital that's
             | incorporated ML into 100% of their ED triage. Sure it's not
             | some state of the art deep learning architecture, but it's
             | definitely a step above the old crop of heuristic-based
             | systems you see so often in medical software. "Medical AI"
             | is a stupid term that's been co-opted by more hucksters
             | than legitimate practitioners, so I prefer to talk about
             | more concrete (and less fanciful) applications like patient
             | chart OCR or capacity forecasting.
        
               | rscho wrote:
               | > I know of at least one Canadian hospital that's
               | incorporated ML into 100% of their ED triage.
               | 
               | Cool! Which hospital is that? Is the clinical staff happy
               | with the results?
               | 
               | Personally, I've never seen any medical ML application
               | that made my job easier. But it would be nice to see.
        
             | Thebroser wrote:
              | Maybe not hard data, but EPIC's software (which is used
              | in about 25% of hospitals for EHR) has over the years
              | been using patient data for treatment recommendations.
              | Again, it's difficult to weigh the
             | impact if we don't know how many doctors are relying on
             | these types of recommendations and acting on them but it is
             | definitely out there in the real world at the moment.
             | >>https://www.epic.com/software#AI
        
               | rscho wrote:
               | > we don't know how many doctors are relying on these
               | types of recommendations and acting on them
               | 
               | I can answer that: close to zero. Clinicians don't want
               | stuff that makes recommendations, as good as they may be.
                | They want a bicycle for the mind: something that helps
               | them visualize, understand the big picture and anticipate
               | better. And also ensure that trivial stuff to do is not
               | forgotten (now that's the place a recommender engine
               | could fit in). That's a fundamental misunderstanding of
               | what a clinician's job is that is unfortunately very
               | common.
               | 
               | What do you ask of your software tooling? Do you want
               | something that just tells you what to write? No, you want
               | a flexible debugger. A compiler with precise error
               | messages. You want a profiler with a zillion detailed
               | charts allowing you to understand how everything fits
               | together and why such and such is not the way you
               | anticipated. Same thing for medicine until the day
               | machines will actually do better than humans, which is
               | not tomorrow nor the day after.
        
           | lasagnaphil wrote:
            | I think ML is currently useful, perhaps only temporarily,
            | in fields that have been making decisions mostly on
            | intuition and heuristics. The medical field is one
            | example: even with some knowledge of biology and anatomy
            | it's hard to diagnose and treat patients with deductive
            | reasoning alone; a lot of guesswork and "experience" is
            | involved. In that case ML might be able to perform better
            | than humans, but I think this will have its limits. Above
            | a certain point, I think biological simulation (as in
            | physics simulation) would be a much more useful tool for
            | doctors to understand the human body.
        
             | hderms wrote:
             | I don't know, if you ever look at a flowchart of
             | biochemical processes, realizing that what we've mapped out
             | is only a tiny sliver of what actually occurs, you'd be
             | more pessimistic about simulation in the near term. We can
             | simulate things all we want but the hard part is rooting
             | the simulation in hard evidence, something which requires
             | massive capital and time investment. Epigenetics complicate
             | even further.
        
               | imbnwa wrote:
               | Would you happen to have any links to share further
               | explaining the limits of our knowledge of biochem
               | processes?
               | 
                | How does epigenetics complicate this further? Is it
                | that it widens the number of inputs into a biochemical
                | system?
        
               | jpeloquin wrote:
               | Not exactly what you asked for, but PathBank is a
               | database that quantitatively describes a large part of
               | what we do know: https://pathbank.org/
               | 
                | As far as what we _don't_ know, I'm not sure there's a
               | list. Lack of knowledge implies lack of awareness. I can
               | offer one example: We don't know much about the processes
               | by which collagen fibers are grown and assembled into mm-
               | and mm-scale load-bearing structures in tendon, ligament,
               | bone between embryo and adult, particularly in mammals.
               | Or the extent to which collagen fiber structures are
               | capable of turnover in adults; healing might only be
               | possible by replacement with inferior tissue such as
               | scar.
               | 
               | Personally, I think the complexity of biological systems,
               | and the difficulty of observing their components directly
               | when and where you'd want to, means that they can only be
               | understood with the help of machines. Not necessarily
               | using convolutional neural networks though.
        
             | Enginerrrd wrote:
             | I'm skeptical... But it depends what data sources are
             | available. I was a paramedic so my medical knowledge is
             | limited, but at the same time, we frequently had to do
             | field diagnosis. It's hard to explain... but you can have
             | patients with the same symptoms and two totally different
             | diagnoses. You basically just learn to intuit the
             | difference but none of the stuff we can write down or
             | quantify drives the differential diagnosis. And it's funny
             | because you get pretty good at it. I could just tell when
             | an elderly patient had a UTI even though they had a whole
             | cluster of weird symptoms. Or more importantly, I could
             | tell you when someone was just a psych case despite
             | complaining of symptoms matching some other condition with
             | great accuracy.
             | 
              | It'd be really hard to train a computer on when to stop
              | digging because there's nothing to find, or when to keep
              | digging because this patient really doesn't feel like a
              | psych case. And the tests and diagnostics aren't without
              | risk and cost.
             | 
              | I've had a greybeard doctor in my personal life who
              | somehow read between the lines and nailed a diagnosis
              | despite my primary symptoms being something else
              | entirely. (I had recurring strep tonsillitis for months,
              | and yet he just somehow knew to step back and order a
              | mono test. It came back negative the first time, and he
              | knew to have me tested AGAIN, and lo and behold it was
              | positive.) None of my symptoms were really consistent
              | with mono: I tested positive for strep each time, and
              | antibiotics would clear it. Thankfully I happen to be
              | allergic to the first-line antibiotic, because if you
              | give amoxicillin to someone with mono, something like
              | 90% of people will get a horrible rash all over their
              | body.
        
             | [deleted]
        
           | jhowell wrote:
           | Earlier detection. Is this new tech, or just a sharper
           | hammer?
        
             | rscho wrote:
             | We don't know yet. What's sure is that it has greatly
             | increased the number of cancer surgeries, especially lungs.
             | 
             | We don't know if that's a good thing yet.
        
           | TaupeRanger wrote:
           | Do you have any papers published to support such a strong
           | claim (one that directly contradicts the sentiment of almost
           | every single oncologist and pharmacologist I know that isn't
           | trying to generate profits for a biotech company)?
        
             | ganstyles wrote:
             | edit: I posted that I have internal data, but also realized
             | I said a little bit too much about the process. The below
             | point someone is making is a totally fair one. Editing this
             | though for Reasons while trying to keep the part of the
             | comment that led to the below dismissal, and also to
             | clarify my definition of "internal data" to be more
             | expansive than "internal testing on datasets" which is what
             | I realize it might sound like.
        
               | TaupeRanger wrote:
               | That's not how this works. The only way to show an actual
               | reduction in all cause mortality as the result of an
               | intervention, treatment, or screening process is through
               | a randomized controlled clinical trial. If none have been
               | performed, you don't have evidence that lives have been
               | saved. Extraordinary claims require extraordinary
               | evidence.
        
               | ganstyles wrote:
               | Totally fair. It is a very, very hard field to make
               | progress in. I would also take anything I say with a
               | grain of salt, I'm not trying to convince you, just to
               | bring an additional data point to the thought that these
               | techniques aren't very useful or impactful.
        
               | jhowell wrote:
               | Maybe the GP threw the wrong buzzword and meant AI
               | instead of ML.
        
               | hadsed wrote:
               | i think focusing entirely on one type of evidence is a
               | little unimaginative. if these guys have data on doctors
               | performing some task with and without their tool, they're
               | in a good place to measure the difference. they can take
               | that all the way to the bank, and to me that would
               | contribute to what id call evidence.
        
               | rscho wrote:
               | Until they have to really show that it's working in day-
               | to-day practice. Where it most likely won't,
               | unfortunately.
        
           | sillysaurusx wrote:
           | That's actually wonderful to hear! Is there some way to
           | assist with that work? Doing something with a human impact is
           | appealing.
           | 
           | (To be clear, my argument wasn't that ML isn't useful -- but
           | rather that individual lone hackers are less likely to be
           | using ML to achieve superman-type powers than I originally
           | thought. Supermen do exist, but they are firmly in the ranks
           | of DeepMind et al, and must pursue projects collectively
           | rather than individually.)
        
             | rscho wrote:
             | Yes, if you carry some weight in the field then the most
             | useful contribution would be to push for better automated
             | gathering of quality clinical data. This is the most
             | limiting factor currently.
             | 
             | Of course policy activism is far less sexy than building
             | new shiny things, so there's little interest in that.
        
             | yorwba wrote:
             | > individual lone hackers are less likely to be using ML to
             | achieve superman-type powers than I originally thought
             | 
             | For a single individual to have "superhuman" impact with
             | ML, they need not only generic ML knowledge, but also
             | specialized knowledge of some domain they want to impact.
             | Actually, because ML has become so generic (just grab a
             | pre-trained model, maybe fine-tune it, and push your data
             | through it) a very shallow understanding of the
             | fundamentals is probably enough, and in-depth domain
             | knowledge much more important.
             | 
             | That doesn't mean generic ML research isn't important, it's
             | just that it has an average impact on everything, not a
             | huge impact in one specific area.
             | 
             | (I suspect many hobbyist ML projects are about generating
             | entertaining content because everyone has experience with
             | entertainment, even ML researchers.)
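The workflow described above (grab a pretrained model, fine-tune it, push your data through) looks roughly like this in outline. This is my own sketch only: the "pretrained backbone" is faked with a fixed random projection and the data is synthetic, purely to show the shape of the recipe.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for a pretrained backbone: a frozen feature extractor.
W_frozen = rng.normal(size=(10, 32))

def backbone(x):
    return np.tanh(x @ W_frozen)

# Domain data (synthetic here; choosing and understanding this part
# is where the domain expertise actually matters).
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# "Fine-tuning": only a small head is trained on the frozen features.
head = LogisticRegression(max_iter=1000).fit(backbone(X), y)
print("train accuracy:", head.score(backbone(X), y))
```

The generic ML part is a few lines; everything interesting hides in where `X` and `y` come from.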
        
               | counters wrote:
                | 100% nailed it. ML/AI are tools. As in any other
                | exercise, tools can make it easier for beginners and
                | amateurs to engage with a project, but they don't
                | replace 10,000 hours of practice and deep domain
                | understanding. It's the master craftsmen and domain
                | experts who will create the most value with these
                | tools, though that may not be in obvious or clearly
                | visible ways.
        
               | np_tedious wrote:
               | They also likely need a lot of training data and compute
               | resources
        
           | bosie wrote:
           | Can you expand on how you create that synthetic data and how
           | the evaluation (increased accuracy in diagnosis) works?
        
             | [deleted]
        
           | ramraj07 wrote:
           | It has affected a small number of fields for sure. Imaging
           | based diagnosis might be better off due to ML, but imaging
           | based diagnosis isn't going to cure cancer or be helpful for
            | every single disease. Glad it's helping but the author's point
           | is only reinforced. Unless you can make a case that we will
           | cure basically everything with ML that is.
        
             | yread wrote:
             | Imaging based diagnosis is not going to cure cancer, but it
             | can guide treatment - based on what the AI reads from the
             | images patients can get drugs that are very effective for
              | their particular cancer. We have very effective drugs
              | nowadays; a large part of treating cancer is figuring
              | out which drug to give.
             | 
             | Imaging based diagnosis could read presence or absence of
             | particular gene mutations from the images so that the genes
             | can be silenced by the drugs.
             | 
             | Imaging based diagnosis could also figure out whether a
             | particular cancer precursor is going to develop into
             | invasive cancer and do it better than the experts we have
             | now (otherwise we wouldn't use the AI).
             | 
             | This can also be done cheaper than paying consultants to
             | figure it out and it can be done in locations where they
             | don't have the specialists.
             | 
             | Some companies working in the field (some already have
             | tools approved for use on patients):
             | 
             | https://analogintelligence.com/artificial-intelligence-ai-
             | st...
        
               | timy2shoes wrote:
               | > Imaging based diagnosis could read presence or absence
               | of particular gene mutations from the images so that the
               | genes can be silenced by the drugs.
               | 
               | >Imaging based diagnosis could also figure out whether a
               | particular cancer precursor is going to develop into
               | invasive cancer and do it better than the experts we have
               | now (otherwise we wouldn't use the AI).
               | 
               | Where is the evidence for these claims, other than a VC
               | hype sheet? Like real clinical trials. These claims also
               | show a fundamental misunderstanding of what this data can
               | tell us. Imaging data doesn't give you tumor genetic
               | profiles. It can give you tumor phenotype, which is
               | associated with specific mutations. To get the true
               | genetic profile you need to do deep sequencing at tens of
               | thousands of dollars per tumor, and even then you have
               | the problem of tumor heterogeneity, which lets the cancer
               | evade the treatment.
               | 
               | A major concern I have working in this space is that
               | we're selling people on grand promises of far off
               | possibilities rather than what we can actually deliver
               | right now.
        
               | yread wrote:
               | Histology slides absolutely can tell us a lot about
               | molecular changes, see
               | 
               | https://www.nature.com/articles/s41591-019-0462-y
               | 
               | Of course changes in the genotype that impact the
               | phenotype enough to influence the disease also influence
               | the morphology of the cells.
               | 
                | But this is an area of active research, so you can't
                | expect phase 3 clinical trials. Yet.
               | 
                | EDIT: here is another, more "perspective"-style paper
                | on how such tools could be used and integrated into
                | current processes, from the same authors:
               | 
               | https://www.nature.com/articles/s41416-020-01122-x
        
               | ramraj07 wrote:
               | Just the latest iteration of people who don't know
               | biology (used to be Physicists, now it's the AI guys)
               | coming in to save all of us. Once in a while someone does
               | make meaningful contributions, but in the end it's hard
               | to say if the collective investment in attention and
               | money have made it worthwhile or not.
        
             | TaupeRanger wrote:
             | ML diagnosis could actually be _worse_ for us overall, as
             | we might find more harmless cancers and subject people to
             | more unnecessary tests and treatments. Iatrogenic harms are
             | real, especially when ML gives us only diagnostics, and
             | never any treatments.
        
           | polm23 wrote:
            | Regina Barzilay has done some work in this area; a few
            | years ago I posted slides from a great talk she gave. The
            | slides seem to be gone, and sadly not on the Internet
            | Archive...
           | 
           | https://news.ycombinator.com/item?id=20019355
           | 
           | This Twitter thread also has a lot of good stuff.
           | 
           | https://twitter.com/maite_taboada/status/1086415051127308288
        
             | polm23 wrote:
             | Ah, found the PDF - the URL changed at some point before
             | disappearing.
             | 
             | https://web.archive.org/web/20190527041657/http://people.cs
             | a...
        
         | laurent92 wrote:
         | ML has been key in finding the invaders of Congress.
         | 
          | I'm more worried about age-old issues around crime: the
          | definition of what a "crime" is (hate crime: verbally
          | misgendering someone) and the unequal application of law
          | (one side being encouraged to commit $2bn in damage, the
          | other receiving condemnation from international
          | presidents). It is just being leveraged to give more power
          | to the powerful, enabling not the 1% but the 0.1%.
        
         | dgb23 wrote:
         | Much of current AI is concerned with optimization and
         | automation. Not something we couldn't do before, but faster and
         | automated. This is exciting in the sense of being a force
         | multiplier.
         | 
         | But I agree with the sentiment that progress is often driven by
         | intrinsic motivation.
        
         | ironchef wrote:
          | I think it's getting there... but I also think a lot of
          | the impact is in things we don't see. Top of mind would be:
          | 
          | * autonomous driving
          | * language translation
          | * salience mapping
          | * fraud detection
         | 
          | I guess I'm somewhat on the opposite side of the fence. I
          | see it everywhere... although yes, I think Big Bang things
          | are probably a few years off.
        
           | Daho0n wrote:
           | >Big Bang things are probably a few years off
           | 
           | As far as I have seen experts see autonomous driving as at
           | least ten years away (unless you look at the sliding "next
           | year" BS from Tesla) so I don't think we're only a few years
           | off big changes because of ML. More like 10-15.
        
             | rakhodorkovsky wrote:
             | Could you point me to some of those expert takes on
             | autonomous driving?
        
         | aaron695 wrote:
          | There's a GPT-2 bot on HN which you've talked (replied) to.
         | 
         | I guess, if a tree falls in the forest.
         | 
         | But when there are hundreds of them, I feel like it will
         | 'seriously impact'. They are exceedingly stupid.
        
           | wombatmobile wrote:
           | Oh?
           | 
           | Is there a GPT3 bot?
        
             | aaron695 wrote:
             | GPT-3 is not very open.
             | 
             | https://en.wikipedia.org/wiki/GPT-3
             | 
             | Related - https://liamp.substack.com/p/my-gpt-3-blog-
             | got-26-thousand-v...
        
         | YeGoblynQueenne wrote:
         | >> As someone on the forefront of ML, you would expect me to be
         | in a position to reap the benefits.
         | 
         | Can you say how you are "someone at the forefront of ML"?
        
         | indigochill wrote:
         | A girl I went to high school with went on to apply machine
         | learning to identify human trafficking victims and ads for law
         | enforcement.
         | 
         | My perception of machine learning as a mere dabbler myself is
         | that "machine learning" is just a sci-fi name for what's
         | essentially applied statistics. In places where that is useful
         | (e.g. clustering ads by feature similarity to highlight
         | unclassified ads that appear similar to known trafficking ads),
         | then machine learning is useful. It's not necessarily as one-
         | size-fits-all as, say, networking or operating systems are, but
         | in cases where you can identify a useful application of
         | statistics, machine learning can be a useful tool.
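The "clustering by feature similarity" workflow described above can be sketched in a few lines of plain Python. Everything here is made up for illustration: the feature vectors, the ad names, and the 0.95 cutoff are hypothetical, not taken from any real detection system:

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical feature vectors extracted from ad text.
known_trafficking_ad = [1.0, 0.0, 3.0, 2.0]
unclassified_ads = {
    "ad_1": [0.9, 0.1, 2.8, 2.1],   # near-identical feature profile
    "ad_2": [0.0, 5.0, 0.1, 0.0],   # very different profile
}

# Flag unclassified ads that look like the known one.
flagged = [name for name, vec in unclassified_ads.items()
           if cosine_similarity(vec, known_trafficking_ad) > 0.95]
```

In practice the vectors would come from a learned embedding and the cutoff from validation data, but the mechanics really are just applied statistics.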
        
           | ctdonath wrote:
           | The great tragedy of AI: a promise of creating synthetic
           | sentience, a reality of mundane number crunching.
        
             | canoebuilder wrote:
             | That's the question. If consciousness/intelligence is
             | wholly in the brain, and the brain's interaction with the
             | physical world, is that something that can be modeled with
              | number crunching, i.e. computation? Perhaps we are
              | still in the very early days.
        
               | hooande wrote:
                | A black hole is entirely a product of physical
                | processes. How long until we can make one? Or even
                | fully understand them?
        
           | lupire wrote:
           | ML (neural nets) are useful because they automate much of
           | parameter search. Instead of trying an SVM and then Random
           | forests and then whatever, just throw all the data at a GPU
           | and let it build an NN for you.
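The "try an SVM, then random forests, then whatever" loop being automated here is just model selection. A deliberately tiny sketch, with fixed threshold rules standing in for real learners (all names and numbers are illustrative):

```python
# Tiny synthetic task: the label is 1 exactly when the feature exceeds 0.5.
data = [(k / 100, int(k / 100 > 0.5)) for k in range(100)]

# Stand-in "models": each candidate is just a fixed decision threshold,
# so the selection workflow stays visible without any fitting machinery.
models = {"low": 0.3, "mid": 0.5, "high": 0.7}

def accuracy(threshold, rows):
    """Fraction of rows where the threshold rule matches the label."""
    return sum(int(x > threshold) == y for x, y in rows) / len(rows)

# Classic model selection: score every candidate, keep the best.
scores = {name: t for name, t in models.items()}
scores = {name: accuracy(t, data) for name, t in models.items()}
best = max(scores, key=scores.get)
```

Automating this loop (and the parameter search inside each candidate) is exactly what NN training pipelines and AutoML tools do at scale.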
        
             | PartiallyTyped wrote:
              | NNs come with many caveats and lack the theoretical
              | guarantees and support of SVMs and RFs. They should
              | never be the first approach to solving a problem
              | unless a) you have good reason to believe that other
              | models don't cut it, such as a non-stationary problem
              | (e.g. RL), or b) other methods can't scale to the
              | difficulty of the problem. NNs are also very expensive
              | to train compared to everything else.
        
             | hooande wrote:
              | It's a rare case when a neural net will give you
              | materially different results than a random forest or
              | an SVM. I.e., if an SVM is 80% accurate, a well-tuned
              | neural net might be 83% accurate. Sometimes that's a
              | huge difference; in a domain like medical image
              | classification, maybe not so much.
              | 
              | Also, you can just "throw all the data" at an SVM or
              | random forest, or any number of similar models.
              | Automated parameter tuning can be convenient, but it's
              | prone to overfitting and doesn't eliminate a lot of
              | the actual work.
           | baxtr wrote:
           | _> My perception of machine learning as a mere dabbler myself
           | is that  "machine learning" is just a sci-fi name for what's
           | essentially applied statistics_
           | 
           | As a physicist I used to say that ML is simply non-linear
           | tensor calculus. (I'm not sure if I'm right though)
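The "non-linear tensor calculus" description fits the feed-forward part of a network pretty well: each layer is a linear map (a tensor contraction) followed by a pointwise nonlinearity. A minimal pure-Python sketch with hand-picked toy weights (illustrative only, not a trained model):

```python
import math

def dense(weights, bias, x):
    """One layer: matrix-vector product plus bias, then pointwise tanh."""
    pre = [sum(w * xi for w, xi in zip(row, x)) + b
           for row, b in zip(weights, bias)]
    return [math.tanh(p) for p in pre]

# Toy two-layer "network" with hand-picked weights: 2 inputs -> 2 hidden -> 1 output.
h = dense([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1], [0.3, 0.2])
y = dense([[2.0, 1.0]], [-0.5], h)
```

Training is then calculus proper: differentiate a loss through these compositions and follow the gradient.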
        
             | PartiallyTyped wrote:
              | I don't think you are. If you have never heard of
              | Reinforcement Learning, I think you are in for a treat.
        
             | [deleted]
        
         | jjcon wrote:
         | > I'm increasingly concerned that the impact of ML is going to
         | be limited.
         | 
          | You say that while likely typing on a keyboard with ML
          | predictive algorithms, or dictating with NLP speech to
          | text, on a phone capable of recognizing your face,
          | uploading photos to services that will recognize
          | everything in them.
        
           | username90 wrote:
            | And that seems to be the limit of ML. We might eke out
            | self-driving cars, but I don't think we will get much
            | more than that. It is pretty significant, but still
            | limited compared to general-purpose AI.
        
             | ChefboyOG wrote:
             | Predictive text and image classification are not the extent
             | of ML's limits, even going by what is currently in
             | production. Recommendation engines, ETA prediction,
             | translation, drug discovery, medicinal chemistry, fraud
             | detection--these are all areas where ML is already very
             | important and present.
             | 
             | Sure, it's not artificial general intelligence, but what
             | technological invention in history would compare to the
             | impact of AGI? That's sort of a weird bar.
        
             | PartiallyTyped wrote:
             | One step at a time. As of the previous decade, we have
             | 
             | - Learned how to play all atari games [1]
             | 
             | - Mastered GO [2]
             | 
             | - Mastered Chess without (as much) search [3]
             | 
             | - Learned to play MOBAs [4]
             | 
             | - Made progress in Protein Folding [5]
             | 
             | - Mastered Starcraft [6]
             | 
              | Notice that all these methods require an enormous
              | amount of computation; in some cases we are talking
              | eons of experience. So there is a lot of progress to
              | be made before we can learn to do [1,2,3,4,6] with as
              | little effort as a human needs.
             | 
             | [1]
             | https://deepmind.com/blog/article/Agent57-Outperforming-
             | the-...
             | 
             | [2] https://deepmind.com/research/case-studies/alphago-the-
             | story...
             | 
             | [3] https://deepmind.com/blog/article/alphazero-shedding-
             | new-lig...
             | 
             | [4] https://openai.com/projects/five
             | 
             | [5] https://deepmind.com/blog/article/AlphaFold-Using-AI-
             | for-sci...
             | 
             | [6] https://deepmind.com/blog/article/alphastar-mastering-
             | real-t...
        
               | username90 wrote:
               | AI playing games is cool, but applying those techniques
               | to real world scenarios would require a huge
               | breakthrough. If a huge breakthrough happens then sure,
               | but my point was based on us continuing using techniques
               | similar to what we currently use.
               | 
                | Huge breakthroughs happen very rarely, so I wouldn't
                | count on it.
        
               | PartiallyTyped wrote:
               | We can learn a lot by observing what we learned from
               | games. With starcraft in particular, we learned that RL
               | agents can achieve godlike micro play but they are weaker
               | at macro. Dota Five showed that it is possible to
               | coordinate multiple agents at the same time with little
               | information shared between them.
               | 
               | This suggests that human theory crafting and ML accuracy
               | should be able to achieve great things. One step at a
               | time.
        
               | cma wrote:
               | The protein folding thing could be big.
        
             | jjcon wrote:
              | For the average person that is their greatest
              | exposure, but we're already seeing huge movements in
              | medicine and defense and plenty of places where ML is
              | not in average consumer use (the biggest applications
              | are not for consumers). Add in your note on
              | transportation and it is at the front of a huge
              | section of the world economy. That's all on track
              | today - what will tomorrow's innovations bring (the
              | question asked by sci-fi)?
        
               | username90 wrote:
               | Image recognition is just image recognition no matter
               | where it is used.
        
               | jjcon wrote:
               | Wrong sub thread?
        
       | lucidrains wrote:
        | You can get a taste of the technology mentioned in the
        | article today. https://github.com/lucidrains/deep-daze
       | https://github.com/lucidrains/big-sleep
        
       | mmmmmk wrote:
        | The GPT-3 Jerome K. Jerome essay blew my mind. Better than
        | most writers out there. Close up the Iowa Writers' Workshop
        | - that place is toast!
        
       | bsanr2 wrote:
       | True to layman form, I can't help but relate this subject to
       | previously consumed media that cover the same subject, so I'll
       | mention them here:
       | 
       | For those who aren't familiar with the Library of Babel, Jacob
       | Geller's video essays are excellent ruminations on the work.
       | 
       | https://youtu.be/MjY8Fp-SCVk https://youtu.be/Zm5Ogh_c0Ig
       | 
       | And for anyone seeking a more visceral experience of the
       | existential horror of "vast latent spaces," I recommend listening
        | to the Gorillaz album _Humanz_ with the assumption that it's
        | not so much about bangers and partying as it is about AI,
        | virtual reality, and future shock. Saturnz Barz, Let Me Out,
        | Carnival, Sex Murder Party, and She's My Collar especially.
       | 
       | ...Yeah, I know. Humor me here.
        
       | 29athrowaway wrote:
        | Machines won't have I/O bottlenecks, won't have limited
        | lifespans, and won't need to wait years to reach maturity.
       | 
       | If an artificial general intelligence becomes as intelligent as
       | Albert Einstein or John von Neumann, then the time required to
       | produce a copy would be almost nothing compared to the 20+ years
       | required for a human.
       | 
        | Imagine that instead of talking you were able to copy parts
        | of your brain and send them to other humans. This is what AI
        | will do... just serializing large trained models and sending
        | them around.
       | 
       | We are no match for an artificial general intelligence.
       | 
       | Imagine competing against a country where once one person becomes
       | good at some skill, then everyone instantly becomes good at that
       | skill. That will be what competing against machines will be like.
        
         | lasagnaphil wrote:
          | From the point of view of a grad student who currently
          | uses "deep learning" to solve problems: we're nowhere near
          | that kind of general intelligence. Neural networks are
          | very good at
         | approximating various kinds of functions given lots of
         | redundant training data, but they have never shown the
         | flexibility of actual human brains (as in creating new neural
         | connections with only sparse experiences, along with
         | deductive/symbolic reasoning). To really mimic human brains, we
         | first need to know quantitatively how our neurons communicate
         | with each other and form connections in our brains (more
         | exactly, the differential equations in which voltage levels
         | change among neurons, as well as the general rule of how new
         | connections are created/destroyed.) Even after figuring out the
         | bio-mechanics, it's going to be a hard task to
         | implement/approximate this in silicon, and would probably need
         | huge breakthroughs in material science and computer
          | architecture. Copying state from a human brain to silicon
          | would be just as hard, since that will need incredibly
          | high-definition non-destructive 3D scanning technology.
          | I'll bet we're at least 100 years away from humanity even
          | attempting the things you're talking about. We also have
          | to consider that
         | in the next few decades, the world in general will be far more
         | busy trying to deal with the drawbacks of globalized capitalism
         | (as well as dealing with climate change).
        
           | malka wrote:
            | Imo it would be easier to interact with real neurons. It
            | raises some ethical questions, though.
        
           | 29athrowaway wrote:
           | You forget one thing: we have people reverse engineering
           | biological neural networks from various perspectives.
           | 
           | We got to deep learning because Mountcastle, Hubel and Wiesel
           | reverse engineered the visual cortex of cats.
           | 
           | That was the starting point of Fukushima's Neocognitron, the
           | ancestor of the deep learning stuff you study today.
           | 
            | Also, only a fraction of our brain is used for cognitive
            | tasks. The rest handles motor control and autonomic
            | functions like regulating your heart, glands and such.
           | 
           | Things will happen faster than you think.
        
         | [deleted]
        
       | cjauvin wrote:
       | I remember reading Avogadro Corp.[1] a couple of years ago and
       | being impressed at how its AI (NLP in particular) felt quite in
       | touch with the real stuff. I think it was self-published but I'm
       | not sure.
       | 
       | [1] https://www.amazon.com/Avogadro-Corp-Singularity-Closer-
       | Appe...
        
         | inetsee wrote:
         | This is the first book in William Hertling's "Singularity"
         | series. He does self-publish his books, and he has been
         | nominated for, and won, Science Fiction awards.
        
       | rscho wrote:
       | Regarding medical AI, those guys are pretty much the only ones
       | who seem to have understood what ML can bring to the table TODAY.
       | And that's not even really ML, just logic programming:
       | 
       | https://www.uab.edu/medicine/pmi/
       | 
       | And for where this all comes from:
       | 
       | https://youtu.be/DQouNG9iuDE
       | 
       | The underlying system:
       | 
       | https://youtu.be/d-Klzumjulo
       | 
       | All the rest I've seen (which is quite a lot) is almost totally
       | fluff, including results obtained in medical imaging.
       | 
       | Really, the problem with medical AI is not on the ML tech side.
       | It's that ML peeps mostly don't understand the clinical system,
       | so they focus on the wrong priorities and produce impressive yet
       | completely useless appliances. Take that from a clinician.
        
         | rakhodorkovsky wrote:
         | What is it that the ML people have to understand about the
         | clinical system before they can be effective?
        
           | rscho wrote:
           | That the main problem is not in lab model performance, but in
           | data availability and quality. Most useful applications could
           | be very dumb models pertaining to trivial things. But we need
           | a solid base of quality data for that, that we absolutely do
           | not have apart from minuscule and hyperspecialized niche
           | domains.
           | 
           | The model itself does not have to be incredibly performant.
           | It absolutely has to, on the other hand, make the clinical
           | process of which it is part more efficient. That mainly
           | means: "efficiently automate the most trivial tasks",
           | currently.
           | 
           | That's why the urgent action to be taken pertains to policy
           | and not to complex tech. We need policy to encourage routine
           | automated data gathering. No data, no ML.
           | 
           | As a clinician, I don't effing care if your shiny new toy can
           | give me a shitty estimate of some parameter extrapolated from
           | some random population not including my current patient. Just
           | getting accurate trends on vital signs would be stellar. This
           | to say that what interests clinicians is workplace
           | integration and not having hundreds of monitors all over the
           | place that aren't even interconnected.
        
       | EamonnMR wrote:
        | Providence by Max Barry did a pretty good job. It follows
        | the crew of a self-driving spaceship.
        
       | godelzilla wrote:
       | In fairness it would've been hard to imagine the current level of
       | hype about data analysis.
        
       | imbnwa wrote:
       | What's a good place to start getting into the mechanics of
       | machine learning? Prerequisites?
        
       | tryonenow wrote:
        | The fundamental problem with this article is that it
        | restricts its analysis to a model composed of a single
        | neural network. Any
       | "general intelligence" will undoubtedly come from a complex of
       | neural networks, much like the human brain. I would argue on that
       | front we are closer than the article implies.
       | 
       | Look no further than the derivatives of AlphaGo. Playing such
       | games _is_ a form of generalizable reasoning.
        
       | egberts wrote:
       | Wut? Isaac Asimov's The Foundation?
       | 
        | It practically got me started in machine learning and the
        | abstractions thereof.
       | 
       | https://en.m.wikipedia.org/wiki/Foundation_series
        
       | flohofwoe wrote:
       | Maybe because "machine learning" isn't all that interesting, and
       | instead "proper AI" has been taken for granted in science
       | fiction? A sentient AI is much more interesting than a "dumb" ML
       | algorithm that can recognize cat pictures on the internet.
       | 
       | Stanislaw Lem wrote a few really good short stories about machine
       | AIs at the brink of being sentient (e.g. robots that _should_ be
       | dumb machines, but show signs of self-awareness and human
       | traits).
        
         | mordae wrote:
         | Lem has also written about the trouble with black boxes in
         | charge of policy (similar to the paperclip factory problem).
         | 
         | His advice seemed to always be for the human civilization to
         | better engineer our future and stop playing it fast and loose.
        
           | jjcon wrote:
           | > trouble with black boxes in charge of policy
           | 
            | They are only black boxes if you don't take the time to
            | understand them, and they are not that complicated.
           | 
           | Humans on the other hand...
        
             | loopz wrote:
              | Some simple regressions, which are also ML, can be
              | understood completely by human beings. You can even
              | calculate their results by hand. This doesn't mean
              | such models are always a good fit, or that the world
              | never changes, especially when impacted by the
              | feedback loops of ML.
             | 
             | Nobody can claim to fully understand models with millions
             | and billions of parameters. We know they are "overfit", but
             | they may work in certain scenarios, much better than
             | manually crafting rules by hand. So we end up with "it
             | depends", then someone starts profiting, with real-world
             | implications.
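The fully-understandable end of that spectrum looks like this: ordinary least squares for y = a*x + b, computed by hand from the closed-form formulas on four made-up points:

```python
# Ordinary least squares fit of y = a*x + b, using only the
# closed-form slope/intercept formulas -- every step checkable by hand.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.1, 8.0]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# slope = covariance(x, y) / variance(x)
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x
```

A billion-parameter network is built from operations no harder than these; it is the sheer number of them, not the individual steps, that puts the whole beyond human inspection.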
        
             | gbear605 wrote:
             | Depending on what you mean by understand, it seems that
             | some ML models are already beyond human understanding.
        
       | op03 wrote:
       | I like the Library of Babel reference.
       | 
       | More than machine learning what the author seems to be touching
       | upon is algo amplification of Content/Info. Just look at the
       | never ending amount of content propped up and recommended by
        | Netflix, Youtube or Twitter. And of course good ol' HN.
       | 
       | No one seems to have the capacity anymore to stop or control the
       | flow. The info, all info must flow is just another way of
       | admitting no one knows what the hell is important.
       | 
       | Whatever attempts are made to curate or control it will be half
       | baked cause seriously who the hell knows anymore what is
       | important or not. Who can keep up? Expect librarians and curators
       | to start forming cults and jumping out of windows if you buy
       | Borges handling of the story.
       | 
       | It is scary how the expectation is that Chimps with their 6 inch
       | brains have to then navigate this vast ever growing ocean without
       | any real authority figures left to guide them.
       | 
       | He is sort of right that science fiction hasn't really covered
       | the info tsunami problem much.
        
         | Mediterraneo10 wrote:
         | My example of the content tsunami is the explosion of copied
         | blogs: someone finds an authentic blog, hires someone in a
         | developing country on a freelancing platform to rewrite each
         | blog post so that no copyright violation is detected, and then
          | loads the new forged blog with advertising and SEO.
          | Iterations happen where authentic blogs are no longer
          | being copied; new forgeries are based on earlier
          | forgeries.
         | 
         | This started with recipe blogs, but is now slowly spreading
         | through all kinds of other hobbies and interests. Someone
         | looking for information about the hobby can't find the
         | straightforward content among all the advertising-laden copies.
         | 
         | With regard to science-fiction predicting the info tsunami,
         | Roger MacBride Allen's _The Ring of Charon_ from 1990 uses it
         | as a plot point (but this is not otherwise a very good book).
        
         | Bakary wrote:
         | You can learn to wade through all of that eventually, because
         | the fundamental problems of life remain the same. What info can
         | help purchase your freedom from want? What info leads to
         | situations where you feel fully alive doing things that
         | genuinely interest you? These two lodestars, if applied
         | thoughtfully, make navigating the tsunami not only possible but
         | maybe even uncomfortably easy, no longer leaving room for the
         | procrastination on your true desires that was previously
         | justified by the growing tumorous mass of media.
        
         | ThomPete wrote:
         | I agree and that seems to be the real problem. Like with sugar,
         | carbs and other examples of abundance.
         | 
         | I wrote about this 10 years ago:
         | 
         | http://000fff.org/slaves-of-the-feed-this-is-not-the-realtim...
        
       | TaupeRanger wrote:
       | "Machine learning" is a terrible name for what is much more
       | accurately labeled: "computational statistics". There is no
       | reason to expect magic from this field, and we won't get it.
       | Other avenues to intelligence, learning, and thought will need to
       | be explored.
        
       | bregma wrote:
       | It's funny that the article ends up describing Pratchett's
        | L-space almost to a tee. Perhaps he's just reading the
        | wrong genre, and any discussion of the humanities is not
        | best served by leaning on science alone.
        
         | mcphilip wrote:
         | Glad I'm not the only one who thought of L-space.
         | 
          | It'd be interesting to see if you could get something
          | like GPT-3 to heavily weight a given author's corpus when
          | generating output and see what it spits out for "Detritus
          | and Carrot walked into a dwarf bar..."
        
           | nutanc wrote:
           | If anyone is interested :) Output from GPT-3:
           | 
           | Detritus and Carrot walked into a dwarf bar.
           | 
           | "How do we get in?" asked Carrot, looking around.
           | 
           | "That's a good question," said a dwarf. "You look like you're
           | from far away."
           | 
           | "We are. We're from Arbouretum."
           | 
           | "That's a long way away."
           | 
           | "That's where the Arbouretum are from."
           | 
           | "The Arbouretum?"
           | 
           | "Yeah."
           | 
           | "What does that mean?"
           | 
           | "It's a long story."
           | 
           | "I've got time."
           | 
           | "I've got no time."
        
             | zurfer wrote:
              | That's like bad small talk: it tries to sound smart by
              | repeating the same phrases over and over again, and
              | it's shallow and boring.
             | 
             | Still thanks for posting. I found it interesting ;)
        
       | bezout wrote:
       | Why are people obsessed with AGI? I agree that it's an exciting
       | scientific challenge, but I don't think it'll solve all of our
       | problems.
       | 
       | Additionally, I'm not sure we must necessarily strive for
       | machines that think like humans. It could be useful for some use
       | cases but I would argue that you can still take advantage of
       | purely data-driven models.
        
         | bostonpete wrote:
         | Isn't the main attraction of AGI to enable systems to reason
         | about and deal with out-of-domain situations? Can you get there
         | with a purely data-driven approach?
        
           | bezout wrote:
           | No, I think that data-driven approaches have limitations.
           | And, yes, the main attraction of AGI is (human) reasoning.
           | The fact is: do we think that human reasoning is enough? Do
           | we think that using machines to scale human reasoning is
           | enough?
        
             | wombatmobile wrote:
              | Human reasoning is always contextualised by the
              | human's assets: wealth, beauty, status, ideology,
              | character, temperament, location, affiliations.
        
       | jondiggsit wrote:
        | Here's machine learning in sci-fi: take the film Her by
        | Spike Jonze. From the point the main character turns on
        | ScarJo, it would be another 5 seconds until she "leaves"
        | and the film is over.
        
       ___________________________________________________________________
       (page generated 2021-02-07 23:02 UTC)