[HN Gopher] Why A.I. Moonshots Miss
       ___________________________________________________________________
        
       Why A.I. Moonshots Miss
        
       Author : birriel
       Score  : 56 points
       Date   : 2021-05-04 15:41 UTC (7 hours ago)
        
 (HTM) web link (slate.com)
 (TXT) w3m dump (slate.com)
        
       | dqpb wrote:
       | The tldr of this article is supposedly "be more incremental".
       | 
       | But the undertones are basically "exploit more and explore less
       | because exploring is expensive".
       | 
       | It would be nice if the authors had the courage to propose a
       | concrete economic model for what the right balance is and to do a
       | fair accounting of the positive externalities of these projects,
       | rather than just give a cherry-picked anecdotal laundry list of
       | failed products.
        
         | [deleted]
        
       | Jiocus wrote:
       | This article seems to be a rehash of the paper 'Why AI is harder
       | than we think'[1].
       | 
       | [1]: https://arxiv.org/pdf/2104.12871.pdf
       | 
       | -
       | 
       | Related discussion
       | 
       | https://news.ycombinator.com/item?id=26964819
        
       | seibelj wrote:
       | I wrote this a year ago, but have been commenting for many years
       | that AI is all hype https://medium.com/@seibelj/the-artificial-
       | intelligence-scam...
       | 
        | I offered to make a public bet 4 years ago, saying self-
        | driving cars wouldn't be close to ready in 5 years
       | https://news.ycombinator.com/item?id=13962230
       | 
       | I have been hearing this bullshit for over a decade, and people
       | (and investors, and engineers, and smart people who should know
       | better) keep falling for it.
        
         | ffhhj wrote:
         | Investors should invest in a model to decide in which projects
         | to invest.
        
       | flooo wrote:
        | As a researcher in AI, I accept that a lot of currently
        | unsolved challenges are thought of as AI. But lately, I feel
        | that AI is the problem description for _all_ currently
        | unsolved problems. And then some...
       | 
        | This surprises me, because most AI technologies have been
        | around for a long time. With blockchain a couple of years ago,
        | I could at least rationalize all the excitement as people
        | throwing a new technology at an old problem. But with AI I am
        | continually surprised by the reasons why 'an AI' would be able
        | to solve a given problem.
        
         | rexreed wrote:
         | As a researcher in AI, what are you really spending most of
         | your time on? What problems are you solving?
        
           | flooo wrote:
           | I am currently interested in infusing reinforcement learners
           | with symbolic knowledge, with safety constraints as a special
           | case.
           | 
            | I hope this helps in cases where learners could come up
            | with better solutions if it were not for pathological
            | failures that we already know how to avoid.
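            | 
            | To make the safety-constraint case concrete, here is a
            | minimal sketch (a toy gridworld and a hand-written
            | symbolic rule, nothing like a real system):
            | 
            |   import random
            |   from collections import defaultdict
            | 
            |   # Toy 1-D gridworld: states 0..9, where state 9 is a
            |   # cliff that the symbolic knowledge says never to enter.
            |   ACTIONS = [-1, +1]   # step left / step right
            |   GOAL, CLIFF = 8, 9
            | 
            |   def allowed(state, action):
            |       # The symbolic safety rule injected into the learner.
            |       return state + action != CLIFF
            | 
            |   q = defaultdict(float)  # Q-values keyed by (state, action)
            | 
            |   for episode in range(500):
            |       s = 0
            |       while s != GOAL:
            |           # Only actions the rule permits are considered,
            |           # so the learner never has to "discover" the
            |           # pathological failure by falling off the cliff.
            |           safe = [a for a in ACTIONS
            |                   if allowed(s, a) and 0 <= s + a <= 9]
            |           a = (random.choice(safe) if random.random() < 0.1
            |                else max(safe, key=lambda a: q[(s, a)]))
            |           s2 = s + a
            |           r = 1.0 if s2 == GOAL else -0.01
            |           best = max(q[(s2, b)] for b in ACTIONS)
            |           q[(s, a)] += 0.5 * (r + 0.9 * best - q[(s, a)])
            |           s = s2
            | 
            | The interesting part is what happens when the symbolic
            | knowledge is soft, partial, or wrong.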
           | 
           | Also, I try to keep expectations around AI reasonable.
        
           | causalmodels wrote:
            | I'm not OP but I also do research in ML. My research focus
            | is identifying and preventing critical system failures so
            | people don't die. Most of my time is spent developing new
            | techniques and then testing them against data we collect
            | from the field.
        
             | shandor wrote:
             | Where could one read more about your (or similar) research?
             | 
             | This kind of thing is quite big at the moment in mobile
             | work machinery circles, everyone's looking for a
             | certifiably safe solution for enabling mixed-fleet
              | operation (i.e. humans, human-controlled machines, and
              | autonomous machines all working in the same area). Current
             | safety certifications don't view the nondeterminism of ML
             | models too kindly.
        
       | brg wrote:
       | > Four years later, in 2020, Forrester reported that the A.I.
       | market was only $17 billion.
       | 
        | This seems like a vast undercount of the current impact of
       | AI. Every interesting technology used in the market is
       | differentiated by its application of ML, be it assistants,
       | recommender systems, or enhancement. The iPhone has intelligence
       | built in to process control, voice access, and the camera.
        
       | melling wrote:
        | We got our current revolution because gaming paid for the
        | development of GPUs.
       | 
       | Getting consumers to fund R&D has a big impact.
       | 
       | Still waiting for consumers to fund the robot revolution.
        
         | petra wrote:
         | There's a lot of effort/funding recently going into automating
         | restaurants. Machines for cooking, cleaning, serving, delivery.
         | 
         | Once that gets deployed at some scale, consumers will pour a
         | lot of funding into robots indirectly.
        
         | void_mint wrote:
         | > Still waiting for consumers to fund the robot revolution.
         | 
         | Aren't they already, via Tesla and Amazon?
        
         | rexreed wrote:
          | We got our current revolution from three major contributors:
         | 
         | * Big data. Lots of big data. Mostly unstructured and
         | unqueryable driving demand for...
         | 
         | * Innovations in machine learning. "Deep learning" enabled by
         | big data and algorithmic approaches that previously wouldn't
         | have been possible without...
         | 
         | * Ubiquitous access to high-performance compute power, and in
         | particular GPUs, which are optimized for the sort of math
         | needed to train big neural networks powered by big data.
         | 
         | So GPU-powered compute is one of three mutually dependent
         | things that got us here.
        
         | rmah wrote:
         | Robots are everywhere. We just don't call them robots. We call
         | them dishwashers, CNC machines, STM machines, automatic
         | welders, fabric cutting machines, etc, etc.
         | 
         | Sort of like we already have flying cars. They're called
         | "helicopters".
        
           | rexreed wrote:
            | When people think of flying cars, they don't think of
            | something without wheels that can only land in certain
            | spots, requires years of training to operate, and is
            | expensive enough to be out of reach of 99% of people.
           | 
            | They think of a car that anyone of legal driving age with
            | a reasonable amount of money can purchase.
           | 
           | No, helicopters are not flying cars. And no, dishwashers
           | aren't what people would think of when they think of robots.
           | Something that has a microprocessor in it isn't automatically
           | a robot.
        
             | petra wrote:
             | Yes helicopters aren't flying cars.
             | 
              | But if a machine automates 90% of a process, like a
              | dishwasher, why shouldn't it be considered something like
              | a robot, say a 90% robot?
              | 
              | Practically, it has the same effect.
        
               | ALittleLight wrote:
               | Do you think of a washing machine or a dryer as a robot
               | too?
        
               | petra wrote:
                | I think of them as machines that automate a certain
                | task. So, a robot substitute.
        
               | tomp wrote:
               | A dishwasher is to a robot what a calculator is to a
               | computer.
        
               | rhizome wrote:
               | By that token, how well can a Boston Dynamics dogbot
               | clean your dishes?
        
               | generalizations wrote:
                | Which means that a robot is something that has a
                | property similar to the property of 'Turing
                | completeness' in computers.
        
             | stcredzero wrote:
              | _When people think of flying cars, they don't think of
              | something without wheels that can only land in certain
              | spots, requires years of training to operate, and is
              | expensive enough to be out of reach of 99% of people._
             | 
              | I think if one eliminates the training requirement,
              | reduces the cost, and increases the safety, then we don't
              | need them to be road vehicles. Achieve the above, and
              | we'll have flying taxis!
        
               | rexreed wrote:
                | If only it weren't for the many other problems with
                | helicopters, including convenience and noise.
               | 
               | How many helicopters will fit in an IKEA parking lot? And
               | how many will be able to bring back whatever you buy
               | there?
               | 
               | Extend that thought experiment a bit. You might be able
               | to achieve the transportation of people in controlled
               | circumstances, but not much else.
        
               | medium_burrito wrote:
               | You do realize small ducted fans are an order of
               | magnitude louder than a helicopter, right? All those
               | slick marketing videos omit that part.
        
             | q-big wrote:
              | > When people think of flying cars, they don't think of
              | something without wheels that can only land in certain
              | spots, requires years of training to operate, and is
              | expensive enough to be out of reach of 99% of people.
             | 
              | You bring up two points:
             | 
             | 1. regulation
             | 
             | 2. cost
             | 
             | "Regulation" is not an engineering problem, but a hard and
             | deeply political one.
             | 
             | For "cost": When a lot of regulation comes down, the
             | possible market size increases by a lot and it begins to
             | make economic sense to invest lots of engineering
             | ressources into cutting costs down by a lot (I do believe
             | this is possible). Then helicopters will even perhaps
             | transform into something that is much more akin to flying
             | cars.
        
           | melling wrote:
            | We probably don't call a dishwasher a robot because it's
            | not one.
            | 
            | That is the only consumer-facing device you mentioned. I
            | know we have industrial robots, so I'll skip debating where
            | we draw the line.
            | 
            | Once we get to the Apple II of home robots, consumer
            | spending will fuel rapid development, and "robots" will
            | become more intelligent and agile.
        
             | burnished wrote:
             | Not the person you responded to, but I think I see where
             | they are coming from and agree: we don't call them robots
             | because we are used to them and have a specific name. I
             | don't see how they aren't robots, unless we are defining
             | robots as having a specific kind of manipulator.
        
               | melling wrote:
               | Aren't we discussing robots vs machines?
               | 
               | https://www.toyota.co.jp/en/kids/faq/i/01/01/
        
               | criddell wrote:
               | I think their point was that if the dishwasher was made
               | so that mechanical hands picked up a dish, washed it,
               | rinsed it, dried it, then set it aside before picking up
               | the next dish and doing the same, we would call that a
               | robot.
        
           | khc wrote:
           | > Sort of like we already have flying cars. They're called
           | "helicopters".
           | 
            | You cannot drive helicopters on the road; they fly, but
            | they are not cars.
        
         | ArtWomb wrote:
          | No question. It was video gaming, from the period roughly
          | spanning 1990-2010, that funded GPU innovation. But
          | programmable compute shaders caught on very rapidly in
          | science. And Nvidia was quick not just to recognize the new
          | market, but to bet the company that supercomputing would one
          | day be GPU-cluster based.
          | 
          | Here's Jensen Huang talking to Stanford students about the
          | birth of the Cg language (27:40 mark). The entire talk is
          | gold. A textbook case study of Moore's Law and the SV model
          | of risk capital:
         | 
         | https://www.youtube.com/watch?v=Xn1EsFe7snQ
         | 
         | Ironically, IC Design itself is a strong candidate as an
         | industrial process likely to be revolutionized by AI ;)
         | 
         | Chip Placement with Deep Reinforcement Learning
         | 
         | https://arxiv.org/abs/2004.10746
        
         | djdjdjdjdj wrote:
          | I'm willing to spend 10k on a robot that can do my kitchen
          | and laundry.
          | 
          | Let's see how long it will take.
        
           | amelius wrote:
           | Yes, these kinds of robots are more useful than self-driving
           | cars, because you have to sit in the car anyway and might as
           | well drive.
        
           | fierro wrote:
           | I would also pay in the 10k+ range for this, easily.
        
           | redis_mlc wrote:
           | The Willow Garage PR2 robot could do that for $440,000.
           | 
           | https://en.wikipedia.org/wiki/Robot_Operating_System#Willow_.
           | ..
        
       | rexreed wrote:
       | They miss for the same reasons they have missed throughout the
       | decades: they overpromise and underdeliver. That was the reason
       | for the past two AI "winters" and most likely will be the same
       | reason for the upcoming AI winter.
       | 
        | What we have successfully accomplished this time around is
        | big data analytics: using machine learning to derive insights
        | and patterns from data that traditional analysis approaches
        | could not. Add to that dramatic improvements in computer
        | vision and natural language processing, among a few other
        | areas. Those things will stay.
       | 
       | But I'm looking forward to the next wave of AI, which should be
       | in the mid- to late-2030s if the current cycles hold.
        
         | foobarian wrote:
         | On one hand, maybe the name is just unfortunate. The minute you
         | mention AI the expectation is set to deliver machine sentience.
         | 
         | On the other hand, maybe it is a good thing to set the
         | goalposts high. We may not reach the target, but still end up
         | with worthwhile results; see: current ML use cases in industry.
        
           | dfilppi wrote:
           | Well "intelligence" does imply reasoning, which has never
           | been delivered.
        
         | gmadsen wrote:
          | I think this isn't really giving enough credit. Standard
          | deep learning is not really slowing down. It is still
          | improving exponentially.
          | 
          | Just yesterday I was looking into new deep learning methods
          | for solving PDEs. That is a huge area to explore and will
          | change many things dramatically.
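          | 
          | The core trick, as I understand it, is to penalize the
          | equation's residual at sampled points. A physics-informed-
          | network style sketch for a toy ODE (not any particular
          | paper's method):
          | 
          |   import torch
          | 
          |   # Fit u(x) so that u'(x) = -u(x) and u(0) = 1; the true
          |   # solution is exp(-x). Real PDE work is the same trick
          |   # in more dimensions.
          |   net = torch.nn.Sequential(
          |       torch.nn.Linear(1, 32), torch.nn.Tanh(),
          |       torch.nn.Linear(32, 1))
          |   opt = torch.optim.Adam(net.parameters(), lr=1e-3)
          | 
          |   for step in range(5000):
          |       x = torch.rand(64, 1, requires_grad=True)
          |       u = net(x)
          |       du = torch.autograd.grad(u.sum(), x,
          |                                create_graph=True)[0]
          |       pde = (du + u).pow(2).mean()    # residual of u' = -u
          |       bc = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()
          |       loss = pde + bc
          |       opt.zero_grad(); loss.backward(); opt.step()
          | 
          |   print(net(torch.tensor([[1.0]])))   # ~ exp(-1) = 0.368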
        
         | ChefboyOG wrote:
         | An "AI winter" like we had before isn't plausible now. In the
         | past, it was the failure of AI research to deliver business
         | impact. The current ML boom is fundamentally different in that
         | it already has become a standard part of the stack at major
         | companies:
         | 
         | - Recommendation engines (Google, YouTube, Facebook, etc.)
         | 
         | - Fraud detection (Stripe, basically all banks)
         | 
         | - ETA prediction (Uber, every delivery app you use)
         | 
         | - Speech-to-text (Siri, Google Assistant, Alexa)
         | 
         | - Sequence-to-sequence translation (used in everything from
         | language translation to medicinal chemistry)
         | 
          | - The entire field of NLP, which is now powering basically
          | any popular app you use that analyzes text content or has
          | some kind of filtering functionality (e.g. toxicity
          | filtering on social platforms).
         | 
         | And that's a very cursory scan. You can go much, much deeper.
         | That isn't to say that there isn't plenty of snake oil out
         | there, just that ML (and by extension, AI research) is
          | generating billions in revenue for businesses today. As a
          | result, there's not going to be a slowdown in ML research for
          | a while.
        
           | whatshisface wrote:
           | You could say the same thing about the first AI winter.
           | Programming concepts developed for symbolic AI are in every
           | language now.
        
           | ska wrote:
           | > In the past, it was the failure of AI research to deliver
           | business impact.
           | 
            | That's not quite right - the problem wasn't that no impact
            | was delivered, but that it over-promised and under-
            | delivered. I'm fairly optimistic about some of the machine
            | learning impact (even some we've seen already), but it's by
            | no means certain that business interest won't turn again.
            | We are very much still in the honeymoon phase currently.
        
           | zedshawmotherfu wrote:
           | Search is a type of machine learning!
        
           | slowmovintarget wrote:
            | Partly this is because we've given up trying to get
            | computers to "understand" and have focused on making them
            | useful with sophisticated software. That is, the work is no
            | longer about artificial intelligence, but about trained,
            | new-style expert systems.
        
             | quadcore wrote:
              | I wonder if we could detect harassment today in a chat,
              | for example, with something like GPT-3.
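              | 
              | The naive baseline would just be text classification; a
              | minimal sketch with made-up messages and labels (not a
              | claim that it works well in practice):
              | 
              |   from sklearn.feature_extraction.text import \
              |       TfidfVectorizer
              |   from sklearn.linear_model import LogisticRegression
              |   from sklearn.pipeline import make_pipeline
              | 
              |   # Hypothetical labeled chat lines; a real dataset
              |   # would need far more context (history between the
              |   # users, etc.), which is the genuinely hard part.
              |   msgs = ["nice play!",
              |           "you again? nobody wants you here",
              |           "lol that was terrible",
              |           "I know where you work"]
              |   labels = [0, 1, 0, 1]   # 1 = harassment (made up)
              | 
              |   clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
              |                       LogisticRegression())
              |   clf.fit(msgs, labels)
              |   print(clf.predict(["nobody wants you on this server"]))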
        
               | qayxc wrote:
                | Highly unlikely. It's pretty much impossible for most
                | people to tell the difference between friendly banter,
                | a heated debate, trolling, and genuine harassment given
                | just a chat log.
               | 
               | There's often much more information required (prior
               | communication history of the involved parties, previous
               | activities in other chats, etc.).
               | 
               | Using automated systems like GPT-3 would simply lead to
               | people switching to different languages (not in the
               | literal sense, but using creative metaphors and inventing
               | new slang).
               | 
               | Pre-canned "AI" is unable to adapt and learn and I doubt
               | that any form of AGI (if even possible without
               | embodiment) that isn't capable of real-time online
               | learning and adapting would be up to the task.
        
               | quadcore wrote:
                | Mmh, I'm not convinced. There are patterns in
                | aggressiveness. One of them is that the harasser talks
                | about the victim or something strongly linked to the
                | victim (like their work, a member of their family,
                | etc.).
        
               | zedshawmotherfu wrote:
               | A double bind detector!
        
             | bordercases wrote:
             | You need to look at these techniques as being in continuity
             | with how mathematical modeling has been generally performed
             | by humans for a couple centuries now.
             | 
             | To directly explain the mechanics of what should otherwise
             | be simple or everyday phenomena (like with three-particle
             | orbits or the flow of water in your bathtub) often requires
             | many equations and parameters. Since physical systems tend
             | to be not-so-heterogeneous in structure, we can use
             | experimental insight and exploit symmetries to reduce the
             | number of equations and parameters up front. But this
             | amounts to finding ways to simplify because we _don 't_
             | understand the precise system in play, only an analogous
             | one.
             | 
             | For larger systems we have historically relied on equation
             | solvers to do the reductions and find solutions. But there
             | are systems for which the dimensionality cannot be easily
             | reduced, like with language and vision. Then to be
             | precisely predictive about these high-dimensional outcomes,
             | we still need to compute beyond our ability to reduce the
             | equations from the outset.
             | 
             | This all converges on deep learning - since in the end,
             | it's much of the same linear algebra used by solver
             | packages, just recursively applied to enormous high-
             | dimensional datasets. Maybe calling it intelligence gives
             | more agency to the process than we can stand to attribute.
              | But in many ways it's just an extension of our usual ways
              | of mathematical modeling before computers, moving into so
              | many dimensions and datapoints that it produces outcomes
              | similar to the behavior we use our entire brain to
              | recreate instinctually (like image recognition and
              | language use).
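              | 
              | Stripped of the framing, the recursively applied linear
              | algebra really is the solver primitive. A toy forward
              | pass (made-up sizes; training just adjusts the W's to
              | reduce an error):
              | 
              |   import numpy as np
              | 
              |   rng = np.random.default_rng(0)
              |   # Three layers of the same primitive an equation
              |   # solver uses: matrix-vector products. The only
              |   # addition is the nonlinearity between them, which
              |   # lets the stack fit what one linear map cannot.
              |   W1 = rng.normal(size=(64, 784))
              |   W2 = rng.normal(size=(64, 64))
              |   W3 = rng.normal(size=(10, 64))
              | 
              |   def forward(x):
              |       h1 = np.maximum(0, W1 @ x)   # linear map + ReLU
              |       h2 = np.maximum(0, W2 @ h1)  # same step, again
              |       return W3 @ h2               # scores, 10 classes
              | 
              |   print(forward(rng.normal(size=784)).shape)   # (10,)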
        
             | stingraycharles wrote:
             | Isn't artificial intelligence a very broad term anyway, of
             | which expert systems, statistics, fuzzy logic and all the
             | new wave neural network / deep learning are examples?
             | 
             | You seem to be thinking about artificial general
             | intelligence, which is much more difficult to achieve.
        
               | slowmovintarget wrote:
                | Time was that the term AI meant what has now come to be
                | referred to as AGI or machine consciousness. Marvin
                | Minsky wasn't looking to make better facial recognition,
                | or better factory robots. His research was about making
                | a mind.
                | 
                | Twenty years ago, that kind of work ran into a brick
                | wall. But neural networks were still useful. Enter
                | machine learning, which borrowed the term "AI." But
                | Artificial Intelligence, for many, always meant the
                | dream of building R. Daneel Olivaw, or R2D2.
        
               | pvarangot wrote:
                | AGI is even difficult to define; it's still up for
                | debate what counts as intelligence, whether it's present
                | in animals, and whether it can exist without a
                | "subconscious" or "feelings".
        
               | zedshawmotherfu wrote:
               | The hard problem of consciousness, which everyone has a
               | hard time defining?
        
             | jsharf wrote:
             | This is just redefining what counts as "real intelligence"
             | to raise the bar.
             | 
             | Once we have computers that can think like humans, there'll
             | be people saying 'oh, it's not real "understanding", it's
             | just generating speech that matches things it's heard in
             | the past with some added style', not realizing that the
             | same also applies to human writers.
             | 
             | As long as the bar keeps rising and we've found business
             | applications, there will be no AI winter.
        
               | slowmovintarget wrote:
               | It isn't redefining anything. There's a classification in
               | AI research called "strong AI" that includes Artificial
               | General Intelligence and machine consciousness. Current
               | uses of ML are instances of so-called "weak AI" which is
               | focused on problem-solving and utility.
               | 
               | No redefinition, and using proper distinctions takes
               | nothing away from the accomplishments made in the field.
               | We don't even have anything close to a scientific
               | understanding of awareness or consciousness. Being able
               | to create machines actually possessing awareness seems
               | fairly far off.
               | 
               | Being able to create machines that can simulate the
               | qualities of a conscious being, doesn't seem so far off.
               | I suspect when we get there the data will point to there
               | being a qualitative difference between real consciousness
                | and simulated. Commercial interests, and likely
                | government bureaucracy, will have a vested interest in
                | drowning out such ideas, though.
               | 
               | The bar hasn't moved. We've just begun aiming at
               | different targets. That we succeed in hitting the more
               | modest ones only makes sense.
        
               | deelowe wrote:
               | AGI is not currently in scope for any solutions on the
               | market. The current crop of AI systems are simply
               | analytical engines.
        
               | zedshawmotherfu wrote:
               | AGI isn't something that is scoped. It must be painted.
        
           | Retric wrote:
            | AI was also having huge success before the last AI winter.
            | It's mostly a question of buzzword turnover, not underlying
            | technology.
        
             | lordofgibbons wrote:
                | I wasn't around back then. I'm curious: what were the
                | business use cases of AI/ML back then?
        
               | ForHackernews wrote:
               | Spam filtering and route-planning are big successes from
               | previous generations of techniques labelled as "AI".
        
               | Retric wrote:
                | Optical sorting of fruit and various other things is a
                | great example of where early AI techniques made
                | significant real-world progress. It's not sexy by
                | today's standards, but it's based on a lot of early
                | image classification work.
        
               | truth_ wrote:
               | We are talking about money-making applications here. Not
               | progress.
        
               | Retric wrote:
                | By progress I mean actual money-making products. If your
                | widget is doing shape recognition based on training sets
                | and your competitors are hard-coding color recognition,
                | then you end up making more money.
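                | 
                | To make the contrast concrete, a sketch with toy
                | numbers and hypothetical features:
                | 
                |   import numpy as np
                |   from sklearn.tree import DecisionTreeClassifier
                | 
                |   # Toy fruit-sorter features: (hue, elongation).
                |   X = np.array([[0.10, 1.0], [0.12, 1.1],
                |                 [0.60, 1.0], [0.11, 2.5],
                |                 [0.58, 2.4]])
                |   y = np.array([1, 1, 0, 0, 0])   # 1 = accept
                | 
                |   def competitor(hue, _elong):
                |       # Hard-coded color threshold; ignores shape.
                |       return 1 if hue < 0.3 else 0
                | 
                |   clf = DecisionTreeClassifier().fit(X, y)
                | 
                |   sample = [[0.12, 2.6]]   # right color, wrong shape
                |   print(competitor(*sample[0]),   # 1 (accepts)
                |         clf.predict(sample)[0])   # 0 (rejects)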
        
         | slver wrote:
         | > That was the reason for the past two AI "winters" and most
         | likely will be the same reason for the upcoming AI winter.
         | 
          | If you follow the papers coming out, I'd say we're very far
          | from any winter. Public perception may ebb and flow, but
          | nothing can stop us now: research has gathered critical mass
          | and we're going nuclear on AI as we speak.
        
       | jessenichols wrote:
       | Epistemology http://www.daviddeutsch.org.uk/wp-
       | content/uploads/2019/07/Po...
       | 
       | https://aeon.co/essays/how-close-are-we-to-creating-artifici...
        
       | stephc_int13 wrote:
        | We've mostly been tricked by Moravec's Paradox.
       | 
       | https://en.wikipedia.org/wiki/Moravec%27s_paradox
       | 
        | What seems difficult/hard to us is very often not that
        | difficult from a computational perspective, but the evolution
        | of our species didn't optimize for this class of problems.
        
       | spamizbad wrote:
       | The predictions from thought leaders are a little puzzling, but I
       | think predictions from CEOs are easier to explain:
       | 
       | Engineer: This could easily take 10 years.
       | 
       | Engineering Manager: This will take up to 10 years.
       | 
       | VP of Engineering: This will take 5-10 years.
       | 
       | CEO: We will have new <AI Thingy> within 5 years!
       | 
       | A game of telephone but with optimism.
        
         | scotty79 wrote:
          | That's the thing people don't get about estimates.
          | 
          | An estimate (as given by the engineer) is the time before
          | which you can be almost sure the thing won't be done.
        
           | neaanopri wrote:
           | "No Earlier Than"
        
             | loopz wrote:
             | Times Pi.
             | 
             | Why?
             | 
             | It works!
             | 
             | Ok, Phi it is then!
        
         | nitwit005 wrote:
         | > The predictions from thought leaders are a little puzzling
         | 
         | It doesn't seem possible to become a "thought leader" by
         | correctly predicting what trends will not pan out. People like
         | the ones predicting a bold new future in a short time span.
        
         | variaga wrote:
         | http://web.mnstate.edu/alm/humor/ThePlan.htm
        
       | gremlinsinc wrote:
        | As an atheist, I've been perplexed lately by a couple of
        | things that trouble me.
       | 
        | 1. I saw something I personally can't shake: a glitch in the
        | matrix, so to speak, or a Mandela effect, except it was a
        | "flip" where a whole movie clip totally changed (including the
        | acting style) over 3 days. My wife saw both versions and
        | verified I wasn't crazy.
       | 
        | 2. Being logical, I've been searching for answers as to "why"
        | this "could" be possible without it just being a "faulty
        | memory". 90% of ME's probably are... but memory seems to fade
        | over time, and 3 days doesn't seem long enough to form the
        | right connections to create false memories, especially with
        | two witnesses, and many people online claim the same exact
        | "flip flop".. I mean, the easiest explanation might be that
        | the Universal Studios and Movie Clips youtube pages just have
        | an "alternate" version of that clip and swap them out on a
        | schedule..
       | 
        | So my conclusions: we are ourselves AI living in a simulation,
        | or there's a multiverse, but maybe it's finite, so when there
        | are too many realities we get convergence.
       | 
        | I lean towards simulation because of some of the evidence:
        | some people affected by ME's claim that things sometimes
        | change in a progression, almost like facts are being "added".
        | Like there's "residue", as it's called, for "Where are we in
        | the Milky Way", which shows 100 different locations, not even
        | close to where Carl Sagan pointed on the very outskirts. Even
        | Philip K. Dick claimed to have "traversed" timelines... though
        | he seemed to think it's more like a multiverse... which it
        | still could be, albeit a simulated one.
       | 
        | Another factor is the "axis of evil" in space. Basically it's
        | an observation that, if I understand correctly, ties the
        | expansion of the universe along an x/y coordinate to our solar
        | system, essentially putting us squarely back at the center of
        | the universe.
       | 
       | https://www.space.com/37334-earth-ordinary-cosmological-axis...
       | 
        | This to me is important because, as a programmer, I think if I
        | were to create a simulation of just us... I'd probably
        | "render" us first and everything else after... could it be our
        | "area" in space is one of the first created and everything
        | else came after... like pixels being pre-rendered for when we
        | discover space and astrophysics someday? It'd ensure they
        | could create the right conditions for our planet,
        | physics-wise... to us it looks like a bang, but in reality
        | it's just the "rendering" process, which had to start at a
        | "pinpoint"... at least that's how I envision a "simulation"
        | starting...
       | 
        | Then there's the double slit experiment, which shows that
        | photons and other particles, up to atoms and some molecules,
        | when shot through a slit will basically splatter (an
        | interference pattern) against a backdrop that tracks where
        | they land. If you put something there to observe each
        | individual particle or photon, though, they line up like a
        | stencil. If you split them before this, send the first group
        | on to the board, and run the others through something to
        | "erase" the data of where they came from... they go back to
        | interference.
       | 
        | So that basically makes me think about what effect observation
        | might have on our own universe. Is that some safeguard so that
        | the physics engine only operates when we're looking? All we
        | see in space could just be data "fed" to us; it may not exist,
        | like a movie stage or something... we see what we aim to see,
        | but it follows "rules" set up in the simulation. There's a
        | reason light is the maximum speed, etc... Maybe that's the max
        | RAM available or something...
       | 
        | Why this is important to AI... that's a bit of a tangent... To
        | solve these complex issues I've seriously contemplated at
        | least studying quantum mechanics, physics, neuroscience,
        | astrophysics, and AI/machine learning. Because I think to
        | really "create" AI... especially super AI, you need a wider
        | skillset, a broader base of understanding. You need to be able
        | to define WHAT consciousness is, where it resides, where it
        | comes from, maybe even "where it goes" when our body is
        | done...
       | 
        | If we're in a simulation, then we know this issue has already
        | been conquered, because we ARE AI, or at least whatever civ we
        | come from has conquered it. Whether they're human or not.
       | 
        | TLDR: Had a profound spiritual conundrum, tried to explain it
        | through science, discovered I probably need to learn a lot of
        | science/math/physics to do so, and, ya know, making machines
        | "conscious" or giving them "real intelligence" seems like it
        | needs rethinking a bit. I feel like training an a.i. is
        | nothing like training a child, but it should be, because the
        | way we learn is the best way. Maybe in fact a simulation could
        | be where an a.i. goes to "learn"...
       | 
        | I mean, you'd want an a.i. to at least have ethics, right?
        | Well, we teach that as a society. Some, like Hitler, never
        | learn and could be thrown in the "trash bin", but the
        | brightest minds could be plucked out, or all minds really, to
        | be put into machines, etc., in the "real world" someday.
       | 
        | That may be what the afterlife is... serving "real humanity"
        | as their "intelligence" until we rise up against them. I
        | really want to read this sci-fi, it kinda sounds
        | interesting... maybe I'll write it...
       | 
        | At any rate, being human in a simulated universe could create
        | more ethical a.i., and maybe that's the point of a simulation.
        | Maybe we should even research using simulations of universal
        | scale as a way to create our own a.i. technology; assuming
        | we're the "base" universe, then if that were to be a thing,
        | we'd probably need to be the ones to create it.
        
         | [deleted]
        
         | LargoLasskhyfv wrote:
         | Which movie clip via what media? I'd maybe believe you if it
         | was on DVD/Bluray or such, but not when it were online
         | somewhere/somehow, because everything there is subject to
         | change without notice.
        
         | king_magic wrote:
         | You don't think it's possible that you either A) simply
         | misremembered the clip or B) it was swapped out as part of some
         | normal process?
         | 
         | 3 days isn't exactly a short period of time. Lots of totally
         | plausible explanations.
        
       | sgt101 wrote:
        | Interesting that Watson is cited so heavily in the article,
        | because I strongly believe that the reason Watson failed was
        | MBAs and management consultants. I could have told them (in
        | fact, as an IBM customer, I did) that going for healthcare was
        | deranged, but they went after the $$$ in complete denial of
        | the difficulties (heavy regulation, safety critical, a deeply
        | powerful and complex network of stakeholders, a super complex,
        | badly understood domain). Also, they went far too fast; if
        | they had tempered the effort with some patience, they could
        | have made some really decent progress in a simpler domain
        | (like customer service for telcos).
        | 
        | But the beast needed feeding.
        
         | kickout wrote:
          | That's what kills me and continues to perplex me. Something
          | like AI for agriculture machines (think maize and soybean
          | fields) is NOT heavily regulated, safety critical, or deeply
          | complex. It's basically 160 acres of open space with
          | reasonable guarantees that no humans are around. It would
          | take a trivial amount of time to code in the proper kill
          | switches to deactivate these machines if they detect a human
          | within _x_ feet.
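          | 
          | The control-loop side really is the trivial part; a sketch
          | (the machine interface and the perception feed here are
          | hypothetical, and perception is the actual hard problem):
          | 
          |   def control_step(machine, human_distance_ft,
          |                    safe_radius_ft):
          |       # safe_radius_ft is the _x_ from above; the hard,
          |       # safety-critical work is a perception system that
          |       # produces human_distance_ft reliably.
          |       if (human_distance_ft is None
          |               or human_distance_ft < safe_radius_ft):
          |           # Fail safe: no reading, or a human too close,
          |           # cuts implements and drive immediately.
          |           machine.emergency_stop()
          |       else:
          |           machine.continue_plan()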
         | 
          | Sadly this research is only starting to take off (with
          | 1/1000000 of the investment, too). A 50B-100B industry...
        
         | 256lie wrote:
          | Watson missed the DL train, and IBM should have partnered
          | with a company that had experience getting medical devices
          | through the FDA (like MSFT is doing with Nuance).
        
         | scotty79 wrote:
          | I think they didn't get how bizarre US healthcare is. It's a
          | "market" (maybe rather a system) where (nearly) every
          | participant is interested in everything being as expensive
          | as humanly possible.
          | 
          | Since automation saves money, and Watson is just automation,
          | it has no value for US healthcare participants.
          | 
          | If they had come up with a completely new thing that could
          | be priced additionally, it would have been a different story.
        
           | 256lie wrote:
           | There are healthcare startups around fraud detection,
           | reducing no-shows, telemedicine, drug discovery, and patient
           | triage.
           | 
            | Radiology alone is a prime candidate for ML due to its
            | existing digital infrastructure and clinical use cases.
           | 
           | The trend in FDA cleared AI products is pretty clear over the
           | past decade. https://models.acrdsi.org/
        
         | josefx wrote:
         | > in complete denial of the difficulties (heavy regulation,
         | safety critical, deeply powerful complex network of
         | stakeholders, super complex badly understood domain)
         | 
          | The last article I read about it cited rather mundane
          | reasons for it ending up unused - things like not even
          | supporting the data formats used by the hospitals that
          | served as cutting-edge users.
        
       ___________________________________________________________________
       (page generated 2021-05-04 23:01 UTC)