[HN Gopher] If it works, it's not AI: a commercial look at AI st...
       ___________________________________________________________________
        
       If it works, it's not AI: a commercial look at AI startups (1999)
        
       Author : rbanffy
       Score  : 97 points
       Date   : 2025-06-07 13:52 UTC (9 hours ago)
        
 (HTM) web link (dspace.mit.edu)
 (TXT) w3m dump (dspace.mit.edu)
        
       | clbrmbr wrote:
        | Title should end with (1999), as 1977 is the birth year of the
        | author, not the publication date.
        
       | MontyCarloHall wrote:
       | The sentiment of the title is reflected in this comment [0] from
       | a few hours ago:                 We use so much AI in production
       | every day but nobody notices because as soon as a technology
       | becomes useful, we stop calling it AI. Then it's suddenly "just
       | face recognition" or "just product recommendations" or "just
       | [plane] autopilot" or "just adaptive cruise control" etc
       | 
       | [0] https://news.ycombinator.com/item?id=44207603
        
         | JimDabell wrote:
         | This is known as the AI effect:
         | 
         | https://en.wikipedia.org/wiki/AI_effect
        
           | dunham wrote:
           | I remember being taught in my late 90's AI class something
           | along the lines of: "AI is anything we don't know how to
           | solve, and it gets another name when we figure out how to
           | solve it".
        
             | neilv wrote:
             | Same here.
             | 
             | "AI is things we currently think are hard to do with
             | computers"
             | 
             | "AI is things that are currently easier for humans than
             | computers".
        
         | layer8 wrote:
         | I don't think that's the sentiment of the title.
        
           | MontyCarloHall wrote:
           | It is exactly the sentiment of the title. From the paper's
           | conclusion:                 Although most financiers avoided
           | "artificial intelligence" firms in the early 1990s, several
           | successful firms have utilized core AI technologies into
           | their products. They may call them intelligence applications
           | or knowledge management systems, or they may focus on the
           | solution, such as customer relationship management, like
           | Pegasystems, or email management, as in the case of Kana
           | Communications. The former expert systems companies,
           | described in Table 6.1, are mostly applying their expert
           | system technology to a particular area, such as network
           | management or electronic customer service. All of these firms
           | today show promise in providing solutions to real problems.
           | 
           | In other words, once a product robustly solves a real
           | customer problem, it is no longer thought of as "AI," despite
           | utilizing technologies commonly thought of as "artificial
           | intelligence" in their contemporary eras (e.g. expert systems
           | in the 80s/90s, statistical machine learning in the 2000s,
           | artificial neural nets in the 2010s onwards). Today, nobody
           | thinks of expert systems as AI; it's just a decision tree. A
           | kernel support vector machine is just a supervised binary
           | classifier. And so on.
        
             | layer8 wrote:
             | The paper is picking up a long-standing joke in its title.
             | From https://www.cia.gov/readingroom/docs/CIA-
             | RDP90-00965R0001002... (1987): _All these [AI] endeavors
             | remain at such an experimental stage that a joke is making
             | the rounds among computer scientists: "If it works, it's
             | not AI."_
             | 
             | The article is re-evaluating that prior reality, but it
             | isn't making the point that successful AI stops being
             | considered AI. In the part you quote, it's merely pointing
             | out that AI technology isn't always marketed as such, due
             | to the negative connotation "AI" had acquired.
        
         | kevin_thibedeau wrote:
         | Predicting the trajectory of a cannonball is applied
         | mathematics. Aircract autopilot and cruise control are only
         | slightly more elaborate. You can't label every algorithmic
         | control system as "AI".
        
           | MontyCarloHall wrote:
           | I agree that aircraft autopilot/other basic applications of
           | control theory are not usually considered "AI," nor were they
           | ever -- control theory has its roots in mechanical governors.
           | 
           | Certain adaptive cruise control systems certainly are
           | considered AI (e.g. ones that utilize cameras for emergency
           | braking or lane-keep assist).
           | 
           | The line can be fuzzy -- for instance, are solvers of
           | optimization problems in operations research "AI"? If you
           | told people in the 1930s that computers would be used in a
           | decade by shipping companies to optimally schedule and route
           | packages or by militaries to organize wartime logistics at a
           | massive scale, many would certainly have considered that some
           | form of intelligence.
        
           | jagged-chisel wrote:
           | It's "AI" until it gets another name. It doesn't get that
           | other name until it's been in use for a bit and users start
           | understanding its use.
           | 
           | So you're right that some of these things aren't AI now. But
           | they were called that at the start of development.
        
           | hamilyon2 wrote:
            | Yes. A chess engine is clever tree search at its core. Which
           | in turn is just loops, arithmetic and updating some data
           | structures.
           | 
           | And every AI product in existence is the same. Map
           | navigation, search engine ranking, even register allocation
           | and query planning.
           | 
           | Thus they are not AI, they're algorithms.
           | 
           | The frontier is constantly moving.
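            | 
            | (A toy illustration of that "clever tree search" -- a tiny
            | negamax over a made-up subtract-1-or-2 game, nothing but
            | loops, recursion, and arithmetic:)
            | 
            |   # Toy negamax: whoever cannot move loses.
            |   def moves(n):
            |       return [n - t for t in (1, 2) if n - t >= 0]
            | 
            |   def negamax(n):
            |       opts = moves(n)
            |       if not opts:
            |           return -1  # no move: current player loses
            |       return max(-negamax(m) for m in opts)
            | 
            |   print(negamax(4))  # 1: first player wins from 4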
        
             | andoando wrote:
              | There is a big divide between problem-specific problem
              | solving and general intelligence.
        
               | behringer wrote:
                | There's no G in AI. Well, unless you spell it out,
                | but you know what I mean.
        
           | paxys wrote:
           | Next you'll tell me AI is just algorithms under the hood!
        
             | tempodox wrote:
             | That would be such a spoiler. I want to believe in
             | miracles, oracles and omniscience!
        
             | small_scombrus wrote:
             | What's the saying?
             | 
             | > All sciences are just highly abstracted physics
        
               | paxys wrote:
               | And physics is applied math. And math is applied logic.
               | And logic is applied philosophy...
        
               | goatlover wrote:
               | Platonic forms all the way down...
        
           | sjducb wrote:
           | When algorithms improve with exposure to more data are they
           | still algorithms?
           | 
           | Where is the line where they stop being algorithms?
        
             | adammarples wrote:
              | They're still algorithms: one algorithm (back
              | propagation) improves the other algorithm (forward
              | propagation). Very roughly speaking.
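              | 
              | (A minimal runnable sketch of that loop, with made-up
              | toy data: the forward pass predicts, a hand-derived
              | backward pass nudges the weight, and the program
              | improves with exposure to data:)
              | 
              |   # One-weight linear model fit by gradient steps.
              |   w = 0.0
              |   data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
              |   for _ in range(100):
              |       for x, y in data:
              |           pred = w * x               # forward pass
              |           grad = 2 * (pred - y) * x  # d(err^2)/dw
              |           w -= 0.01 * grad           # backward step
              |   print(round(w, 2))  # ~2.0, learned from the data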
        
           | internet_points wrote:
           | And deep learning is just applied optimization and linear
            | algebra (with a few clever heuristics, learnt by throwing
            | PhD students at the wall and seeing what sticks).
        
         | hliyan wrote:
         | Funnily enough, this same year (1999), I wrote an essay for a
         | university AI subject where I concluded "Intelligence is a
         | label we apply to information processing for which we have not
         | yet identified an algorithm. As soon as an algorithm is
         | discovered, it ceases to be intelligence and becomes
         | computation". I thought I was very clever, but later discovered
         | that this thought occurs to almost everyone who thinks about
         | artificial intelligence in any reasonable depth.
        
           | MichaelZuo wrote:
           | This would imply "artificial intelligence" itself is a
            | nonsensical term... as in "artificial [label we apply to
           | information processing for which we have not yet identified
           | an algorithm]".
        
             | kevinventullo wrote:
              | I dunno, the opacity of LLMs might kind of fit the bill
             | for not having an "algorithm" in the usual sense, even if
             | it is possible in theory to boil them down to cathedrals of
             | if-statements.
        
         | CooCooCaCha wrote:
         | I'm reminded of AI hypesters complaining that people are
         | constantly moving the goalposts of AI. It's a similar effect
         | and I think both have a similar reason.
         | 
         | When people think of AI they think of robots that can think
         | like us. That can solve arbitrary problems, plan for the
         | future, logically reason, etc. In an autonomous fashion.
         | 
          | That's always been true. So the goalposts haven't really
          | moved; instead it's a continuous cycle of hype, understanding,
         | disappointment, and acceptance. Every time a computer exhibits
         | a new capability that's human-like, like recognizing faces, we
         | wonder if this is what the start of AGI looks like, and
         | unfortunately that's not been the case so far.
        
           | simonw wrote:
           | The term "artificial intelligence" started out in academia in
           | 1956. Science fiction started using that language later.
        
             | CooCooCaCha wrote:
             | I'm not concerned with who used what and when. I'm talking
             | about what people expect of AI. When you tell people that
             | you're trying to create digital intelligence, they'll
             | inevitably compare it to people. That's the expectation.
        
             | neepi wrote:
              | The term AI was invented because Claude Shannon was fed
              | up with getting automata papers.
        
               | simonw wrote:
               | I thought it was John McCarthy trying to avoid having to
               | use the term "cybernetics".
        
           | parineum wrote:
            | I think you're spot on with this. It's the enthusiasts that
           | are constantly trying to move the goalposts towards them and
           | then the general public puts it back where it goes once they
           | catch on.
           | 
            | AGI is what people think of when they hear AI. AI is a
            | bastardized term that people use to justify, hype, and/or
            | sell their research, business, or products.
           | 
           | The reason "AI" stops being AI once it becomes mainstream is
           | that people figure out that it's not AI once they see the
           | limitations of whatever the latest iteration is.
        
         | jodrellblank wrote:
         | This is one of my pet dislikes; so after 1950, a time when a
         | computer that could win at tic-tac-toe was 'AI', nobody is ever
         | allowed to talk about AI again without that whinge being
         | posted? Because AI was solved then so shut up?
         | 
         | The author of that whinge thinks that what we all wanted from
          | _Artificial Intelligence_ all along was a Haar cascade or a
         | chess min-maxer, that was the dream all along? The author
          | thinks that talking about intelligence any more is, what,
          | "unfair" now? What are they even whining _about_?
         | 
         | Because the computers of yesteryear were slow enough that
         | winning a simple board game was their limit, you can't talk
         | about what's next!
         | 
          | And that's to put aside the face recognition that Google put out
         | which classified dark skinned humans as gorillas, not because
         | it was making a value judgement about race but because it has
         | no understanding of the picture or the text label. Or the
         | product recommendation engines which recommend another hundred
         | refrigerators after you just bought one, and the engineers on
         | the internet who defend that by saying it genuinely is the most
         | effective advert to show, and calling those systems
         | "intelligent" just because they are new. Putting a radar on a
         | car lets it follow the car in front at a distance because there
         | is a computer to connect the radar, engine, and brakes and not
         | because the car has gained an understanding of what distance
         | and crashing are.
        
           | potatoman22 wrote:
           | I don't think anyone is saying "you can't call facial
           | recognition AI." I think their point is that laypeople tend
           | to move the goalpost of what's considered AI.
        
             | jodrellblank wrote:
              | And my point is: _so what_? [edit: missed a bit; it's not
             | 'moving the goalposts' because those things were never _the
             | goal_ of Artificial Intelligence!].
             | 
             | A hundred years ago tap water was a luxury. Fifty years ago
             | drinkable tap water was a luxury. Do we constantly have to
             | keep hearing that we can't call anything today a "luxury"
             | because in the past "luxury" was achieved already?
        
             | mjevans wrote:
             | How about: We can call it 'AI' when it should have the same
             | rights as any other intelligence. Human, or otherwise?
        
             | goatlover wrote:
             | Laypeople have always had in mind Data or Skynet for what's
             | considered genuine AI. Spielberg's AI movie in 2001
             | involved androids where the main character was a robot
             | child given the ability to form an emotional bond to a
             | human mother, resulting in him wanting to become a real
             | boy.
             | 
             | The moving goalposts come from those hyping up each phase
             | of AI as AGI being right around the corner, and then they
             | get pushback on that.
        
             | zahlman wrote:
             | I have always considered that the term AI was inaccurate
             | and didn't describe an actual form of intelligence,
             | regardless of the problem it was solving. It's great that
             | we're now solving problems with computer algorithms that we
             | used to think required actual intelligence. But that
             | doesn't mean we're moving the goalposts on what
             | intelligence is; it means we're admitting we're wrong about
             | what can be achieved without it.
             | 
             | An "artificial intelligence" is no more intelligent than an
             | "artificial flower" is a flower. Making it into a more
             | convincing simulacrum, or expanding the range of roles
             | where it can adequately substitute for the real thing (or
             | even vastly outperform the real thing), is not reifying it.
             | Thankfully, we don't make the same linguistic mistake with
             | "artificial sweeteners"; they do in fact sweeten, but I
             | would have the same complaint if we said "artificial
             | sugars" instead.
             | 
             | The point of the Turing test and all the other contemporary
             | discourse was never to establish a standard to determine
             | whether a computing system could "think" or be
             | "intelligent"; it was to establish that this is the wrong
             | question. Intelligence is tightly coupled to volition and
             | self-awareness (and expressing self-awareness in text does
             | not demonstrate self-awareness; a book titled "I Am A Book"
             | is not self-aware).
             | 
             | No, I cannot rigorously prove that humans (or other life)
             | are intelligent by this standard. It's an axiom that
             | emerges from my own _experience of observing_ my thoughts.
             | I think, therefore I think.
        
         | AIPedant wrote:
          | What this really reflects is that, before these problems
          | were solved, it was assumed (without any real evidence) that
          | the solutions required something like intelligence. But that
          | turned out not to be the case, and "AI" is the wrong term to
          | use.
         | 
         | There's also the effect of "machine learning" being used
         | imprecisely so it inhabits a squishy middle between
         | "computational statistics" and "AI."
        
         | alkonaut wrote:
         | I don't know, isn't "AI" more a family of technologies like
         | neural networks etc? Facial recognition using such tech is and
         | was always AI, while adaptive cruise control using a single
          | distance sensor and PID regulation is just "normal control"
          | and not AI?
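          | 
          | (For contrast, a toy sketch of that kind of "normal
          | control": a proportional controller holding a following
          | gap, with made-up gains and numbers, and no learning at
          | all:)
          | 
          |   # P-controller: hold a 30 m gap to the car ahead.
          |   target, kp, dt = 30.0, 0.5, 0.1  # made-up numbers
          |   gap, lead_speed = 50.0, 20.0     # metres, m/s
          |   for _ in range(200):
          |       error = gap - target
          |       my_speed = lead_speed + kp * error  # control law
          |       gap += (lead_speed - my_speed) * dt
          |   print(round(gap, 1))  # -> 30.0, the gap settles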
         | 
          | I've never heard of AI being used in plane autopilots, no
          | matter how clever they are.
        
       | clbrmbr wrote:
       | Fascinating reading the section about why the 1980s AI industry
       | stumbled. The Moore's law reasoning is that the early AI machines
       | used custom processors which were commoditized. This time around
       | we really are using general purpose compute though. Maybe there's
       | an analogy to open weight models but it's a stretch.
       | 
        | Also the section on hype is informative, but I really do see
        | (writing this from peak hype, of course) a difference this
        | time around. I'm funding $1000 of Claude Code Opus 4 for my
        | top developers over the course of this month, and I really do
        | expect to get >$1000 worth of additional work output. That
        | probably scales to $1000/dev before we hit diminishing
        | returns.
       | 
        | Would be fun to write a 2029 version of this, with the
        | assumption that we see a crash similar to the one in ~1987,
        | but in ~2027. What would some possible stumbling reasons be
        | this time around?
        
         | klabb3 wrote:
          | > I'm funding $1000 of Claude Code Opus 4 for my top
          | developers over the course of this month, and I really do
          | expect to get >$1000 worth of additional work output. That
          | probably scales to $1000/dev before we hit diminishing
          | returns.
         | 
          | Two unknowns: the true non-VC-subsidized cost, and the
          | long-run effect of ever-increasing code output on
          | maintaining that code. There are also second-order effects:
          | pipelines of senior engineers drying up and becoming costly.
          | Chances are that, with widespread long-term adoption, we'll
          | see 90% of costs going to fixing the 10% or 1% of problems
          | that are expensive and difficult to avoid with LLMs and
          | expensive to hire humans for. There's always a new
          | equilibrium.
        
         | rjsw wrote:
          | I was running compiled Franz Lisp on an Atari ST in 1986;
          | general-purpose processors were usable for this back then.
        
       | dtagames wrote:
       | This paper is far too long and poorly written, even considering
       | that the topic of expert systems was once "a thing."
       | 
       | There are three key parallels that I see applying to today's AI
       | companies:
       | 
       | 1. Tech vs. business mismatch. The author points out that AI
        | companies were (and are) run by tech folks, not business
        | folks. The emphasis on the glory of the tech doesn't always
        | translate into effective results for their business customers.
       | 
       | 2. Underestimating the implementation moat. The old expert
       | systems and LLMs have one thing in common: they're both a
       | tremendous amount of work to integrate into an existing system.
        | Putting a chat box on your app isn't AI. Real utility involves
        | specialized RAG software and domain knowledge (see the toy
        | sketch after this list). Your customers have the knowledge,
        | but can they write that software? Without it, your LLM is just
        | a chatbot.
       | 
        | 3. Failing to allow for compute costs. The hardware costs to
        | run expert systems were prohibitive, but LLMs pose an entirely
        | different problem. Every single interaction with them has a
        | cost, on both inputs and outputs. It would be easy for your
        | flat-rate customers to use a lot of LLM time that you'll be
        | paying for. It's not fixed costs amortized over the user base,
        | like we used to have. Many companies' business models won't be
        | able to adjust to that variation.
        
         | rjsw wrote:
          | It is a master's thesis; the length seems fine to me. I
          | spotted a few typos, though.
        
           | analog31 wrote:
           | A hastily edited thesis is a sure sign of a student who got a
           | job offer. ;-)
        
       | api wrote:
       | Something like the AI effect exists in the biological sciences
       | too. You know what you call transhumanist enhancement and life
       | extension tech when it actually works? Medicine.
       | 
       | Hype is fun. When you see the limits of a technology it often
       | becomes boring even if it's still amazing.
        
       | TruffleLabs wrote:
        | MENACE is a form of machine learning, and it sits within the
        | realm of artificial intelligence. In 1961 this was definitely
        | leading edge :)
       | 
       | "Matchbox Educable Noughts and Crosses Engine" -
       | https://en.wikipedia.org/wiki/Matchbox_Educable_Noughts_and_...
       | 
       | "The Matchbox Educable Noughts and Crosses Engine (sometimes
       | called the Machine Educable Noughts and Crosses Engine or MENACE)
       | was a mechanical computer made from 304 matchboxes designed and
       | built by artificial intelligence researcher Donald Michie in
       | 1961. It was designed to play human opponents in games of noughts
       | and crosses (tic-tac-toe) by returning a move for any given state
       | of play and to refine its strategy through reinforcement
       | learning. This was one of the first types of artificial
       | intelligence."
        
       | neilv wrote:
       | Around the time of this MEng thesis, in one startup-oriented
       | professor's AI-ish research group, the spinoff startups found
       | that no one wanted the AI parts of the startups.
       | 
        | But customers of the AI startups very much wanted more mundane
        | solutions, which the startups would then pivot to doing.
       | 
        | (For example, you do a startup to build AI systems to do X,
        | and a database system is incidental to that; it turns out the
        | B2B customers wanted that database system, for non-AI
        | reasons.)
       | 
       | So a grad student remarked about AI startups, "First thing you
       | do, throw out the AI."
       | 
       | Which was an awkward thing for students working on AI to say to
       | each other.
       | 
       | But it was a little too early for deep learning or transformers.
       | And the gold rush at the time was for Web/Internet.
        
       | ricksunny wrote:
        | It's interesting to observe how the author's career progressed
        | over the 26 years after she graduated with this thesis. Here
        | she is just last year, presenting on ML in the context of the
        | LLM age:
       | 
       | https://www.youtube.com/watch?v=p9bUuOzpBGE
        
       | ChuckMcM wrote:
        | The corollary works well too: "If it's AI, it doesn't work."
       | 
        | That's because it's the same mechanism at play. When people
        | can't explain the underlying algorithm, they can't show when
        | the algorithm would work and when it wouldn't. One of the
        | truisms in computer systems is that, for the same inputs, a
        | known algorithm produces the same outputs. If you don't get
        | the same outputs, you don't understand all of the inputs.
       | 
       | But that helps set your expectations for a technology.
        
       | QuadmasterXLII wrote:
        | Don't forget the contrapositive! If it's still called AI, it
        | doesn't work yet.
        
         | asmor wrote:
          | I wonder if LLMs will ever stop being AI. Applications using
          | LLMs that aren't terrible experiences are that way because
          | they replace large parts of what people say LLMs can do with
          | vector databases, glorified if-conditions, and brute-force
          | retrying.
          | 
          | I coincidentally can run local LLMs (7900 XTX, 24 GB), but I
          | almost never want to, because the output of raw LLMs is
          | trash.
        
       ___________________________________________________________________
       (page generated 2025-06-07 23:01 UTC)