[HN Gopher] We'll call it AI to sell it, machine learning to build it
___________________________________________________________________
We'll call it AI to sell it, machine learning to build it
Author : participant1138
Score : 190 points
Date : 2023-10-11 12:30 UTC (3 hours ago)
(HTM) web link (theaiunderwriter.substack.com)
(TXT) w3m dump (theaiunderwriter.substack.com)
| k_kelly wrote:
  | People pay for holes, not drills.
| JohnFen wrote:
| Given that I use my drills for things unrelated to making
| holes, I paid for a drill, not holes.
| [deleted]
| octagons wrote:
  | The worst case of AI marketing I've seen recently was an
  | interview where the interviewee described ChatGPT 4's
  | capabilities. He described the model as having an IQ of 180,
  | comparing it to Einstein's alleged IQ as well as to ChatGPT 3,
  | which had a lower IQ.
|
  | The subjectivity of IQ, combined with the leading premise that a
  | model's performance can be quantified with it, is extremely
  | disingenuous.
|
| I can't find a link, but I'll share one if I do. I believe it was
| with someone in the C-suite at OpenAI.
| lawlessone wrote:
  | All the articles about LLMs passing different exams aren't
  | helping. "LLM can pass exam X", etc.
|
| Of course it can pass exams, so could a database storing the
| answers.
|
  | Ask it to multiply two floating-point numbers, though, and
  | you often get the wrong answer.
| softg wrote:
| > In the run up to Uber's IPO in 2019, venture capital funds were
| flooded with pitches from startups offering "Uber for X". Uber
| for parking spaces.
|
  | Ugh. The current French president famously proposed to "uberize"
  | the economy, by which he meant less secure jobs that cost the
  | employer less. The C-suite people in my workplace are already
  | talking nonstop about generative AI and the like. I don't look
  | forward to hearing more marketing mumbo jumbo about AI-izing
  | everything in the near future.
| m3kw9 wrote:
  | Or you could be a saint and call it a "program", but you'd sort
  | of be grandstanding just for the heck of it.
| Angostura wrote:
| I think it is notable that Apple has kept plugging away calling
| it machine learning.
| world2vec wrote:
| Goes well with the classic joke: "Machine Learning is usually
  | written in Python, AI is written in PowerPoint".
| [deleted]
| amelius wrote:
| ... and we post it to a hacker forum to discuss it.
| brookst wrote:
| ITT: engineers disgusted that marketing uses different language
| to communicate product benefits to consumers.
|
  | See also: "posi-trac" (limited slip differential), "Aleve"
  | (2-(6-Methoxynaphthalen-2-yl)propanoic acid), Technicolor
  | (usually to describe a 3-strip prismatic camera+film
  | arrangement).
|
  | In general, consumers care about the benefits, and engineers
  | care about the methods. Hence AI versus machine learning.
|
| It's nothing to get upset or disgusted about.
| mistrial9 wrote:
  | Half the ordinary people I talk to are spooked by "AI", combined
  | with the general deterioration of services by big tech, invasive
  | agreements, and a slow, growing awareness of what surveillance
  | might look like.
| JohnFen wrote:
| Half of the engineers I talk to feel the same way.
| VoodooJuJu wrote:
  | What you say is true, however in this case, there's no
  | communication of benefits. _AI_ is not a benefit, it's an
  | attention-grabbing buzzword.
|
  | I lament that people fall for buzzwords, hollow words that all
  | mean the same thing to a fool - "Oh it's got <BUZZWORD>, that
  | means _GOOD_! Just take my money!"
|
| But I don't lament for long, and when I'm done lamenting,
| that's when I start selling.
|
| "The characteristic feature of the loser is to bemoan, in
| general terms, mankind's flaws, biases, contradictions, and
| irrationality--without exploiting them for fun and profit."
|
| - Nassim Taleb, _The Bed of Procrustes_
| BeFlatXIII wrote:
  | > "Aleve" (2-(6-Methoxynaphthalen-2-yl)propanoic acid)
|
| Chemical names aren't the best examples to use, as even within
| the scientific community, it's extremely rare to use full IUPAC
| systematic names for well-known organic molecules. The fancy
| name for caffeine would be 1,3,7-trimethylxanthine, not
  | 1,3,7-trimethyl-3,7-dihydro-1H-purine-2,6-dione.
| phillipcarter wrote:
| Yep. A lot of engineers don't like the idea that most people
| aren't engineers, don't think like them, and don't appreciate
| things the way they do. You can't sell "machine learning
| solutions" unless your target audience is developers building
| ML systems.
| rapind wrote:
| I have to disagree. "Machine learning" could just as easily
| be a marketing buzzword. Artificial Intelligence is just
| sexier because it's misleading (while being broad enough to
| be acceptable).
| phillipcarter wrote:
| Again, that's just the inner developer not liking the term.
| There's a reason why AI sticks with people and ML doesn't.
| lucubratory wrote:
| If machine learning were the broadly accepted term to refer
| to these techniques in society, the people who currently
| complain that "AI is misleading because they're not
| intelligent" would instead be complaining that "ML is
| misleading because they're not learning". I know this
| because I have already seen people complaining that ML is
| misleading because "they're not learning".
|
| The reality is that no matter what structure the software
| takes, or what outputs it achieves, it can't falsify a
| fundamentally unfalsifiable belief that machines cannot be
| like people in ways that could imply any sort of social
| recognition of that status.
| rapind wrote:
  | AI is misleading because it's too broad and consumers
  | confuse it with AGI, which is far more powerful (and not
  | yet possible). From a marketing perspective this is a
  | feature, not a bug, since it gives off the appearance of
  | being a much bigger deal than it is.
|
| Are we really going to pretend that marketing departments
| / companies aren't fully aware of and taking advantage of
| this misunderstanding? This just seems like common sense
| to me.
| [deleted]
| JohnFen wrote:
  | > See also: "posi-trac" (limited slip differential), "Aleve"
  | (2-(6-Methoxynaphthalen-2-yl)propanoic acid), Technicolor
  | (usually to describe a 3-strip prismatic camera+film
  | arrangement).
|
| But none of those examples are misleading. "AI" is (or at least
| very often is).
| TimPC wrote:
| I think this one annoys people because when engineers hear AI
| they think of a bunch of techniques that mostly didn't work and
| caused an AI winter. When they hear ML they think of the latest
| and greatest techniques that have moved the needle on some of
| the hardest problems in the space.
| [deleted]
| sdflhasjd wrote:
| See also: Autopilot.
|
  | I think the disgust can be justified in some circumstances,
  | when marketing is used to oversell or just outright deceive;
  | in this case and others, that involves taking advantage of
  | common ignorance.
| bongoman37 wrote:
| [dead]
| jalk wrote:
| I was pretty disgusted when Dyson called their hairdryers
| "zero carbon emission" because they used brushless DC motors
| bee_rider wrote:
  | Autopilot is a particularly egregious case, though. Naming an
  | experimental, very limited car feature after the device that
  | flies the plane most of the time, and has for many years,
  | really ought to have been illegal. It sets wildly
  | inappropriate expectations.
|
| Most of the other names are just random nice sounding names
| that they made up.
|
| It is more like if they called DayQuil "Tracheotomax,"
| because you know, it helps you breathe!
| darkwater wrote:
  | I'm not an Elon fan, but the autopilot in a plane does
  | basically the same thing (if we accept that planes and cars
  | do the same thing, which is transport people) that Tesla's
  | or many other manufacturers' "autopilot" does on highways.
  | I'm not an aviation expert, but I don't think an AP controls
  | take-offs and landings. It controls cruise speed, altitude,
  | and direction. A car AP on a highway does the same (minus
  | altitude).
| bee_rider wrote:
| Also not an aviation expert, but it looks like airplane
| autopilots can do everything except taxiing and taking
| off (they can do landings now I guess, in fact, just
| looking on Wikipedia it sounds like they are preferred in
| low visibility situations for some airports, because they
| have more sensors and the airports have maps/beacons to
| help them out).
|
| Apparently it also _must_ be engaged above 28000 ft.
| Imagine if autonomous vehicles were so good that they
| were _required_ to be used while going at speed on the
| highway.
| I_Am_Nous wrote:
| True, but that's where the difference in intended usage
| is problematic. In the sky, autopilot can't accidentally
| hit another plane or barrier because it got confused
| about the road paint or construction signs. The stakes
| are a lot higher on the ground, even though it
| technically controls less of the vehicle than autopilot
| in a plane does (acceleration, braking, and steering
| compared to elevator, trim, roll, pitch, throttle,
| vector) since cars don't have to worry about 3
| dimensional movement much while airplanes do.
| wongarsu wrote:
  | Also, a plane's autopilot has situations where it will
  | yell at you to take over, and situations where it's
  | expected that the pilot recognizes problems and overrides
  | the autopilot.
  |
  | It's a good analogy from a technical standpoint, with the
  | "minor" difference that in most situations pilots have a
  | lot more time to react than the driver of a car, which
  | makes it very different from a consumer standpoint.
| rando_dfad wrote:
| and pilots have a lot more training
| alwayseasy wrote:
| Autopilot was not a consumer feature but a professional
| one where the understanding of the system comes with
| multiple certifications.
| tivert wrote:
| > I'm not a Elon fan but the autopilot in a plane does
| basically the same (if we are saying that planes and cars
| do the same thing, which is transport people) that
| Tesla's or many more manufacturers "autopilot" does on
| highways.
|
| Even if true, when you talk about consumer marketing,
| none of that matters.
|
  | I don't have a name for it, but there's a whole class of
| bad-faith, deliberately misleading statements that
| exploit the difference between common and technical
| understandings (e.g. say something you know most people
| will inaccurately interpret as X, then fall back to the
| much narrower Y when challenged).
| bee_rider wrote:
| I think it is pretty well understood, for example if you
| look at something like the IEEE code of ethics, that
| technical professionals have an obligation to honesty
| beyond just not lying; a requirement to communicate in a
| way that helps the general public clear up likely
| misunderstandings.
| JohnFen wrote:
| I call it "lying". It's a special subset of lying, but
| the intention is to be deceptive, so it's in that
| category.
| devin wrote:
| I'm pretty sure this would match Harry G. Frankfurt's
| definition of "bullshit".
| agodfrey wrote:
| While that's true (on a plane, I've seen simply keeping
| the wings level labelled "Autopilot" without it even
| maintaining altitude), it's still a travesty.
|
| a) Pilots have certification and training which includes
| proper use of whatever 'autopilot' that plane has.
|
| b) Even so, the name still "over-promises" in an arena
| where doing so risks lives. So it should never have been
| called that even on a plane. Let alone on a car sold to
| consumers with little regulation.
| nihzm wrote:
| But there is a very important difference, and that is
| that airplane autopilots are certified with extremely
| expensive years long tests to demonstrate failure rates
| of 10E-9 (once in a billion hours) or even stricter.
| Whereas a computer vision model is considered "good
| enough" by the car industry after just a few hundred
| hours of "self driving" without major accidents, and this
| is in spite of the fact that roads are full elements that
| are definitely a lot more unpredictable (eg. other
| drivers) than what airplanes usually encounter during
| landing (that is, a mostly empty runway)
| I_Am_Nous wrote:
| >It is more like if they called DayQuil "Tracheotomax,"
| because you know, it helps you breathe!
|
| "Introducing new Rectopurge, for all your constipation
| needs!"
| sdfghswe wrote:
  | > In general, consumers care about the benefits, and engineers
  | care about the methods. Hence AI versus machine learning.
|
| Sorry, which is the methods and which is the benefits? In your
| mind is "AI" a benefit?
| itsoktocry wrote:
| > _Sorry, which is the methods and which is the benefits? In
| your mind is "AI" a benefit?_
|
| I think you're demonstrating the issue by focusing on the
| name again: "AI" and "Machine Learning" are the same thing.
| Engineers care about the method, so ML is more appropriate;
| it describes what they are doing. Consumers care about
| outcomes, so "AI" is used because it's familiar.
| throwaway290 wrote:
| > I think you're demonstrating the issue by focusing on the
| name again: "AI" and "Machine Learning" are the same thing.
| Engineers care about the method, so ML is more appropriate;
| it describes what they are doing. Consumers care about
| outcomes, so "AI" is used because it's familiar.
|
| So how is "AI" an outcome benefitting me as customer?
| philipov wrote:
  | From an old fortune message:
  |
  |     Q: "So, why did you get into Artificial Intelligence?"
  |     A: "It made sense: I didn't have any real intelligence."
  |
  | Sums up the benefit nicely.
| jncfhnb wrote:
| They are not the same thing.
|
  | Consumers don't know that and hence don't care.
|
| And that's fine.
| sdfghswe wrote:
| That's my point. They are the same thing. But the person
| I'm responding to implies that one is a benefit and the
| other one is a method. I was just asking which is which,
| seeing as I don't see the difference. They're both
| different names for the same set of tools, in my opinion.
| falcor84 wrote:
| Not the parent, but yes, with my consumer hat on, ML is the
| method - how it learns is an implementation detail I
| shouldn't care about - while AI is the benefit - it applies
| something resembling intelligence to help address my needs.
| throwaway290 wrote:
  | The word "AI" sells consumers an abstract image of
  | themselves as having something "intelligent" and "smart" at
  | their service, plus the edgy feeling of having an
  | almost-person at your complete command but without(?) the
  | moral issues of slavery.
|
| If you take away buzzwords and apply good product design,
| when ML-based stuff works it's invisible powering features
| like "autocomplete" or "voice control" or "internet
| search".
| falcor84 wrote:
| But "autocomplete", "voice control" and "internet search"
| as we know them are terms that appeared relatively
| recently for capabilities that people even 30 years ago
  | would have said are in the realm of sci-fi. It sounds to
  | me like just moving the goalposts, such that when
  | something is proven to work well enough to have a name,
  | it becomes "plain old tech" rather than AI. Is there any
  | computer capability that, when widely released, you'd be
  | OK with calling AI?
| jebarker wrote:
| I'd also add that, as much as many engineers hate the fact,
| marketing is very necessary to sell things to the general
| market. It's also a real skill set to figure out how to market
| things well. Even more so when trying to sell technical
| capabilities.
| runeofdoom wrote:
| I doubt it's "we need marketing to sell stuff" that engineers
| hate, but rather the seemingly inevitable "marketing lies
| about what our product does".
| JohnFen wrote:
| I certainly don't hate that fact at all. What I hate is that
| marketers very often lie and misrepresent.
| malkosta wrote:
| The problem is the "intelligence" word causes more harm than
| value. It makes the whole world afraid of what is in fact just
| matrix multiplications.
|
| We don't know what "intelligence" means, but we know it isn't
| matrix multiplication, or brute force algorithms, otherwise
| gears could be called intelligent.
|
| I understand the importance of selling. But selling shouldn't
| be confused with deceiving consumers. It's hard to accept our
| work is used to create general panic for the sake of money.
| Kaytaro wrote:
| If you built a complex series of gears that took input,
| revolved through different sets of millions of gears, and
| produced meaningful output, I would consider that a form of
| intelligence.
| unshavedyak wrote:
  | Agreed. My understanding of NNs is that, more than matrix
  | multiplication, they're general-purpose solvers. You could
  | write the same thing yourself; it would just take you ages.
|
| So with unlimited budget and time, can you write something
| complex enough to seem intelligent? I think so. Is what you
| wrote actually intelligent? No idea, and I think that's
| more philosophy than I'm interested in.
|
| General purpose solving functions will only get better with
| time and already solve more than we can write solvers for
| by hand. I don't suspect there's a limit here, assuming we
| can keep improving in ways to scale its compute and scale
| the function goals.
| hackinthebochs wrote:
| >We don't know what "intelligence" means, but we know it
| isn't matrix multiplication, or brute force algorithms,
| otherwise gears could be called intelligent.
|
  | We know no such thing. The simplicity of the basic operations
  | does not necessarily constrain the complexity of the whole
  | system composed of those operations. We compose simple
  | operations into complex units all the time.
| snapcaster wrote:
| "It makes the whole world afraid of what is in fact just
| matrix multiplications" is such a reductionist view that
| doesn't really capture the reality of AI taking people's jobs
| and reshaping the economy. That's like saying electricity is
| just "electrons moving through a wire"
| _3u10 wrote:
| For the most part electrons don't move through the wire,
| that would be very inefficient.
| korijn wrote:
| Also consuming enormous amounts of power and water
| bee_rider wrote:
| "Just a bunch of matrix multiplications" is also a bit odd
| because lots of jobs have been automated out of existence
| by tools way less complicated than matrix multiplications.
|
| The weird thing, I think, about these matrix
| multiplications, seems to be that they might be coming for
| the jobs of people who are generally in the same field that
| invented them (programmers) and also they might be coming
| for the jobs of reporters, creatives, and hot-take authors.
| People with bigger platforms than factory workers.
| jvanderbot wrote:
  | Look, bombs are just accelerated oxidation. Like rusting,
  | but fast. What's the big deal? Rusting is natural.
| 7thaccount wrote:
| You and the comment you're referring to are both making
| good comments, but may not be talking apples to apples.
|
| I believe the post you're replying to is just saying that
| matrix multiplications (as useful as they are), aren't
| going to become Skynet.
|
| Your post is pointing out that various AI techniques are
| replacing loads of jobs for folks that still need to make
| ends meet, but likely don't have the skill sets to
| magically become a web developer overnight. As a result, AI
| is pretty dangerous like many disruptions throughout
| history. Only in the past, there was usually still plenty
| of need for labor.
| malkosta wrote:
| That's a deep thought, it puts the discussion in a great
| perspective for further thinking. Thanks!
| GaggiX wrote:
| >We don't know what "intelligence" means, but we know it
| isn't matrix multiplication
|
  | I didn't know that; how do you know? And to be honest, many
  | LLMs seem to show intelligence to some degree, at least when
  | they solve complex and novel problems I ask them in a random
  | language using some library N. That feels pretty intelligent
  | to me, and if that's not intelligence, then humans aren't
  | intelligent either.
| tlrobinson wrote:
| > We don't know what "intelligence" means, but we know it
| isn't matrix multiplication
|
  | What makes you believe "intelligence" can't emerge from lots
  | of matrix multiplications? Unless you believe in some more
  | mystical explanation, human intelligence is just
  | electrochemical processes, not that unlike a computer's.
| Balgair wrote:
| > but we know it isn't matrix multiplication
|
| Neuroscientist here: You're right!
|
  | At least in the neuro that we know today, most [0] of the
  | communication between neurons happens in the Fourier domain.
  | That is, it's the frequency of firing events that matters,
  | not whether a neuron fired at all.
|
| [0] by no means is it exclusive. The brain is _really_
| complicated and there are edge cases all over the place.
| unlikelymordant wrote:
| The fourier domain is just a linear transform of the time
| domain, i.e. just a matrix multiply away!
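  | A concrete check of that claim - the DFT really is one matrix
  | multiplication (illustrative numpy sketch, not from the thread):

```python
import numpy as np

# The DFT is a linear map: for the N x N Fourier matrix F,
# the spectrum of x is just F @ x.
N = 8
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N)  # Fourier matrix

x = np.random.default_rng(0).normal(size=N)
spectrum = F @ x                  # one matrix multiplication

assert np.allclose(spectrum, np.fft.fft(x))  # matches the FFT
```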
| malkosta wrote:
| Wow, could you point me to other resources to read about
| this?
| marcosdumay wrote:
| That "it isn't matrix multiplication" argument is completely
| equivalent to "no computer can do it", and to "nobody can
| ever understand it". And is practically equivalent to "you
| need a soul to have intelligence".
|
| The same applies to "brute force algorithms" and "otherwise
| gears could do it".
|
| It's very likely that the current crop of LLMs do the wrong
| set of matrix multiplications. (If you ask me, it's a
| certainty.) But that doesn't change the fact that matrix
| multiplications can do anything.
| oooyay wrote:
| > The problem is the "intelligence" word causes more harm
| than value.
|
  | By this measure "learning" could be inaccurate too. Does a
  | human learn if they just commit something to memory?
  | Programming is chock-full of people who know the code but
  | haven't "learned" to program. Intrinsically we all know
  | there's something deeper to learning than just memorizing.
|
  | I think GP has a point. Marketing terms exist to be
  | relatable to the user, whether accurate or not. And we can,
  | and to my knowledge historically do, retain the story of
  | how a marketing term maps onto the actual technology.
| datameta wrote:
| I do see where you are coming from, but perhaps it is good
| that people think of it as proper AI (with all the inherent
| concerns) so that we enact rules and regulations _before_
| there is a runaway ML arms race that actually _does_ give us
| AGI. Will we be ready then otherwise?
| malkosta wrote:
| Great response. I will take some time to think about it,
| thanks!
| gwbas1c wrote:
| No, it's truth in labeling.
|
| > I've been told that a product was "driven by AI" only to find
| out it was driven by "if-then" statements.
|
| At best, a system like that is an "expert system." It's not
| artificially-intelligent in any way.
|
| This, BTW, is why many developed countries have strict
| labelling laws for food and trademarks. Otherwise, people will
| call something whatever they can get away with in order to sell
| it, even if it's not what they claim they're selling.
| dragonwriter wrote:
| > At best, a system like that is an "expert system." It's not
| artificially-intelligent in any way.
|
  | "Expert system" was adopted for that particular form of AI
  | (which it was also considered to be when it was developed)
  | because "expertise" is a combination of both _intelligence_
  | and _knowledge_.
|
| So, if "AI" is misleading for it, "expert system" is _more_
| misleading.
| helpfulContrib wrote:
| [dead]
| sandworm101 wrote:
  | I owned a vacuum cleaner back in the 90s with "fuzzy logic", the
  | cool tech buzzword of the time. Who knows what it actually did; I
  | suspect it just meant that the thing had a medium power setting
  | between on and off.
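  | For what it's worth, the "fuzzy logic" appliances of that era
  | typically blended rules rather than adding a medium setting. A toy
  | sketch of the idea (the membership shapes and wattages here are
  | hypothetical, not any vendor's actual firmware):

```python
def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def motor_power(dirt):
    """Map a dirt-sensor reading in [0, 1] to motor watts.

    Two fuzzy rules ("clean" -> low power, "dirty" -> high power) fire
    partially, and the output is their membership-weighted blend, so
    power varies smoothly instead of switching at a hard threshold.
    """
    clean = triangular(dirt, -0.5, 0.0, 0.6)
    dirty = triangular(dirt, 0.4, 1.0, 1.5)
    low, high = 30.0, 100.0  # watts each rule outputs when fully active
    total = clean + dirty
    return (clean * low + dirty * high) / total if total else low
```

  | With these made-up shapes, a sensor reading of 0.0 gives 30 W, 1.0
  | gives 100 W, and intermediate readings blend smoothly between them.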
| [deleted]
| participant1138 wrote:
| An AI Public Service Announcement
| welder wrote:
| > GaaS- trademark pending, thank you, I'll see myself out
|
| That's the best part! We've been needing an acronym for "GPT as a
| Service"
| rando_dfad wrote:
| Even better if you use it to control your smart-home. GaaS-
| lights!
| essive wrote:
| If it passes the Turing Test then it's just semantics after that
| - linear regression gains artificial consciousness
| JoeAltmaier wrote:
| Something like that. The test demonstrates that we can treat it
| as if it is a person without much risk. Says nothing about what
| it 'really' is.
| sieste wrote:
| ... but it's really just linear regression.
| bee_rider wrote:
| And matrix multiplications.
|
| Premium GEMMs.
| troelsSteegin wrote:
| So, "We'll call it AI to sell it, Machine Learning to Build
| it", and regression to make it work.
| raverbashing wrote:
  | No, it's like saying "computers are just switches turning off
  | and on". It's a naive take that forgets all the abstractions
  | that make it work.
  |
  | If it were, we'd have had LLMs 20 years ago. Not even MNIST is
  | "just linear regression" (because neither sigmoid nor ReLU is
  | linear).
| VHRanger wrote:
| sigmoid is logistic regression
|
  | ReLU is truncated regression
|
| It's all just GLMs!
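  | Concretely, a single sigmoid unit fit by gradient descent on
  | cross-entropy is exactly logistic regression (a GLM with a logit
  | link). A toy numpy sketch - the data, learning rate, and iteration
  | count are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy, linearly separable data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# One sigmoid "neuron" trained by gradient descent on cross-entropy:
# this is logistic regression.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)               # forward pass
    w -= 0.5 * (X.T @ (p - y)) / len(y)  # cross-entropy gradient step
    b -= 0.5 * np.mean(p - y)

acc = np.mean((sigmoid(X @ w + b) > 0.5) == (y == 1.0))  # near 1.0 here
```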
| JackFr wrote:
| No, it's not a linear regression, but it is at its heart
| optimization.
|
| And yes, that is indeed like saying a computer is just a
| bunch of zeroes and ones and logic gates. It's true and
| beautiful and profound but nearly useless from a practical
| perspective. And the wonder, like computers when you scale
| the numbers of transistors to the billions, is that when you
| scale the number of parameters to the billions, you end up
| with something amazing.
| Uehreka wrote:
| Yeah but the people making these comments aren't trying to
| point to the wonder of how simple mathematics can underpin
| large complex systems, they're doing the opposite: Trying
| to trivialize the system and its immense potential for good
| and bad by pointing out that it uses simple mathematics
| under the hood (and thus can't be _that_ amazing).
| Uehreka wrote:
| I don't get what people are trying to say when they say these
| kinds of things about AI. That human-level writing is as simple
| as a linear regression? That we could've had computer programs
| capable of human-level writing decades ago? Have they not used
| these AIs enough to see how powerful they are? Are they seeing
| the bad outputs and thinking that AIs are always doing that
| poorly?
|
| Like seriously, if you're telling me that it was obvious that a
| "linear regression" could pass the LSAT I've got a macvlan to
| sell you.
|
| Edit: they're also literally not linear regression!
| https://youtu.be/Ae9EKCyI1xU?feature=shared
| lawlessone wrote:
| A database could pass the LSAT if you put the answers in it.
| burnished wrote:
| I interpret it similar to the difference between calling
| yourself a software engineer and calling yourself a code
| monkey.
| [deleted]
| VHRanger wrote:
| Formally it's a generalized linear model with a constructed
| feature set.
|
| A "kitchen sink" regression with enough polynomial terms
  | (x^2, x^3, etc.) and interaction terms (a*b, (a*b)^2, etc.)
| will be a function approximator the same way a neural net is.
|
| The computational mechanics are different (there's a reason
| we dont use it) but in the land of infinite computational
| power it can be made equivalent.
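  | A minimal sketch of that function-approximation point - ordinary
  | least squares over polynomial features fitting a nonlinear target
  | (the target function and degree are illustrative):

```python
import numpy as np

# Fit sin(x) on [-pi, pi] with plain least squares over polynomial
# features: a "kitchen sink" linear model approximating a nonlinear
# function, much as a small neural net would.
x = np.linspace(-np.pi, np.pi, 200)
y = np.sin(x)

degree = 9
Phi = np.vander(x, degree + 1)        # columns x^9, x^8, ..., x^0
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ coef

max_err = np.max(np.abs(y_hat - y))   # small over the fitted interval
```

  | The model is still linear in its parameters; all the nonlinearity
  | lives in the constructed feature set, which is the point being made
  | above.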
| Roark66 wrote:
| >The computational mechanics are different (there's a
| reason we dont use it) but in the land of infinite
| computational power it can be made equivalent.
|
| In the land of infinite computational power every
| computation is just a series of 1s and 0s added and
  | subtracted. You can implement everything with just a few more
| operations. But we don't live in a land of infinite
| computational power and it took us (as humanity) quite a
| while to discover things like transformer models. If we had
| the same hardware 10 years ago would we have discovered
| them back then? I very much doubt it. We didn't just need
| the hardware, we needed the labelled data sets, prior art
| in smaller models etc.
|
  | Personally I think current AI/ML (LLMs, ESRGANs, and
  | diffusion models) has huge potential to increase people's
  | productivity, but it will not happen overnight and not for
  | everyone. People have to learn to use AI/ML.
|
| This brings me to the "dangers of AI". I laugh at all these
| ideas that "AI will become sentient and it will take over
  | the world", but I'm genuinely fearful of a world where we've
  | become so used to AI delivered by a few "cloud providers"
  | that we cannot do certain jobs without it. Just like you
  | can't be a modern architect without CAD software, there may
  | come a time when you won't be able to do any job without
  | your "AI assistant". Now, what happens when there is
  | essentially a monopoly on the market of "AI assistants"?
  | They will start raising prices, to the point that paying
  | your "AI assistant" bill may one day cost more than your
  | taxes, and you'll have a choice of paying or not working at
  | all.
  |
  | This is why we have to run these models locally and advance
  | local use of them. Yes, (not at all)OpenAI will give you
  | access to a huge model for a fraction of the cost, but it's
  | like the proverbial drug dealer who gives you the first hit
  | for free: you'll more than make up for the cost once you get
  | hooked. The "danger of AI" is that it becomes too
  | centralised, not that it's "uncontrollable".
| hcks wrote:
  | What's the point of this comment, seriously?
|
| Making universal function approximators is trivial. It's
| not where the value lies.
| tsroe wrote:
| >in the land of infinite computational power it can be made
| equivalent
|
| I.e. they are not equivalent and the original comment was
| wrong.
| [deleted]
| extrememacaroni wrote:
| It's basically just math.
| Maken wrote:
| With lots of data.
| robg wrote:
| When can we harmonize the two and call it "Statistical Learning"?
| lawlessone wrote:
  | Haven't sci-fi and computer science been using "AI" since the
  | early days? They weren't using it as a marketing term, and this
  | was before PCs, so there wasn't much of a market.
| izzydata wrote:
  | When science fiction mentions artificial intelligence it is
  | almost always in reference to what we now call artificial
  | general intelligence, which is very different from the
  | machine learning we now call AI.
| gwbas1c wrote:
| Yes: "AI" stands for "Artificial Intelligence."
|
| As in a computer mimics human-level intelligence; as opposed
| to, you know, sitting down and writing a computer program.
| lawlessone wrote:
| Personally I figured ML was a toolset to maybe eventually
| achieve some form of AI.
| ilamont wrote:
| Inserting tech buzzwords into pitch decks is as old as the Valley
| itself. Not long ago "powered by blockchain" was all the rage and
| some of these companies were funded regardless of whether there
| was any working blockchain behind the curtain.
|
| How many have pivoted to "powered by AI," I wonder?
| HPsquared wrote:
| AI has always been a "consumer" term... Games have always had
| various things called "AI", for example.
| conductr wrote:
| I like to point to the TV industry. They're pretty good at
| jumping on any hype train they can to make the TV sound more
| advanced than last year's models.
|
  | I have no idea what it does, but my Sony has some AI sprinkled
  | in it somewhere. Meanwhile, I just need a big dumb display.
| lawlessone wrote:
  | Funnily, game AI is often smoke and mirrors. Devs often have to
  | really dumb down the game AI to be fair to the players, and
  | they balance out the dumbing down by giving it bonuses.
  |
  | Example: game AIs often have unlimited ammo, but their bullets
  | don't hurt the player much.
| jameshart wrote:
| It's almost like 'artificial intelligence' is the simulation
| of intelligence using artifice.
|
| Do people just gloss over the 'artificial' part of 'AI'?
| RandomWorker wrote:
| I don't know who said it, but an amazing quote I love is: "they
| call it AI until it starts working, see autocomplete"
|
| I love this because when a company tells me they do AI (as a
| software engineer) they tacitly say that they have little to no
| knowledge of where they want to go or what services they will be
| offering with that AI.
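The autocomplete in that quote is a good example of "AI that started working": at its simplest it is just a prefix trie. An illustrative sketch, not any particular product's implementation:

```python
# Minimal prefix-trie autocomplete: the kind of "AI" that quietly
# became an everyday feature once it started working.
class Trie:
    def __init__(self):
        self.children: dict[str, "Trie"] = {}
        self.is_word = False

    def insert(self, word: str) -> None:
        node = self
        for ch in word:
            node = node.children.setdefault(ch, Trie())
        node.is_word = True

    def complete(self, prefix: str) -> list[str]:
        """Return all stored words starting with `prefix`, sorted."""
        node = self
        for ch in prefix:
            if ch not in node.children:
                return []
            node = node.children[ch]
        out = []
        def walk(n: "Trie", path: str) -> None:
            if n.is_word:
                out.append(prefix + path)
            for ch, child in n.children.items():
                walk(child, path + ch)
        walk(node, "")
        return sorted(out)
```

Production autocomplete adds ranking (frequency, recency, a language model), but the lookup core is this simple.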
| meowface wrote:
| Vaguely reminds me of this Steve Jobs quote (whether or not one
| agrees with him) when he met the Dropbox team: "you have a
| feature, not a product".
| makeitdouble wrote:
| This quote always struck me as a weird anti competitive flex,
| in the "we can crush you anytime" way.
|
| And Apple later released the whole iCloud suite, which made
| Dropbox a second-class citizen in the OS, even though to this
| day Dropbox works better than iCloud in many ways. We hear the
| "services revenue" drum more and more at every Apple earnings
| call, so Jobs was not wrong either.
| toomuchtodo wrote:
| Dropbox laughing in that $10B market cap and $2B+ ARR. Jobs
| was right about the concept (insert meme about Apple
| ecosystem devs realizing their product was killed by an Apple
| feature release), but wrong in that specific instance.
| throwaway290 wrote:
| Dropbox was losing money every year of its existence until
| 2021.
| viridian wrote:
| So was almost every unicorn startup. They purposefully
| aim for growth until it's unsustainable, then switch over
| to exploiting their market position. We may not like it,
| but the business model is far from novel or unexpected.
| [deleted]
| cowsup wrote:
| By design.
|
| If I gave you $250,000,000 to grow a company, and then
| next year I saw you had $250,050,000 in the bank, then
| $250,102,000 next year, and so on, I'd be pretty annoyed
| that I backed you. You have so much money you could be
| spending on hiring, development, and marketing, and
| you're instead just slowly chugging along, padding the
| corporate bank account? What am I paying you for?! Give
| me my money back.
|
| VC-backed companies that spend more than they earn aren't
| duds. It's the nature of VC-backed corporations.
| throwaway290 wrote:
| They spent it all on storage and on their new spammy-looking
| marketing emails that pester free users to upgrade, I guess.
| I don't recall anything really new from Dropbox since they
| were established.
| toomuchtodo wrote:
| I am paying Dropbox for storage and will pay them until I
| die. Rock solid sync and object durability, API access to
| my storage for my apps, no complaints whatsoever. I don't
| want new, I want storage I don't have to think about.
| xwdv wrote:
| Until _they_ die, relatively soon. 100 years from now
| Dropbox will be a distant memory, but locally mounted FTP
| directories under version control will be alive and well.
| toomuchtodo wrote:
| > but locally mounted FTP directories under version
| control will be alive and well.
|
| This might matter to you, but it does not matter to me.
| In the meantime, my life will have been better and my
| time saved. That's what the money is for. Time is non
| renewable. If you have more time and ideology than money,
| I admit your solution is a better fit for your life and
| processes. Self host if you want, I have better things to
| do personally vs cobbling together technology that I can
| buy polished for the cost of two coffees a month. There's
| a product lesson in this subthread.
| aoeusnth1 wrote:
| The goal is not to make money for the most years in a
| row.
| jjoonathan wrote:
| It's a quality zinger, but ironically the product may have
| been a subset of the feature: I'd argue the product is the
| fact that Dropbox _doesn't_ belong to a platform vendor and
| therefore can't be leveraged for anticompetitive purposes or
| lock-in.
| tomrod wrote:
| This is a beautiful way to frame it.
|
| My consulting company works with folks to take ideas that are
| solvable or usable with an ML framework and map those to
| digital services or solutions.
|
| Rarely is the end product called "<something> AI" -- but at the
| end, it's still using AI/ML.
|
| We've been around a bit longer than the current AI hype, and
| our niche is engaging and fun.
| brookst wrote:
| When someone says they "do middleware" or "do massively
| concurrent data stores", does the same observation apply?
| brudgers wrote:
| I don't know exactly where on HN I read it but it was
| "Artificial Intelligence is an ideology, not a technology."
|
| So I am wary of the means justification that AI projects
| entail.
|
| YMMV.
| tivert wrote:
| > I don't know exactly where on HN I read it but it was
| "Artificial Intelligence is an ideology, not a technology."
|
| What is this? https://www.wired.com/story/opinion-ai-is-an-
| ideology-not-a-...
|
| That's a fantastic observation. I'd even hazard to say that
| for some, Artificial Intelligence is closer to a _religion_,
| not just an ideology.
| JohnFen wrote:
| I agree, there is a rather vocal crowd of people who don't
| sound much different from evangelically-minded religious
| folk.
| potatolicious wrote:
| As someone who works in the field and works with LLMs on
| the daily - I feel like there are two camps at play. The
| field is bimodally distributed:
|
| - AI as understandable tools that power concrete products.
| There's already tons of this on the market: autocorrect,
| car crash detection, heart arrhythmia identification,
| driving a car, searching inside photos, etc. This crowd
| tends to be much quieter and occupies little of the public
| imagination.
|
| - AI as religion. These are the Singularity folks, the
| Roko's Basilisk folks. This camp regards the
| current/imminent practical applications of AI as almost a
| distraction from the true goal: the birth of a Machine-God.
| Opinions are mixed about whether or not the Machine-God is
| Good or Bad, but they share the belief that the birth of
| Machine-God is imminent.
|
| I'm being a bit uncharitable here since as someone who
| firmly belongs in the first camp I have so little patience
| for people in the second camp. Especially because half of
| the second camp was hawking monkey JPEGs 18 months ago.
| pastacacioepepe wrote:
| I've had to deal with this kind of company in FOMO mode. They
| start from a solution (AI) in search of a problem to solve,
| while the ideal approach would be the inverse.
|
| Pretty much a guarantee that a lot of money will be wasted
| while frantically iterating through pointless approaches. I figure
| this happens every time a new fundamental technology comes out,
| the dot-com bubble has probably seen many such companies.
| mistrial9 wrote:
| More money is being printed than ever before... some people
| literally have to find something to do with it... the waste in
| AI marketing is one result.
|
| AI in the digital age is uniquely disruptive, however, since
| it connects directly to the way we communicate... so there is
| some reason to be wound up by this, whatever role you are in.
| avgcorrection wrote:
| Isn't ML the only kind of successful AI?
| sirwhinesalot wrote:
| No, just that every other type of successful AI ends up being
| called something else.
| hunter2_ wrote:
| Do you have any examples? I'm charged with figuring out how
| my organization can benefit from AI, and hearing that there
| are non-ML options is very relieving.
|
| I assume it's not quite so simple as to include any
| algorithm, right? TFA even sort of refutes that idea, saying
| _I've been told that a product was "driven by AI" only to
| find out it was driven by "if-then" statements._
| progval wrote:
| PageRank (search ranking), A* algorithm (pathfinding),
| simplex (linear optimization), branch prediction (in CPUs),
| autocomplete
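Of the examples in that list, A* is easy to show end to end. A minimal sketch on a 4-connected grid with a Manhattan-distance heuristic (the grid encoding and function name are illustrative):

```python
import heapq

def astar(grid, start, goal):
    """A* on a grid of 0 (free) / 1 (wall) cells.
    Returns the shortest path length in steps, or -1 if unreachable.
    Manhattan distance is an admissible heuristic on a 4-connected grid."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]          # (f = g + h, g, node)
    best_g = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue                            # stale heap entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc)))
    return -1
```

With the heuristic set to zero this degenerates to Dijkstra's algorithm; the heuristic is what makes it "informed" search in the GOFAI sense.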
| sorenjan wrote:
| If you've used a GPS navigator, you've used AI: pathfinding
| is a type of AI. Sawmills use planning algorithms to
| extract the maximum number of useful planks from lumber;
| that's AI too.
|
| The most popular text book in the subject might be a good
| starting place: https://aima.cs.berkeley.edu/
| sirwhinesalot wrote:
| Any form of constraint solving tech. SAT solvers (used in
| hardware-synthesis, software verification, math proofs,
| etc.), Mixed Integer Solvers (usually sold for tens of
| thousands of dollars) that are used for hardcore
| optimization problems, Google's Operations Research toolkit
| (OR-tools), etc.
|
| Unlike most algorithms, these things are general purpose,
| they can solve any* NP-complete problem (*usually in a
| useful amount of time).
|
| My manager refers to these things as "Machine Reasoning" in
| contrast with "Machine Learning", since they start from the
| rules instead of from examples.
| lawn wrote:
| Not at all.
|
| AI in games (of many different types) is extremely successful.
| bitwize wrote:
| ML is the intersection of the set of successful things and the
| set of things we call AI today.
|
| There are things we used to call AI, like inference engines,
| that were and are phenomenally successful (not to mention
| easier to implement). Type inference in modern programming
| languages, for instance, still uses the GOFAI technique of
| unification to solve for unspecified types of variables in a
| program.
|
| That's why I found it funny the article said "There are
| companies claiming their products are powered by AI, when
| they're really powered by IF statements." Back in the day, AI
| was itself powered by IF statements.
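The unification mentioned above is itself a compact algorithm. An illustrative first-order version, with a made-up term encoding (strings starting with "?" are variables, tuples are constructors) and the occurs check omitted for brevity:

```python
# Illustrative first-order unification: the GOFAI technique still at
# the heart of Hindley-Milner type inference and Prolog.
# Occurs check omitted to keep the sketch short.
def walk(term, subst):
    """Follow variable bindings to the term's current representative."""
    while isinstance(term, str) and term.startswith("?") and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst=None):
    """Return a substitution dict unifying a and b, or None on failure."""
    if subst is None:
        subst = {}
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str) and a.startswith("?"):
        return {**subst, a: b}
    if isinstance(b, str) and b.startswith("?"):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None
```

Type inference solves for unknown types exactly this way, e.g. unifying a function's inferred shape `("fn", "?x", "?r")` against a known signature `("fn", "int", "int")` binds both `?x` and `?r` to `int`.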
| avgcorrection wrote:
| > There are things we used to call AI, like inference
| engines, that were and are phenomenally successful (not to
| mention easier to implement). Type inference in modern
| programming languages, for instance, still uses the GOFAI
| technique of unification to solve for unspecified types of
| variables in a program.
|
| Okay. I don't see how these application domains have anything
| to do with "AI" in the sense of something that resembles
| human reasoning in any given subarea (like pattern
| recognition, language).
___________________________________________________________________
(page generated 2023-10-11 16:00 UTC)