[HN Gopher] Don't Build AI Products the Way Everyone Else Is Doi...
       ___________________________________________________________________
        
       Don't Build AI Products the Way Everyone Else Is Doing It
        
       Author : tortilla
       Score  : 218 points
       Date   : 2023-11-10 17:20 UTC (5 hours ago)
        
 (HTM) web link (www.builder.io)
 (TXT) w3m dump (www.builder.io)
        
       | nittanymount wrote:
        | good points! :+1:
        
       | bob1029 wrote:
       | > The solution: create your own toolchain
       | 
       | No thanks. I have an actual job & customer needs to tend to. I am
       | about 80% of the way through integrating with the OAI assistant
       | API.
       | 
       | The real secret is to already have a viable business that AI can
       | subsequently _improve_. Making AI _the business_ is a joke of a
        | model to me. You'd have an easier time pitching javascript
        | frameworks in our shop.
       | 
       | Our current application of AI is a 1:1 mapping between an OAI
       | assistant thread and the comment chain for a given GitHub issue.
       | In this context of use, latency is absolutely not a problem. We
       | can spend 10 minutes looking for an answer and it would still
       | feel entirely natural from the perspective of our employees and
       | customers.
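        | 
        | Roughly, the glue is just this (a minimal sketch with the
        | openai Python SDK; the issue-to-thread mapping store and the
        | assistant id are placeholders):
        | 
        |     import openai  # v1.x SDK
        | 
        |     client = openai.OpenAI()
        | 
        |     # One assistant thread per GitHub issue; we persist the
        |     # issue-number -> thread-id mapping ourselves.
        |     def thread_for_issue(issue_number, db):
        |         if issue_number not in db:
        |             db[issue_number] = client.beta.threads.create().id
        |         return db[issue_number]
        | 
        |     def on_issue_comment(issue_number, body, db):
        |         tid = thread_for_issue(issue_number, db)
        |         client.beta.threads.messages.create(
        |             thread_id=tid, role="user", content=body)
        |         # Latency is a non-issue here, so polling the run
        |         # until it completes is fine.
        |         return client.beta.threads.runs.create(
        |             thread_id=tid, assistant_id="asst_...")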
        
         | golergka wrote:
         | > I am about 80% of the way through integrating with the OAI
         | assistant API.
         | 
          | I've been there. Turns out, the last 20% takes 10x the time
          | and effort of the first 80%.
        
           | tebbers wrote:
           | Sounds like a normal development project then.
        
             | golergka wrote:
              | Not really, no. In a normal development project, the
              | last 20% takes just as long. But AI applications are a
              | very special beast.
        
               | johnnyanmac wrote:
                | If there's anything I would not trust an AI on, it's
                | polish. It's amazing for prototyping, and has some
                | viability at scaling up an existing operation. But
                | the rough edges (literally, in some industries) are
                | the exact reason it's such a controversial tech as of
                | now.
        
             | pvorb wrote:
             | Just like the old joke: "I'm already 90 percent done, now
             | I'm going for the other 90 percent."
        
               | morkalork wrote:
               | Before we can finish automating this process, we just
               | have to automate this one other little task inside.
        
               | romanhn wrote:
               | Zeno's paradox of software development - you can complete
               | 90% of the remaining work, but you can never be fully
               | done.
        
               | j45 wrote:
               | Needs to be a sign
        
         | rvz wrote:
         | > The real secret is to already have a viable business that AI
         | can subsequently improve. Making AI the business is a joke of a
         | model to me.
         | 
          | Precisely. No doubt the tons of VC-fuelled so-called AI
          | startups that are wrappers around the ChatGPT API are
          | already getting disrupted by the platform risk OpenAI
          | poses.
          | 
          | They never learn, even though the probability of OpenAI
          | competing against its own partners is 99.99%, despite it
          | denying this a year ago.
        
           | furyofantares wrote:
            | > Precisely. No doubt the tons of VC-fuelled so-called AI
            | > startups that are wrappers around the ChatGPT API are
            | > already getting disrupted by the platform risk OpenAI
            | > poses.
           | 
           | Eh - such startups are like a year in with a small headcount
           | I'd think? They're still figuring out what they're gonna
           | build imo. I don't think I'd be sad sinking a year of
           | investment into folks who've been spending a year trying to
           | build things with this stuff even if they are forced to find
           | a new direction due to competition from the platform itself.
        
         | personjerry wrote:
         | So the secret to building a viable AI business... is to build a
         | viable business, with AI?
        
           | chasd00 wrote:
           | the secret is to already have a viable business and
            | constantly sprinkle in the latest tech trends to maintain
           | the illusion of being something fresh and new.
        
             | LtWorf wrote:
             | That's why our bakery uses AI generated blockchains :D
        
               | cj wrote:
               | My local bakery adopted a new Point of Sale that does a
               | really good job making me feel like I have to tip $2 when
               | buying a donut!
               | 
               | No blockchain or AI, but new tech (for them) nonetheless
               | :)
        
               | DonHopkins wrote:
               | My coffeeshop uses Dall-E to render flattering
               | caricatures of customers in ground chocolate and cinnamon
               | on the foamed milk. ;)
        
           | johnnyanmac wrote:
           | "what problem am I trying to solve?" if you can answer that
           | question and justify AI as an optimization (and all the gray
           | area fallouts that comes with early adoption) then you have a
           | chance at building a viable business with AI.
           | 
           | Having a solution and looking for problems to solve (or
           | create) isnt the mentality of an entrepreneur but of a
           | grifter, in my crass cynical opinion. But I can't deny that
           | you ma still make money that way.
        
             | iinnPP wrote:
             | Having a solution to an unknown problem and working towards
             | finding a problem the solution fills can be rewritten as:
             | Having a problem and looking for a solution.
             | 
             | Calling that grifting is strange.
        
               | johnnyanmac wrote:
               | I did say it was a crass opinion.
               | 
               | But just because the audience doesn't know the problem
               | doesn't mean you (the entrepreneur) don't ask the
               | question. I'm sure that not many people were asking for
               | faster horse buggies in the late 19th century, but you
               | certainly ask it and try to find a solution. Note that
               | the problem doesn't have to be pressing to be asked.
        
             | narag wrote:
             | _" what problem am I trying to solve?" if you can answer
             | that question and justify AI as an optimization..._
             | 
             | Replace "AI" with "a machine" and you've just define
             | Industrial Revolution.
             | 
             |  _Having a solution and looking for problems to solve (or
             | create) isnt the mentality of an entrepreneur but of a
             | grifter_
             | 
             | Why? If the steam engine had just been invented, would it
             | be only justified to use it for whatever problem the
             | original inventor had conceived it?
        
           | happytiger wrote:
            | AI as a tool of the business, not AI as the business.
           | 
           | It's pretty straightforward.
        
           | vasco wrote:
           | They mean, don't build an email summarizer.
           | 
            | Instead, if you already run an email service successfully on
           | its own, you can easily include email summaries that are
           | better due to AI.
        
           | j45 wrote:
           | Sometimes I see a request to create an AI product, when
           | normal code works fine and it's already solved... except they
           | might not be aware.
        
         | Bjartr wrote:
         | I slightly disagree. I think a business can have an AI focus
         | rather than it being mere improvement over an already viable
         | business if, without AI, the business model can't succeed due
         | to excessive costs of scaling to serve enough users to have
         | sufficient revenue. There are some cases like that where adding
         | AI makes that previously non-viable business model viable
         | again.
        
           | lazide wrote:
           | Sounds risky - if you can't make a big enough improvement,
           | you're SOL. In their case, they keep making money regardless
           | and just make more and more if they get better?
        
         | CobrastanJorji wrote:
         | > Making AI the business is a joke of a model to me.
         | 
         | I don't think AI businesses are jokes, so long as you're
         | selling a platform or a way to customize AI to some specific
         | need or hardware. AI is a gold rush, and the most reliable way
         | to get rich in a gold rush is to sell shovels.
         | 
         | But if you want to make money from actually using AI yourself,
         | then yeah, you've gotta have a business that AI makes better.
        
         | wokwokwok wrote:
         | Did you read the article or are you responding to what you
         | imagine it says?
         | 
         | > a whole toolchain of specialized models, ... all of these
         | specialized models are combined with tons of just normal code
         | and logic that creates the end result
         | 
         | They are not referring to a toolchain as "write a compiler".
         | 
         | They are referring to it as "fine tune models with specific
         | purposes and glue them together with normal code".
         | 
          | It's a no-brainer that any startup that _doesn't_ do this
          | is a thin wrapper around the OpenAI API, has zero moat,
          | and is therefore:
          | 
          | A) deeply vulnerable to having any meaningful product
          | copied by others (including OpenAI)
          | 
          | B) lazy AF, now that fine-tuning is so simple to do.
          | 
          | C) going to be technically outcompeted by competitors,
          | because fine-tuned models _are better_.
          | 
          | D) therefore, probably doomed.
         | 
         | > The most important thing is to not use AI at first.
         | 
         | > Explore the problem space using normal programming practices
         | to determine what areas need a specialized model in the first
         | place.
         | 
         | > Remember, making "supermodels" is generally not the right
         | approach.
         | 
         | This is good advice.
         | 
         | > The real secret is to already have a viable business that AI
         | can subsequently improve
         | 
          | You realise that what _you_ said is the equivalent of what
          | _they_ said, which is: use AI to solve problems, rather
          | than slapping it on meaninglessly.
        
           | johnsonjo wrote:
           | I don't really have much beef with your comment as it has
           | pretty substantive points, but I just wanted to remind and
           | let everybody know about a Hacker News guideline outlined on
           | their guidelines under the comments section. Sorry, I just
           | recently re-read the guidelines, so I thought I might point
           | others to it too. I honestly believe there are a lot more
           | people breaking all these guidelines on this site, so the
           | whole thing is a good read for anyone uninformed, and yes
           | there are definitely more egregious breakages of guidelines
           | elsewhere.
           | 
           | > Please don't comment on whether someone read an article.
           | "Did you even read the article? It mentions that" can be
           | shortened to "The article mentions that". [1]
           | 
           | I just mainly brought this one up, because I see it come up
           | often, and because I didn't even notice it was really a
           | violation until I reread the guidelines the other day.
           | 
           | [1]:
           | https://news.ycombinator.com/newsguidelines.html#comments
        
           | bob1029 wrote:
           | > They are referring to it as "fine tune models with specific
           | purposes and glue them together with normal code".
           | 
           | > will be technically out competed by their competitors
           | because fine tuned models _are better_.
           | 
           | I disagree that fine tuning is the way to go. We spent a
           | large amount of effort on that path and found it to be
           | untenable for our business cases - not from an academic
           | standpoint, but from a practical data management/discipline
           | standpoint. For better or worse, we don't have super clean,
           | structured data about our business. We also aren't big enough
           | to run a full-time data science team.
           | 
           | Picking targeted feature verticals and applying few-shot
           | learning w/ narrowly-scoped, dynamic prompts seems to give us
           | a lot more value per $$$ and unit time. For us, things like
           | the function calling API _are_ fine-tuning, because we can
           | now insist that we get a certain shape of response.
           | 
           | I have a hard time squaring an implied, simultaneous
           | agreement with "supermodels are generally not the right
           | approach" and "fine tuned models are better". These ideas
           | seem (to me) to be generally at odds with one another. Few-
           | shot learning is still the real magic trick in my book.
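            | 
            | Concretely, that "function calling as fine-tuning" looks
            | roughly like this (a sketch with the openai Python SDK;
            | the schema and names are illustrative, not our actual
            | ones):
            | 
            |     import openai
            | 
            |     client = openai.OpenAI()
            | 
            |     # The JSON-schema parameters insist on a fixed
            |     # response shape instead of free-form text.
            |     resp = client.chat.completions.create(
            |         model="gpt-4",
            |         messages=[{"role": "user",
            |                    "content": "Payment page times out."}],
            |         functions=[{
            |             "name": "record_ticket",
            |             "description": "Record a classified ticket",
            |             "parameters": {
            |                 "type": "object",
            |                 "properties": {
            |                     "category": {"type": "string"},
            |                     "severity": {"type": "integer"},
            |                 },
            |                 "required": ["category", "severity"],
            |             },
            |         }],
            |         function_call={"name": "record_ticket"},
            |     )
            |     args = resp.choices[0].message.function_call.arguments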
        
         | nostromo wrote:
         | Just be prepared for OpenAI to pull the rug out from under you
         | at some point... and probably sooner than you realize.
         | 
         | This is always the approach in our industry. During the land
         | rush, you offer very affordable, very favorable terms for
         | people building on your stack.
         | 
         | When they've wiped out most of the competition and have massive
         | marketshare -- they shift from land-rush mode to rent-seeking
         | mode, and your business is either dead entirely or you now live
         | as a sharecropper.
        
           | j45 wrote:
            | It's why being as platform-independent as possible is
            | critical. Even if it's an app in someone's store, it
            | shouldn't run on just their stuff.
        
           | baxtr wrote:
           | Who cares if you don't have any paying customers?
        
           | happytiger wrote:
           | If you don't own the API AND the customer, you don't own
           | anything: you rent.
        
         | elorant wrote:
         | Sure, and then you'll wake up one morning and OpenAI will
         | either have eaten your lunch, or quadrupled their prices
         | because they'd have achieved wide adoption.
        
       | mmoustafa wrote:
       | Great tips. I tried to do this with SVG icons in
       | https://unstock.ai before a lot of people started creating text-
       | to-vector solutions. You also have to keep evolving!
        
         | ShamelessC wrote:
         | Would it not make sense to use a text to image generator, then
         | convert the image to svg using normal methods?
        
       | JSavageOne wrote:
       | > "One way we explored approaching this was using puppeteer to
       | automate opening websites in a web browser, taking a screenshot
       | of the site, and traversing the HTML to find the img tags.
       | 
       | > We then used the location of the images as the output data and
       | the screenshot of the webpage as the input data. And now we have
       | exactly what we need -- a source image and coordinates of where
       | all the sub-images are to train this AI model."
       | 
       | I don't quite understand this part. How does this lead to a model
       | that can generate code from a UI?
        
         | obmelvin wrote:
         | If I'm understanding correctly, they are talking about how they
         | are solving very specific problems with their models.
         | 
          | In this case, if you look two images up you will see an
          | e-commerce image with many images composited into one
          | image/layer. How will their system automatically decide
          | whether all those should be separate images/layers or one
          | composited image? To do so, they trained a model that
          | examines web pages and <img> tags and sees their location.
          | Basically, they are working under the assumption that
          | their data reflects good decisions, so you can learn in
          | which cases people use multiple images vs one.
         | 
         | I could be misunderstanding :)
        
         | mnutt wrote:
         | They have a known system that can go from specified coordinates
         | to images in the form of puppeteer (chromium) and so they can
         | run it on lots of websites to generate [coordinates, output
         | image] pairs to use for training data. In general, if you have
         | a transform and input data, you can use it to train a model to
         | learn the reverse transform.
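          | 
          | Generating those pairs is cheap. A sketch of the idea with
          | Playwright's Python API (the article used puppeteer, but
          | the mechanics are the same):
          | 
          |     from playwright.sync_api import sync_playwright
          | 
          |     # screenshot = model input; <img> bounding boxes =
          |     # labels for learning the reverse transform.
          |     def make_training_pair(url, out_png):
          |         with sync_playwright() as p:
          |             browser = p.chromium.launch()
          |             page = browser.new_page()
          |             page.goto(url)
          |             page.screenshot(path=out_png, full_page=True)
          |             boxes = page.eval_on_selector_all(
          |                 "img",
          |                 """els => els.map(e => {
          |                     const r = e.getBoundingClientRect();
          |                     return [r.x, r.y, r.width, r.height];
          |                 })""")
          |             browser.close()
          |         return out_png, boxes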
        
       | BillFranklin wrote:
       | This is a nice post, and I think it will resonate with most new
       | AI startups. My advice would be don't build an AI product at all.
       | 
       | To my mind an "x product" is rarely the framing that will lead to
       | value being added for customers. E.g. a web3 product, an
       | observability product, a machine vision product, an AI product.
       | 
       | Like all decent startup ideas the obviously crucial thing is to
       | start with a real user need rather than wanting to use an
       | emerging technology and fit it to a problem. Developing a UI for
       | a technology where expectations are inflated is not going to
       | result in a user need being met. Instead, the best startups will
       | naturally start by solving a real problem.
       | 
        | Not to hate on LLMs, since they are neat, but I think most
        | people I know offline hate interacting with chatbots as
        | products. Regardless of quality, bots are rarely as good as
        | speaking with a real human being. For instance, I recently
        | moved house and
       | had to interact with customer support bots for energy / water
       | utilities and an ISP, and they were universally terrible. So
       | starting with "gpt is cool" and building a customized chatbot is
       | to my mind not going to solve a real user need or result in a
       | sustainable business.
        
          | fillskills wrote:
          | This. Avoid the "if all you have is a hammer, everything
          | looks like a nail" strategy. Find a customer pain point
          | and use the right blend of tools. You can also make a new
          | tool where none exists.
        
         | duped wrote:
         | I think it's a great idea if you're a serial founder and want
         | some money for the idea that's going nowhere. People love to
         | throw money at buzzwords.
         | 
         | A couple of years ago it was blockchain. Not sure what the next
         | one is, but I already see all the "technologists" in my
         | LinkedIn network have pivoted from crypto startups to AI
         | startups.
        
         | threeseed wrote:
         | > I think most people I know offline hate interacting with chat
         | bots as products
         | 
         | It's hilarious to me when people are bringing back chat bots as
         | a concept.
         | 
         | We had chat bots a few years ago and it was something that
         | almost all larger companies had built strategies around. The
         | idea being that they could significantly reduce call centre
         | staff and improve customer experience.
         | 
          | And it wasn't just that the quality of the conversations
          | was poor; for many users, it's the human connection of
          | being listened to that is important, not just getting an
          | answer to their problem.
        
       | adriancooney wrote:
       | With the pace of AI, that (large) investment into a custom
       | toolchain could be obsolete in a year. It feels like ChatGPT is
       | going to gobble up all AI applications. Data will be the only
       | differentiator.
        
         | dontupvoteme wrote:
         | Not unless your toolchain is highly specialized.
         | 
         | There's not even a good way to _benchmark_ language models at
         | the moment.
        
       | infixed wrote:
        | I think the prose in the preamble is a bit overly flowery
        | and heavy-handed (e.g. LLMs really aren't that expensive, I
        | very much doubt the WSJ claim that Copilot is losing money
        | per user, LLMs aren't always "painfully slow", etc.)
       | 
       | Having said that, the actual recommendations the article offers
       | are pretty reasonable:
       | 
       | - Do as much as you can with code
       | 
       | - For the parts you can't do with code, use specialized AI to
       | solve it
       | 
       | Which is pretty reasonable? But also not particularly novel.
       | 
        | I was hoping the article would go into more depth on how to
        | make an AI product that is actually useful and good. As far
        | as I can tell, there have been a lot of attempts (e.g. the
        | recent Humane launch), but not a whole lot of successes yet.
        
       | ravenstine wrote:
       | I appreciate the overall sentiment of the post, but I can't say I
       | would choose anything like the implementation the author is
       | suggesting.
       | 
        | My takeaway is to avoid relying too heavily on LLMs, both in
        | terms of the scope of tasks given to them and in terms of
        | depending on any specific LLM. I think this is correct for
        | many reasons.
       | Firstly, you probably don't want to compete directly with
       | ChatGPT, even if you are using OpenAI under the hood, because
       | ChatGPT will likely end up being the better tool for very
       | abstract interaction in the long run. For instance, if you are
       | building an app that uses OpenAI to book hotels and flights by
       | chatting with a bot, chances are someday either ChatGPT or
       | something by Microsoft or Google will do that and make your puny
       | little business totally obsolete. Secondly, relying too heavily
       | on SDKs like the OpenAI one is, in my opinion, a waste of time.
       | You are better off with the flexibility of making direct calls to
       | their REST API.
       | 
       | However, should you be adding compilers to your toolchain? IMO,
       | any time you add a compiler, you are not only liable to add a
        | bunch of unnecessary complexity but also making yourself
        | _dependent_ upon some tool. What's particularly bad about the
       | author's example is that it's arguably completely unnecessary for
       | the task at hand. What's so bad about React or Svelte that you
       | want to use a component cross-compiler? That's a cool compiler,
       | but it sounds like a complete waste of time and another thing to
       | learn for building web apps. I think every tool has its place,
       | but just "add a compiler, bruh" is terrible advice for the target
       | audience of this blog post.
       | 
       | IMO, the final message of the article should be to create the
       | most efficient toolchain for what you want to achieve. Throwing
       | tools at a task doesn't necessarily add value, nor does doing
        | what everyone else is doing necessarily add value; and
        | either can be counterproductive not just in LLM app
        | integration but in software engineering in general.
       | 
       | Kudos to the author for sharing their insight, though.
        
         | reason5531 wrote:
         | I agree with this. I do like the general points about AI in the
         | original post but writing your own compiler doesn't seem like
         | the best solution. Sure, it's unique and people can't just copy
         | it but it will also be a massive amount of work to maintain it,
         | considering all the languages it supports. For me this
         | additional layer of abstraction does have a bit of the
         | 'factory-factory-factory' vibe.
        
         | lamontcg wrote:
          | > What's particularly bad about the author's example is
          | > that it's arguably completely unnecessary for the task
          | > at hand.
         | 
         | I entirely missed the compiler on the first read through and I
         | don't know why so many commenters are fixated on that
         | specifically. That wasn't what the blog post was actually
         | about.
        
       | danenania wrote:
       | This is a thought-provoking post and I agree with the "avoid
       | using AI as long as possible" point. AI is best used for things
        | that can _only_ be accomplished with AI--if there's any way to
       | build the feature or solve the problem without it, then yeah, do
       | that instead. Since everyone now has more or less equal access to
       | the best models available, the best products will necessarily be
       | defined by everything they do that's _not_ AI--workflows, UIs,
       | UX, performance, and all that other old-fashioned stuff.
       | 
       | I'm not so sure about the "train your own model" advice. This
       | sounds like a good way to set your product up for quick
       | obsolescence. It might differentiate you for a short period of
       | time, but within 6-12 months (if that), either OpenAI or one of
       | its competitors with billions in funding is going to release a
       | new model that blows yours out of the water, and your
       | "differentiated model" is now a steaming pile of tech debt.
       | 
       | Trying to compete on models as a small startup seems like a huge
       | distraction. It's like building your own database rather than
       | just using Postgres or MySQL. Yes, you need a moat and a product
       | that is difficult to copy in some way, but it should be something
       | you can realistically be the best at given your resources.
        
         | JamesBarney wrote:
         | 100%, worked with a founder who thought the AI hype was
         | overblown 5 years ago so focused on "workflows, UIs, UX,
         | performance, and all that other old-fashioned stuff" while all
         | his competitors focused on building AI models. Then ChatGPT
          | came out and all his competitors' work was instantly obsolete,
         | and he could achieve AI feature parity in weeks.
         | 
         | He was right about what to build but for the wrong reasons and
         | it's been a huge boon to his business.
        
           | fest wrote:
           | Why are these reasons wrong? I have a similar attitude in my
           | field (UAVs), where many desperately chase the next whizzbang
           | technology, ignoring the boring, old fashioned stuff
           | (workflows, UX, 3rd party addons).
        
         | SomeoneFromCA wrote:
          | Not doing so will produce the same boring ChatGPT-based
          | bot everyone has already seen and tired of. I mean,
          | differentiation is actually quite an important thing.
        
           | danenania wrote:
           | That's why you should spend your time building a great
           | product rather than an also-ran model. While you're working
           | on your model, your competitors are working on their
           | products. In a few months when your model is made obsolete by
           | the latest OpenAI release, your competitors will have a
           | better product _and_ a better model than you.
        
             | SomeoneFromCA wrote:
              | This is a persistent but in fact poorly justified
              | opinion. First of all, according to the linked
              | article, they were able to make something much better
              | and cheaper than ChatGPT (for their task, obviously)
              | in quite a short period of time. What stops them from
              | pulling off the same feat again? What makes you think
              | that the same boring, bland OpenAI wrappers are going
              | to take off at all, let alone survive until the next
              | OpenAI iteration? People are not stupid; they can see
              | that the product you are offering is essentially a
              | low-effort wrapper they have already seen, so why
              | would they choose your product at all?
        
               | danenania wrote:
               | You seem to be conflating a "boring" product with using
               | OpenAI models, but the two have nothing to do with each
               | other. 99% of users don't care what models you're using
               | underneath. They only care how well the product works.
               | 
               | "What stops them from making the same feat again?"
               | 
               | Hopefully nothing, for their sake, because they're going
               | to have to do it again and again to keep up.
               | 
               | Look, I'm not saying this specific product was wrong to
               | build their own models for certain tasks. If it works for
               | their product and they're getting users, then bully for
               | them. I just don't think it's great general advice. I
               | also think it provides a lot less long-term
               | differentiation and competitive edge than the author of
               | the post seems to think.
        
       | dmezzetti wrote:
       | There is so much available in the open model world. Take a look
       | at the Hugging Face Hub - there are 1000s of models that can be
       | used as-is or as a starting point.
       | 
       | And those models don't have to be LLMs. It's still a valid
       | approach to use a smaller BERT model as a text classifier.
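        | 
        | With the transformers library, for example, a classifier is
        | a few lines (the model name is just one public example):
        | 
        |     from transformers import pipeline
        | 
        |     # A small pre-trained classifier; no LLM involved.
        |     classify = pipeline(
        |         "text-classification",
        |         model="distilbert-base-uncased-finetuned-sst-2-english")
        | 
        |     classify("This product is fantastic")
        |     # -> [{'label': 'POSITIVE', 'score': ...}]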
        
       | cryptoz wrote:
       | > When passing an entire design specification into an LLM and
       | receiving a new representation token by token, generating a
       | response would take several minutes, making it impractical.
       | 
       | Woe is me, it takes minutes to go from user-designed mockup to
       | real, high-quality code? Unacceptable, I tell you!
       | 
       | But seriously, if there are speed improvements that you can make
       | and are on the multiple-orders-of-magnitude then I do get it,
       | those improvements are game-changing. But also, I think we're
       | racing too quickly with expectations here; where minutes is
       | unacceptable now when it used to take a human days? I mean,
       | minutes is still pretty good! IMO.
        
         | willsmith72 wrote:
         | From experience developing with GPT4, minutes would be too long
         | for me.
         | 
         | I use it for exactly this use-case, converting mockups to code,
         | but you need short feedback loops.
         | 
         | It will get things wrong. There'll be things it misunderstood,
         | or small tweaks you realise you need after it's done its first
         | job. Or maybe it misunderstood part of your design, or just
         | needs extra prompting (ALL CAPS for emphasis, for example).
         | 
         | Even after multiple iterations it will extremely rarely be
         | perfect, which is fine, because once it has a decent readable
         | solution, you can obviously take ownership of it for yourself.
         | 
         | Where minutes might be fine would be in a "handoff" workflow,
         | where designers do design and then handoff to devs. 10 minutes
         | in between of AI processing to get something for the dev to
          | start on would be acceptable, and the dev could then take
          | that first attempt and refine it a bit using GPT4. But I
          | don't really like handoff teams anyway...
        
       | nothrowaways wrote:
       | Please tell it to Google search.
        
       | nothrowaways wrote:
       | Google search dearly needs this advice.
        
       | andix wrote:
        | I think AI will soon be built into a lot of different
        | software. This is when it will really get awesome and scary.
        | 
        | One simple example is e-mail clients. Somebody asks for a
        | decision or clarification. The AI could extract those
        | questions and just offer some radio buttons, like:
        | 
        |     Accept suggested appointment times:
        |     [Friday 10:00] [Monday 11:30] [suggest other]
        | 
        |     George wants to know if you are able to present
        |     the draft: [yes] [no]
       | 
        | I think Zendesk (ticketing software for customer support)
        | already has some AI available. A lot of support requests are
        | probably already answered (mostly) automatically.
        | 
        | Human resources could use AI to screen job applications, let
        | an AI research additional information about the applicant on
        | the internet, and then create standardized database entries
        | (which may be very flawed).
       | 
        | I think those kinds of applications are the interesting
        | ones. Not another ChatGPT extension/plugin.
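        | 
        | The extraction step could be a single structured LLM call (a
        | rough sketch with the openai Python SDK; the prompt and the
        | JSON shape are made up):
        | 
        |     import json
        |     import openai
        | 
        |     client = openai.OpenAI()
        | 
        |     def extract_questions(email_body):
        |         # Ask for machine-readable decision points; the mail
        |         # client then renders each option as a button.
        |         resp = client.chat.completions.create(
        |             model="gpt-4",
        |             messages=[
        |                 {"role": "system", "content":
        |                  "Extract every question or decision in the "
        |                  "email as JSON: "
        |                  '[{"question": str, "options": [str]}]'},
        |                 {"role": "user", "content": email_body},
        |             ],
        |         )
        |         return json.loads(resp.choices[0].message.content)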
        
         | alchemist1e9 wrote:
         | I'm trying to build that already personally. My plan is mbsync
         | to Maildir storage, then process all emails using Haystack.
         | Then trigger a pipeline on each new email with the goal of
         | proposing some actions.
         | 
         | Still bouncing around various approaches in my head, but all
         | seems very doable already.
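          | 
          | The Maildir side at least is trivial with the stdlib
          | (sketch; propose_actions stands in for the whole Haystack
          | pipeline):
          | 
          |     import mailbox
          | 
          |     # Walk the Maildir that mbsync keeps in sync and hand
          |     # each unseen message to the AI pipeline.
          |     def process_new_mail(path, propose_actions):
          |         for msg in mailbox.Maildir(path):
          |             if "S" not in msg.get_flags():  # not seen yet
          |                 propose_actions(msg["subject"], msg)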
        
           | andix wrote:
           | Personally I would put this functionality into an email
           | client.
        
             | alchemist1e9 wrote:
             | I was leaning towards the opposite and thinking about some
             | way to support many email clients by perhaps leveraging
             | IMAP to store drafts generated by the AI backend.
             | 
              | Another idea I had was to output its results into a
              | ticketing system, allowing it to attach related
              | documents and information it finds for review by a
              | human, and to provide optional pre-configured actions.
        
       | adrwz wrote:
       | Feels like a little too much engineering for an MVP.
        
         | thuuuomas wrote:
         | LLM codegen should herald the death of the MVP. It's time to
         | solve problems well.
        
           | dumbfounder wrote:
           | This doesn't make any sense. LLMs help you get to an MVP much
           | faster, then if you want to take it further you can go deeper
            | and make the solution more robust. Don't solve problems
            | well that you don't yet know you need to solve well.
            | That would be premature optimization.
        
       | yieldcrv wrote:
       | > They use a simple technique, with a pre-trained model, which
       | anyone can copy in a very short period of time.
       | 
       | This article acts like the risk was something the creators cared
       | about
       | 
       | all they wanted was some paid subscribers for a couple months, or
       | some salaries paid by VCs for the next 18 months
       | 
       | in which case, mission accomplished for everyone
        
       | hubraumhugo wrote:
       | As in every hype cycle: When all you have is a hammer, everything
       | looks like a nail. A little while ago the hammer was blockchain,
       | now it's AI.
        
         | DonHopkins wrote:
         | Blockchain isn't anywhere near as useful as a hammer. If you
         | were gullible enough to fall for the promises of the
         | blockchain, then of course you're going to be disappointed that
         | AI isn't a get-rich-quick pyramid scheme that will make you
         | millions of dollars without any investment of time or energy or
         | original thought, because if that's all you're looking for,
         | you're going to be continuously disappointed (and deserve to
         | be). There's a lot more to AI than to the blockchain, and
          | comparing the two as equals shows you don't understand
          | what either of them is.
        
       | liuliu wrote:
        | Very similar sentiment when the App Store was built.
        | Everyone tried to avoid hosting their business on someone
        | else's platform. Hence FB tried to do H5 with their app (so
        | it was open-standard, a.k.a. web, based), people launched
        | their own mobile phones, etc.
        | 
        | At the end of the day, having an app in the App Store is OK
        | as long as you can accumulate something the platform company
        | cannot access (social network, driver network, etc).
        | OpenAI's thing is too early, but similar thinking might be
        | applicable there too.
        
       | wg0 wrote:
       | I've survived "What is your organization's Kubernetes strategy."
       | 
       | And then came along "You need to integrate Blockchain in your
       | business processes."
       | 
       | And now is the time for "make your products smart with AI"
       | season.
        
         | toyg wrote:
         | To be fair, LLMs as a tech are actually useful, even just to
         | take human input.
         | 
          | Blockchain and K8s though... Eh. Geekery for geekery's sake.
        
           | elpakal wrote:
           | I think we'll find that, when the dust settles, AI's
           | usefulness wasn't as impactful as we thought. AI is only as
           | good as its data and no meaningful dataset is 100%
           | representative and 100% accurate IMHO. Don't get me wrong--
           | it's neat and all, and it can be somewhat useful in the right
           | context, but the hype is huge. Much bigger than Kubernetes
           | ever saw and probably bigger than blockchain.
        
             | brandall10 wrote:
             | There's a huge win to be had though with "this is my
             | natural language data, give me natural language insights at
             | a 100 foot view, at a 50,000 foot view, etc". This is one
             | of the big pulls OpenAI is hoping for w/ GPTs/Assistants.
        
               | krainboltgreene wrote:
               | Too bad this costs a few million (only if you're using
               | insanely subsidized hardware).
        
             | fkyoureadthedoc wrote:
             | > data and no meaningful dataset is 100% representative and
             | 100% accurate IMHO
             | 
             | And? What point are you trying to make here?
        
               | threeseed wrote:
                | People seem to think that LLMs are going to be this
                | source of truth that everyone can rely on to give
                | them the information they need. For certain use
                | cases, e.g. text-based ones, that's fine because you
                | can tolerate inaccuracies.
               | 
                | But I have worked at highly regulated finance
                | companies that aren't interested in LLMs at all,
                | because their business can't tolerate a model
                | returning a figure or calculation that is inaccurate.
        
               | nasir wrote:
               | Your point is still not clear
        
               | elpakal wrote:
               | That it will be inaccurate. A lot.
        
               | stocknoob wrote:
               | This drug only helps 80% of patients, it's useless.
        
               | elpakal wrote:
               | This drug only killed 20% of patients. We should sell it.
        
           | wg0 wrote:
            | I don't doubt that. I use LLMs myself, but the major
            | problem that has remained unsolved for a while is that
            | they can't be blindly relied upon: they don't have any
            | knowledge; rather, they mimic what knowledge might look
            | like.
           | 
            | I find LLMs so useful for myself because in some areas I
            | have developed expertise, and even with a wrong LLM
            | output, I can manually make a few tweaks to make it work.
           | 
            | But the same can't be said for an LLM bot meant for SAP
            | or Netsuite -- that it'll reliably guide a user to a
            | correct answer.
           | 
            | There you still need a real expert, who is going to be
            | way, way slower than an LLM but with a way, way higher
            | accuracy rate, in the ballpark of 98.9% or above.
           | 
            | And that's where an LLM with your own or a rented
            | toolchain doesn't make much sense. For many use cases.
            | Yet.
        
             | visarga wrote:
             | I would generalize that as "no LLM can surpass domain
             | experts on any task, yet"
             | 
              | The only superhuman AIs are very narrow, done by
              | DeepMind - AlphaZero and AlphaFold, and they don't
              | train on language
        
           | hagbarth wrote:
           | Yes they are useful. K8S is also obviously useful.
           | 
           | Doesn't mean they are useful for everything.
        
           | threeseed wrote:
           | I've worked for a number of enterprise companies. All have
           | moved to Kubernetes.
           | 
           | And it's not just geekery by developers who aren't as smart
           | as you.
           | 
           | It's because it allows you to treat all of your
           | infrastructure in one way. Whether you are on GCP, AWS or On-
           | Premise, whether you use Java, Spark, Web Serving or ML
           | Training, whether you are deploying direct to Production or
           | go through multiple staging environments etc. It is always
           | one way of deploying things, one way of securing things, one
           | way of doing everything.
           | 
           | It is far cheaper, easier, more secure and less risky than
           | managing infrastructure yourself. And believe me we all tried
           | that.
        
             | MajimasEyepatch wrote:
             | Seriously. People make Kubernetes out to be this wildly
             | complicated technology, but if your environment is more
             | complex than just "Here's a glorified VM running on a
             | couple EC2 instances behind a load balancer," it has a lot
             | of benefits and is really not that difficult to set up in
             | this day and age.
        
             | tryauuum wrote:
              | Do you run VMs inside Kubernetes as well? I know this
              | is possible, but I wonder what the actual experience
              | is like.
        
           | DonHopkins wrote:
            | Bad comparison of blockchain and k8s. When you actually
            | start doing something of non-trivial complexity and
            | non-toy scale, you're faced with huge problems that k8s
            | and terraform practically solve. Whereas blockchain is
            | only a solution in search of problems that nobody
            | actually has.
        
           | krainboltgreene wrote:
           | This is what each of those trends said.
        
         | dist-epoch wrote:
         | https://en.wikipedia.org/wiki/Problem_of_induction
        
       | danielmarkbruce wrote:
       | People are overthinking this from a competitive perspective.
       | Create something that isn't easy to replicate - there are several
       | ways to do that, but it's the only rule required from a
       | competitive perspective.
        
         | aabhay wrote:
         | 1000%. If your business case involves technical differentiation
         | then build an AI stack. If your differentiation is something
         | that can't be replicated by someone else using OAI then you're
         | in the clear to use OAI. If your only differentiation is that
         | you use OAI... well you're hosed anyway.
        
           | danenania wrote:
           | Yeah, though it's also fine to start as a wrapper and iterate
           | your way into differentiation. That's something people seem
           | to often be missing when disparaging these wrapper products.
           | Like yeah, perhaps the initial version of the business will
           | be disrupted and they'll need to pivot, but if they got a
           | million users in the meantime, they are a lot more likely to
           | iterate toward PMF than someone starting from scratch.
        
       | sanitycheck wrote:
       | The tech is moving incredibly fast, I think at the moment putting
       | minimal effort into some sort of OAI API wrapper is precisely the
       | right thing to do for most companies whose AI business case is
       | 90% "don't be seen to get left behind".
        
       | ge96 wrote:
        | Man, I saw this product recently that was like "Use AI for
        | SEO, everything else sucks", at $3K MRR. I feel like people
        | can just make things up: hype, some landing page, people buy
        | it, get burned, that company disappears.
        
       | jumploops wrote:
       | To preface, I largely agree with the end state presented here --
       | we use LLMs within a state machine-esque control flow in our
       | product. It's great.
       | 
       | With that said, I disagree with the sentiment of the author. If
       | you're a developer who's only used the ChatGPT web UI, you should
       | 100% play with and create "AI wrapper" tech. It's not until you
       | find the limits of the best models that you start to see how and
       | where LLMs can be used within a traditional software stack.
       | 
       | Even the author's company seems to have followed this path, first
       | building an LLM-based prototype that "sort of" worked to convert
       | Figma -> code, and then discovering all the gaps in the process.
       | 
       | Therefore, my advice is to try and build your "AI-based trading
       | card grading system" (or w/e your heart desires) with e.g.
       | GPT-4-Vision and then figure out how to make the product actually
       | work as a product (just like builder.io).
        
       | j45 wrote:
        | Shortcuts in an early product can definitely affect
        | flexibility as new things keep arriving to try out, further
        | handcuffing things.
        | 
        | I love speed and frequency of shipping, but thinking about
        | things just a bit (not too much) doesn't always hurt.
       | 
        | Sometimes simple means using a standard, saving your
        | innovation points for the insights you want to implement.
        | 
        | Otherwise innovation points get burnt on building and
        | maintaining infrastructure instead of on the insight that
        | arrives.
       | 
        | Finding a sweet spot between too little and too much tooling
        | is akin to starting with vanilla javascript to learn the
        | value of libraries, and then frameworks, in that order,
        | rather than just jumping into frameworks.
        
       | own2pwn wrote:
        | GitHub actually denied that they're losing money on Copilot:
        | https://twitter.com/natfriedman/status/1712140497127342404
        
       | aftoprokrustes wrote:
       | > That car driving itself is not one big AI brain.
       | 
        | > Instead, a whole toolchain of specialized models, all
       | connected with normal code -- such as models for computer vision
       | to find and identify objects, predictive decision-making,
       | anticipating the actions of others, or natural language
       | processing for understanding voice commands -- all of these
       | specialized models are combined with tons of just normal code and
       | logic that creates the end result -- a car that can drive itself.
       | 
        | Or, as I like to put it: what we now call "AI" actually refers to
       | the "dumb" part (which does not mean easy or simple!) of the
       | system. When we speak of an intelligent human driver, we do not
       | mean that they are able to differentiate between a stop sign and
       | a pigeon, or understand when their partner asks them to "please
       | stop by the bakery on the way home" -- we mean that they know
       | what decision to take based on this data in order to have the
       | best trip possible. That is, we refer to the part done with "tons
       | of normal code", as the article puts it.
       | 
       | Needless to say, I am not impressed by the predictions of "AI
       | singularity" and whatever other nonsense AI evangelists try to
       | make us believe.
        
       | fullofdev wrote:
        | I think in the end, it comes down to "is it helpful for the
        | customer or not"
        
       | yagami_takayuki wrote:
       | I feel like chat with a pdf is the easiest thing to integrate
       | into various niches -- fitness, nutrition, so many different
       | options
        
       | tqi wrote:
       | This post seems pretty focused on the How of building AI
       | products, but personally I think that whether or not an "AI
       | product" succeeds or fails mostly wont come down to
       | differentiation / cost / speed / model customization, but rather
       | whether it is genuinely useful.
       | 
       | Unfortunately, most products I've seen so far feel like solutions
       | in search of problems. I personally think the path companies
       | should be taking right now is to identify the most tedious and
       | repetitive parts of using the product and looking for ways that
       | can be reliably simplified with AI.
        
       | atleastoptimal wrote:
        | This makes sense because the Figma -> code conversion is
        | very programmatic. For anything more semantic or vaguer in
        | approach, a heavier dependence on LLMs might be needed until
        | the infrastructure matures.
        
       | digitcatphd wrote:
       | IMO the counter argument is to initially rely on commercial
       | models and then make it an objective to swap them out.
        
       | happytiger wrote:
        | The issue here isn't AI, it's not shovels and gold rushes,
        | and it's not about building the way others are doing it.
       | 
       | It's fundamental value.
       | 
       | It's who is creating value that cannot be destroyed. Who owns the
       | house is determined by who builds the foundation first, and that
       | means those that control the ecosystems.
       | 
       | All others will play, survive, rent, and buy inside of those
       | ecosystems.
       | 
        | If you're not building fundamental value, you are an
        | intermediary; intermediaries may be huge companies, but
        | ultimately they are companies built on others. If you don't
        | own the API _and_ the customer, you're a renter. And renters
        | can get evicted.
       | 
        | Those opportunities may still be worth chasing, but we
        | shouldn't get confused or overcomplicate what's going on, or
        | we risk investing in and building straw houses when brick
        | was available.
       | 
       | Nothing wrong with that. Respect to success. But let's keep
       | fundamental value in mind, as it's the most important thing for
       | first generation technology companies.
        
         | zemvpferreira wrote:
         | I agree with you, but only to a point. In a healthy market
         | renters can make landlords compete for their business. Plenty
          | of healthy companies are built on others' infrastructure.
         | 
         | But your point remains: Where will the linchpin be? Will AI be
         | a commodity like the cloud, or a fundamental asset like search?
         | And how quickly will we find out?
        
         | w10-1 wrote:
          | > It's fundamental value
          | 
          | Yes!
          | 
          | > value that cannot be destroyed
          | 
          | Or taken
          | 
          | > Who owns the house is determined by who builds the
          | > foundation first, and that means those that control the
          | > ecosystems
          | 
          | Maybe. Most platform plays in tech fail or barely make
          | ends meet, while their renters make bank and impact.
          | 
          | > we risk investing and building straw houses when brick
          | > was available
          | 
          | There's no magic bias that solves the build-vs-buy question.
         | 
         | More importantly, the article is encouraging people to stick
         | with the structure of the problem and solution as you would
         | normally for building products, and use AI at the edges rather
         | than the engine.
         | 
          | IMHO that's much more controllable for developers than a
          | full-on dependence on black-box LLMs, and it's even better
          | for the AI
         | providers: they're much more likely to help with a narrowly-
         | defined solution.
         | 
         | Even openai is emphasizing incrementalism. It fits available
         | tech, and it counters bubble bias.
        
       | sevensor wrote:
       | > One awesome, massive free resource for generating data is
       | simply the Internet.
       | 
       | Isn't that building AI products _exactly_ the way everyone else
       | is doing it? There are things in the world the internet doesn't
       | know much about, like how to interpret sensor data. There are
       | lots of transducers in the world, and the internet knows jack
       | about most of them.
        
       | orliesaurus wrote:
        | I totally agree with this article. It's actually not that
        | complicated to build your own toolchain: you can use one of
        | the many open models, and if you're building it for profit,
        | make sure you read the ToS.
       | 
       | Build a moat y'all - or be prepared to potentially shut down!
        
       | jongjong wrote:
        | I built a no-code, serverless platform and intend to use AI
        | to compose the HTML components together. ChatGPT seems to be
        | good at this based on initial tests. It was able to build a
        | TODO app with authentication that syncs with the backend on
        | the first try, using only HTML tags. My platform allows
        | 'logic' to be fully specified declaratively in the HTML, so
        | it helps to reduce complexity and the margin for error. The
        | goal is to reduce app building down to its absolute bare
        | essentials, then let the AI work with that.
        
       | FailMore wrote:
       | Thank you, I thought that was great
        
       ___________________________________________________________________
       (page generated 2023-11-10 23:00 UTC)