[HN Gopher] Combine GPTs with private knowledge for actually use...
___________________________________________________________________
Combine GPTs with private knowledge for actually useful AIs
Author : Weves
Score : 71 points
Date : 2023-11-28 17:20 UTC (5 hours ago)
(HTM) web link (medium.com)
(TXT) w3m dump (medium.com)
| ned_at_codomain wrote:
| Congrats, guys! Love the demos.
| nottorp wrote:
| They could have asked their private GPT to write a text
| description :)
| henjodottech wrote:
| Imagine all the kinds of documentation that could be automated
| with this
| xcv123 wrote:
| But it only works if you have already written all of the
| documentation manually, and kept that up to date. It's
| basically a chat bot that knows all of your documentation.
| andrei_says_ wrote:
| How is it better than a good search? Better enough to warrant
| the risk of hallucinated answers presented as truth?
| fassssst wrote:
| Good search requires language models. GPT is a really good
| language model.
| xcv123 wrote:
| The scenario is: a customer opens a chat box on your website
| and asks the LLM some questions.
|
| You wouldn't expect your customers to search your internal
| Confluence pages. The LLM would be trained on all of your
| internal documentation which is not exposed publicly.
|
| Hallucination is mostly a problem of insufficient training
| in the current generation of LLMs.
|
| Edit: Maybe not "all" of your internal docs should be
| exposed via LLM. But the idea is this is an interactive
| support agent for customers.
| semi wrote:
| that sounds like a dangerous scenario. If your docs are
| intentionally internal and not public, why would you let
| a publicly accessible LLM answer questions with info
| from them?
|
| An LLM trained on public docs for the public could be a
| better interface for projects with lots of public
| documentation.
|
| An LLM trained on internal docs, only accessible to
| internal users, might be similarly useful.
|
| Even a private LLM on public docs for your support agents
| to use could increase their efficiency.
|
| But I would never expose to the public an LLM that has
| been trained on data I don't want public.
| xcv123 wrote:
| Yes, hence my quick edit of my comment above just before
| you replied
| stillwithit wrote:
| The memes of society are hallucinations. Worked ok so far.
|
| If you want to live by raw logic well, you're one of
| billions, idgaf what you want.
|
| ^^ there's social life under raw logic, sort of like
| regular life where I have no obligation to your existence,
| but everyone reminds you explicitly instead of
| hallucinating otherwise cordially
|
| Hallucinations may not be all that _bad_ unless they're
| hallucinations that lead to atrocity. Like the
| hallucination that we can keep burning resources to make
| AI bots.
| manicennui wrote:
| I would not allow any company's AI product near my company's
| private info.
| ForkMeOnTinder wrote:
| What if it was fully opensource and self-hostable?
| consp wrote:
| Wouldn't that simply be prohibitively expensive?
| blooalien wrote:
| One example that says "no" to your question. ->
| https://ollama.ai/ There are surely more. It can be used
| with something like "LangChain" or "LlamaIndex" to give the
| locally hosted LLM access to local data, and a bit of
| Python "glue code" to tie it all together.
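As a sketch of that "glue code" idea, skipping LangChain/LlamaIndex and using only the Python standard library: a naive retriever picks a local document, the prompt packs it in as context, and the result can be POSTed to a locally running Ollama server's /api/generate endpoint. The document names, question, and keyword-overlap retrieval scheme are all made up for illustration; a real setup would use embeddings.

```python
import json
import urllib.request

# Hypothetical local "knowledge base": document name -> contents.
DOCS = {
    "vacation_policy.txt": "Employees accrue 1.5 vacation days per month.",
    "vpn_setup.txt": "Install the VPN client, then log in with your SSO account.",
}

def retrieve(question, docs, top_k=1):
    """Naive keyword-overlap retrieval; a real setup would use embeddings."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question, docs):
    """Prepend retrieved context so the model answers from local data."""
    context = "\n".join(f"[{name}] {text}" for name, text in retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

def ask_ollama(prompt, model="llama2"):
    """POST to a local Ollama server's /api/generate endpoint (default port)."""
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

prompt = build_prompt("How many vacation days do I accrue?", DOCS)
# To actually query the model (requires `ollama serve` and a pulled model):
# print(ask_ollama(prompt))
```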
| consp wrote:
| That's why it was a question. All I hear about is data
| farms and massive datacenters, and you cannot run those
| at home or in a small business.
| Casteil wrote:
| For GPT4? Sure..
|
| For small LLMs like Llama2 7B/13B and its derivatives?
| They can be run quite gracefully on Apple Silicon Macs &
| similarly capable PC hardware.
| vinni2 wrote:
| Smaller or even larger Llama models are vastly inferior
| to GPTs.
| Casteil wrote:
| That's not a big problem with the training/fine-tuning
| you would do when creating specialized 'local' LLM
| agents.
| j4yav wrote:
| Anything currently private that these companies can't access and
| train their models on is the one valuable competitive advantage
| you have going for you; giving them access to it for a bit of
| convenience seems short-sighted.
| dreadlordbone wrote:
| Their product is open source and self hosted.
| yjk wrote:
| I assumed GP was referring to OpenAI, not danswer (given that
| they mentioned that those companies were training models).
| And you're still using OpenAI's API, so neither open source
| nor self-hosting affects data collection.
| vinni2 wrote:
| To use custom GPTs you need a ChatGPT Plus subscription. So your
| customers need to have a ChatGPT Plus subscription to get support?
| As far as I know there is no API to integrate custom GPTs with.
| awestroke wrote:
| This is a product that uses the OpenAI APIs. You configure it
| with your OpenAI API key.
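Concretely, for tools built on OpenAI's SDKs that usually means exporting the key as an environment variable before starting the app; whether this particular product reads the standard variable or its own config key is an assumption here, so check its docs.

```shell
# Standard variable read by OpenAI's SDKs; this product's own config
# key may differ (assumption -- check its documentation).
export OPENAI_API_KEY="sk-..."
```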
| vinni2 wrote:
| Ok, I confused it with OpenAI's GPTs.
| textcortex wrote:
| You can do this without paying OpenAI: textcortex.com
| antman wrote:
| It does not say what the difference is between it and a
| RAG. So how does it retrieve the "most useful" vs the
| "most relevant" documents?
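For what "most relevant" typically means in a RAG pipeline: documents are ranked by embedding similarity to the query. A toy sketch, with hand-written 3-dimensional vectors standing in for what a real embedding model would produce (the file names, vectors, and query are made up):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Toy 3-dimensional "embeddings": hand-written stand-ins for vectors a
# real embedding model would produce.
doc_vectors = {
    "pricing.md": [0.9, 0.1, 0.0],
    "install.md": [0.1, 0.8, 0.2],
    "faq.md":     [0.4, 0.4, 0.3],
}

def top_k(query_vec, docs, k=2):
    """'Most relevant' = highest cosine similarity to the query embedding."""
    ranked = sorted(docs, key=lambda name: cosine(query_vec, docs[name]), reverse=True)
    return ranked[:k]

query = [1.0, 0.0, 0.1]  # pretend embedding of "how much does it cost?"
print(top_k(query, doc_vectors))  # ['pricing.md', 'faq.md']
```

"Most useful" would be an extra ranking step on top of this (re-ranking, recency, user feedback); the source does not say which, hence the question.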
___________________________________________________________________
(page generated 2023-11-28 23:01 UTC)