[HN Gopher] Don't believe ChatGPT - we do not offer a "phone loo...
___________________________________________________________________
Don't believe ChatGPT - we do not offer a "phone lookup" service
Author : freyfogle
Score : 158 points
Date : 2023-02-23 21:24 UTC (1 hour ago)
(HTM) web link (blog.opencagedata.com)
(TXT) w3m dump (blog.opencagedata.com)
| hayksaakian wrote:
| This marks the new age of "AI Optimization" where companies will
| strive to get their business featured into answers in ChatGPT.
|
 | The OP's example is unwanted demand, but it clearly shows that
| ChatGPT can funnel potential customers towards a product or
| service.
| jefftk wrote:
| _This is not a service we provide. It is not a service we have
| ever provided, nor a service we have any plans to provide.
| Indeed, it is a not a service we are technically capable of
| providing._
|
| I'm curious: why not? It seems like a lot of people would be
| interested in this if you could figure out how to provide it.
| insane_dreamer wrote:
| > a lot of people would be interested in this
|
| you mean like scammers and stalkers? (ok, and probably Meta)
| iamflimflam1 wrote:
| The service is possible:
|
| If you are a mobile network operator.
|
| Or, you can convince people to install something on their phone
| that sends you their location along with their phone number.
| ceejayoz wrote:
 | How would _you_ go about reliably providing the location of
 | someone's _mobile_ phone without being their cell phone
 | carrier?
| jraph wrote:
| By partnering with said cell phone carriers.
|
| But I hope it would be illegal.
| simonw wrote:
| How would this work?
|
| If a phone number is for a mobile phone then looking up the
| location doesn't make sense at all: mobile phones are mobile.
|
| I guess you could try and crawl an index of business phone
| numbers and associate those with the listed address for
| businesses, but that's a completely different business from
| running a geocoder.
|
 | You could provide a bit of geographical information based on
 | the area code (the first three digits) of a US phone number.
 | I imagine that's not what users are actually looking for
 | though.
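 |
 | To make it concrete, here is a minimal sketch of how little an
 | area code "lookup" would actually tell you (the tiny table
 | below is illustrative, not a real NANP dataset):
 |
 |       # Map a US/Canada number's area code to a coarse region.
 |       # This is roughly all the "location" a number encodes.
 |       AREA_CODES = {
 |           "202": "Washington, DC",
 |           "212": "New York, NY",
 |           "312": "Chicago, IL",
 |           "415": "San Francisco, CA",
 |           "617": "Boston, MA",
 |       }
 |
 |       def rough_region(number: str) -> str:
 |           digits = "".join(c for c in number if c.isdigit())
 |           if digits.startswith("1"):  # drop the country code
 |               digits = digits[1:]
 |           return AREA_CODES.get(digits[:3], "unknown")
 |
 |       print(rough_region("+1 (415) 555-0100"))
 |       # -> "San Francisco, CA" (a region, not a person's location)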
| cactusplant7374 wrote:
| You mean if they could figure out how to illegally track
| millions of people?
| mort96 wrote:
 | That's quite the predicament. I hope OpenAI will listen to this
 | and to anyone else in a similar situation. I'm reminded of the
 | cases of ChatGPT recommending random people's personal phone
| numbers for various services.
|
| But yeah, don't trust ChatGPT for anything. Just earlier today I
| tried my darnedest to convince it that 2 pounds of feathers
| doesn't weigh the same as 1 pound of bricks, and it just would
| not listen, presumably because it just regurgitated stuff related
| to the common "1 pound of feathers and 1 pound of bricks"
| question.
|
| By the way, the last paragraph has some typos:
|
| > _I wrote this post to have a place to send our new ChatGPT
| users when they ask why it isn't work, but hopefully also it
| serves as a warning to othrs - you absolutely can not trust the
| output of ChatGPT to be truthful,_
| insane_dreamer wrote:
| > don't trust ChatGPT for anything
|
 | Agreed. But then it raises the question: what purpose does
 | ChatGPT serve (other than entertainment or cheating on your
 | HS/college exam)? If you have to verify its
| information by other means, then you're not really saving much
| effort.
| shagie wrote:
| It works really well for translating one "language" to
| another "language".
|
 | Give it some structured data and ask it to summarize it (e.g.
 | hourly weather data; it gives a better summary than a
 | template-based one).
|
| Give it HN titles and the categories and it does a passable
| zero shot tagging of them (
| https://news.ycombinator.com/item?id=34156626 ).
|
 | I'm toying around with making a "guided bedtime story
 | generator". A friend of mine uses it to create "day in the
 | life of a dinosaur" stories for a child (a different story
 | each day!)
 |
 | The key is to play to its strengths rather than testing its
 | bounds and then complaining when they inevitably break in
 | weird ways.
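 |
 | As a very rough sketch of the zero-shot tagging idea (the
 | model name, tag list, and prompt here are just examples, not
 | anything official):
 |
 |       import openai  # pip install openai; set OPENAI_API_KEY
 |
 |       TAGS = ["AI", "security", "hardware", "business", "other"]
 |
 |       def tag_title(title: str) -> str:
 |           prompt = ("Pick the single best category for this "
 |                     f"Hacker News title from {TAGS}.\n"
 |                     f"Title: {title}\nCategory:")
 |           resp = openai.Completion.create(
 |               model="text-davinci-003", prompt=prompt,
 |               max_tokens=5, temperature=0)
 |           return resp["choices"][0]["text"].strip()
 |
 |       print(tag_title("Don't believe ChatGPT - we do not offer "
 |                       "a phone lookup service"))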
| worldsayshi wrote:
| This was my initial thought as well. But I've noticed that my
| brain has started to find tasks that it would be quite useful
 | for. Too bad it almost always seems to be at capacity when I
| think of those cases. Guess I will have to pay up to figure
| out if it's actually worth it.
| geoelectric wrote:
| I thought for sure that must be glitching, so just tried on my
| ChatGPT Plus account with a fresh thread, but nope:
|
| > _does 2 pounds of feathers weigh more than 1 pound of
| bricks?_
|
| > _No, two pounds of feathers weigh the same as one pound of
| bricks. The weight of an object is determined by its mass, and
| one pound of any object weighs the same as one pound of any
| other object. However, feathers are less dense than bricks, so
| two pounds of feathers will take up more space or have a larger
| volume than one pound of bricks._
|
| Huh. It's interesting that some of these "wrong paths" are
| quite so repeatable. Wonder if that means there's also a way to
| band-aid them when found, the way a human might be corrected
| for a misconception.
| jxy wrote:
| This is hilarious. ChatGPT even gave me a more bizarre
| example.
|
| > does 2 pounds of bricks weigh more than 1 pound of bricks?
|
| > No, 2 pounds of bricks do not weigh less or more than 1
| pound of bricks. 2 pounds of bricks and 1 pound of bricks
| refer to the same unit of weight, which is a pound.
| Therefore, they weigh the same, which is one pound. The
| difference between them is only the quantity, not the weight.
|
| > It's similar to saying that two one-dollar bills do not
| have more value than one five-dollar bill. Even though you
| have two bills in one case and one bill in the other case,
| the total value is the same.
| mort96 wrote:
| It even managed to give me this gem:
|
| > 2 pounds of feathers do not weigh less than 1 pound of
| bricks. 1 pound of bricks weighs more than 2 pounds of
| feathers, but they can have the same mass.
| b800h wrote:
| Is this not defamation, at least in some jurisdictions?
| 99_00 wrote:
| I remember a time when "I saw it on the internet" was a punchline
| for a joke about someone who's gullible or misinformed.
| ninjakeyboard wrote:
| ChatGPT gets the rules to the pokemon trading card game wrong. It
| will tell you you can use 4 energy a turn. Convincingly. Not sure
| how it hallucinates this. The rule is 1 per turn.
| DoktorDelta wrote:
| Could it have been referencing Blastoise's Deluge ability?
 | Jacob Van Wagner used it at the 2015 championship to attach 4
 | water energy in one turn.
| bigmattystyles wrote:
 | Isn't it just that garbage went in, got weighted as a more
 | reliable source than it should have been, and thus garbage came
 | out? Good old GIGO... It's just that here ChatGPT, as much as I
 | love it, is amazing at imparting the impression that its shit
 | don't stink.
| codetrotter wrote:
| A few days ago I asked ChatGPT if "pannekake" and "kannepake"
| are anagrams of each other.
|
| It correctly stated that they are, but when it went on to prove
| that this was the case, it generated a table of the frequencies
| of the individual letters in these two words, and the table
 | looked like this:
 |
 |       Letter | Frequency in | Frequency in
 |              | "pannekake"  | "kannepake"
 |       -------+--------------+-------------
 |       a      |      2       |      2
 |       e      |      2       |      2
 |       k      |      2       |      2
 |       n      |      2       |      2
 |       p      |      2       |      2
|
| This reminded me that yes indeed, AI just isn't quite there
| yet. It got it right, but then it didn't. It hallucinated the
| frequency count of the letter "p", which occurs only once, not
| twice in each of those words.
| int_19h wrote:
| Anything that has to do with individual words doesn't work
 | well, but as I understand it, this is an artifact of the
| tokenization process. E.g. pannekake is internally 4 tokens:
| pan-ne-k-ake. And I don't think that knowing which tokens
| correspond to which letter sequences is a part of the
| training data, so it has to infer that.
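 |
 | You can see the effect with OpenAI's open source tiktoken
 | tokenizer (a sketch; the exact split depends on which encoding
 | the model uses):
 |
 |       import tiktoken  # pip install tiktoken
 |
 |       enc = tiktoken.get_encoding("cl100k_base")
 |       tokens = enc.encode("pannekake")
 |       print([enc.decode([t]) for t in tokens])
 |       # Prints the sub-word pieces the model actually "sees"
 |       # (e.g. something like ['pan', 'ne', 'k', 'ake']); it never
 |       # sees individual letters, so letter counts are inferred.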
| kelseyfrog wrote:
| > All suggestions are welcome.
|
| Monetize it!
|
| Evil answer: Partner with an advertiser and sell
| https://api.opencagedata.com/geocode/v1/json as an ad space. This
| may be the first opportunity for an application/json-encoded
| advertisement.
|
| Nice answer: Partner with an actual phone lookup platform and
| respond with a 301 Moved Permanently at the endpoint.
| insane_dreamer wrote:
| > actual phone lookup platform
|
| uh, you mean stalker / scammer platform? This would be a major
| privacy violation.
| rosywoozlechan wrote:
 | There's no "actual phone lookup platform". You can't get a
 | person's location by knowing their phone number; that would be
 | a huge privacy violation. You can get the location of your own
 | phone via iCloud or Google's system for Android. You could also
 | install an app on your phone to track your phone's location.
 | You cannot find people based on knowing their phone number;
 | that would be a serious safety issue for, you know, people
 | trying not to, for example, get murdered by their ex-boyfriends.
| throwaway29495 wrote:
| What about phone numbers corresponding to a specific
| location?
| Hello71 wrote:
| it's been reported numerous times that you can buy real-time
| cell phone location data:
| https://news.ycombinator.com/item?id=17081684,
| https://news.ycombinator.com/item?id=20506624,
| https://news.ycombinator.com/item?id=32143256. you might need
| a little more info than just a phone number, but (allegedly)
| not that much more.
| ninjakeyboard wrote:
| It hallucinates that you can use 4 energy per turn in Pokemon TCG
| and confidently tells you so. No idea where that would come from.
| aaron695 wrote:
| [dead]
| ntonozzi wrote:
| Including the word 'phone' six times in a popular blog post is
| not going to help their predicament.
| elicash wrote:
| Wouldn't they want this post to be at the top when people
| search 'phone' and 'open cage data'? Seems like SEO towards
| correcting this is only helpful. And maybe when GPT updates
| data, this post gets pulled in, too. The more popular, the
| better, I'd guess.
| KomoD wrote:
| Not gonna hurt either, ChatGPT data is not up to date
| freyfogle wrote:
| ChatGPT very convincingly recommends us for a service we don't
| provide.
|
| Dozens of people are signing up to our site every day, then
| getting frustrated when "it doesn't work".
|
| Please do NOT trust the nonsense ChatGPT spits out.
| seedless-sensat wrote:
| A new market opportunity for your company?
| theWreckluse wrote:
| > It is not a service we have ever provided, nor a service we
| have any plans to provide. Indeed, it is a not a service we
| are technically capable of providing.
| hackernewds wrote:
| this seems like a game-changing opportunity actually. I'd be
| down to buy the domain
| anaganisk wrote:
 | So, based on the BS these LLMs spout, companies should start
 | pivoting? Should the govts start writing laws?
| input_sh wrote:
| > This is not a service we provide. It is not a service we
| have ever provided, nor a service we have any plans to
| provide. Indeed, it is a not a service we are technically
| capable of providing.
| fire wrote:
| have you been able to contact OpenAI about this? It sounds like
| they're actively adding load to your CS ops with this
| hackernewds wrote:
 | What are they going to do? Add custom logic? Where does it
 | stop?
 |
 | The malady is that LLMs cannot accommodate ad-hoc operational
 | fixes for these kinds of errors at scale.
| assdontgot wrote:
| [dead]
| coldtea wrote:
 | ChatGPT doesn't "recommend" anything. It just recombines text
| based on statistical inferences that appear like a
| recommendation.
|
| It could just as well state that humans have 3 legs depending on
| its training set and/or time of day. In fact it has said similar
| BS.
| mort96 wrote:
| What would you call it instead?
| qwertox wrote:
| "Makes stuff up." And it's us, the users, who have to realize
| this. I mean, I wouldn't blame OpenAI for this, at least not
| at this point, and the company will have to live with it,
| look how it can turn it into something useful instead, since
| there's no one to complain to.
| vlunkr wrote:
| > I wouldn't blame OpenAI for this
|
| They're offering the tool, it's at least partially their
| responsibility to tell people how it should and should not
| be used.
| rodgerd wrote:
| Why wouldn't you blame OpenAI for creating a harassment
| campaign against the business based on nonsense?
| [deleted]
| coldtea wrote:
| A glorified Markov chain generator.
|
| Now, humans could very well also be statistical inference
| machines. But they have way more tricks up their semantic-
| level understanding sleeves than ChatGPT circa 2023.
| circuit10 wrote:
| > ChatGPT doesn't "recommended" anything. It just recombines
| text based on statistical inferences that appear like a
| recommendation.
|
| I think that's a bit pedantic and not very helpful... I'm not
| typing this comment, my brain is just sending signals to my
 | hands which causes them to input data into a device that
| displays pixels that look like a comment
| coldtea wrote:
| > _I think that's a bit pedantic and not very helpful... I'm
| not typing this comment, my brain is just sending signals to
 | my hands which causes them to input data into a device that
| displays pixels that look like a comment_
|
| Well, if you're just fed a corpus, with no real-time first-
 | person stream of experience that you control, no feedback
 | mechanism, no higher-level facilities, and you're not a
 | member of a species with a proven track record of state-of-
 | the-art (in nature) semantic understanding, then maybe...
| crazygringo wrote:
| I'm curious -- does anyone know of ML directions that could add
| any kind of factual confidence level to ChatGPT and similar?
|
| We all know now that ChatGPT is just autocomplete on steroids. It
| produces plausibly convincing _patterns_ of speech.
|
| But from the way it's built and trained, it's not like there's
| even any kind of factual confidence level you could threshold, or
| anything. The concept of factuality doesn't exist in the model at
| all.
|
| So, is any progress being made towards internet-scale ML "fact
| engines" that also have the flexibility and linguistic
| expressiveness of ChatGPT? Or are these just two totally
| different paths that nobody knows how to marry?
|
| Because I know there's plenty of work done with knowledge graphs
| et al., but those are very brittle things that generally need
| plenty of human curation and verification, and can't provide any
| of the (good) "fuzzy thinking" that ChatGPT can. They can't
| summarize essays or write poems.
| csours wrote:
| I'm curious about falsifiable models.
| alfalfasprout wrote:
| By definition, an LLM doesn't have a semantic world model or
| ontology. Even the most "dumb" (and I use that in quotes
| because they really aren't) animal is able to reason about
| uncertain concepts and understands risk and uncertainty.
|
 | Yann LeCun has posted a lot recently about this but basically
| LLMs are a "useful offramp on the road to AGI".
| BoorishBears wrote:
| There's research being done on this:
| https://arxiv.org/abs/2302.04761
|
 | At its core, using an LM _alone_ to solve factual problems
 | seems silly: it's not unlike asking Dall-E to draw
 | DOT-compliant road signs.
 |
 | I've gone on at length about how unfortunate it would be if LMs
 | start to get a bad rap because they're being shoehorned into
 | being "Ask Jeeves 2.0" when they could be so much more.
| irrational wrote:
 | Remember the guy a few weeks ago who was being gaslighted by
 | ChatGPT into believing it was still the year 2022? Not only
 | does it give out potentially false info, but it will double
 | down that it is right and you are wrong. Though, to be honest,
 | that sounds like a lot of real people. The difference is,
 | people are smart enough not to double down by insisting it is
 | a different year and that your phone is probably reporting the
 | year wrong.
| amscanne wrote:
| That was the Bing preview, which is supposed to be an actual
| information product.
| snowstormsun wrote:
| I think "Explainable AI" is a related research direction, but
| perhaps not popular for language models.
| behnamoh wrote:
| Impossible to explain the inner workings of GPT-3 without
| having access to the model and its weights. Does anyone know
| if any methods exist for this?
| IncRnd wrote:
| I asked ChatGPT for some in-depth source code that
| realistically mimics chatgpt. ChatGPT replied with various
| answers in python. I'm not sure any of them are correct,
| though.
| shawntan wrote:
| I think part of the issue is what level of explanation is
| satisfactory. We can explain how every linear transformation
| computes its output, but the sum of it is in many ways more
| than its parts.
|
| Then there are efforts that look like this one:
| https://news.ycombinator.com/item?id=34821414 They go probing
| for specific capabilities of Transformers to figure out which
| cell fires under some specific stimulus. But think a little
| bit more about what people might want from explainability and
| you quickly find that something like this is insufficient.
|
| There may be a tradeoff we're looking at where explainability
| (for some definition of it) will have to be exchanged for
| performance (under some set of tasks). You can build more
| interpretable models these days, but you usually pay for it
| in terms of how well you do on benchmarks.
| mochomocha wrote:
| > But from the way it's built and trained, it's not like
| there's even any kind of factual confidence level you could
| threshold, or anything. The concept of factuality doesn't exist
| in the model at all.
|
| I'm not super familiar with ChatGPT internals, but there are
| plenty of ways to tack on uncertainty estimates to predictions
| of typical "large scale ML models" without touching Bayesian
 | stuff (which only works for small-scale academic problems). You
 | can do simple parametric posterior estimation, or, if all you
 | have is infinite compute and don't even want to bother with
 | anything "mathy", bootstrapping is the "scalable / easy"
| solution.
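 |
 | For intuition, a toy sketch of the bootstrap idea on a tiny
 | synthetic regression problem (disagreement across resampled
 | models stands in for uncertainty):
 |
 |       import numpy as np
 |
 |       rng = np.random.default_rng(0)
 |       X = rng.normal(size=(200, 3))
 |       y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.3, 200)
 |
 |       def fit(X, y):  # ordinary least squares
 |           return np.linalg.lstsq(X, y, rcond=None)[0]
 |
 |       # Train many models on bootstrap resamples of the data.
 |       x_new, preds = np.array([0.2, 1.0, -0.5]), []
 |       for _ in range(100):
 |           idx = rng.integers(0, len(X), len(X))
 |           preds.append(x_new @ fit(X[idx], y[idx]))
 |
 |       # Spread across the ensemble ~ confidence in the prediction.
 |       print(np.mean(preds), np.std(preds))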
| pavon wrote:
| Sure, but would that uncertainty estimate measure the
 | accuracy of the data or the accuracy of it being a
 | reasonable-sounding sentence?
| ericlewis wrote:
 | It's super duper easy, prob not perfect, and I don't have any
 | sort of proper "test":
 |
 | 1. I ask the model first if it seems like a question that
 | benefits from an external answer.
 | 2. I talk to Wolfram Alpha with some abstraction of the
 | question.
 | 3. I wait for a response.
 | 4. I "incept" it into the final response, essentially a prompt
 | that mixes in a context of sorts that contains the factual
 | information.
 |
 | You could cross-check this stuff too with yet more models.
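 |
 | Roughly, steps 2-4 might look like this sketch (the Wolfram
 | Alpha endpoint, the model name, and the prompt shape are
 | placeholders, and step 1's "should I even call out?" check is
 | omitted):
 |
 |       import requests, openai  # supply your own API keys
 |
 |       def wolfram_fact(question, appid):
 |           # Steps 2-3: ask Wolfram Alpha for a short factual answer
 |           r = requests.get("https://api.wolframalpha.com/v1/result",
 |                            params={"appid": appid, "i": question})
 |           return r.text
 |
 |       def answer_with_fact(question, appid):
 |           fact = wolfram_fact(question, appid)
 |           # Step 4: "incept" the fact into the final prompt
 |           prompt = (f"Fact: {fact}\nUsing only the fact above, "
 |                     f"answer this question: {question}\nAnswer:")
 |           resp = openai.Completion.create(
 |               model="text-davinci-003", prompt=prompt, max_tokens=100)
 |           return resp["choices"][0]["text"].strip()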
| simonw wrote:
| That's basically what the new Bing is. It's a large language
| model that can run searches, and then use what comes back from
| those searches to generate answers to questions.
|
| Whether or not the information that comes back from those
| searches is reliable is a whole other question.
|
| I would love to learn what the latest research is into "factual
| correctness" detection. Presumably there are teams out there
| trying to solve that one?
| behnamoh wrote:
| AFAIK, Bing AI is not itself an LLM, but rather a wrapper
| around ChatGPT, which itself is based on GPT-3, which is
| based on the GPT architecture, which is (roughly speaking)
| half of a transformer architecture, which is based on
| encoder/decoder neural nets which are based on ...
| nl wrote:
 | It's a newer, different GPT model than ChatGPT.
| simonw wrote:
| To quote the Bing announcement post:
| https://blogs.microsoft.com/blog/2023/02/07/reinventing-
| sear...
|
| > Next-generation OpenAI model. We're excited to announce
| the new Bing is running on a new, next-generation OpenAI
| large language model that is more powerful than ChatGPT
| and customized specifically for search. It takes key
| learnings and advancements from ChatGPT and GPT-3.5 - and
| it is even faster, more accurate and more capable.
| nl wrote:
| > does anyone know of ML directions that could add any kind of
| factual confidence level to ChatGPT and similar?
|
| Yes. It's a very active area of research. For example:
|
| Discovering Latent Knowledge in Language Models Without
| Supervision (https://arxiv.org/abs/2212.03827) shows an
| unsupervised approach for probing a LLM to discover things it
| thinks are facts
|
| Locating and Editing Factual Associations in GPT
| (https://arxiv.org/pdf/2202.05262.pdf) shows an approach to
| editing a LLM to edit facts.
|
| Language Models as Knowledge Bases?
| (https://aclanthology.org/D19-1250.pdf) is some slightly older
| work exploring how well LLMs store factual information itself.
| singlow wrote:
 | It's not like ChatGPT made this up. There were pre-existing
 | YouTube tutorials and Python scripts available that used
 | OpenCage and purported to do this. OpenCage even blogged about
 | this problem almost a year ago[1].
 |
 | Honestly it looks more like OpenCage is trying to rehash the
 | same issue for more clicks by spinning it off of the hugely
 | popular ChatGPT keywords. Wouldn't be too surprised if they
 | created the original Python utilities themselves just to get
 | some publicity by denouncing them.
|
| 1. https://blog.opencagedata.com/post/we-can-not-convert-a-
| phon...
| freyfogle wrote:
| Hi, Ed from OpenCage here, author of the post.
|
| We do have python tutorials and SDKs showing how to use our
| service for ... geocoding, the actual service we provide.
|
 | I wrote the post mainly to have a page I can point people to
 | when they ask why "it isn't working". Rather than take the user
 | through a tour of past posts I need something simple they will
 | hopefully read. But fair point, I can add a link to last year's
 | post about the erroneous YouTube tutorials as well.
 |
 | What I think you can't appreciate is the difference in scale. A
 | faulty YouTube video drives a few users. In the last few weeks
 | ChatGPT has been sending us several orders of magnitude more
 | frustrated sign-ups.
| singlow wrote:
| I get frustrated at the number of things ChatGPT gets blamed
| for that aren't its fault. It is completely understandable
| that if there are repos out on GitHub like the one for
 | Phomber[1], then ChatGPT would find that code and have no
| idea that it was phoney. Suggesting that ChatGPT just made
| this up out of thin air when you know it didn't is not very
| responsible.
|
| 1. https://github.com/s41r4j/phomber
| jraph wrote:
| You are blaming the victim. OpenAI is to be blamed.
|
| They know what they are doing. They provide something that
| sounds over-confident for anything it says, knowing full
| well that it can't actually know if what it generated is
| accurate because it is designed to generate plausible
| sentences using statistics and probabilities, not verified
| facts from a database. On top of it, they trained it on an
| uncontrolled set of texts (though IIUC even a set of
| verified text would not be enough, nothing guarantees that
| a LM would produce correct answers). And they provide it to
| the general population, which doesn't always understand
| very well how it works and, above all, its limitations.
| _Including developers_. Few people actually understand this
| technology, including myself.
|
| Inevitably, it was going to end up causing issues.
|
| This post factually presents a problematic situation for
| the authors of this post. How ChatGPT works or how it can
| end up producing wrong results is irrelevant to the post's
| authors problem. It just does, and it causes troubles
| because of the way OpenAI decided to handle things.
|
| And it's not "fair enough, because this false stuff can be
| found on the internet".
| mtmail wrote:
| Phomber is not the best example. Ed contacted the developer
| of that tool over a year ago about the issue and to remove
| mentions of OpenCage and as far as I see the author removed
| it https://github.com/s41r4j/phomber/issues/4
| gus_massa wrote:
| That explains why ChatGPT is confused.
|
 | It may be an old problem, but I guess users are more used to a
 | random YouTube video having wrong information. But "the
 | computer is always right", so ChatGPT is always right, so users
 | may be more annoyed to discover that the recommendation is
 | wrong and blame them instead of ChatGPT.
| ceejayoz wrote:
| That seems like a pretty nasty assertion to bandy around with
| zero evidence.
| singlow wrote:
| I cannot think of any other reason why the new blog post
| wouldn't have mentioned the obvious connection to the earlier
| issues that they had. They want to make it seem like ChatGPT
| invented this use case but they know that the sample code
 | that ChatGPT learned from was mentioned in their previous
| blog post.
| ceejayoz wrote:
| There's a vast chasm between "whoever wrote this article
| didn't think to link to a similar issue a year ago" and
| "the first incident was a malicious hoax".
| singlow wrote:
 | The author of both posts is purportedly the same person.
 | But he probably didn't write either of them. It was
 | probably his social media personal assistant.
| freyfogle wrote:
| Just re-checked the org chart. There's no social media
| personal assistant.
| ceejayoz wrote:
| That's another apparently evidence-free accusation.
|
| Is there some undisclosed bad blood here?
| singlow wrote:
| [flagged]
| mtmail wrote:
 | Ed is my co-founder, he writes all our blog posts because
 | I suck at writing. He also does more than half of our
| podcast episodes https://thegeomob.com/podcast (the guy
| on the left). Last I saw him (yesterday) he was real.
| luckylion wrote:
| I don't understand the original comment to suggest that.
| Rather: it's a known issue. ChatGPT does nothing new, and
| certainly doesn't do it by itself -- it just rehashes
| what others have already written. Like Google might send
| you visitors for something that's not even present on
| your website because others link to you mentioning it.
|
| What the comment suggested was that they're now bringing
| this up again to get attention (and links) since it's
| combined with ChatGPT. That's not "malicious", but it's
| also not exactly "wow, we just realized this happens".
| seszett wrote:
| What the comment suggested is that the company
| _deliberately created tools using their own API in a
| wrong way in order to write a blog post about it_.
|
| If that's not an accusation of being malicious I don't
| know what could be.
| vlunkr wrote:
| There's also no clear motive. They want to attract users to a
 | fake feature on their free tier?
| VectorLock wrote:
| This is the biggest problem I encounter when trying to use
| ChatGPT on a daily basis for computer programming tasks. It
| "hallucinates" plausible looking code that never existed or would
 | never work, especially confusing what's in one module or API for
| something in another. This is where ChatGPT breaks when pushed a
| bit further than "make customized StackOverflow snippets."
|
 | For example, I asked ChatGPT to show me how to use an AWS SDK
 | "waiter" to wait on a notification on an SNS topic. It showed me
 | code that looked right, but it confused functions in the SQS
 | library with ones that would do the thing with SNS (but SNS
 | doesn't support what I wanted).
| shagie wrote:
| Have you tried using the code-davinci-002 model instead of
| ChatGPT?
|
| For example - https://platform.openai.com/playground/p/default-
| translate-c...
|
| The codex models are intended for doing work with code rather
| than language and may give better results in that context.
| https://help.openai.com/en/articles/6195637-getting-started-...
| IncRnd wrote:
| It does indeed sound problematic to use ChatGPT daily for
| computer programming tasks. ChatGPT is not a snippets manager
| but text completion.
|
| It may be more helpful to look for better answers on Amazon's
| help pages for SNS and AWS SDK.
| wvenable wrote:
| The problem is compounded by the fact that sometimes it
| produces really good results. One task, good results. Next
| task, totally hallucinated result.
| gumballindie wrote:
 | ChatGPT is hilariously buggy - I asked "it" how to use an open
 | source library I made. The output was wrong, ranging from a
 | broken GitHub URL to outright broken or nonexistent code. I
 | suspect it may even have used private code from other libs -
 | couldn't find some of the output it generated anywhere public.
| IshKebab wrote:
| Well for a start you could make it more obvious what your service
 | _does_ do. I don't know what "geocoding" is. Converting things
| to/from "text" is meaningless. You have to get all the way down
| ... way down, past authentication to the details of the `q` query
| parameter before it actually tells you.
|
| At the top you should have a diagram like this:
|
| Lat, lon <- opencage -> address
|
| With a few examples underneath.
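 |
 | For example, something as small as this sketch would do (the
 | endpoint is the one from your docs; the sample query and the
 | way the response is read here are guesses):
 |
 |       import requests
 |
 |       API = "https://api.opencagedata.com/geocode/v1/json"
 |
 |       def geocode(query, key):
 |           # Forward ("Brandenburg Gate, Berlin" -> coordinates)
 |           # or reverse ("52.5162, 13.3777" -> nearest address);
 |           # same endpoint and `q` parameter either way.
 |           r = requests.get(API, params={"q": query, "key": key})
 |           return r.json()["results"]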
| mtmail wrote:
| "Past authentication", so you're looking at the
| https://opencagedata.com/api page. Most people go to the
| homepage first. Great feedback, we should make it clearer on
| that page and add examples earlier. Thanks!
| yieldcrv wrote:
| lol it recommended their api and gave python code for using it
|
 | but the real API doesn't give the results that the user asked
 | ChatGPT for
|
| that is amusingly alarming
| CabSauce wrote:
| Not quite as alarming as these people most likely trying to
| stalk someone without their permission.
| hk__2 wrote:
| > Not quite as alarming as these people most likely trying to
| stalk someone without their permission.
|
 | It's so common to want to know where an incoming call comes
 | from that it's built into iOS. It has nothing to do
 | with stalking, just with guessing whether who's calling you is
 | a scammer or a company trying to sell you stuff.
| cjbgkagh wrote:
 | It's pretty simple to look up the location of a phone
 | number's issuance; you can get a map or table that does
 | this. I guess these people want the current physical
 | location of the mobile phone. Either way, these are not
 | customers you'd want.
 |
 | Edit: reading the blog post from the same company linked
 | above, it is indeed people using an external API for what
 | is an incredibly simple country-code lookup. It is a shame
 | that programming has come to this and that ChatGPT
 | continues to propagate it. One way they could solve the
 | problem would be to provide sample code that does the same
 | thing using a built-in table, without using their API
 | service. Sure, it's work, but not much, and it will get
 | people off your back ASAP.
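 |
 | For example, a minimal sketch using the open source
 | phonenumbers port of libphonenumber (the number below is
 | made up):
 |
 |       import phonenumbers  # pip install phonenumbers
 |       from phonenumbers import geocoder
 |
 |       num = phonenumbers.parse("+14155550100", None)
 |       print(phonenumbers.region_code_for_number(num))   # "US"
 |       print(geocoder.description_for_number(num, "en"))
 |       # Prints a coarse region for the prefix (country or city
 |       # at best) - the issuance location, not where the phone
 |       # is right now.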
| simonw wrote:
| I'm willing to bet people asking ChatGPT to help them
| resolve a phone number to a location are much more likely
| to be stalkers than people who are trying to identify spam
| calls.
| goguy wrote:
| Our jobs are safe! For now...
| int_19h wrote:
| The obvious follow-up is to create the non-existing API
| endpoint but hook it into GPT so that it can hallucinate a
| convincing address based on the phone number. Take GPT API key
| as input so that the caller is paying for this.
|
| Bonus points for using ChatGPT to implement this end-to-end.
| CactusOnFire wrote:
| Because ChatGPT is so new, we are in this weird period where
 | people haven't learned that it is just as incorrect as the rest of
| us.
|
| I am hoping that in a year from now people will be more skeptical
| of what they hear from conversational AI. But perhaps that is
| optimistic of me.
| ravenstine wrote:
| AI will never be totally correct. If it ever is, then we've
| found God.
| austinshea wrote:
| It's not incorrect like the rest of us. It's incorrect in a
| very different way.
|
| Providing detailed information on the usage of a service that
| has never existed is a brand new kind of incorrect that is
| carelessly causing the rest of us grief.
| Xylakant wrote:
| > Because ChatGPT is so new, we are in this weird period where
 | people haven't learned that it is just as incorrect as the rest of
| us.
|
| It's worse than that. It's wrong, you cannot correct it and it
| makes up supporting citations on the fly. Very few humans
| behave like that.
| renewiltord wrote:
| I think very many humans behave like that, actually. A recent
| example is people claiming that Flint, MI still has leaded
| water.
|
| But in the past, HN users "corroborated" that Apple is spying
 | on them etc. Fabrication is alive and well among us.
| nl wrote:
| > A recent example is people claiming that Flint, MI still
| has leaded water.
|
| Doesn't it?
|
| According to [1]:
|
| _The residential lead service line replacement was
| initially set to be finished in 2019, according to a
| settlement agreement with the city. That deadline was
| eventually pushed back to the fall of 2022 and has most
| recently been set for completion in August 2023, according
| to city officials._
|
| and
|
| _" More than 95% of lead pipes in Flint have been
| replaced, and we will continue the work until the job is
| done," Flint Mayor Sheldon Neeley said in a recent
| statement on the water filters._
|
| It sounds to me a lot like Flint, MI still has leaded
| water?
|
| [1] https://abcnews.go.com/US/flint-residents-urged-filter-
| water...
| TehCorwiz wrote:
| I can think of more than a few that regularly appear on TV.
| Xylakant wrote:
| So can I, but luckily TV is not representative of the world
| at large.
| annoyingnoob wrote:
| > just as incorrect as the rest of us
|
| Even worse because it has no clue when it might be completely
| wrong and yet it will be confident in its answer.
| DoktorDelta wrote:
| That might be the most human thing it's ever done
| mdp2021 wrote:
| Dunning-Kruger, provisionality and delirating are different
| things.
| none_to_remain wrote:
| Humans are capable of not bullshitting
|
| ChatGPT can only bullshit
| avgDev wrote:
 | It is quite interesting really. I took AI in school but I have
 | not dived deep at all into ChatGPT, but isn't ChatGPT just
 | learning from the internet?
 |
 | Could someone push a "wrong" opinion heavily online to sway the
 | opinion of the AI?
|
| I can only imagine a bot that learned from 4chan.
___________________________________________________________________
(page generated 2023-02-23 23:00 UTC)