[HN Gopher] Man scammed after AI told him fake Facebook customer...
___________________________________________________________________
Man scammed after AI told him fake Facebook customer support number
was real
Author : deviantintegral
Score : 215 points
Date : 2024-05-31 16:07 UTC (6 hours ago)
(HTM) web link (www.cbc.ca)
(TXT) w3m dump (www.cbc.ca)
| chrisjj wrote:
| This does not add up. How did the Meta CS scammer get into
| the PayPal account?
| hnuser0000 wrote:
| >The woman [from fake tech support] said she would clear the
| hackers out, but he had to give her access to his phone through
| an app she had him download.
| mvdtnz wrote:
| That doesn't answer the question. No app on any Android
| phone or iPhone can reach out and take your PayPal
| credentials.
| These scam victims never own up to the most important fact -
| that they themselves give away the keys to the castle.
| There's always some hand-wavy techy explanation.
| asadotzler wrote:
| Easy enough to drop a remote access app or a fake PayPal
| app and let the idiot user put their credentials into it.
| pavel_lishin wrote:
| > _These scam victims never own up to the most important
| fact - that they themselves give away the keys to the
| castle._
|
| I bet they were wearing a real short skirt, too.
| mvdtnz wrote:
| What?
| krisoft wrote:
| I think pavel_lishin's comment is alluding to the idea
| that what we are reading in mvdtnz's comment is victim
| blaming. It is a bit coded. Being overly concerned with
| what a female victim of sexual assault was wearing is a
| textbook case of victim blaming. I believe the sentence
| "I bet they were wearing a real short skirt, too." evokes
| that in order to say that the sentence quoted from
| mvdtnz's comment blames the victim of the scam.
| pavel_lishin wrote:
| Got it in one.
| chrisjj wrote:
| Sure, but nothing in the story says he did that.
| sdflhasjd wrote:
| Likely by convincing him to install some remote access tool.
| The scam is just your regular tech support scam.
| chrisjj wrote:
| Then you'd expect the story to say he did. It doesn't.
| simonw wrote:
| This one is pretty bad. This guy found a fake Facebook customer
| support phone number in a Google search, then asked the Meta AI
| chat in Facebook Messenger if the number he found was a real
| Facebook help line... and Meta AI said that it was. There's a
| screenshot of the chat in the article.
| CoastalCoder wrote:
| This reminds me of that recent issue with a Canadian airline,
| where (IIRC) a court ruled that their chatbot made a wrong, but
| binding, commitment to a customer.
|
| I'm curious if a Canadian court would hold Meta liable for the
| man's losses in this case as well.
| jessriedel wrote:
| Yea, it's certainly a reasonable argument if the wrong
| information comes from the company itself.
| hermitdev wrote:
| Yeah, the headline is overly broad in just saying 'AI'.
| From the headline alone, it'd be easy to write this off as
| "duh, this guy's a fool", but the AI in question here is
| from Meta itself. And not only is it from Meta, it's the
| AI they've put in charge of support.
| Animats wrote:
| That's an excellent point. That court decided that an AI
| agent was an agent in the legal sense. "Agent" is a legal
| concept - someone acting for someone else.[1] It's what
| allows employees to act for a company. Otherwise nobody
| could do anything without signoff from the top. There are
| limits to agency, but it's a rule of reason thing - you can
| assume a store clerk has the authority to sell you stuff,
| and someone whose job is to answer questions has the
| authority to answer questions. The company has
| responsibility for the agent's actions within the scope of
| their authority.
|
| [1] https://en.wikipedia.org/wiki/Law_of_agency
|
| [2] https://www.upcounsel.com/lectl-what-the-california-civil-co...
| lxgr wrote:
| The situation here is slightly different, though. Meta's
| AI in their various products is explicitly marketed as an
| LLM chatbot, not as a customer support channel.
|
| Whether they've been diligent enough in making that
| distinction (and whether that's even possible) will very
| likely be determined in court at some point.
| dghlsakjg wrote:
| That was a very interesting case. The chatbot in question was
| not LLM-based (the incident was pre-ChatGPT in any case) and
| was simply parroting an out-of-date or incorrect policy that
| it had been explicitly programmed with. It seemed to gain a
| lot more traction in the press because of LLMs. "Air Canada
| forced to honor terms and conditions on their website" is a
| whole lot less interesting.
|
| This FB thing is a case of an LLM simply hallucinating
| without direct human intervention.
|
| Very different cases from a computer science perspective. My
| hope is that legally, they don't get viewed differently.
|
| If you outsource functions of your business to a third party
| contractor you are still responsible for what they do and
| say. I don't think we should allow companies to weasel out of
| their obligations because they were dumb enough to let a
| sentence generator loose in a way that it could make
| commitments.
| throwaway48476 wrote:
| I wonder if he has a legal claim like the Air Canada passenger
| to whom the AI quoted a fictitious reimbursement policy.
| astrange wrote:
| That incident happened before ChatGPT was released and
| probably didn't involve AI. Anyone can write a wrong customer
| support script if they try.
| idle_zealot wrote:
| The bad thing is that people still think LLMs can be trusted
| _at all_. Companies integrating them into their offerings are
| not helping the public adopt the correct mental framing of
| these tools as "plausible text generators".
| Gigachad wrote:
| The general public is getting lied to constantly. HN users
| have a bit more context to see through the bullshit, but the
| marketing being pushed on people is that these AI tools are
| super-genius, incredible, world-changing tools that make
| everyone 100x more productive.
| dotnet00 wrote:
| Even many HN users instantly resort to misdirection via
| comparisons to humans or nebulous upcoming AGI instead of
| acknowledging that we have to live with these limitations
| for the foreseeable future.
| rootusrootus wrote:
| Maybe we have a bunch of users who primarily code in
| languages with duck typing. So that extends over to
| assessing the abilities of LLMs -- "talks like a human,
| therefore it is the same thing."
|
| I'm only sorta kidding. I am surprised at the number of
| people who are comfortable with such a shallow
| conclusion.
| Last5Digits wrote:
| HN users are the most uninformed and plainly stupid
| audience I have ever seen when it comes to AI and LLMs
| specifically. Blind cynicism is not an indicator of
| intelligence.
|
| I've spoken to people completely disconnected from modern
| technology, and their average understanding of AI far
| surpassed that of 99% of HN comments on this topic. This is
| not an overstatement; most comments on here about AI are on
| the level of a geriatric patient trying to discuss the
| technology behind a modern iPhone.
| jtbayly wrote:
| Blind cynicism?
|
| I've read people on HN arguing that AIs are currently
| fully conscious. This is not what I would call "blind
| cynicism."
| Last5Digits wrote:
| Dude, look at any thread about any capability of current
| LLMs and I promise you that you will see more comments
| complaining about hype than comments actually talking
| about the topic of the link. Where is this magical hype
| that seems so widespread yet is weirdly absent when you
| look for it?
|
| Consciousness especially is an absolute meme at this
| point. No one will ever actually give any definition,
| test, or set of capabilities when it comes down to it.
| Obviously, because then the goalposts would be firmly
| planted, a big problem when AI moves as fast as it
| currently does and no one wants to have any uncomfortable
| discussions.
| rurp wrote:
| Man the techno-utopianism is awfully strong for some
| people when it comes to LLMs. There are a wide range of
| opinions about these models on HN, many positive and many
| pointing out the very real flaws the current models have. If
| you find this mix of takes so offensive you might want to
| reconsider your own opinions a bit. These models are
| interesting, but they aren't magically perfect.
|
| Most people outside of tech don't understand how these
| models work at even the highest level of them being text
| predictors, or that their output is highly dependent on
| the training data. Many people don't even realize the
| enormous amounts of energy and data required to train
| these models.
|
| Seriously, try asking a random relative how an LLM works
| at a basic level. You're likely to get a blank stare at
| even the term "LLM".
| Last5Digits wrote:
| Why do you feel the need to arbitrarily ascribe some
| ideology to random people based on one comment? I'm not
| "techno-utopian" in any sense of the word; I believe that
| the current AI development is highly risky and that we
| need to take careful measures such that society at large
| is prepared for the changes it may bring.
|
| The "wide range of opinions" I see on HN are largely
| misinformed: they either lack the necessary technical
| understanding of current LLMs or are attempting to spin
| up some crackpot philosophical distinctions lacking in
| any rigor or consistency. I've never claimed that LLMs
| are perfect and I'd love to discuss their flaws! Believe
| it or not, that's why I continue to read these threads -
| to find genuinely informed takes contradicting my own.
|
| Most people outside of tech tend to have no bias against
| or for LLMs, which gives them a leg up in finding
| consistent opinions about their capabilities. They tend
| to inform themselves with an open mind, which allows them
| to put things into context. Tech people have an immediate
| negative bias, because the implication of any system
| being able to write even a single line of code is an
| immediate intellectual threat. Therefore, things are
| interpreted maximally negatively.
|
| For example, all of the talking points you mentioned are
| completely irrelevant unless interpreted with maximal
| negative bias:
|
| - Text prediction is a general problem; being good at it
| requires understanding, reasoning, and any other
| intellectual faculty you believe to be unique to humans.
|
| - Every single system in existence is highly dependent on
| the data it uses to model the world, humans are no
| exception to this.
|
| - The amount of data required by any modern LLM is
| massively dwarfed by the amount of data that evolution
| and human civilization required to get to this point.
|
| - The energy requirements of modern LLMs are
| environmentally irrelevant when compared to literally any
| industry in either manufacturing, transportation or
| entertainment. We justify immensely more environmental
| damage for far less utility every single day.
|
| After the giant media carousel last year, most people
| know what an LLM is, and the intuitive understanding they
| built from that reporting is way more accurate than what
| I have seen here. I have asked relatives and even
| acquaintances about just that. And as I have stated in my
| comment, their understanding is vastly better than that
| of HN.
| chipotle_coyote wrote:
| Yeah, it's true. I asked my grocery checker how an LLM
| works, and she rolled her eyes and said, "Come _on,_
| Chipotle, they're lexical analysis systems that operate
| by performing vector math operations on points in a vast
| multidimensional space that represent tokenized subwords;
| everyone can immediately intuit that based on the sixty-
| second cheerleading news clips they saw on CBS. What do
| you think I am, a Hacker News reader?" Then she threw a
| papaya at me.
| Last5Digits wrote:
| Understanding comes in many forms. My uncle will not be
| able to model the fuel flow in his car's engine using the
| Navier-Stokes equations, yet he can still drive better
| than me. When it comes to LLMs, an understanding of the
| transformer architecture is wholly unnecessary to develop
| a good model of their capabilities and pitfalls. HN
| commenters tend to lack both a technical and abstract
| understanding of LLMs, while non-tech people tend to only
| lack the former.
| Mawr wrote:
| I find it extraordinarily unlikely that the average
| person's understanding of AI is any different from that
| of the man the article is about.
|
| The text AI outputs resembles that of a well-spoken
| expert on the given subject matter speaking with full
| confidence and authority. There's nothing to clue the
| user into the unreliability of the output, so 99% of
| users will not think to double-check.
|
| The chat from the article proves as much: [1]. There's no
| way a non-techie would doubt this well-worded, reasonable-
| sounding answer to a question about Meta from the official
| Meta chatbot while using the Meta app.
|
| [1]: https://i.cbc.ca/1.7219639.1717103895!/fileImage/httpImage/i...
| RecycledEle wrote:
| > The bad thing is that people still think LLMs can be
| trusted at all.
|
| LLMs are as trustworthy as humans.
|
| Humans have been being wrong for about as long as we have
| been lying.
|
| Whether you get information from a human or an LLM, check it.
|
| I worry about the people who insist on credible sources
| rather than checking information for themselves. I think 80%
| or more of them are trolling me, but there are some who
| genuinely do not apply the Scientific Method to check facts
| in their everyday life. I truly feel sorry for them.
| TuringTourist wrote:
| It would be nice to have a confidence level for pieces of
| information, like humans have.
| MajimasEyepatch wrote:
| This is not true. Sure, humans can lie or get things wrong.
| But normal people will also admit when they don't know
| something. LLMs tend not to admit when they don't know
| something, and they use an authoritative voice that sounds
| like they know what they're talking about. To an untrained
| person, this can easily be misleading.
| dylan604 wrote:
| > But normal people will also admit when they don't know
| something.
|
| You'd like to think so, right? However, this isn't really
| a solid thesis. Decent people will admit when they don't
| know. Is that normal? I've worked with so many people
| that just do not fit that definition at all, to the point
| it just seems like that's the normal way to behave. Maybe
| I'm jaded and grossly overweighting it, but it seems I
| have been in way too many meetings where arguments dragged
| on because someone refused to back down and admit their
| ignorance, and that arrogance wasted valuable time through
| a refusal to accept input from others.
| josephcsible wrote:
| It's not about intentional deception. LLMs are very
| confidently incorrect way more often than humans are.
| lxgr wrote:
| Eh, I've had the questionable pleasure of talking to
| first level support call centers a couple of times
| recently, and I wouldn't be so sure about that.
|
| The number of times I've been told that yes, resetting my
| iPhone's network settings and reinstalling an app will
| resolve my billing issue or similar...
| lxgr wrote:
| Even if that were true (I don't think it is): The more
| important distinction between humans and LLMs is
| accountability.
|
| If a customer support agent gives you incorrect
| information, you can often hold the company liable for it
| (assuming you can prove it; I suppose there's a reason for
| why companies prefer certain support channels over others).
|
| If an AI "lies" to you, you're largely on your own right
| now.
| Majromax wrote:
| Not necessarily. In Canada, a case in February
| (https://www.mccarthy.ca/en/insights/blogs/techlex/moffatt-v-...)
| held that Air Canada could be held responsible for
| incorrect information about a refund given to a customer
| by its chatbot.
|
| Notwithstanding differences in jurisdiction, applying
| that idea to this case would rely on finding that Meta
| owed Gaudreau a duty of care that extended to the Meta AI
| chatbot.
|
| It would be more difficult to make this claim if Gaudreau
| had asked the question of Google, since Google itself is
| not usually responsible for false information uncovered
| by its searches.
| lxgr wrote:
| That's indeed going to be an interesting question (also
| discussed in this sibling thread:
| https://news.ycombinator.com/item?id=40536860).
|
| My gut feeling is that it should be possible for
| companies to distinguish an AI product (i.e. as something
| provided to customers like a search engine, as you say)
| from an AI "working for them", but I can see a _lot_ more
| disclaimers showing up in Meta's various AI chat channels
| soon.
| markhahn wrote:
| Did Meta present the AI as an official customer-service
| chatbot?
| lxgr wrote:
| What I see on WhatsApp:
|
| "Messages are generated by Meta AI. Some may be
| inaccurate or inappropriate. Learn more."
|
| Which leads to a pop-up further explaining that use cases
| include things like "creating something new like text or
| images".
|
| I think it's going to be really interesting to see
| whether that's considered enough by courts, or if they'll
| take the position that these things pretend too well to
| be a real person to make such a disclaimer sufficient,
| similarly to how e.g. a brokerage can't disclaim "no
| investment advice" and then go on to say "but buy this
| stock, it's gonna moon tomorrow, trust me bro".
| idle_zealot wrote:
| Good luck checking every fact you encounter with the
| scientific method (and making sure to repeat your
| experiments to ensure reliability, oh and don't forget peer
| review to evaluate your methodology). What is your proposed
| scientific experiment to test... what Facebook's support
| number is?
|
| My point is just that credible sources are absolutely
| necessary for information to disperse. Nobody can afford to
| figure out the modern world from first principles.
| Mawr wrote:
| In theory. In practice, every piece of information you can
| get from a human has mountains of context around it which
| lets you gauge the reliability of the information.
|
| A skilled motorcycle rider explaining how to take corners
| in a widely watched youtube video, with hundreds of
| comments confirming the advice and several recommended
| videos from other riders that basically say the same thing
| is an extremely strong positive signal.
|
| The same answer gotten from a magic AI answer box is just
| as likely to be right as wrong, 50/50.
| layer8 wrote:
| Look at the screenshot in the article. If a human Facebook
| representative gave that response, would you not trust
| them? And if not, how would you apply the Scientific
| Method to fact-check it?
| joe_the_user wrote:
| _Companies integrating them into their offerings are not
| helping the public adopt the correct mental framing of these
| tools as "plausible text generators"_
|
| "Not helping" seems a wild understatement. "Deceiving people
| into taking the wrong frame" seems more accurate.
| echoangle wrote:
| I mean, that's kind of on Meta; as a customer I shouldn't
| really have to care about the internals of the company. If a
| disgruntled employee lies to customers, that shouldn't be the
| customer's problem either. To me, it's all just a statement
| by the company.
| p3rls wrote:
| We're going to see a lot more SEO scams coming from social
| media platforms now that Google is promoting places like
| Reddit and Quora. Even on /r/SEO you can see moderators
| asking themselves questions from alt accounts, subtly
| promoting themselves. It's dog shit scammers all the way
| down.
| not_your_vase wrote:
| As a millennial, I'm more amazed that someone willingly uses
| a phone for non-mandatory and not-burningly-urgent phone
| calls... Why on earth anyone would do that is way beyond me.
| throwaway22032 wrote:
| AI has kind of fucked this, but for me (also a millennial) I
| prefer to speak to real people because they are intelligent
| beings with roughly the same motivations as me and usually want
| to help out their fellow man.
|
| For example, I can call a local store and ask "hi, do you have
| this item in stock, can you check on the shelf and set it aside
| for me please, I will be there in 25 mins".
|
| By contrast stuff like "click and collect" order flows online
| are super rigid.
| ChrisMarshallNY wrote:
| _> I prefer to speak to real people_
|
| Thanks to AI, you will speak to who you _think_ is a "real
| person"...
|
| I remember, some time ago, some scammer tried to get my wife
| to address an "urgent bank issue."
|
| What tipped her off was how _incredibly good_ their "tech
| support" was.
| throwaway22032 wrote:
| Yeah, indeed.
|
| Time to go outside more :)
| tocs3 wrote:
| As someone a little older, I remember being able to talk to a
| person to get issues resolved fairly easily and reliably. The
| online help is great when the issue at hand is pretty cut and
| dried. It is nice for a non-expert to be able to explain an
| issue to support on the phone and just have it taken care of.
|
| Support from days gone by was not perfect (hold times, support
| reading off a script), but it was often a nice option.
| happypumpkin wrote:
| I'm Gen-Z and talking to a human representative of a company
| makes me much more confident that _something_ will happen as a
| result of my efforts (though still not certain).
|
| I scheduled an apartment viewing recently, and the only method
| they provided to do so was chatting with an AI (seriously)... I
| then tried and failed to find a way to contact a human for
| confirmation multiple times. Lo and behold, nobody was at the
| leasing office when I showed up at the scheduled time. I came
| back later and eventually found somebody; they had not seen
| anything I'd done with the bot.
|
| Software for small businesses and local governments is often
| really bad and I'd much prefer to make sure a person knows what
| I'm trying to get accomplished.
| janalsncm wrote:
| When I was searching for apartments every complex had the
| same AI program for scheduling. It was horrible.
|
| I got to talk to one of the leasing managers at one of the
| viewings and I told him it made them seem cheaper, not more
| tech-savvy. He told me they had spent millions of dollars on
| it.
| happypumpkin wrote:
| Crazy. If they won't let me speak to a person I'd still
| much prefer just having a generic click-your-timeslot web
| app than waste time talking to a bot. And for millions of
| dollars they could just hire a human for a decade or
| more...
| inetknght wrote:
| > _When I was searching for apartments every complex had
| the same AI program for scheduling. It was horrible._
|
| Was it RealPage? I hear they're illegally colluding to
| raise prices.
| boscillator wrote:
| There seems to be a semi-infinite market for garbage
| software sold to landlords. At my current place I need an
| account to unlock my door, a different account to open the
| garage door (because the garage is managed by a third
| party), an account to reserve the elevator for move-in day
| (which tried to upsell me moving services), an account to
| get sent my water bill, which charges me $15 a month for the
| privilege (I don't pay my bill through this service, I just
| have it emailed to me), an account to pay rent, and an
| account to submit maintenance requests. Part of the trick
| seems to be to offload the costs onto the tenants, who have
| no choice, but I'm sure our landlord is paying a good chunk
| for some of these.
|
| If you have minimal to zero scruples, this seems to be an
| easy market to make a startup in. Landlords will buy
| anything!
| happypumpkin wrote:
| Don't forget the account to open shared mailboxes for
| packages. "Luxor" for me. It actually works so I don't
| mind much but I hadn't really considered how much extra
| rent all the apps might be costing me.
| burningChrome wrote:
| Had the same thing happen for a town home I was interested in
| buying. Went through their online scheduling app. Got email
| confirmation with agent's name, but no phone number. Got
| another confirmation day of. Didn't think anything was amiss.
| Go out to building, wait for 20 mins and leave after agent
| was a no-show, no-call.
|
| I called their office, and after 20 minutes of trying to get
| around their obnoxious automated phone menus I finally got
| someone, who informed me that they don't use THAT app anymore
| to schedule appointments; I needed to use their NEW app, and
| they sent me a totally different app link in an email. I told
| them they are probably losing a ton of business, because the
| OTHER app is very clearly still out in the wild and very much
| still being used.
|
| I went with a different company and had much better luck.
| lepus wrote:
| As a millennial I think voice calls sometimes are great. It
| obviously doesn't always work with big orgs like Facebook,
| but because so many people are now so afraid of, or annoyed
| by, just talking to a real person for a few minutes, it's
| become a real power move to go through the minor effort of
| making a call and expect some immediacy to get things moving
| quickly. Email or text can easily be ignored and punted off
| (e.g. "whoops, I didn't see it"), and they increase the odds
| of miscommunication or of having things dragged out going
| back and forth.
| mitthrowaway2 wrote:
| Fellow millennial, I also hate using the phone for anything,
| but very often a business provides no other interface to
| resolve my edge-case issue. Connecting to a human
| representative to discuss the situation ends up being the only
| way to resolve it. If they have a [solve my specific problem]
| button on their website, I'll use that, but often there is no
| such button.
| erremerre wrote:
| Millennial here. I hate receiving phone calls, the most
| important reason being that I am a foreigner in the country
| I live in, where they speak another language.
|
| I do not mind a phone call when I am the one initiating it,
| and hence know its context and the expectations.
| mrweasel wrote:
| Not sure why you're throwing in the randomly assigned label of
| millennial, but fine, I also fall in the category, and I've
| taken to just calling people and companies.
|
| First of all, understand that many companies, especially
| smaller ones, have people whose job is answering phone calls.
| Rather than doing a multi-day back and forth via email or
| chat, where you're one out of five that "agent" is currently
| servicing, calling is really, really efficient. Clarification
| and confirmations are instant, and alternatives can be quickly
| discussed. I call because it's efficient.
|
| Also, have you ever noticed that most people SUCK at email?
| Try sending an email to a company with two or more questions.
| What will happen is that you'll get an answer to the first
| question, and then they forget about the rest. The larger the
| company, the more likely this is to happen, because they can't
| deal with three issues in one support ticket, at least that's
| my theory. So now you need three emails.
|
| I used to hate calling people, but I found that I hated
| uncertainty more, and I hate getting wrong half-answers to my
| questions. Calling people fixes all of this. Always call, but
| get confirmation in writing.
| finack wrote:
| I look forward to this never happening again and not becoming a
| massive problem for the next 10 years.
| lcnPylGDnU4H9OF wrote:
| Not to apologize for the irresponsible deployment of this
| chatbot but it should be noted that the guy got the number from
| a Google search (think about the results you'd get for
| "facebook support number"). It's been a massive problem for at
| least the last 10 years.
| nevermore24 wrote:
| It's unfortunate that Google is unable to solve this
| intractable technical problem.
| _cs2017_ wrote:
| Seems like this information came from Quora:
| https://www.quora.com/Is-1-844-457-1420-really-Facebook-supp....
| Screenshot: https://postimg.cc/gallery/2nFq5Cm.
|
| I suspect the helpful SEO guy who posted this answer was trying
| to get more visibility on Quora, so he answered many questions
| automatically or semi-automatically without verifying anything.
|
| This is the beginning of the post:
|
|     Ruhul Alom
|     Social Media Marketer at Social Media
|     Author has 2.9K answers and 1M answer views · 6mo
|
|     My dear! Yes, 1-844-457-1420 is a valid Facebook support
|     phone number. It is a toll-free number that is available
|     24/7. You can call this number to get help with a variety
|     of Facebook issues, such as:
|
|     - Resetting your password
|     - Logging in to your account
|     - Recovering a hacked account
|     [...]
| ceejayoz wrote:
| The "helpful SEO guy" likely is (or was hired by) the scammer.
|
| StackOverflow gets lots of fake posts like this promoting
| numbers. Around tax time there are a lot of Quicken ones.
| dylan604 wrote:
| See, this is what confuses me to no end. Not once, ever,
| have I thought of asking an online forum for a phone number.
| Maybe I'm paranoid enough after all??? Also, I'm old, so I
| actually visit companies' webpages. We've been through enough
| "don't fall for phishing" training by now, right? You don't
| trust links, phone numbers, whatever, from anything that is
| not the official place for that information.
| ceejayoz wrote:
| Facebook doesn't provide phone support, and people get
| desperate. People then Google things like "Facebook support
| phone".
|
| All you'll find is results like
| https://gethuman.com/phone-number/Facebook or
| https://facebook-pay.pissedconsumer.com/customer-service.htm...
| or the Quora post, all of which have numbers that almost
| certainly go to a scammer.
|
| Bonus points when it makes it into the AI models (as
| happened here) where they repeat it verbatim as if it were
| the truth.
| dylan604 wrote:
| again, even if I were the one doing that Google search,
| with the domain examples you provided, I wouldn't trust
| any of them.
|
| Like, common sense on the interwebs just continues to
| disappear. Gullibility seems to have increased as
| critical thinking and coming to logical conclusions are
| disappearing.
| ceejayoz wrote:
| In this case, it was Meta's own AI regurgitating
| something it found on Quora. Quite a few people would
| trust that, especially non-technical folks.
| dylan604 wrote:
| Yes, that is the point of TFA, but I was commenting on the
| data that was posted online well before the AI was "born".
| I'm guessing that Meta can fix it with a prompt that tells
| the system it is not allowed to verify FB phone numbers, or
| that there are _NO_ phone numbers for the public to use to
| contact FB. But don't inform the user that there are no
| phone numbers; only deny that any given phone number is a
| valid phone number for FB (or Meta).
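|
| A minimal sketch of one blunt way to enforce that rule
| outside the prompt: a deterministic post-filter that redacts
| anything phone-number-shaped from the bot's reply (the regex
| and refusal wording here are illustrative assumptions, not
| Meta's actual guardrail):
|
|     import re
|
|     # Matches common North American phone-number shapes such
|     # as 1-844-457-1420, (844) 457-1420, or 844.457.1420.
|     PHONE_RE = re.compile(
|         r"(\+?1[-.\s]?)?(\(\d{3}\)|\d{3})[-.\s]?\d{3}[-.\s]?\d{4}"
|     )
|
|     REFUSAL = ("Facebook does not offer phone support; "
|                "please use facebook.com/help instead.")
|
|     def redact_phone_numbers(reply: str) -> str:
|         """Return a canned refusal if the model's reply
|         contains anything phone-number-shaped, so a
|         hallucinated or scraped scam number can never reach
|         the user."""
|         if PHONE_RE.search(reply):
|             return REFUSAL
|         return reply
|
|     print(redact_phone_numbers(
|         "Yes, 1-844-457-1420 is a valid Facebook support "
|         "number."))
|     # -> Facebook does not offer phone support; ...
|
| Unlike a prompt instruction, a filter like this cannot be
| talked out of its rule, at the cost of some false positives.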
| ceejayoz wrote:
| That's where the black hat SEO comes in. Google lost the
| war well before tinkering with generative AI in the
| results; the AI just makes it even worse.
|
| The collapse of search has happened faster than non-
| technical folks realize. They still trust the computer
| quite a bit.
| JohnMakin wrote:
| Bit tangential, but what the heck is it with scammers saying
| "dear" so much? Pretty much every pig butchering or social
| engineering attempt has had them repeatedly addressing me as
| "dear."
| djeastm wrote:
| I suspect that, in learning English as a second language,
| the textbooks tell them to start all correspondence with
| "Dear" so as not to appear impolite.
| throwaway48476 wrote:
| It fell out of fashion in Western English-speaking countries
| decades ago, but not in the third-world English-speaking
| countries the scammers come from.
| ghaff wrote:
| As well as the instantly recognizable obsequious
| politeness.
| lxgr wrote:
| Many companies outsource their customer support staff as
| well.
|
| That, and the fact that LLMs are now available to pretty
| much anyone for effectively nothing, would make me very
| cautious in basing my judgement of something being a scam
| or not exclusively on a caller's accent, spelling,
| mannerisms etc.
|
| Lately, it's actually been quite the opposite in my
| experience, and I don't find that too surprising either: A
| lucrative scam business can afford to pay much more than
| the average US company that sees customer support as a cost
| center to be optimized at any cost. So why wouldn't their
| staff's English be better?
|
| Social engineering scams are about to become a _lot_ more
| exciting (in a bad way), not least thanks to LLMs (with and
| without voice capability), and I think people are
| absolutely not ready for it, not even us professionals
| working in tech.
| IAmNotACellist wrote:
| Ma'am just do one thing for me, go take a coffee or a glass
| of water and I will take care of each and everything.
| jfengel wrote:
| I see a _ton_ of this on Quora. Not just for Facebook, but for
| a lot of online banks and others. They have hundreds of
| accounts doing it.
|
| Quora doesn't even pretend to police this kind of thing.
| Automated moderation _might_ remove it, but only after it has
| been reported. There's far, far too much of it for users to
| report all of it.
|
| Nobody pays attention to it on Quora, but it's clear that it's
| out there to poison AI and search engines.
| guluarte wrote:
| I doubt they even care:
| https://www.google.com/search?q=site%3Aquora.com+valid+Faceb...
| maxbond wrote:
| > What he didn't know at the time is there is no phone number for
| Facebook customer support.
|
| Part of the problem here is that Facebook (though in fairness,
| they are not unique here) has left this traditional path of
| escalation void, leaving only fake numbers. They don't even have
| a real number to play a recorded message affirming that there is
| no ability to call.
|
| ETA: For instance, I notice Facebook appears to own the
| typosquat `facrbook.com`. I feel like it's the same principle,
| though I assume toll-free numbers are more expensive.
| chimeracoder wrote:
| > Part of the problem here is that Facebook (though in
| fairness, they are not unique here) has left this traditional
| path of escalation void, leaving only fake numbers. They don't
| even have a real number to play a recorded message affirming
| that there is no ability to call.
|
| Contrast with Experian, which _has_ a number for consumers to
| call, but actually has an elaborate infinite loop in its phone
| tree that prevents you from actually talking to a human (this
| is by design).
|
| If you're one of their customers (read: a business paying for
| their service), there's support you can call, but for
| individuals who have issues with their online Experian account
| or credit report, you can't, even if you're a paid subscriber
| to their consumer-oriented credit reporting services.
| throwaway48476 wrote:
| "Pls fix" proposed a market for bribing meta employees to deal
| with customer support requests.
| hiccuphippo wrote:
| So lobbying?
| jessriedel wrote:
| It's untenable from a marketing perspective to advertise a
| phone line that just talks about the services you don't offer.
| One could maybe hope for a statement on a help page that says
| "Facebook will never ask you to call a support number".
| szundi wrote:
| It is easy as hell
| maxbond wrote:
| I think what you've gotta do is say, "You can't call, but
| here is the number anyway," because customers aren't
| necessarily interacting with your page anymore. They're
| interacting with AI summaries of your page. Those AIs might
| be in house, or might be provided by a search engine. What is
| tenable or untenable will have to shift to the realities of
| how users are interacting with the information you present.
|
| If you can't provide their AI with text answering their
| _direct_ question (eg, "what is the support number for
| Facebook"), they'll find a document which does provide such
| text. If it's not you then it's a scammer or competitor. UX
| for these customers means presenting information in a way
| that sorts high in a semantic search and is robust to
| transformation.
|
| If you provide text indirectly answering the question ("that
| number doesn't exist" rather than a literal number), you're
| liable to be scored as less relevant than a wrong but direct
| answer ("the number is 1555 SCAMMER"). You're also less
| robust to transformations, because you can't pull a valid
| phone number out of the text.
|
| Or maybe I'm wrong, take any certainty implied by my language
| as rhetorical. That's just the pattern I'm seeing in these
| tea leaves.
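|
| A toy sketch of that ranking effect, using bag-of-words
| overlap as a crude stand-in for an embedding-based semantic
| search (the query, documents, and number are made up):
|
|     def score(query: str, doc: str) -> float:
|         """Crude relevance: fraction of query words present
|         in the document."""
|         q = set(query.lower().split())
|         d = set(doc.lower().split())
|         return len(q & d) / len(q)
|
|     query = "what is the support number for facebook"
|     direct = "the facebook support number is 1-555-000-0000"
|     indirect = "we do not operate a telephone help line"
|
|     print(score(query, direct))    # ~0.71: wrong but direct
|     print(score(query, indirect))  # 0.0: true but indirect
|
| The wrong-but-direct answer outranks the true-but-indirect
| one, which is the failure mode described above.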
| maxbond wrote:
| Also, realistically, I don't imagine the phone number
| literally just telling you that the service wasn't
| available and hanging up. I imagine it would offer you
| options to get various pieces of information (the URL of
| the website, the legal address of Meta, how to navigate to
| the support knowledge base on the website, ...) and let you
| draw your own conclusion about how useful it was. Maybe
| it's occasionally handy to someone. At worst it's harmless.
|
| I think in an ideal world, you could use speech recognition
| to let people leave a message and open a ticket, as if they
| had emailed support@. When someone responds, the system
| calls them back and delivers the response using text-to-
| speech.
| QuantumGood wrote:
| I once had a Facebook rep I could call (they later ended
| this), and they didn't know there were two online newsletters
| about changes to internal Facebook apps used by advertisers
| (we used to be able to see who had clicked "interested" on an
| event). So they put in a bug report when the app stopped
| working, etc., but we later found it had been deprecated. All
| to say that dedicated support is often itself a cause of
| issues or confusion.
| Thadawan wrote:
| "There is a phone number for Meta online. When CBC called it,
| an automated recording said, 'Please note that we are unable to
| provide telephone support at this time,' and directed callers
| to meta.com/help."
| maxbond wrote:
| My mistake. Thanks for the correction.
| worble wrote:
| >Part of the problem here is that Facebook (though in fairness,
| they are not unique here) has left this traditional path of
| escalation void, leaving only fake numbers.
|
| Frankly it's absurd to me that it's legal to do so. Any public
| facing company that is sufficiently large should be required by
| law to operate a phone service where you can talk to a real
| human being.
|
| All of these huge megacorps are run with absolute impunity,
| and there is often absolutely zero avenue for regular
| everyday people to get in touch when they have issues. They
| direct you in endless loops to FAQs and "Community
| Resources"; even getting an email address is like getting
| blood from a stone sometimes.
| iknowSFR wrote:
| Just a matter of time. The adoption of expectations is
| dependent on the visibility of the occurrence.
| croes wrote:
| This will get worse when scammers get good at data manipulation
| for AIs.
|
| After SEO we'll get AIO.
|
| Same with prompt injection by malware.
| ItCouldBeWorse wrote:
| Society's tolerance for social, entrepreneurial, and AI
| experiments is something we treat as a commons, but we are
| currently building up a solid "anti" sentiment against all of
| it: liberalism, disruptive technology. I can imagine a
| "Luddite" party like MAGA shutting it all down hard and fast
| in the future. I can already imagine some future bureaucracy
| evaluating any proposed business idea for scam and harm
| potential and ending most of them before they even start. And
| this stuff right here is where that was born. The prison
| holding your future self was planted right here.
|
| _____________________________________________________
|
| Everything ever worth reading was written on the Pre-Collapse
| internet. So why not become a software archeologist, digging
| for the golden past? Exhume it, get it running again, bring
| it all back: perfectly fine software, books, and games our
| decadent ancestors abandoned and threw away to write off as
| rust. You too can help rediscover a past that worked better,
| untainted by AI, not yet riddled with Add-HD-Adds, when
| developers still had to be competent and companies still
| competed. Meet hot dig-site teams near you - now. Join
| Past-Querries-Quary Inc. Can we dig it? Yes we can!
| 29athrowaway wrote:
| That's what you get by trusting a stochastic parrot.
| Bluestein wrote:
| Twice scammed: Once by throwing away his life by having an F'book
| account. Then, the support scam.-
| fckgw wrote:
| damn dude you got his ass good
| ffhhj wrote:
| Zuck must be proud.
| RecycledEle wrote:
| This reminds me of the time I reported to PayPal a fake PayPal
| email saying my account had been suspended. The woman who
| answered the phone for PayPal told me very emphatically that I
| HAD BETTER HURRY UP AND DO EVERYTHING THEY TOLD ME TO!
| baobabKoodaa wrote:
| When I worked on a customer facing chatbot at my previous
| employer, we specifically wrote in the prompt "our customer
| service is not reachable by phone", and we tested that the
| chatbot was able to use that information and respond
| appropriately.
|
| But I guess you can't expect a tiny startup like Facebook to
| invest money into having 1 employee part-time tweaking the prompt
| of the chatbot to respond appropriately to commonly recurring
| user questions.
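|
| A minimal sketch of that kind of prompt guardrail, using the
| OpenAI Python client purely for illustration (the client,
| model name, and wording are assumptions, not the actual bot):
|
|     from openai import OpenAI
|
|     client = OpenAI()  # reads OPENAI_API_KEY from the env
|
|     SYSTEM_PROMPT = (
|         "You are a customer support assistant. "
|         "Our customer service is not reachable by phone. "
|         "If asked for a phone number, say that none exists "
|         "and point the user to the official help center. "
|         "Never confirm that any phone number belongs to us."
|     )
|
|     response = client.chat.completions.create(
|         model="gpt-4o-mini",
|         messages=[
|             {"role": "system", "content": SYSTEM_PROMPT},
|             {"role": "user",
|              "content": "Is 1-844-457-1420 your support line?"},
|         ],
|     )
|     print(response.choices[0].message.content)
|
| As with any prompt-only rule, this mitigates rather than
| eliminates the failure mode, and needs testing against
| adversarial phrasings, which is presumably why the testing
| step above mattered.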
| resoluteteeth wrote:
| Was the chatbot you worked on using an LLM?
| baobabKoodaa wrote:
| Yes
| bilalq wrote:
| Again and again we see that LLMs are great for creative output
| and terrible for anything where correctness matters. You
| should only use them in the latter scenarios when generating
| answers is slow/hard/expensive but verification of answers is
| quick/easy/cheap. Probabilistic and non-deterministic answers
| have their place, but the companies marketing them in products
| need to do a better job of expressing the limitations.
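|
| A toy sketch of that asymmetry, with a hypothetical
| llm_propose() standing in for the model: finding the factors
| of a number is comparatively slow, but checking a proposed
| factorization is a single multiplication, so wrong
| generations are cheap to reject:
|
|     import math
|
|     def llm_propose(n: int) -> list[int]:
|         # Hypothetical stand-in for an LLM's (possibly wrong)
|         # answer; in practice this would be a model call.
|         return [3, 5, 7]
|
|     def verify_factorization(n: int,
|                              factors: list[int]) -> bool:
|         """Cheap check: do the proposed factors multiply
|         back to n?"""
|         return math.prod(factors) == n
|
|     n = 105
|     candidate = llm_propose(n)
|     if verify_factorization(n, candidate):
|         print(f"accepted: {candidate}")
|     else:
|         print("rejected: model output failed verification")
|
| A scam phone number fails exactly this test: there is no
| cheap, independent check a user can run on it.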
| mrweasel wrote:
| It shows an amazing lack of understanding of what an LLM is,
| even from the people selling and implementing them. You're
| exactly right that they are terrible if correctness matters,
| but that should be obvious. If they were 100% correct, the
| size of the models would be much larger, as they'd need to
| retain all the original training data.
|
| You can use LLMs for language understanding and for
| interpreting questions, but they would need access to
| databases containing authoritative answers, and they should
| not answer anything for which they don't have an
| authoritative source.
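|
| A minimal sketch of that pattern, with a hypothetical keyword
| classifier standing in for the LLM: the model only routes the
| question, the answer itself comes from an authoritative,
| human-maintained store, and anything unmatched gets a refusal
| (the table contents and wording are assumptions):
|
|     from typing import Optional
|
|     # Authoritative answers, maintained by humans; the model
|     # never generates facts, it only routes to them.
|     KNOWLEDGE_BASE = {
|         "support_phone": "We do not offer phone support. "
|                          "Please use the in-app help center.",
|         "password_reset": "Go to Settings > Security > "
|                           "Change password.",
|     }
|
|     def classify_intent(question: str) -> Optional[str]:
|         """Stand-in for an LLM intent classifier: map
|         free-form text to a knowledge-base key, or None
|         if unsure."""
|         q = question.lower()
|         if "phone" in q or "call" in q or "number" in q:
|             return "support_phone"
|         if "password" in q:
|             return "password_reset"
|         return None
|
|     def answer(question: str) -> str:
|         key = classify_intent(question)
|         if key is None:
|             return ("I don't have an authoritative answer "
|                     "to that.")
|         return KNOWLEDGE_BASE[key]
|
|     print(answer("What number do I call for FB support?"))
|     # -> We do not offer phone support. ...
|
| The model can still misroute, but it can no longer invent an
| answer that isn't in the human-maintained table.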
| Magi604 wrote:
| I'm terrified of this happening to my elderly parents. It's
| why, even though it can be time-consuming, I always have them
| run "tech support" issues (no matter how small) through me or
| my brother-in-law, so some foreign scammer doesn't drain their
| accounts.
| noAnswer wrote:
| An older client got scammed by a fake Amazon hotline. The
| scammers bought an Xbox gift card while on his PC via
| TeamViewer, until he pulled the power cord.
|
| He then called me, and I tried to find the official Amazon
| hotline on amazon.de. Since I was unable to find it, I had to
| ask a search engine. The only results were third-party sites.
| They were from journalistic magazines I recognize (like
| chip.de), but it was still yet another gamble.
| jug wrote:
| Yes, AI in its current form is going to be a problem. I'm sure we
| haven't heard the worst yet. An AI may eventually kill a user.
|
| I believe the heart of the problem is that corporations are
| riding a hype wave as long as they can, and an AI chat looks
| like super-convincing, next-level stuff thanks to a simple
| interface that hides the fact that you cannot communicate
| with it as you would with a human being. You use natural
| language and it responds with natural language, which makes
| it not only convenient, but also dangerous.
|
| There's money to be gained in all this. At the same time,
| hallucination remains an unsolved problem, as does making AI
| humble enough to realize, and tell users, that it just
| doesn't know. The combination of hallucinating, raising
| convincing arguments, being confidently incorrect, and not
| knowing the boundaries of one's knowledge base is a terrible
| one to let loose in officially sanctioned products.
| e40 wrote:
| My 89-yr-old dad called "AMEX" and was scammed. He googled the
| number for AMEX and took the top result (he says; I did not
| witness this). I'm across the country, so that Zoom session
| was quite tedious (it took us an hour to get the permissions
| straightened out for Zoom to be able to share his screen).
| ADeerAppeared wrote:
| > (he says, I did not witness this)
|
| He's speaking the truth.
|
| Google has, for a long while now, let scammers simply buy
| advertisements to get their fake scam pages to the top of the
| results. And not just for major banks; various open source
| projects have been subject to this exact attack.
|
| It's imperative for security that you install adblockers on all
| their devices.
| bitnasty wrote:
| This is the real danger of AI, forget the "singularity" or any of
| that sci-fi crap. AI is going to destroy the average human's
| already suffering reasoning ability.
| lupire wrote:
| The AI information was 99% correct. Only 1 word was wrong.
| noman-land wrote:
| Yeah it said "kill" instead of "save". Oops.
| jay-barronville wrote:
| One of the things about LLM-based AI that concerns me the most is
| realizing that the average person doesn't understand that they
| hallucinate (or even what hallucination is).
|
| I was listening to a debate on a podcast a while ago and one of
| the debaters kept saying, "Well, according to ChatGPT, [...]"--it
| was incredibly difficult listening to her repeatedly use ChatGPT
| as her source. It was obvious she genuinely believed ChatGPT was
| reliable, and frankly, I don't blame her, because when LLMs
| hallucinate, they do so confidently.
| jhawleypeters wrote:
| If it did that, it's not intelligent, not "AI". Let's agree to
| stop abusing the term AI, for the sake of people like this.
| cmcconomy wrote:
| If companies are held liable for LLM-provided info, we will
| begin to see appropriate use.
___________________________________________________________________
(page generated 2024-05-31 23:02 UTC)