[HN Gopher] AI browser extensions are a security nightmare
___________________________________________________________________
AI browser extensions are a security nightmare
Author : terracatta
Score : 145 points
Date : 2023-06-08 15:51 UTC (7 hours ago)
(HTM) web link (www.kolide.com)
(TXT) w3m dump (www.kolide.com)
| Garcia98 wrote:
| The issue is not AI, nor browser extensions per se, the issue is
| the lackluster permission system that Chrome extensions have,
| it's pretty similar to what Android had 7 (?) years ago, which
| should not be acceptable in 2023.
| activiation wrote:
| Automatic updates should be disabled by default...
| matheusmoreira wrote:
| Pretty much every single extension that isn't uBlock Origin is a
| security nightmare.
| vorticalbox wrote:
| Even unblock is, only takes the repository owners login to be
| taken an update pushed.
| hxugufjfjf wrote:
| No. There are many good, secure browser extensions.
| madeofpalk wrote:
| Such as?
| hxugufjfjf wrote:
| Privacy Badger, 1Password, HTTPS Everywhere, Dark Reader,
| to name a few.
| madeofpalk wrote:
| > Add "Dark Reader"?
|
| > It can: Read and change all data on all your websites
|
| It already has the broadest permissions available. Dark
| Reader injects arbitary code into every page you visit.
| It's one silent update away from stealing all your
| sessions. This is a security nightmare.
|
| All browser extensions are a security nightmare.
| ricardo81 wrote:
| Only skimmed through the article, it seems -AI from the title
| would be an old story?
|
| Also, that huge 4.7MB image in the head of the article...
| SCUSKU wrote:
| SEO, who needs it!
| FL33TW00D wrote:
| But what is the "AI" ran entirely locally? https://pagevau.lt/
| williamstein wrote:
| The "Download for Chrome" link on that page is broken. "404.
| That's an error. The requested URL was not found on this
| server. That's all we know."
| amelius wrote:
| Actually, aren't _all_ browser extensions a security nightmare?
|
| Or has something changed recently?
| jprete wrote:
| No, because a typical safe-to-run browser extension is written
| in such a way that it can be examined to see what it does. AI-
| based tools can't be analyzed based on their code, so the only
| way to make them safe is by limiting their capabilities. Any
| such capability limit is likely to be either too constraining,
| not constraining enough, or require as much planning ability as
| the AI itself.
| amelius wrote:
| The problem is the permission system. Like apps, extensions
| have an all-or-nothing attitude to permissions. Browsers
| should allow the user to be more specific about permissions,
| and let extensions think the user gave more permissions than
| they actually did. E.g. if extension insists that they need
| "access to entire filesystem", the browser should make the
| extension believe they have access to the entire filesystem,
| but of course the entire thing is sandboxed and the user can
| restrict the access behind the scenes.
|
| Without this feature, extensions will keep insisting they
| need access, and the user will eventually fall for it.
| josteink wrote:
| > Like apps, extensions have an all-or-nothing attitude to
| permissions
|
| Browser extensions needs to declare their permissions. With
| Manifest V3 we're seeing even more need to declare
| permissions.
|
| Any extension cannot do anything not explicitly granted to
| it by the user upon installation.
| actionablefiber wrote:
| The issue is those extensions can withhold valuable
| functionality needlessly.
|
| If I download $usefulWikipediaCompanionExtension whose
| functionality only depends on access to *.wikipedia.org
| but whose manifest _demands_ permission on all sites, I
| 'd like to be able to tell my browser "if I'm not really
| on Wikipedia, only show the extension a blank page."
| hgsgm wrote:
| That's a lot more work than saying "No" to using the
| malware.
| actionablefiber wrote:
| It's common for various counterparties, including
| software, to ask for much more information than they need
| and possibly be doing untrustworthy things with it while
| also providing legitimate value to the end user.
|
| I've lied about my birthday while signing up for websites
| before. I've also made ad-hoc email addresses with
| forwarding to conceal my main email address. I've given
| fictitious phone numbers and I've used the names of
| fictional characters. I do this because I benefit from
| the service but I don't trust the provider to use my
| information responsibly.
|
| Not a logical leap to go from there to feeding fake data
| to extensions when they request data that the user deems
| unnecessary for their functionality.
| saurik wrote:
| Yeah: while declaring permissions sounds cool and tries
| to fit into the narrative of helping protect end users
| who don't know how to manage anything themselves, _at the
| end of the day_ it not only requires an extremely
| opinionated central entity in charge of listings which
| takes a role in attempting to mediate the incentive
| incompatibilities (something which should raise serious
| ethical red flags and begs the question of conflicts of
| interest with respect to that player and the market that
| they get to create) but not only _doesn 't work_ to
| prevent users from getting abused... it _will never work_
| : "this app has requested access to your birthday" might
| be easy, but the only actually-correct solution is to
| always provide a random date to every app by default and
| then allow the user to go out of their way--and this must
| not under any circumstance be something the app is
| allowed to prompt for: this must be something the user
| has to initiate through external UI--to say "I grant this
| app access to my real birthday".
| majormajor wrote:
| When you talk about not being able to analyze these based on
| their code do you mean because today they're all just calling
| out to OpenAI or whoever?
|
| The risks listed in the article itself mostly seem to fall
| under the same, non-AI-extension, core problem of "you're
| given them all your data." And that's a risk for non-AI-based
| extensions too, but if you look at the code of an AI one,
| it's gonna be obvious that it's shipping it off to a third
| party server, right? And once that happens... you can't un-
| close that door.
|
| (The risks about copyright and such of content you generate
| by using AI tools are interesting and different, but I don't
| know that I'd call them security ones.)
|
| The prompt injection one is pretty interesting, but still
| seems to fall under "traditional" plugin security issues: if
| you authorize a plugin to read everything on your screen, AND
| have full integration with your email, or whatever, then...
| that's a huge risk. The AI/injection part makes it
| triggerable by a third-party, which certainly raises the
| alarm level a lot, but also: bad idea, period, IMO.
| madeofpalk wrote:
| That's actually what I thought the title was until reading your
| comment, and I agreed vehemently.
| kpw94 wrote:
| Yeah parts of the article would still be as valid if this was
| about regular extensions.
|
| The main difference is that AI extension, by design, send the
| content of the pages you browse to a server.
|
| A malicious "calculator" extension could also send all the
| content to a server, and extension users don't really have an
| idea of what each extension is actually doing.
|
| So skip the "Malware posing as AI browser extension" section,
| it's same kind of security issues as a malware calculator
| extension.
|
| The legitimate AI extension's problems are more interesting.
|
| Article wastes a bit more time on other security issues you get
| from using AI LLM in general. Those apply whether you're using
| a browser extension or chat.openai.com directly.
|
| The valid point that applies to narrowly AI browser extension
| are:
|
| 1) it could send sensitive data you wouldn't have sent
| otherwise. Most people would know what they're doing when they
| explicitly paste the stuff on chat.openai.com. But when it's
| now automated via the extension DOM scraping, it's a bit harder
| to realize how much you're giving away.
|
| 2) And the hidden text prompt injection. That's interesting as
| now your attacker could be the website you browse, if you have
| configured too many plugins (Zapier plugin giving access to
| your email)
|
| These 2 parts of TFA are imo novel security issues that only
| exist with AI browser extension, and are interesting.
| moritzwarhier wrote:
| Already commented something similar in another thread:
|
| Why is the security policy for extensions still not architected
| like other web permissions?
|
| There has been a shift on mobile already from "take it or leave
| it"-style permissions on install towards more fine grained
| control not overidable by the app manifest.
|
| I think Browser extensions should behave similarly. Especially
| when it comes to which origins an extensions is allowed to act
| on.
|
| The user should be able to restrict this regardless of the
| manifest, even forced to do.
|
| Extensions that need to act on all or an unknown set of origins
| should require a big and scary prompt after installation,
| regardless of what the user agrees to during installation.
|
| I say this as a happy user of uBlock origin and React DevTools.
|
| But for the common user the default should be to deny
| permissions and require user interaction.
| hgsgm wrote:
| Mobile doesn't give you control over which origins it
| contacts.
| moritzwarhier wrote:
| Yes you are right, that came down to me after I hit the
| submit button. But consider my train of thought more an
| associative one.
|
| I'd like an UI similar to the mobile one. I brought up the
| origin thing because for lots of extensions I would like
| that kind of UI for origin control. Origin control is part
| of WebExtension API, but it's during installation, which
| forces even well-meaning developers to request overly broad
| permissions for some kinds of extensions.
| notatoad wrote:
| an extension developer can scope their extension to only run on
| certain URLs, and if that list changes then chrome will
| automatically disable it until the user re-authorizes for the
| new set of URLs.
|
| so they're not a _total_ security nightmare if they 're only
| authorized to run on sites where you don't enter any private
| data. for example, looking through my extensions list, the
| py3redirect that autmatically redirects python2 documentation
| pages to python3 pages doesn't request access to anything other
| than python.org.
|
| but otherwise, yeah, you're giving permission to execute
| arbitrary code on any website you visit, which is about as
| compromised as your browser can get.
| LapsangGuzzler wrote:
| shout out to the Arc browser, which has it's own browser
| sandbox and WYSIWYG tools to build JS snippets that run in your
| browser. I'm not affiliated with them in any way, but they're
| really changing the way I look at browsing online.
| moffkalast wrote:
| Does that come on a CD along with Intel Arc GPUs? :D
| Tycho wrote:
| I wonder when we'll start seeing computer viruses that
| communicate with a remote LLM in order to get help circumventing
| barriers.
|
| Alternatively, maybe anti-virus software can phone home to get
| on-the-fly advice.
| kypro wrote:
| > _Yes, large language models (LLMs) are not actually AI in that
| they are not actually intelligent, but we're going to use the
| common nomenclature here.
|
| I'm sorry for the off-topic comment, but why do I keep seeing
| this? What am I missing here - is it that some people define
| intelligence as >= human, or that LLM are not intelligence
| because they're *just* statistical models?_
| sublinear wrote:
| > LLM are not intelligence because they're _just_ statistical
| models
|
| This is exactly it for me.
| hospitalJail wrote:
| Its interesting to see what it thinks about some ideas, like
| I ask, what 5 companies are best at marketing. My goal here
| is to be hypercritical of the companies it says because they
| are masters at manipulation. GPT3.5 was awful and confused
| advertising and marketing. GPT4 was perfect (Apple, Nike,
| Coke, Amazon, P&G)
|
| As much as chatgpt doesnt want to give you answers because
| the fuzziness, it has the ability to make judgements on
| things like "This is the best" or "This is the worst".
|
| Ofc with bias.
| nathan_compton wrote:
| Does it have the ability or is it just generating text
| similar to what it has seen before? The two things are very
| different.
| hospitalJail wrote:
| In this examples, it likely took that those companies are
| often praised about their marketing in the same sentence
| marketing is mentioned.
|
| LLMs don't repeat text its seen before, it links
| words/tokens/phrases that are related. Its prediction,
| but the prediction isnt just copypasting a previous
| webpage.
|
| Have you use chatgpt yet? I wouldn't delay. Heck you are
| here on HN, you basically have a responsibility to test
| it.
| nathan_compton wrote:
| I've used it _extensively_. GPT4 is great, but it is not
| intelligent. I think its really weird and also totally
| understandable that people think it is.
| pixl97 wrote:
| Eh, please comprehensively define intelligent... I have a
| feeling that this may explain a lot about your answer.
| girvo wrote:
| It's something so new and foreign that I'm deeply
| unsurprised that some feel it's intelligent.
|
| I personally don't care one way or the other, whether it
| is or isn't. What I care about is whether it's useful.
| ericd wrote:
| And if your brain is mostly a statistical model of the world,
| with action probabilities based on what parts of it happen to
| be excited at the moment?
| jmopp wrote:
| How do we know that the brain is a statistical model of the
| world? It sounds like explaining an unknown phenomenon
| using the technology du jour - just 10/20 years ago, the
| brain was a computer.
| pixl97 wrote:
| So conversely, is the brain magic? And if so, if we look
| at the evolutionary lineage of neural networks, at which
| point did it become so?
| JohnFen wrote:
| This touches on a dichotomy that has fascinated me for
| decades, from the very beginning of my interest in AI.
|
| One side of the dichotomy asserts that "if it walks like
| a duck..." that is, if a computer appears to be
| intelligent to us, then it must be intelligent. This is
| basically the Turing Test crowd (even though Turing
| himself didn't approve of the Turing Test as an actual
| test of AI).
|
| On the other side, you have people who assert that the
| human mind is really just a super-complicated version of
| "X", where "X" is whatever the cool new tech of the day
| is.
|
| I have no conclusions to draw from this sort of thing,
| aside from highlighting that we don't know what
| intelligence or consciousness actually are. I'm just
| fascinated by it.
| xigency wrote:
| Are you intelligent or just a bunch of cells? Given that I
| can query it for all sorts of information that I don't know,
| I would consider LLMs to, at the very least, contain and
| present intelligence...artificially.
| vel0city wrote:
| I can query Wikipedia or IMDB for all sorts of information
| I don't know. I wouldn't consider the search box of either
| site to be "intelligent", so I don't know "query it for all
| sorts of information" is a generally good rubric for
| intelligence.
| [deleted]
| JohnFen wrote:
| Because we don't have a real handle on what "intelligence"
| actually is, any use of the word without defining it is
| essentially just noise.
| ethanbond wrote:
| Yeah this is exactly it. It's interesting seeing a precision-
| oriented discipline (engineering) running into the inherently
| very, very muddy world of semantics.
|
| "What do you mean it's not intelligent?! It passed Test X!"
|
| "Yes and now that tells us Test X was not a good test for
| whatever it is we refer to as 'intelligence'"
| ravenstine wrote:
| > is it that some people define intelligence as >= human
|
| I just want to say that this seems to be how many, if not most
| people define intelligence internally. If an LLM gets something
| wrong or doesn't know something, then it must be completely
| unintelligent. (as if humans never get anything wrong!)
| xigency wrote:
| Clearly the test isn't >= as ChatGPT is already more coherent
| than large swaths of the population. The AI test for some is
| that its intelligence >>> human intelligence. Which is funny
| because by that point in time, their opinion will be more
| than worthless.
| ethanbond wrote:
| Like with humans, there are intelligent ways to be wrong and
| unintelligent ways to be wrong.
|
| LLMs do a whole lot of "wrong in a way that indicates it is
| not 'thinking' the way an intelligent human would."
| ravenstine wrote:
| What's concerning about this is we are evaluating AI on a
| basis that humans are not subject to. LLMs in their current
| form are built on the knowledge of the internet, while
| humans have both the internet and realtime feedback from
| their own lives in the physical world. If a human brain
| could be trained the same way as an LLM, might it also
| connect seemingly unconnected ideas in a way that would
| appear as non-thought? Maybe, maybe not. LLMs seem to be
| biased heavily towards making best effort guesses on things
| it doesn't know about, whilst humans are far more modest in
| doing so. I just don't know if we're really at a point
| where we can conclusively decide that something isn't
| thinking just because it doesn't appear to be thinking by
| the standards we place upon ourselves.
| guy98238710 wrote:
| More like intelligence == human. ChatGPT is superhuman in many
| ways.
| shagie wrote:
| I think its the "just" statistical models part.
|
| If you pull up the TOC for an AI textbook, you'll find lots of
| things that aren't "intelligent". Machine learning is just a
| subset of it. I recall a professor in the AI department back in
| the 90s working on describing the shape of an object from a
| photograph (image to text) based on a number of tools (edge
| detection was one paper I recall).
|
| Also in AI is writing a deductive first order logic solver is
| covered in there as are min-max trees and constraint
| satisfaction problems.
|
| http://aima.cs.berkeley.edu
|
| https://www.cs.ubc.ca/~poole/ci/contents.html (note chapter 4)
|
| https://www.wiley.com/en-us/Mathematical+Methods+in+Artifici...
|
| People are trying to put a box around "AI" to mean a particular
| thing - maybe they want AI to mean "artificial general
| intelligence" rather than all the things that are covered in
| the intro to AI class in college.
|
| I ultimately believe that trying to use a term that has been
| very broad for decades to apply to only a small subset of the
| domain is going to end up being a fruitless Scotsman tilting at
| windmills.
|
| ... And you know what, I think it does a pretty good job at
| being intelligent.
| https://chat.openai.com/share/01d760b3-4171-4e28-a23b-0b6565...
| nathan_compton wrote:
| I say that large language models are not intelligent because of
| the way they fail to do things. In particular, they fail in
| such a way as to indicate they have no mental model of the
| things they parrot. If you give them a simple, but very
| unusual, coding problem, they will confidently give you an
| incorrect solution even though they _seem_ to understand
| programming when dealing with things similar to their training
| data.
|
| An intelligent thing should easily generalize in these
| situations but LLMs fail to. I use GPT4 every day and I
| frequently encounter this kind of thing.
| NumberWangMan wrote:
| Is there a definition of intelligence that rules out large
| language models, but that does not also rule out large
| portions of humanity? A lot of people would readily admit
| that they don't have programming aptitude and would probably
| end up just memorizing things. Do we say those people are not
| intelligent?
|
| It seems to me that the perceived difference is mostly in
| being able to admit that you don't know something, rather
| than make up an answer -- but making up an answer is still
| something that humans do sometimes.
| nathan_compton wrote:
| I have to admit this is a genuinely interesting question.
| Language models demonstrably do have some models of the
| world inside of them. And, I admit, what I say that they
| aren't intelligent, I mostly mean they are very stupid,
| rather than like a machine or algorithm. Artificial
| stupidity is progress.
| wongarsu wrote:
| There's long been a divide between what people call hard vs
| soft AI, or strong vs weak AI, or narrow vs general. The
| definitions are a bit fuzzy, but generally a hard AI or strong
| AI would be able to think for itself, develop strategies and
| skills, maybe have a sense of self. Soft AI in contrast is a
| mere tool where you put something in and get something out.
|
| Now some people don't like using the term AI for
| soft/weak/narrow AI, because it's a fleeting definition, mostly
| applied to things that are novel and that we didn't think
| computers were able to do. Playing chess used to be considered
| AI, but a short time after AI beat the human chess world master
| it was no longer considered AI. If you buy a chess computer
| capable of beating Magnus Carlsen today that's considered a
| clever algorithm, no longer AI. You see the same thing playing
| out in real time right now with LLMs, where they go from AI to
| "just algorithms" in record time.
| bee_rider wrote:
| Very clever people have located true intelligence in the gaps
| between what an machine can do and what a human can. Therefore,
| to show that you aren't a starry-eyed rube you put a disclaimer
| that you aren't really talking about intelligence, but
| something that just looks and acts like it.
|
| True intelligence is, of course, definitionally the ability to
| do things like art or... err, wait, sorry, I haven't checked
| recently, where have we put the goalposts nowadays?
| hospitalJail wrote:
| Stable Diffusion doesnt make art, it makes photos. We can
| deem them art.
|
| Its denoising software.
| lucubratory wrote:
| Ooh, this is a rare one! A comment directly noting the
| similarities between AI art with photography, but insisting
| both aren't art. You're in very historical company:
| https://daily.jstor.org/when-photography-was-not-art/
| pixl97 wrote:
| Heh, Computers will never be intelligent, we will just moving
| the bar until humans can no longer be classified as
| intelligent.
| ethanbond wrote:
| I'm hesitant to even call this moving the goal posts.
| Intelligence has never been solidly defined even within
| humans (see: IQ debate; book smart vs street smart; idiot
| savants).
|
| It's unsurprising that creating machines that seem to do some
| stuff very intelligently and some other things not very
| intelligently at all is causing some discontent with regard
| to our language.
|
| I see a whole lot more gnashing of teeth about goalposts
| moving than I do about people proposing actual solid
| goalposts.
|
| So what's your definition?
| bee_rider wrote:
| > I'm hesitant to even call this moving the goal posts.
| Intelligence has never been solidly defined even within
| humans (see: IQ debate; book smart vs street smart; idiot
| savants).
|
| > It's unsurprising that creating machines that seem to do
| some stuff very intelligently and some other things not
| very intelligently at all is causing some discontent with
| regard to our language.
|
| I think I agree about the language.
|
| I don't have a definition of intelligence. I don't work in
| one of those fields that would need to define it, so my
| first attempt probably wouldn't be very good, but I'd say
| intelligence isn't a single thing, but a label we've
| arbitrarily applied to a bunch of behaviors that are
| loosely related at best. So, trying to say this thing is
| intelligent, this thing is not, is basically hopeless,
| especially when things that we don't believe are
| intelligent are being made to exhibit those behaviors, one
| behavior at a time.
|
| > I see a whole lot more gnashing of teeth about goalposts
| moving than I do about people proposing actual solid
| goalposts.
|
| I might not see a ton of explicit "here are the goalpost"
| type statements. But, every time someone says "I'm using
| the term AI, but actually of course this isn't
| intelligence," the seem to me at least to be referencing
| some implicit goalposts. If there isn't a way of
| classifying what is or isn't intelligent, how can they say
| something isn't it? I think the people making the
| distinction have the responsibility to tell us where
| they've made the cutoff.
|
| Maybe I'm just quibbling. Now that I've written all that
| out, I'm beginning to wonder if I just don't like the
| wording of the disclaimer. I'd probably be satisfied if
| instead of "this isn't intelligence, but I'm going to call
| it AI," people would say "Intelligence is too hard to
| define, so I'm going to call this AI, because why not?"
| oszai wrote:
| Conceptually Speaking you can reduce it down to
| Intelligence and strip out the Artificial Label.
|
| So know the question is what is Intelligence. Our
| standardized testing Model tells us passing tests that
| Humans cannot would be considered intelligent.
|
| Then add back in artificial to complete the equation.
|
| Commercially the Term Ai Means nothing thanks to years of
| Machine Learning being labeled such. It's arbitrary and
| relays more to Group Think to avoid approaching that
| Intelligence is a Scalar Value and not a Binary
| Construct.
| [deleted]
| pixl97 wrote:
| >So what's your definition?
|
| I say we take the word intelligence and throw it out the
| window. It's a bit like talking about the either before we
| discovered more about physics. We chose a word with an
| ethereal definition that may or may not apply depending on
| the context.
|
| So what do we do instead? We define sets of capability and
| context and devise tests around that. If it turns out a
| test actually sucked or was not expansive enough, we don't
| get rid of that particular test. Instead we make a new more
| advanced test with better coverage. Under this domain no
| human would pass all the tests either. We could each
| individual sub test with ratings like 'far below human
| capability', 'average human capability', 'far beyond human
| capabilities'. These tests could be everywhere from
| emotional understanding and comprehension, to reasoning and
| logical ability, and even include embodiment tests.
|
| Of course even then I see a day where some embodied robot
| beats the vast majority of emotional, intellectual, and
| physical tests and some human supremacist still comes back
| with "iTs n0t InTeLLigeNt"
| russdill wrote:
| It's statistical models all the way down.
| ryanklee wrote:
| That is not a very good reason to call an entity
| unintelligent. There are uncontroversial models of human
| intelligence that are Bayesian.
| russdill wrote:
| That's what I'm alluding to.
| ryanklee wrote:
| Ah, apologies, I read your comment as alluding to
| statistics as a reason to dismiss intelligence in
| machines
| majormajor wrote:
| AI's a very soft term, and there's long been a technical vs
| "casual" split in what it means. Five or ten years ago you'd
| say your photo was retouched with AI dust removal, say, and
| we'd all know what that means. And that there was a big gulf
| between that and the sci-fi "AI" of Blade Runner or Her or Star
| Wars, etc.
|
| The user interface to Chat GPT and similar tools, though, has
| made a lot of people think that gap is gone, and that instead
| of thinking they are using an AI tool in the technical sense,
| they now think they're talking to a full-fledged other being in
| the sci-fi sense; that that idea has now come true.
|
| So a lot of people are careful to distinguish the one from the
| other in their writing.
| VoodooJuJu wrote:
| It's a way for the author to distinguish himself as one who is
| neither a purveyor of, nor fooled by, the magic, grift, and
| cringy sci-fi fantasizing that currently comprises the majority
| of AI discussion.
|
| Currently, most mentions of AI, outside of a proper technical
| discussion, are coming from crypto-tier grifters and starry-
| eyed suckers. Even further, a lot of discussions from otherwise
| technical people are sci-fi-tier fearmongering about some
| ostensible Skynet, or something, it's not quite clear, but it's
| clearly quite cringe. The latter is one of the many calibers of
| ammunition being used by AI incumbents to dig regulatory moats
| for themselves.
|
| Anyway, I understand why the author is distinguishing himself
| with his LLM...AI disclaimer, given the above.
| dguest wrote:
| In my field it's accepted (by some) that you write "AI" for
| your grant proposal and say "ML" when you talk to colleagues
| and want to be taken seriously.
|
| It feels a bit wrong to me, because as you say it's arguably
| a grift, in this case on the taxpayer who funds science
| grants. More charitably it might just be the applicant
| admitting that they have no idea what they are doing, and the
| funding agency seeing this as a good chance to explore the
| unknown. Still, unless the field is AI research (mine isn't)
| it seems like funding agencies should giving money to people
| who understand their tools.
| sebzim4500 wrote:
| Most people outside of academia understand AI to include
| way more than just ML. People refer to the bots in video
| games as AI and they are probably a few hundred lines of
| straightforward code.
|
| I don't think there is anything wrong with using the
| colloquial definition of the term when communicating with
| funding agencies/the public.
| dguest wrote:
| I agree that using a colloquial definition is fine. And I
| don't mean to be too harsh on people who use buzzwords in
| their grant proposal: it's just sort of the sea you swim
| in.
|
| But I only wish we could say that a few hundred lines of
| code was "AI": that would mean funding for a lot of
| desperately needed software infrastructure. Instead AI is
| taken as synonymous with ML, and more specifically deep
| neural networks, for the most part.
| dsr_ wrote:
| I think you're entirely wrong about this. Using the term
| AI or artificial intelligence directly invokes several
| centuries of cultural baggage about golems, robots,
| Terminators, androids and cyborgs and Matrix-squid.
|
| Saying "large language models" does not. Saying "giant
| correlation networks" does not. Not to be too Sapir-
| Whorfian, but the terminology we use influences our
| conversations: terrorists, guerillas, rebels,
| revolutionaries, freedom-fighters.
| sebzim4500 wrote:
| Should a nuclear power station rebrand itself to avoid
| being associated with Hiroshima? I really don't get what
| you are trying to say.
| shagie wrote:
| Would those topics that "outside academia understands AI
| to include" be covered in http://aima.cs.berkeley.edu ?
|
| When you say "bots in video games as AI" that's covered
| in the book titled Artificial Intelligence: A Modern
| Approach, 4th US ed. : II Problem-
| solving 3 Solving Problems by Searching
| ... 63 4 Search in Complex Environments
| ... 110 5 Adversarial Search and Games
| ... 146 6 Constraint Satisfaction Problems
| ... 180
|
| Those topics would be in chapter 5.
|
| Sure, it may be a few hundred lines of code, but it's
| still something that a Berkley written AI textbook
| covers.
|
| Spelled out more for that section:
| Chapter 5 Adversarial Search and Games ... 146
| 5.1 Game Theory ... 146 5.1.1 Two-player
| zero-sum games ... 147 5.2 Optimal Decisions
| in Games ... 148 5.2.1 The minimax search
| algorithm ... 149 5.2.2 Optimal decisions
| in multiplayer games ... 151 5.2.3 Alpha--
| Beta Pruning ... 152 5.2.4 Move ordering
| ... 153 5.3 Heuristic Alpha--Beta Tree Search
| ... 156 5.3.1 Evaluation functions ... 156
| 5.3.2 Cutting off search ... 158 5.3.3
| Forward pruning ... 159 5.3.4 Search
| versus lookup ... 160 5.4 Monte Carlo Tree
| Search ... 161 5.5 Stochastic Games ... 164
| 5.5.1 Evaluation functions for games of chance ... 166
| 5.6 Partially Observable Games ... 168
| 5.6.1 Kriegspiel: Partially observable chess ... 168
| 5.6.2 Card games ... 171 5.7 Limitations of
| Game Search Algorithms ... 173
| Animats wrote:
| I think I have an original edition of that book
| somewhere. Good Old Fashioned AI.
| shagie wrote:
| My assignments (different book) for Intro to AI class
| were:
|
| Boolean algebra simplifier. Given a LISP expression - for
| example (AND A (OR C D)) write a function to return the
| variables needed to make the entire expression TRUE.
| Return NIL if the expression is a paradox such as (AND A
| (NOT A)). The expressions that we were to resolve had on
| the order of 100-200 operators and were deeply nested. I
| recall that I wrote a function as part of it that I
| called HAMLET-P that identified terms of the form (OR 2B
| (NOT 2B)) and rapidly simplified them to TRUE.
|
| Not-brute-force job scheduler. The job-shop scheduling
| problem ( https://en.wikipedia.org/wiki/Job-
| shop_scheduling ) with in order processing of multiple
| tasks that had dependencies. Any worker could do any task
| but could only do one task at a time.
|
| The third one I don't remember what it was. I know it was
| there since the class had four assignments... (digging...
| must have been something with Prolog)
|
| The last assignment was written in any language (I did it
| in C++ having had enough of LISP and I had a good model
| for how to do it in my head in C++). A 19,19,5 game (
| https://en.wikipedia.org/wiki/M,n,k-game ). Similar to
| go-maku or pente. This didn't have any constraints that
| go-maku has or captures that pente has. It was to use a
| two ply min-max tree with alpha beta pruning. It would
| beat me 7 out of 10 times. I could get a draw 2 out of 10
| and win 1 out of 10. For fun I also learned ncurses and
| made it so that I could play the game with the arrow keys
| rather than as '10,9... oh crap, I meant 9,10'.
|
| And I still consider all of those problems and homework
| assignments as "AI".
|
| From the digging, I found a later year of the class that
| I took. They added a bit of neural nets in it, but other
| topics were still there.
|
| By way of https://web.archive.org/web/19970214064228/http
| ://www.cs.wis... to the professors's home page and
| classes taught - https://web.archive.org/web/199702242211
| 07/http://www.cs.wis...
|
| Professor Dryer taught a different section https://web.ar
| chive.org/web/19970508190550/http://www.cs.wis...
|
| The domain of the AI research group at that time: https:/
| /web.archive.org/web/19970508113626/http://www.cs.wis...
| CyberDildonics wrote:
| Browser Extensions Are a Security Nightmare - I guess you can add
| AI in front to make it seem new.
| mahogany wrote:
| Exactly - it blows my mind how normalized the permission
| _Access your data for all websites_ is (I think it 's _Read and
| Change all your data on all websites for Chrome_ ). I use only
| one or two extensions because of this. Why does a
| procrastination tool need such an insanely broad permission?
| waboremo wrote:
| If it operates on more than one domain, it needs those
| permissions to function based on how the permissions system
| works. You can limit those yourself in the settings page for
| the extension, but everything else is basically workarounds
| applied to avoid that permission.
|
| For example, a web clipper operates on multiple domains, but
| it can avoid it by using activetab permission instead and
| then offering optional permissions if it wants when you click
| on the clipper extension icon.
|
| If you want something to be done automatically on multiple
| domains, this is not possible without that permission. Not
| unless you want to annoy users with prompts.
| hospitalJail wrote:
| Just because an extension can do that, doesnt mean they are
| sending your info to a server.
| mahogany wrote:
| No, but (1) you are trusting the extension to not do that,
| and (2) even if you vet the extension now, it could change
| in the future. Or am I mistaken? My understanding is that
| by default, extensions update automatically. If you accept
| these permissions initially, then you implicitly accept
| them for any future update. The alternative is keeping
| track of and updating every extension manually, re-vetting
| each one every time.
| hoosieree wrote:
| I wrote a Chrome extension[1] that reads no data but places a
| colored translucent div over the page. It requires that same
| "change all your data" permission.
|
| My takeaway lesson is that the permissions model for
| extensions is confusing and nearly useless.
|
| [1] https://chrome.google.com/webstore/detail/obscura/nhlkgni
| lpm...
| youreincorrect wrote:
| Do you suppose it's possible that accessing the DOM to add
| a div implicitly requires access to page data?
| hoosieree wrote:
| I can see how many applications might want to read the
| page, but in my case it's not necessary. My extension
| tries to add a <div> under the <body> element, regardless
| of what's going on in the page. If there's no <body>, my
| extension stops working but the browser keeps going.
|
| In short, if there were separate "read" and "write"
| permissions, I would only need "write". For privacy-
| concerned people, that's a very important distinction.
| jabradoodle wrote:
| It would be more complex than that given you can write
| arbitrary JavaScript that can read anything it likes and
| send it anywhere.
| thfuran wrote:
| How would you allow changing page contents with a narrow
| permission?
| gnicholas wrote:
| I also have a Chrome extension that needs access to page
| content on all pages, for the purpose of making text
| easier to read.
|
| I could see distinguishing between extensions that in any
| way exfiltrate data from the pages you view, versus
| extensions that process the DOM and do something locally,
| but never send the data anywhere.
|
| This requires a bit closer vetting than Google currently
| does, I think. To demonstrate that all processing happens
| locally, we encourage our users to load various websites
| with our extension toggled off, then go into airplane
| mode, and then turn our extension on. This doesn't
| strictly guarantee that we're not separately exfiltrating
| data (we aren't), but it does prove that our core process
| happens locally.
| sebzim4500 wrote:
| There are hundreds of thousands of extensions, and none
| of them make Google any money. Hard to see how they could
| justify any serious manual review.
| gnicholas wrote:
| Yeah, it could make sense for them to structure their
| extension framework so that developers could work with
| website data in a sandbox, if their use case allows for
| it. That would enable developers who don't need to send
| data to a server for processing to prove that the data
| never leaves the user's machine.
| [deleted]
| croes wrote:
| Exactly.
|
| But I think at the moment it's easier to get someone to install
| an extension as long it mentions GPT or AI.
___________________________________________________________________
(page generated 2023-06-08 23:00 UTC)