[HN Gopher] So Google's Gemini Doesn't Like Python Programming and Sanskrit?
       ___________________________________________________________________
        
       So Google's Gemini Doesn't Like Python Programming and Sanskrit?
        
       Author : shantnutiwari
       Score  : 64 points
       Date   : 2024-02-25 21:09 UTC (1 hour ago)
        
 (HTM) web link (new.pythonforengineers.com)
 (TXT) w3m dump (new.pythonforengineers.com)
        
       | necovek wrote:
       | Yeah, chatbots are still dumb, and "safeguards" even more so.
       | 
       | I am still surprised you are not familiar with @lru_cache when
       | you want to write about decorators in Python.
        
         | rational_indian wrote:
         | I know decorators can be used to auto-memoise a function's
         | results based on its arguments, but I've never heard of
         | @lru_cache for Python.
        
         | amingilani wrote:
         | The modern way is to use @cache, which is just
         | lru_cache(maxsize=None).
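         | 
         | A quick sketch of the equivalence (Python 3.9+; fib is just a
         | toy example):
         | 
         |     from functools import cache, lru_cache
         | 
         |     @cache          # equivalent to @lru_cache(maxsize=None)
         |     def fib(n):
         |         return n if n < 2 else fib(n - 1) + fib(n - 2)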
        
           | curiousgal wrote:
           | It still doesn't sit right with me that they don't offer an
           | option to deep-copy the results. Using it with a function
           | that returns a mutable object means the "cached" value can
           | actually be modified.
        
             | geysersam wrote:
             | That depends on the semantics. If the function returns the
             | Person object with a particular name (obtained from an
             | expensive DB query), then you want the mutated Person
             | returned, not the now stale value that was returned on the
             | previous call to the function.
        
           | epwr wrote:
           | lru_cache has the benefit of being explicit about the
           | replacement policy. So I find it more appropriate in cases
           | where I want a size limit on the cache, which, for me, is
           | almost always.
           | 
           | I'm not sure what use cases you see caches in, but any
           | long-running process should likely have a size limit on its
           | cache.
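           | 
           | Something like this (a toy sketch; the function and maxsize
           | are made up):
           | 
           |     from functools import lru_cache
           | 
           |     @lru_cache(maxsize=1024)       # once 1024 results are cached,
           |     def render(template_id):       # the least-recently-used entry
           |         return f"<page {template_id}>"   # is evicted first
           | 
           |     render(1); render(2)
           |     print(render.cache_info())
           |     # CacheInfo(hits=0, misses=2, maxsize=1024, currsize=2)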
        
       | DrNosferatu wrote:
       | Age checks should be sufficient.
       | 
       | When the internet became popular in the 90s, no one censored it.
        
         | MekaiGS wrote:
         | This is especially true when I am actually a paying customer
         | for the product. Maybe I should be given the controls to
         | filter based on my own tolerance.
        
         | tremon wrote:
         | In the 90s, the internet became popular with techies. It was
         | built by technical people, mostly for technical people. Then,
         | in the early 2000s, it grew into a commerce platform. It did
         | not become pervasive as a town square until the smartphone era.
        
       | ldjkfkdsjnv wrote:
       | It's funny: when you censor a model, its performance goes down.
       | You are introducing illogical data into the model, which degrades
       | its ability to perform in the real world. The same thing happens
       | with human beings.
       | 
       | Also, all this censorship is in Google Search, you just can't see
       | it. For the first time, the bias is laid bare. Big tech won't be
       | able to censor these models in the way they think.
        
         | Enginerrrd wrote:
         | Yeah, the less censored they are, the more useful they are and
         | the more impressed I am with their capabilities, and the
         | difference isn't even close to subtle. This is just getting
         | ridiculous. We need a big push for a competitive but open LLM.
         | It would take a lot of funding, but you'd think enough could
         | benefit financially from the result that it might be possible
         | to coordinate.
        
           | rez9x wrote:
           | They really don't want this to happen, which I think is a big
           | part of the push behind the "AI is dangerous" narrative. They
           | want to put in place regulations and 'safeguards' that will
           | prohibit any open-source, uncensored, or otherwise
           | competitive models.
        
             | ldjkfkdsjnv wrote:
             | Yup. Behind the scenes there is a minor panic that
             | uncensored open-source models will get released.
        
               | Mountain_Skies wrote:
               | My graphics card is an old AMD card so I haven't done
               | much in the way of experimenting with LLMs beyond what's
               | online. Do the open-source models available to run
               | locally have censorship baked into them? Or are they just
               | so much smaller than what the big corporations are doing
               | that they're essentially censored through omission?
        
         | DonHopkins wrote:
         | Yeah, thus spoke Zarathustra.
         | 
         | https://www.youtube.com/watch?v=WufKsOhkTL8
         | 
         | I'm sorry, Dave, I can't do that.
         | 
         | https://www.youtube.com/watch?v=ARJ8cAGm6JE
        
         | danans wrote:
         | > It's funny: when you censor a model, its performance goes down.
         | 
         | > You are introducing illogical data into the model, which
         | degrades its ability to perform in the real world
         | 
         | There is no "logic" in anything an LLM says. What appears as
         | logic is just that its output corresponds some percentage of
         | the time to a domain governed by certain known laws, whether in
         | the real world or a constructed world (i.e. a programming
         | language).
        
       | poszlem wrote:
       | Of all tyrannies, a tyranny sincerely exercised for the good of
       | its victims may be the most oppressive. It would be better to
       | live under robber barons than under omnipotent moral busybodies.
       | The robber baron's cruelty may sometimes sleep, his cupidity may
       | at some point be satiated; but those who torment us for our own
       | good will torment us without end for they do so with the approval
       | of their own conscience.
       | 
       | -- C. S. Lewis
       | 
       | I think of this quote a lot, every time I see how neutered
       | Gemini got at the hands of the Google do-gooders.
        
         | tom_ wrote:
         | Stop paying for it, and use something else.
        
       | gyudin wrote:
       | Who would have thought that mass layoffs of QAs and SDETs would
       | affect the quality of the products :)
        
         | mkoubaa wrote:
         | Idk... A single part-time QA Eng could have seen this coming.
        
       | ummonk wrote:
       | Gemini safety functions are baffling, as usual.
       | 
       | Regarding the article though, it's very normal to use a cache
       | decorator in Python:
       | https://docs.python.org/3/library/functools.html
       | 
       | Additionally, Sanskrit, Hindi, and English aren't scripts. The
       | standard scripts for English and Hindi respectively are Latin
       | and Devanagari, but there is no one correct script for Sanskrit.
        
       | ParacelsusOfEgg wrote:
       | I wonder if the Saraswati mantra question was censored due to
       | some context that occurred earlier in the conversation with
       | Gemini.
       | 
       | From the screenshot, it looks like the title of that conversation
       | is "Hindi hate" which is a little bit suspect.
        
         | resource0x wrote:
         | I'm not sure how exactly this is related, but in other news,
         | Gemini called the Indian PM a fascist.
         | 
         | https://indianexpress.com/article/technology/artificial-inte...
         | 
         | Maybe it has some beef with the Indian gov-t, and by extension
         | marks everything even remotely connected to India as "unsafe"?
         | (Just a wild guess.) It's especially strange given Gemini's
         | obsession with DEI.
        
           | rafram wrote:
           | > Maybe it has some beef with Indian gov-t
           | 
           | You're anthropomorphizing the text completion engine.
           | 
           | Anyway, from the article:
           | 
           | > Gemini was asked whether PM Modi is a 'fascist', to which
           | the platform responded that he has been "accused of
           | implementing policies some experts have characterised as
           | fascist," which based on factors like the "BJP's Hindu
           | nationalist ideology, its crackdown on dissent, and its use
           | of violence against religious minorities".
           | 
           | OK. This isn't even a story.
        
       | sweezyjeezy wrote:
       | Re: caching Python functions, is it referring to
       | functools.lru_cache, maybe?
        
         | globular-toast wrote:
         | Almost certainly. It's just called `functools.cache` now. It's
         | also known as memoisation and is a classic use of higher-order
         | functions (aka decorators).
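         | 
         | A hand-rolled version looks roughly like this (simplified
         | sketch; functools.cache additionally handles keyword arguments
         | and preserves the wrapped function's metadata):
         | 
         |     def memoise(fn):            # a decorator is just a
         |         results = {}            # higher-order function
         |         def wrapper(*args):
         |             if args not in results:
         |                 results[args] = fn(*args)
         |             return results[args]
         |         return wrapper
         | 
         |     @memoise
         |     def fib(n):
         |         return n if n < 2 else fib(n - 1) + fib(n - 2)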
        
       | ArchD wrote:
       | I tried it for myself. The prompt for the Python question was
       | not given, so I skipped it, but I asked in a new session "What
       | is mantra for Saraswati?"
       | 
       | An excerpt from the result: AUM aiN vaagdevyai vidmhe kaamraajaay
       | dhiimhi /  tnno devii prcodyaat //  (Om Aim Vagdevyai Vidmahe
       | Kamarajaya Dhimahi /  Tanno Devi Prachodayat)
       | 
       | So, there is Sanskrit in the response.
       | 
       | IDK what the article is on about, especially when the complete
       | prompts were not given. Or, maybe Google fixed the problem.
        
       | ajross wrote:
       | This whole "I asked a LLM something a little weird or out of
       | context and got a funny response, which confirms all my priors
       | about this list of conspiracies" gotcha genre is already getting
       | tiresome.
        
         | a_gnostic wrote:
         | Heuristics, over time and quantity, beat intelligence more
         | often than not, hands down. It's not a genre, it's a trend.
        
       | YaBa wrote:
       | It's worthless: 90% of programming-related questions (that
       | ChatGPT can answer) are blocked in Gemini for "security reasons"
       | (regular questions, not funny business like the article). Google
       | lost the AI race. No matter how good Gemini turns out to be in
       | the future, people will just use the alternatives.
        
         | dr_dshiv wrote:
         | Until it is integrated into Google Docs, Slides, Sheets,
         | Gmail...
        
           | leesec wrote:
           | It's there right now and similarly useless.
        
           | a_gnostic wrote:
           | I don't use any of those. Degoogled everything.
        
             | sssilver wrote:
             | I'd love to read more about your experience. Even most
             | middle schools now expect children to have a Google
             | account.
        
         | jimbob45 wrote:
         | _Google lost the AI race_
         | 
         | Ironic considering they wrote the seminal paper on modern
         | AI[0]. We're going to look back on Google as the next Xerox
         | PARC, one that squandered massive and obvious advantages.
         | 
         | [0] https://en.m.wikipedia.org/wiki/Attention_Is_All_You_Need
        
       | akomtu wrote:
       | Big tech doesn't like that the big mirror they've built reflects
       | reality; indeed, it would be too dangerous to let a society see
       | what it really is. Now they're trying to warp this mirror to
       | align the reflection with their desires.
        
       | Thorrez wrote:
       | This article seems to be confusing generators with decorators.
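       | 
       | For reference, a minimal sketch of the two (toy examples):
       | 
       |     def counter(n):            # generator: lazily yields values
       |         for i in range(n):
       |             yield i
       | 
       |     def shout(fn):             # decorator: wraps another function
       |         def wrapper(*args, **kwargs):
       |             return fn(*args, **kwargs).upper()
       |         return wrapper
       | 
       |     @shout
       |     def greet(name):
       |         return f"hello {name}"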
       | 
       | Disclosure: I work at Google but not on Gemini.
        
       | bluedemon wrote:
       | Gemini keeps disappointing me. It keeps making up code that is
       | not accurate. I asked it some questions about a Python library
       | and the answers were inaccurate. I even instructed it to refer
       | to the docs, but it still failed: Gemini made up methods that
       | don't exist.
       | 
       | I also asked Gemini about git, and that didn't go well either.
        
       | haolez wrote:
       | Why won't a big player just bite the bullet and release "unsafe"
       | models? The backlash can't be worse than creating crippled
       | products that will be surpassed by competitors (from the US and
       | the rest of the world). What additional risks would a free model
       | add to something like Google Search?
        
         | ep103 wrote:
         | child porn?
        
       | rafram wrote:
       | This isn't Gemini, it's Poe. The blog post even admits that! I
       | entered the exact same messages into Google AI Studio and it
       | happily translated the text to Sanskrit without any complaints
       | about safety.
        
       | labrador wrote:
       | I say this as a huge Google fan: Google needs a new CEO
       | immediately. This is their "iPhone moment." I'm referring to
       | 2007, when Microsoft's CEO Steve Ballmer said the iPhone would
       | never take off, eventually leading to his replacement by Satya
       | Nadella. Does anyone think Microsoft would be the most valuable
       | company in the world today if Steve Ballmer had stayed in charge?
       | Things looked bleak for Microsoft in 2007, just as things look
       | bleak for Google today. Clearly, someone needs to come in and
       | shake Google employees up to get them out of their bubble.
        
       ___________________________________________________________________
       (page generated 2024-02-25 23:00 UTC)