[HN Gopher] General principles for the use of AI at CERN
___________________________________________________________________
General principles for the use of AI at CERN
Author : singiamtel
Score : 85 points
Date : 2025-11-24 10:37 UTC (12 hours ago)
(HTM) web link (home.web.cern.ch)
(TXT) w3m dump (home.web.cern.ch)
| singiamtel wrote:
| I found this principle particularly interesting:
| Human oversight: The use of AI must always remain under human
| control. Its functioning and outputs must be consistently and
| critically assessed and validated by a human.
| conartist6 wrote:
| It's still just a platitude. Being somewhat critical is still
| giving some implicit trust. If you didn't give it any trust at
| all, you wouldn't use it at all! So my read is that they
| endorse trusting it, exactly the opposite of what they appear
| to say!
|
| It's funny how many official policies leave me thinking that
| it's a corporate cover-your-ass policy, and that if they really
| meant it they would have found a much stronger and plainer way
| to say it.
| miningape wrote:
| I think you're reading what you want to read into that - but
| that's the problem: it's too ambiguous to be useful.
| hgomersall wrote:
| That doesn't follow. Say you write a proof for something I
| request; I can then check that proof. That doesn't mean I
| don't derive any value from being given the proof. A lack of
| trust does not imply no use.
| MaybiusStrip wrote:
| "You can use AI but you are responsible for and must validate
| its output" is a completely reasonable and coherent policy.
| I'm sure they stated exactly what they intended to.
| geokon wrote:
| If you have a program that looks at CCTV footage and IDs
| animals that go by... is a human supposed to validate every
| single output? How about if it's thousands of hours of
| footage?
|
| I think the parent comment is right. It's just a platitude for
| administrators to cover their backs, and it doesn't hold up in
| actual use cases.
| pu_pe wrote:
| I don't see it so bleakly. Using your analogy, it would
| simply mean that if the program underperforms compared to
| humans and starts making a large number of errors, the
| human who set up the pipeline will be held accountable.
| If the program is responsible for a critical task (i.e. the
| animal will be shot depending on the classification), then
| yes, a human should validate every output or be held
| accountable in case of a mistake.
| mattkrause wrote:
| Exactly.
|
| If some dogs chew up an important component, the CERN
| dog-catcher won't avoid responsibility just by saying
| "Well, the computer said there weren't any dogs inside
| the fence, so I believed it."
|
| Instead, they should be taking proactive steps: testing
| and evaluating the AI, adding manual patrols, etc.
| conartist6 wrote:
| I take an interest in plane crashes and human factors in
| digital systems. We understand that there's a very human
| aspect of complacency that often shows up in reports
| of true disasters, well after that complacency has crept
| deep into an organization.
|
| When you put something on autopilot, you also massively
| accelerate your process of becoming complacent about it
| -- which is normal, it is the process of building trust.
|
| When that trust is earned but not deserved, problems
| develop. Often the system affected by complacency drifts.
| Nobody is looking closely enough to notice the problems
| until they become proto-disasters. When the human finally
| is put back in control, it may be to discover that the
| equilibrium of the system is approaching catastrophe too
| rapidly for humans to catch up on the situation and
| intercede appropriately. It is for this reason that many
| aircraft accidents occur in the seconds and minutes
| following an autopilot cutoff. Similarly, every Tesla
| that ever slammed into the back of an ambulance at the
| side of the road was a) driven by an AI, b) one the driver
| had learned to trust, and c) supervised by a driver who -
| though theoretically responsible - had become complacent.
| pu_pe wrote:
| Sure, but not every application has dramatic consequences
| such as plane or car crashes. I mean, we are talking
| about theoretical physics here.
| SiempreViernes wrote:
| > So they endorse trusting it is my read, exactly the
| opposite of what they appear to say!
|
| They endorse _limited_ trust, not exactly a foreign concept
| to anyone who's taken a closer look at an older loaf of
| bread before cutting a slice to eat.
| Sharlin wrote:
| Interesting in what sense? Isn't it just stating something
| plainly obvious?
| jacquesm wrote:
| It is, but unfortunately the fact that to you - and me - it
| is obvious does not mean it is obvious to everybody.
| Sharlin wrote:
| Quite. One would hope, though, that it would be clear to
| prestigious scientific research organizations in
| particular, just like everything else related to source
| criticism and proper academic conduct.
| SiempreViernes wrote:
| Did you forget the entire DOGE episode where every government
| worker in the US had to send a weekly email to an LLM to
| justify their existence?
| Sharlin wrote:
| I'd hold CERN to a slightly higher standard than DOGE when
| it comes to what's considered plainly obvious.
| SiempreViernes wrote:
| Sure, but the way you maintain this standard is by
| codifying rules that are distinct from the "lower"
| practices you find elsewhere.
|
| In other words, because the huge DOGE clusterfuck
| demonstrated what horrible practices people will actually
| enact, you need to put this into the principles.
| piokoch wrote:
| Oddly enough, nowadays CERN is very much like a big corpo:
| yes, they do science, but there is a huge overhead of
| corpo-like people who run CERN as an enterprise that should
| bring "income".
| elashri wrote:
| Can you elaborate on this, hopefully with details and
| sources, including the revenue stream that CERN is getting
| as a corporation?
| mk89 wrote:
| I want to see how obvious this becomes when you start to add
| agents left and right that make decisions automagically...
| Sharlin wrote:
| Right. It should be obvious that at an organization like
| CERN you're not supposed to start adding autonomous agents
| left and right.
| xtiansimon wrote:
| Where is "human oversight" in an automated workflow? I noticed
| the quote didn't say "inputs".
|
| And with testing and other services, I guess human oversight
| can be reduced to _looking at the dials_ for the green and red
| lights?
| SiempreViernes wrote:
| Someone's inputs are someone else's outputs; I don't think you
| have spotted an interesting gap. Certainly just looking at
| the dials will do for monitoring functioning, but falls well
| short of validating the system performance.
| monkeydust wrote:
| The really interesting thing is how that principle interplays
| with their pillars and goals, i.e. if the goal is to "optimize
| workflow and resource usage", then having a human in the loop
| at all points might limit or fully erode this ambition.
| Obviously it's not that black and white: certain tasks could
| be fully autonomous while others require human validation, and
| you could still be net positive - but this challenge is not
| exclusive to CERN, that's for sure.
| contrarian1234 wrote:
| Do they hold the CERN Roomba to the same standard? If it cleans
| the same section of carpet twice is someone going to have to do
| a review?
| conartist6 wrote:
| Feels like the useless kind of corporate policy, expressed in
| terms of the loftiest ideals instead of how to make real
| trade-offs with costs.
| jordanpg wrote:
| Organizations above a certain size absolutely cannot help but
| publish this stuff. It is the work of senior
| middle managers. Ark Fleet Ship B.
|
| I work in a corporate setting that has been working on a
| "strategy rebrand" for over a year now and despite numerous
| meeting, endless powerpoint, and god knows how much money to
| consultants, I still have no idea what any of this has to do
| with my work.
| alkonaut wrote:
| 99% of corporate policies exist so you can point to a document
| that says "well it's not my fault, I made the policy and
| someone didn't follow it".
| marginalia_nu wrote:
| You don't even need to go as far as saying someone didn't
| follow the policy; you can just say you need to review the
| policies. That way, conveniently enough, nobody is really
| ever at fault!
| SiempreViernes wrote:
| It is an _organisation-wide_ document of "General principles";
| how could it possibly have anything more specific to say about
| the inherently context-specific trade-offs of each particular
| use of AI?
| mariusor wrote:
| Well, CERN is not a corporation; it can afford not to optimize
| for "costs", whatever you mean by that in this context.
| oytis wrote:
| What's so special about military research or AI that the two
| can't be done together even though the organization is not in
| principle opposed to either?
| LudwigNagasena wrote:
| CERN is in principle opposed to military research. That and
| stuff like lawfulness, fairness, sustainability, privacy are
| just general CERN principles restated for fluff.
| oblio wrote:
| > CERN's convention states: "The Organization shall have no
| concern with work for military requirements and the results of
| its experimental and theoretical work shall be published or
| otherwise made generally available."
|
| CERN was founded after WW2 in Europe, and like all major
| European institutions founded at the time, it was meant to be a
| peaceful institution.
| oytis wrote:
| Sorry, looks like I misunderstood what "having no concern"
| means.
| danparsonson wrote:
| Yeah it's written as in, "we don't concern ourselves with
| that", i.e. it's none of their business
| jacquesm wrote:
| It's a bit of a fig leaf though; any high-energy physics has
| military implications.
| tempay wrote:
| What does the LHC physics program have to do with military
| applications?
| miningape wrote:
| You'd be surprised how creative the military can be when
| there's demand
| oskarkk wrote:
| Research on interactions between particles can probably
| be helpful for nuclear weapons R&D.
| fainpul wrote:
| Doesn't _all_ of physics have some military implications?
|
| But at least they make everything public knowledge, instead
| of keeping it secret and only selling it to one nation.
| oblio wrote:
| > any _physics_ has military implications.
|
| Fixed that for you. That's been the case since we
| discovered sticks and stones, but it doesn't mean that CERN
| is lying when they say they want to focus on non-military
| areas.
|
| Let's not assume the worst of an institution that's been
| fairly good for the world so far.
| jacquesm wrote:
| > Fixed that for you.
|
| You didn't fix anything.
|
| > Let's not assume the worst of an institution that's
| been fairly good for the world so far.
|
| I'm not assuming the worst. I'm just being realistic, and
| I think it would be nice if CERN explicitly acknowledged
| the fact that what they do there could have serious
| implications for weapons technology.
| oblio wrote:
| By that logic a tire manufacturer should do the same.
|
| You're really grasping at straws here. CERN doesn't need
| to do anything. Nor do universities, for example.
| SideburnsOfDoom wrote:
| Sure, though "have no concern with" comes across to me less
| like "we avoid building anything that could conceivably be
| used as a weapon by anyone", and more as "We're not in that
| business, but it's not our concern if you manage to stab
| yourself with it. It's not secret".
| GuB-42 wrote:
| One reason I can think of is with regard to confidentiality. A
| lot of AI services are controlled by companies in the US or
| China, and they may not want military research to leak to these
| countries.
|
| Classified projects obviously have stricter rules, such as
| airgaps, but sometimes the limits are a bit fuzzy, like a
| non-classified project that supports a classified project. And
| I may be wrong, but academics don't seem to be the type who
| are good at keeping secrets or who see the security
| implications of their actions. Which is a good thing in my
| book: science is about sharing, not keeping secrets! So "no AI
| for military projects" could be a step in that direction.
| Temporary_31337 wrote:
| blah, blah, people will simply use it as they see fit
| Schlagbohrer wrote:
| It's about as detailed and helpful as saying, "Don't be an
| asshole"
| elashri wrote:
| In such a scientific environment, there are gentlemen's
| agreements about many things that boil down to "Don't be an
| asshole" or "Be considerate of others", with some hard
| requirements here and there for things that are very serious.
| blitzar wrote:
| "Don't be an asshole" could solve world peace.
| eisbaw wrote:
| So general that it says nothing. Very corporate.
| DisjointedHunt wrote:
| This corporate crap makes me want to puke. It is a consequence of
| the forced bureaucracy from European regulations, particularly
| the EU AI Act, which is not well thought out and actively adds
| liability and risk to anyone on the continent touching AI,
| including old-school methods such as bank credit scoring
| systems.
| fsh wrote:
| CERN is neither corporate, nor in the EU.
| DisjointedHunt wrote:
| The content is corporate. The EU AI Act reaches beyond the EU's
| borders: you don't have to be in the EU to adopt this very set
| of "AI Principles", but if you don't, you carry liability.
| GranularRecipe wrote:
| What I find interesting is the implicit prioritisation:
| explainability, (human) accountability, lawfulness, fairness,
| safety, sustainability, data privacy and non-military use.
| peepee1982 wrote:
| Might be implicit prioritization, but I don't think it's
| prioritized by importance so much as by likelihood of being a
| problem.
| annjose wrote:
| I agree, though I would prefer to highlight the first half of
| the first item - transparency. Also, perhaps make Safety an
| independent principle rather than combining it with Security.
|
| These are a good set of principles that any company (or
| individual) can follow to guide how they use AI.
| macleginn wrote:
| 'Sustainability: The use of AI must be assessed with the goal of
| mitigating environmental and social risks and enhancing CERN's
| positive impact in relation to society and the environment.' [1]
|
| 'CERN uses 1.3 terawatt hours of electricity annually. That's
| enough power to fuel 300,000 homes for a year in the United
| Kingdom.' [2]
|
| I think AI is the least of their problems, seeing as they burn a
| lot of trees for the sake of largely impractical pure knowledge.
|
| [1] https://home.web.cern.ch/news/official-news/knowledge-
| sharin... [2] https://home.cern/science/engineering/powering-cern
| hengheng wrote:
| That is equivalent to a continuous draw of 150 MW. Not great,
| not terrible.
|
| Far less power than those projected gigawatt data centers that
| are surely the one thing keeping AI companies from breaking
| even.
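|
| (A quick sanity check of the figures above, as a minimal Python
| sketch; the per-home number is only what the quoted figures
| imply, not an independent source.)
|
|     # 1.3 TWh/year, from the CERN "powering-cern" page quoted above
|     annual_energy_wh = 1.3e12
|     hours_per_year = 365 * 24           # 8760 h
|     avg_power_mw = annual_energy_wh / hours_per_year / 1e6
|     print(round(avg_power_mw))          # ~148, i.e. roughly 150 MW continuous
|
|     # Implied consumption per home from the "300,000 UK homes" comparison
|     homes = 300_000
|     kwh_per_home = 1.3e9 / homes
|     print(round(kwh_per_home))          # ~4333 kWh per home per year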
| macleginn wrote:
| I presume that this policy is not about building data-centres
| but about the use of AI by CERN employees, so essentially
| about the marginal cost of generating an additional Python
| script, or something. Don't know if this calculation ever
| makes sense on the global scale, but if one's job is to
| literally spend energy to produce knowledge, it becomes even
| less straightforward.
| tempfile wrote:
| How did that turn into "not great, not terrible"? That's
| still 300,000 homes that could otherwise be powered. It's an
| enormous amount of electricity!
| ceejayoz wrote:
| And all we get out of CERN is... the entire modern economy.
|
| Their ledgers are balanced just fine for a while.
| Jean-Papoulos wrote:
| Humans have poured resources into the pursuit of _largely
| impractical pure knowledge_ for millennia. This has been said of
| an incredible number of human scientific endeavors before they
| found use in other domains.
|
| Also, the web was invented at CERN.
| piokoch wrote:
| All this impractical knowledge people accumulated over
| centuries gave you cars, planes, computers, air conditioning,
| antibiotics, iPhones and, in fact, everything humankind has
| gained since it left the trees. So I would rather burn these
| 1.3 terawatt-hours on this than on, say, running Facebook or
| bitcoin mining.
| hexo wrote:
| From that picture it looks like they want to do everything with
| AI. This is very sad.
| dude250711 wrote:
| _> Responsibility and accountability: The use of AI, including
| its impact and resulting outputs throughout its lifecycle, must
| not displace ultimate human responsibility and accountability._
|
| This is critical to understand if the mandate to use AI comes
| from the top: make sure to communicate from day 1 that you are
| using AI as mandated, not that you are increasing productivity
| as mandated. Play it dumb; protect yourself from "if it's not
| working out then you are using it wrong" attacks.
| mark_l_watson wrote:
| Good guidelines. My primary principle for using AI is that it
| should be used as a tool under my control to make me better by
| making it easier to learn new things and offering alternative
| viewpoints. Sadly, AI training seems headed towards producing
| 'averaged behaviors', while in my career the best I had to offer
| employers was an ability to think outside the box and bring
| different perspectives.
|
| How can we train and create AIs with diverse creative viewpoints?
| The flexibility and creativity of AIs, or the lack of it, should
| guide the proper principles for using AI.
| nathan_compton wrote:
| I'm not optimistic about this in the short term. Creative and
| diverse viewpoints seem to come from diverse life experiences,
| which AI does not have and, if they are present in the training
| data, are mostly washed out. Statistical models are like that.
| The objective function is to predict close to the average
| output, after all.
|
| In the long term I am at least certain that AI can emulate
| anything humans do en masse, where there is training data, but
| without unguided self-evolution, I don't see them solving truly
| novel problems. They still fail to write coherent code if you
| go a little out of the training distribution, in my experience,
| and that is a pretty easy domain, all things considered.
| bryanlarsen wrote:
| The vast majority of advances seem to be of the form "do X
| for Y", where neither X nor Y is novel but the combination
| is. I have no idea whether AI is going to be better than humans
| at this, but it seems like it could be.
___________________________________________________________________
(page generated 2025-11-24 23:01 UTC)