General Principles for the Use of AI at CERN
Human oversight: The use of AI must always remain under human control. Its functioning and outputs must be consistently and critically assessed and validated by a human.
I think the parent comment is right. It's just a platitude for administrators to cover their backs, and it doesn't hold up against actual use cases.
When you put something on autopilot, you also massively accelerate your process of becoming complacent about it -- which is normal; it is the process of building trust.
When that trust is earned but not deserved, problems develop. Often the system affected by complacency drifts, and nobody is looking closely enough to notice the problems until they become proto-disasters. When the human is finally put back in control, it may be to discover that the system is approaching catastrophe too rapidly for a human to catch up on the situation and intercede appropriately. It is for this reason that many aircraft accidents occur in the seconds and minutes following an autopilot cutoff. Similarly, every Tesla that ever slammed into the back of an ambulance at the side of the road was a) driven by an AI that b) the driver had learned to trust, and c) the driver, though theoretically responsible, had become complacent.
"if the program underperforms compared to humans and starts making a large amount of errors, the human who set up the pipeline will be held accountable"
Like.. compared to one human? Or an army of a thousand humans tracking animals? There is no nuance at all. It's just unreasonable to make a blanket statement that humans always have to be accountable.
"If the program is responsible for a critical task .."
See how your statement has some nuance, and recognizes that some situations require more accountability and validation than others?
If some dogs chew up an important component, the CERN dog-catcher won't avoid responsibility just by saying "Well, the computer said there weren't any dogs inside the fence, so I believed it."
Instead, they should be taking proactive steps: testing and evaluating the AI, adding manual patrols, etc.
They endorse limited trust, not exactly a foreign concept to anyone who's taken a closer look at an older loaf of bread before cutting a slice to eat.
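The "proactive steps" point above has a very small concrete form. Here is a minimal sketch (hypothetical names and data shape, assuming the detector emits records with a `label` field and that a human reviewer can be asked for a second opinion): randomly route a fraction of the AI's outputs to a human and track the disagreement rate, so oversight becomes an ongoing measurement rather than blind trust.

```python
import random

def spot_check(detections, human_review, sample_rate=0.1):
    """Send a random fraction of AI detections to a human reviewer and
    return the observed disagreement rate (hypothetical helper)."""
    sampled = [d for d in detections if random.random() < sample_rate]
    if not sampled:
        return 0.0
    disagreements = sum(1 for d in sampled if human_review(d) != d["label"])
    return disagreements / len(sampled)

# If the disagreement rate creeps up over time, trust in the model should go
# down, and manual patrols (or retraining) should go up.
```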
In other words: because the huge DOGE clusterfuck demonstrated what horrible practices people will actually enact, you need to put this into the principles.
And with testing and other services, I guess human oversight can be reduced to _looking at the dials_ for the green and red lights?
I work in a corporate setting that has been working on a "strategy rebrand" for over a year now, and despite numerous meetings, endless PowerPoint, and god knows how much money to consultants, I still have no idea what any of this has to do with my work.
CERN was founded after WW2 in Europe, and like all major European institutions founded at the time, it was meant to be a peaceful institution.
Fixed that for you. That's been the case since we discovered sticks and stones, but it doesn't mean that CERN is lying when they say they want to focus on non-military areas.
Let's not assume the worst of an institution that's been fairly good for the world so far.
You didn't fix anything.
> Let's not assume the worst of an institution that's been fairly good for the world so far.
I'm not assuming the worst. I'm just being realistic, and I think it would be nice if CERN explicitly acknowledged the fact that what they do there could have serious implications for weapons technology.
You're really grasping at straws here. CERN doesn't need to do anything. Nor do universities, for example.
I'm fine with CERN, its scientific mission and whatever they come up with there and have contributed to their cause in a minor way so I can do without the lecturing.
If you do research it is easy to stick your head in the ground and pretend that as an academic you have no responsibility for the outcome. But that's roughly analogous to a gun manufacturer pushing the 'guns don't kill people, people do' angle. CERN has a number of projects on the go whose only possible outcome will be more powerful or more compact weapons.
For instance, anti-matter research. If and when we manage to create anti-matter in larger quantities, and to do so more easily, it will have a potentially massive impact on the kind of threats societies have to deal with. To pretend that this is just abstract research is willfully abdicating responsibility.
Once it can be done it will be done, and once it is done it is a matter of time before it is used. Knowledge, once gained, cannot be unlearned. See also: the atomic bomb. Now, CERN isn't the only facility where such research takes place, and I'm well aware of the geopolitical impact of being 'late' when it comes to such research. I would just like them to be upfront about it. There is a reason why most particle accelerators and associated goodies are funded by the various departments of defense.
Your typical university research lab is not doing stuff with such impact, though the biology departments of some of them are investigating things that can easily be weaponized, and those should come with similar transparency about possible uses.
Classified projects obviously have stricter rules, such as air gaps, but sometimes the limits are a bit fuzzy, like a non-classified project that supports a classified project. And I may be wrong, but academics don't seem to be the type who are good at keeping secrets or at seeing the security implications of their actions. Which is a good thing in my book: science is about sharing, not keeping secrets! So no AI for military projects could be a step in that direction.
These are a good set of principles that any company (or individual) can follow to guide how they use AI.
‘CERN uses 1.3 terawatt hours of electricity annually. That’s enough power to fuel 300,000 homes for a year in the United Kingdom.’ [2]
I think AI is the least of their problems, seeing as they burn a lot of trees for the sake of largely impractical pure knowledge.
[1] https://home.web.cern.ch/news/official-news/knowledge-sharin... [2] https://home.cern/science/engineering/powering-cern
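For what it's worth, the quoted figures are easy to sanity-check. A quick back-of-envelope calculation (plain Python, numbers taken straight from the quote above; the per-home result is just what those two numbers imply, not an official UK statistic):

```python
# Implied annual electricity use per home from the quoted figures:
# 1.3 TWh per year spread across 300,000 homes.
cern_twh_per_year = 1.3
homes = 300_000

kwh_per_year = cern_twh_per_year * 1e9   # 1 TWh = 1e9 kWh
kwh_per_home = kwh_per_year / homes      # implied annual use per home

print(f"{kwh_per_home:,.0f} kWh per home per year")  # ~4,333 kWh
```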
Far less power than those projected gigawatt data centers that are surely the one thing keeping AI companies from breaking even.
Their ledgers are balanced just fine for a while.
Is the scientific merit of such a thing always immediately apparent?
Also, the web was invented at CERN.
How can we train and create AIs with diverse creative viewpoints? The flexibility and creativity of AIs, or lack thereof, should guide the proper principles for using AI.
In the long term I am at least certain that AI can emulate anything humans do en masse, where there is training data, but without unguided self-evolution, I don't see them solving truly novel problems. They still fail to write coherent code if you go a little out of the training distribution, in my experience, and that is a pretty easy domain, all things considered.
This is critical to understand if the mandate to use AI comes from the top: make sure to communicate from day 1 that you are using AI as mandated and that it is not increasing productivity as mandated. Play dumb and protect yourself from "if it's not working out then you are using it wrong" attacks.