https://muldoon.cloud/2025/05/22/alignment.html
The Other Mickey Wiki
Problems in AI alignment: A scale model
May 22, 2025
After trying too hard for too long to make sense of what bothers me
about the AI alignment conversation, I have settled, in true
Millennial fashion, on a meme:
[Image: ai-alignment-scale]
Explanation:
The Wikipedia article on AI Alignment defines it as follows:
In the field of artificial intelligence (AI), alignment aims to
steer AI systems toward a person's or group's intended goals,
preferences, or ethical principles.
One could observe: we would also like to steer the development of
other things, like automobile transportation, or social media, or
pharmaceuticals, or school curricula, "toward a person's or group's
intended goals, preferences, or ethical principles."
Why isn't there a "pharmaceutical alignment" or a "school curriculum
alignment" Wikipedia page?
I think the answer is that "AI Alignment" has an implicit technical
bent to it. If you go on the AI Alignment Forum, for example, you'll
find more math than Confucius or Foucault.
On the other hand, nobody would view "pharmaceutical alignment" (if
it were formulated as "[steering] pharmaceutical systems toward a
person's or group's intended goals, preferences, or ethical
principles") as primarily a problem for math or science.
While there are always things that pharmaceutical developers can do
inside the lab to at least try to promote ethical principles - for
example, perhaps, minimizing preventable hazards even when not
forced to - we also accept that ethical work is done in large part
outside of the lab: in purchasing decisions, in the way that
pharmaceutical marketplaces operate, in the vast mess of the
medical-industrial-government complex. It's a problem so diffuse that
it hardly makes sense to gather it all into one coherent encyclopedia
entry.
The process by which the rest of the world influences the direction
of an industry, by way of purchasing, analyzing, regulating,
discussing, etc., is Selection. This comes from the terminology of
evolution - in this framing, dinosaurs didn't just decide to start
growing wings and flying; Nature selected birds to fill the new
ecological niches of the Jurassic period.
While Nature can't do its selection on ethical grounds, we can, and
do, when we select what kinds of companies and rules and power
centers are filling which niches in our world. It's a decentralized
operation (like evolution), not controlled by any single entity, but
consisting of the "sum total of the wills of the masses," as Tolstoy
put it.
The technical AI alignment problems (represented by the planets in
the meme) are surely important, but what happens outside of the lab
is just much bigger: AI is a small portion of the world economy, and
yet it touches almost all of us. The way we select how AI touches us
is, I want to suggest, the Big Question of AI Alignment. If you care
about AI Alignment in general, it is folly to ignore it.
In their defense, a Selection-denier could argue that there is no
progress to be made in directing the "sum total of the wills of the
masses"
towards the "group's intended goals, preferences, or ethical
principles." But that would amount to rejecting the Categorical
Imperative, and all the fun (and often very mathy) problems in game
theory, and giving up on humanity, and only losers do that.
One sociotechnical protocol that can be applied to improving
Selection efficiency is here:
https://muldoon.cloud/2025/03/08/civic-organizing.html. But there
are many others. This is the big work of AI Alignment. The meme said
so.