[HN Gopher] Teaching National Security Policy with AI
___________________________________________________________________
Teaching National Security Policy with AI
Author : enescakir
Score : 40 points
Date : 2025-06-10 13:54 UTC (9 hours ago)
(HTM) web link (steveblank.com)
(TXT) w3m dump (steveblank.com)
| troelsSteegin wrote:
| What's missing from this is the "before and after" - how this
| quarter's class experience was different from previous quarters
| without the AI tool emphasis.
| mapt wrote:
| The very first thing you have to learn about original research
| is the basis of the experimental scientific method, of the idea
| of empiricism and improvement through reason, observation, and
| iterative comparative testing. It is a little bit shocking when
| you encounter the broad swath of the population that has not
| internalized this.
| suddenlybananas wrote:
| >Policy students have to read reams of documents weekly. Our
| hypothesis was that our student teams could use AI to ingest and
| summarize content, identify key themes and concepts across the
| content, provide an in-depth analysis of critical content
| sections, and then synthesize and structure their key insights
| and apply their key insights to solve their specific policy
| problem.
|
| Yeah who cares about actually reading and properly understanding
| anything at all. Given the policy world is filled with so much
| BS, no wonder they like a BS machine.
| alephnerd wrote:
| Enhanced information retrieval is a good tool to have - at some
| point close reading does become difficult to scale out.
|
| Building experience with using tools to automate expected
| drudgery, like making PPT slides or wordsmithing an NSC memo,
| is a good skill to build.
|
| There is a lot of low-hanging fruit in professional tooling
| that can and should be automated where possible. A class
| similar to the "Missing Semester" at MIT, except oriented
| towards productivity tools, would be helpful.
| FuriouslyAdrift wrote:
| Synthesis and summarization are literally the main job of an
| analyst. Frequently the real information is hidden in the
| tone, tenor, and syntax, not necessarily in the broader
| content (aka reading between the lines).
| neilv wrote:
| Reading between the lines is also a skill in general human
| communication.
|
| Which is why, when someone sends me an AI-generated message
| that previously would've been written by them, it's like
| they're jamming one of my skills.
|
| Not only are they not giving me information I would have had
| before (e.g., that the person thought of this aspect to
| mention, that they expressed it this way, that they
| invested this effort into this message to me, etc.), but,
| (if I don't know it's AI-generated) the message is giving
| me _wrong_ information about all those things I read into
| it.
|
| (I'm reasonably OK at reading between the lines, for
| someone with only basic schooling. Though sometimes I'm
| reminded that some of my humanities major friends are
| obviously much better at interpreting and expressing. Maybe
| they're going to be taking the AI-slop apocalypse even
| worse than I do.)
| vouaobrasil wrote:
| I don't think it is a good tool to have when the students are
| random variables, because enhanced information retrieval will
| increase the proportion of the lazy in the class.
| einpoklum wrote:
| National Security and AI -
|
| Two domains that are rife with hype and with self-serving,
| self-nominated experts, and that are both put to use for
| manipulating the public for questionable purposes.
| sidewndr46 wrote:
| A most perfect union?
| OWaz wrote:
| I find it perplexing how people are so open to just dumping
| personal effort onto these tools and believing the tools work
| accurately.
| sureokbutyeah wrote:
| Work accurately? Relative to what? Old humans? Make up
| something about psychology? Physics? Economics? History?
| Academics have been doing that for years and we all blindly
| agreed their work was accurate, lauded them, then found out
| decades later it was garbage.
|
| Seems typical for humans; centuries of false belief that
| religion was accurate, and now it's contemporary nation-state
| politics, economics, and the engineered things they sell for
| profit.
|
| So long as enough stuff is available on shelves to keep people
| sedate, they'll believe whatever. Our biology couples us to
| knowing when we need food, water; keep those normal and no one
| cares about anything else. Riots only occur when biology is
| threatened. Everything else about humanity is 100% made-up
| false belief, appeals to empty trust in what we say.
|
| Physics makes it pretty clear it's all just skin suits pulling
| illusions out of their asses all the way down. We can never
| change the immutable forces of physics; there's too much other
| stuff in the universe rushing in to correct. This is it for
| humans: idling about on Earth, hallucinating.
| sarchertech wrote:
| So if you're reading a summary of a bullshit document written
| by an old human, created by a machine trained on billions of
| bullshit documents written by old humans, what do you get out
| of that?
| jay_kyburz wrote:
| I can agree on Psychology, Economics, and History, but most
| of Physics is reproducible science.
|
| I think, now more than ever, we need to clearly distinguish
| reproducible science from untested hypotheses. Reality vs
| Opinion.
|
| update: opinion is not quite the right word here. Perhaps
| somebody else can think of a better word.
| cptroot wrote:
| This article says "the students did X" without providing any
| metrics to compare the results against. It's frustrating to get
| article after article saying "AI is great and speeds up
| learning" without actually evaluating that learning process.
| bjelkeman-again wrote:
| It feels like the tools are used as a shortcut to avoid reading
| the documents, and then to have the tools produce output from
| that shortcut. What did they actually learn that they will
| retain afterwards?
| radioactivist wrote:
| At one point this states:
|
| > Claude was also able to create a list of leaders with the
| Department of Energy Title 17 credit programs, Exim DFC, and other
| federal credit programs that the team should interview. In
| addition, it created a list of leaders within the Congressional
| Budget Office and the Office of Management and Budget that would
| be able to provide insights. See the demo here:
|
| and then there is a video of them "doing" this. But the video
| basically has Claude just responding with "I'm sorry, I can't do
| that, please look at their website", etc.
|
| Am I missing something here?
| radioactivist wrote:
| It happens again with the next video. The article says:
|
| > The team came up with a use case the teaching team hadn't
| thought of - using AI to critique the team's own hypotheses.
| The AI not only gave them criticism but supported it with links
| from published scholars. See the demo here:
|
| But the video just shows Claude giving some criticism and then
| telling them to go look at some journals and talk to experts (it
| doesn't give any references or specifics).
| kenjackson wrote:
| That was really weird. I did try this with ChatGPT 4o and it
| seemed to do a good job of creating this list. But I don't know
| anything about this field, so I don't know how accurate it is.
| bgwalter wrote:
| Just read Mearsheimer and the think tank policy papers if you
| want to _know_ what is actually going on. Go to Stanford's
| Hoover Institution if you want to _sell_ what is actually going
| on to the American public.
|
| Why would LLMs help, unless trained on classified information for
| which you could also use an internal search engine? In the end it
| comes down to how much military, economic and propaganda power
| you have and how much you are willing to deploy it.
|
| The whole interaction with LLMs, which focuses on clicking and
| wrestling with a stupid and recalcitrant dialogue partner,
| distracts from thinking. Better to read the original information
| yourself and take a long walk to organize it in your own mind.
| formerphotoj wrote:
| And after the walk, talk with an intelligent human dialog
| partner to exchange ideas and concepts that illuminate the
| schemas. Heck, walk and talk together! :)
| psunavy03 wrote:
| Someone apparently is taking the old war college joke about "it's
| only a lot of reading if you do it" a little too seriously . . .
___________________________________________________________________
(page generated 2025-06-10 23:00 UTC)