[HN Gopher] The Slow Collapse of Critical Thinking in OSINT Due ...
       ___________________________________________________________________
        
       The Slow Collapse of Critical Thinking in OSINT Due to AI
        
       Author : walterbell
       Score  : 34 points
       Date   : 2025-04-03 18:21 UTC (4 hours ago)
        
 (HTM) web link (www.dutchosintguy.com)
 (TXT) w3m dump (www.dutchosintguy.com)
        
       | jruohonen wrote:
       | """
       | 
       | * Instead of forming hypotheses, users asked the AI for ideas.
       | 
       | * Instead of validating sources, they assumed the AI had already
       | done so.
       | 
       | * Instead of assessing multiple perspectives, they integrated and
       | edited the AI's summary and moved on.
       | 
       | This isn't hypothetical. This is happening now, in real-world
       | workflows.
       | 
       | """
       | 
       | Amen, and OSINT is hardly unique in this respect.
       | 
       | And implicitly related, philosophically:
       | 
       | https://news.ycombinator.com/item?id=43561654
        
         | cmiles74 wrote:
         | Anyone using these tools would do well to take this article to
         | heart.
        
       | FrankWilhoit wrote:
       | A crutch is one thing. A crutch made of rotten wood is another.
        
         | add-sub-mul-div wrote:
         | Also, a crutch for doing long division is not the same as a
         | crutch for general thinking and creativity.
        
       | nonrandomstring wrote:
       | > This isn't a rant against AI. I use it daily
       | 
       | It is, but it adds a disingenuous apologetic.
       | 
       | Not wishing to pick on this particular author, or even this
       | particular topic, but it follows a clear pattern that you can
       | find everywhere in tech journalism:
       | 
       |     Some really bad thing X is happening. Everyone knows X is
       |     happening. There is evidence X is happening. But I am *not*
       |     arguing against X because that would brand me a
       |     Luddite/outsider/naysayer... and we all know a LOT of money
       |     and influence (including my own salary) rests on nobody
       |     talking about X.
       | 
       | Practically every article on the negative effects of smartphones
       | or social media printed in the past 20 years starts with the same
       | chirpy disavowal of the author's actual message. Something like:
       | 
       | "Smartphones and social media are an essential part of modern
       | life today... but"
       | 
       | That always sounds like those people who say "I'm not a racist,
       | but..."
       | 
       | Sure, we get it, there's a lot of money and powerful people
       | riding on "AI". Why water down your message of genuine concern?
        
       | palmotea wrote:
       | One way to achieve superhuman intelligence in AI is to make
       | humans dumber.
        
         | imoverclocked wrote:
         | That's only if our stated goal is to make superhuman AI and we
         | use AI at every level to help drive that goal. Point received.
        
       | treyfitty wrote:
       | Well, if I want to first understand the basics, such as "what do
       | the letters OSINT mean," I'd think the homepage
       | (https://osintframework.com/) would tell me. But alas, it does
       | not, and a simple ChatGPT query would have told me the answer
       | without the wasted effort.
        
         | walterbell wrote:
         | GPU-free URL: https://en.wikipedia.org/wiki/OSINT
         | 
         | Offline version: https://www.kiwix.org
        
         | OgsyedIE wrote:
         | Similar criticisms that outsiders need to do their own research
         | to acquire a foundational understanding before starting on a
         | topic could be made about other popular topics on HN that
         | frequently use abbreviations, such as TLS, BSDs, URL and MCP,
         | but somehow those get a pass.
         | 
         | Is it unfair to make such demands for the inclusion of
         | 101-level stuff in non-programming content, or is it unfair to
         | give IT topics a pass? Which approach fosters a community of
         | winners and which one does the opposite? I'm confident that you
         | can work it out.
        
       | AIorNot wrote:
       | This is another silly rant against AI tools - one that doesn't
       | offer useful or insightful suggestions on how to adapt, or
       | provide an informed study of the areas of concern, and one that
       | capitalizes on the natural worries we have on HN, given our
       | generic fears about losing critical thinking when AI takes over
       | our jobs. It's rather like concerns about the web in the pre-
       | internet age, or about SEO in the digital-marketing age.
       | 
       | OSINT only exists because of internet capabilities and Google
       | search - i.e. someone had to learn how to use those new tools
       | just a few years ago and apply critical thinking.
       | 
       | AI tools and models are rapidly evolving, with more in-depth
       | capabilities appearing in the models. All this means the tools
       | are hardly set in stone, and the workflows will evolve with
       | them. It's still up to human oversight to evolve with the tools,
       | and the skill of humans overseeing AI is something that will
       | develop too.
        
         | card_zero wrote:
         | The article is all about that oversight. It ends with a ten-
         | point checklist with items such as "Did I treat GenAI as a
         | thought partner--not a source of truth?".
        
         | cmiles74 wrote:
         | So weak! No matter how good a model gets, it will always
         | present information with confidence regardless of whether or
         | not it's correct. Anyone who has spent five minutes with the
         | tools knows this.
        
         | salgernon wrote:
         | OSINT (not a term I was particularly familiar with, personally)
         | actually goes back quite a ways[1]. Software certainly makes it
         | easier to aggregate the information and find the signal in the
         | noise, but bad security practices do far more to make that
         | information accessible.
         | 
         | [1]
         | https://www.tandfonline.com/doi/full/10.1080/16161262.2023.2...
        
       | BariumBlue wrote:
       | Good point in the post about confidence - most people equate
       | confidence with accuracy - and since AIs always sound confident,
       | they always sound correct.
        
       ___________________________________________________________________
       (page generated 2025-04-03 23:00 UTC)