[HN Gopher] The skill of the future is not 'AI', but 'Focus'
___________________________________________________________________
The skill of the future is not 'AI', but 'Focus'
Author : weird_trousers
Score : 150 points
Date : 2025-04-20 15:28 UTC (7 hours ago)
(HTM) web link (www.carette.xyz)
(TXT) w3m dump (www.carette.xyz)
| Ozzie_osman wrote:
    | > Search engines offer a good choice between Exploration (crawl
    | through the list and pages of results) and Exploitation (click on
    | the top result). LLMs, however, do not give this choice.
|
    | I've actually found that LLMs are great at exploration for me;
    | I'd argue even better than at exploitation. I've solved many a
| complex problem by using an LLM as a thought partner. I've
| refined many ideas by getting the LLM to brainstorm with me.
| There's this awesome feedback loop you can create with the LLM
| when you're in exploration mode that is impossible to replicate
| on your own, and still somewhat difficult even with a human
| thought partner.
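    | (For context: the Exploration/Exploitation tradeoff the article
    | borrows comes from the multi-armed-bandit literature. A minimal
    | epsilon-greedy sketch, with made-up value estimates, captures
    | the choice between the two modes:)

```python
import random

def epsilon_greedy(estimates, epsilon=0.1):
    # With probability epsilon, explore: pick a random arm.
    if random.random() < epsilon:
        return random.randrange(len(estimates))
    # Otherwise exploit: pick the arm with the highest estimated reward.
    return max(range(len(estimates)), key=lambda i: estimates[i])
```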
| tombert wrote:
| I'm kind of in the same boat.
|
| I've started doing something that I have been meaning to do for
| years, which is to go through all the seminal papers on
| concurrency and make a minimal implementation of them. I did
| Raft recently, then Lamport timestamps, then a lot of the
| common Mutex algorithms, then Paxos, and now I'm working on
| Ambient Calculus.
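    | (To give a sense of how minimal these can be: a Lamport clock,
    | sketched in Python, is really just the two rules from the
    | paper.)

```python
class LamportClock:
    """Logical clock from Lamport's 1978 paper: increment on local
    events and sends; on receive, jump past the sender's timestamp."""

    def __init__(self):
        self.time = 0

    def tick(self):
        # Rule 1: a local event or a send advances the clock by one.
        self.time += 1
        return self.time

    def receive(self, sent_time):
        # Rule 2: on message arrival, take max(local, received) + 1.
        self.time = max(self.time, sent_time) + 1
        return self.time
```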
|
| I've attempted this before, but I would always get stuck on
| some detail that I didn't fully grasp in the paper and would
    | abandon the project. Using ChatGPT, I've been able to unblock
    | myself much more easily. I will ask it to clarify stuff in the
    | paper, and sometimes it doesn't even matter if it's "wrong", as
    | long as it gives me some form of feedback and helps me think of
    | other ideas on how to fix things.
|
| Doing this, I manage to actually finish these projects, and I
| think I more or less understand them, and I _certainly_
| understand them more than I would have had I abandoned them a
| quarter of the way through like I usually do.
| boleary-gl wrote:
| I was a skeptic until I started seeing it this way. I do think
| that this is exactly why we've seen LLMs overtake search
| engines so quickly in the last 12-18 months. They allow a
| feedback loop that just doesn't exist scrolling and clicking.
| HiPHInch wrote:
    | The exploitation-and-exploration framing got me thinking: what
    | if the LLM generated, say, 5 results at a time and let the user
    | choose the best?
| mock-possum wrote:
    | I have that experience plenty with GPT and Gemini.
| ToucanLoucan wrote:
| I'm definitely gonna get hate for saying this but: the rise of
| coding with LLM assistants is going to worsen an issue our
| industry is already struggling with: we have tons of developers
| out there who do not know their fundamentals in programming, who
| are utterly rudderless without heaps upon heaps of framework code
| doing lots of work for them, who are now being further enabled by
| machines that write even that code for them with some tweaking
| afterwards.
|
    | I have interacted with software developers at conferences who
    | cannot do basic things with computers, like navigating file
    | systems, making changes to the Windows registry, finding and
    | using environment variables, or diagnosing and fixing PC
    | issues... In a perfect world your IT department sorts this
    | stuff for you, but I struggle to take seriously someone who
    | claims to create software yet seemingly lacks basic computer
    | literacy in a number of areas.
|
| And I'm sorry, "it compiles and runs" is the bare fucking minimum
| for software quality. We have machines these days that would run
| circles around my first PC in the late 90's, but despite that,
    | _everything is slower and runs worse._ My desktop messaging apps
    | are each currently sucking up over 600 MB of RAM apiece, which is
    | nearly 3 times what my original PC had _total._ Everything is
    | some bloated shite that now requires internet access at all times
    | or it utterly crashes and dies, and I'm sorry, but I cannot help
    | thinking we have a large contingent of software developers out
    | there who can't bloody use computers to thank for this. And
    | cheap-ass management too, to be clear, but I think these are
    | nested problems.
| iwontberude wrote:
| > It compiles and runs
|
| ...and rapidly becomes deprecated not due to quality but
| because the requirements for operation or development changed
    | substantially. These second-order effects make the "compile and
    | run" focus a paradoxically efficient and correct use of
    | resources. Engineers, especially academically experienced ones,
| prematurely optimize for correctness and arbitrary dimensions
| of quality because they are disconnected from and motivated by
| interests orthogonal to their users.
| ToucanLoucan wrote:
| > ...and rapidly becomes deprecated not due to quality but
| because the requirements for operation or development changed
| substantially.
|
| Did they? Like I have no data for this nor would I know how
| one would set about getting it, but like, from my personal
| experience and the experiences of folks I've spoken to for
| basically my entire career, the requirements we have for our
| software barely change at all. I do not expect Outlook to
| have chat and reaction functionality. I do not desire Windows
| to monitor my ongoing usage of my computer to make
| suggestions on how I might work more efficiently. These
| things were not requested by me or any user I have ever
| spoken to. In fact I would take that a step further and say
| that if your scope and requirements are shifting that wildly,
| that often, that you did a poor job of finding them in the
| first place, irrespective of where they've now landed.
|
    | They are far more often the hysterical tinkerings demanded by
    | product managers who must justify their salaries with _some
    | notion_ of what's "next" for Outlook, because someone at
    | Microsoft decided that Outlook being a feature-complete, good
    | email client was suddenly, for no particular reason, not good
    | enough anymore.
|
    | And again, speaking from my and my friends' experiences, I
    | would in fact _love it very much, thank you_ if Microsoft would
    | just make their products good, functional, nice to look at and
    | nice to use, and then _stop._ Provide security updates of
    | course, maybe an occasional UI refresh if you've got some
    | really good ideas for it, but apart from that, just stop
    | changing it. Let it be feature-complete, quality software.
|
| > Engineers, especially academically experienced ones,
| prematurely optimize for correctness and arbitrary dimensions
| of quality because they are disconnected from and motivated
| by interests orthogonal to their users.
|
| I don't think we're disconnected at all from our users. I
| want, as a software developer, to turn out quality software
    | that does the job we say it does on the box. My users
    | (citation: many conversations with many of them) want the
    | software to do what it says on the box, and do it well. These
| motivations are not orthogonal at all. Now, certainly it's
    | possible to get so lost in the minutiae of design that one
| loses the plot, that's definitely where a good project
| manager will shine. However, to say these are different
    | concerns entirely is, IMO, a bridge too far. My users probably
    | don't give a shit about the technical minutiae of implementing
    | a given feature: they care whether it works. However, if I
| implement it correctly, with the standards I know to work
| well for that technology, then I will be happy, and they will
| be happy.
| exceptione wrote:
    | MS produces some very good software, like .NET Core, Garnet,
    | etc. Their biggest asset, however, is marketing. They have
    | perfected selling software, no matter how bad it is.
|
| Their end-user software ranges from "bad but could be
| worse" to "outlandish crap that should be illegal to ship".
| Their user base however doesn't know much better, and
| decision makers in commercial settings have different
| priorities (choosing MS would not be held against you).
|
    | But even in tech circles MS Windows is still used. I know
    | the excuses. MS can continue focusing their efforts on
    | productising the clueless user who doesn't understand
    | anything and doesn't give a shit about all the leaks,
    | brittle drivers, performance degradation, registry
    | cluttering, etc. MS follows the right $$ strategy; their
    | numbers don't lie.
| ToucanLoucan wrote:
| > They have perfected selling software, no matter how bad
| it is.
|
| I agree in general with that statement, but we also need
| to acknowledge that those sales occur within a market
| that unequivocally endorses their products as "the
| standard," irrespective of quality, and further still the
| vast, vast, vast majority of their sold licenses are in
    | corporate environments, where the people making the
    | purchasing decisions and the people using the
    | software are practically different species. I would be
| shocked if you could find a single person who prefers
| Teams to Slack, yet tons of organizations use Teams, not
| because it's good, but because it comes bundled with 365
| services, and you're already paying for Outlook,
| OneDrive, Word, and Excel at the minimum. And you're
| certainly not going to not have those pieces of
| software... and therein lies the problem.
|
| > MS can continue focusing their efforts productising the
| clueless user that doesn't understand anything and
| doesn't give a shit about all the leaks, brittle drivers,
| performance degradation, registry cluttering etc.
|
    | But they _do give a shit._ There's just no meaningful
    | alternative. I run into people who absolutely 100% give a
    | shit and are incredibly frustrated at just how BAD
    | computing is lately. Even if they lack the vocabulary to
    | explain that massive memory mismanagement means their
    | phone gets hot in their hand when they're just reading
    | goddamn text messages, they still understand that it
    | sucks and it wasn't always like this.
|
| > MS follows the right $$ strategy, their numbers don't
| lie.
|
| That statement however is so vague it's unfalsifiable. We
    | do know Microsoft has previously "lost" battles with
    | individual applications in individual fields, and it is
    | completely believable that they could lose again (the
    | entire Xbox division comes to mind). What Microsoft has
| truly mastered is anti-competitive business practices
| that hobble their competition from the word go, and make
| it more or less impossible to compete with them on a
| software quality axis.
|
| The only office suites I know of that even have numbers
| that are visible next to Microsoft are LibreOffice and
| the Apple suite, neither of which are actually _sold_ at
| all.
| qsort wrote:
| I don't think it's necessarily worsening, it's just becoming
| more evident.
|
| The way I conceptualize this is that there are two kinds of
| knowledge. The first is fundamental knowledge. If you learn
    | what computational complexity is and how to use it, or what
    | linear algebra is and why we care, then you're left with
| _something_. The second is what I call "transient" knowledge
    | (I made up the term). If you learn by heart the DOM
| manipulation methods you have to invoke to make a webpage shiny
| (or, let's be real, the API of some framework), or what is the
| difference between datetime and datetime2 in SQL Server 2017,
| then it looks like you know how to do stuff, but none of those
| things are fundamental to the way the underlying technologies
| work: they are mostly pieces of trivia that are the way they
| are because of historical happenstance rather than actual
| technical reasons.
|
| To be effective at any given day job, one might need to learn a
| few pieces of knowledge of the second kind, but one should
| never confuse them for actual, real understanding. The problem
    | is that the first kind can't be learned from YouTube videos in
| increments of 15 minutes.
|
    | That's what LLMs are exposing, IMO. If you don't know the
    | syntax for lambdas in C# or how to structure components in
| React, any LLM will give you perfectly working code. If your
| code crumbles to pieces because you didn't design your database
| correctly or you're doing useless computations, you won't even
| know what you don't know.
|
| This transcends software development, by the way. We talk about
    | how problem solving is a skill, but in my experience it's more
    | like physical fitness: if you don't keep yourself in shape, you
    | won't be in shape when you need it. I see this a lot in kids:
    | the best ones are much smarter than I was at their age, while
    | the average ones struggle with long division.
| asdlkjlidj wrote:
| A guy I work with calls transient knowledge "arcana," I've
| come to appreciate the concept. Now I'm aware of when I'm
| generating arcana for other people to learn :)
| arkj wrote:
| Losing focus as a skill is something I see with every batch of
| new students. It's not just LLMs, almost every app and startup is
| competing for the same limited attention from every user.
|
| What LLMs have done for most of my students is remove all the
| barriers to an answer they once had to work for. It's easy to get
| hooked on fast answers and forget to ask why something works.
| That said, I think LLMs can support exploration--often beyond
| what Googling ever did--if we approach them the right way.
|
| I've seen moments where students pushed back on a first answer
| and uncovered deeper insights, but only because they chose to
| dig. The real danger isn't the tool, it's forgetting how to use
| it thoughtfully.
| schneems wrote:
| I feel that respecting the focus of others is also an important
| skill.
|
    | If I'm pulled 27 different ways, then when I finally get around
    | to another engineer's question, "I need help" is a demand for my
    | synchronous time and focus. Whereas "I'm having problems with X,
    | I need to Y, can you help me Z?" could turn into a chat, or it
| could mean I'm able to deliver the needed information at once
| and move on. Many people these days don't even bother to write
| questions. They write statements and expect you to infer the
| question from the statement.
|
| On the flip side, a thing we could learn more from LLMs is how
| to give a good response by explaining our reasoning out loud.
| Not "do X" but instead "It sounds like you want to W, and
| that's blocked by Y. That is happening because of Z. To fix it
| you need to X because it ..."
| daveguy wrote:
| > Many people these days don't even bother to write
| questions. They write statements and expect you to infer the
| question from the statement.
|
    | This is one of my biggest pet peeves: not even asking for
    | help, just stating a complaint.
| the_snooze wrote:
| In a way, I think it shows why "superfluous" things like sports
| and art are so important in school. In those activities, there
| are no quick answers. You need to persist through the initial
| learning curve and slow physical adaptation just to get
| baseline competency. You're not going to get a violin to stop
| sounding like a dying cat unless you accept that it's a gradual
| focused process.
| otabdeveloper4 wrote:
| > You're not going to get a violin to stop sounding like a
| dying cat unless you accept that it's a gradual focused
| process.
|
| You can sample that shit and make some loops in your DAW. Or
| just use a generative AI nowadays.
| tarboreus wrote:
| You can also just sit in the corner and never make
| anything. So what?
| rf15 wrote:
| There are many ways to be a skillless hack, but why
| celebrate it?
| otabdeveloper4 wrote:
| Beats me. You might ask Sam Altman and the other AI hype
| clowns. They're the authors of this hot take.
| _Algernon_ wrote:
| https://upload.wikimedia.org/wikipedia/en/b/b9/MagrittePipe
| ....
| Telemakhos wrote:
| Sports and art aren't superfluous: they teach gross and fine
| (respectively) motor skills. School isn't just about
| developing cognitive skills or brainwashing students into
| political orthodoxies: it's also about teaching students how
| to control their bodies in general and specific muscle
| groups, like the hands, in particular. Art is one way of
    | training the hands; music is another (manipulating anything
    | from a triangle to a violin), as is handwriting. Students may
    | well not get enough of that dexterity training at home,
    | particularly in the age of tablets [0].
|
| [0] https://www.bbc.com/news/technology-43230884
| AllegedAlec wrote:
| With a bit more focus you might not have missed OP's point
| bob1029 wrote:
| > It's easy to get hooked on fast answers and forget to ask why
| something works
|
| This is really a tragedy because the current technology is
| arguably one of the best things in existence for explaining
| "why?" to someone in a very personalized way. With application
| of discipline from my side, I can make the LLM lecture me until
| I genuinely understand the underlying principles of something.
| I keep hammering it with edge cases and hypotheticals until it
| comes back with "Exactly! ..." after reiterating my current
| understanding.
|
| The challenge for educators seems the same as it has always
| been - How do you make the student _want_ to dig deeper? What
| does it take to turn someone into a strong skeptic regarding
| tools or technology?
|
| I'd propose the use of hallucinations as an educational tool.
| Put together a really nasty scenario (i.e., provoke a
| hallucination on purpose on behalf of the students that goes
| under their radar). Let them run with a misapprehension of the
| world for several weeks. Give them a test or lab assignment
| regarding this misapprehension. Fail 100% of the class on this
| assignment and have a special lecture afterward. Anyone who
| doesn't "get it" after this point should probably be filtered
| out anyways.
| daveguy wrote:
| I'm not sure if hammering an LLM until it agrees with you is
| the best way to get to the truth.
| bob1029 wrote:
| > with edge cases and hypotheticals
|
| not
|
| > conclusions I want to see
|
| The point is to be adversarial with your own ideas, not the
| opposite thing.
| daveguy wrote:
| So, just persist with your own ideas until it agrees with
| you, because eventually it always will. Then take that as
| a lesson?
| brightball wrote:
    | This is my constant concern these days, and it makes me wonder
    | if grading needs to change in order to alleviate some of the
    | pressure to get the right answer, so that students can focus on
    | the how.
| nonrandomstring wrote:
| > Losing focus as a skill is something I see with every batch
| of new students.
|
| _Gaining_ focus as a skill is something to work on with every
| batch of new students
|
    | We're on the same page. I'm turning that around to say: let's
    | remember focus isn't something we're naturally born with; it
    | has to be built, and worked on hard. People coming to that task
    | are increasingly damaged/injured imho.
| throeijfjfj wrote:
    | People will need AI just to communicate in a polite manner! What
    | is today's politically correct language? Who is the currently
    | approved celebrity? What if you quoted something that is somehow
    | offensive today?!
    |
    | No, the skill of the future is using AI to carve out a safe
    | space for yourself, so you can focus without distractions!
| djsavvy wrote:
    | > This idea summarizes why I disagree with those who equate the
    | LLM revolution to the rise of search engines, like Google in the
    | 90s. Search engines offer a good choice between Exploration
    | (crawl through the list and pages of results) and Exploitation
    | (click on the top result). LLMs, however, do not give this
    | choice, and tend to encourage immediate exploitation instead.
    | Users may explore if the first solution does not work, but the
    | first choice is always to exploit.
|
| Well said, and an interesting idea, but most of my LLM usage
| (besides copilot autocomplete) is actually very search-engine-
| esque. I ask it to explain existing design decisions, or to
| search for a library that fits my needs, or come up with related
| queries so I can learn more.
|
| Once I've chosen a library or an approach for the task, I'll have
    | the LLM write out some code. For anything significantly more
    | substantive than copilot completions, I almost always do
    | some exploring before I exploit.
| trollbridge wrote:
    | I'm finding the same in terms of what I actually use LLMs for
    | day to day. When I need to look up arcane
| information, an LLM generally does better than a Google search.
| bluefirebrand wrote:
| How do you verify the accuracy of "arcane information"
| produced by an LLM?
|
| "Arcane Information" is absolutely the worst possible use
| case I can imagine for LLMs right now. You might as well ask
| an intern to just make something up
| schneems wrote:
| The flip side of focus (to me) is responsiveness. A post to SO
| might deliver me the exact answer I need, but it will take focus
| to write the correct question and patience to wait for a response
| and then time spent iterating in the comments. In contrast an LLM
| will happily tell me the wrong thing, instantaneously. It's
| responsive.
|
| Good engineers must also be responsive to their teammates,
| managers, customers, and the business. Great engineers also find
| a way to weave in periods of focus.
|
| I'm curious how others navigate these?
|
| It seems there was a large culture shift when Covid hit and non-
| async non-remote people all moved online and expected online to
| work like in person. I feel pushed to be more responsive at the
    | cost of focus. On the flip side, I've given time and space to
    | engineers so they could focus, only to come back and find they
    | had abused that time and trust. Or some well-meaning engineers got
| lost in the weeds and lost the narrative of *why* they were
| focusing. It is super easy to measure responsiveness: how long
| did it take to respond. It's much harder to measure quality and
| growth. Especially when being vulnerable about what you don't
| know or the failure to make progress is a truly senior level
| skill.
|
| How do we find balance?
| mrj wrote:
| Notification blindness.
|
| I've been struggling with finding balance for years as a front-
| line manager who codes. I need to be responsive-ish to incoming
| queries but also have my own tasks. If I am too responsive,
| it's easy for my work to become my evening time and my working
| hours for everybody else.
|
    | The "weaving in" of periods of focus is maintained by ignoring
    | notifications and checking them in batches. Nobody gets to
    | interrupt me when I'm in focus mode (much to my wife's chagrin)
    | and I can actually get stuff done. This happened largely by
    | accident: I've gotten enough notifications for long enough that
    | I don't really hear or notice them, just like I don't hear or
    | notice the trains that pass near my house.
| MrDarcy wrote:
| This also worked for me. I flipped to permanent DND mode with
| clear communication I check notifications at specific times
| of day.
|
| There are very few notifications that can't wait a few hours
| for my attention and those that cannot have the expectation
| of being a phone call.
| jncfhnb wrote:
    | This is why I honestly like Discord over forums
| layer8 wrote:
| When you're on the asking side, sure, instant gratification
| is great. On the answering side, not so much. Chat interfaces
| are not a good fit for anything you may have to mull over for
| a while, or do some investigation before answering, and for
| anything where multiple such threads may occur in parallel,
| or that you want to reference later.
| jncfhnb wrote:
| I don't agree
|
| The thing is that most people seeking help are not able to
| form their question effectively. They can't even identify
| the key elements of their problem.
|
    | They _need_ people to help parse out their actual
    | problem. Stack Overflow actively tells you to fuck
    | off if you can't form your question to their standards, and
    | unsurprisingly that's not very helpful to people who are
    | struggling.
|
    | You will need to walk people through the same
    | problems over and over. But... that's what helping people
| is like. That's how we teach people in schools. We don't
| just point them to textbooks. Active discords tend to have
| people that are willing to do this.
| billmalarky wrote:
| I built a distributed software engineering firm pre-covid, so
| all of our clients were onsite even though we were full-remote.
| My engineers plugged into the engineering teams of our clients,
| so it's not like we were building on the side and just handing
| over deliverables, we had to fully integrate into the client
| teams.
|
| So we had to solve this problem pre-covid, and the solution
| remained the same during the pandemic when every org went full
| remote (at least temporarily).
|
| There is no "one size fits all approach" because each engineer
| is different. We had dozens of engineers on our team, and you
| learn that people are very diverse in how they think/operate.
|
| But we came up with a framework that was really successful.
|
| 1) Good faith is required: you mention personnel abusing
| time/trust, that's a different issue entirely, no framework
| will be successful if people refuse to comply. This system only
| works if teammates trust the person. Terminate someone who
| can't be trusted.
|
| 2) "Know thyself": Many engineers wouldn't necessarily even
| know how THEY operated best (if they needed large chunks of
| focus time, or were fine multi-tasking, etc). We'd have them
| make a best guess when onboarding and then iterate and update
| as they figured out how they worked best.
|
| 3) Proactively Propagate Communication Standard: Most engineers
| would want large chunks of uninterrupted focus time, so we
| would tell them to EXPLICITLY tell their teammates or any other
| stakeholders WHEN they would be focusing and unresponsive
| (standardize it via schedule), and WHY (ie sell the idea). Bad
| feelings or optics are ALWAYS simply a matter of
| miscommunication so long as good faith exists. We'd also have
| them explain "escalation patterns", ie "if something is truly
| urgent, DM me on slack a few times and finally, call my phone."
|
| 4) Set comms status: Really this is just slack/teams. but
| basically as a soft reminder to stakeholders, set your slack
| status to "heads down building" or something so people remember
| that you aren't available due to focus time. It's really easy
| to sync slack status to calendar blocks to automate this.
|
| We also found that breaking the day into async task time and
| sync task time really helped optimize. Async tasks are tasks
| that can get completed in small chunks of time like code
| review, checking email, slack, etc. These might be large time
| sinks in aggregate, but generally you can break into small time
| blocks and still be successful. We would have people set up
| their day so all the async tasks would be done when they are
| already paying a context switching cost. IE, scheduled agile
| cadence meetings etc. If you're doing a standup meeting, you're
| already gonna be knocked out of flow so might as well use this
| time to also do PR review, async comms, etc. Naturally we had
| people stack their meetings when possible instead of pepper
| throughout the day (more on how this was accomplished below).
|
| Anyways, sometimes when an engineer of ours joined a new team,
| there might be a political challenge in not fitting into the
    | existing "mold" of how that team communicated (if that team's
    | comm standard didn't jibe with our engineer's). This quickly
| resolved every single time when our engineer was proven out to
| be much more productive/effective than the existing engineers
| (who were kneecapped by the terrible distracting existing
| standard of meetings, constant slack interruptions, etc). We
| would even go as far as to tell stakeholders our engineers
| would not be attending less important meetings (not
| immediately, once we had already proven ourselves a bit). The
| optics around this weren't great at first, but again, our
| engineers would start 1.5-2X'ing productivity of the in-house
| engineers, and political issues melt away very quickly.
|
| TL;DR - Operate in good faith, decide your own best
| communication standard, propagate the standard out to your
| stakeholders explicitly, deliver and people will respect you
| and also your comms standard.
| quantadev wrote:
| For new developers wanting to learn to code, AI is great today to
| help them. For experienced developers AI is also great because it
| can write tons of code for us that we _already_ know how to
| evaluate and test, because of years of experience doing it in the
| "pre-AI" world.
|
    | However, the future is uncertain once we reach a point where
    | most developers have used _generated_ code most of their lives
    | and never developed the coding skills required to fully
    | understand the code.
|
| I guess we'll adapt to it. We always do. I mean for example, I
| can no longer do long division on paper like I did in elementary
| school, so I rely totally on computers for all calculating.
| bufferoverflow wrote:
    | This article makes no sense. It criticizes current LLMs and
    | then, without stopping for a second, assumes future LLMs will
    | have the same problems. Even though hallucination levels have
    | been going down with every generation. Even though LLMs do
    | better with every generation on every test and benchmark we
    | can come up with.
| ToucanLoucan wrote:
| > Even though hallucination levels have been going down with
| every generation.
|
| Gonna need a BIG citation on that one, chief.
|
    | > Even though LLMs do better with every generation on every
    | test and benchmark we can come up with.
|
| Has it occurred to you the people making the tests and
| benchmarks are, more often than not, the same people making the
| LLM? Like yeah if I'm given carte blanche to make my own test
| cases and I'm accountable to no one and nothing else, my output
| quality would be steadily going up too.
|
| The other day I tried asking Copilot for a good framework for
| accomplishing a task, and it made one up. I tried the query
| again, more specifically, and it referred me to a framework in
| another language. And yes, I specified.
| financetechbro wrote:
| > Gonna need a BIG citation on that one, chief.
|
| OP has consumed so much LLM they've started to hallucinate
| themselves
| ToucanLoucan wrote:
| Perhaps the real hallucinations were the friends we made
| along the way
| otabdeveloper4 wrote:
| > Future bicycles will fly. Just two more rounds of venture
| capital investment, trust the plan.
| gitroom wrote:
| Good read!
| obscurette wrote:
    | I'm old enough to remember the myriad of experts 10+ years ago
    | who were actively selling the view that smartphones with
    | constantly connected social media would change everything; we
    | just had to learn to use it wisely.
| kennyadam wrote:
| They weren't wrong. Unfortunately, we didn't use it wisely and
| obliterated objective reality and allowed people to create
| spaces where they never have to engage with anything
| challenging.
| rglover wrote:
| And AI will be no different. People will rush head first into
| the fire and be flabbergasted when all of the hype and promise
| of utopia results in utter chaos and inequality.
|
| If we assume that civilization is already teetering thanks to
| the smartphone/social media, the fallout of AI would make
| Thomas Cole blush.
| alganet wrote:
| Using aimbot in Gunbound didn't make players better. Yes, it
| changed everything: it destroyed the game ecosystem.
|
| Can humanity use "literacy aimbot" responsibly? I don't know.
|
    | It's just a cautionary tale. I'm not expecting to win an
    | argument. I could come up with counter-anecdotes myself:
    |
    | ABS made braking in slippery conditions easier and safer. People
    | didn't learn to brake better; they still pushed the pedal
    | harder thinking it would make the car stop faster, not realizing
    | the complex dynamics of "making a car stop". That changed
    | everything. It made cars safer.
|
| Also, just an anecdote.
|
| Sure, a lot of people need focus. Some people don't, they need to
| branch out. Some systems need aimbot (like ABS), some don't (like
| Gunbound).
|
| The future should be home to all kinds of skills.
| vjvjvjvjghv wrote:
| Being allowed to focus seems to be a privilege these days.
|
| When I started in the 90s I could work on something for weeks
| without much interruption. These days there is almost always some
| scrum master, project manager or random other manager who wants
| to get an update or do some planning. Doing actual work seems to
| have taken a backseat to talking about work.
| knallfrosch wrote:
| When I use LLMs, I quickly lose focus.
|
| Copy-paste, copy-paste. No real understanding of the solutions,
| even for areas of my expertise. I just don't feel like
| understanding the flood of information, without any real purpose
| behind the understanding. While I probably (?) get more done, I
| also just don't enjoy it. But I also can't go back to googling
| for hours now that this ready-made solution exists.
|
| I wish it would have never been invented.
|
| (Obviously scoped to my enjoyment of hobbyist projects, let's
| keep AI cancer research out of the picture.)
| spacemadness wrote:
| I recommend using them to ask questions about why something
| works rather than spit out code. They excel at that a lot of
| the time.
| dimal wrote:
| I've gotten into this mode too, but often when I do this, I
| eventually find myself in a rabbit hole dead end that the AI
| unwittingly led me into. So I'm slowing down and using them to
| understand the code better. Unfortunately, all the tools are
| optimized for vibe coding, getting the quick answer without
| understanding, so it feels like I'm fighting the tools.
| bikedspiritlake wrote:
| Phrasing LLMs as _encouraging_ exploitation is important, because
| they can still be powerful tools for exploration. The difference
| comes in the interface for LLMs, which is heavily focused on
| exploitation whereas search engine interfaces _encourage_
| exploration.
|
| Newer models often end responses with questions and thoughts that
| encourage exploration, as do features like ChatGPT's follow up
| suggestions. However, a lot of work needs to be done with LLM
| interfaces to balance exploitation and exploration while avoiding
| limiting AI's capabilities.
| blotfaba wrote:
| On the other hand: with thinking models, agents, and future
| models to come we are offloading the exploration phase to the
| models themselves. It really depends on constraints and
| pressures.
| thih9 wrote:
| > LLMs, however, do not give this choice, and tend to encourage
| immediate exploitation instead. Users may explore if the first
| solution does not work, but the first choice is always to
| exploit.
|
| You can ask the LLM to generate a number of solutions, though -
| exploration then becomes possible and relatively easy.
|
| And I say that as someone who dislikes llms with a passion.
| friendlyprezz wrote:
| The number one skill in the future is the ability to predict the
| future
|
| Always has been
| PaulRobinson wrote:
| It's going to be a different kind of focus.
|
| Technologies are regularly predicted to diminish a capability
| that was previously considered important.
|
| Babbage came up with the ideas for his engines after getting
| frustrated with log tables - how many people reading this have
| used a log table or calculated one recently?
|
| Calculators meant kids wouldn't need to do arithmetic by hand any
| more and so would not be able to do maths. In truth they just
| didn't have to do it by hand any more - they still needed the
| skills to interpret the results, they just didn't have to do the
| hard work of creating the outputs by pen and paper.
|
| They also lost the skill of using slide rules which were used to
| give us approximations, because calculators allowed us to be
| precise - they were no longer needed.
|
| Computers, similar story.
|
| Then the same came with search engines in our pockets. "Oh no,
| people can find an answer to anything in seconds, they won't
| remember things". This is borne out, there have been studies that
| show recall diminishes if your phone is even in the same room.
| But you still need to know what to look for, and know what to do
| with what you find.
|
| I think this'll still be true in the future, and I think TFA kind
| of agrees, but seems to be doing the "all may be lost" vibe by
| insisting that you still need foundational skills. You don't
| need the foundational skills to find out what 24923 * 923 is:
| you can quickly look up the answer and use it however you need.
|
| I just think the work shifts - you'll still need to know how to
| craft your inputs carefully (vibe coding works better if you
| develop a more detailed specification), and you'll still need to
| process the output, but you'll become less connected to the
| foundation and for 99% of the time, that's absolutely fine in the
| same way it has been with calculators, and so on.
___________________________________________________________________
(page generated 2025-04-20 23:01 UTC)