[HN Gopher] Facts will not save you - AI, history and Soviet sci-fi
___________________________________________________________________
Facts will not save you - AI, history and Soviet sci-fi
Author : veqq
Score : 146 points
Date : 2025-08-01 18:16 UTC (3 days ago)
(HTM) web link (hegemon.substack.com)
(TXT) w3m dump (hegemon.substack.com)
| pavel_lishin wrote:
| > _The initial draft was terrible in conveying tone, irony, or
| any kind of cultural allusion._
|
| My mother reads books mostly in Russian, including books by
| English-speaking authors translated into Russian.
|
| Some of the translations are _laughably_ bad; one recent example
| had to translate "hot MILF": it rendered "hot" verbatim - as in
| the adjective indicating _temperature_ - and simply
| transliterated the word "MILF", as the translator (or machine?)
| apparently had no idea what it meant, and didn't know the
| equivalent term in Russian.
|
| As a mirror, I have a hard time reading things in Russian - I
| left when I was ten years old, so I'm very out of practice, and
| most of the cultural allusions go straight over my head as well.
| A good translation _needs_ to make those things clear, either
| through careful word choice or via footnotes that explain them
| to the reader.
|
| And this doesn't just apply to linguistic translation - the past
| is a foreign country, too. Reading old texts - any old texts -
| requires context.
| notpushkin wrote:
| As a Russian, _goriachaia milfa_ is exactly how I'd translate
| it. I think a lot of slang got borrowed like this in the 2000s.
| incone123 wrote:
| Reminds me of my Czech friend explaining some of the subtle
| humour in 'Kolya' which I would never have got just from the
| English subtitles.
| Joker_vD wrote:
| > and just translated "hot" verbatim - as in the adjective
| indicating temperature
|
| Well, "goriachii" does have figurative meaning "passionate"
| (and by transfer, "sexy") in Russian just as it has in English.
| Heck, English is even worse in this regard: "heated argument",
| seriously? Not only an argument doesn't have a temperature, you
| can't change it either (since it does not exist)! Yet the
| phrase exists just fine, and it translates as "hot argument" to
| Russian, no problem.
|
| No comments on "MILF" though. But I wouldn't be surprised if it
| actually entered the (youth/Internet) slang as-is: many other
| English words did as well.
| masfuerte wrote:
| > Still, without AI a story like this would have taken me several
| weeks to translate and polish, instead of one afternoon.
|
| I don't understand this. It's only six thousand words and it's
| the polishing that takes the time. How would it have taken weeks
| to do the initial draft?
| AlecSchueler wrote:
| I suppose different people work differently and the author
| knows themselves better than we do?
| WastedCucumber wrote:
| Maybe he's... not great at Russian? I'm at a loss, same as you.
|
| And I don't have any skill in Russian, but I would say that his
| translation is not good, or at least was not thoughtfully made,
| based solely on the fact that he did not write the author's
| name in it.
| shaky wrote:
| I don't understand: the translation is not good because he
| omitted the author's name? He stated it plainly in his
| article:
|
| > As it happens, I recently translated a short story by Kir
| Bulychev -- a Soviet science-fiction icon virtually unknown
| in the West.
|
| I for one enjoyed reading it! As for the article, it's on
| point. There will be fewer historians and translators, but I
| suspect those that stick around will be greatly amplified.
| chupasaurus wrote:
| > I for one enjoyed reading it!
|
| I gave it a quick look and was surprised to see that most
| of the first two paragraphs simply aren't there. I guess
| you've read something else!
|
| As for machine translation: currently it isn't remotely
| ready to deal with literature by itself, but it could be
| handy to assist translators.
| WastedCucumber wrote:
| In the PDF linked in the article there's only the title of
| the story, and not the author's name.
| fpoling wrote:
| The translated story is full of implicit cultural references.
| If the author used AI to clarify some of the references and
| what the story was about, that could explain the vast time
| saving.
| myhf wrote:
| The article is written by an LLM and published on Substack, so
| there's no expectation for it to make coherent points.
| glenstein wrote:
| I agree and disagree. It's certainly the case that facts imply
| underlying epistemologies, but it completely misses the point to
| treat that like it entails catastrophic relativism.
|
| Building up an epistemology isn't just recreational; ideally it's
| done for good reasons that are responsive to scrutiny, standing
| firm on important principles and, where necessary, conciliatory
| in response to epistemological conundrums. In short, such
| theories can be resilient and responsible, and facts based on
| them can inherit that resilience.
|
| So I think it completely misses the point to think that "facts
| imply epistemologies" should have the upshot of destroying any
| conception of access to authoritative factual understanding.
| Global warming is still real, vaccines are still effective,
| sunscreen works, dinosaurs really existed. And perhaps, more to
| the point in this context, there really are better and worse
| understandings of the fall of Rome or the Dark Ages or Pompeii or
| the Iraq war.
|
| If being accountable to the theory-laden epistemic status of
| facts means throwing the stability of our historical
| understanding into question, you're doing it wrong.
|
| And, as it relates to the article, you're doing it _super_ wrong
| if you think that creates an opening for a notion of human
| intuition that is fundamentally non-informational. I think it's
| definitely true that AI as it currently exists can spew out
| linguistically flat translations, lacking such things as an
| interpretive touch, or an implicit literary and cultural
| curiosity that breathes the fire of life and meaning into
| language as it is actually experienced by humans. That's a great
| and necessary criticism. _But._
|
| Hubert Dreyfus spent decades insisting that there were things
| "computers can't do", and that those things were represented by
| magical undefined terms that speak to ineffable human essence. He
| insisted, for instance, that computers performing chess at a high
| level would never happen because it required "insight", and he
| felt similarly about the kind of linguistic comprehension that
| has now, at least in part, been achieved by LLMs.
|
| LLMs still fall short in critical ways, and losing sight of that
| would involve letting go of our ability to appreciate the best
| human work in (say) history, or linguistics. And there's a real
| risk that "good enough" AI can cause us to lose touch with such
| distinctions. But I don't think it follows that you have to draw
| a categorical line insisting such understanding is impossible,
| and in fact I would suggest that's a tragic misunderstanding that
| gets everything exactly backwards.
| mapontosevenths wrote:
| I agree with this whole-heartedly.
|
| Certainly some facts can imply a certain understanding of the
| world, but they don't _require_ that understanding in order to
| remain true. The map may require the territory, but the
| territory does not require the map.
|
| "Reality is that which, when you stop believing in it, doesn't
| go away." -- Philip K. Dick
| throwanem wrote:
| We require the map.
| richk449 wrote:
| In this analogy though, maps are the only things we have
| access to. There may be Truth, but we only approximate it
| with our maps.
| foxglacier wrote:
| This whole article is based on misinterpreting Microsoft's "AI
| applicability score" as "risk of job being made redundant by
| AI". From the original paper:
|
| "This score captures if there is nontrivial AI usage that
| successfully completes activities corresponding to significant
| portions of an occupation's tasks."
|
| Then the author describes their job qualitatively matching their
| AI applicability score by using AI to do most of their work for
| them.
|
| If there's a lot of unmet demand for low-priced high-quality
| translation, translators could end up having more work, not less.
| vouaobrasil wrote:
| > One, there is no way LLMs in their current state are capable of
| replacing human translators for a case like this. And two, they
| do make the job of translation a lot less tedious. I wouldn't
| call it pleasant exactly, but it was much easier than my previous
| translation experiences
|
| On the other hand, one day they will replace human beings. And
| secondly, if something like translation (or, in general, any
| mental work) becomes too easy, then we also run the risk of
| increasing the amount of mediocre work. Fact is, if something is
| hard, we'll only spend time on it if it's really worthwhile.
|
| Same thing happens with phone cameras. Yes, it makes some things
| more convenient, but it also has resulted in a mountain of
| mediocrity, which isn't free to store (requires energy and hence
| pollutes the environment).
| pests wrote:
| Technically, modern LLMs are handicapped on translation tasks
| compared to the original transformer architecture: the original
| encoder-decoder transformer got to see future context as well
| as past tokens.
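|
| A minimal sketch of that difference in attention masking (numpy
| only; the toy masks are illustrative, not any particular
| model's code):
|
|   import numpy as np
|
|   def attention_weights(q, k, mask):
|       # Scaled dot-product attention with an additive mask,
|       # softmaxed over the key positions.
|       scores = q @ k.T / np.sqrt(q.shape[-1]) + mask
|       e = np.exp(scores - scores.max(axis=-1, keepdims=True))
|       return e / e.sum(axis=-1, keepdims=True)
|
|   n, d = 4, 8
|   rng = np.random.default_rng(0)
|   Q, K = rng.normal(size=(n, d)), rng.normal(size=(n, d))
|
|   # Encoder of the original transformer: every position attends
|   # to the whole source sentence, future tokens included.
|   bidirectional = np.zeros((n, n))
|
|   # Decoder-only LLM: a causal mask hides future positions.
|   causal = np.triu(np.full((n, n), -np.inf), k=1)
|
|   print(attention_weights(Q, K, bidirectional)[0])  # 4 nonzero
|   print(attention_weights(Q, K, causal)[0])  # only position 0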
| vouaobrasil wrote:
| Okay, but I'm not really concerned with the state of the art
| now with any specific technology, but what will be the state
| of the art in 20 years.
| moritzwarhier wrote:
| One of the few articles about AI on the front page that doesn't
| make me want to throw my phone against a wall.
|
| Haven't even read it completely, but in contrast to the countless
| submissions regurgitating badly thought-out meta arguments about
| AI-supported software engineering, it actually seems to elaborate
| on some interesting points.
|
| I also think that the internet as a primary communication and
| mass medium, plus generative AI, evokes 1984 very strongly.
| krunck wrote:
| AI represents a move toward the massive centralization of
| information into a few sources. History (or its "interpreters")
| should never be in the hands of these few powerful entities
| aligned with even more powerful governments.
| SilverElfin wrote:
| This. Any centralization or gate keeping of speech and
| knowledge is a danger to free societies. The tech is moving too
| fast for politics to keep up. I wonder if the only fix is to
| elect younger people, even if they're missing other types of
| knowledge and experience.
| saubeidl wrote:
| AI is the perfect propaganda technology.
|
| I don't believe the current race to build AI is _actually_
| about any productivity gains (which are questionable at best).
|
| I believe the true purpose of the outsized AI investments is to
| make sure the universal answer machine will give answers that
| conform to the ideology of the ruling class.
|
| You can read hints of that in statements like the Trump AI
| Action Plan [0], but also things like the Llama 4 announcement.
| [1]
|
| [0] "Ensure that Frontier AI Protects Free Speech and American
| Values" - https://www.whitehouse.gov/wp-
| content/uploads/2025/07/Americ...
|
| [1] "It's well-known that all leading LLMs have had issues with
| bias--specifically, they historically have leaned left when it
| comes to debated political and social topics. This is due to
| the types of training data available on the internet."
| https://ai.meta.com/blog/llama-4-multimodal-intelligence/
| add-sub-mul-div wrote:
| The fact that it can both displace vast amounts of labor and
| _also_ control information for people who won't bother with
| primary sources anymore is what explains (for me) the endless
| funding and desperation to make it happen.
| mapontosevenths wrote:
| "This is due to the types of training data available on the
| internet."
|
| I'd love to see them prove this, but they can't.
| sofixa wrote:
| Why not? The fact that even the overtly Nazi, Musk-led xAI's
| Grok can't shake a "left" bias (left for the US political
| spectrum, which is to the right of the centre in most other
| developed countries) proves this wrong. The data, and
| _reality_ , tend to lean "left" (again, on the American
| spectrum). It isn't leftists in the US talking about
| controlling the weather, Haitians eating pets, proposing to
| nuke hurricanes, dismissing the Holocaust, denying climate
| change exists, and other such _objectively_ wrong things.
|
| This saying exists for a reason: https://en.m.wikipedia.org
| /?redirect=no&title=Reality_has_a_...
| saubeidl wrote:
| Maybe it's not a left bias in the AI, but a right bias in
| the ruling class instead?
| throwawayqqq11 wrote:
| Fascism is the most viable option for the ruling class to
| sustain their status quo in a declining capitalistic
| world with increasing public unrest. The left, anti-
| capitalistic narrative therefore needs to be scrubbed from
| the people.
| NitpickLawyer wrote:
| > There can be no objective story since the very act of
| assembling facts requires implicit beliefs about what should be
| emphasized and what should be left out. History is therefore a
| constant act of reinterpretation and triangulation, which is
| something that LLMs, as linguistic averaging machines, simply
| cannot do.
|
| Yeah, no. I find it funny how _everyone_ from other specialties
| takes offence when their piece of "advanced" whatever gets put on
| a list, but they have absolutely no issue with making uninformed,
| inaccurate and oversimplified remarks like "averaging machines".
|
| Brother, these averaging machines just scored gold at IMO. Allow
| me to doubt that whatever you do is more impressive than that.
| throwanem wrote:
| Okay, shapecel.
| Paratoner wrote:
| Oh, data gods! Oh, technocratic overlords! Milords, shan't
| thou giveth but a crumb of cryptocurrency to thy humble
| guzzler?
| NitpickLawyer wrote:
| I mean, I get the sarcasm, but don't get the cryptobabble.
| And this isn't about data or technoanything in particular. In
| order to get gold at IMO the system had to
|
| a) "solve" NLP enough to understand the problem b) reason
| through various "themes", ideas, partial demonstrations and
| so on c) verify some d) gather the good ideas from all the
| tried paths and come up with the correct demonstrations in
| the end
|
| Now tell me a system like this can't take source material and
| all the expert writings so far, and come up with various
| interpretations based on those combinations. And tell me
| it'll be less accurate than some historian's "vibes". Or a
| translator's "feelings". I don't buy it.
| tech_ken wrote:
| I dunno I can see an argument that something like IMO word
| problems are categorically a different language space than
| a corpus of historiography. For one, even when expressed in
| English language math is still highly, highly structured.
| Definitions of terms are totally unambiguous, logical
| tautologies can be expressed using only a few tokens, etc.
| etc. It's incredibly impressive that these rich structures
| can be learned by such a flexible model class, but it
| definitely seems closer (to me) to excelling at chess or
| other structured game, versus something as ambiguous as
| synthesis of historical narratives.
|
| > Now tell me a system like this can't take source material
| and all the expert writings so far, and come up with
| various interpretations based on those combinations. And
| tell me it'll be less accurate than some historian's
| "vibes".
|
| Framing it as the kind of problem where accuracy is a well-
| defined concept is the error this article is talking about.
| Literally the historian's "vibes" and "feelings" are the
| product you're trying to mimic with the LLM output, not an
| error to be smoothed out. I have no doubt that LLMs can
| have real impact in this field, especially as turbopowered
| search engines and text-management tools. But the point of
| human narrative history is fundamentally that we tell it to
| ourselves, and make sense of it by talking about it.
| Removing the human from the loop is IMO like trying to
| replace the therapy client with a chat agent.
| Obscurity4340 wrote:
| > History is therefore a constant act of reinterpretation and
| triangulation, which is something that LLMs, as linguistic
| averaging machines, simply cannot do.
|
| Well-put
| Phui3ferubus wrote:
| > they are acts of interpretation that are never recognized as
| such by outsiders
|
| And that is exactly why translators are getting replaced by
| ML/AI. Companies don't care about quality; that is the reason
| customer support was the first thing axed - companies see it
| only as a cost.
| munificent wrote:
| _> There can be no objective story since the very act of
| assembling facts requires implicit beliefs about what should be
| emphasized and what should be left out. History is therefore a
| constant act of reinterpretation and triangulation, which is
| something that LLMs, as linguistic averaging machines, simply
| cannot do._
|
| This is exactly why tech companies want to replace those jobs with
| LLMs.
|
| The companies control the models, the models control the
| narrative, the narrative controls the world.
|
| Whoever can get the most stories into the heads of the masses
| runs the world.
| randysalami wrote:
| "tech companies", "companies [who] control the models",
| "whoever"
|
| To be more concrete, patchwork alliances of elites stretching
| decades and centuries back to concentrate power. Tech companies
| are under the thumb of the US government and the US government
| is under the thumb of the elites. It's not direct but it
| doesn't need to be. Many soft power mechanisms exist and can be
| deployed when needed e.g. Visa/Mastercard censorship. The US
| was always founded for elites, by elites but concessions needed
| to be made to workers out of necessity. With technology and the
| destruction of unions, this is no longer the case. The veracity
| of this statement is still up for debate but truth won't stop
| them from giving it a shot (see WW2).
|
| "Whoever can get the most stories into the heads of the masses
| runs the world."
|
| I'd argue this is already the case. It has nothing to do with
| transformer models or AGI but basic machine learning algorithms
| being applied at scale in apps like TikTok, YouTube, and
| Facebook to addict users, fragment them, and destroy their
| sense of reality. They are running the world and what is
| happening now is their plan to keep running it, eternally, and
| in the most extreme fashion.
| throwawayq3423 wrote:
| I think you dramatically overestimate the effectiveness of
| trying to shape narratives and change people's minds.
|
| Yes, online content is incredibly influential, but it's not
| like you can just choose which content is effective.
|
| The effectiveness is tied to a zeitgeist that is not
| predictable, as far as I have seen.
| randysalami wrote:
| Let's concede you can't shape narratives or change people's
| minds through online content (though I would disagree on
| this). The very act of addicting people to digital platforms
| is enough for control. Drain their dopamine daily, fragment
| them into isolated groups, use influencers as proxies for
| control, and voila, you have an effect.
| throwawayq3423 wrote:
| I would agree with you. It's easier to just muddy the
| waters and degrade people's ability to hold attention or
| think critically. But that is not the same thing as
| convincing them of what you want them to think.
|
| It's always easier to throw petrol on an existing fire than
| to light one.
| pessimizer wrote:
| If you censor all opposing arguments, all you need to do to
| convince the vast majority of people of most things is to
| keep repeating yourself until people forget that there ever
| were opposing arguments.
|
| In this world you can get people censored for slandering
| beef, or for supporting the outcome of a Supreme Court case.
| Then pay people to sing your message over and over again in
| as many different voices as can be recruited. Done.
|
| edit: I left out "offer your most effective enemies no-show
| jobs, and if they turn them down, accuse them of being
| pedophiles."
| throwawayq3423 wrote:
| I'm having trouble following what you are saying. Are you
| describing something that's happening now or will happen in
| the future?
|
| I'm unaware of any mass censoring apparatus that exists
| outside of authoritarian countries, such as China or North
| Korea.
| munificent wrote:
| I think you dramatically underestimate how much your own mind
| is a product of narratives, some of which are self-selected,
| many of which are influenced by others.
| throwawayq3423 wrote:
| Perhaps, but what you said is unfalsifiable and/or
| unknowable.
|
| Ideologies are not invented unless you're a caveman. We all
| got to know the world by listening to others.
|
| The subject of discussion is if and when external forces
| can alter those ideologies at will. And I have not seen any
| evidence to support the feasibility of that at scale.
| surebut wrote:
| That's their self-selected goal, sure. Fortunately for
| humanity, the main drivers are old as hell; physics is ageist.
| Data centers are not a fundamental property of reality. They
| can be taken offline - by sabotage, or just by the loss over
| time of the skills to maintain them, leading to cascading
| failures. A new pandemic could wipe out billions and the loss
| of service workers could cause it all to fail. Wifi satellites
| can go unreplaced.
|
| They're a long, long way from a "protomolecule" that just
| carries on infinitely on its own.
|
| CEOs don't really understand physics. Signal loss and such.
| Just data models that only mean something to their immediate
| business motives. They're more like priests; well versed in
| their profession, but oblivious to how anything outside that
| bubble works.
| glenstein wrote:
| >History is therefore a constant act of reinterpretation and
| triangulation, which is something that LLMs, as linguistic
| averaging machines, simply cannot do.
|
| I know you weren't necessarily endorsing the passage you
| quoted, but I want to jump off and react to just this part for
| a moment. I find it completely baffling that people say things
| in the form of "computers can do [simple operation], but
| [adjusting for contextual variance] is something they simply
| cannot do."
|
| There was a version of this in the debate over "robot umps" in
| baseball that exposed the limitation of this argument in an
| obvious way. People would insist that automated calling of balls
| and strikes loses the human element, because human umpires
| could situationally squeeze or expand the strike zone in big
| moments. E.g. if it's the World Series, the bases are loaded,
| the count is 0-2, and the next pitch is close, call it a ball,
| because it extends the game and you linger in the drama a bit
| more.
|
| This was supposedly an example of something a computer could
| not do, and frequently when this point was made it induced lots
| of solemn head nodding in affirmation of this deep and
| cherished baseball wisdom. But... why TF not? You actually
| _could_ define high leverage and close game situations, and
| define exactly how to expand the zone, and machines could call
| those too, and do so _more accurately_ than humans. So they
| could _better_ respect contextual sensitivity that critics
| insist is so important.
|
| Even now, in fits and starts, LLMs are engaging in a kind of
| multi-layered triangulating, just to even understand language.
| They can pick up on multilayered things like subtext, balance
| of emphasis, unstated implications, or connotations, all
| filtered through rules of grammar. That doesn't mean they are
| perfect, but calibrating for context or emphasis that is most
| important for historical understanding seems absolutely within
| machine capabilities, and I don't know what other than punch
| drunk romanticism for "the human element" moves people to think
| that's an enlightened intellectual position.
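|
| A hypothetical sketch of such a rule (thresholds and factors
| invented for illustration, not from any real system):
|
|   def zone_half_width(inning, score_diff, balls, strikes,
|                       runners_on):
|       # Half-width of the called strike zone in feet, adjusted
|       # for game context. The plate is 17 inches wide.
|       base = 17 / 12 / 2  # ~0.708 ft
|       leverage = 0.0
|       if inning >= 9 and abs(score_diff) <= 1:
|           leverage += 1.0  # late and close
|       if runners_on >= 2:
|           leverage += 0.5  # traffic on the bases
|       if balls == 0 and strikes == 2:
|           # 0-2 count: squeeze the zone so a borderline pitch
|           # becomes a ball and the at-bat continues.
|           return base * (1 - 0.05 * leverage)
|       return base * (1 + 0.03 * leverage)
|
|   # World Series deciding game, tie score, bases loaded, 0-2:
|   print(zone_half_width(9, 0, 0, 2, 3))  # slightly narrower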
| delusional wrote:
| > "[...] dynamically changing the zone is something they
| simply cannot do." But... why TF not?
|
| Because the computer is fundamentally knowable. Somebody
| defined what a "close game" is ahead of time. Somebody defined
| what a "reasonable stretch" is ahead of time.
|
| The minute it's solidified in an algorithm, the second
| there's an objective rule for it, it's no longer dynamic.
|
| The beauty of the "human element" is that the person has to
| make that decision in a stressful situation. They will not
| have to contextualize it within all of their other decisions;
| they don't have to formulate an algebra. They just have to
| make a decision they believe people can live with. And then
| they will have to live with the consequences.
|
| It creates conflict. You can't have a conflict with the
| machine. It's just there, following rules. It would be like
| having a conflict with the bureaucrats at the DMV; there's no
| point. They didn't make a decision, they just execute on the
| rules as written.
| Spivak wrote:
| You also can't argue over whether the machine made the
| right call over a pint of beer. Or yell at the Robo-Ref
| from the stands. It not only makes the game more sterile and
| less true to the "rule of cool", but it also
| diminishes the entertainment value for the people the game
| is actually for.
|
| Sports is literally, in the truest sense of the word,
| reality TV and people watch reality TV for the drama and
| because it's messy. It's good tea, especially in golf.
| jadbox wrote:
| Could we maybe say that an LLM which can update its own
| model weights using its own internal self-narrating log may
| be 'closer' to being dynamic? We can use Wolfram's
| computational irreducibility principle, which says that even
| simple rule-based functions will often produce unpredictable
| patterns in their return values. You could say that computer
| randomness is deterministic, but could we maybe say that
| ontologically a Quantum-NN LLMs could perhaps be classified
| on paper as being Dynamic? (unless you believe that quantum
| computing is deterministic).
| dale_glass wrote:
| > The deeper risk is not that AI will replace historians or
| translators, but that it will convince us we never needed them in
| the first place.
|
| I think the bigger danger would be that they'd lose the
| unimportant grunt work that helped the field exist.
|
| Fields need a large amount of consistent routine work to keep
| existing. Like when analog photography got replaced by digital. A
| photo lab can't just pay the bills with the few pro photographers
| that refuse to move to digital or have a specific need for
| analog. They needed a steady supply of cat pictures and terrible
| vacation photos, and when those dried up, things got tough.
|
| So things like translation may go that way too -- those that need
| good quality translation understand the need very well, but
| the industry was always supported by a lot of less demanding
| grunt work that has now just gone away.
| spwa4 wrote:
| "Convince us"? There's no need for that at all. We've done all
| that ourselves.
|
| Just check the latest budgets for university history
| departments.
| fpoling wrote:
| There is a nice example of how, even after human input, the
| translation misses things.
|
| For example, the price of the fish was stated as 2.40 rubles.
| This is meaningless out of context and does not explain why it
| was very expensive for the old man who checked the fish first.
| But if one knows that this was Soviet SF about life in a small
| Soviet town of that time, then one also knows that a monthly
| pension was around 70-80 rubles - roughly 2.5 rubles a day -
| so the fish cost about a day's income.
|
| Then one needs to remember that the only payment method was
| cash, and people did not go out with more money than they
| expected to spend, to minimize the loss in case of thievery
| etc. Banking was practically non-existent, so people kept
| their savings in cash at home. That explains why Lozhkin went
| home for the money.
| golergka wrote:
| This is also something that an LLM would be able to easily
| explain. I'm pretty confident that all 4o-level models know
| these facts without web search.
| dimitri-vs wrote:
| Not in my experience. Unless you explicitly prompt and bias the
| model for that kind of deep answer (which you won't unless you
| are already experienced in the field), you're going to get some
| sycophantic, superficial drivel that's only slightly better
| than the Wikipedia page.
| 7734128 wrote:
| Would you want a translator to somehow jam that context into
| the story? Otherwise, I fail to see how it's an issue of
| translation.
|
| If I had learned Russian and read the story in the original
| language, I would be in the same position regardless.
| dale_glass wrote:
| Some translators may add notes on the bottom of the page for
| things like that.
|
| It's going to vary greatly, of course. Some will try to
| culturally adapt things. Maybe convert to dollars, maybe
| translate to "a day's wages", maybe translate as it is then
| add an explanatory note.
|
| You might even get a preface explaining important cultural
| elements of the era.
| AlotOfReading wrote:
| It's pretty common for translators to do exactly that,
| usually via either footnotes or context created by deliberate
| word choice. Read classical translations for example and
| they'll often point out wordplay in the original language
| that doesn't quite work in translation. I've even seen that
| in subtitles.
| orbital-decay wrote:
| LLMs tend to imitate that practice, e.g. Gemini seems to be
| doing that by default in its translations unless you stop
| it. The result is pretty poor though - it makes trivial
| things overly verbose and rarely gets the deeper cultural
| context. The knowledge is clearly there: if you ask it
| explicitly, it does much better, but the generalization
| ability is still nowhere near the required level, so it
| struggles to connect the dots on its own.
| AlotOfReading wrote:
| I was going to say that I'm not certain the knowledge is
| there and tried an experiment: If you give it a random
| bible passage in Greek, can it produce decent critical
| commentary of it? That's something it's _certainly_
| ingested mounds of literature on, both decent and
| terrible.
|
| Checked with a few notable passages like the household
| codes and yeah, it does a decent (albeit superficial)
| job. That's pretty neat.
| fpoling wrote:
| Sometimes when re-publishing an older text, references are
| added to clarify meanings that people would otherwise miss
| for lack of knowledge of the cultural references.
|
| But here there is no need to even add references. A good
| translation may reword "too expensive!" into "what? I can
| live a whole day on that!" to address things like that.
| ysofunny wrote:
| the more we can all dump EVERYTHING we got into the same giant
| mega AI,
|
| the better off we will all be.
|
| but of course, this goes directly against how so many people
| think, that I won't even bother.
| derbOac wrote:
| > But as Goethe said, every fact is already a theory. That's why
| facts cannot save you. "Data" will not save you. There can be no
| objective story since the very act of assembling facts requires
| implicit beliefs about what should be emphasized and what should
| be left out.
|
| There may be no objective story, but some stories and fact
| theories are more rigorous and thoroughly defended than others.
| stereolambda wrote:
| Many historians work on manuscripts and/or large archives of
| documents that might not be digitized, let alone accessible on
| the internet. The _proportion_ of human knowledge that is
| available on the internet, especially if we further constrain
| it to English-language, non-darkweb, non-pirated sources, is
| greatly exaggerated. So there are infrastructure problems that
| LLMs by themselves don't solve.
|
| On the other hand, people tend to be happy with a history that
| ignores 90+% of what happened, instead focusing on a "central"
| narrative, which traditionally focussed on maybe 5 Euro-Atlantic
| great powers, and nowadays somewhat pretends not to.
|
| That being said, I don't like the subjectivist take on historical
| truth advanced by the article. Maybe it's hard to positively
| establish facts, but that doesn't mean one cannot negatively
| establish falsehoods, and this matters more in practice, in
| the end. This feels salient when touching on the opinions of
| Carr, a Soviet-friendly historian.
| dguest wrote:
Here's the paper this guy seems to be reacting to:
| https://arxiv.org/abs/2507.07935v1
| tombert wrote:
| I'm sorry, but as someone who genuinely likes AI, I still have
| to call bullshit on Microsoft's study on this. I use
| ChatGPT all the time, but it's not going to "replace web
| developers" because that's almost a statement that doesn't even
| make sense.
|
| You see all these examples like "I got ChatGPT to make a JS space
| invaders game!" and that's cool and all, but that's sort of
| missing a pretty crucial part: the beginning of a new project is
| _almost always_ the easiest and most fun part of the project.
| Showing me a robot that can make a project that pretty much any
| intern could do isn't so impressive to me.
|
| Show me a bot that can maintain a project over the course of
| months and update it based on the whims of a bunch of incompetent
| MBAs who scope creep a million new features and who don't
| actually know what they want, and I might start worrying. I don't
| know anything about the other careers so I can't speak to that,
| but I'd be pretty surprised if "Mathematician" is at severe risk
| as well.
|
| Honestly, is there any reason for Microsoft to even be honest
| with this shit? Of course they want to make it look like their AI
| is so advanced because that makes them look better and their
| stock price might go up. If they're wrong, it's not like it
| matters, corporations in America are never honest.
| ping00 wrote:
| This is a very tangential comment, but I read the short story
| (https://www.dropbox.com/scl/fi/8eh2woz05ndfxinbf9vdh/Goldfis...)
| and loved it (took me around 15 minutes to read).
|
| Went down a bit of a rabbit hole on the original author, Kir
| Bulychev, and saw that he wrote many short stories set in Veliky
| Guslar (which explained the name Greater Bard). The overall tone
| is very very similar to R.K. Narayan's Malgudi Days (albeit
| without the fantastical elements of talking goldfish), which is a
| favorite of mine. If anyone wants to get into reading some easily
| approachable Indian English literature, I always point them to
| Narayan and Adiga (who wrote The White Tiger).
|
| On that note, does anyone else have any recommendations on
| authors who make use of this device (small/mid-sized city which
| serves as a backdrop for an anthology of short stories from a
| variety of characters' perspectives)?
___________________________________________________________________
(page generated 2025-08-04 23:01 UTC)