[HN Gopher] Three things for the next 100 years of Computer Science
       ___________________________________________________________________
        
       Three things for the next 100 years of Computer Science
        
       Author : AstixAndBelix
       Score  : 105 points
       Date   : 2022-12-27 12:47 UTC (10 hours ago)
        
 (HTM) web link (but-her-flies.bearblog.dev)
 (TXT) w3m dump (but-her-flies.bearblog.dev)
        
       | [deleted]
        
       | 082349872349872 wrote:
       | How about some interesting new theories of Computer Science, so
       | we have something more than plug-and-chugging cottage-type-
       | systems to do with these theorem provers?
        
         | weakfortress wrote:
         | It would be nice to see theoretical computer science pushed
         | forward again. I'm certainly tired of every other paper being
         | some spin on the same old ML nonsense. Make Computer Science
         | Computer Science again.
        
       | j2kun wrote:
       | 2122 for Homomorphic Encryption?? I'm working on it right now! I
       | hope it's ready in the next ten years.
        
         | archagon wrote:
         | What is missing today that will make it viable 10 years from
         | now?
        
           | fooker wrote:
           | The ability to perform arbitrary computation on encrypted
           | data efficiently.
           | 
           | I don't think we'll get there in 10 years.
           | 
           | This requires fundamental innovations all the way down.
        
       | GMoromisato wrote:
        | Here are six things for the next 100 years:
       | 
       | 1. A visual programming language. In 2122 you'll be able to use
       | 2D or 3D space for your program, not just a 1D character stream.
       | 
       | 2. A natural language programming language. In 2122 you will no
       | longer have to learn to program.
       | 
       | 3. A simple parallel programming language. In 2122 a program for
       | 1,000 cores is as easy as a single-threaded program.
       | 
       | 4. A universal optimizing compiler. In 2122, all languages will
       | be as fast as machine code.
       | 
       | 5. A 1000x lossless compression algorithm. In 2122 diskspace and
       | bandwidth are no longer constraints on anything.
       | 
       | 6. A solution to the "Mythical Man Month" problem. In 2122,
       | adding more programmers to a project will actually make it go
       | faster.
       | 
       | We've been dreaming about these "breakthroughs" ever since the
       | beginning. They are no more likely to happen in 2122 than
       | perpetual motion or the philosopher's stone.
        
         | zirgs wrote:
         | Visual programming languages already exist.
        
           | bawolff wrote:
           | The breakthrough would be making one that professional
           | programmers actually like.
        
             | captaincaveman wrote:
              | yeah, it could use something like pictorial writing. Let's
              | call it hieroglyphics, or better still a logogram language
              | where each symbol means a thing, like Chinese. Hmm, or
              | maybe we could create an alphabet of characters where on
              | their own they don't mean anything, but with a simple small
              | set we could combine them to create as many words as we
              | want and we could program with ...
              | 
              | Half joking; I don't really know much about languages, I
              | just don't see how a graphical programming model would be
              | better than a text-based one in expressiveness or
              | compactness.
        
         | weakfortress wrote:
         | (6) is already solved. Unfortunately, I believe you are
         | correct. It will take another 100 years for the industry to
          | fire 99% of PMs, dump agile and "professional agile
         | consultants", put biz dev in an anechoic chamber, and use the
         | money saved by getting rid of the navel gazers to pay engineers
         | and hire more.
        
         | winstonprivacy wrote:
         | > 5. A 1000x lossless compression algorithm. In 2122 diskspace
         | and bandwidth are no longer constraints on anything.
         | 
          | This seems like it would violate a fundamental limit of
          | information theory: a data stream can only be compressed down
          | to the minimum number of bits necessary to communicate the
          | non-random information present within it.
         | 
          | To state it another way, let us suppose that an excellent
          | compression ratio in 2022 is ~70% (an oversimplification, as
          | the ratio varies wildly based on the source data). 100 MB of
          | data thus compresses down to 30 MB.
          | 
          | But with a 1000x compression ratio, that same 100 MB would
          | compress down to just 100 KB.
          | 
          | I am not aware of any data streams carrying so little
          | information that such a ratio would be achievable.
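          | 
          | A tiny plain-Python sanity check of those numbers (just the
          | arithmetic as stated above, not a real compressor):
          | 
          |     original_mb = 100
          | 
          |     # "~70% compression" read as a 70% size reduction:
          |     print(original_mb * (1 - 0.70))  # 30.0 MB
          | 
          |     # a hypothetical 1000x compression factor:
          |     print(original_mb / 1000)        # 0.1 MB, i.e. 100 KB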
        
         | fooker wrote:
         | Good one!
         | 
         | I think the optimizing compiler one might actually work out.
         | 
         | However, I guess there might be some goalpost shifting around
         | what compilers are expected to do.
        
       | jll29 wrote:
        | It took 50 years to get printing right (from dot matrix to laser
        | printers that stopped jamming), and things are still far from
        | perfect. So I don't expect programmers to become redundant
        | anytime soon (just last week, I recommended "software engineer or
        | undertaker" to a student as safe jobs from which to choose).
        | 
        | It's interesting to make such a list, but the 3 items seem a bit
        | arbitrary - what about user interfaces, for example?
        
         | AstixAndBelix wrote:
         | Of course they're arbitrary, they're just 3 things out of the
         | thousands that will happen. You can share your ideas too!
        
       | civilized wrote:
       | Homomorphic encryption is extremely inefficient. Is there any
       | reason to believe this will improve enough for it to be widely
       | used?
        
         | bawolff wrote:
          | In 100 years? Yeah, there is plenty of reason to think it's
          | possible that much of the overhead would be gone. Anything can
          | happen in 100 years, and even in the near term lots of progress
          | is being made (albeit we are still far from practicality).
          | 
          | The more fundamental limit is probably that you can't do lots
          | of the tricks that make normal programs efficient. You can't
          | exit early or have data-dependent branches.
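          | 
          | To make the branching point concrete, here's a toy sketch
          | (plain Python ints standing in for ciphertexts, so this is
          | only the shape of the trick, not real FHE): instead of
          | branching on data, you evaluate both sides and blend them
          | with the condition bit.
          | 
          |     def oblivious_select(cond_bit, if_true, if_false):
          |         # cond_bit is 0 or 1; under FHE it stays encrypted,
          |         # so the machine never learns which side "won".
          |         return cond_bit*if_true + (1 - cond_bit)*if_false
          | 
          |     x = 7
          |     result = oblivious_select(int(x > 5), x*2, x + 100)
          | 
          | You pay for both branches every time, and every loop has to
          | run to its worst-case bound.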
        
       | MontyCarloHall wrote:
       | These all sound like things that are plausible within the next 10
       | years, not 100.
       | 
       | In 100 years, I expect programming languages will be mostly
       | obsolete, since computers will be able to translate natural
       | language into machine instructions with 100% effectiveness.
        
         | ebjaas_2022 wrote:
          | That's a prediction you'll make only if you're not a programmer
          | yourself. The entire purpose of a programming language, as
          | opposed to a natural language, is to communicate precise intent
          | to a computer. A programming language is designed for that, and
          | a natural language isn't.
         | 
         | Even if you could, you would not want to use natural language
         | to instruct a computer. You don't want the AI, or the computer,
         | to guess your intent.
        
         | weakfortress wrote:
         | I can't wait for the next round of space-MBAs to suggest that
         | COBOL 2122 will solve all no-code problems and finally allow
         | them to reduce developer salaries to blue collar levels.
        
         | eterps wrote:
         | How would you prevent errors?
        
           | MontyCarloHall wrote:
           | The same way an engineering manager prevents errors when
           | telling a very talented individual contributor what to code.
           | If a natural language statement is ambiguous, the computer
           | (AKA talented IC) would ask you to clarify it. Unlike the IC,
           | however, the computer would always be aware of things like
           | time complexity, edge cases, etc. of the code it writes, and
           | could alert the user accordingly.
        
         | Beldin wrote:
         | The vision of computer-supported and -verified proofs of maths
          | was old when I studied. And I'm old nowadays - or, at least,
         | not young anymore.
         | 
         | There has been some progress in the last couple of decades, but
         | not so much that you'd expect this to be completely solved in
         | the next decade.
        
         | otabdeveloper4 wrote:
         | > computers will be able to translate natural language into
         | machine instructions with 100% effectiveness
         | 
         | People have been saying this since 1960 at the very least.
         | 
         | In reality, it is vastly more likely that humans will finally
         | learn to speak computer.
        
         | AstixAndBelix wrote:
         | After more than 50 years, programmers of all skill levels still
         | make the same mistakes. I wouldn't bet any of this stuff will
         | become mainstream in 10 years. The tech has to mature to a
         | level where it's as easy to create an ironclad function as it
         | is to make something in Scratch.
        
           | 082349872349872 wrote:
           | People who think we've made serious qualitative advances in
           | debugging would do well to reread H.H. Goldstine's and J. von
           | Neumann's thoughts on the topic from 1947.
           | 
           | What we have done well is to quantitatively scale so that
           | we're applying the classical debugging techniques to Big
           | Balls of Mud, and not just a few lines of assembly.
           | 
           | (the record among my colleagues was writing, then fixing, 3
           | bugs in a single line of assembly)
        
             | ben_w wrote:
             | > reread H.H. Goldstine's and J. von Neumann's thoughts on
             | the topic from 1947.
             | 
             | Do you have a link? Searching only gave me irrelevant
             | results.
        
               | 082349872349872 wrote:
               | IIRC it was
               | https://www.sothebys.com/en/auctions/ecatalogue/2018/the-
               | lib... (or at least in that series) and finding a
               | samizdat'ed scan took a good deal of work (and maybe even
               | some poisk'ing).
               | 
               | Best I can do at the moment is to recommend searching
               | from http://www.cs.tau.ac.il/~nachumd/term/EarlyProof.pdf
               | which was one of the papers I passed on the way... (IIRC
                | Goldstine and von Neumann have diagrams resembling Figure
               | 2; I don't recall if Figure A had been anticipated or not
               | -- remember all these people had been attending the same
               | conferences, so it's very likely)
               | 
               | Basically, back in the rosy-fingered dawn of electronic
               | computation, they already had in mind "know what your
               | state _should_ look like at all points, so you can binary
                | chop to find what control flow first made it go wonky".
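                | 
                | (In modern dress that idea is just a bisection over
                | checkpoints. A rough Python sketch, with "checkpoints"
                | and "looks_sane" standing in for whatever
                | instrumentation you actually have:
                | 
                |     def first_bad(checkpoints, looks_sane):
                |         # Assumes the run starts sane and the last
                |         # checkpoint is already known to be bad;
                |         # finds the first point where the state
                |         # stopped looking like it should.
                |         lo, hi = 0, len(checkpoints) - 1
                |         while lo < hi:
                |             mid = (lo + hi) // 2
                |             if looks_sane(checkpoints[mid]):
                |                 lo = mid + 1
                |             else:
                |                 hi = mid
                |         return checkpoints[lo]
                | 
                | Same binary chop, just with fancier tooling.)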
               | 
               | (a question I have not investigated: how well developed
               | were debugging techniques in the card computing era? I've
               | read horror stories of Los Alamos computations being
               | corrected on-the-fly, with "new code" cards on a
               | different coloured stock being run at the same time
               | through machines that were still busy processing "old
               | code" cards on normal coloured stock.
               | 
               | And in terms of "know what your state should look like",
               | card machines did have facilities to abort jobs if the
               | input card deck should fail simple sanity check logic,
               | implying that's been a thing since, probably, Hollerith
               | ca. 1890.)
               | 
               | [Edit: I'll have to dig it out of my library, but an old
               | book my spouse got me on Naval computation, from back
               | when "computer" was a job title, not an object, talks
               | about how to organise jobs that would take a week or two
               | to run. I'm sure they had plenty of double-checking going
               | on to make sure a slipup on Wednesday wouldn't result in
               | complete garbage by the following Monday.]
        
               | ngneer wrote:
               | https://ieeexplore.ieee.org/document/194051
        
           | ben_w wrote:
            | 50 years ago some of the mistakes were "I forgot to label my
           | punched cards and now they're in the wrong order", or "we
           | have a new computer and we now have to rewrite everything
           | from scratch, because C has only just been invented 6 months
           | ago and the book about it hasn't reached our physical library
           | yet so of course we didn't use that for the previous
           | version".
           | 
           | Conversely, one of the problems I had in a previous place was
           | the code being unclear because another developer kept all the
           | obsolete code in the code base "for reference", and also put
           | 1000 lines inside a single if clause, and duplicated class
           | files because he didn't want to change some private: access
            | modifiers to public:. Those kinds of problems weren't
            | possible with such limited hardware and languages.
        
             | pwinnski wrote:
             | Putting functions in the wrong place is still a thing.
             | Having to refactor or rewrite things because of decisions
             | made at an executive level is still a thing.
             | 
             | Over-complicating code has always been a thing, too.
             | Admittedly, "Information Hiding" has only really been a
             | thing since Parnas, but I think it's more true than not
             | that most developers are making the same categories of
             | mistakes developers have always made, and when new
             | programming languages and paradigms are created
             | specifically to avoid those errors, they bring with them
             | new categories of errors.
             | 
             | Syntactically, the majority of code today looks completely
             | different from 50-year-old code, of course. But I'm not
             | sure things have really changed all that much on the human
             | side.
        
         | 082349872349872 wrote:
         | Similar reasoning might suggest the mathematicians of 2022 are
         | 100% effective in writing proofs when compared to the
         | mathematicians of 1922?
        
         | pjmorris wrote:
         | I feel like I've heard "In 10 years, I expect programming
         | languages will be mostly obsolete, since computers will be able
         | to translate natural language into machine instructions with
         | 100% effectiveness." every decade for the last four decades. I
          | feel like the problem with such predictions is that imprecision
          | is a feature of natural languages that is incompatible not
          | only with the precision required by our current computing
          | hardware and programming languages but also with the precision
          | required for understanding. If you could translate natural
         | language into machine language with 100% effectiveness, you
         | could get rid of lawyers, judges, juries, and courts, not just
         | programmers. I don't see that happening.
        
           | ryanklee wrote:
           | It's a ridiculous assertion. There is no correct,
           | deterministic path from natural language expressions to
           | directed machine behavior, as there is no correct,
           | deterministic natural language expression in the first place.
            | It's not static, it has varying levels of precision, meaning
            | is context- and listener-dependent, natural grammars are
            | descriptive not prescriptive, speaker perspectives are
            | regional, historical, fluid, metacognitive, and full of
            | ambiguity as a feature, and always in need of hermeneutic
            | methods to adjust and adjudicate interpretations.
           | 
            | There is almost nothing about natural language use that ought
            | to lead anyone to believe we are going to get computers to
            | understand our requirements any better than we do ourselves,
            | which is often very poorly, even when we try very hard.
        
         | Existenceblinks wrote:
          | I would instead make natural language obsolete and have a few
          | proper, precise specs for how to name things, behaviors, etc.
        
         | codethief wrote:
         | > since computers will be able to translate natural language
         | into machine instructions with 100% effectiveness
         | 
         | So they will translate imprecise natural language to buggy
         | business logic, great. :)
         | 
         | In my experience, between envisioning a new product/feature (at
         | the product management level) and actually implementing it (at
         | the engineering level) there's always the stage where the
         | engineers discover that the PM's requirements and acceptance
         | criteria are not precise enough and there are dozens of edge
         | cases where additional input by the PM is needed in order to
         | decide how the software should behave.
         | 
         | Sure, the AI could recognize those edge cases, ask the PM for
         | what they want, and therefore help them spec out the entire
         | application in natural language. That spec will then become the
         | source of truth, i.e. the thing to version-control. I'm not
         | sure this would be a very efficient process, though. At the end
         | of the day, natural language will always be ambiguous and so,
         | in the same way as the language of RFCs is regulated, one might
         | need to restrict oneself to a more well-defined subset of
         | natural language that avoids ambiguities as far as possible.
         | But then you're essentially back to programming.
        
           | MontyCarloHall wrote:
           | >That spec will then become the source of truth, i.e. the
           | thing to version-control.
           | 
           | How is this a problem? This is already how PMs and
           | engineering managers work: they tell individual contributors
           | what to implement, hashing out any ambiguities or edge cases
           | via natural language dialogs. They have no need to use
           | version control on the code level; simply tracking changes to
           | their high level spec works fine.
           | 
           | >one might need to restrict oneself to a more well-defined
           | subset of natural language that avoids ambiguities as far as
           | possible. But then you're essentially back to programming.
           | 
           | Again, PMs and engineering managers get by just fine with
           | standard natural language.
        
             | codethief wrote:
             | > How is this a problem?
             | 
             | I never said this was a problem.
             | 
             | > Again, PMs and engineering managers get by just fine with
             | standard natural language.
             | 
             | I agree only to some extent. At least in my experience,
             | edge cases are often hashed out between the PM and
             | engineers in spontaneous meetings, Slack conversations, or
             | <insert medium of choice>. If ticket descriptions /
             | acceptance criteria then get adapted accordingly, great.
             | (Most of the time they don't - since we're so fucking
             | agile.) Though, even if the descriptions/criteria do get
             | updated, they often still lack precision and require
             | interpretation - which is no problem, since all engineers
             | now know what is meant by the ticket description and how to
             | handle the edge case. However, if your entire spec is
             | natural-language-based and you want your AI to reliably
             | generate the same code every single time without needing to
             | provide additional context, you better make sure to remove
             | any ambiguity from the spec. But removing ambiguity from
             | natural language basically comes down to speaking like a
              | mathematician / programmer, so the natural language spec
             | will end up being rather similar to (pseudo) code again. So
             | you're essentially back to programming.
        
             | 1270018080 wrote:
             | > PMs and engineering managers get by just fine with
             | standard natural language.
             | 
             | ...do they? Poorly defined requirements done through
             | natural language could possibly be the most costly and
             | wasteful aspect of any business.
        
             | dragonwriter wrote:
             | > Again, PMs and engineering managers get by just fine with
             | standard natural language.
             | 
              | Actually, they often have _lots_ of problems with "standard
              | natural language", which is why large projects tend to
              | develop considerable volumes of specialized jargon (often
              | multiple layers, such as project-specific, organization-
              | specific, etc.). That jargon is either subject to extensive
              | documentation (which often becomes a maintenance issue) or
              | becomes a contributor itself to miscommunication, as
              | turnover and communications silos open up understanding
              | gaps and lead to differing understandings among project
              | team members and between the project team and other
              | stakeholders.
             | 
             | In fact, there is a lot of specialized, formalized,
             | controlled language that is standardized beyond the
             | organizational level (including visual languages), which is
             | designed to mitigate parts of this problem (though it often
             | fails, in part because usage in practice doesn't
             | consistently follow the formal standard.)
        
       | blippage wrote:
       | "intuitive and feature rich formal verification frameworks for
       | the major programming languages"
       | 
       | I doubt that such things are really possible. I see two problems:
       | 1. computers in general don't have domain knowledge, so they
       | can't really say what "correct" means; 2. computers don't have a
       | conceptual model of what it is a human is trying to do, so they
       | can't determine if that way is correct.
       | 
       | I thought about this when I was playing with microcontrollers.
       | There are all sorts of bizarre flags and ways and means of
       | configuring and controlling a peripheral. If there was only one
       | correct way of doing things, then it wouldn't make much sense to
       | give such fine-grained control.
       | 
       | In one particular case I was writing to a peripheral. Normally
       | one would block processing until the transfer was complete.
       | However, I needed the operation to be done frequently, and
       | blocking would have consumed much-needed computing cycles. My
       | solution was to simply write to the peripheral. I knew that it
       | would be complete by the time I made the next transfer.
       | 
        | Well, the thing is, a checker just can't reason in that way. It
        | takes a human to do that. Humans can of course be wrong, and
        | often are, but them's the breaks.
        | 
        | Formal checking may be able to establish a few things, but as a
        | general exercise, there is no more chance of some kind of AI
        | proving your program to be right than there is of solving the
        | halting problem.
        
       | slmjkdbtl wrote:
        | I hope macOS can get rid of .DS_Store in 100 years
        
       | netfortius wrote:
       | Next 100 years? What about fixing this [1] in the next 50?
       | 
       | [1] https://news.ycombinator.com/item?id=34146054
        
         | cokeandpepsi wrote:
          | I don't know what to tell you. Don't use a shitty web browser?
        
           | netfortius wrote:
            | It's not only about Safari, if you get the thread reference,
            | i.e. the issue is not confined to browser design only. It
            | happens at times/sites with Kiwi, Firefox, Brave, Chrome,
            | Adblock Browser & Samsung Internet.
        
         | tgv wrote:
         | A nitpick from someone who can't be bothered to write a better
         | UX?
        
       | khaledh wrote:
       | Computers were invented in the 1940s. We're only about 80 years
       | out from that period and look where we are now: a worldwide
       | network of computing devices from tiny watches to planet-scale
       | clouds.
       | 
       | In 100 years I expect there will be a fundamental shift in how
       | computing is done. I don't know what that will look like, but it
       | will be nothing like what we know today.
        
         | usrusr wrote:
          | In 1953, people looking back at the first 50 years of powered
          | flight imagined the wildest things happening in the next 50.
          | Today we have B-52s, built no later than 1963, scheduled to fly
          | well into the 2050s. To me, the modesty of the article's
          | predictions was a pleasant surprise after the expectations set
          | by the headline.
        
           | _a_a_a_ wrote:
            | My understanding is this is sort-of true. There have been
            | continual upgrades for the entire life of the plane, so you
            | can't point to the first ever and assume it's anywhere near
            | the same as one which flies today. (From memory; someone more
            | knowledgeable should chime in here.)
        
             | usrusr wrote:
              | There have been upgrades of various kinds; they probably
              | don't fly with Garmins velcroed to the cockpit. Weapons
              | load interfaces have surely seen considerable
              | modernization, and they keep adapting to the very latest
              | munitions (e.g. B-52s from 60 years ago are what they use
              | to test-fire hypersonic missile prototypes). An engine
              | update seems to finally be signed, to a variant of a design
              | from the mid-nineties, after several decades of studies and
              | reviewing offers, operational 2028 to 2035 [0]. Wheels do
              | seem to turn a little slower than they used to.
             | 
              | What your memory probably brought up was the fact that all
              | the 52s flying today are 52H, the final iteration following
              | A through G with considerable changes between them. But the
              | H is the one built 1961 through 1963; from today's
              | perspective they are not really much younger than their
              | mothballed/disarmed siblings. The reason I suspect this is
              | what you are talking about is that the same thing happened
              | to me: I roughly knew about the A through H thing but
              | placed the H _much_ closer to the present and was very
              | surprised on a recent read-up.
             | 
             | [0] https://www.afmc.af.mil/News/Article-
             | Display/Article/2789821...
        
           | khaledh wrote:
           | True, but we also have planes that can almost fly themselves
           | (for the most part), drones that are remotely controlled,
           | stealth jets, and rockets that can reach Mars.
        
             | kens wrote:
             | Historical note: remote-controlled drones are not a new
             | thing. The first remote-controlled drone aircraft was the
             | RP-1 (Radio Plane 1), demonstrated in 1938. Almost 15,000
             | Radio Plane drones were built in World War II.
             | 
             | Reference: https://www.historynet.com/drones-hollywood-
             | connection/?f
        
           | dmarchand90 wrote:
           | Totally agree. The biggest fallacy in tech circles is
           | confusing s-curves with exponentials
        
             | usrusr wrote:
             | Nicely condensed, I hope I'll remember for future use. But
             | I'd replace the "in tech circles" with "about technological
             | development", because I don't think that people further
              | away from tech are any less susceptible. (But on the other
              | hand they certainly don't indulge in rose-tinted
              | extrapolation as much...)
        
         | otabdeveloper4 wrote:
         | It has been how many years since the wheel was invented?
         | 
         | Somehow no fundamental shifts in how rolling on the ground is
         | done are foreseen, despite the time frame.
        
           | bawolff wrote:
            | I don't know, I feel like going from a wheelbarrow to a
            | train (or car) was a pretty big shift.
        
           | lnenad wrote:
           | That's a pretty shitty comparison, comparing the concept of
           | computers to an object.
        
         | dinkumthinkum wrote:
         | But consider that most fundamental advances in computer
         | science, other than maybe graphics, happened by the 1970s. What
         | we have had since have been largely incremental improvements.
        
         | nonethewiser wrote:
         | Widespread brain implants. Which might entail what is
         | effectively telepathy among other weird shit that seems
         | impossible.
         | 
          | I'm not confident that will be the case. But it's plausible from
         | a technical standpoint. On a far shorter timeline than 100
         | years.
        
           | [deleted]
        
           | tgv wrote:
           | Not telepathy: you wouldn't even know how to define it.
           | "Sub"-vocal communication over radio is totally feasible,
           | though.
        
           | Mistletoe wrote:
           | Outside of neuralink fantasy PR, is it really possible to
           | interface a computer with the brain like that? I feel like we
           | don't have the slightest clue how the brain works and then I
           | don't see how we could ever make electronics interface with
           | synapses in a way that makes you see video, audio, etc.
           | 
           | It seems I'm not alone and Noam Chomsky feels the same.
           | https://www.inverse.com/article/32395-elon-musk-neuralink-
           | no...
        
             | dwringer wrote:
              | I can envision a system interfacing with advanced
              | language/image models, using genetic algorithms to spit out
              | candidate outputs and using biometric readings (in the vein
              | of an fMRI, or maybe just a polygraph) to judge how
              | satisfied a user is with various outputs in different
              | categories, then rapidly using the feedback to develop an
              | output that most satisfies the user's desire. How well this
              | could work, or _if_ it would work in practice, seem to be
              | the biggest questions, but I think we definitely have
              | avenues worth exploring using existing technology. I have
              | to say the idea of such a technology is a little
              | disquieting if it were used the way things like polygraphs
              | already are, and optimized for something other than the
              | user's preferences.
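              | 
              | A toy sketch of that loop (plain Python, with a made-up
              | closeness score standing in for any real biometric
              | signal, and random jitter standing in for the generative
              | model):
              | 
              |     import random
              | 
              |     def evolve(pool, satisfaction, mutate, rounds=10):
              |         # Each round: rank candidates by the feedback
              |         # signal, keep the top quarter, refill the pool
              |         # with mutations of the keepers.
              |         for _ in range(rounds):
              |             pool.sort(key=satisfaction, reverse=True)
              |             keep = pool[: max(2, len(pool) // 4)]
              |             pool = keep + [mutate(c) for c in keep * 3]
              |         return max(pool, key=satisfaction)
              | 
              |     target = [0.2, 0.8, 0.5]   # stand-in "preference"
              |     sat = lambda c: -sum((a - b) ** 2
              |                          for a, b in zip(c, target))
              |     mut = lambda c: [x + random.gauss(0, 0.1) for x in c]
              |     seed = [[random.random() for _ in range(3)]
              |             for _ in range(8)]
              |     print(evolve(seed, sat, mut))
              | 
              | Whether a biometric signal is clean enough to drive that
              | ranking is, of course, the open question above.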
             | 
             | EDIT: Now I can't shake the mental image of Alex from _A
             | Clockwork Orange_ strapped into his chair for his
             | "treatment". And certain scenes toward the end of _The Men
             | Who Stare at Goats_.
        
             | AussieWog93 wrote:
             | > I feel like we don't have the slightest clue how the
             | brain works and then I don't see how we could ever make
             | electronics interface with synapses in a way that makes you
             | see video, audio, etc.
             | 
              | For what it's worth, I did (well, started) a PhD in this and
             | you're bang on. We don't understand the brain at all, and
             | trying to fudge this immense knowledge gap with Machine
             | Learning is getting us nowhere. Doesn't stop the grant
             | money from rolling in, though.
        
             | phh wrote:
              | Considering how we do limb prosthetics, we may never know
              | how to interface a computer with the brain. What we do know
              | is how to make a computer compatible enough that the brain
              | can interface with it. That may be the way this plays out.
              | 
              | Arguably, we already did that (make an interface that our
              | brain adapts to): programming languages and keyboards are
              | already man-made interfaces our brain has adapted to.
        
             | dsr_ wrote:
             | It's a gap in understanding, and it may be a gigantic gap.
             | 
             | Such gaps sometimes get closed by monumental efforts,
             | sometimes by small groups, and sometimes they take
             | centuries.
        
           | spaceman_2020 wrote:
           | You reckon we'll have AGI, or at least something resembling
           | it by then?
           | 
           | Brain implants + near-AGI would essentially make us
           | superhuman
        
             | harikb wrote:
             | For a monthly fee of $999 no less
        
               | islon wrote:
               | Or you can use the ad-supported version "for free".
        
               | ta988 wrote:
               | Or we pay you $5/mo to use your available brain cycles to
               | mine braincoin.
        
             | waffletower wrote:
             | I already have a reasonable Adjusted Gross Income on my
             | annual tax return. We already have an American Geological
              | Institute. We have many Adventure Game Interpreters to choose
             | from. Amplified Geochemical Imaging is a mature field.
             | Hell, we even have some early contenders for proto
             | Artificial General Intelligence if that is what you meant.
             | But obviously we don't have Acronym Gist Inflation, and
             | your text remains unqualified and vague.
        
             | phh wrote:
              | Tell a human 200 years ago that we can access any of
              | humanity's knowledge within minutes, that we can travel at
              | 900 km/h, that we can drink sea water. They'd tell you that
              | we are indeed super-humans.
        
             | usrusr wrote:
             | Might just as well make us artificially happyfied super-
             | vegetables, completely incapable of taking decisive action
             | required in some unexpected crisis.
             | 
             | If you replace metalworking with the reward mechanism hacks
             | Facebook et al use to optimize their success metrics, the
             | threshold to a paperclip maximizer scenario gets
             | considerably lower. Smartphones are bad enough, add MMI to
             | the mix and all bets are off.
        
       | jschveibinz wrote:
       | All opinion, but worth considering:
       | 
       | It is probable, based solely on the rate of technological change,
       | that the concepts of engineering, computing, programming, etc.
       | will be historical by the middle of this century. Electronic
       | engineering was the dominant technology trade in the late 80's.
       | Within 20 years, computer programming became the dominant
       | technology trade. With AI advances, programming by humans will
       | all but disappear and this field will be replaced by "cognitive
       | architects" that build artificial intelligence systems and
       | solutions. After that, who knows?
        
       | college_physics wrote:
        | 100 years is a lot of time. Will the material substrate of
        | computing still be silicon? (E.g., how about quantum computing -
        | overhyped as of today, but surely not over the very long term.)
        | 
        | Another branch that might eventually become something of a
        | science is massively parallel computing.
        
         | slfnflctd wrote:
         | Those are both great questions, and there is plenty of other
         | terrific speculation in this thread, but one thing I haven't
         | seen yet is the word "collapse".
         | 
         | In 100 years, it's possible the only computing being done will
         | be of the biological sort, by far smaller and more resilient
         | life forms than humanity. Even if we survive, the seemingly
         | inexorable progression of technology may take a back seat to
         | survival, and we could easily be struggling just to maintain
         | functionality of devices we no longer have the ability to
         | manufacture.
         | 
         | I suppose I should temper the doom & gloom a bit by mentioning
         | that I think we still have several chances to squeak out a
         | pathway to avoid massive and violent die-offs, but that window
         | is certainly narrowing and we aren't doing nearly enough.
         | Hundreds of millions of climate refugees (or more) are going to
         | disrupt pretty much everything if we don't start planning what
         | to do with them _now_.
        
           | dinkumthinkum wrote:
           | Climate change is a very important issue but when we make
           | loose claims or imply the earth will be a hellscape with few
           | or no surviving humans in one generation, it only emboldens
           | those that think the real issues are hyperventilating
           | nonsense.
        
           | college_physics wrote:
            | It's a 50/50 chance we avoid collapse, but there is nothing
            | to do but condition on the positive scenario. We already have
            | plenty of hints of what dystopian tech looks like.
            | 
            | In the positive scenario tech becomes a significant enabler
            | of sustainability. I don't think we need major computer
            | science breakthroughs for this to happen though. It's purely
            | a human moral, behavioral, economic and political challenge.
        
       | AndyMcConachie wrote:
       | CS like other disciplines will be dominated by the climate change
       | fallout for the next 100 years. One thing I think about is that
       | for the entire history of computing there have always been more
       | programmers this year than the last. The number of programmers on
       | the planet has always been increasing.
       | 
       | What happens when that trend reverses? Better yet, what happens
       | when the number of people who are capable of writing software
       | plummets quickly due to a mass extinction event?
       | 
       | What kinds of systems will the survivors design and implement?
       | How will the relationship between development and maintenance
       | change in this new world?
        
         | weakfortress wrote:
          | I found Al Gore's HN account, everyone. Climate alarmists have
         | predicted 200 of the last 0 extinction events. I won't hold my
         | breath. The greatest threat to developers is outsourcing not
         | ice ages.
        
         | rightbyte wrote:
          | Less JS then, I guess? More static pages?
          | 
          | Save the planet with uBlock, one ad at a time. (I am not
          | being sarcastic here, BTW.)
        
         | thesuperbigfrog wrote:
         | >> What kinds of systems will the survivors design and
         | implement? How will the relationship between development and
         | maintenance change in this new world?
         | 
         | https://www.destroyallsoftware.com/talks/the-birth-and-death...
        
         | dinkumthinkum wrote:
         | Mass extinction? Being so hyperbolic about climate change
         | doesn't persuade people to your viewpoint, it encourages the
         | opposite.
        
       | ternaus wrote:
        | Looking at the slow but consistent progress of generated imagery,
        | I assume and hope that there will be a new step in the
        | personalization of services.
        | 
        | Right now Google, YouTube, StackOverflow, PornHub, and Netflix
        | perform search over a database.
        | 
        | But imagine that you ask for something on Netflix and a whole
        | movie is generated for you.
        | 
        | We can see glimpses of this in ChatGPT vs Google, and I hope that
        | this direction will grow into a mature phase.
        
       | walnutclosefarm wrote:
       | In the author's defense, he only says "3 things," not "the most
       | important or significant 3 things." Still, this seems like an
       | agenda for maybe 20 years, not one for 100.
       | 
        | To me, if you really look out even to the end of this century
       | (which would double the amount of time humanity has been
       | seriously evolving digital computing technology), the big change
       | is likely to be that innovation in ideas, technology, and
       | technique will shift from human minds to digital intelligences. I
       | don't think that means a full-on Kurzweillian singularity, but I
       | think we've basically just landed on the near shore of the
       | generative intelligence continent, and there is way more to come.
       | 
       | Think about it this way: in 80 years, we've evolved raw
       | computation power in a single installation from 500 or so FLOPS,
       | to the petaFLOP range - a factor of over 10^12. So, let's be
        | modest and scale down our expectations, and try to imagine what a
        | generative intelligence that is 100 times (not 1 trillion times)
        | as capable as, say, AlphaZero might do. I'm sure I can't
       | actually imagine the results, but I do predict it will shift the
       | balance of innovative "thinking" from human minds to digital
       | intelligences.
        
       | stephc_int13 wrote:
       | We might see the limits of Moore's Law in the coming decades.
       | 
        | I hope that will translate into hardware standardization and
       | stability, as this is probably the best thing that could happen
       | to software engineering.
        
         | zozbot234 wrote:
         | Moore's law is still going strong _at the trailing edge of
         | fabrication_. Leading-edge nodes are getting more and more
          | expensive though, which is the exact opposite of what Moore's
          | Law would predict.
        
         | ben_w wrote:
         | "Might"? Hmm, I guess it depends how you phrase the law. For
         | feature size, either we will hit it within a decade or we go
         | subatomic. For $ cost per transistor or J cost per operation,
         | those may continue significantly longer.
        
       | glitchc wrote:
       | Optical computing, where the base unit is a photon instead of an
       | electron, is the future of all computing.
       | 
       | Once optical computing becomes a reality, we will look fondly
       | upon our quaint silicon devices. Optical computers capable of
       | measuring interactions of single photons will eventually lead to
       | quantum computing in all things.
       | 
       | Likely, quantum computing will be a dedicated core/unit in a
       | heterogeneous optical computing architecture, as linear
        | operations are still needed to manage I/O, memory and peripherals
       | (displays and what not).
        
         | fooker wrote:
         | There is a fundamental reason why this might not work out. We
         | might be getting close to information theoretic and
         | thermodynamic limits of computers with modern transistors.
         | 
         | For example, photons don't help if it's theoretically
         | impossible to dissipate heat any faster.
        
         | ta988 wrote:
          | It is more than likely that the devices will be photonic and
          | electronic at the same time. A lot of things are much more
          | efficient to do with electrons and electric fields in the lower
          | frequency range than with photons.
        
         | dinkumthinkum wrote:
         | This doesn't sound like quantum computing at all.
        
       | pharmakom wrote:
       | We make proofs so that we can be confident something is true. But
       | what happens when the proofs get so large that no human can
       | practically check them? What happens when the proofs are also
       | generated by an ANN? I wonder if this stuff around proof
       | languages will seem totally misguided in the long run.
        
         | fallat wrote:
         | Exactly this. Already we have this issue with typed languages
         | today.
         | 
         | Proof languages will be welcomed but we must apply discipline
         | as we do today with type heavy languages.
        
         | nimonian wrote:
         | Ideally, we would prove the theorem that any proof the software
         | produces must be correct.
        
         | bawolff wrote:
         | > But what happens when the proofs get so large that no human
         | can practically check them
         | 
         | The 4 colour theorem says hello from 1976.
         | 
          | Ultimately, though, proofs are not just about being confident
          | something is true; they're usually also about why something is
          | true.
        
         | valenterry wrote:
          | There have already been cases where proofs were considered
          | correct and then later flaws were found that rendered them
          | invalid.
          | 
          | I think the answer is quite simple: if something happens that
          | contradicts the proof, then a lot of resources will be used to
          | double-check the proof and ensure its correctness, and it will
          | just be accepted that proofs are not always 100% valid.
        
         | jwolfe wrote:
         | If you're confident about your proof checker being correct,
         | then why would you not be confident about the validity of a
         | proof that your proof checker says is correct?
        
           | fallat wrote:
           | They're not even touching that but it's possible. The issue
           | is you can still write proofs that appear correct, but
           | really, they aren't.
        
       | walnutclosefarm wrote:
       | One very specific thought about the author's three ambitions: I
       | really wonder about homomorphic encryption as a foundation for
       | data science. It seems to me that any really good homomorphic
       | encryption creates a dual space for a data set, so that anything
        | discoverable in the original is equally preserved and, at least
        | in theory, discoverable from the encrypted dual. If your
        | homomorphic encryption doesn't create a true dual space, then how
        | do we trust the insights it is capable of developing as being
        | either valid or complete?
        
       | ugjka wrote:
       | Hopefully the default keyboard layout won't be QWERTY anymore
        
         | pkoird wrote:
          | It's fairly easy to learn, already universal, and there is no
          | incentive at all for 99% of people to change it (only a
          | fraction of the population would ever want the super-fast
          | typing; the rest of us have to think hard and type little). I
          | suspect that it'll stay around unless a new form of input
          | device that directly interfaces with the human mind is created.
        
         | pwinnski wrote:
          | Why would it ever change, in 100 or 1000 years?
        
       | revskill wrote:
        | I think you're describing unit tests.
        
       | [deleted]
        
       | als0 wrote:
       | Is Ada SPARK a realistic option today or do I need to be an
       | academic?
       | 
       | Is the total cost of learning Ada and SPARK greater than learning
       | Rust or Frama-C?
        
         | thesuperbigfrog wrote:
         | Ada SPARK is a realistic option today:
         | 
         | https://learn.adacore.com/courses/intro-to-spark/index.html
         | 
         | >> Is the total cost of learning Ada and SPARK greater than
         | learning Rust or Frama-C?
         | 
         | It is a comparable amount of effort.
        
       | josephd79 wrote:
       | Skynet
        
       | skrtskrt wrote:
       | I want non-physical-media HCI, just for my work as an engineer.
       | 
       | I have always thought that mouse, keyboard, and screens are quite
       | a tedious and uncomfortable way to interact with a computer, and
       | god knows how many health problems sitting at a computer for
       | decades contributes to.
       | 
        | I'd love to be programming lying in a park without being limited
       | by not having a large monitor and quality keyboard/mouse.
        
         | pitaj wrote:
         | Human-Computer Interface, in case anyone else was wondering.
        
         | thedorkknight wrote:
          | Considering the unintended and unforeseen health side effects
          | that our current interfaces have had on our bodies and minds,
          | I'm way more freaked out by the possibilities of what could go
          | wrong sticking a direct I/O device into my head.
        
           | jfzoid wrote:
           | When I was in high school I used to dream about sticking high
           | speed ethernet ports into my skull (because reading on a
           | screen was too slow, plus this was the 80s and we though
           | Neuromancer was really going to happen)
           | 
           | Now I realize 1. I don't want buggy hardware in my head 2. I
           | don't want to go to the surgeon ("wetdoc" or whatever the
           | Cyberpunk RPG called it) every time a new model comes out
        
         | luxuryballs wrote:
          | I always felt the exact same way and thought there would
          | eventually be some interface I can wear on my fingers and
          | "type" anywhere: lying down, standing up, etc. But now I'm
          | thinking it won't be a hardware change at all but more like a
          | hardware elimination: i.e., as long as I can speak and the
          | computer can hear me, we will be able to build software
          | anywhere. I will be able to develop software faster, since I
          | can start prototyping and testing things while I'm still in the
          | midst of writing them on paper, and I can have unit tests
          | churning out automatically while I'm still "white boarding" or
          | whatever it will be called then. I just imagine an AI assistant
          | that can be one step ahead of me and make me able to work so
          | much smarter and more efficiently, but also eliminate almost
          | all boilerplate, tedious editing, and code organization, and
          | greatly reduce code rot, as the code base will be "self
          | arranging" and refactors automatic.
        
           | skrtskrt wrote:
            | Yes - imagine you start sketching out a data model and
            | something is hot-reloading the generated database schema and
            | autogenerating CRUD-type data flow through the DB to see if
            | the system can actually accomplish what you're imagining.
            | 
            | Same for a distributed system with message/event queues: it
            | can start demonstrating various data flows that introduce
            | possible race/concurrency conditions while you're still in
            | the design phase.
        
       | de_keyboard wrote:
       | Will we care about proofs and type-systems once we have AI
       | systems that reliably answer questions like "does this code do
       | X?" and "under what conditions does this code violate invariant
       | Y?"
        
       | graycat wrote:
       | 100 years?
       | 
       | First, how 'bout 100 years ago, i.e., back to 1923?
       | 
       | Telephone: By 1915 in the US we had long distance telephone coast
       | to coast.
       | 
       | Cars: The cars of 1923 were not so bad -- had tops, doors, glass
       | windows, electric lights, rubber tires filled with air, a rear
       | axle differential, ....
       | 
       | https://en.wikipedia.org/wiki/Category:Cars_introduced_in_19...
       | 
        | Airplanes: The Wright Brothers were at Kitty Hawk in 1903, and
       | airplanes were important in WWI.
       | 
       | https://en.wikipedia.org/wiki/Kitty_Hawk
       | 
       | Physics: By 1923 we had explained the photoelectric effect (one
       | of the foundations of quantum mechanics) and both special and
       | general relativity (1915),
       | 
       | https://en.wikipedia.org/wiki/General_relativity
       | 
       | The Schrodinger equation was on the way, published in 1926,
       | 
       | https://en.wikipedia.org/wiki/Schr%C3%B6dinger_equation
       | 
       | From the past, one possible lesson is that _progress_ is enabled
       | or constrained by _fundamentals_.
       | 
       | So for the next 100 years, what are some of the likely important
       | fundamentals?
        
       ___________________________________________________________________
       (page generated 2022-12-27 23:01 UTC)