[HN Gopher] Project Verona: Fearless Concurrency for Python
___________________________________________________________________
Project Verona: Fearless Concurrency for Python
Author : ptx
Score : 155 points
Date : 2025-05-15 10:58 UTC (3 days ago)
(HTM) web link (microsoft.github.io)
(TXT) w3m dump (microsoft.github.io)
| fmajid wrote:
| Microsoft laid off the Faster CPython lead Mark Shannon and ended
| support for the project; where does this leave the Verona
| project?
| pjmlp wrote:
| They belong to Microsoft Research, not DevDiv, so while that
| doesn't protect them from layoffs, it certainly gives them some
| protection being under different management.
|
| Microsoft Research sites tend to be based in collaborations
| with university research labs.
| tmpz22 wrote:
| > Microsoft Research sites tend to be based in collaborations
| with university research labs.
|
| Oh that should help with the project's stability and funding.
| sitkack wrote:
| Boycott Microsoft. Don't work there, don't use their products.
| 90s_dev wrote:
| Why?
| smt88 wrote:
| How does using their Python tools help Microsoft?
| hilsdev wrote:
| Embrace, extend, destroy
| loloquwowndueo wrote:
| Extinguish! Mind the alliteration! :)
| akkad33 wrote:
| At the least, telemetry and recognition in the software
| community. At worst, training their AI.
| lucianbr wrote:
| It's a large corporation. I'm certain someone asked that
| question and got an answer before starting to produce Python
| tools. It's management's job to ask that question and get
| answers, you know.
| davesque wrote:
| Would be helpful to know why _you_ think this. Even if there
| are common reasons that others could point to (and please
| don't; it won't be helpful), your comment doesn't make any sense
| without that context.
| kubb wrote:
| Sounds like a fun job, I'd love to do something like this in my 9
| to 5.
|
| It's also amazing how much work goes into making Python a decent
| platform because it's popular. Work that will never be finished
| and could have been avoided with better design.
|
| Get users first, lock them in, fix problems later seems to be the
| lesson here.
| fastball wrote:
| Imo it is less about locking anyone in (in this case) and more
| about what Python actually enables: exceedingly fast
| prototyping and iteration. Turns out the ability to ship fast
| and iterate is actually more useful than performance, esp in a
| web context where the bottlenecks are frequently not program
| execution speed.
| procaryote wrote:
| Python has compounding problems that make it extremely tricky
| though.
|
| If it was just slow because it was interpreted they could
| easily have added a good JIT or transpiler by now, but it's
| also extremely dynamic so anything can change at any time,
| and the type mess doesn't help.
|
| If it was just slow one could parallelise, but it has a GIL
| (although they're finally trying to fix it), so one needs
| multiple processes.
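|
| (As an aside, a minimal sketch of that workaround, stdlib only;
| the workload and sizes are made up:)
|
|     # CPU-bound work: threads contend on the GIL on CPython's default
|     # build, so a process pool is the usual workaround, at the cost of
|     # pickling overhead and a full interpreter per worker.
|     from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor
|
|     def burn(n: int) -> int:
|         return sum(i * i for i in range(n))
|
|     if __name__ == "__main__":
|         jobs = [2_000_000] * 8
|         with ThreadPoolExecutor() as pool:   # serialized by the GIL
|             list(pool.map(burn, jobs))
|         with ProcessPoolExecutor() as pool:  # true parallelism
|             list(pool.map(burn, jobs))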
|
| If it just had a GIL but was somewhat fast, multiple
| processes would be OK, but as it is also terribly slow, any
| single process can easily hit its performance limit if one
| request or task is slow. If you make the code async to fix
| that, you either get threads or extremely complex cooperative
| multitasking code that keeps breaking when there's some bit
| of slow performance or blocking you missed.
|
| If the problem was just the GIL, but it was OK fast and had a
| good async model, you could run enough processes to cope, but
| it's slow so you need a ridiculous number, which has knock-on
| effects on needing a silly number of database/api connections
|
| I've tried very hard to make this work, but when you can
| replace 100 servers struggling to serve the load on python
| with 3 servers running Java (and you only have 3 because of
| redundancy as a single one can deal with the load), you kinda
| give up on using python for a web context
|
| If you want a dynamic web backend language that's fast to
| write, TypeScript is a much better option, if you can cope
| with the dependency mess.
|
| If it's a tiny thing that won't need to scale or is easy to
| rewrite if it does, I guess python is ok
| johnisgood wrote:
| Or Elixir / Erlang instead of Java / Kotlin, and Go instead
| of Python, for this use case.
| pjmlp wrote:
| > If it was just slow because it was interpreted they could
| easily have added a good JIT or transpiler by now, but it's
| also extremely dynamic so anything can change at any time,
| and the type mess doesn't help.
|
| See Smalltalk, Common Lisp, Self.
|
| Their dynamism, image-based development, break-edit-
| compile-redo.
|
| Want to change everything in Smalltalk in a single call?
|
| _a becomes: b_
|
| Now every single instance of a in the Smalltalk image has
| been replaced by b.
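|
| (For comparison, the closest everyday Python move, a sketch that
| retargets one instance rather than every reference in the image:)
|
|     class A:
|         def speak(self): return "a"
|
|     class B:
|         def speak(self): return "b"
|
|     x = A()
|     x.__class__ = B    # rebinds this one object's class at runtime
|     print(x.speak())   # "b"; becomes: would swap *every* reference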
|
| That's just one example; there is hardly anything that one can
| do in Python that those languages don't do as well.
|
| Smalltalk and Self are the genesis of JIT research that
| eventually gave birth to Hotspot and V8.
| kubb wrote:
| I agree that fast iteration and the "easy to get something
| working" factor are huge assets in Python, which contributed
| to its growth. A whole lot of things were done right from
| that point of view.
|
| An additional asset was the friendliness of the language to
| non-programmers, and features enabling libraries that are
| similarly friendly.
|
| Python is also unnecessarily slow - 50x slower than Java, 20x
| slower than Common Lisp and 10x slower than JavaScript. Its
| iterative development is worse than Common Lisp's.
|
| I'd say that the biggest factor is simply that American
| higher education adopted Python as the introductory learning
| language.
| beagle3 wrote:
| For American higher education, it was Pascal ages ago, and
| then it was Java for quite a while.
|
| But Java is too bureaucratic to be an introductory
| language, especially for would-be-non-programmers. Python
| won on "intorudctoriness" merits - capable of getting
| everything done in every field (bio, chem, stat,
| humanities) while still being (relatively) friendly. I
| remember days it was frowned upon for being a "script
| language" (thus not a real language). But it won on merit.
| darkwater wrote:
| > Get users first, lock them in, fix problems later seems to be
| the lesson here.
|
| Or with a less cynical spin: deliver something that's useful
| and solves a problem for your potential users, and iterate over
| that without dying in the process (and Python suffered a lot
| already in the 2 to 3 transition)
| kubb wrote:
| 2 to 3 was possible precisely because of user lock-in and
| sunk cost. This kind of global update was unprecedented, and
| could have been totally avoided with better design.
| exe34 wrote:
| > could have been totally avoided with better design
|
| This is why taxi drivers should run the country!
| meindnoch wrote:
| The JS playbook.
| legends2k wrote:
| Isn't this common in computer science? Haven't you heard of
| Worse is Better [1]?
|
| [1]: https://en.m.wikipedia.org/wiki/Worse_is_better
| materielle wrote:
| Python is about 35 years old at this point. It _was_ the better
| language that had the better design and had fixed the problems,
| at some point in time.
|
| Sure, maybe a committee way back in 1990 could have shaved off
| some of the warts and oopsies that Guido committed.
|
| I'd imagine that said committee would have also shaved off some
| of the personality that made Python an enjoyable language to
| use in the first place.
|
| People adopted Python because it was way nicer to use compared
| to the alternatives in, say, 2000.
| rixed wrote:
| Yes, writing CGI with Python the configuration language was
| so much better than with Perl the shell replacement!
| zahlman wrote:
| I would say it was closer to 2005 that Python really took
| off. Coincidentally around when I started using it, but I
| remember a noticeable increase in "buzz".
| pjmlp wrote:
| This looks like a pivot on the Project Verona research, as there
| have not been many other papers published since the initial
| announcement regarding the programming language itself.
| zenkey wrote:
| I've been programming with Python for over 10 years now, and I
| use type hints whenever I can because of how many bugs they help
| catch. At this point, I'm beginning to form a rather radical
| view. As LLMs get smarter and vibe coding (or even more abstract
| ways of producing software) becomes normalized, we'll be less and
| less concerned about compatibility with existing codebases
| because new code will be cheaper, faster to produce, and more
| disposable. If progress continues at this pace, generating tests
| with near 100% coverage and fully rewriting libraries against
| those tests could be feasible within the next decade. Given that,
| I don't think backward compatibility should be the priority when
| it comes to language design and improvements. I'm personally
| ready to embrace a "Python 4" with a strict ownership model like
| Rust's (hopefully more flexible), fully typed, with the old
| baggage dropped and all the new bells and whistles. Static typing
| should also help LLMs produce more correct code and make
| iteration and refactoring easier.
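|
| To show the kind of bug I mean, a minimal made-up example that
| mypy or pyright flags before the code ever runs:
|
|     def parse_port(raw: str) -> int | None:
|         return int(raw) if raw.isdigit() else None
|
|     port = parse_port("80a")
|     print(port + 1)  # checker error: `port` may be None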
| oivey wrote:
| I mean, why not just write Rust at that point? Required static
| typing is fundamentally at odds with the design intent of the
| language.
| trealira wrote:
| A lot of people want a garbage collected Rust without all the
| complexity caused by borrow checking rules. I guess it's
| because Rust is genuinely a great language even if you ignore
| that part of it.
| logicchains wrote:
| Isn't garbage collected Rust without a borrow checker just
| OCaml?
| johnisgood wrote:
| Pretty much, I would say. In fact, I like OCaml better if
| we put the borrow checker aside.
| pjmlp wrote:
| Thankfully, like many other languages that combine models
| rather than going full speed into affine types, OCaml is
| getting both.
|
| Besides the effects type system initially introduced to
| support multicore OCaml, Jane Street is sponsoring the
| work on explicit stack allocation, unboxed types, and modal
| types.
|
| See their YouTube channel.
| johnisgood wrote:
| Yeah, I have watched a couple of videos and read blog
| posts from Jane Street. They are helping OCaml a lot!
| Elucalidavah wrote:
| > a garbage collected Rust
|
| By the way, wouldn't it be possible to have a garbage-
| collecting container in Rust? Where all the various objects
| are owned by the container, and available for as long as
| they are reachable from a borrowed object.
| morsecodist wrote:
| Isn't this what Rc is?
| coolcase wrote:
| Is Go that language?
| spookie wrote:
| D and Go exist.
|
| There are alternatives out there
| tgv wrote:
| Not only that: Rust is considerably faster and more reliable.
| Since you're not writing the code yourself, Rust would be an
| objectively better choice.
|
| Who are we trying to fool?
| procaryote wrote:
| 100% coverage won't catch 100% of bugs of course
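|
| (A made-up example: the test below gives 100% line coverage and
| still misses the empty-list crash:)
|
|     def mean(xs: list[float]) -> float:
|         return sum(xs) / len(xs)        # every line is "covered"...
|
|     def test_mean():
|         assert mean([2.0, 4.0]) == 3.0  # ...but mean([]) still raises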
| kryptiskt wrote:
| I'd think LLMs would be more dependent on compatibility than
| humans, since they need training data in bulk. Humans can adapt
| with a book and a list of language changes, and a lot of
| grumbling about newfangled things. But an LLM isn't going to
| produce Python++ code without having been trained on a corpus
| of such code.
| johnisgood wrote:
| It should work if you feed the data yourself, or at the very
| least the documentation. I do this with niche languages and
| it seems to work more or less, but you will have to pay
| attention to your context length, and of course if you start
| a new chat, you are back to square one.
| energy123 wrote:
| I don't know if that's a big blocker now that we have abundant
| synthetic data from an RL training loop where language-
| specific things like syntax can be learned without any human
| examples. Human code may still be relevant for learning best
| practices, but even then it's not clear that can't happen via
| transfer learning from other languages, or it might even
| emerge naturally if the synthetic problems and rewards are
| designed well enough. It's still _very_ early days (7-8
| months since o1 preview) so to draw conclusions from current
| difficulties over a 2-year time frame would be questionable.
|
| Consider a language designed only FOR an LLM, and a
| corresponding LLM designed only FOR that language. You'd
| imagine there'd be dedicated single tokens for common things
| like "class" or "def" or "import", which allows more
| efficient representation. There's a lot to think about ...
| jurgenaut23 wrote:
| It's just as questionable to declare victory because we had
| a few early wins and to assume that time will fix everything.
|
| Lots of people had predicted that we wouldn't have a single
| human-driven vehicle by now. But many issues happened to be
| a lot more difficult to solve than previously thought!
| LtWorf wrote:
| How would you debug a programming language made for LLMs?
| And why not make an LLM that can output gcc intermediate
| representation directly then?
| energy123 wrote:
| You wouldn't, this would be a bet that humans won't be in
| the loop at all. If something needs debugging the LLM
| would do the debugging.
| LtWorf wrote:
| Lol.
| ModernMech wrote:
| One has to wonder, why would there be any bugs at all if
| the LLM could fix them? Given Kernighan's Law, does this
| mean the LLM can't debug the bugs it makes?
|
| My feeling is unless you are using a formal language,
| then you're expressing an ambiguous program, and that
| makes it inherently buggy. How does the LLM infer your
| intended meaning otherwise? That means programmers will
| always be part of the loop, unless you're fine just
| letting the LLM guess.
|
| Kernighan's Law: "Debugging is twice as hard as writing
| the code in the first place."
| energy123 wrote:
| The same applies to humans, who are capable of fixing
| bugs and yet still produce bugs. It's easier to detect
| bugs with tests and fix them than to never have
| introduced bugs.
| ModernMech wrote:
| But the whole idea of Kernighan's law is to not be so
| clever that no one is available to debug your code.
|
| So what happens when an LLM writes code that is too
| clever for it to debug? If it weren't too clever to debug
| it, it would have recognized the bug and fixed it itself.
|
| Do we then turn to the cleverest human coder? What if
| _they_ can't debug it, because we have atrophied human
| debugging ability by removing them from the loop?
| pseudony wrote:
| Ownership models like Rust's require a greater ability for
| holistic refactoring; otherwise a change in one place causes a
| lifetime issue elsewhere. This is exactly what LLMs are
| worst at.
|
| Beyond that, a Python with something like lifetimes implies
| doing away with garbage-collection - there really isn't any
| need for lifetimes otherwise.
|
| What you are suggesting has nothing to do with Python and
| completely misses the point of why python became so widely
| used.
|
| The more general point is that garbage collection is very
| appealing from a usability standpoint and it removes a whole
| class of errors. People who don't see that value should look
| again at the rise of Java vs c/c++. Businesses largely aren't
| paying for "beautiful", exacting memory management, but for
| programs which work and hopefully can handle more business
| concerns with the same development budget.
| pjmlp wrote:
| While I go in another direction in a sibling comment,
| lifetimes do not imply not needing garbage collection.
|
| On the contrary, having both allows the productivity of
| automatic resource management, while providing the necessary
| tooling to squeeze the ultimate performance when needed.
|
| No need to worry about data structures not friendly to
| affine/linear types, Pin and Phantom types and so forth.
|
| It is no accident that while Rust has been successful
| bringing modern lifetime type systems into mainstream, almost
| everyone else is researching how to combine
| linear/affine/effects/dependent types with classical
| automatic resource management approaches.
| vlovich123 wrote:
| Rust lifetimes are generally fairly local and don't impact
| refactoring too much unless you fundamentally change the
| ownership structure.
|
| Also a reminder that Rc, Arc, and Box are garbage collection.
| Indeed, Rust is a garbage-collected language unless you drop
| to unsafe. It's best to say "tracing GC", which is what
| I think you meant.
| pjmlp wrote:
| I think people are still fooling themselves about the relevance
| of 3GL languages in an AI dominated future.
|
| It is similar to how Assembly developers thought about their
| relevance until optimising compilers backends turned that into
| a niche activity.
|
| It is a matter of time, maybe a decade who knows, until we can
| produce executables directly from AI systems.
|
| Most likely we will still need some kind of formalisation tools
| to tame natural language uncertainties, however most certainly
| they won't be Python/Rust like.
|
| We are moving into another abstraction layer, closer to the
| 4GL, CASE tooling dreams.
| zenkey wrote:
| Yes I agree this is likely the direction we're heading. I
| suppose the "Python 4" I mentioned would just be an
| intermediate step along the way.
| sanderjd wrote:
| I think the question is: What is the value of that
| intermediate step? It depends on how long the full path
| takes.
|
| If we're one year away from realizing a brave new world
| where everyone is going straight from natural language to
| machine code or something similar, then any work to make a
| "python 4" - or any other new programming languages /
| versions / features - is rearranging deck chairs on the
| Titanic. But if that's _50_ years away, then it's the
| opposite.
|
| It's hard to know what to work on without being able to
| predict the future :)
| albertzeyer wrote:
| 4GL and 5GL are already taken. So this is the 6GL.
|
| https://en.wikipedia.org/wiki/Programming_language_generatio.
| ..
|
| But speaking more seriously, how do we make this deterministic?
| pjmlp wrote:
| Fair enough, I should have taken a look; I stopped counting
| when the computer-magazine buzz about 4GLs faded away.
|
| Probably some kind of formal methods inspired approach,
| declarative maybe, and less imperative coding.
|
| We should take an Alan Kay and Bret Victor-like point of
| view on where AI-based programming is going to be a decade
| from now, not where it is today.
| Wowfunhappy wrote:
| Assemblers and compilers are (practically) deterministic.
| LLMs are not.
| pjmlp wrote:
| Did you miss this part?
|
| > Most likely we will still need some kind of formalisation
| tools to tame natural language uncertainties, however most
| certainly they won't be Python/Rust like
| Wowfunhappy wrote:
| No, I didn't miss it. I think the fact that LLMs are non
| deterministic means we'll need a lot more than "some kind
| of formalization tools", we'll need real programming
| languages for some applications!
| pjmlp wrote:
| How deterministic are C compilers at -O3, while compiling
| exactly the same code across various kinds of vector
| instructions, and GPUs?
|
| We are already on the baby steps down that path,
|
| https://code.visualstudio.com/docs/copilot/copilot-
| customiza...
| spookie wrote:
| Take a look at the following:
| https://reproduce.debian.net/
|
| Granted, there are lots of different compilers and arguments
| depending on the package. But you would need to match this
| reproducibility in a fancy-pants 7GL.
| pjmlp wrote:
| And still its behaviour isn't guaranteed if the
| hardware isn't exactly the same as where the binaries
| were produced.
|
| That is why on high integrity computing all layers are
| certified, and any tiny change requires a full stack re-
| certification.
| almostgotcaught wrote:
| You moved the goal posts and declared victory - that's
| not what deterministic means. It means same source, same
| flags, same output. Under that definition, the actual
| definition, they're 99.9% deterministic (we strive for
| 100% but bugs do happen).
| pjmlp wrote:
| Nope, the goal stayed in the same place: people argue
| for deterministic results while using tools that by
| definition aren't deterministic unless a big chunk of
| work is done to ensure that they are.
|
| "It means same source, same flags, same output", it
| suffices to change the CPU and the Assembly behaviour
| might not be the same.
| almostgotcaught wrote:
| Do you like have any idea what you're talking about? Or
| are you just making it up for internet points? The target
| is part of the input.
|
| Lemme ELI5
|
| https://github.com/llvm/llvm-
| project/tree/main/llvm/test/Cod...
|
| You see how this folder has folders for each target? Then
| within each target folder there are tests (thousands of
| tests)? Each of those tests is verified
| _deterministically_ on each commit.
|
| Edit: there's an even more practical way to understand
| how you're wrong: if what you were saying were true,
| ccache wouldn't work.
| sitkack wrote:
| You keep being you, but you also have to admit, not only
| do you move goal posts, but most of your arguments are on
| dollies, performing elaborate choreographies that would
| make Merce Cunningham blush.
| fulafel wrote:
| pjmlp did originally say "compiling exactly the same code
| across various kinds of vector instructions, and GPUs".
| ModernMech wrote:
| You have a point, but in making it I think you're
| undermining your argument.
|
| Yes, it's true that computer systems are nondeterministic
| if you deconstruct them enough. Because writing code for
| a nondeterministic machine is fraught with peril, as an
| industry we've gone to great lengths to move this
| nondeterminism as far away from programmers as possible.
| So they can at least _pretend_ their code is
| executing in a deterministic manner.
|
| Formal languages are a big part of this, because even
| though different machines may execute the program
| differently, at least you and I can agree on the meaning
| of the program in the context of the language semantics.
| Then we can at least agree there's a bug and try to fix
| it.
|
| But LLMs bring nondeterminism right to the programmer's
| face. They make writing programs so difficult that people
| are inventing new formalisms, "prompt engineering", to
| deal with them. Which are kind of like a mix between a
| protocol and a contract that's not even enforced. People
| are writing full-on specs to shape the output of LLMs,
| taking something that's nondeterministic and turning into
| something more akin to a function, which is deterministic
| and therefore useful (actually as an aside, this also
| harks back to language design, where recently languages have
| been moving toward immutable variables and idempotent
| functions -- two features that combined help deal with
| nondeterministic output in programs, thereby making them
| easier to debug).
|
| I think what's going to happen is the following:
|
| - A lot of people will try to reduce nondeterminism in
| LLMs through natural language constrained by formalisms
| (prompt engineering)
|
| - Those formalisms will prove insufficient and people
| will move to LLMs constrained with formal languages that
| work with LLMs. Something like SQL queries that can talk
| to a database.
|
| - Those formal languages will work nicely enough to do
| simple things like collecting data and making view on
| them, but they will prove insufficient to build systems
| with. That's when programming languages and LLMs come
| back together, full circle.
|
| Ultimately, my feeling is the idea we can program without
| programming languages is misunderstanding what
| programming languages are; programming languages are not
| for communicating with a computer, they are for
| communicating ideas in an unambiguous way, whether to a
| computer or a human or an LLM. This is important whether
| or not a machine exists to execute those programs. After
| all, programming languages are _languages_.
|
| And so LLMs cannot and will not replace programming
| languages, because even if no computers are executing
| them, programs still need to be written in a programming
| language. How else are we to communicate what the program
| does? We can't use English and we know why. And we can't
| describe the program to the LLM in English for the same
| reason. The way to describe the program to the LLM is a
| programming language, so we're stuck building and using
| them.
| traverseda wrote:
| LLMs are deterministic. So far every vendor is giving them
| random noise in addition to your prompt though. They don't,
| like, have free will or a soul or anything; if you feed them
| exactly the same tokens, exactly the same tokens will come
| out.
| jnwatson wrote:
| Only if you set temperature to 0 or have some way to set
| the random seed.
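|
| (A toy sampler, not any vendor's API, showing both knobs:)
|
|     import math, random
|
|     def sample(logits: dict[str, float], temperature: float,
|                seed: int | None = None) -> str:
|         if temperature == 0:                  # greedy: always the argmax
|             return max(logits, key=logits.get)
|         rng = random.Random(seed)             # fixed seed => same draw
|         weights = [math.exp(v / temperature) for v in logits.values()]
|         return rng.choices(list(logits), weights=weights)[0]
|
|     logits = {"cat": 2.0, "dog": 1.9}
|     assert sample(logits, 0.0) == "cat"   # temperature 0: deterministic
|     assert sample(logits, 0.8, seed=42) == sample(logits, 0.8, seed=42)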
| vlovich123 wrote:
| Locally that's possible, but for multi-tenant ones I think
| there are other challenges related to batch processing (not
| in terms of the random seed necessarily, but because of
| other nondeterminism sources).
| mmoskal wrote:
| If you change one letter in the prompt, however
| insignificant you may think it is, it will change the
| results in unpredictable ways, even with temperature 0
| etc. The same is not true of renaming a variable in a
| programming language, most refactorings etc.
| chowells wrote:
| That's the wrong distinction, and bringing it up causes
| pointless arguments like the ones in the replies.
|
| The right distinction is that assemblers and compilers have
| semantics and an idea of correctness. If your input doesn't
| lead to a correct program, you can find the problem. You
| can examine the input and determine whether it is correct.
| If the input is wrong, it's theoretically possible to find
| the problem and fix it without ever running the
| assembler/compiler.
|
| Can you examine a prompt for an LLM and determine whether
| it's right or wrong without running it through the model?
| The idea is ludicrous. Prompts cannot be source code. LLMs
| are fundamentally different from programs that convert
| source code into machine code.
|
| This is something like "deterministic" in the colloquial
| sense, but not at all in the technical sense. And that's
| where these arguments come from. I think it's better to
| sidestep them and focus on the important part: compilers
| and assemblers are intended to be predictable in terms of
| semantics of code. And when they aren't, it's a compiler
| bug that needs to be fixed, not an input that you should
| try rephrasing. LLMs are not intended to be predictable at
| all.
|
| So focus on predictability, not determinism. It might
| forestall some of these arguments that get lost in the
| weeds and miss the point entirely.
| krembo wrote:
| Wild thought: maybe coding is a thing of the past? Given that
| an LLM can get fast & deterministic results if needed, maybe a
| backend, for instance, can be a set of functions which are all
| textual specifications; by following them it can do
| actions (validations, calculations, etc.), call APIs and
| connect to databases, then produce output? Then the LLM can
| auto-refine the specifications to avoid bugs and roll the
| changes out in real time for the next calls? Like a brain which
| doesn't need predefined coding instructions to fulfill a
| task, but just understands its scope, how to approach it, and
| learns from the past.
| TechDebtDevin wrote:
| I really want to meet these people that are letting an LLM
| touch their db.
| krembo wrote:
| Fast forward to the near future, why wouldn't it with the
| correct restrictions? For instance, would you let it
| today run SELECT queries? As Hemingway once said, "if it's
| about price we know who you are".
| sitkack wrote:
| > It is a matter of time, maybe a decade who knows, until we
| can produce executables directly from AI systems.
|
| They already can.
| dragonwriter wrote:
| > I think people are still fooling themselves about the
| relevance of 3GL languages in an AI dominated future.
|
| I think, as happens in the AI summer before each AI winter,
| people are fooling themselves about both the shape and
| proximity of the "AI dominated future".
| brookst wrote:
| It will be approximately the same shape and proximity as
| "the Internet-dominated future" was in 2005.
| mpweiher wrote:
| "Since FORTRAN should virtually eliminate coding and
| debugging..." -- FORTRAN report, 1954 [1]
|
| If, as you seem to imply and as others have stated, we should
| no longer even look at the "generated" code, then the LLM
| prompts are the programs / programming language.
|
| I can't think of a worse programming language, and I am not
| the only one [2]
|
| However, it does indicate that our current programming
| languages are way too low-level, too verbose. Maybe we should
| fix that?
|
| [1] http://www.softwarepreservation.org/projects/FORTRAN/Back
| usE...
|
| [2] https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD
| 667...
|
| [3] https://objective.st/
| fulafel wrote:
| > embrace a "Python 4" with a strict ownership model like Rust
|
| Rust only does this because it targets low-level use cases
| without automatic memory management, and makes a conscious
| tradeoff against ease of programming.
| _ZeD_ wrote:
| You think of code as an asset, but you're wrong: code is a
| cost.
|
| Features are what you want, and performance, and correctness, and
| robustness; not code.
|
| Older code is tested code that is known to work, with known
| limitations and known performance.
| shiandow wrote:
| A corollary is that if at all possible try to solve problems
| without code or, failing that, with _less_ code.
| Beltiras wrote:
| Given that you want to solve problems with a computer, what
| is the alternative to code?
| ModernMech wrote:
| If there isn't one, then as little code as possible.
| shiandow wrote:
| That is almost never truly a given. And even if it is,
| _how_ you use the computer can be more important than the
| code.
|
| And if you already have some code, simplifying it is also
| an option.
| brookst wrote:
| Why do you want to use a computer? When I hang pictures I
| have never once thought "I want to use a hammer"
| codr7 wrote:
| Restating the context as one where the problem doesn't
| exist.
|
| Any fool can write code.
| abirch wrote:
| I agree, older code is evidence of survivorship bias. We
| don't see all of the code written alongside the older
| code that was removed or replaced (without a code
| repository).
| qaq wrote:
| You pretty much described Mojo
| adsharma wrote:
| You described the thinking behind py2many.
|
| Code in the spirit of Rust with Python syntax and great devx.
| Give up on the C-API and backward compat with everything.
|
| Re: lifetimes
|
| Py2many has a Mojo backend now. You can infer lifetimes for
| simple cases. See the bubble sort example.
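|
| (For context, that example is the kind of typed, dynamic-feature-
| free Python a transpiler can lower; an illustrative sketch, not
| copied from the repo:)
|
|     def bubble_sort(xs: list[int]) -> list[int]:
|         n = len(xs)
|         for i in range(n):
|             for j in range(n - 1 - i):
|                 if xs[j] > xs[j + 1]:
|                     xs[j], xs[j + 1] = xs[j + 1], xs[j]  # in-place swap
|         return xs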
| anon-3988 wrote:
| > I'm personally ready to embrace a "Python 4" with a strict
| ownership model like Rust's (hopefully more flexible), fully
| typed, with the old baggage dropped and all the new bells and
| whistles. Static typing should also help LLMs produce more
| correct code and make iteration and refactoring easier.
|
| So...a new language? I get it except for borrow checking, just
| make it GC'ed.
|
| But this doesn't work in practice, if you break compatibility,
| you are also breaking compatibility with the training data of
| decades and decades of python code.
|
| Interestingly, I think as we use more and more LLMs, types get
| even more important, as they're basically a hint to the
| program as well.
| mountainriver wrote:
| At the point you describe, we could easily write Rust or
| even just C.
| zzzeek wrote:
| > As vibe coding becomes normalized
|
| Just want you to know this heart monitor we gave you was
| engineered with vibe coding, that's why your insurance was able
| to cover it. Nobody really knows how the software works
| (because...vibes), but the AI of course surpasses humans on all
| current (human-created) benchmarks like SAT and bar exam tests,
| so there's no reason to think its software isn't superior to
| human-coded (crusty old non "vibe coded" software) as well. You
| should be able to resume activity immediately! good luck
| brookst wrote:
| What percent of applications require that level of
| reliability?
|
| Vibe coding will be normalized because the vast, vast
| majority of code is not life or death. That's literally what
| "normal" means.
|
| Exceptional cases like pacemakers and spaceflight will
| continue to be produced with rigor. Maybe even 1% of produced
| code will work that way!
| zzzeek wrote:
| this is black and white thinking. if the practice of "let
| the AI write the code and assume it's fine because I'm only
| an incurious amateur anyway" becomes normalized, the
| tendency of AI to produce inaccurate slop will become more
| and more part of software we use every day and definitely
| will begin impacting functions that are more and more
| critical over time.
| Findecanor wrote:
| The filename of the formal paper[1] reveals the internal
| codename: "Pyrona".
|
| 1: https://www.microsoft.com/en-us/research/wp-
| content/uploads/...
| throwaway81523 wrote:
| I wish Python had moved to the BEAM or something similar as part
| of the 2 to 3 transition. This other stuff makes me cringe.
| pansa2 wrote:
| Python's core developers don't even seem to care about other
| Python implementations (only about CPython).
|
| There's no way they would move to, say, PyPy as the official
| implementation - let alone to a VM designed for a completely
| different language.
| throwaway81523 wrote:
| At the time of the original Py3 release, PyPy was not ready
| for wide use. Otherwise maybe there could have been a chance
| of it replacing CPython. They were in too big a hurry to ship
| Py3 though. Tragedy.
| pjmlp wrote:
| Which is a pity; Python ends up being the only major dynamic
| language where, for all practical purposes, there is no JIT
| support, because while there are alternative implementations
| with great JIT achievements, the community behaves as if all
| that effort was for nothing other than helping PhD students
| finish their theses.
| toast0 wrote:
| I'm a big BEAM person, but Python 3.0.0 was released in December
| 2008. At that time, I believe OTP R12 was current, and it only
| gained SMP support in R11. [1] In 2008, I don't know that it
| would have been clear that the BEAM would be a good target. And
| I don't know how switching to BEAM then would have addressed
| what I think is the core issue Python 3 was working on, Unicode
| strings; BEAM didn't start taking on Unicode until R13 and,
| IMHO, is kind of on the slow end of Unicode adoption (which
| isn't always bad... being late means adopting industry
| consensus with fewer of the intermediate false steps)
|
| [1] https://erlang.org/euc/08/euc_smp.pdf
| froh wrote:
| I'd rather love to see confluent persistence in Python, i.e. a
| git-like management of an object tree.
|
| So when you create a new call stack (generator, async task,
| thread) you can create a twig/branch, and that is modified in-
| place, copy-on-write.
|
| And you decide when and how to merge a data branch; there are
| support frameworks for this, even defaults, but in general merging
| data is a deliberate operation, like with git.
|
| Locally, a Python with this option looks and feels single-
| threaded, no brain knots. Sharing and merging intermediate
| results becomes a deliberate operation with synchronisation
| points that you can reason about.
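|
| (A minimal sketch of the idea, using the stdlib ChainMap as the
| copy-on-write view; the merge stays deliberate, as described:)
|
|     from collections import ChainMap
|
|     class Branch:
|         """A twig over shared state: reads fall through, writes stay local."""
|         def __init__(self, base: dict):
|             self.view = ChainMap({}, base)    # local writes shadow the base
|
|         def merge_into(self, base: dict, resolve=lambda old, new: new):
|             # deliberate, git-like merge: the caller picks the policy
|             for k, v in self.view.maps[0].items():
|                 base[k] = resolve(base.get(k), v) if k in base else v
|
|     shared = {"x": 1}
|     b = Branch(shared)
|     b.view["x"] = 2            # copy-on-write: shared is untouched
|     assert shared["x"] == 1
|     b.merge_into(shared)       # explicit synchronisation point
|     assert shared["x"] == 2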
| traverseda wrote:
| This is achievable with deepdiff today:
| https://pypi.org/project/deepdiff/
|
| Maybe not as performant as if you designed your data structures
| around it. But certainly achievable.
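|
| For example (DeepDiff and Delta are the library's entry points
| for diffing and replaying changes; the data here is made up):
|
|     from deepdiff import DeepDiff, Delta
|
|     base = {"user": {"name": "ada", "tags": ["a"]}}
|     branch = {"user": {"name": "ada", "tags": ["a", "b"]}}
|
|     diff = DeepDiff(base, branch)   # what changed between "branches"
|     merged = base + Delta(diff)     # replay the change: a manual merge
|     assert merged == branch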
| froh wrote:
| Interesting project, thanks for the pointer.
|
| However, it's not what I have in mind.
|
| The point is not being able to diff and patch object graphs.
|
| The point is copy-on-write of data that isn't local to the
| current call stack: automatic immutability with automatic
| transients per call stack.
|
| A delta will then guide the merge of branches, but the delta
| emerges from the CoW, instead of being computed as in
| deepdiff.
| bgwalter wrote:
| This is the true Python concurrency effort! I know, I have
| followed many! (Life of Brian)
|
| So they sounded out the Faster CPython team, which is now fired
| (was van Rossum fired, too?):
|
| "Over the last two years, we have been engaging with the Faster
| CPython team at Microsoft as a sounding board for our ideas."
|
| And revive the subinterpreter approach yet again.
| zem wrote:
| This will work very well with free-threaded Python; you don't
| need subinterpreters. I agree that it's the most promising
| approach I've seen yet.
| OutOfHere wrote:
| Microsoft just fired 3% of its staff, more than it ever did
| before. I would stick with type-checked free-threaded Python with
| locks and queues. Someone should be able to enhance the type
| checker to also check for unsafe mutation of variables.
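|
| (A minimal sketch of that locks-and-queues style, stdlib only;
| it runs the same on GIL and free-threaded builds:)
|
|     import queue
|     import threading
|
|     tasks: queue.Queue[int] = queue.Queue()
|     total = 0
|     total_lock = threading.Lock()
|
|     def worker() -> None:
|         global total
|         while (n := tasks.get()) >= 0:   # negative value = shutdown
|             with total_lock:             # guard the one shared mutation
|                 total += n
|
|     threads = [threading.Thread(target=worker) for _ in range(4)]
|     for t in threads: t.start()
|     for n in range(100): tasks.put(n)
|     for _ in threads: tasks.put(-1)      # one sentinel per worker
|     for t in threads: t.join()
|     assert total == sum(range(100))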
| surajrmal wrote:
| Only 3%. Hard to say whether this effort was affected.
| pansa2 wrote:
| 3% of the whole company - but a lot of Python specialists
| were in that 3%. Including, apparently, the entire "Faster
| CPython" team.
| akkad33 wrote:
| "fearless concurrency" reminds of the buzzword for another
| language
| mgraczyk wrote:
| Is anyone familiar with Instagram's cinder?
|
| https://github.com/facebookincubator/cinder
|
| cinder includes changes for immutable module objects. I wonder if
| the implementation is similar? Or is cinder so old that it would
| be incompatible with the future noGil direction?
___________________________________________________________________
(page generated 2025-05-18 23:01 UTC)