[HN Gopher] Faster CPython at PyCon, part one
___________________________________________________________________
Faster CPython at PyCon, part one
Author : jwilk
Score : 195 points
Date : 2023-05-11 14:21 UTC (8 hours ago)
(HTM) web link (lwn.net)
(TXT) w3m dump (lwn.net)
| titzer wrote:
| I like tinkering with interpreters, and I do it a lot, but this
| is chump change compared to even a baseline compiler, especially
| one that makes use of type profiles. There is a pretty big
| investment in introducing a compiler tier, but I'm not convinced
| that squeezing more out of an interpreter that fundamentally has
| dispatch and dynamic-typing overhead is any less engineering
| effort. There aren't that many bytecode kinds in CPython, so it
| seems worth doing.
| tekknolagi wrote:
| That's what we did in Skybison before it got wound down:
| https://github.com/facebookexperimental/skybison
| titzer wrote:
| Are there publicly-available performance results?
| tekknolagi wrote:
| Kind of! In my fork I run microbenchmarks on each PR. So
| you can see on, for example,
| https://github.com/tekknolagi/skybison/pull/456, that the
| change had a 3.6% improvement on the compilation benchmark.
| If you expand further, you can see a comparison with
| CPython 3.8. Unfortunately Skybison is still on 3.8.
| mardifoufs wrote:
| What's the difference between that and Cinder? I know they
| both come from Meta/IG, but I wonder if Cinder is an evolution
| of Skybison or more of a different approach?
| tekknolagi wrote:
| Cinder and Skybison are different approaches. Skybison was
| a ground-up rebuild of the object model and interpreter,
| whereas Cinder is a JIT that lives on top of CPython.
| Seattle3503 wrote:
| Small improvements to CPython are multiplied out to the
| millions of machines CPython is installed on by default.
| AbsoluteCabbage wrote:
| I prefer C without braces
| albertzeyer wrote:
| > LOAD_ATTR_ADAPTIVE operation can recognize that it is in the
| simple case
|
| How can it do that? The `self` variable could be anything, or
| not?
| Jasper_ wrote:
| If it finds itself not in the simple case it specialized for,
| it reverts back to the unspecialized case. In JIT compiler
| terms, this is called a "guard".
| albertzeyer wrote:
| But isn't this the same as what LOAD_ATTR also does? It first
| checks the simple case, and if that fails, it takes the
| complex fallback route. Why does this need a new op? Or why
| would the standard LOAD_ATTR not do this?
|
| Or are there multiple versions of the compiled function, and
| does it modify the bytecode on each call of the function?
| adrian17 wrote:
| The issue is that there are many different "simple cases".
| The more checks for "fast paths" you add, the more overhead
| you pay checking for each of them individually.
|
| Also consider that:
|
| - this is more branch predictor friendly (a generic opcode
| encounters many different cases, while a specialized opcode
| is expected to almost never fail),
|
| - the inline cache has limited space and is stateful
| (imagine it's a C union of cache structs for different fast
| paths). If your "generic" LOAD_ATTR found a simple case B,
| it'd still need to ensure that its inline cache was
| actually primed for simple case B. This is not an issue
| with specialized opcodes; the cache is set to the correct state
| when the opcode gets specialized.
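| As a rough illustration (toy Python modeling the idea, not
| CPython's actual C code; all names here are made up):
|
|     class AttrCache:
|         def __init__(self, tp, name):
|             self.tp = tp          # the type this site was specialized for
|             self.name = name
|
|     def load_attr_generic(obj, name, cache):
|         # a generic opcode would have to test each possible fast path,
|         # and also check which case its cache is currently primed for
|         if (cache is not None and type(obj) is cache.tp
|                 and name in obj.__dict__):
|             return obj.__dict__[name]      # fast path: plain instance attr
|         return getattr(obj, name)          # full slow lookup
|
|     def load_attr_instance_value(obj, cache):
|         # a specialized opcode: the cache was primed when the opcode was
|         # specialized, so a single guard suffices and it almost never fails
|         if type(obj) is not cache.tp:
|             return getattr(obj, cache.name)  # guard failed: deoptimize
|         return obj.__dict__[cache.name]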
| Sohcahtoa82 wrote:
| If you define a .__getattr__ method, then "self.y" is no longer
| simple.
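| For example (ordinary CPython semantics):
|
|     class Point:
|         def __init__(self):
|             self.y = 1        # plain instance attribute: the "simple" case
|
|     class Lazy:
|         def __getattr__(self, name):
|             # runs for any attribute that isn't found normally, so
|             # attribute access on Lazy objects can execute arbitrary code
|             return 42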
| AbsoluteCabbage wrote:
| I love me some Python, trust me. But this is just putting
| lipstick on a pig. The language needs to be rewritten from the
| ground up. If you've ever analyzed the bytecode a simple
| Python script spits out, you'll see how hopelessly bloated it is.
| leetrout wrote:
| Do you have thoughts around the concept of Python becoming a
| Rust macro based language? I read another comment about that
| being a potential future for Python but I don't know how
| feasible it is.
|
| I've moved on to Go for things where I need to care about
| performance but like thinking about things in a similar way
| Python lets me think about them.
| cfiggers wrote:
| Nooooo if Python gets faster then all the Mojo demos are going to
| sound so much less impressive nooo
| traverseda wrote:
| Meh, mypyc and Codon both do the same thing as Mojo. Codon
| seems to have similar speed improvements, and the source code is
| available.
| jjtheblunt wrote:
| Do they compile to MLIR so when Alteryx writes a code
| generator you can target FPGAs, for example?
| npalli wrote:
| I don't know, looks like ~15% improvement [1]. Doubt the Mojo
| guys are going to lose much sleep (they claimed 35,000x
| improvement).
|
| [1]
| https://speed.python.org/comparison/?exe=12%2BL%2B3.11%2C12%...
| TwentyPosts wrote:
| Somehow I'm still skeptical! Feels like "strict superset of
| Python, all Python code works" and "several orders of
| magnitude faster" just sounds like you're trying to have your
| cake and eat it, too.
|
| I doubt that the Mojo developers have some sort of 'secret
| sauce' or 'special trick' that will get them there. And even
| if they have something, I don't see why the Python devs
| wouldn't just implement the same approach, considering
| they're currently trying to make Python faster.
|
| I assume that (as long as Mojo wants to stick to its goal of
| being a strict superset of Python), there will be a lot of
| things that just cannot be 'fixed'. For example, I'd be
| surprised if Python's Global Interpreter Lock isn't entangled
| with the language in a few nasty ways that'd make it really
| difficult to replace or improve upon (while retaining
| compatibility).
|
| Then again, the developer of Swift is working on it, right? I
| guess he's got the experience, at least.
| umanwizard wrote:
| IDK about 35,000x but Python really is _outrageously_ slow
| relative to other popular languages, even other interpreted
| ones. It's more comparable to something like Ruby than to
| something like JavaScript.
| williamstein wrote:
| From their FAQ: "How compatible is Mojo with Python
| really? Mojo already supports many core features of
| Python including async/await, error handling, variadics,
| etc, but... it is still very early and missing many
| features - so today it isn't very compatible. Mojo
| doesn't even support classes yet!".
|
| Overall, pure Python seems to be about 100x slower than
| what you can reasonably get with a compiled language and
| some hard work. It's about 10x slower than what you can
| get from JITs like PyPy and JavaScript, when such
| comparisons make sense.
|
| I agree that Mojo reminds me of Cython, but with more
| marketing and less compatibility with Python. Cython
| aspired to be nearly a 99% superset of Python, at least
| that's exactly what I pushed my postdocs and graduate
| students to make it be back in 2009 (e.g., there were
| weeks of Robert Bradshaw and Craig Citro pushing each
| other to get closures to fully work). Mojo seems to be a
| similar modern idea for doing the same sort of thing. It
| could be great for the ecosystem; time will tell. Cython
| itself could still improve a lot too -- there is a major
| new 3.0 release just around the corner:
| https://pypi.org/project/Cython/#history
| [deleted]
| pavon wrote:
| I am a little skeptical as well, but I do think there are a
| lot of areas for improvement beyond the changes going into
| mainline. Note that while Mojo intends to support all of
| Python, they never claimed that it will still be fast if
| you make heavy use of all the dynamic features. The real
| limiting factors are how often those dynamic features are
| used, and how frequently the code needs to be checking
| whether they are used.
|
| The fast CPython changes are still very much interpreter-
| centric, and are checking whether assumptions have changed
| on every small operation. It seems to me that if you are
| able to JIT large chunks of code, and then push all the JIT
| invalidation checks into the dynamic features that break
| your assumptions rather than in the happy path that is
| using the assumptions, you ought to be able to get much
| closer to JavaScript levels of performance when dynamic
| features aren't being used.
|
| Then support for those dynamic features becomes a fallback
| way of having a huge ecosystem from day one, even if it is
| only modestly faster than CPython.
| wiseowise wrote:
| Isn't Mojo compiled, while Python needs to be interpreted?
| mikepurvis wrote:
| I think the point is that it's _really_ hard to get those
| gains from "being compiled" when every function or
| method call, every use of an operator, everything is
| subject to multiple levels of dynamic dispatch and
| overrideability, where every variable can be any type and
| its type can change at any time and so has to be
| constantly checked.
|
| A language that's designed to be compiled doesn't have
| these issues, since you can move most of the checking and
| resolution logic to compile-time. But doing that requires
| the language's semantics to be aligned to that goal,
| which Python's most certainly are not.
|
| The achievements of Pypy are pretty incredible in the JIT
| space, but the immense effort that has gone in there
| definitely is a good reason to be skeptical of Mojo's
| claims of being both compiled and also a "superset" of
| Python.
| waterhouse wrote:
| So you want a system that figures out that the vast
| majority of function and method calls are _not_
| overridden, that these variables and fields are in
| practice always integers, etc.; generates some nice
| compiled code, and adds some interlock so that, if these
| assumptions _do_ change, the code gets invalidated and
| replaced with new code (and maybe some data structures
| get patched).
|
| I think that this kind of thing is in fact done with the
| high-performing Javascript engines used in major
| browsers. I imagine PyPy, being a JIT, is in a position
| to do this kind of thing. Perhaps Python is more
| difficult than Javascript, and/or perhaps a lot more
| effort has been put into the Javascript engines than into
| PyPy?
| TwentyPosts wrote:
| Mojo offers both Just-In-Time and Ahead-Of-Time models.
| Proper AOT compilation is going to offer a
| significant speedup, but probably not enough to get to
| their stated goals. And good luck carrying all of
| Python's dynamic features across the gap.
| Conscat wrote:
| Well, they do have MLIR, which is probably the closest
| thing to "secret sauce" they've got. I'm excited by some of
| the performance-oriented features like tiling loops, but
| they'll need the multitude of optimization-hints that GCC
| and Clang have, too. They also have that parallel runtime,
| which seems similar to Cilk to me, and Python fundamentally
| can never have that as I understand it.
| spprashant wrote:
| The way I read it, Python code as it is won't see a huge
| bump; it's like 10x or something.
|
| You have to use special syntax and refactoring to get the
| 1000x speedups. The secret sauce is essentially a whole new
| language within the language, which likely helps skip the
| GIL issues.
| tgma wrote:
| That comparison seems quite cherry picked. Unlikely that it
| is generalizable. In my tests with Mojo, it doesn't seem to
| behave like Python at all so far. Once they add the dynamism
| necessary we can see where they land. I'm still optimistic
| (mostly since Python has absolute garbage performance and
| there's low hanging fruit) but it's no panacea. I feel their
| bet is not to have to run Python as is but have Python
| developers travel some distance and adapt to some
| constraints. They just need enough momentum to make that
| transition happen but it's a transition to a distinct
| language, not a Python implementation. Sort of what Hack is
| to PHP.
| ehsankia wrote:
| 35,000x suddenly becomes 30,000x, not as impressive!
| KeplerBoy wrote:
| I don't buy the mojo hype.
|
| The concept of compiling python has been tried again and
| again. It has its moments, but anything remotely important is
| glued in from compiled code anyways.
| grumpyprole wrote:
| Mojo compiles "Python" to heterogeneous hardware; I don't
| believe that has been tried before.
| KeplerBoy wrote:
| It has.
|
| There's Numba, CuPy, Jax and torch.compile. Arguably they
| are more like DSLs that happen to integrate into Python
| than regular Python.
|
| Of course I don't know what Mojo will actually bring to
| the table since their documentation doesn't mention
| anything GPU specific, but the idea isn't completely
| novel.
| ptx wrote:
| From a quick reading of the Mojo website, it sounds like
| gluing in compiled code is exactly what they're doing,
| except this time the separate compiled language happens to
| look sort of like Python a bit. For the actual Python code
| it still uses CPython, so that part doesn't get any faster.
| morelisp wrote:
| Pyrex, Cython, mypyc, Codon... we've been down this road
| before plenty too.
|
| If Mojo succeeds it will be purely based on quality of
| implementation, not a spark of genius.
| sfpotter wrote:
| Cython works and is widely used. Problem is that there
| aren't too many people working on it (from what I can
| tell), so the language has some very rough edges. It
| seems like it has mostly filled the niche of "making good
| wrappers of C, C++, and Fortran libraries".
|
| Cython has always been advertised as a potential
| alternative to writing straight Python, and there are
| probably a decent number of people who do this. I work in
| computational science and don't personally know anyone
| that does. I use it myself, but it's usually a big lift
| because of the rough edges of the language and the sparse
| and low quality documentation.
|
| If Cython had 10x as many people working on it, it could
| be turned into something significantly more useful. I
| imagine there's a smarter way of approaching the problem
| these days rather than compiling to C. I hope the Mojo
| guys pull off "modern and actually good Cython"!
| morelisp wrote:
| Cython was, of course, "modern and actually good Pyrex."
|
| In the end the way the industry works guarantees an
| endless stream of these before some combination of
| boredom and rentiership result in each getting abandoned.
| It's just a question of whether Mojo lasts 1, 3, or god
| willing 5 years on top.
| sfpotter wrote:
| Is it the way the industry works, or is it the way open
| source works? Pyrex had basically one person behind it
| (again, from what I can tell), and Cython currently has
| ~1 major person behind it. Not enough manpower to support
| such large endeavors.
|
| Ideally, government or industry would get behind these
| projects and back them up. Evidently Microsoft is doing
| that for Python. For whatever reason, Cython and Pyrex
| both failed to attract that kind of attention. Hopefully
| it will be different with Mojo.
|
| Here's to 5 years!
| KeplerBoy wrote:
| There's also Numba, which seems a lot more active and
| compiles directly to LLVM IR.
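| For anyone unfamiliar, typical Numba usage looks roughly like
| this (a minimal sketch; requires the numba and numpy packages):
|
|     import numpy as np
|     from numba import njit
|
|     @njit                         # compiled through LLVM on first call
|     def sum_of_squares(xs):
|         s = 0.0
|         for x in xs:              # a plain Python loop, but compiled
|             s += x * x
|         return s
|
|     print(sum_of_squares(np.arange(1_000_000, dtype=np.float64)))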
| [deleted]
| pdpi wrote:
| The "spark of genius" might very well be a novel
| implementation strategy.
| neolefty wrote:
| It leverages the Multi-Level Intermediate Representation
| (MLIR) https://mlir.llvm.org/ which is a follow-on from
| LLVM, with lots of lessons learned:
| https://www.hpcwire.com/2021/12/27/lessons-from-llvm-an-
| sc21...
| morelisp wrote:
| Again, "compile to an IR" isn't exactly ground-breaking.
| The devil will be in the details.
| posco wrote:
| Great implementation is genius.
| morelisp wrote:
| But grueling and desperately dependent on broader trends.
|
| I wouldn't want to bet on "lots of work for a chance to
| get incrementally better."
| klyrs wrote:
| I've been rewriting Python->C for nearly 20 years now. The
| expected speedup is around 100x, or 1000x for numerical stuff
| or allocation-heavy work that can be done statically.
| Whenever you get 10,000x or above, it's because you've
| written a better algorithm. You can't generalize that. A 35k
| speedup is a cool demo but should be regarded as hype.
| sk0g wrote:
| I wrote a simple Monte Carlo implementation in Python 3.11
| and Rust. Python managed 10 million checks in a certain
| timeframe, while Rust could perform 9 billion checks in
| the same timeframe. That's about a 900x speedup, if I'm not
| mistaken. I suspect Mojo's advertised speedup is through
| the same process, except on benchmarks that are not
| dominated by syscalls (RNG calls in this instance).
|
| The one difference was the Rust one used a parallel
| iterator (rayon + a one-liner change), whereas I have found
| parallelism in Python to be more pain than it's worth for
| most use cases.
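| For context, the pure-Python side of that kind of benchmark
| looks roughly like this (a pi-estimation sketch; the original
| code isn't shown, so treat this as illustrative only):
|
|     import random
|
|     def estimate_pi(n):
|         hits = 0
|         for _ in range(n):
|             x, y = random.random(), random.random()
|             if x * x + y * y <= 1.0:    # the per-iteration "check"
|                 hits += 1
|         return 4 * hits / n
|
|     print(estimate_pi(10_000_000))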
| kouteiheika wrote:
| > The expected speedup is around 100x, or 1000x for
| numerical stuff or allocation-heavy work that can be done
| statically. Whenever you get 10,000x or above, it's because
| you've written a better algorithm.
|
| Anecdotally I recently rewrote a piece of Python code in
| Rust and got ~300x speedup, but let's be conservative and
| give it 100x. Now let's extrapolate from that. In native
| code you can use SIMD, and that can give you a 10x speedup,
| so now we're at 1000x. In native code you can also easily
| use multiple threads, so assuming a machine with a
| reasonably high number of cores, let's say 32 of them
| (because that's what I had for the last 4 years), we're now
| at 32000x speedup. So to me those are very realistic
| numbers, but of course assuming the problem you're solving
| can be sped up with SIMD and multiple threads, which is not
| always the case. So you're probably mostly right.
| klyrs wrote:
| Trivially parallelizable algorithms are definitely in the
| "not generally applicable" regime. But you're right,
| they're capable of hitting arbitrarily large, hardware-
| dependent speedups. And that's definitely something a
| sufficiently intelligent compiler should be able to
| capture through dependency analysis.
|
| Note that I don't doubt the 35k speedup -- I've seen
| speedups into the millions -- I'm just saying there's no
| way that can be a representative speedup that users
| should expect to see.
| piyh wrote:
| Python can use multiprocessing with a shared nothing
| architecture to use those 32 cores.
| malodyets wrote:
| I was about to say the same thing.
|
| Multiprocessing in Python works great and isn't even very
| hard if you use, say, apply_async with a Pool.
|
| Comparing single-threaded Python with multiprocessing in
| Language X is unfair, if not disingenuous.
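| For example (a minimal sketch of that pattern):
|
|     from multiprocessing import Pool
|
|     def work(n):                    # must be importable/picklable
|         return sum(i * i for i in range(n))
|
|     if __name__ == "__main__":
|         with Pool() as pool:        # defaults to one worker per core
|             async_results = [pool.apply_async(work, (2_000_000,))
|                              for _ in range(8)]
|             print([r.get() for r in async_results])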
| heavyset_go wrote:
| There's a serialization overhead both on dispatch and
| return that makes multiprocessing in Python unsuitable
| for some problems that would otherwise be solved well
| with threads in other languages.
| mlyle wrote:
| > Multiprocessing in Python works great and isn't even
| very hard if you use, say, apply_async with a Pool.
|
| Multiprocessing works great if you don't really need a
| shared memory space for your task. If it's very loosely
| coupled, that's fine.
|
| But if you use something that can benefit from real
| threading, Python clamps you to about 1.5-2.5 cores worth
| of throughput very often.
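| A rough way to see that clamp for yourself (illustrative sketch;
| exact numbers depend on machine and Python version):
|
|     import time
|     from concurrent.futures import ThreadPoolExecutor
|
|     def burn(n):                    # CPU-bound pure-Python work
|         s = 0
|         for i in range(n):
|             s += i * i
|         return s
|
|     start = time.perf_counter()
|     with ThreadPoolExecutor(max_workers=8) as ex:
|         list(ex.map(burn, [2_000_000] * 8))
|     print(time.perf_counter() - start)  # barely faster than running the
|                                         # 8 calls sequentially, because
|                                         # the GIL serializes the bytecode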
| bombolo wrote:
| Hear me out... we can write bad Python code to justify
| impressive speed boosts when rewriting it in Rust.
|
| That way we can justify rewriting stuff in Rust to our
| bosses!
|
| If we write decent Python, and perhaps even change one
| line to use PyPy, the speedup won't be impressive and we
| won't get to play with Rust!
| coldtea wrote:
| And each of these threads will still have the Python
| interpreter performance.
|
| Nothing prevents something like Mojo from also using those
| same 32 threads, but with 10-100x the performance instead.
| vlovich123 wrote:
| The other languages are not taking/releasing a globally
| mutually exclusive GIL every time it crosses an API
| boundary and thus "shared nothing" in those languages is
| truly shared nothing. Additionally, Python's
| multiprocessing carries a lot of restrictions which makes
| it hard to pass more complex messages.
| nomel wrote:
| > The expected speedup is around 100x, or 1000x for
| numerical stuff
|
| What if you stay in the realm of numpy?
|
| What's the biggest offender that you see?
| klyrs wrote:
| > What if you stay in the realm of numpy?
|
| You mean, what if you're only doing matrix stuff? Then
| it's probably easier to let numpy do the heavy lifting.
| You'll probably take less than a 5x performance hit, if
| you're doing numpy right. And if you're doing matrix
| multiplication, numpy will end up faster because it's
| backed by a BLAS, which mortals such as myself know
| better than to compete with.
|
| > What's the biggest offender that you see?
|
| Umm... every line of Python? Member access. Function
| calls. Dictionaries that can fundamentally be mapped to
| int-indexed arrays. Reference counting. Tuple allocation.
|
| One fun exercise is to take your vanilla python code,
| compile it in Cython with the -a flag to produce an HTML
| annotation. Click on the yellowest lines, and it shows
| you the gory details of what Cython does to emulate
| CPython. It's not exactly what CPython is doing (for
| example, Cython elides the virtual machine), but it's
| close enough to see where time is spent. Put the same
| code through the python disassembler "dis" to see what
| virtual machine operations are emitted, and paw through
| the main evaluation loop [1]; or take a guided
| walkthrough at [2].
|
| [1] https://github.com/python/cpython/blob/v3.6.14/Python
| /ceval.... (note this is an old version, you can change
| that in the url)
|
| [2]
| https://leanpub.com/insidethepythonvirtualmachine/read
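| Concretely, the dis half of that exercise is just this (the
| emitted opcode names vary by Python version):
|
|     import dis
|
|     def f(p):
|         return p.x + p.y
|
|     dis.dis(f)   # LOAD_FAST / LOAD_ATTR / BINARY_ADD (BINARY_OP on
|                  # 3.11+): every attribute access and operator is its
|                  # own virtual machine instruction
|
| and the Cython half is roughly `cython -a yourmodule.pyx`, then open
| the generated HTML and click the yellow lines.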
| m3affan wrote:
| This guy cythons
| vbarrielle wrote:
| Because you can fuse multiple operations in C++ (whereas you
| often have intermediate arrays in numpy), I routinely get 20x
| speedups when porting from numpy to C++. Good libraries like
| Eigen help a lot.
| arjvik wrote:
| What's an example of fusing operations?
|
| Are you talking about combinations of operations that are
| used commonly enough to warrant Eigen methods that
| perform them at once in SIMD?
| celrod wrote:
| Probably that Eigen uses expression templates to avoid
| the needless creation of temporaries.
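| A small numpy illustration of what fusion avoids (a sketch; the
| out= trick only removes one of the temporaries):
|
|     import numpy as np
|
|     a, b, c = (np.random.rand(1_000_000) for _ in range(3))
|
|     r = a * b + c                 # numpy materializes a * b as a full
|                                   # temporary array, then makes a second
|                                   # pass (and allocation) for the + c
|
|     tmp = np.multiply(a, b)       # manual "fusion-ish" rewrite:
|     r2 = np.add(tmp, c, out=tmp)  # reuses the buffer, still two passes;
|                                   # Eigen's expression templates compile
|                                   # the whole expression into one loop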
| make3 wrote:
| If it's an improvement that big, it needs to involve GPUs, and
| GPUs can already be used normally with torch etc.
| Alex3917 wrote:
| How does that reconcile with the benchmarks here, which say
| that Python 3.12 is currently somewhere between 5% slower and
| 5% faster?
|
| https://github.com/faster-cpython/benchmarking-public
| albertzeyer wrote:
| JAX or torch.compile also do something similar to Mojo, right?
| brrrrrm wrote:
| Once fused into super-instructions/adaptive instructions, does
| `dis` still allow users to manipulate the bytecode as if it
| were not optimized?
|
| That'd be a very nice feature and I'm not sure it exists in other
| languages.
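| On 3.11+ the dis module can at least show both views (a quick
| sketch; the exact specialized names depend on the CPython
| version, and the warm-up threshold isn't documented):
|
|     import dis
|
|     class C:
|         pass
|
|     def f(o):
|         return o.x
|
|     c = C()
|     c.x = 1
|     for _ in range(100):          # let the specializing interpreter warm up
|         f(c)
|
|     dis.dis(f)                    # the original, unspecialized opcode names
|     dis.dis(f, adaptive=True)     # 3.11+: may show specialized forms such
|                                   # as LOAD_ATTR_INSTANCE_VALUE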
| low_tech_punk wrote:
| When will such goodness come to JavaScript engines? I'm curious
| if there are some fundamental design flaws that make JavaScript
| difficult to optimize.
| Jasper_ wrote:
| JavaScript engines have been using inline caches and
| specialization for over 10 years now. It's Python that's
| catching up to JavaScriptCore from 2007.
| brrrrrm wrote:
| I wonder if the parent comment was made tongue-in-cheek :)
| formerly_proven wrote:
| I mean, JavaScript is definitely gross to optimize; if you'd
| design a language for easy optimization and JITing it would
| look different. Python is just way worse, especially because of
| the large and rich CPython C API, which used invasive
| structures almost everywhere (it has been slowly moving away
| from that). The CPython people, GvR in particular, always
| considered it very important for the codebase to remain simple
| as well, and it is. CPython is a really simple compiler and
| straightforward bytecode interpreter.
| esaym wrote:
| Node.js has been on par with Java for many years now due to the
| JIT and the tons of money Google has pumped into that ecosystem:
| https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
| robertlagrant wrote:
| It would be cool if they'd put some resources into PyPy as well.
| Feels like that project is such a great achievement, and with
| just a bit more resources it would've been an amazing alternative.
| As it is, it takes a long time to get the new language features
| out, which always hamstrings it a little.
| adamckay wrote:
| There's a limited number of volunteers with the experience to
| improve (C)Python, though, so the resources are better spent on
| improving the CPython implementation to such a point where PyPy
| is no longer necessary.
| girvo wrote:
| See: HHVM and PHP. HHVM is what lit the fire under the PHP
| team's collective butts to improve perf, and now most just
| run the mainline PHP interpreter (well, they did years ago
| when all this happened and I was still doing PHP)
| qaq wrote:
| Now there are a number of paid MS employees led by Guido van
| Rossum working on improving (C)Python.
| Spivak wrote:
| Speaking of Python speed, you gotta hand it to the core Python
| devs for having some really dedicated numerical analysis folks. A
| really good gentle introduction talk:
| https://www.youtube.com/watch?v=wiGkV37Kbxk
|
| Python really has earned its place as the language of choice for
| scientific computing, research, and ML. I've stolen quite a few
| algos from CPython.
| wheelerof4te wrote:
| "Speaking of Python speed you gotta hand it to the core Python
| devs having some really dedicated numerical analysis folks."
|
| Well, they are _Python_ devs after all.
| JaDogg wrote:
| Shameless plug: https://yakshalang.github.io/
|
| This is my passion project. (Less activity these days as I'm
| sick)
| mpenick wrote:
| I hope you feel better. This is a neat project and I'm looking
| forward to trying it out.
| tpoacher wrote:
| +1 same here
| [deleted]
| [deleted]
| TheMagicHorsey wrote:
| Python's real strength at this point is the ecosystem. If you
| want speed you wouldn't use Python. Having said that, it's better
| for the planet if default Python is more efficient. Everyone has
| seen Python code doing stuff Python code should not do at scale.
| mardifoufs wrote:
| Yes, we all know Python is slow. But that's not a virtue of the
| language; it's not a feature that anyone really wants. We live
| with it, but increasing speed is always a good thing, which is
| why even Python's creator is making it his priority right now.
| melenaboija wrote:
| Thanks for the information, it has never been said before in HN
| when an improvement to Python happens and I am sure there is
| still someone out there thinking that Python is fast.
|
| BTW from the PEP:
|
| "Motivation
|
| Python is widely acknowledged as slow"
| jerf wrote:
| There are a lot of people out there who interpret "Python is
| a slow language" as an _attack_, rather than just something
| that engineers need to know and account for at solution time.
|
| There are also a non-trivial number of people out there who
| read about the constant stream of performance improvements
| and end up confusing that with absolute high performance; see
| also the people who think Javascript JITs mean Javascript is
| generally at C performance, because they've consumed too many
| microbenchmarks where it was competitive with C on this or
| that task, and got 500% faster on this and 800% faster on
| that, and don't know that in totality it's still on the order
| of 10x slower than C.
|
| It is, unfortunately, something that still needs to be
| pointed out.
| melenaboija wrote:
| I am more of the opinion that people who truly need
| performance already know about the differences between Python
| and C, and in my experience slow Python code is almost always
| due to a lack of basic CS knowledge rather than the language
| per se.
|
| Maybe my comment was too mean, but every time that
| something related to Python core enhancements appears
| someone has to say that C is faster and I am not sure if
| someone interested in how the generated bytecode improves
| the performance needs to be reminded that C is faster.
| jerf wrote:
| I write a lot of code in the "Python would be really
| chugging here, though it would get the job done if you
| put a bit of elbow grease into it, but Go chows right
| through this no problem with no special effort" space.
| I'm not generally in the "keeping 100 servers 90%
| occupied" space, but it wouldn't take much more in
| percentage terms for me to fall off of Python. Python's
| slowness is definitely real, as well as its general
| difficulty in keeping multiple CPUs going. (Python has a
| lot of solutions to this problem precisely _because_ none
| of them are really all that good, compared to a language
| that is 1. compiled and 2. has a solid threading story.
| It wouldn't have many solutions if one of them was an
| actual solution to the problem.)
|
| It isn't just Python. I'm getting moved more into a PHP
| area right now, and there's a system in there that I need
| to analyze to see whether its performance problems are
| architectural, or if it's just not a task PHP should be
| doing and Go(/Rust/C#/Java/... Go is just the one of that
| set most in my and my team's toolbelt) would actually
| perform just fine. I don't _know_ which it is yet, but
| based on what I know at the moment, both are plausible.
|
| And I'm not a "hater". I just have a clear view of the
| tradeoffs involved. There are many tasks for which Python
| is more power than one knows what to do with. Computers
| have gotten pretty fast.
| morelisp wrote:
| Last year we had a team switch from Java to Python for what
| should be a high-performance API, because "they're really
| speeding up Python 3 these days."
|
| It's now at least 2x off its original perf target and only
| the trivial case is being handled correctly.
|
| I think reminding people is good.
___________________________________________________________________
(page generated 2023-05-11 23:01 UTC)