Post 9xWVTdP71ZlPcoeZ6W by allison@blob.cat
(DIR) Post #9xU89Fdcfc3O4Wx0SG by sir@cmpwn.com
2020-07-26T17:38:06Z
1 likes, 0 repeats
There is no good in the world, god is dead
(DIR) Post #9xU8UN4qCubvE0kj7A by wolf480pl@mstdn.io
2020-07-26T17:42:49Z
1 likes, 0 repeats
@sir There is good, it's just that people are obstructing it. Remove all humans and what remains will be 100% good
(DIR) Post #9xU9CZLiGCv4oobJUu by rune@mastodon.nzoss.nz
2020-07-26T17:50:00Z
0 likes, 0 repeats
@wolf480pl @sir Good is a human concept. Remove all humans and nothing is good.
(DIR) Post #9xUAY9udc7uUFknNNQ by wolf480pl@mstdn.io
2020-07-26T18:05:47Z
2 likes, 0 repeats
@rune @sir "is" is a human concept. Remove all humans and nothing is.
(DIR) Post #9xUAiPXaMbN1cQtwkC by rune@mastodon.nzoss.nz
2020-07-26T18:07:45Z
0 likes, 0 repeats
@wolf480pl @sir Yes, that's what I said.
(DIR) Post #9xUAiqxkPRLAfOLils by nocko@makerdon.org
2020-07-26T18:06:24Z
0 likes, 0 repeats
@sir this rings true.
(DIR) Post #9xUAtRMvuHGoTTB66S by wolf480pl@mstdn.io
2020-07-26T18:09:43Z
0 likes, 0 repeats
@rune by that logic, we can conclude that the speed of light is 3 * 10^8 m/s only if humans exist.
(DIR) Post #9xUBUqYTEDPNInPYXo by rune@mastodon.nzoss.nz
2020-07-26T18:15:52Z
0 likes, 0 repeats
@wolf480pl The logic is as sound as equating removal of all humans to "100% good". As if other animals aren't just as cruel and good as humans. Dolphins have been observed helping humans for no apparent reason, and orcas observed hunting for sport and playing with their prey. Either we attribute all definitions of "good/bad" to only humans, or we attribute them equally to all animals. Can't have it both ways.
(DIR) Post #9xUBWNDVHI5GtHKeYK by wolf480pl@mstdn.io
2020-07-26T18:16:47Z
0 likes, 0 repeats
@rune but playing with prey is good...
(DIR) Post #9xUBdrAMKFGlntQh4S by rune@mastodon.nzoss.nz
2020-07-26T18:18:06Z
0 likes, 0 repeats
@wolf480pl Except if you're a human. Then it's animal cruelty.
(DIR) Post #9xUCEKyAzhZ6f97u7s by wolf480pl@mstdn.io
2020-07-26T18:24:46Z
0 likes, 0 repeats
@rune hm... yeah, you're right. I guess what I really wanted to say is, if all humanity were gone, all of Drew's problems would also be gone.
(DIR) Post #9xUKOCatmusxDRSBG4 by ml123@fosstodon.org
2020-07-26T19:54:02Z
0 likes, 0 repeats
@sir There is no good in yourself, first of all. Then why push your rat-shit onto everyone?
(DIR) Post #9xULwC6uC1T3W1CGqu by josias@theres.life
2020-07-26T20:12:11Z
0 likes, 0 repeats
@sir No good != God is dead.
(DIR) Post #9xUM5u2SEpbaMXkDUe by sir@cmpwn.com
2020-07-26T20:14:25Z
0 likes, 0 repeats
@josias correct, but god *is* dead, and there *is* no good.
(DIR) Post #9xUMFEhYmbvJ3PQCoK by josias@theres.life
2020-07-26T20:15:49Z
0 likes, 0 repeats
@sir Are those just two unrelated statements or does one follow from another?
(DIR) Post #9xUMMazkINPwDrRASu by sir@cmpwn.com
2020-07-26T20:16:34Z
0 likes, 0 repeats
@josias they're unrelated
(DIR) Post #9xUMtCGdFklf5Tsgs4 by wolf480pl@mstdn.io
2020-07-26T20:24:11Z
0 likes, 0 repeats
@rune on top of that, I think the non-human part of the universe is fascinating to study, can be understood, is really satisfying to understand, and once you understand it it all fits together in a way that makes sense. OTOH, humans are hard to understand, do a lot of things that make no sense, and create needless complexity. Well, then there is the Fraunhofer equation... sigh...
(DIR) Post #9xUN9SbOHwJCNlLVAm by josias@theres.life
2020-07-26T20:27:04Z
0 likes, 0 repeats
@wolf480pl @sir What does "good" mean then? It is entirely subjective, and if we remove people, there is no one to decide whether something is good or bad. Nature doesn't exhibit morality; people do. So it wouldn't be "100%" good if people didn't exist. It would be 100% neutral.
(DIR) Post #9xUNgrXQb6gtHCQ51U by wolf480pl@mstdn.io
2020-07-26T20:33:09Z
0 likes, 0 repeats
@josias @sir see the rest of the thread
(DIR) Post #9xUUIxJOBB93ubOt9c by emacsomancer@fsmi.social
2020-07-26T21:38:11.508511Z
1 likes, 0 repeats
@sir God is dead. God remains dead. And we have killed him. How shall we comfort ourselves, the murderers of all murderers? What was holiest and mightiest of all that the world has yet owned has bled to death under our knives: who will wipe this blood off us? What water is there for us to clean ourselves? What festivals of atonement, what sacred games shall we have to invent? Is not the greatness of this deed too great for us? Must we ourselves not become gods simply to appear worthy of it?
(DIR) Post #9xUvoXoXDCTurTFuvQ by wyatwerp@fsmi.social
2020-07-27T02:54:55.581653Z
1 likes, 0 repeats
@wolf480pl Well said. Replace "good" with "self-regulated", to keep the language lawyers happy? God is beyond our comprehension, by definition. Like the cosmos. Or the depths of the ocean. Instead, somewhat reachable is Gaia (as in the hypothesis). Doing right by that might be a start. You know, try to stick to local sources for all ye needs so that you know if it is produced in a self-regulated way or not. This could go far, even to choosing only shale oil over Gulf oil for cars.
(DIR) Post #9xUvtXAB3kOcI4l43k by icedquinn@blob.cat
2020-07-27T02:56:31.376227Z
0 likes, 0 repeats
@wyatwerp @wolf480pl buy local kind of seems more like a meme that serves small businesses than necessarily making something more karmically clean. :blobcatshrug:
(DIR) Post #9xUyPtGkbZew1gOCAa by wyatwerp@fsmi.social
2020-07-27T03:21:10.542188Z
1 likes, 0 repeats
@icedquinn @wolf480pl yeah, I was not detailed in saying that. Locally produced, so that you know what went into making/procuring it. I have had good success just asking about locally-procured stuff in different stores, and they respond to the visible demand by getting it. Small business has to see a business justification, yes, but they can be fairly responsive just on account of being small.
(DIR) Post #9xUznuSVvuqOcVuOPY by zachdecook@social.librem.one
2020-07-27T03:38:31Z
0 likes, 0 repeats
@sir Is it internally consistent for someone to be a nihilist **and** promote a software development and design philosophy as being more good than other philosophies?
(DIR) Post #9xVGNmQsqnbPQ1GwXA by wolf480pl@mstdn.io
2020-07-27T06:46:00Z
0 likes, 0 repeats
@wyatwerp Have you seen locally produced computers? @icedquinn
(DIR) Post #9xVwjDwdx5Fgu12FeK by wyatwerp@fsmi.social
2020-07-27T14:33:12.278980Z
0 likes, 0 repeats
@wolf480pl Why? Did your country close its silicon manufacturing and outsource it to Taiwan? @icedquinn
(DIR) Post #9xVwjEeFKy895FR4c4 by wolf480pl@mstdn.io
2020-07-27T14:40:29Z
0 likes, 0 repeats
@wyatwerp @icedquinn The best microprocessor ever manufactured in Poland was an 8080 clone. The fab that made it was closed down in 1994.
(DIR) Post #9xVzu7BbijPKzMv4EK by wyatwerp@fsmi.social
2020-07-27T14:56:02.368864Z
0 likes, 0 repeats
@wolf480pl Ah! Cold country? Yeah, I don't know what they do for vegetables there; computers might theoretically be easier to make! Seriously, in case there are countries dependent on 1000-mile imports for even food, I am not sure enough computers are imported to matter overall. Population being lower means they can go a long way with non-local consumption without bothering Gaia. @icedquinn
(DIR) Post #9xVzu7pJL7AOyVUm7E by wolf480pl@mstdn.io
2020-07-27T15:16:04Z
0 likes, 0 repeats
@wyatwerp @icedquinn but you still get shitty computers with shitty firmware made by people cutting corners. Maybe Gaia doesn't suffer from that, but you do.
(DIR) Post #9xWTwLAzfq2XaAbrrU by wyatwerp@fsmi.social
2020-07-27T20:31:57.369079Z
0 likes, 0 repeats
@wolf480pl Pay more and get a non-shitty computer? Maybe by sharing costs with others? Cheap computers being shitty is not a surprise. @icedquinn
(DIR) Post #9xWTwM8Y6Pg8YsTQgq by wolf480pl@mstdn.io
2020-07-27T20:52:37Z
0 likes, 0 repeats
@wyatwerp @icedquinn All CPUs I'm aware of are either vulnerable to Spectre (read: shitty) or orders of magnitude slower. All motherboard vendors either go for the least effort with their firmware, making notoriously broken BIOSes, or are control freaks with heaps of DRM and vendor lock-in. Now, I can live with that. I can forgive them, pick a less bad option, and accept its drawbacks. Some people can't.
(DIR) Post #9xWU7FjmEhWxto5Bvk by wolf480pl@mstdn.io
2020-07-27T20:54:36Z
0 likes, 0 repeats
@wyatwerp @icedquinn Imagine if all the books in the world lacked punctuation. And you were one of the few people aware that punctuation is necessary.
(DIR) Post #9xWVAgsvrGS6v2ZHxA by icedquinn@blob.cat
2020-07-27T21:06:28.922593Z
0 likes, 0 repeats
@wolf480pl @wyatwerp Do SPARC, POWER and ARM actually have spectre/meltdown issues? I was always more worried about the entire para-OS they run in secret :blobcatgrimacing:
(DIR) Post #9xWVKlcZ6XiyVGiXiK by newt@stereophonic.space
2020-07-27T21:08:09.807861Z
0 likes, 0 repeats
@icedquinn @wolf480pl @wyatwerp SPARC is mostly dead and irrelevant. POWER and ARM both have out-of-order implementations, so they very well might. In fact, Spectre was verified on some ARM CPUs as well.
(DIR) Post #9xWVTdP71ZlPcoeZ6W by allison@blob.cat
2020-07-27T21:09:55.092048Z
0 likes, 0 repeats
@newt @icedquinn @wolf480pl @wyatwerp UltraSPARC T series and POWER6 are probably the only reasonably modern server CPUs that are in-order; it's safe to assume that everything else from the late 90s onwards is affected by at least some variant of Spectre (and some ARM chips are vulnerable to Meltdown as well)
(DIR) Post #9xWVYY7c5hzYgroOfo by wolf480pl@mstdn.io
2020-07-27T21:10:44Z
0 likes, 0 repeats
@icedquinn @wyatwerp AFAIK only Intel has Meltdown-type vulns. These are a more obvious fuckup. Spectre is more-or-less an emergent behaviour of out-of-order CPUs, and it's hard to make one without Spectre AFAIK. ARM doesn't run a para-OS unless the board manufacturer puts one there. Some ARMs are in-order; they are slower and don't have Spectre. The faster, out-of-order ARMs AFAIK are vulnerable to Spectre. SPARC is more-or-less dead. I'm curious about POWER9, but it's probably vulnerable.
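The Spectre mechanism being discussed here can be sketched as a toy simulation: a bounds check is "speculatively" bypassed, the out-of-bounds load leaves a footprint in a modeled cache, and an attacker recovers the byte from that footprint. This is a simulation only, with made-up names; no real speculation or timing is involved.

```python
# Toy model of a Spectre-v1-style leak. The real attack relies on the CPU
# executing past a bounds check before the branch resolves; here we just
# model the mis-speculated path and the cache side channel directly.

SECRET = b"hunter2"
public = b"AAAA"
probe_cached = set()  # models which probe-array cache lines are resident

def victim_speculative(i):
    # models the mis-speculated path: the bounds check on `public` is
    # bypassed, the out-of-bounds read reaches the secret, and the
    # dependent load leaves a cache footprint before the CPU rolls back
    data = (public + SECRET)[i]
    probe_cached.add(data)

def attacker_recover(i):
    probe_cached.clear()
    victim_speculative(i)
    # "time" each probe line: the one that is cached reveals the byte
    return next(iter(probe_cached))

leaked = bytes(attacker_recover(len(public) + k) for k in range(len(SECRET)))
print(leaked)  # recovers SECRET via the simulated cache side channel
```

In-order cores largely avoid this because they do not run the load before the bounds-check branch resolves, which matches the in-order/out-of-order split described in the posts above.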
(DIR) Post #9xWVa2pPPXgmDL0fi4 by newt@stereophonic.space
2020-07-27T21:11:05.668394Z
0 likes, 0 repeats
@allison @icedquinn @wolf480pl @wyatwerp
>UltraSPARC T series and POWER6 are probably the only reasonably modern server CPUs that are in-order
>reasonably modern
My sarcasm sense is tingling :ablobfoxhyperthinking:
(DIR) Post #9xWViB6AXuZ0BGjAo4 by allison@blob.cat
2020-07-27T21:12:32.546856Z
1 likes, 0 repeats
@newt @icedquinn @wolf480pl @wyatwerp I'm actually not being sarcastic, I'm just comparing to pretty much every other in-order CPU in existence
(DIR) Post #9xWVkXuCilKzUjGC48 by wolf480pl@mstdn.io
2020-07-27T21:12:54Z
1 likes, 0 repeats
@newt @allison @icedquinn @wyatwerp 2007 sounds reasonably modern to me. Same era as Core 2, and I still have some Core 2-based Xeons in prod. It's definitely not retro.
(DIR) Post #9xWVq2hZeGqoSIyQDo by wolf480pl@mstdn.io
2020-07-27T21:13:55Z
0 likes, 0 repeats
@allison @icedquinn @wyatwerp @newt what about the Cortex-A53? It's in-order, and newer than those two
(DIR) Post #9xWW18W0NIOVfeqthw by allison@blob.cat
2020-07-27T21:15:54.098191Z
0 likes, 0 repeats
@wolf480pl @icedquinn @wyatwerp @newt Newer, but I'm not sure how it benches next to the in-order UltraSPARCs or *especially* POWER6 (which I would readily expect to trounce it). I also forgot the in-order BlueGene POWER core IBM just freed, but that might as well not even count since it wasn't available for general use until now.
(DIR) Post #9xWW4EnLU2Le12RSOO by newt@stereophonic.space
2020-07-27T21:16:33.030835Z
0 likes, 0 repeats
@allison @icedquinn @wolf480pl @wyatwerp alright, I take your challenge and raise it with... Itanium. It was totally and completely in-order with the last CPU model released in 2017. Unfortunately, it was still a piece of piss-flavoured crap.
(DIR) Post #9xWW75MB1ymnc25l7Q by wolf480pl@mstdn.io
2020-07-27T21:16:59Z
0 likes, 0 repeats
@newt @allison @icedquinn @wyatwerp Itanium was so beautiful in theory tho...
(DIR) Post #9xWW89AQLibJih6Ixk by allison@blob.cat
2020-07-27T21:17:14.482661Z
0 likes, 0 repeats
@newt @icedquinn @wolf480pl @wyatwerp Itanium is a special case, as are VLIW chips in general.
(DIR) Post #9xWWF7kA2ZsFKd8wds by wolf480pl@mstdn.io
2020-07-27T21:18:26Z
0 likes, 0 repeats
@allison @icedquinn @wyatwerp @newt TIL Nvidia Tegra is VLIW
https://en.wikipedia.org/wiki/Project_Denver
(DIR) Post #9xWWIEKI1SrAcHWVc0 by allison@blob.cat
2020-07-27T21:19:03.865377Z
0 likes, 0 repeats
@wolf480pl @newt @icedquinn @wyatwerp Honestly Itanium is fine so long as you have compilers that aren't total garbage for it. Unfortunately, for the longest time, gcc *was not* one of those
(DIR) Post #9xWWIG1vghMHtuwsGu by newt@stereophonic.space
2020-07-27T21:19:05.190579Z
0 likes, 0 repeats
@allison @icedquinn @wolf480pl @wyatwerp very special indeed
(DIR) Post #9xWWNE6WHA2RNLStBg by newt@stereophonic.space
2020-07-27T21:19:58.768193Z
0 likes, 0 repeats
@allison @wolf480pl @icedquinn @wyatwerp gcc still isn't one of those. Turns out, if you want to predict at compile time factors that only become available at run time, you're destined for utter failure.
(DIR) Post #9xWWUHyAIrbjOGvOyW by wolf480pl@mstdn.io
2020-07-27T21:21:10Z
0 likes, 0 repeats
@newt @allison @icedquinn @wyatwerp That'd mean Java would be faster than C on Itanium, if only someone made a good JIT
(DIR) Post #9xWWaqC1yPw9GqXCxE by newt@stereophonic.space
2020-07-27T21:22:26.410290Z
0 likes, 0 repeats
@wolf480pl @allison @icedquinn @wyatwerp no, it wouldn't.
(DIR) Post #9xWWecVcnAeHzNV9yy by allison@blob.cat
2020-07-27T21:23:05.808772Z
0 likes, 0 repeats
@newt @icedquinn @wolf480pl @wyatwerp IDK, Digital's lineage of compilers (among others) seemed to do fine with it.
(DIR) Post #9xWWjj63j7YRrsyie0 by wolf480pl@mstdn.io
2020-07-27T21:23:58Z
0 likes, 0 repeats
@newt @allison @icedquinn @wyatwerp Ok, what are those things that you can only predict at runtime? Cause conditional branches are definitely not one of them.
(DIR) Post #9xWWk4ilKPzWw0FQO0 by newt@stereophonic.space
2020-07-27T21:24:06.826680Z
0 likes, 0 repeats
@allison @icedquinn @wolf480pl @wyatwerp what does fine mean exactly? Because I don't remember Itanium outperforming comparably priced x86 like ever.
(DIR) Post #9xWWrcpKvBKMjuvHbk by wolf480pl@mstdn.io
2020-07-27T21:25:18Z
0 likes, 0 repeats
@newt @allison @icedquinn @wyatwerp I think it'd be fairer to normalize by TDP rather than price. Price depends on all kinds of things, like demand and economies of scale, that don't tell you anything about the design itself.
(DIR) Post #9xWWsvJZfb38MrgYqW by newt@stereophonic.space
2020-07-27T21:25:42.823702Z
0 likes, 0 repeats
@wolf480pl @allison @icedquinn @wyatwerp presence of data in CPU cache or registers (shadowed ones) vs having to wait for memory access, for example.
(DIR) Post #9xWWyarkbvwggVNJLc by allison@blob.cat
2020-07-27T21:26:43.195714Z
1 likes, 0 repeats
@newt @icedquinn @wolf480pl @wyatwerp I'd have to ask my one friend who administered Itanium systems, but the long and short of it was that there were workloads Itanium outperformed x86 considerably on, and certain strands of corporate clients were willing to pay top dollar for Itanium systems for that reason.
(DIR) Post #9xWWzeipl81qvxi7c0 by newt@stereophonic.space
2020-07-27T21:26:55.212615Z
0 likes, 0 repeats
@wolf480pl @allison @icedquinn @wyatwerp what's fair? When I consider which CPU to choose, performance to price ratio is one of the leading factors, as well as software availability. Itanium loses in both.
(DIR) Post #9xWX6amwmaoklEwTiq by newt@stereophonic.space
2020-07-27T21:28:08.941882Z
1 likes, 0 repeats
@allison @icedquinn @wolf480pl @wyatwerp I'd be really curious to read about that. Tag me when you do, please.
(DIR) Post #9xWXUtfGFn6QFnw7Sy by wolf480pl@mstdn.io
2020-07-27T21:32:29Z
0 likes, 0 repeats
@newt @icedquinn @wyatwerp That'd be the case if you were seriously considering buying an Itanium. Which IIRC neither I nor @allison suggested. I think we can all agree that buying an Itanium today as a daily driver would be a bad idea. What I'm more interested in at this point is whether the design was a dead end or not.
(DIR) Post #9xWYiEiHPFwZFoGzAW by newt@stereophonic.space
2020-07-27T21:46:11.487355Z
0 likes, 0 repeats
@wolf480pl @icedquinn @wyatwerp @allison Itanium failed for a whole plethora of reasons.
If you look at the history of Intel's CPU design attempts, they've tried to step away from x86 multiple times. First with the iAPX432, then the i860 and i960. Itanium afaik was their third attempt. They almost did it, but then it turned out that making a working compiler for it is actually nigh-impossible. And then amd64 happened. That was when Itanium became doomed to utter and complete failure, because it was just a lot cheaper to buy amd64 chips.
But that aside, Itaniums weren't very good at power conservation either. If you look up their TDP, it was quite high at the time.
https://en.wikipedia.org/wiki/List_of_Intel_Itanium_microprocessors
The first AMD64 chips were released in 2003 and had a slightly lower TDP, less than 90 watts, while providing clock rates at least twice as high: starting from 2200 MHz, vs Itanium 2 starting at 900 MHz and going up to 1600 MHz.
Not sure comparing CPU frequency is of any use though. I managed to find some articles claiming that an Itanium 2 at 1 GHz was about as fast as an Athlon 64 at 2.2 GHz, so we can pretty much assume they were on par with one another. But again, this was in 2003.
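The comparison above can be put into numbers. All figures here come from the post itself except the 130 W Itanium 2 TDP, which is an assumed round value, not taken from the linked list; "equal perf" assumes the 1 GHz Itanium 2 ≈ 2.2 GHz Athlon 64 claim holds.

```python
# Back-of-the-envelope on the 2003 figures quoted in the thread.
itanium_ghz, itanium_tdp_w = 1.0, 130.0   # TDP is an assumed illustrative value
athlon_ghz, athlon_tdp_w = 2.2, 89.0      # "less than 90 watts"

# If total throughput is roughly equal, Itanium 2 does more work per clock...
work_per_clock_ratio = athlon_ghz / itanium_ghz

# ...but at equal total performance, perf/watt favours whichever chip burns less.
perf_per_watt_ratio = itanium_tdp_w / athlon_tdp_w

print(f"Itanium 2 work per clock: ~{work_per_clock_ratio:.1f}x an Athlon 64's")
print(f"Athlon 64 perf per watt: ~{perf_per_watt_ratio:.2f}x an Itanium 2's")
```

So even granting the per-clock advantage, normalizing by TDP (as suggested later in the thread) still leaves the amd64 chip ahead under these assumed numbers.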
(DIR) Post #9xWZ1SpFA3mOWahNVx by icedquinn@blob.cat
2020-07-27T21:49:35.017515Z
0 likes, 0 repeats
@newt @wolf480pl @allison @wyatwerp we could always go back to Harvard chips :blobcatsurprised:
(DIR) Post #9xWZAa5PXFQPeLblei by wolf480pl@mstdn.io
2020-07-27T21:51:13Z
1 likes, 0 repeats
@icedquinn @allison @wyatwerp @newt AVR is still a thing. Also, the 8051 lives on in all sorts of embedded microcontrollers. Harvard chips are among us.
(DIR) Post #9xWZB8iwklFh4oMp05 by newt@stereophonic.space
2020-07-27T21:51:22.531133Z
0 likes, 0 repeats
@icedquinn @allison @wolf480pl @wyatwerp we actually already kinda have. Most operating systems have NX support enabled and enforced.
(DIR) Post #9xWZIgsMAyrB1WMwds by wolf480pl@mstdn.io
2020-07-27T21:52:42Z
0 likes, 0 repeats
@newt @allison @icedquinn @wyatwerp You can still mmap rwx though. Also, what good is OS-level NX if your CPU has Meltdown? :P
(DIR) Post #9xWZQ3KT5qI4gl1plg by newt@stereophonic.space
2020-07-27T21:54:06.527066Z
0 likes, 0 repeats
@wolf480pl @allison @icedquinn @wyatwerp what good is harvard architecture if I can't use it to watch porn?
(DIR) Post #9xWZZWQ6Isa5WDB7Xk by wolf480pl@mstdn.io
2020-07-27T21:55:45Z
0 likes, 0 repeats
@newt @allison @icedquinn @wyatwerp Yeah, comparing clocks is no use. And comparing instructions per second would make only slightly more sense. You could compare scores in PassMark, Prime95, HPL, etc., but these are very specific workloads, and vector instructions kinda let you cheat on those without improving general performance. And even then, it might be hard to find an Itanium to run a benchmark on. As for compilers, I wonder if LLVM would make it easier to make a good one for Itanium.
(DIR) Post #9xWZx9TzEfAPKVGPUe by wolf480pl@mstdn.io
2020-07-27T21:59:58Z
0 likes, 0 repeats
@newt @allison @icedquinn @wyatwerp do you even have shadowed registers in VLIWs? Anyway, in-order or out-of-order, there's no way you're going to hide the latency of an L2 miss. And on a VLIW, a load will always take at least as long as an L1 hit. So an easy thing to do would be to have a number of delay slots after a load (i.e. instructions which don't see the effect of the load) equal to the number of cycles it takes to hit L1. I wonder if hiding an L1 miss would be possible or worth trying.
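The delay-slot idea in this post can be sketched as a toy static scheduler: after a load, independent instructions fill the slots, and nops are inserted only when a consumer arrives before the assumed L1 latency has elapsed. The instruction format, registers, and 3-cycle latency are all made up for illustration.

```python
# Toy in-order scheduler: cover an assumed 3-cycle L1 hit latency after
# each load with independent instructions, emitting nops only as needed.
L1_LATENCY = 3  # cycles, assumed

def schedule(program):
    # program: list of (op, dest_reg, [source_regs]); simplified dependences
    out = []
    pending = {}  # reg -> cycle at which its load result becomes ready
    cycle = 0
    for op, dest, srcs in program:
        # stall (emit nops) until every source register is ready
        while any(pending.get(r, 0) > cycle for r in srcs):
            out.append("nop")
            cycle += 1
        out.append(op)
        if op.startswith("load"):
            pending[dest] = cycle + L1_LATENCY
        cycle += 1
    return out

prog = [
    ("load r1, [a]", "r1", []),
    ("add  r2, r3, r4", "r2", ["r3", "r4"]),  # independent: fills a delay slot
    ("mul  r5, r1, r2", "r5", ["r1", "r2"]),  # needs r1: may still stall
]
print(schedule(prog))
```

With only one independent instruction available, one nop still gets emitted before the `mul`; a compiler that finds more independent work would hide the whole latency, which is exactly why VLIW performance leans so hard on the compiler.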
(DIR) Post #9xWa2Zy4BwsCv5vB1k by wolf480pl@mstdn.io
2020-07-27T22:00:58Z
0 likes, 0 repeats
@newt @allison @icedquinn @wyatwerp now, if Itanium did not have delayed load instructions... well fuck
(DIR) Post #9xWa6wZI6d4wAjPiXA by newt@stereophonic.space
2020-07-27T22:01:51.766838Z
0 likes, 0 repeats
@wolf480pl @allison @icedquinn @wyatwerp yes, Itanium had register renaming
https://courses.cs.washington.edu/courses/csep548/06au/readings/itanium.pdf
(DIR) Post #9xWbMUMmpJuDIGXcYq by newt@stereophonic.space
2020-07-27T22:15:52.492215Z
0 likes, 0 repeats
@wolf480pl @allison @icedquinn @wyatwerp by the way, I forgot to mention another downside of VLIW architectures. They have a lower code density (number of instructions per unit of memory) when compared to hybrid CISC/RISC ones like modern x86_64 or ARM. That puts extra pressure on the memory bus, making things even slower unless you can hit a perfect zero cache miss rate for code.
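The density gap is easy to quantify for Itanium specifically: IA-64 packs three instruction slots into a fixed 128-bit bundle, so every instruction costs 16/3 bytes, while variable-length x86-64 encodings average roughly 3-4 bytes (the 3.8 figure below is an assumed typical value, and it varies by workload).

```python
# Bytes per instruction: fixed 128-bit IA-64 bundles vs variable x86-64.
ia64_bytes_per_insn = 16 / 3   # 3 slots per 128-bit bundle
x86_bytes_per_insn = 3.8       # assumed typical average for x86-64 code

ratio = ia64_bytes_per_insn / x86_bytes_per_insn
print(f"IA-64: {ia64_bytes_per_insn:.2f} B/insn, "
      f"x86-64: ~{x86_bytes_per_insn} B/insn, ratio ~{ratio:.2f}x")
```

And that understates it: when the compiler can't fill all three slots in a bundle, it pads with nops, so effective IA-64 density is often worse than the fixed 16/3.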
(DIR) Post #9xWgF8gPXwqNms2FFI by wyatwerp@fsmi.social
2020-07-27T23:07:09.055705Z
0 likes, 0 repeats
@newt @allison @icedquinn @wolf480pl this thread really took off. If only the people involved thought this much about driving trucks belching into the air, or spraying tons of chemicals on crops to be eaten. Literally none of the CPU vulnerabilities would need to exist if software (and protocols) were actually optimized instead of freeriding on Moore's law. Policy too - everyone should have been able to run their site accessible via IPv6.
(DIR) Post #9xWgFA0IdULZsqLdsu by icedquinn@blob.cat
2020-07-27T23:10:32.562683Z
1 likes, 0 repeats
@wyatwerp @newt @allison @wolf480pl i suspect some of the shenanigans have more to do with optimal multi-threading than Moore's law. out-of-order execution, for instance; Erlang derives its extreme mileage out of efficient micro-tasking around blocked jobs.
(DIR) Post #9xWgUDE2wKiQDexILQ by icedquinn@blob.cat
2020-07-27T23:13:17.114816Z
1 likes, 0 repeats
@wyatwerp @allison @newt @wolf480pl but you are mostly right.. complexity breeds bugs, and trying to jam a single chip with so much work breeds complexity. why i mentioned the Harvards is because of things like GreenArrays, where work can be split up into separate, truly parallel units that each just do a single simpler job. It doesn't help when you really do need to heave a bigger stone, but a lot of jobs are just dealing with a large amount of tiny stones, and it might be better to go that route.
(DIR) Post #9xWgXmApSaSLTg9LVo by icedquinn@blob.cat
2020-07-27T23:13:56.330805Z
1 likes, 0 repeats
@allison @newt @wolf480pl @wyatwerp when your tasklet gets its own core to itself some of these other complexities are moot. who cares if you can peep the cache when the cache is only storing yourself?
(DIR) Post #9xWgkBItquq2tOq9M8 by newt@stereophonic.space
2020-07-27T23:16:10.759344Z
2 likes, 0 repeats
@wyatwerp @allison @icedquinn @wolf480pl literally none of the CPU vulnerabilities would need to exist if physics and economics worked differently. But neither do.
Here's a fun fact for you. One of the reasons Pentium 4 was such a huge dumpster fire (metaphorically AND literally) was that up until about that point it was thought that the relationship between CPU frequency and heat was linear. I remember reading materials a few months ago saying that in the early 2000s Intel planned for a 10GHz CPU after a few die shrinks.
Then, after approaching around 2GHz, it turned out the graph is SUDDENLY quadratic. I found a pic for another CPU, but it's about the same for all silicon out there, so see below. Like, almost literally nobody planned for that. Pentium 4 was certainly not designed for that, and it caused many units to literally fry themselves.
So, basically, the last 15 years of CPU evolution were desperate attempts to get more performance while staying within the same range of frequencies. Introducing multiple cores and more dark silicon certainly helped.
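The scaling wall described here follows from the standard dynamic-power model P ≈ C·V²·f: since supply voltage has to rise roughly with frequency for timing closure, power grows roughly as f³. A quick sketch with illustrative numbers (the 2 GHz / 80 W design point is assumed, not a measured Pentium 4 figure):

```python
# Rough dynamic-power model: P ≈ C * V^2 * f with V scaled ~linearly with f,
# so P ~ f^3. The base design point below is illustrative only.

def dynamic_power(freq_ghz, base_freq=2.0, base_power_w=80.0):
    # normalise to a hypothetical 2 GHz / 80 W design point
    return base_power_w * (freq_ghz / base_freq) ** 3

for f in (2.0, 4.0, 10.0):
    print(f"{f:4.1f} GHz -> ~{dynamic_power(f):6.0f} W")
```

Under this model a 10 GHz part at the same design point would dissipate on the order of 10 kW, which is why more cores at modest clocks won out over ever-higher frequencies.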
(DIR) Post #9xWgwRbd4FOmvZRq40 by allison@blob.cat
2020-07-27T23:18:20.466368Z
0 likes, 0 repeats
@newt @wyatwerp @icedquinn @wolf480pl IIRC the Alpha's development roadmap was predicated on similar assumptions, but they were a tad less naive about it, and Intel buying out all of DEC's outstanding semiconductor assets made it a moot point anyway
(DIR) Post #9xWhKKDkB3HEyMfAwq by newt@stereophonic.space
2020-07-27T23:22:42.900095Z
1 likes, 0 repeats
@wyatwerp @allison @icedquinn @wolf480pl not quadratic, exponential. I gotta sleep.
(DIR) Post #9xWhYp3uJ4ZwgByVvc by wyatwerp@fsmi.social
2020-07-27T23:23:07.349577Z
1 likes, 0 repeats
@allison @newt @icedquinn @wolf480pl i was more thinking of Alan Kay's point that CPU, memory, etc. got a million times faster, but software only got a thousand times faster. Is the computer really getting 1000x more work done than it did 20-30 years ago? Why does OSX or Windows use 1G of RAM to do nothing? Android is getting there.
(DIR) Post #9xWhYpnHaMsIwvCkee by newt@stereophonic.space
2020-07-27T23:25:18.149884Z
0 likes, 0 repeats
@wyatwerp @allison @icedquinn @wolf480pl I'm sure, if you take software from Alan Kay's era, it most certainly would not be a million times faster. Otherwise we'd all use software from the 70s.
(DIR) Post #9xWhhqfpgS40Hh1C9Q by allison@blob.cat
2020-07-27T23:26:56.880303Z
1 likes, 0 repeats
@newt @wyatwerp @icedquinn @wolf480pl Yeah I think the Kay talking point is a *vast* oversimplification even if there are obviously performance issues and pain points with most (ostensibly) "modern" software.
(DIR) Post #9xWjV6StrOODP81Gfw by icedquinn@blob.cat
2020-07-27T23:47:02.730924Z
1 likes, 0 repeats
@newt @wyatwerp @allison @wolf480pl Alan Kay's Smalltalk is also kinda slow by design :blobcatsurprised: although i agree, kinda. Smalltalk-80 was slow in ways because polymorphic JITs didn't exist yet, but it also traded that for an exceptional amount of user customization (the Dynabook was supposed to be an extension of the self, much like an emacs lisp programmer's emacs.) it's concerning that we have more powerful and bloated runtimes now, but they don't even carry the upsides the dinosaurs had :blobcatwaitwhat:
(DIR) Post #9xWkgzQTxNPD0D7WPQ by newt@stereophonic.space
2020-07-28T00:00:25.339175Z
1 likes, 0 repeats
@icedquinn @allison @wolf480pl @wyatwerp smalltalk is still around. Pharo had a release earlier this year. Why don't you use it?
(DIR) Post #9xWlEbdIvRwF3T9Bj6 by icedquinn@blob.cat
2020-07-28T00:06:29.912484Z
1 likes, 0 repeats
@newt @allison @wolf480pl @wyatwerp cog is a very nice vm but it still doesn't really do threads :blobcatsad:
(DIR) Post #9xWlJ8SioDRG21MCkC by icedquinn@blob.cat
2020-07-28T00:07:19.165808Z
1 likes, 0 repeats
@newt @allison @wolf480pl @wyatwerp pharo had a thing to put optimized assembly in some slow spots, but that project was deprecated too.
(DIR) Post #9xWlgQXxjld8mQumOG by icedquinn@blob.cat
2020-07-28T00:11:31.173581Z
2 likes, 0 repeats
@newt @allison @wolf480pl @wyatwerp when i poked around with squeak and pharo, i also ran into things like getting it to communicate with native libs. i think there is a cffi system in pharo, so i could go sit and do up a wrapper for gtk and force it to use a native interface (pharoers don't acknowledge native interfaces as a thing, and unfortunately, clients who pay for software do) or use VM primitives. VM primitives are a little bongos to get working since you have to build the VM from an existing VM, and i seem to recall cog even has versioning issues where you basically have to have a "vm builder vm" to compile a new runtime that supports your new prims. although the cffi isn't nearly as bad.
(DIR) Post #9xWlmoCigzvztE1PG4 by icedquinn@blob.cat
2020-07-28T00:12:40.848574Z
1 likes, 0 repeats
@allison @newt @wolf480pl @wyatwerp it was a quite pleasant system to use when i wasn't fighting its limitations though.