[HN Gopher] Doom Running on an IKEA Lamp [video]
___________________________________________________________________
Doom Running on an IKEA Lamp [video]
Author : kregasaurusrex
Score : 934 points
Date : 2021-06-14 02:55 UTC (20 hours ago)
(HTM) web link (www.youtube.com)
(TXT) w3m dump (www.youtube.com)
| RosanaAnaDana wrote:
| We were so busy wondering if it could be done, we never stopped
| to ask if it should be done. Will the future forgive us?
| swiley wrote:
| You can get absolutely tiny chips (maybe 4x the area of the
| shadow of a ball-point pen ball) that can run Linux for ~$1.
| Computers in IoT cost nothing unless you need to do graphics or
| run Electron/ML.
| intricatedetail wrote:
| This is what you have to do if you run out of chips.
| ravenstine wrote:
| Wow, it runs even better on that lamp than it did when I
| installed it on my iPod! (using Linux)
| teekert wrote:
| We should all take some time to consider that our light bulbs have
| more powerful computers than the first computer many of us once
| owned.
|
| This perspective makes sci-fi stuff like "smart dust" seem a lot
| more feasible. Ubiquitous computing, what will it bring us?
| m_st wrote:
| While I'm a great fan of ubiquitous tech/computing in general,
| I must also say that it feels weird applying firmware updates
| to light bulbs. However, you want to stay safe these days, so
| it's better to keep up to date, right?
| flyinghamster wrote:
| Unfortunately, my experience has been that Ikea will push out
| a firmware update, and then I don't discover it until the
| outside lights fail to turn on or off at their appointed time
| and have to be rebooted. Yes, we live in an age when you can
| reboot a light bulb.
|
| Very much to their credit, though, the Tradfri hub _doesn't_
| depend on a cloud service just to operate. If that ever
| happens, thus endeth my foray into smart lighting. I've put
| my foot down: if it needs Somebody Else's Computer to
| function, I don't want it.
| brainless wrote:
| Thank you, I think I am just gonna quit my work now and spend
| the day walking around, thinking about it, and being anxious too.
| Sometimes it takes a while to understand how far we have come
| with miniaturization of tech.
| tomxor wrote:
| > our light bulbs have more powerful computers than the first
| computer many of us once owned
|
| Mine don't, and I first owned an Atari ST with a 68000 CPU.
|
| These are "smart bulbs". We are still in the .com bubble of
| IoT, so there are going to be a lot of silly things we can run
| Doom on for a while until it dies down. Lights don't need
| computers to operate, but that doesn't stop people trying to
| add "features" to lights with computers.
| lupire wrote:
| Computers have lights, why shouldn't lights have computers?
| nabla9 wrote:
| In 2018 IBM already demoed a 1mm x 1mm computer-on-a-chip
| concept for crypto anchors that can almost run Doom.
| simias wrote:
| A problem with this from my point of view is that while
| hardware engineers did an incredible job increasing the
| processing power of our thinking rocks, we software devs did a
| tremendous job of squandering 90% of it away. Of course there
| are also market incentives for doing so (time to market, dev
| costs etc...).
|
| Empirically it seems that software simply doesn't scale as well
| as hardware does. I feel like this overhead would make "smart
| dust" impractical.
|
| Or I guess I could put it this way: on the one hand you could be
| impressed that a modern light bulb can run Doom, on the other
| you could be alarmed that you need a Doom-capable computer to
| run a modern light bulb.
| dwild wrote:
| > you could be alarmed that you need a Doom-capable computer
| to run a modern light bulb.
|
| An ESP8266 microcontroller can be bought in low quantity for
| less than a dollar. I mean, sure, any cost reduction at scale
| is meaningful, but I don't think the silicon is the expensive
| part at that point. It just doesn't make sense to give WiFi
| devices anything less than that performance; the gains in
| silicon space will be meaningless and you'll spend more
| managing that than anything.
| Narishma wrote:
| Software seems to scale in the other direction. The faster
| the hardware you give developers, the slower the software
| they produce becomes.
| psyc wrote:
| What doesn't scale well is John Carmack (and those of similar
| devotion).
|
| And yes, I'm aware that there are also those with the chops,
| who are not permitted by their money masters (or alternately
| by their master, money) to write performant software.
| hasmanean wrote:
| Yeah, Doom advanced the state of the art by maybe a
| decade(?). Instead of needing a Silicon Graphics
| workstation or a graphics accelerator, it allowed a
| generation of kids to play on any hardware.
|
| If you want to know what the world would be like without
| video game programmers, just look at internal corporate
| software and how slow it is.
|
| How many other advances are we tossing away because people
| don't know how to optimize and code for speed and fun!
| bottled_poe wrote:
| Those "money masters" are responding to market conditions.
| If the market demanded greater efficiency (e.g. through
| climate policy), we would quickly see a change in
| priorities.
| Dah00n wrote:
| So basically we need a climate tax on software to fix the
| problem. Putting the tax directly on energy would not
| cause much optimization in software, in my opinion. I
| don't believe software development has a culture that can
| take responsibility for its actions, neither in energy
| usage nor in security, which all leads back to
| programmers not being an engineer kind of worker but more
| like an author/writer. Hardware engineers, on the other
| hand, can and often do take responsibility. All in all, I
| don't have any hope of software developers being up to
| the task if it landed in their lap to fix this, so if we
| wanted their hand forced the tax needs to be directly on
| software instead of hardware or energy. I don't believe
| this is mainly market driven, as the market is unlikely to
| be able to fix it. It's at least as much a culture
| problem.
| kilroy123 wrote:
| My guess is, that in the future, we'll have computers writing
| some highly optimized software for certain things. I'm not
| saying all of us software people will be replaced 100%, but
| some stuff will be replaced by automation.
|
| That's my prediction.
| KMag wrote:
| Will the specifications for the software also be machine-
| generated?
|
| If the specifications are human-generated, then they're
| just a form of high-level source code, and your prediction
| boils down to future programming languages simultaneously
| improving programmer productivity and reducing resource
| usage. That's not a controversial prediction.
|
| If I understand you correctly, I think you're correct that
| over time, we'll see an increase at the abstraction level
| at which most programming is done. I think the effort put
| into making compilers better at optimizing will largely
| follow market demand, which is a bit harder to predict.
|
| One interesting direction is the Halide[0] domain-specific
| language for image/matrix/tensor transformations. The
| programs have 2 parts: a high-level description, and a set
| of program transformations that don't affect results, but
| make performance tradeoffs to tune the generated code for
| particular devices. The Halide site has links to some
| papers on applying machine learning to the tuning and
| optimization side of things.
|
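| As a toy sketch of that two-part idea (plain Python, not
| Halide's actual syntax): the "algorithm" fixes what each
| output pixel is, while two interchangeable "schedules"
| change only the traversal order, never the result.
|
|     def blur_at(img, x):
|         # Algorithm: what each output pixel is (1D box blur).
|         lo, hi = max(x - 1, 0), min(x + 1, len(img) - 1)
|         return sum(img[lo:hi + 1]) / (hi - lo + 1)
|
|     def schedule_naive(img):
|         # Schedule 1: evaluate left to right.
|         return [blur_at(img, x) for x in range(len(img))]
|
|     def schedule_tiled(img, tile=4):
|         # Schedule 2: evaluate in tiles for locality; the
|         # performance changes, the output does not.
|         out = [0.0] * len(img)
|         for t in range(0, len(img), tile):
|             for x in range(t, min(t + tile, len(img))):
|                 out[x] = blur_at(img, x)
|         return out
|
|     img = list(range(16))
|     assert schedule_naive(img) == schedule_tiled(img)
|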
| I can imagine a more general purpose language along these
| lines, maybe in the form of a bunch of declarative rules
| that are semantically (though perhaps not syntactically)
| Prolog-like, plus a bunch of transformations that are
| effectively very high-level optimization passes before the
| compiler ever starts looking at traditional inlining, code
| motion, etc. optimizations.
|
| At some point, maybe most programmers will just be writing
| machine learning objective functions, but at present, we
| don't have good engineering practice for writing safe and
| reliable objective functions. Given some of the degenerate
| examples of machine learning generating out-of-the-box
| solutions with objective functions (throwing pancakes to
| maximize the time before they hit the ground, tall robots
| that fall over to get their center of mass moving quickly,
| etc.), we're a long way from just handing a machine broad
| objectives and giving it broad leeway to write whatever
| code it deems best.
|
| I suspect in the medium-term, we'll see a 3-way divergence
| in programming: (1) safety/security-critical programs
| generated from proofs (see Curry-Howard correspondence, and
| how the seL4 microkernel was developed) (2) performance-
| critical programs that are very intensive in terms of human
| expert time and (3) lots of cookie-cutter apps and websites
| being generated via machine learning from vague human-
| provided (under-)specifications.
|
| [0] https://halide-lang.org/
| airbreather wrote:
| An article yesterday says Google uses AI to design chips
| in 6 hours; sounds like "a long way" is now yesterday.
| Blikkentrekker wrote:
| That is already done; such software is called a compiler.
|
| There is no reason to optimize the language that
| programmers work in when such optimizations can better be
| done on the generated machine code.
| [deleted]
| [deleted]
| Blikkentrekker wrote:
| Reading up on some of the absolutely ugly and unmaintainable
| hacks that software writers had to rely upon two decades back
| to fit things into the hardware, I am honestly quite glad the
| era of that type of programming is now behind us.
|
| It was certainly impressive, but it was also often a case of
| design having to give way to such hacks, such as the fact
| that many older first-person shooter games had to have their
| entire level design work around the idea that the engines
| could not support two walkable surfaces vertically above each
| other for performance reasons, or the famous _Duke Nukem 3D_
| "mirror" hack.
| bottled_poe wrote:
| In the scheme of things, this is short term. Market
| incentives for pushing performance are currently minor, but
| will have increasing influence over the next decade. Factors
| such as processing power hitting physical limits and energy
| prices as a result of climate policy will force engineers to
| build more efficient systems.
| limaoscarjuliet wrote:
| I do not think we will see "processing power hitting
| physical limits" anytime soon. Moore's Law is not dead yet,
| and it is a good question whether it ever will be. As Jim
| Keller says, the only thing that is certain is that the
| number of people saying Moore's Law is dead doubles every
| 18 months.
|
| https://eecs.berkeley.edu/research/colloquium/190918
| Sohcahtoa82 wrote:
| Eh...for single-threaded processing, I'd say Moore's Law
| is dead and has been dead for more than a couple CPU
| generations now.
|
| What we're seeing now is massive parallelism, which means
| if your task is embarrassingly parallel, then Moore's Law
| is very much alive and well. Otherwise, no.
| jerf wrote:
| Yes, it has to die. In this universe things can only grow
| indefinitely by less than an n^3 factor, because that's
| as fast as the lightcone of any event grows. Exponential
| growth, no matter how small the factor, will eventually
| outgrow n^3.
|
| Once we attain the limits of what we can do in 2
| dimensions, we aren't that many exponential growth events
| from what we can achieve in 3. Or once we achieve the
| limits of silicon technology, we aren't that many
| exponential growth events from the limits of photonics or
| quantum or any other possible computing substrate. Unless
| we somehow unlock the secret of how to use things smaller
| than atoms to compute, and can keep shrinking those,
| we're not getting very much farther on a smooth
| exponential curve.
| squeaky-clean wrote:
| Sure, it has to die eventually, but the key phrase is
| anytime soon. But do we have any evidence it will during
| our lifetime? Or even our great-great-great-great-great-
| great-grand-child's lifetime?
| papito wrote:
| This guy gets it.
| hasmanean wrote:
| Someone should remake Doom using today's "best practices"
| coding standards and see how much performance it gives up.
| otabdeveloper4 wrote:
| > using today's "best practices"
|
| That would be WebGL and Javascript, I presume?
|
| I tried running a Quake port in that vein, but sadly none
| of the computers I own were able to play it without
| stuttering.
| Sohcahtoa82 wrote:
| I'd like to see the DOOM engine written entirely in
| Python without using any GPU rendering besides the frame
| buffer.
|
| DOOM ran in 320x200 at 30 fps on a 33 MHz CPU, which
| gives it less than 18 clock cycles per pixel rendered. I
| doubt Python could get anywhere close to that.
| vsareto wrote:
| Smart dust will probably be terrible for human respiratory
| systems anyway
| IgorPartola wrote:
| Not if it can work its way out of your lungs, being smart
| as it is.
|
| One of my favorite dad jokes is that if Smart Water was so
| smart, why did it get trapped in a bottle?
| noir_lord wrote:
| The problem is that the extreme effort to optimise hardware
| pays off for _everyone_, the extreme effort to optimise
| "random software project" rarely does (unless random software
| project is a compiler/kernel of course).
|
| So the RoI is just different.
| neolog wrote:
| Browsers too
| funcDropShadow wrote:
| And they contain a compiler and almost an operating
| system ;-)
| noir_lord wrote:
| Indeed, browsers are a good example of ubiquitous
| software.
| TheBigSalad wrote:
| Go back to using the software of the 80s then, before the
| evil software engineers made it all so bad.
| N00bN00b wrote:
| >A problem with this from my point of view is that while
| hardware engineers did an incredible job increasing the
| processing power of our thinking rocks, we software devs did
| a tremendous job of squandering 90% of it away.
|
| That works both ways though. The highly qualified software
| devs did indeed squander some of it away.
|
| But I'm a rather bad dev that writes really inefficient code
| (because it's not my primary concern, I'm not a programmer, I
| just need custom software that does the things I need done
| that can't be done by software other people write).
|
| All this overpowered hardware allows my code to work very
| well.
|
| I've been in situations where I could pick between "learn to
| program properly and optimize my code" or "throw more
| hardware at it" and throwing more hardware at it was
| definitely the faster and more efficient approach in my case.
| pwagland wrote:
| Well, yes and no... as always!
|
| The trick is that you probably don't need all, or even most,
| of that power to run the light. Sure the Zigbee protocol is
| _probably_ being done via software, and not a dedicated chip,
| but even then. The big thing is that this chip is most likely
| so cheap, especially in bulk, that it doesn't make sense to
| get the "cheaper" variant, even if that was still available.
| This is kind of supported by the "new" Tradfri having an
| updated chip: even though the Tradfri line never changed its
| capabilities, it was probably cheaper to get the new, more
| powerful chip, and/or they could no longer get the old one
| with a five-year supply guarantee.
| a10c wrote:
| In a similar vein, from memory the i3, i5 & i7 chips are
| absolutely identical in every physical way from a
| manufacturing point of view, except that the less powerful
| chips have cores / features disabled.
| sly010 wrote:
| No manufacturing process is perfect, so you just sort,
| label and price the output accordingly. This is fairly
| normal practice. LEDs, vegetables, even eggs...
| pc86 wrote:
| I have to wonder why this is done. I know it must make
| sense or it wouldn't be done, I just don't understand it.
|
| If you're intentionally disabling functionality in order
| to sell at a lower cost, you're not actually saving any
| money because you still have to manufacture the thing. It
| also (I assume) opens up a risk to someone finding out
| how to "jailbreak" the extra cores and now you can buy an
| i7 for the price of an i3. Is the cost of having three
| different manufacturing processes so large that it's not
| worth switching? Is the extra revenue from having three
| different versions of the same physical chip enough to
| justify the jailbreak risk?
| rickdeckard wrote:
| This is done because your production yield is not 100%.
| So instead of throwing away every produced component
| which doesn't achieve the target of your 100%-product,
| you "soft-lock" the components with 80~99% performance
| into a 80%-product category, and the ones with 60~80%
| into a 60%-product. This way you increase the total
| yield-rate, and produce less waste. The counter-intuitive
| waste happens when the demand for the 60%-product is
| exceeding your "natural" supply of 60%-output, so you
| have to start to "soft-lock" some of your "80%-product"
| production to the 60%-grade to fulfill demand...
| zingar wrote:
| But do these production defects really meet the demand of
| the lower tiers? Also how is it possible to predict the
| number of defects in advance so that they can make useful
| promises to distributors?
| Pet_Ant wrote:
| Well, eventually, as yields improve, you start handicapping
| perfectly valid chips to maintain market segmentation.
|
| I cannot say this for certain in CPUs but I know in other
| electronics with PCBs that this is how it is done.
| Sometimes lower-end SKUs are made by opening a higher-end
| one and cutting a wire or a trace.
| mekkkkkk wrote:
| I'm curious about this as well. It seems inevitable that
| some batches will be "too good" to satisfy demand of low
| end chips.
|
| Either they just accept the fluctuations in order to
| maximize output of high end chips, or they would have to
| cripple fully functional ones to maintain a predictable
| supply. Interesting business.
| BeeOnRope wrote:
| It's not _primarily_ about using defective chips (but
| that's a nice side effect). As a process becomes mature,
| yield rates become very high and there wouldn't be enough
| defective chips to meet demand for a lower tier, so good
| chips are binned into those tiers anyway.
|
| The primary purpose is market segmentation: extracting
| value from customers who would pay more while not giving
| up sales to more price sensitive clients who nevertheless
| pay more than the marginal cost of production.
| mekkkkkk wrote:
| That makes sense, thanks. I wonder if it would be
| possible to de-bin one of the lower end ones, assuming it
| is a binned version of a fully functional higher tier
| chip. Or perhaps they completely destroy the offlined
| cores/features.
| HeavyStorm wrote:
| Are you guys sure? I think manufacturing has nothing to
| do with it.
|
| The real reason IMHO, is to have a larger range of
| product prices so you can cater for specific audiences.
|
| It seems people are confusing cost with price. Those two
| things are orthogonal.
| Arrath wrote:
| This tends to be the case later on in a product's
| production run, as the manufacturer has fine tuned the
| process and worked out most of the kinks, the pass-rate
| of finished items increases.
|
| At this point, yes they may lock down perfectly good high
| end CPUs to a midrange model spec to meet a production
| quota.
| lordnacho wrote:
| I think if you Google price discrimination or similar
| economic terms you'll get some explanations for this.
|
| If you just have one price, you cut out people who can't
| afford it and people who can afford to pay more get away
| with more of the surplus.
|
| If you have several prices and create just enough
| difference in the product that it doesn't change the
| expense much, you can suck dry every class of user.
|
| Bit of an MBA trick.
| lupire wrote:
| "suck dry" is excessive editorializing for a practice
| that make sit possible for a market to exist without,
| well, "sucking the manufacturer dry".
| boygobbo wrote:
| It's called 'market segmentation'. It's why there are
| different brands of soap powder from the same
| manufacturer even though they are all essentially the
| same.
| HeavyStorm wrote:
| Yep. I don't think it has anything to do with manufacture
| issues.
| HumblyTossed wrote:
| > I don't think...
|
| Instead of assuming, it's easy enough to confirm that CPU
| binning is real.
| MontyCarloHall wrote:
| This is done with cores/memory banks that didn't pass QC.
| For example, a 6 core CPU and an 8 core CPU might have
| the same die as a 12 core CPU, but 6/4 cores,
| respectively, were defective, so they get disabled. I
| don't think they're crippling fully functional hardware.
|
| See here: http://isca09.cs.columbia.edu/pres/09.pdf
|
| Also here: https://www.anandtech.com/show/2721
|
| "When AMD produces a Phenom II die if part of the L3 is
| bad, it gets disabled and is sold as an 800 series chip.
| If one of the cores is bad, it gets disabled and is sold
| as a 700 series chip. If everything is in working order,
| then we've got a 900."
| flyinghamster wrote:
| Specifically regarding Phenom II, I have a 550 Black
| Edition still plugging away, serving different roles over
| the years, and I was able to successfully unlock and run
| the two locked-out cores (via a BIOS option). It's never
| skipped a beat at stock clock. It could be that there was
| an oversupply of quad-cores, or perhaps (since it was a
| Black Edition part marketed to overclockers) the extra
| cores failed when overclocked. I know I wasn't able to
| have both overclock _and_ four cores, but I considered
| the extra cores more important, since it was already a
| reasonably fast chip for its day.
| SAI_Peregrinus wrote:
| It's likely the latter (that it couldn't work when
| overclocked with all cores). The market for those is to
| allow overclocking, so if it can't do _any_ overclocking
| with all cores, AMD likely wouldn't want to sell it as a
| 4-core Black Edition, since it'd probably just get
| returned.
| gurkendoktor wrote:
| > ...but 6/4 cores, respectively, were defective, so they
| get disabled. I don't think they're crippling fully
| functional hardware.
|
| Hmm, but what if 3 cores are defective? If that can
| happen(?), then it seems one extra functional core is
| disabled to get to an even core number.
|
| Apple's M1 GPUs are the first where I've seen the choice
| between 7 and 8 cores (as opposed to 6/8 or 4/8).
| simondotau wrote:
| I imagine there is some trade off to be made between
| increasingly surgical disabling of components and
| avoiding a menagerie of franken-SKUs. Presumably the
| fault rate is low enough that tolerating a single GPU
| core drop takes care of enough imperfect parts.
|
| Perhaps there is fault tolerance hidden elsewhere, e.g.
| the neural engine might have 17 physical cores and one is
| always disabled. Although this seems unlikely as it would
| probably waste more silicon than it would save.
| geoduck14 wrote:
| > Hmm, but what if 3 cores are defective?
|
| It gets sold as a coaster
| Sohcahtoa82 wrote:
| More likely, a keychain:
|
| https://www.amazon.com/Keychain-Ryzen-Threadripper-
| Computer-...
|
| https://www.amazon.com/Keychain-Intel-Core-Computer-
| Chain/dp...
| islon wrote:
| A coaster that allows you to play Doom.
| Peaches4Rent wrote:
| That's because there is a huge variance in the quality of
| the chips produced, since the process isn't 100% precise.
|
| So the best chips, which have the fewest errors from the
| manufacturing process, are sold as top tier. The ones
| which have more mistakes in them get their defective
| parts disabled and then get sold as lower-tier ones.
| jffry wrote:
| The term for this is "binning", and the explanation is
| wholly innocent. Manufacturing silicon chips is not an
| exact process, and there will be some rate of defects.
|
| After manufacture, they test the individual components of
| their chips. These chips are designed in such a way that
| once they identify parts of a chip that are defective,
| they can disconnect that part of the chip and the others
| still work. (I believe they physically cut stuff with
| lasers but my knowledge is out of date). This process can
| also include "burning in" information on the chip
| itself, like setting bits in on-die ROMs, so that if your
| OS asks your CPU for its model number it can respond
| appropriately.
|
| Interesting side note: The same thing happens when
| manufacturing even basic electronic components like
| resistors. All the resistors that are within 1% of the
| target resistance get sold as "+-1%" resistors, which
| means it's pretty likely that if you buy the cheaper
| "+-5%" resistors and test them, you'll find two clusters
| around -5% and +5% and very few at the target value.
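|
| A toy simulation of that selection effect in Python
| (assuming, for illustration, a roughly Gaussian spread
| around the nominal value):
|
|     import random
|
|     random.seed(1)
|     nominal = 1000.0  # 1 kOhm
|     batch = [random.gauss(nominal, nominal * 0.03)
|              for _ in range(10_000)]
|
|     # Parts within 1% get pulled out and sold as 1% parts...
|     one_pct = [r for r in batch
|                if abs(r - nominal) <= nominal * 0.01]
|     # ...so the 5% bin is what's left over: two clusters,
|     # with a hole around the nominal value.
|     five_pct = [r for r in batch
|                 if nominal * 0.01 < abs(r - nominal)
|                 <= nominal * 0.05]
|
|     print(len(one_pct), len(five_pct))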
| magicalhippo wrote:
| > The same thing happens when manufacturing even basic
| electronic components like resistors.
|
| EEVblog did some tests[1][2] some time ago on +-1%
| resistors, and found that while his samples were fairly
| Gaussian and within the spec, the ones from his second
| batch were consistently low. That is, none were above the
| rated value.
|
| So yeah, don't assume a perfect Gaussian distribution
| when using resistors.
|
| [1]: https://www.youtube.com/watch?v=1WAhTdWErrU
|
| [2]: https://www.youtube.com/watch?v=kSmiDzbVt_U
| candu wrote:
| I'll choose instead to be amazed that Doom-capable computers
| are now inexpensive and ubiquitous enough that it makes total
| financial sense to use one in a light bulb!
|
| More seriously, I see this argument all the time: that we are
| just squandering our advances in hardware by making
| comparably more inefficient software! Considering that
| efficiency used to be thought of on the level of minimizing
| drum rotations and such: the whole point is that we're now
| working at a much higher level of abstraction, and so we're
| able to build things that _would not have been possible to
| build before_. I for one am extremely grateful that I don't
| have to think about the speed of a drum rotating, or build
| web applications as spiders' nests of CGI scripts.
|
| Are there modern websites and applications that are
| needlessly bloated, slow, and inefficient? Certainly - but
| even those would have been _impossible_ to build a few
| decades ago, and I think we shouldn't lose sight of that.
| curtis3389 wrote:
| I get your point, but putting these 2 thoughts together:
|
| > we are just squandering our advances in hardware by
| making comparably more inefficient software
|
| > we're able to build things that would not have been
| possible to build before
|
| We get that not only are we able to build things that
| weren't possible before, but we can build things that are
| more inefficient than was possible before.
|
| We can expect in the future to see new levels of
| inefficiencies as hardware developments give us more to
| waste.
|
| Without something to balance this out, we should expect to
| see our text editors get more and more bloated in cool and
| innovative ways in the future.
|
| It makes me think of fuel efficiency standards in cars.
| Sohcahtoa82 wrote:
| I dunno...this attitude scares me a bit, that you would
| just shrug away wasted CPU cycles and accept the low
| performance.
|
| CPUs are getting faster, and yet paradoxically, performance
| is worse, especially in the world of web browsers.
|
| The original DOOM ran at 30 fps in 320x200, which meant it
| rendered 1,920,000 pixels per second with only a 33 MHz
| CPU. That's less than 18 clock cycles per pixel, and even
| that's assuming no CPU time spent on game logic. If DOOM
| were written today with a software renderer written in C#,
| Python, or JS, I'd be surprised if it could get anywhere
| near that level of clocks/pixel.
|
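| For anyone checking the arithmetic:
|
|     pixels_per_sec = 320 * 200 * 30      # 1,920,000
|     cycles_per_pixel = 33_000_000 / pixels_per_sec
|     print(cycles_per_pixel)              # ~17.2, i.e. < 18
|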
| These days, the basic Windows Calculator consumes more RAM
| than Windows 98, and that's just inexcusable.
| derefr wrote:
| What's "low performance"? Humans measure tasks on human
| timescales. If you ask an embedded computer to do
| something, and it finishes doing that something in 100ms
| vs 10ms vs 1us, it literally _doesn't matter_ which one
| of those timescales it happened on, because those are
| _all_ below the threshold of human latency-awareness. If
| it isn't doing the thing a million times in a loop
| (where we'd start to take notice of the speed at which
| it's doing it), why would anyone ever optimize anything
| past that threshold of human awareness?
|
| Also keep in mind that the smaller chips get, the more
| power-efficient they become; so it can actually cost less
| in terms of both wall-clock time _and_ watt-hours
| consumed, to execute a _billion_ instructions on a modern
| device, than it did to execute a _thousand_ instructions
| on a 1990s device. No matter how inefficient the
| software, hardware is _just that good_.
|
| > These days, the basic Windows Calculator consumes more
| RAM than Windows 98
|
| The Windows Calculator loads a large framework (UWP) that
| gets shared by anything else that loads that same
| framework. That's 99% of its resident size. (One might
| liken this to DOS applications depending on DOS -- you
| wouldn't consider this to be part of the app's working-
| set size, would you?)
|
| Also, it supports things Windows 98 didn't (anywhere, not
| just in its calculator), like runtime-dynamically-
| switchable numeric-format i18n, theming (dark mode
| transition!) and DPI (dragging the window from your hi-
| DPI laptop to a low-DPI external monitor); and extensive
| accessibility + IME input.
| IncRnd wrote:
| That's well and good - when your program is the only
| software running, such as an a dedicated SBC. You can
| carefully and completely manage the cycles in such a
| case. Very few people would claim software bloat doesn't
| otherwise affect people. Heck the software developers of
| that same embedded software wish their tools were faster.
|
| > No matter how inefficient the software, hardware is
| just that good.
|
| Hardware is amazing. Yet, software keeps eating all the
| hardware placed in front of it.
| derefr wrote:
| I mean, I agree, but the argument here was _specifically_
| about whether you're "wasting" a powerful CPU by putting
| it in the role of an embedded microcontroller, if the
| powerful CPU is only 'needed' because of software bloat,
| and you could theoretically get away with a much-less-
| powerful microcontroller if you wrote lower-level,
| tighter code.
|
| And my point was that, by every measure, there's no point
| to worrying about this particular distinction: the more-
| powerful CPU + the more-bloated code has the same BOM
| cost, the same wattage, the same latency, etc. as the
| microcontroller + less-bloated code. (Plus, the platform
| SDK for the more-powerful CPU is likely a more
| modern/high-level one, and so has lower CapEx in
| developer-time required to build it.) So who cares?
|
| Apps running on multitasking OSes _should_ indeed be more
| optimized -- if nothing else, for the sake of being able
| to run more apps at once. But keep in mind that
| "embedded software engineer" and "application software
| engineer" are different disciplines. Being cross that
| _application_ software engineers should be doing
| something but aren't, shouldn't translate to a whole-
| industry condemnation of bloat, when other verticals
| don't have those same concerns/requirements. It's like
| demanding the same change of both civil and automotive
| engineers -- there's almost nothing in common between
| their requirements.
| smoldesu wrote:
| I think the other comment has a point though: these
| frameworks are definitely powerful, but they have no
| right to be as large as they actually are. Nowadays,
| we're blowing people's minds by showing 10x or 100x
| speedups in code by rewriting portions in lower-level
| languages; and we're still not even close to how
| optimized things used to be.
|
| I think the more amicable solution here is to just have
| higher standards. I might not have given up on Windows
| (and UWP) if it didn't have such a big overhead. My
| Windows PC would idle using 3 or 4 gigs of memory: my
| Linux box struggles to break 1.
| derefr wrote:
| Have you tried to load UWP apps on a machine with less
| memory? I believe that part of what's going on there is
| framework-level shared, memory-pressure reclaimable
| caching.
|
| On a machine that doesn't _have_ as much memory, the
| frameworks don't "use" as much memory. (I would note
| that Windows IoT Core has a minimum spec of _256MB of
| RAM_, and runs [headless] UWP apps just fine! Which in
| turn goes up to only 512MB RAM for GUI UWP apps.)
|
| Really, it's better to not think of reclaimable memory as
| being "in use" at all. It's just like memory that the OS
| kernel is using for disk-page caching; it's different in
| kind to "reserved" memory, in that it can all be
| discarded at a moment's notice if another app actually
| tries to malloc(2) that memory for its stack/heap.
| simias wrote:
| I think I would be more willing to embrace this sort of
| tech if the computing resources were easily accessible to
| hack on.
|
| If I could easily upload my code to this smart bulb and
| leverage it either for creative or practical endeavors then
| I wouldn't necessarily consider it wasted potential.
|
| But here you have this bloated tech that you can't even
| easily leverage to your advantage.
|
| I do agree with the general point that the progress we've
| made over the past few decades is mind blowing, and we
| shouldn't forget how lucky we are to experience it first
| hand. We're at a key moment of the evolution of humankind,
| for better or worse.
| 2OEH8eoCRo0 wrote:
| I'm tired of the bloated software take. Hardware is meant to
| be used. Without these abstractions most software would be
| practically impossible to create. Without software solving
| more problems, what's the point of the hardware?
| hasmanean wrote:
| How's this take:
|
| What is the minimal computer you can both _compile_ and run
| Doom on?
| ant6n wrote:
| I don't think the point of N GHz + 8 GB RAM hardware is for
| me to sit and stare at a spinning mouse pointer while
| waiting for Explorer to change to another directory.
| 2OEH8eoCRo0 wrote:
| I dislike Nautilus too
| funcDropShadow wrote:
| Of course good abstractions and tools can help make
| software possible that was practically impossible before.
| But there is also a tendency to add abstraction layers of
| questionable value. An Electron-based UI to copy a disk
| image to a USB stick comes to mind, e.g. Certainly it
| is possible to create a GUI for a file-to-disk copy
| operation without two JavaScript engines, an HTML/CSS
| renderer, lots of network code, etc. This is just a silly
| example, I know. But this happens all the time. That
| phenomenon isn't even new. Anybody remember when the first
| end-user internet providers would all distribute their own
| software to dial in? In my experience, most problems with
| internet access at that time could be fixed by getting
| rid of that software and entering the phone number and
| dial-in credentials in the corresponding Windows dialog
| windows.
| 2OEH8eoCRo0 wrote:
| > An Electron-based UI to copy a disk image to a USB
| stick comes to mind
|
| Subjective. Questionable to you. Nobody is bloating dd
|
| There is definitely bloated software, but it's not a huge
| issue. If it were, then the customer would care. If the
| customer cared, the business would care.
| Nextgrid wrote:
| > Ubiquitous computing, what will it bring us?
|
| Ads, obviously.
| handrous wrote:
| Ubiquitous spyware in everything we interact with, in order
| to make ads, on average, 5% more efficient--juuuuust enough
| more efficient that having a stream of massive-scale spyware
| data is necessary to compete in the ad sales market. Totally
| a good trade-off, making all the world's computing
| adversarial so ads work slightly better.
| SyzygistSix wrote:
| When adblocking software becomes indistinguishable from
| stealth technology.
| Angostura wrote:
| I always like the fact that the average musical birthday card,
| popular in the 90s, had more compute capacity than the computer
| in the Apollo command module.
| formerly_proven wrote:
| Those ICs don't have any compute capacity at all. They're an
| analog oscillator driving an address counter connected to an
| OTP ROM whose data pins go into a DAC.
| rrrazdan wrote:
| Citation please? That sounds unlikely!
| notwedtm wrote:
| This reminds me of the fact that 54% of all statistics are
| made up on the spot.
| Eduard wrote:
| I doubt it.
| nl wrote:
| https://hackaday.com/2011/11/22/musical-greeting-card-
| with-m... shows building a music card on an ATTiny 85.
|
| These are around 20 MIPS. The Apollo guidance computer had
| around 1 MIPS.
| kolinko wrote:
| You can build a music card on ATTiny, but music cards
| didn't use ATTiny.
| nl wrote:
| Sure, but I couldn't find a teardown.
|
| It seems likely they are using something similar. It's
| difficult to find a cheaper, broadly available chip
| these days.
|
| You can find Z80 clones, but even they are generally
| upgraded and therefore more powerful than the Apollo
| computer.
| nl wrote:
| Wow the downvotes on this are pretty harsh!
|
| Do people really think the Apollo computer is more
| powerful? And have any evidence? I'd be surprised if you
| can get a microcontroller with as little processing power
| these days.
| schlupa wrote:
| Mmmmh, the AGC was not that low-end. It was a 16-bit computer
| running at 1 MHz with 72 KB of ROM and 4 KB of RAM.
| soheil wrote:
| Given how computation is more and more energy efficient and
| requires near-zero material to build, will there be a day when
| we consider computing cycles a priori a bad thing? Maybe there
| will be an argument about how terrible it is to have smart dust
| by those who consider it to be a new form of pollution and
| toxicity.
| npteljes wrote:
| >Ubiquitous computing, what will it bring us?
|
| Ads, propaganda and surveillance.
| nickpp wrote:
| I think we already have those, even without ubiquitous
| computing. Hell, we had them even before we had any computing
| whatsoever. Sure they are more efficient now (what isn't) but
| they always existed...
| SyzygistSix wrote:
| Considering how much computing power is available and how
| much of it is used to deceive, misinform, or manipulate
| people, this sounds likely.
|
| The only thing more disturbing and sad about this is how much
| consumer demand there is and will be for deception,
| misinformation, and manipulation.
| TheOtherHobbes wrote:
| Optimistic. It's a small step from those to compulsory
| "social credit" - like money, but worse - and other more or
| less overt forms of thought control and behaviour
| modification.
| teekert wrote:
| I'm sorry you had to go through your childhood without Star
| Trek ;)
| numpad0 wrote:
| The concept of digital advertisement was unknown to humans
| in the ST universe until the Ferengi brought an example to
| Federation Starbase Deep Space 9 in 2372, so that's one
| divergence between our universe and the Star Trek version
| of it.
| lanerobertlane wrote:
| Off topic, but Deep Space 9 was not a Federation
| starbase. It was a Bajoran Republic station under
| Federation administration following the Cardassian
| withdrawal.
| numpad0 wrote:
| Oh, I assumed it was under sole Federation control from
| the prefix "Deep Space"; wasn't aware it was under
| Bajoran ownership. I stand corrected.
| medstrom wrote:
| S/he's very sorry for misusing "starbase", s/he means a
| station.
| SyzygistSix wrote:
| Sao Paulo did away with public advertising for a couple
| decades.
|
| I believe it is creeping back in now. But it can be done.
| sn41 wrote:
| There's an idea for a series: a space opera with a cutting
| edge analytics-enhanced Ferengi ship where "its continuing
| mission is to explore strange new ad streams, to fleece out
| new life and new civilisations, to boldly trade where no
| one has gone before".
|
| The main adversary will be the cookie monster.
| npteljes wrote:
| I really did go without Star Trek! I had some small
| exposure to Star Wars, but what really grabbed my attention
| was the Neuromancer novel, and later the Matrix film
| series. Of course I'm cherry-picking my experience, but it's
| a valid observation of yours that while I'm a technologist
| through and through, I often focus on its ugly side.
| teekert wrote:
| Yeah I really enjoy optimistic sci-fi :)
|
| I do enjoy dark sci-fi every now and then too, but I
| generally like my heroes to be scientists and explorers,
| solving ethical questions.
| adrianN wrote:
| Not too far off:
|
| https://en.wikipedia.org/wiki/The_Game_(Star_Trek%3A_The_Ne
| x...
| Sohcahtoa82 wrote:
| I just lost the game.
| Andrex wrote:
| Such a shame the current writers of the franchise
| apparently didn't, either. Which is depriving current and
| recent generations of kids of that optimistic ideal.
|
| (Yes, Discovery season 3 is a thing I know about.)
| clownpenis_fart wrote:
| wow. I don't think anyone realized that before. really makes u
| think
| torginus wrote:
| I remember reading about some circuit where they replaced a
| 555 timer whose job was to generate a PWM signal with a fully
| featured microcontroller, because it was cheaper that way.
| canadianfella wrote:
| How do you pronounce 555?
| [deleted]
| winrid wrote:
| This is just the case, right? None of the internals (chip,
| screen) are from the lamp?
| grillvogel wrote:
| kinda lame tbh
| alkonaut wrote:
| Agreed, it should be able to run at 1x1 resolution (the lamp)
| with no audio, out of the box. It wouldn't be a very cool
| video though.
| icelancer wrote:
| No, the chips and internals are from the bulb. The screen
| obviously is not.
| winrid wrote:
| Awesome that a light has 100 MB of RAM, if I remember the
| video correctly.
| tyingq wrote:
| Not quite. The MGM210L has 108 kB of RAM, 1 MB of flash, and
| an 80 MHz Cortex-M33.
|
| He added external memory (an 8 MB W25Q64).
| winrid wrote:
| Ah. I only watched the video briefly. Thanks.
|
| Amazing clock speed for a lamp. I guess they need it to
| get the OS started quickly...
| blauditore wrote:
| Only tangentially related, but it has been bothering me: How does
| a simple rechargeable bicycle light cost upwards of $20?
|
| - It can't be about chip/logic, as that's a commodity these days
| (as this post celebrates).
|
| - It can't be LEDs, because they are dirt cheap too, especially
| red ones.
|
| - Building the plastic case doesn't seem to warrant such a high
| price.
|
| - The battery needs very little capacity, orders of magnitude
| lower than e.g. that of a phone.
|
| - Is it maybe the charging mechanism through USB? Are there some
| crazy patent fees?
| milesvp wrote:
| Generally you'll find that the msrp is going to be roughly
| 9xBOM (bill of materials). That leaves wholesale prices to be
| roughly 3xBOM so that there's some profitability at that stage.
| This is at least a common heuristic that I use when designing
| hardware. It's easy to say, oh, this chip is way better and it
| only costs 1 dollar more in quantity, but now your final price
| is $9 more and you may have priced yourself out of the market.
| These numbers change depending on volume, and how many zeros
| the final price has. And of course demand will also inform the
| final price, but they're numbers that seem to hold in a lot of
| manufacturing, going back to the early '80s.
|
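| A quick sketch of that heuristic (the 3x/9x multipliers are
| the rough rules of thumb above, not exact figures):
|
|     def price_points(bom):
|         wholesale = 3 * bom  # rough wholesale multiplier
|         msrp = 9 * bom       # rough retail multiplier
|         return wholesale, msrp
|
|     print(price_points(2.00))  # (6.0, 18.0)
|     # a $1 BOM bump moves the MSRP by $9:
|     print(price_points(3.00))  # (9.0, 27.0)
|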
| As for the BOM cost, you're right that for the board, the
| highest costs are probably the charge circuit followed by the
| processor. Battery probably costs the most, but don't discount
| the cost of the mould for the plastic, it's a high up front
| cost that needs to be replaced more frequently than you'd
| guess.
|
| In the end, that $20 bike lamp probably costs the shop $7-10 to
| acquire. And any shop that doesn't charge at least 2x their
| average cost for small items will tend to find their
| profitability eroded by fielding returns and other customer
| hassles.
| chasd00 wrote:
| The price will be what the market will bear. How could it be
| otherwise?
| jccooper wrote:
| Low volume products need high margins to be worthwhile. Which
| is another way of saying "no one has found it worthwhile to
| sell a simple rechargable bicycle light for $18."
| spython wrote:
| The bike lights that cost 15-20 EUR in Europe cost 2-5 EUR in
| bulk on Alibaba. Literally the same model. I guess it's mostly
| shipping, import duties, taxes, marketplace fees, free shipping
| to customer, returns handling and profit margin.
| rwmj wrote:
| The price of something isn't (usually) the cost of the parts +
| a profit margin. There's a whole theory behind pricing.
| lddemi wrote:
| "rechargeable bike light" on aliexpress (hell even amazon)
| yields several significantly below $20 options.
| webinvest wrote:
| It costs $20 and up because somebody priced it at $20 and up.
| Ekaros wrote:
| Because they can charge that much?
|
| Actually, I find it funny that the traditional solution of a
| light and dynamo is 6 EUR + 5 EUR shipping + taxes...
| sireat wrote:
| This was a nice porting job! https://www.reddit.com/r/itrunsdoom/
| - could use some new content.
|
| Next level would be finding the cheapest modern mass produced
| device that can run Doom with no hardware modifications.
|
| This means use whatever I/O the device comes with for controller
| and display.
|
| Using external display sort of distracts from the coolness.
|
| Second part - it has to be currently in production (like this
| Ikea lamp). I mean, you can find a $10 device/computer from 10
| years ago that will run Doom.
| djmips wrote:
| I agree, adding a display and other mods isn't so impressive.
| They might as well order the microprocessor board from the
| manufacturer.
| croes wrote:
| Next step, can it run Crysis?
| int_19h wrote:
| Doom was released in 1993, so 28 years ago. Crysis was released
| in 2007, so ... maybe in 2035?
| optimalsolver wrote:
| Reminds me of this comic:
|
| https://www.smbc-comics.com/comic/2011-02-17
| pwagland wrote:
| SMBC is starting to reach parity with XKCD... between the two
| of them there really _should_ be a comic for everything!
| vmception wrote:
| That was entertaining
| quickthrower2 wrote:
| But can it run Slack?
| etrautmann wrote:
| What an awesome project. I need to update my intuitions a bit.
| kregasaurusrex wrote:
| The creator posted a full write-up here: https://next-
| hack.com/index.php/2021/06/12/lets-port-doom-to...
| failwhaleshark wrote:
| Let's make a fortune by making lamp crypto malware.
|
| Who's down?
|
| _How do I type IDSPISPOPD on this thing?_
| chews wrote:
| Golf clap good human!
| timonoko wrote:
| Fake News (Just a little bit). I was just thinking about using an
| unmodified lamp as a Linux terminal. I learned Morse in the army
| in the '70s. I already had a lamp which morsed the time of day,
| but it was annoying, because Morse numbers are so long.
| nabaraz wrote:
| Reminds me of Ship of Theseus.
|
| "If you replace all the parts of a ship is it still the same
| ship?".
|
| This project is equivalent to "Doom running on 40-MHz Cortex M4
| found in Ikea lamps".
|
| Good work nevertheless!
| jonas21 wrote:
| I think it's fair to say they're playing Doom on the lamp (and
| even more impressively, it's not a whole lamp, but just a light
| bulb!). They use an external keyboard, monitor, speakers, and
| storage for the game data, but the processor and RAM are from
| the original bulb.
|
| If someone said "I'm playing Doom on my PC" in 1993, they would
| also have been using an external keyboard, monitor, and
| speakers. And the game would have shipped on external storage
| (floppy disks).
| Fatalist_ma wrote:
| Before clicking I assumed the lamp had a small screen and
| they were using that screen...
| Aeronwen wrote:
| Was hoping they stuck the light behind a Nipkow Disk. I
| didn't really expect it to happen, but I still want to see
| it.
| squeaky-clean wrote:
| I was hoping to see them running a 1-pixel version of doom
| on an RGB bulb.
| loritorto wrote:
| Actually, the correct technical term for "lightbulb" is
| "lamp", and the correct term for "lamp" is "fixture" :)
| midasuni wrote:
| Maybe in your language, but looking at my English
| dictionary, it clearly says a lamp is a device for giving
| light, consisting of a bulb, holder and shade.
|
| Historically a lamp would consist of the wick, oil and
| holder.
| loritorto wrote:
| From Wikipedia (Electric light): "In technical usage, a
| replaceable component that produces light from
| electricity is called a lamp." EDIT: Yes, later in the
| same page: "Lamps are commonly called light bulbs;"
| maybeOneDay wrote:
| "In technical usage" means that this level of nitpicking
| isn't really accurate. When you say "they ran doom on a
| lamp" that isn't a piece of scientific literature. It's
| just conversational English and as such using the common
| dictionary definition of the word lamp as opposed to a
| technical definition is entirely appropriate.
| [deleted]
| discardable_dan wrote:
| Agreed. The actual lamp isn't... the thing. It's just reusing a
| chip with a monitor.
| colonwqbang wrote:
| A PlayStation doesn't have a monitor, controller, loudspeaker
| etc. built in. It's all external stuff you have to plug in
| before you can play.
|
| Still, we say "I'm playing Doom on my PlayStation".
| SiempreViernes wrote:
| You also say you are "playing video games on my
| playstation" which doesn't make much technical sense, so
| clearly appeals to common idioms aren't without problems.
|
| In any case, the argument is that the mini console they
| built is _no longer a lamp_ , not that you can't play games
| on a console.
| squeaky-clean wrote:
| Ehhhh. Those things are meant to plug right in. I've never
| had to solder together my own breakout board and carrier
| board to hook a Playstation up to a TV while breaking the
| Playstation's main features in the process. That lightbulb
| is completely disassembled and won't function as a
| lightbulb anymore. And nothing they added was plug-n-play.
|
| Edit: It's still a fun and cool project. But more like
| running Doom on hardware salvaged from an IKEA lamp.
| dspillett wrote:
| Maybe this could be a new take on "a modern pocket calculator
| has far more computing power than the moon landing systems in
| '69". A modern lavalamp has as much computing power as my mid
| 90s desktop PC.
| loritorto wrote:
| I think that the goal of the project is not "Doom running on a
| 40 MHz Cortex-M4" (actually an 80 MHz M33...), which is pretty
| easy I guess, but "Doom running with only 108 kB of RAM", while
| keeping all the features (which is pretty hard, I guess). I
| recall that I had to bend over backwards to get it running on
| my 386 with only 4 MB of RAM.
| audunw wrote:
| The game is cheating a little bit, since it loads a lot of
| read-only data from external SPI flash memory, and all the
| code is in the internal 1MB flash. On your 386, everything
| including the OS had to fit in that 4MB of RAM.
|
| It also doesn't have quite all the features. No music, and no
| screen wipe effect (I worked on a memory constrained Doom
| port myself, and that silly little effect is incredibly
| memory intensive since you need two full copies of the frame
| buffer)
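|
| To put numbers on that wipe effect (assuming the classic
| 320x200, 8-bit paletted frame buffer):
|
|     w, h = 320, 200               # one byte per pixel
|     frame = w * h                 # 64,000 bytes per buffer
|     wipe = 2 * frame              # the wipe keeps two copies
|     print(wipe > 108 * 1024)      # True: 128,000 > 110,592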
| int_19h wrote:
| Overlays were a thing in DOS days, as well. Not for Doom
| specifically, but I've seen quite a few 16-bit games using
| them.
| Narishma wrote:
| They're too slow for action games like Doom.
| danellis wrote:
| Yeah, "Game that ran on an 8MHz Arm in 1993 running on a 40MHz
| Arm in 2021" wouldn't have been as attention-grabbing.
| loritorto wrote:
| * "Game that ran in a 486 @ 66 MHz (for the same fps), with 4
| MB RAM in 1993 running on a 80 MHz Cortex M33, with 0.108 MB
| RAM"
| peoplefromibiza wrote:
| tbf they ported a low-RAM version, specifically the GBA
| version, not the original one.
|
| So the OP is not entirely wrong: they ported a game played
| on a 16.8 MHz system with 256 kB of RAM to an 80 MHz system
| with 108 kB of RAM.
|
| The writeup explicitly says <<we could trade off some
| computing power of the 80Mhz Cortex M33 to save memory>>
| pigeck wrote:
| Still, the original Doom features are all there, except
| multiplayer. They also restored some missing graphics
| features of the GBA port, like z-depth lighting. Yes, 4MB
| vs 108kB is more impressive than 256k vs 108k, but
| cutting the memory requirements in half is still
| noteworthy.
| dolmen wrote:
| So a port of a port isn't a port?
| Narishma wrote:
| At a quarter the resolution.
| ralmidani wrote:
| Is it fair to say "anything that can run Doom, eventually will
| run Doom"?
| grecy wrote:
| When I worked at the Department of Defence I got Quake III
| running on a monster SGI supercomputer that was somewhere
| around the $5mil mark.
| eloisius wrote:
| And people talk about $500 hammers...
| peterburkimsher wrote:
| Another guy got Doom running on potatoes, so I'd say yes.
|
| https://www.youtube.com/watch?v=KFDlVgBMomQ
| dhosek wrote:
| Back in the 80s/90s there were some questionable ports of TeX
| to unlikely hardware. Perhaps the most egregious of these was
| TeX running on a Cray supercomputer. Time on these machines was
| metered heavily. I can't imagine anyone actually used it for
| formatting papers. I had a dream of doing a hand port of TeX to
| 6502 assembly to see if I could get it running on a 128K Apple
| //e. I envisioned heavy use of bank switching to enable code to
| jump back and forth between the two 64KB banks, which could only
| be switched in predefined blocks as I recall (the page 1/2
| text/lores blocks, page 1/2 hires blocks, the high 16k and I
| think the low 48k that wasn't part of the lores/hires blocks --
| but it's a _long_ time since I played with that hardware).
| bombcar wrote:
| 128k seems at least in the same ballpark as the PDP-10 so it
| should be possible - especially if disk is available.
| [deleted]
| kencausey wrote:
| So, why does this device need such processing power? Can this
| really be cost effective?
| Karliss wrote:
| It is an IoT thingy with a wireless connection, which puts it in
| a category where certain factors combine.
|
| * The cost-effective solution for a lightbulb would be not
| having a wireless connection at all, rather than having a less
| powerful MCU. So it being IoT already means the target audience
| doesn't care about the price that much.
|
| * It uses an off-the-shelf, FCC-certified wireless module with
| an embedded antenna. For a product designer it makes sense to
| use a ready module because it avoids the need to do RF design
| and the certification. It also simplifies the design if you run
| your user application inside the wireless module instead of
| having an additional MCU communicating with the wireless
| module. Such modules tend to have medium- to high-end MCUs.
|
| Why do wireless modules need such processing power?
|
| * A 2.4 GHz antenna has certain size requirements, so the size
| constraints for the rest of the system aren't too tight.
|
| * The wireless circuitry puts the module in a certain price
| category, so it makes sense to have the MCU in a similar price
| category. Wireless certification is a pain, so there will be
| fewer product variants for wireless modules compared to
| something like 8-bit microcontrollers, which come in a wide
| variety of memory, size and IO configurations. If you have a
| single product variant, better to have a slightly more powerful
| MCU, making it suitable for a wider range of applications.
|
| * The wireless communication part itself has a certain amount
| of memory and computing requirements. Might as well split the
| resource budget on the same order of magnitude for the wireless
| part and the user application: N kB of memory for wireless and
| N kB for the user application instead of N and 0.001N,
| especially if the first part can easily vary by 10% or more due
| to bugfixes and compiler changes. Similarly, there are basic
| speed requirements for the digital logic buffering the bits
| from the analog RF part and doing the checksums.
|
| * Modern silicon manufacturing technologies easily allow
| running at a few tens of MHz, so if the SoC has memory in the
| range of 30-500KB and isn't targeting the ultra-low-power
| category, it is probably capable of running at that speed.
| marcan_42 wrote:
| The processing power needed to run a decent internet connected
| device with typical software stacks these days is about the
| same as the processing power needed to run Doom.
| nerfhammer wrote:
| I bet it connects to WiFi and/or Bluetooth so you can control
| it with some smartphone app.
| eru wrote:
| By today's standard Doom doesn't need much processing power.
|
  | You could probably find exactly the right chip that has just
  | as many bits of RAM as you need for the lamp's functionality.
  | But that would probably be more expensive to develop and ship
  | than just using standard parts?
|
| Even more radical: I suspect with a bit of cleverness you could
| probably do everything the lamp needs to do with some relays
| and transistors. But today, that would be more expensive.
|
| Compare https://www.youtube.com/watch?v=NmGaXEmfTIo for the
| latter approach.
| jfrunyon wrote:
| Implementing wifi/RF with "some relays and transistors"
| doesn't sound fun.
| eru wrote:
      | Yes. I should have been less sloppy: you'd also have to
      | rethink slightly what the lamp needs to be able to do.
| nxpnsv wrote:
| Not to you perhaps, but I'd watch the video if some
| patient/insane/genius built it...
| moftz wrote:
      | You could do something very basic with discrete components
      | for controlling wireless lighting systems, but the system
      | starts to get out of hand when you need to support a bunch
      | of lights nearby. It's much cheaper, simpler, and smaller
      | to reduce it down to a chip and move to a digital RF
      | system. I've got a
| bunch of RF controlled outlets in my house but it's just
| about the dumbest system you can buy. It's on par with the
| typical remote garage door opener. You can program the on/off
| buttons on the remote for each outlet but that's as far as it
| goes. I'd like to be able to remotely control them away from
| home or be able to give each light or outlet its own schedule
| and that requires either a central controller or each device
| having network access for network time and remote control.
|
| Interestingly, a friend rented a house in college once that
| had a system of low voltage light switches that ran back to a
      | cabinet filled with relays that controlled the lights and
      | outlets. No major benefit to the user other than a control
      | panel in the master bedroom that let you control the
      | exterior and some interior lights. It was a neat system but
| definitely outdated. I'd imagine a retrofit would be to drop
| all of the relays for solid state and provide a networked
| controller to monitor status and provide remote control.
| foobar33333 wrote:
  | It doesn't need it; it's just that chips that can run Doom
  | are the dirt-cheap bottom-tier chips now. Rather than making
  | some custom chip only just powerful enough to run the lamp
  | software, you may as well just stick a generic one in.
|
| These IKEA smart bulbs cost about $9 so yes, it is cost
| effective.
| marcan_42 wrote:
| Chips that can run Doom are nowhere near the dirt cheap
| bottom tier. The dirt cheap bottom tier is this $.03 chip:
|
| https://hackaday.com/2019/04/26/making-a-three-cent-
| microcon...
|
| Chips that can run Doom, though, _are_ just about at the low
    | end for internet-connected devices. You can't run an IoT
| stack on that $.03 thing. The chip in the bulb is exactly in
| the right ballpark for the application. You _do_ need a
| fairly beefy chip to run multiple network protocols
| efficiently.
| foobar33333 wrote:
      | There is no network stack on the IKEA bulbs. They only
      | support local communication via Zigbee. No IP/TCP/etc. It's
      | the gateway device that does WiFi/networking.
| marcan_42 wrote:
| ZigBee is a network protocol with a network stack. Just
| because it isn't TCP/IP does not mean it's not a network.
| It has addressing, routing and routing tables,
| fragmentation and reassembly, discovery, error detection,
| packet retransmission, and everything else you'd expect
| from a functional network protocol.
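        |
        | For intuition, here's a toy sketch in Python (field
        | names and sizes invented for illustration, _not_ the
        | actual ZigBee frame layout) of the kind of header such
        | a stack shuffles around:
        |
        |     import struct
        |
        |     # Toy network-layer frame, loosely inspired by
        |     # mesh framing: short addresses for routing, a
        |     # hop budget, and a sequence number for
        |     # retransmission/duplicate detection.
        |     def pack_frame(src, dst, radius, seq, payload):
        |         header = struct.pack("<HHBB", dst, src,
        |                              radius, seq)
        |         return header + payload
        |
        |     frame = pack_frame(src=0x0001, dst=0xFFFC,
        |                        radius=5, seq=42,
        |                        payload=b"\x01")  # "light on"
        |     print(frame.hex())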
| spookthesunset wrote:
| Because mass produced microcontrollers are dirt dirt cheap.
| It's easier to source a way overpowered CPU than some perfectly
| spec'd one.
|
| Plus how else will malware people run their stuff?
| foobar33333 wrote:
    | The IKEA bulbs are actually pretty good malware-wise. The
    | bulbs do not connect to the internet; they use Zigbee to
| communicate with a remote. Which can either be a dumb offline
| remote or the gateway device. The gateway also does not
| connect to the internet, it is local network only and can be
| hooked up to the Apple/Google/Amazon systems for internet
| access.
|
| If you had to design an IoT bulb, this is the ideal setup.
| jfrunyon wrote:
| > The gateway also does not connect to the internet, it is
| local network only and can be hooked up to the
| Apple/Google/Amazon systems for internet access.
|
| In other words, it _does_ connect to the internet, it also
| sits on the LAN to give attackers access to all your other
| devices, AND it sits on Zigbee to give attackers access to
| those as well.
| midasuni wrote:
        | You obviously put it on its own /30 on the LAN and
        | limit its connection to what's needed
| franga2000 wrote:
| No, it _can_ be _commanded_ _from_ the Internet - big
| difference. It never has a direct connection to the
| Internet and even that is entirely optional and a hard
| opt-in (buying more hardware).
|
| And if you have attackers on your LAN, you're at the
| point where controlling your lightbulb is the least of
| your problems. As for Zigbee, go on, present your
| alternative - I'm all ears!
| marcan_42 wrote:
| As far as I know only Apple does the local network stuff.
| If a device is Alexa or Google Home compatible, it talks
| directly to some cloud service from the manufacturer on
| the Internet which then talks to Google or Amazon. So it
| connects directly to the internet, and moreover there is
| the additional attack/privacy surface of the
| manufacturer's cloud service.
|
| Source: I run a HomeAssistant local IoT hub and to
| integrate it with Google Home I had to give it a public
| hostname and sign up as an IoT vendor with Google to
| register it as a developer/testing mode service (if I
| were a real vendor it would be one cloud hub for all my
        | customers, not Google talking to individual homes;
| it's just that in my case there is only one user and the
| server is at my house).
| foobar33333 wrote:
| >If a device is Alexa or Google Home compatible, it talks
| directly to some cloud service
|
          | This is how some IoT devices work. As far as I can tell,
          | IKEA has no servers or infrastructure for their devices,
          | and the Apple/Google hubs manage everything for them.
| vinay427 wrote:
| The IoT device can certainly work like that. The comment
            | is specifically talking about Google Assistant support,
            | which, as HomeAssistant users have experienced, does
| require cloud server access even if this seems
| unnecessary in cases when the devices are only being
| controlled within a local network.
| marcan_42 wrote:
| IKEA _has_ to have servers for their devices to integrate
            | with Google Home and Alexa. That's how those systems
| work. Only Apple offers direct local connectivity as far
| as I know.
|
| These days Google Home has local fulfillment, but that
| seems to only be offered as an _addition_ to cloud
| fulfillment. It always has a cloud fallback path.
|
| Here's how you hook up Home Assistant to Google cloud. As
| you can see, turning it into a cloud service from
| Google's POV is required. You can either use Home
| Assistant Cloud (see? cloud service) or set up your own
| single-user cloud integration (which is what I do),
| turning your "local" server into a cloud service (with
| public IP and SSL cert and domain and everything) and
| registering yourself as an IoT vendor in their developer
| console, pointing at your "cloud" service URL.
|
| https://www.home-
| assistant.io/integrations/google_assistant/
|
| There is no way to keep the entire system local and have
| the Google Home devices only access it locally, without
| any cloud infrastructure. The commands flow from Google
| Home devices, to Google's cloud, to the vendor's cloud,
| to the vendor's devices. There is a bypass path these
| days for local access, but it is always in addition to
| the cloud path, and only an optimization.
| psanford wrote:
| I don't know how the IKEA hardware works. However it is
| not true that Alexa has to talk to a cloud service to
| integrate with all IoT devices.
|
            | I know this because I run some local Raspberry Pis that
            | pretend to be WeMo devices and I'm able to control them
            | without any cloud connections from the Pis. The Echo
            | discovers the WeMo devices via UPnP.
|
| This has been a thing for quite a while[0].
|
| I believe you are correct that Google Home has no local
| network device control.
|
| [0]: https://hackaday.com/2015/07/16/how-to-make-amazon-
| echo-cont...
| franga2000 wrote:
| We're still talking about the light bulbs, aren't we?
| Bulb --Zigbee--> Zigbee Gateway --WiFi/eth--> LAN. If
| further integration is desired, an Internet gateway can
| be used (could be a Google/Apple/Amazon box thing, but
| could also be a Pi with HomeAssistant!). How that gateway
| connects to the Internet is up to it - but at no point is
| either the lightbulb or its LAN gateway in any way
| connected to the Internet. Therefore, neither the bulb
| nor the gateway pose a direct security or privacy risk.
| All the security is offloaded to the gateway and you are
| entirely free to chose the vendor of your Internet
| gateway or indeed opt for none at all (and possibly use a
| VPN if external access is desired)
| marcan_42 wrote:
| foobar33333 said "The gateway also does not connect to
| the internet", which cannot be true, because connecting
| to the internet to speak to a manufacturer-provided cloud
            | service that then speaks to Google is _required_ to
            | integrate with Google Home. That's how it works. The
            | IKEA gateway _has_ to talk to an IKEA cloud service. If
            | you think otherwise, please link to the Google Home docs
            | that explain how that could possibly work, because I
            | can't find them.
|
| Here's how you hook up Home Assistant to Google cloud. As
| you can see, turning it into a cloud service from
| Google's POV is _required_. You can either use Home
| Assistant Cloud (see? cloud service) or set up your own
| single-user cloud integration (which is what I do),
| turning your "local" server into a cloud service (with
| public IP and SSL cert and domain and everything) and
| registering yourself as an IoT vendor in their developer
| console, pointing at your "cloud" service URL.
|
| https://www.home-
| assistant.io/integrations/google_assistant/
|
| There is no way to keep the entire system local and have
| the Google Home devices only access it locally, without
| any cloud infrastructure. The commands flow from Google
| Home devices, to Google's cloud, to the vendor's cloud,
| to the vendor's devices.
|
| Bulb --> Zigbee --> Zigbee Gateway --> WiFi/eth --> LAN
| --> Your router --> WAN --> IKEA cloud --> Google cloud
| --> WAN --> Your router --> LAN --> WiFi --> Google Home
| device.
|
| If that sounds stupid, congrats, this is why they call it
| the internet of shit.
| franga2000 wrote:
| See, I have in fact set this up in the past, although not
| with IKEA lamps, but some other cheap Zigbee-compatible
| ones. The Zigbee-LAN gateway (along with all the other
| WiFi devices) sat on its own VLAN with no Internet access
| at all and a HomeAssistant box had access to both the IoT
| VLAN and the main one (that had Internet access). The
| HomeAssistant instance was configured with a dev account
| to work with Google's crap, but the devices themselves
| only ever talked to it, not Google or any vendor-provided
| server.
|
| EDIT: Perhaps the terminology got somewhat twisted around
| here: when I talked about the LAN gateway, I meant
| specifically the thing that does Zigbee-LAN
| "translation". Now, that same physical box might also
| have the capability to work as a Zigbee-Alexa or Zigbee-
            | Google translator, which would require a vendor server as
| you said, but those options are, well, optional. You can
| certainly disable them and use something like HASS or
| openHAB as the bridge to whatever cloud service you wish.
| Same way that my home router has a built-in VPN feature,
| but I don't use it because I run a VPN server on my NAS
| instead.
| marcan_42 wrote:
| Of course, if you set up Home Assistant you can firewall
| them off the internet. That's how I do it too, with an
| IoT VLAN. It's not how these devices are intended to
| work, and not how they work if you just follow the
| manufacturer's instructions for Google/Alexa integration.
| You're replacing the vendor's cloud service with Home
| Assistant, effectively.
|
| For example, I had to work out that in order to get
| Broadlink devices to stop rebooting every 3 minutes
| because they can't contact their cloud crap you have to
| broadcast a keepalive message on the LAN (it normally
| comes from their cloud connection, but their message
| handler also accepts it locally, and thankfully that's
| enough to reset the watchdog). This involved decompiling
| their firmware. I think that patch finally got merged
| into Home Assistant recently.
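            |
            | The shape of that workaround, sketched in Python
            | (the port and payload below are placeholders, _not_
            | the real Broadlink protocol; the real values came
            | out of the decompiled firmware):
            |
            |     import socket, time
            |
            |     # Hypothetical stand-ins, not the real values:
            |     KEEPALIVE = b"..."  # device-specific magic
            |     PORT = 12345        # placeholder UDP port
            |
            |     s = socket.socket(socket.AF_INET,
            |                       socket.SOCK_DGRAM)
            |     s.setsockopt(socket.SOL_SOCKET,
            |                  socket.SO_BROADCAST, 1)
            |     while True:
            |         # Broadcast on the LAN so the device's
            |         # watchdog resets locally, no cloud needed.
            |         s.sendto(KEEPALIVE,
            |                  ("255.255.255.255", PORT))
            |         time.sleep(60)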
|
| My point is that this is not the intended use for these
| devices. Normal people are going to put the gateways on
| the internet and enable the Google integration; in fact,
| it's quite likely that they will sign in to some IKEA
| cloud service as soon as you put the gateways on a
| network with outgoing internet connectivity, even before
| you enable the integration.
| bouke wrote:
| This is why HomeKit is superior: when you're in your
| home, it doesn't need WAN to function. The connection
| would be Bulb --> Zigbee --> Zigbee Gateway --> Wifi -->
| iPhone.
|
| When you're away from home, iCloud will be used, and no
| IoT vendor systems come into play. This means that all
| IoT devices can be kept offline and limited to your LAN.
| The connection would be Bulb --> Zigbee --> Zigbee
            | Gateway --> Home Hub (Apple TV or iPad or HomePod) -->
| WAN --> iCloud --> WAN --> iPhone.
| jfrunyon wrote:
| > all IoT devices can be kept offline and limited to your
| LAN
|
            | > Home Hub (Apple TV or iPad or HomePod) --> WAN
| bouke wrote:
| IoT devices being smart appliances. Not Home Hub (Apple
| device).
| jfrunyon wrote:
| I'm not sure why you think that having a proxy in the
| middle will protect you.
| bouke wrote:
| Simple: I trust the security of Apple/iCloud over the
| security of the servers of any IoT vendor.
| jfrunyon wrote:
| > No, it can be commanded from the Internet - big
| difference.
|
| Big difference from what?
|
| You do realize that the vast majority of remotely
| exploitable security vulnerabilities are in software
| _which can be commanded from the Internet_ , right?
| franga2000 wrote:
| Source?? I'm quite certain that it's much harder to
| exploit something that you can't even send a TCP packet
| to. If the device is only directly connected to the hub
| (amazon/google/apple box) and the hub is only connected
| to the cloud service, how would you even send a payload
| to the device, even if an exploit existed?
|
| You could exploit the cloud service directly and gain
| control of the device, but that's like stealing the
| security guard's master keys - you can't call that a
| vulnerability in the door lock, can you?
| foobar33333 wrote:
          | There are 2 layers of relay devices in between. And the
          | only one with direct internet access is a device you
          | already have on the internet, developed by top brains
          | and maintained for many years to come, unlike your
          | average smart bulb hooked directly to the internet with
          | minimal security.
|
| If you buy the dumb remote, you get a useful smart light
| setup with no internet or even local network
          | connectivity. It's useful because you can turn a room full
| of lamps on at once or adjust their color.
| jfrunyon wrote:
| Oh boy, it sure is impossible to exploit something
| through a proxy in a training diaper!
| mrb wrote:
| The system-on-chip is the MGM210L which needed to be powerful
| enough to run multiple wireless protocols (Zigbee, Thread,
| Bluetooth), so the lamp can be controlled by any of these.
| These are very complex protocols. The Thread spec for example
| is hundreds of pages. I did a formal security review of it on
| behalf of Google back in 2015. Bluetooth is even more complex.
| RF signal processing, packet structure, state machine, crypto,
| logical services, etc.
|
| The software complexity of these protocols is greater than the
| complexity of a rudimentary 3D game like Doom, so it's expected
| that whatever chip can run these protocols can also run Doom.
|
| Datasheet of MGM210L for the curious:
| https://www.silabs.com/documents/public/data-sheets/mgm210l-...
| Cthulhu_ wrote:
| I would have thought they'd have specialized chip hardware
| already to deal with these, but maybe I don't know enough
| about that kinda thing. Pretty sure it'd have specialized
| hardware and/or instructions to deal with the cryptographic
| aspects though.
| nabla9 wrote:
| It's already there.
|
    | The chip is a SoC with an Arm core, crypto acceleration,
    | DSP extensions and an FPU, plus the radio parts.
| gtsteve wrote:
| Baking something like that into hardware is probably not a
| good idea because then you can't update it when
| vulnerabilities are found.
| nekopa wrote:
| Makes me smile to think that one day my bank account will
| be drained of all its BTC because I forgot to patch my
      | bedside lamp last Tuesday...
| kwdc wrote:
| That would make me frown. Just saying.
| dolmen wrote:
      | But do you get software updates (and deployment to your
| device) when vulnerabilities are found? ;)
|
| The real reason is it reduces the cost (and duration) of
| iterating in the development phase.
| elondaits wrote:
| My Hue lamps had their firmware updated through the app a
        | couple of times, and I've had them for 4-5 years.
| kmadento wrote:
        | Since IKEA updates the firmware of the Tradfri units
        | quite often and has security updates in the changelog, I
        | would guess... yes. They also mention stability and
        | performance improvements for different protocols in the
        | changelogs.
| phkahler wrote:
| >> The software complexity of these protocols is greater than
| the complexity of a rudimentary 3D game like Doom
|
  | That's unfortunate. Protocols, particularly those used for
  | security, should be as simple as possible. I know it's a hard
| problem and people tend to use what is available rather than
| solving the hard problem.
| oaiey wrote:
    | Complexity... maybe. 80 MHz... heck no. However, this is all
    | off-the-shelf ware, and as long as the energy consumption is
    | not a problem, I am fine.
| marcan_42 wrote:
| It's burst processing. You do actually need high processing
| speeds for short periods of time to implement network
| protocols like these effectively. Think cryptography,
| reliability, etc. The CPU isn't doing anything most of the
      | time, but it makes a big difference if it can get the job
      | done in 5ms instead of 500ms when there is something to do
      | (like process a packet).
|
| Also, higher clock speed = lower power consumption. It
| sounds counterintuitive, but getting the job done quickly
| so you can go back to low power mode sooner actually saves
| power, even if the instantaneous power draw while
| processing is higher.
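      |
      | Back-of-envelope, in Python, with invented but
      | representative numbers (not measurements of this chip):
      |
      |     # Race-to-sleep: average energy per 1 s period when
      |     # the MCU bursts, then deep-sleeps the rest.
      |     def avg_energy_mj(active_mw, work_ms,
      |                       sleep_mw=0.01, period_ms=1000.0):
      |         return (active_mw * work_ms + sleep_mw *
      |                 (period_ms - work_ms)) / 1000.0
      |
      |     # Fast MCU: 30 mW, done with a packet in 5 ms.
      |     # Slow MCU: half the draw (15 mW) but takes 50 ms.
      |     print(avg_energy_mj(30.0, 5.0))   # ~0.16 mJ
      |     print(avg_energy_mj(15.0, 50.0))  # ~0.76 mJ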
| baybal2 wrote:
        | Nearly all cryptography on MCUs is hardware based. It
        | would otherwise be completely impossible to do timing-
        | sensitive crypto work like WiFi or BT.
        |
        | The WiFi or BT protocol stacks themselves, however, are
        | almost always in software on MCUs, simply because nobody
        | would bother making separate ASIC IP for something that
        | will be outdated by the next standard errata.
| marcan_42 wrote:
| Symmetric cryptography is often hardware based, but
| asymmetric crypto rarely is. The latter is commonly used
| for pairing/key exchanges, and would be painfully slow on
| a slow MCU.
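        |
        | Rough intuition for the gap, as a toy op count rather
        | than a benchmark:
        |
        |     # Square-and-multiply modular exponentiation does
        |     # ~1.5 big-integer multiplications per exponent
        |     # bit on average: one squaring per bit plus a
        |     # multiply for roughly half the bits.
        |     bits = 2048             # e.g. an RSA-2048 modulus
        |     print(int(1.5 * bits))  # ~3072 full-width multiplies
        |
        |     # A symmetric primitive like AES-128 is 10 rounds
        |     # of cheap byte-level operations per 16-byte block.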
| baybal2 wrote:
| > The Thread spec for example is hundreds of pages.
|
  | And this also assured that it was dead on arrival.
| mrb wrote:
| Actually Thread is alive and kicking! It has a lot of
| momentum at the moment. But if it gets displaced, IMHO it's
    | going to be by WiFi HaLow or its successors.
| baybal2 wrote:
      | It is not alive. I haven't seen a single device working
      | with it, while I have at least seen one Apple Home
      | compatible device in an Apple store.
      |
      | Tuya, on the other hand, is in every retail outlet; it's
      | just that people don't know Tuya is under the bonnet.
| mrb wrote:
| Maybe not in your country(?) In the US there are quite a
| few commercial systems built on Thread: Apple HomePod
| mini, Google Nest security system, etc. Don't get me
| wrong: we are still very early in real-world deployments.
| It was just 2 years ago that the consortium/Google
| released the open source OpenThread which is when
| momentum really picked up.
| notwedtm wrote:
| Please stop lying to people:
| https://www.threadgroup.org/thread-group#OurMembers
| kortex wrote:
          | That means little more than "who bought or used the
          | spec at some point." It has little bearing on
| contemporary real-world commercialization of the thread
| protocol.
| baybal2 wrote:
          | I am not lying; the assortment of Thread-based devices
          | that have ever gone on sale is much smaller than the
          | size of Thread's illustrious board of directors.
          |
          | They literally have more talkshops per month than
          | devices released.
| [deleted]
| remarkEon wrote:
| These are always awesome and I never stop being impressed by what
| folks can do.
|
| What's left in the world for "DOOM running on ____"?
|
| Here's my idea:
|
| Could we do this with a drone swarm? And have players still
| control DOOM guy somehow? I'm imagining sitting on the beach at
| night and someone is playing a level while everyone looks up at
| the sky at a massive "screen".
| masswerk wrote:
| What about a swarm of pigeons? Three RFCs can't be wrong...
|
| [0] https://datatracker.ietf.org/doc/html/rfc1149
|
| [1] https://datatracker.ietf.org/doc/html/rfc2549
|
| [2] https://datatracker.ietf.org/doc/html/rfc6214
| unhammer wrote:
    | DOOM over Avian Carrier (since obvs trained pigeons can
    | implement a Turing machine, see also
| https://en.wikipedia.org/wiki/IP_over_Avian_Carriers ).
|
| DOOM in Game of Life.
|
| DOOM as a Boltzmann brain (might take a while before that's
| implemented, but I bet it'll happen eventually)
| TheOtherHobbes wrote:
| It would be hard to prove that it hasn't already.
| TheCraiggers wrote:
      | Just because pigeons can deliver messages doesn't mean the
      | system is Turing complete, although they could be used for
      | data transfer. I've never seen anybody suggest that RFC1149
      | is Turing complete, anyway.
      |
      | Game of Life is totally Turing complete though, so it's
      | already proven that you can indeed run Doom on it.
| PhasmaFelis wrote:
  | Years ago, I saw someone implement a vector display using a
  | powerful visible-light laser on a gimbal instead of an
  | electron gun with magnetic deflection.
|
| Then they used it to play Tetris on passing clouds.
| usrusr wrote:
| Too easy. At first I was imagining some amazing drone dance,
| but then I realized that it would be just a wireless screen
| with horrible battery endurance.
| Cthulhu_ wrote:
| They've been playing Tetris and Snake on some weird things
| already (I've seen Tetris on a high rise, and Snake on a
| christmas tree)
| ant6n wrote:
  | Game Boy Color (8-bit CPU, 2 MHz, 40K RAM).
  |
  | Supposedly the guy (Palmer) who created the commercial GBA
  | version had done a tech demo for GBC, but Carmack decided it
  | was too stripped down and proposed a Commander Keen port for
  | GBC at the time instead. The GBA came out a couple of years
  | later and was powerful enough.
| SonicScrub wrote:
  | This reminds me of this Saturday Morning Breakfast Cereal web
  | comic:
|
| https://www.smbc-comics.com/comic/2011-02-17
| dolmen wrote:
| Or on a building.
|
  | See Project Blinkenlights from the early 2000s (not Doom, but
  | still video games).
| https://en.wikipedia.org/wiki/Project_Blinkenlights
| Out_of_Characte wrote:
  | The current world record for a drone swarm is ~1300 drones;
  | a 320x200 resolution has about 50 times more pixels than
  | that. Therefore you'd need powerful 8-bit color drones and
  | someone to design a good edge representation, or just a
  | massive fleet for better resolution.
|
| https://www.airlineratings.com/news/art-sky-check-spectacula...
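  |
  | The arithmetic, using the record figure above:
  |
  |     pixels = 320 * 200   # Doom's native resolution
  |     drones = 1300        # approx. swarm record
  |     print(pixels)                  # 64000
  |     print(round(pixels / drones))  # ~49, i.e. ~50x short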
| piceas wrote:
    | Just dangle a string of 50 LEDs below each drone. Close
    | enough!
| remarkEon wrote:
| Someone will do it, I'm sure.
|
| Now, to figure out how to pump in the soundtrack ...
| aninteger wrote:
| "Of course it runs Doom"
|
| Nice work!
| systemvoltage wrote:
| Constrained resources often lead to exceptionally brilliant
| software. It's as if resource constraints demand a form of
| discipline and frugality from us: we become acutely aware of
| them, and steer our brains away from complacency. From the
| Apollo program to today's Internet-of-Shit, somewhere we lost
| our ability to focus and bloated ourselves on the sugar-high
| of processing power and memory. Just because it exists, and is
| cheap, doesn't mean it's good for you.
| runawaybottle wrote:
  | Constraints are the mother of all creativity. Like Andy
  | Dufresne and the rock hammer: how about I give you nothing,
  | could you possibly dig your way out?
| Jare wrote:
| Creativity would ensue, success probably not.
| bayesian_horse wrote:
| Truly a light bulb moment.
| baybal2 wrote:
| That's a very powerful lightbulb!
|
| It has more CPU perf than my first computer, and costs 1000 times
| less at the same time.
|
| The progress of the semiconductor industry is fantastical.
| stuntkite wrote:
| With the chip shortage, you best believe some people are going to
| be buying some of these to scavenge for project parts.
| kwdc wrote:
  | Is there a PCB shortage as well? I feel like that could put a
  | dampener on the proceedings. Asking for a _friend_.
| Doxin wrote:
| I doubt it. PCBs are much more of a commodity than chips. You
| can make passable PCBs at home without a crazy amount of
| effort, and you can definitely get to professional-grade PCBs
| with some effort.
| kwdc wrote:
      | I haven't etched a PCB for years. This will be a good
| weekender.
| stuntkite wrote:
    | I mean, maybe? I don't think it's as consequential for
    | hobbyists as the lack of microcontrollers or other
    | components. For instance, I can electroplate quite a few
    | different substrates if I want to cut my own boards. Also,
    | after years of doing this, I have so much proto crap I'm
    | probably good till the next pandemic, but probably only have
    | a couple of unused Teensys and maybe an Arduino or two lying
    | around. I don't see the supply chain ever springing back to
| what it was. We are in a whole new world of scarcity for at
| least a few years IMO. Which I'm not that upset about at all
| really. It's inconvenient and possibly dangerous but I think
| the reorganization will create resilience in our supply
| chains and also the lack of gear will encourage people to do
| very interesting things with stuff that would be considered
| trash in 2019 and we need more of that. E-Waste is a huge
| problem.
| mschuster91 wrote:
| > I don't see the supply chain ever springing back to what
| it was. We are in a whole new world of scarcity for at
| least a few years IMO.
|
| The current issues are driven by the automotive industry
| having screwed up and shitcoin miners snatching up GPUs.
| Neither is going to be a long term issue.
| varjag wrote:
| It's more than that, supply chain issues started around
| 2016.
| varjag wrote:
| Yes, there is a copper laminate shortage at the moment.
| linuxhansl wrote:
| "Only 108kb of ram."
|
| That quote reminded me of my first computer: the ZX81, which
| had 1KB of RAM! About 150 bytes of that were used by system
| variables, and depending on how you used the screen, up to 768
| bytes were used by the display buffer.
|
| And yet I managed to write code on it as a kid. :) (Of course
| nothing like Doom)
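|
| Using the numbers above, the worst-case budget works out to
| about a hundred bytes:
|
|     total   = 1024   # ZX81 base RAM, in bytes
|     sysvars = 150    # approx, per the figure above
|     display = 768    # fully used display file
|     print(total - sysvars - display)   # ~106 bytes left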
| beebeepka wrote:
| Lamp, the final frontier.
|
| What now, you might ask. Well, RGB brain implants running Quake,
| of course.
| HugoDaniel wrote:
| the pregnancy test doom was fake! :O
| djmips wrote:
  | This one is fake for me too, since they had to add a screen
  | and other stuff. Meh, I prefer my "Doom runs on X" to mean
  | that X wasn't modified or beefed up.
| abluecloud wrote:
| it wasn't fake as much as misreported.
| DudeInBasement wrote:
| Underrated
| djmips wrote:
| The author helped that along.
| clownpenis_fart wrote:
| wow who would have believed that doom could be easily ported to
| any sufficiently powerful cpu+framebuffer hardware
| wonks wrote:
| Has the "running DOOM" meme gone too far?
| phkahler wrote:
| No.
| bayesian_horse wrote:
| A corollary of Moore's Law: the size of things you can run Doom
| on halves roughly every two years.
| bognition wrote:
| Carmack's law?
| accountofme wrote:
| Excuse the swearing: but pretty fucking cool.
___________________________________________________________________
(page generated 2021-06-14 23:01 UTC)