[HN Gopher] Powerful supercomputer can now run on light instead ...
       ___________________________________________________________________
        
       Powerful supercomputer can now run on light instead of electric
       current
        
       Author : mardiyah
       Score  : 190 points
       Date   : 2021-12-27 12:34 UTC (1 day ago)
        
 (HTM) web link (www.techradar.com)
 (TXT) w3m dump (www.techradar.com)
        
       | neatze wrote:
       | Seems like you can try it out[1]. I find it a bit funny that it
       | is easier to get trial access to a quantum CPU and a light-based
       | GPU, but for Cerebras and Graphcore trial access you need to
       | spend thousands of dollars.
       | 
       | [1]https://lighton.ai/cloud/
        
       | 1MachineElf wrote:
       | >According to LightOn, its Appliance can reach a peak performance
       | of 1.5 PetaOPS at 30W TDP and can deliver performance that is 8
       | to 40 times higher than GPU-only acceleration.
       | 
       | Impressive!
       | 
       | LightOn hasn't received much discussion on here before. Some
       | links have been submitted, and this is the only one I could find
       | comments on: https://news.ycombinator.com/item?id=27797829
        
         | fsh wrote:
         | From the website of the manufacturer [1] it appears that the
         | co-processor is essentially an analog computer for matrix-
         | vector multiplications. I am quite sceptical about the accuracy
         | and value range of the computations. Even puny single-precision
         | floating point operations are accurate to something like 7
         | decimal digits and have a dynamic range of hundreds of dB.
         | According to the spec sheet, the appliance only uses 6-bit
         | inputs and 8-bit outputs, so the relative errors are probably
         | on the percent level. This makes it hard to believe that any
         | signal will propagate through something like a DNN without
         | completely drowning in noise.
         | 
         | [1] https://lighton.ai/lighton-appliance/
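          | 
          | As a rough back-of-the-envelope check (assuming plain
          | uniform quantization over a fixed range; the actual
          | LightOn encoding isn't documented in detail), a quick
          | Python sketch:
          | 
          |   import numpy as np
          | 
          |   # mean relative error of uniform B-bit quantization
          |   # of values drawn uniformly from (0, 1]
          |   def rel_err(bits, n=100_000, seed=0):
          |       rng = np.random.default_rng(seed)
          |       x = rng.uniform(0.01, 1.0, n)  # avoid x ~ 0
          |       levels = 2 ** bits - 1
          |       xq = np.round(x * levels) / levels
          |       return np.mean(np.abs(xq - x) / x)
          | 
          |   for b in (6, 8, 16):
          |       print(b, "bits:", f"{rel_err(b):.3%}")
          |   # 6 bits comes out around the percent level,
          |   # 8 bits a few tenths of a percent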
        
           | bjornsing wrote:
           | If there's too much noise just lower the dropout probability
           | from 50 to 30%. ;)
           | 
           | Joking aside, it is interesting how much noise and
           | quantization these neural networks can work with. I think
           | there's a lot of room for low precision noisy computation
           | here.
        
           | orlp wrote:
           | > Even puny single-precision floating point operations are
           | accurate to something like 7 decimal digits and have a
           | dynamic range of hundreds of dB. According to the spec sheet,
           | the appliance only uses 6-bit inputs and 8-bit outputs, so
           | the relative errors are probably on the percent level. This
           | makes it hard to believe that any signal will propagate
           | through something like a DNN without completely drowning in
           | noise.
           | 
           | Maybe you aren't aware, but half-precision (16 bit float) is
           | already well-established in the AI community:
           | https://en.wikipedia.org/wiki/Bfloat16_floating-point_format.
           | In context single-precision isn't all that puny!
           | 
           | And there have already been successful experiments with
           | stronger quantization, like 8-bit neural nets, or even 1-bit
           | (!) neural nets. There is a lot of evidence that neural
           | networks can be very resilient to quantization noise.
        
             | bodhiandphysics wrote:
             | It's so annoying!!!! I work in computer graphics and
             | scientific computation and it's getting hard to get GPUs
             | with adequate double-precision performance.
        
               | chillee wrote:
               | You might be interested in the upcoming AMD Mi200 GPUs,
               | which have 96 teraflops of fp64 performance.
        
               | bodhiandphysics wrote:
               | Unfortunately, I need cuda (for now)
        
               | sudosysgen wrote:
               | With some tweaking, I've had success running CUDA code
               | with AMD's HIP framework. You'll probably need to make
               | some changes, though.
        
               | simplestats wrote:
               | I think it's hilarious. When I was in school,
               | microprocessors were 8-bit. Then as the world got
               | digitized it was 16-bit microprocessors, at least in my
               | narrow world. Then 32-bit and floats came along, and
               | finally double floats. Each step was a big product effort
               | and launch we'd sell as better technology. And just as we
               | finally got past the need to always make both 32- and
               | 64-bit versions of everything, we turn around and head
               | back down. Though I saw someone (Qualcomm?) with a press
               | release about their next-gen microprocessor supporting
               | 32-bit floats! Not sure if it counts as another U-turn or
               | they're just a straggler.
        
               | wtallis wrote:
               | I think you may be improperly conflating address or
               | pointer width with the data types used for integer or
               | floating-point arithmetic--we had floats up to 80 bits
               | with "16-bit" x86 processors. Nobody's moving backwards
               | in terms of pointer size. And any history of the
               | progression of bit widths is incomplete without
               | mentioning the SIMD trend that PCs have been on since the
               | mid-90s, starting with 64-bit integer vectors and
               | culminating in today's 512-bit vector instructions that
               | re-use those original 64-bit vector registers as mask
               | registers.
               | 
               | I don't think there's any point at which PCs have ever
               | abandoned 8-bit and 16-bit data types, or ever appeared
               | to be on a trajectory to do so. We've just had some
               | shifts over the years of what kinds of things we use
               | narrower data types for.
        
               | bee_rider wrote:
               | I think we'll just have to embrace it. Anyway, dealing
               | with reduced precision is more fun than throwing bits at
               | the problem.
        
             | ISL wrote:
             | I'd be real surprised if the neurons in our brains have
             | ADC-equivalents better than ~4 bits.
        
               | tgv wrote:
               | True, but ANNs are nowhere near as good as our brains,
               | nor do they operate in the same way.
        
               | xwolfi wrote:
               | I think you missed the fact we were talking about neural
               | networks, not an animal brain self-replicating and
               | branching and competing with itself for billions of years
               | until it becomes aware of itself. Give us the same time.
        
               | [deleted]
        
               | Retric wrote:
               | The earliest and simplest brains were still useful. Even
               | insects can fly around in 3D space, I doubt you need
               | something as complicated as a mouse brain to run a self
               | driving car let alone a drone.
        
               | gnatman wrote:
               | Insects also crash into stuff constantly!
        
               | ben_w wrote:
               | I'm not sure how much it matters given this thread looks
               | like it's going off on several successive tangents, but
               | the important (and hard) thing with a self-driving car is
               | making sure it doesn't hit stuff, not the actual driving
               | part.
               | 
               | And drones, trivially agree: _Megaphragma mymaripenne_
               | has 7400 neurones, compared to the 71/14 million in a
               | house mouse nervous system/brain.
        
               | robwwilliams wrote:
               | And to stay on this odd tangent:
               | 
               | Best estimate I have for total cell numbers in mouse
               | brain--about 75 million neurons and 35 million other cell
               | types. This estimate is from a 476 mg brain of a C57BL/6J
               | case--the standard mouse used by many researchers.
               | 
               | Based on much other work with discrete neuron
               | populations, the range among different genotypes of mice
               | is probably +/- 40%.
               | 
               | For details see: www.nervenet.org/papers/brainrev99.html
               | 
               | Expect many more (and I hope better) estimates soon from
               | Clarity/SHIELD whole-brain lightsheet counting with Al
               | Johnson and colleagues at Duke and the team at Life
               | Canvas Tech.
        
             | thesz wrote:
             | These 1-bit-per-coefficient neural nets need a pretty good
             | floating-point implementation to train them. From what I
             | remember, they are trained with rounding - basically, a
             | floating-point weight gets rounded to -1, 0 or 1, and the
             | computed floating-point gradient is added to the floating-
             | point weight.
             | 
             | Inference is cheap, training is not.
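              | 
              | A minimal sketch of that update rule (the toy
              | problem and names are illustrative, not from any
              | particular paper): keep a float "shadow" weight,
              | round it for the forward pass, and apply the
              | gradient to the float copy.
              | 
              |   import numpy as np
              | 
              |   rng = np.random.default_rng(0)
              |   w = np.zeros(4)              # float copy
              |   x = rng.normal(size=(64, 4))
              |   y = x @ np.array([1., -1., 0., 1.])
              | 
              |   for step in range(200):
              |       wq = np.clip(np.round(w), -1, 1)
              |       pred = x @ wq            # forward uses wq
              |       grad = x.T @ (pred - y) / len(y)
              |       w -= 0.05 * grad         # update float copy
              |   print(np.clip(np.round(w), -1, 1))
              |   # recovers the ternary weights [1, -1, 0, 1]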
        
           | tdrdt wrote:
            | But aren't there other applications where this is OK?
           | 
           | For example path tracing (ray tracing) doesn't need to be
           | very accurate because multiple samples per pixel are used.
           | 
           | A gaming card that uses less power is very welcome in laptops
           | for example.
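            | 
            | The Monte Carlo intuition, as a tiny sketch (toy
            | numbers, not a renderer): per-sample noise averages
            | out as you take more samples per pixel, with the
            | error shrinking roughly like 1/sqrt(N).
            | 
            |   import numpy as np
            | 
            |   rng = np.random.default_rng(0)
            |   true_value = 0.5
            |   for n in (4, 64, 1024):
            |       s = true_value + rng.normal(0.0, 0.1, n)
            |       err = abs(s.mean() - true_value)
            |       print(n, "spp -> error", round(err, 4))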
        
             | moonchrome wrote:
             | What do you mean? What kind of worlds can you represent
             | with 8-bit units? Some small blocky voxel box?
        
               | tdrdt wrote:
               | I mean inaccurate vector math.
        
               | moonchrome wrote:
               | But that's what I'm saying: if your vectors are reduced
               | to 8-bit scalar components, you can only represent about
               | 256x256x256 worth of detail in the world (it doesn't need
               | to be linear, but it's still really limited detail).
        
           | kortex wrote:
           | It's not a transmission line though, SNR does not apply in
           | the same way. It's more like CMOS where the signal is
           | refreshed at each gate. Each stage of an ANN applies some
           | weight and activation. You can think of each input vector as
           | a vector with a true value plus some noise. As long as that
           | feature stays within some bounds, it is going to represent
           | the same "thought vector".
           | 
           | It may require some architecture changes to make training
           | feasible, but it's far from a nonstarter.
           | 
           | And that is only considering backprop learning. The brain
           | does not use backprop, and has way higher noise levels.
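            | 
            | A toy version of that argument (purely illustrative
            | numbers: 256-wide random ReLU layers with ~1% noise
            | injected at every stage):
            | 
            |   import numpy as np
            | 
            |   rng = np.random.default_rng(0)
            |   x = rng.normal(size=256)
            |   xn = x.copy()                 # noisy twin
            |   for _ in range(4):
            |       W = rng.normal(size=(256, 256)) / 16
            |       x = np.maximum(W @ x, 0)
            |       noise = rng.normal(0, 0.01, 256)
            |       xn = np.maximum(W @ xn + noise, 0)
            |   cos = x @ xn / (np.linalg.norm(x) *
            |                   np.linalg.norm(xn))
            |   print(round(cos, 4))  # stays close to 1.0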
        
             | dahart wrote:
             | I think the parent was referring to the same noise that you
             | are, compute precision, not transmission, and was
             | suggesting that perhaps it won't easily stay within bounds
             | due to the fact that some kinds of repeated calculations
             | lose more precision at every step.
             | 
             | Maybe it's application dependent, maybe NNs or other
             | matrix-heavy domains can tolerate low precision much more
             | easily than scientific simulations. It certainly wouldn't
             | surprise me if these "LightOPS" processors work well in a
             | narrow range of applications, and won't improve or speed up
             | just anything that needs a matrix multiply.
        
         | robert_tweed wrote:
         | For a few seconds I thought it was Lite-On, best known for
         | their cheapo CD/DVD drives. Seems to be completely unconnected
         | though.
        
           | Datagenerator wrote:
           | Brings back memories of the Plextor automatic duplicator
            | robot we had in the company back in the nineties. Great
            | times.
        
         | inasio wrote:
         | I've never heard of LightOn, and wish the website had a bit
         | more concrete info on the specifics of the coprocessor, but I
         | am somewhat familiar with a similar photonic coprocessor made
         | by NTT (the Coherent Ising Machine). It's still in the research
         | stage, the logic uses interferometry effects, and requires
         | kilometers of fiber optic cables. Interestingly, there is a
         | simulator based on mean field theory that runs on GPUs and
         | FPGAs(*) that can solve some problems (e.g. SAT) with close to
         | state of the art performance.
         | 
         | (*) disclosure: my company helped build the simulator
        
           | dasudasu wrote:
           | There are other startups in the space that do it in
           | semiconductors. Look up Lightelligence and Lightmatter for
           | instance.
        
         | zapdrive wrote:
         | Nice. Can't wait for the new light based GPUs, all being
         | grabbed by greedy crypto miners and me still using my 6 year
         | old graphics card!
        
           | JohnJamesRambo wrote:
           | Relief is on the horizon. Ethereum should switch to proof of
           | stake in June 2022 and you are about to see an unholy torrent
           | of used GPUs hit the market. I would expect you can pick up
           | any you like for peanuts then.
        
             | demux wrote:
             | Ethereum PoW won't immediately disappear, and I'm sure
             | bitcoin folk will be all too happy to grab those extra GPUs
        
             | zapdrive wrote:
             | I have been hearing "Ethereum is switching to proof of
             | stake in a few months" for at least 6 years now. I don't
             | think it's going to ever happen.
        
       | mikewarot wrote:
       | The only actual hardware description I could find was in this
        | arXiv link, where you have a laser that is spread out, then put
        | through a light-gate chip (as found in projectors) and a random
        | mask, and then onto a CCD camera. This does random multiplies,
        | in parallel, which somehow prove useful.
       | 
       | I fail to see how it actually does matrix math.
       | 
       | https://arxiv.org/abs/1609.05204
        
         | [deleted]
        
         | daralthus wrote:
          | It does random projections [1], which are really useful for
          | dimensionality reduction.
         | 
         | [1] https://en.wikipedia.org/wiki/Random_projection
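          | 
          | A quick sketch of the idea (toy sizes, Gaussian
          | projection matrix): random projections roughly
          | preserve pairwise distances while cutting the
          | dimension a lot.
          | 
          |   import numpy as np
          | 
          |   rng = np.random.default_rng(0)
          |   X = rng.normal(size=(100, 10_000))  # 100 points
          |   k = 512
          |   R = rng.normal(size=(10_000, k)) / np.sqrt(k)
          |   Y = X @ R                           # now 512-dim
          | 
          |   i, j = 3, 42
          |   d0 = np.linalg.norm(X[i] - X[j])
          |   d1 = np.linalg.norm(Y[i] - Y[j])
          |   print(round(d1 / d0, 3))  # close to 1.0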
        
       | 14 wrote:
        | I remember hearing about LiFi [1] years ago and thinking we would
        | see it everywhere, but that has not been the case. I wonder what
        | has held it back. Like this supercomputer, LiFi promised
        | unmatched speeds that WiFi cannot match. Very neat to see the
        | field is still advancing.
       | 
       | [1] https://lifi.co/lifi-vs-wifi/
        
         | tomlue wrote:
          | There was a TED talk [1] that made me think LiFi would take
          | off much faster than it has.
         | 
         | [1]https://www.ted.com/talks/harald_haas_wireless_data_from_eve
         | ...
        
         | crate_barre wrote:
         | How could LiFi be truly wireless if anything can obstruct the
          | path of light? How's my phone in my pocket going to get a
          | connection?
        
           | croes wrote:
           | If you don't need a cable it's wireless notwithstanding any
           | other disadvantages
        
           | dmm wrote:
           | Why restrict yourself to visible light? There are frequencies
           | of em waves that would easily penetrate your jeans:
           | https://en.wikipedia.org/wiki/Gamma_ray
        
             | [deleted]
        
             | croes wrote:
              | Isn't the use of visible light the whole point of LiFi?
        
             | Denvercoder9 wrote:
             | Usually we go to the lower-energy, longer-wavelength part
             | of the spectrum for applications like this, since gamma
              | rays and the high-energy, short-wavelength part of the
              | spectrum are very hard to control and give you cancer.
        
             | causality0 wrote:
             | Keeping a multi-watt transmitter of ionizing radiation in
             | your pocket is a very bad idea.
        
               | ben_w wrote:
               | Or indeed a single-milliwatt transmitter, if it's in your
               | pocket for more than an hour or so per lifetime.
        
               | causality0 wrote:
               | I see this "cell phones will use x-rays and gamma rays in
               | the future" claim often enough that I wish an expert in
               | the field would write an article on just how bad the
               | effects would be.
        
           | up6w6 wrote:
           | It seems that light reflected off walls can achieve up to 70
           | Mbit/s in a controlled environment. But yeah, it's still hard
           | to think about direct applications in our lives.
           | 
           | https://en.wikipedia.org/wiki/Li-Fi
           | 
           | https://www.abc.net.au/radionational/programs/scienceshow/th.
           | ..
        
           | 14 wrote:
            | There are many use cases that don't require you to be
            | connected 24/7. You could, while driving under a street
            | light, download a movie in seconds, then go on your way,
            | loading your car's infotainment with maps and several movies
            | for your kids in the back, or whatever. I haven't looked into
            | the tech in years, but the promise of faster downloads and a
            | city-wide array of street lights made it seem like the
            | future.
        
             | zbrozek wrote:
             | Aperture fouling (dirt / grime) seems like it would make
             | such a service flaky. Aside from needlessly low data caps,
             | the cell network is pretty great for road trip
             | connectivity. High bandwidth intermittent connectivity
             | would be nice for self-driving car sensor log offload.
             | 
                | Inside my home it would be cool to have 10 Gbps to my
                | laptop, but I don't have a real use case where that's
                | meaningfully better than the 500-800 Mbps that I already
                | get with WiFi.
        
               | 14 wrote:
                | The sensor could be mounted behind your windshield; we
                | already have windshield wash built into every car, so it
                | could be cleaned effortlessly. I don't see that being an
                | issue.
        
               | zbrozek wrote:
               | Maybe, but the other side also needs to be cleaned. And
               | the car side TX won't be anywhere near as bright as a
               | street lamp (and probably IR) and will be more
               | vulnerable.
               | 
               | As you point out, all of this is likely surmountable,
               | but... why bother if the status quo is serving the use
               | case?
        
               | bogwog wrote:
               | > pretty great for road trip connectivity
               | 
               | Until people start getting seizures while driving from
               | all the flashing lights
        
               | 14 wrote:
                | The flashing happens so fast it is invisible to the human
                | eye. LED lights already do this with pulse width
                | modulation to make them as efficient as they are, and it
                | is not an issue.
        
               | zbrozek wrote:
               | Correct, though I was actually referring to the existing
               | cell network as great for road trips.
        
               | dekhn wrote:
               | I can see LEDs flashing out of the corner of my eye when
               | it's moving quickly, it's really obnoxious.
        
         | causality0 wrote:
         | That page is amateurish and riddled with absurd statements,
         | like the claim visible light travels faster than radio waves.
         | It also claims LiFi offers speeds "100 times faster than wifi"
         | but their thousand dollar router can only do 100Mbps. To add
         | insult to injury it rips off the Wi-Fi trademark logo as well.
         | 
         | There may be a market for visible light networking but LiFi is
         | total garbage.
        
       | crate_barre wrote:
       | Would this generate less heat? Heat is still a bottleneck in some
       | ways right? What about size? Can we get smaller?
       | 
       | Would we need semiconductors anymore?
        
         | adgjlsfhk1 wrote:
          | Heat is just power. 10x less power is 10x less heat.
        
           | foobarian wrote:
           | 10x less power is 10x more gear I can run off the same outlet
           | :-)
        
           | goldenkey wrote:
           | Indeed. All electrical devices are just space heaters that do
           | something logical with the electricity while it's heating up
           | the wires :-)
        
             | amelius wrote:
             | https://en.wikipedia.org/wiki/Adiabatic_circuit
        
       | ww520 wrote:
        | I'm not super well versed on this topic, but the basic physics of
        | photons and electrons makes light a poor basis for computing
        | compared to electricity. Photons don't interact with each other
        | much when crossed, while electrons interfere with each other
        | greatly. It's really difficult to build logic gates out of
        | photons, while electrons work great with semiconducting materials
        | to build gates.
        | 
        | Light is good for communication precisely because of that non-
        | interfering nature, which helps pump up the bandwidth. There's a
        | saying that light is great for data transmission while
        | electricity is great for computing.
        | 
        | So when people claim to be making a supercomputer out of optical
        | computing, take it with a grain of salt.
        
         | aidenn0 wrote:
         | Photons may not interact with themselves much, but they
         | interact with other materials in significant and useful ways,
         | and there are plenty of materials that change their optical
         | properties in response to an electric field (which can be
         | either electronic or optical in origin, given that light,
         | particularly coherent light can have fairly strong electric
         | fields).
         | 
         | There are millions of things that an electronic digital
         | computer can do that are unlikely to be replaced by photonics,
         | but a hybrid approach may offer advantages in computing
         | specific things. As we have slowed down getting perf/watt
         | advantages from shrinking processes, more and more specialized
         | hardware has been used for performing calculations. It's not
         | that far-fetched to think that photonics might have a niche
         | where it has performance advantages.
        
           | ww520 wrote:
            | You're right that photons interact with many materials.
            | However, few exhibit semiconducting properties that are
            | useful for building logic gates.
            | 
            | Silicon has a kind of perfect energy band gap for its
            | valence-band electrons jumping to conduction-band free
            | electrons to allow electricity to flow. The gap is neither so
            | small that it introduces ambiguity nor so big that it takes
            | too much energy to move from the non-conductive to the
            | conductive state. Photonics needs to find a semiconducting
            | material/alloy that beats silicon to be a viable option.
            | 
            | Most research on building logic gates with light is a
            | combination of light and electricity. The most promising
            | recent approach uses a Josephson junction in a
            | superconducting current loop. A photon hitting the loop adds
            | energy to the superconducting current. With enough energy the
            | current reaches the critical current, which moves the
            | Josephson junction from zero voltage to a finite voltage. The
            | raised voltage gives off the energy and the current falls
            | back below critical. Continuous photons hitting the loop
            | cause the Josephson junction to produce an extremely high
            | frequency AC voltage. That's a photon-controlled gate.
            | 
            | But the research is still really early and it requires low-
            | temperature superconductivity. It's still a long way from
            | being competitive in practice.
        
         | krasin wrote:
          | As another commenter said, the magic happens not just between
          | photons, but between photons and the non-linear optical
          | materials in which those photons travel.
          | 
          | While not directly related to computing, I was fascinated to
          | learn that lasers routinely use crystals to cut the wavelength
          | of the light in half ([1], [2]).
         | 
         | 1. https://en.wikipedia.org/wiki/Second-harmonic_generation
         | 
         | 2.
         | https://en.wikipedia.org/wiki/Potassium_dideuterium_phosphat...
        
           | ww520 wrote:
            | That's true, but so far none of those materials has been
            | found to be competitive with electricity+silicon.
        
       | faeyanpiraat wrote:
       | There is an upcoming Veritasium video about the "comeback of
       | analogue computers".
       | 
       | Part one (teaser): https://www.youtube.com/watch?v=IgF3OX8nT0w
        
         | sshlocalhost98 wrote:
          | Yup, I saw it. What is he actually trying to say? How can an
          | analog computer supersede digital ones, when analogue hardware
          | is made too specific for only one task?
        
           | JumpCrisscross wrote:
           | We have _tonnes_ of computing power dedicated to repeatedly
           | solving a simply-parameterised problem.
        
             | rawoke083600 wrote:
              | Stupid question... with today's memory capacities, at what
              | limit/size do we stop using matrix ops and simply use
              | lookup tables to do 'matrix math'?
        
               | adgjlsfhk1 wrote:
               | TLDR is you can't. For a very simple example, storing the
               | products of all 3x3 matrices * length 3 vectors in
               | Float16 precision would take 2^193 bytes (which is
               | obviously impractical).
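                | 
                | Spelling out the counting (same ballpark):
                | 
                |   import math
                | 
                |   # the lookup key is every bit of the input:
                |   # a 3x3 Float16 matrix + a length-3 vector
                |   input_bits = (9 + 3) * 16   # 192 bits
                |   entries = 2 ** input_bits
                |   out_bytes = 3 * 2           # 3 Float16 out
                |   size = entries * out_bytes
                |   print(round(math.log2(size), 1))  # ~194.6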
        
               | jeffbee wrote:
               | Not a stupid question. Economically, memory density will
               | hit a brick wall soon. Developers should prefer to waste
               | time and save space, since parallel computation will not
               | hit a similar limit in the foreseeable future. Memory-to-
               | core ratio is going to be falling.
        
           | aidenn0 wrote:
           | Your CPU has dedicated circuitry for performing CRC32 and AES
           | rounds. That's as specific as any analogue computer...
        
           | kortex wrote:
           | If that "1 task" happens to be performing matrix
           | multiplication (or even merely fused multiply add), you can
           | do a heck of a lot with that. You still need digital
           | circuitry to support the IO, but the key idea is doing linear
           | algebra in a way that is faster and/or generates less heat
           | per unit compute.
        
       | magicalhippo wrote:
       | On a related note, the Huygens Optics channel on YouTube
       | published a video[1] earlier this year on the attempts to make
       | optical logic gates.
       | 
        | He didn't get as far as making working devices - it's a work in
        | progress - but it's interesting nonetheless, IMHO. He also has a
        | lot of other great videos about optics.
       | 
       | [1]: https://www.youtube.com/watch?v=pS1zAAD1nXI
        
       | vmception wrote:
        | There are some people who view these as a solution to Proof of
        | Work energy use.
        | 
        | I've been researching OPUs for this purpose - as in, the concept
        | of optical processors only came to my attention because of
        | people looking for an edge in mining and energy use.
        | 
        | From what I can tell, it could only be a stopgap or decade-long
        | solution to PoW, as people would just hoard these over time
        | until the energy use was the same.
        | 
        | But any slowdown is good.
        | 
        | Good for sellers of OPUs, though.
        
         | adgjlsfhk1 wrote:
         | Light based computers work best on linear operations. They are
         | extremely unlikely to work well for computing hashes.
        
           | vmception wrote:
            | Right, well, that's why there is a new hashing algorithm.
            | And they want other energy-heavy cryptocurrencies to switch
            | to it.
           | 
           | Basically there is a whole row going on in the far far
           | corners of crypto land, where some Cambridge and Columbia
           | university alumni have made a new hashing algorithm
           | (HeavyHash) that is heavy on linear operations specifically
           | so that it would theoretically have an advantage on light
           | based computers and be a sustainability solution [for now].
           | 
           | They stood up an example project back in May and forgot about
            | it ("optical bitcoin", oBTC), but people kept mining it -
            | just on GPUs and FPGAs, as the better low-power processors
            | don't exist yet, so there isn't really anything different at
            | the moment. Because these are students with no funding, the
           | management is very weak.
           | 
           | There is at least one fork of that project that has better
           | management ("photonic bitcoin", pBTC). But they are waiting
           | for OPUs to exist at all, as there are a variety of vaporware
           | companies out there with massive funding.
           | 
            | HeavyHash itself is being used in more and more newly
            | launched cryptocurrencies, which are hoping for OPUs to
            | become available to actually make their networks different
            | from others.
           | 
            | So far they all only aspire to be examples for the other
            | major energy-consuming cryptocurrencies we've heard of to
            | switch to. The university students submitted a proposal
            | (BIP) to Bitcoin Core, and that slow-moving network
            | typically needs real-world examples and fear of missing out
            | for consensus to shift.
        
         | simias wrote:
          | I don't see how that adds up. Unless production of these light
          | computers is extremely constrained, they are just going to be
          | produced en masse if they provide an edge for PoW mining, and
          | then you'll just have many more computers using the same
          | amount of energy. And if the supply is constrained in such a
          | way that only a tiny number of people have access to the
          | technology, it just means that you have a centralized mining
          | pool that can easily perform a 51% attack.
         | 
         | It's always a zero-sum game in the end. The only way to "fix"
         | PoW is to get rid of it.
        
           | vmception wrote:
            | Yeah, my post covered that, with much more brevity. Read it
            | again, slowly, specifically the third line.
        
             | vlovich123 wrote:
              | I think the contention is around whether it could even be
              | a 10-year stopgap. I too think it would be much shorter -
              | a couple of years at most.
        
               | vmception wrote:
                | Ah okay, so some of us are agreeing about the limited
                | utility.
               | 
               | I mostly think the production and distribution would be
               | constrained for the level of demand out there.
        
               | simias wrote:
               | Indeed, I should have made that clearer. 10 years is what
                | separates the iPhone 1 and the iPhone 8, or the
               | PlayStation and the PlayStation 3. If there's a
               | breakthrough in computing technology it'll be everywhere
               | within a couple of years IMO, and miners will be among
               | the first ones served because they're willing to pay
               | above market price and in bulk (see the GPU situation at
               | the moment).
        
           | danlugo92 wrote:
           | > it just means that you have a centralized mining pool that
           | can easily perform a 51% attack.
           | 
           | Miners do not ultimately decide what blocks get into the
           | blockchain or not, full nodes[0] do.
           | 
            | Here's a collection of 8 articles explaining who (if anyone)
           | controls Bitcoin and why miners are not who are in control:
           | 
           | https://endthefud.org/control
           | 
           | [0] ~$200 of hardware, and there are about 12k of them right
           | now.
        
             | simias wrote:
              | I don't understand the relevance of your articles. What
              | would nodes have to do with the issue of who creates the
              | blocks? Why would nodes reject valid blocks arbitrarily?
              | Why would it be an improvement?
              | 
              | I'm talking about a situation where a small group of people
              | would have access to exclusive technology that would let
              | them mine blocks faster than the rest of the miners, with
              | no way for others to compete without losing money. Nodes
              | are irrelevant here unless they decide to arbitrarily
              | reject valid blocks coming from certain miners because
              | they'd deem them "unfair competition", but that's a huge
              | can of worms. Who decides who goes on the list? Based on
              | what? Could it not be trivially worked around?
        
         | blihp wrote:
          | The whole point of PoW is that it requires a given amount of
          | effort at a given level of difficulty to maintain a given level
          | of production (i.e. it's a function of the capital cost of the
          | equipment and the operational cost of running it). All else
          | being equal, if the amount of effort to produce a given result
          | is reduced, it will result in an increased level of difficulty,
          | netting out to a similar level of energy use.
         | 
         | This is the reason that Bitcoin, for example, keeps ratcheting
         | up the difficulty: to counteract the increased performance of
         | CPUs, then GPUs, then FPGAs and finally ASICs over time. It's
         | an arms race that you can't 'win' for any extended period of
         | time since the difficulty is not a constant, but rather
         | determined by the desired level of production.
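          | 
          | As a minimal sketch of that retargeting idea (the
          | numbers follow Bitcoin's rule of thumb of re-aiming
          | for 10-minute blocks every 2016 blocks; the clamping
          | and exact integer math are simplified):
          | 
          |   def retarget(difficulty, actual_span_s):
          |       target_span_s = 2016 * 600   # 10 min blocks
          |       ratio = target_span_s / actual_span_s
          |       ratio = max(0.25, min(4.0, ratio))
          |       return difficulty * ratio
          | 
          |   # last window mined twice as fast -> difficulty
          |   # doubles, block production falls back to ~10 min
          |   print(retarget(1000.0, 2016 * 300))  # 2000.0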
        
       | mrobot wrote:
       | Some info on China's Jiuzhang 2
       | 
       | https://interestingengineering.com/chinas-new-quantum-comput...
        
       | rahimiali wrote:
        | The LightOn device accelerates random linear projections. Its
       | input is a vector x and its output is a vector W*x, where W is a
       | fixed random matrix. This is useful for a class of machine
       | learning algorithms, but not frequently used in deep learning.
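        | 
        | For a flavour of the "class of machine learning algorithms":
        | random-feature methods project inputs through a fixed random
        | matrix, apply a nonlinearity, and fit only a cheap linear
        | model on top. A minimal sketch (toy 1-D regression, nothing
        | LightOn-specific):
        | 
        |   import numpy as np
        | 
        |   rng = np.random.default_rng(0)
        |   x = np.linspace(-1, 1, 200).reshape(-1, 1)
        |   y = np.sin(3 * x).ravel()
        | 
        |   W = rng.normal(scale=3.0, size=(1, 100))  # fixed
        |   b = rng.uniform(0, 2 * np.pi, 100)
        |   feats = np.cos(x @ W + b)   # random features
        |   coef, *_ = np.linalg.lstsq(feats, y, rcond=None)
        |   print(np.abs(feats @ coef - y).max())  # small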
        
         | sudosysgen wrote:
         | Can you not decompose general matrix multiplication into the
         | sum of vector matrix multiplications?
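          | 
          | The decomposition itself is straightforward - column by
          | column, for example:
          | 
          |   import numpy as np
          | 
          |   rng = np.random.default_rng(0)
          |   A = rng.normal(size=(5, 4))
          |   B = rng.normal(size=(4, 3))
          |   C = np.column_stack(
          |       [A @ B[:, j] for j in range(B.shape[1])])
          |   print(np.allclose(C, A @ B))  # True
          | 
          | (Whether that helps here depends on the device, since, per
          | the comments above, the LightOn matrix W is fixed and
          | random rather than programmable.)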
        
           | [deleted]
        
       ___________________________________________________________________
       (page generated 2021-12-28 23:01 UTC)