[HN Gopher] Making a Website Under 1kB
___________________________________________________________________
Making a Website Under 1kB
Author : iloverss
Score : 152 points
Date : 2022-09-09 09:12 UTC (13 hours ago)
(HTM) web link (tdarb.org)
(TXT) w3m dump (tdarb.org)
| SahAssar wrote:
| I think not being able to see where a link is going is pretty
| bad. I would remove that "hack" even if it means the site is
| slightly over 1kb.
| jehna1 wrote:
| If 1kb websites interest you, check out https://js1k.com, which
| has awesome JavaScript demos within 1kb!
| SahAssar wrote:
| Also in the same vein: https://www.dwitter.net/ which is JS
| animations in under 140 characters.
| lifthrasiir wrote:
| Or its spiritual successor JS1024 [1].
|
| [1] https://js1024.fun/
| scns wrote:
| Wild.
|
| https://js1k.com/2013-spring/demo/1555
| thomasmg wrote:
| Or chess (including computer opponent):
| https://js1k.com/2010-first/demo/750
| yamtaddle wrote:
| OMG, finally a computer chess program I can beat!
|
| _five minutes later_
|
| Nope.
| culi wrote:
| Reminds me of the demoscene days
| xwdv wrote:
| More impressive is to build a website that can be sent in a
| single TCP packet. This means a site can be no more than 14kb
| compressed and should serve everything in just a single request.
| Would have to base64-encode images as well. This would probably
| end up as a meatier website but still blazing fast.
| jandrese wrote:
| I assume you mean 1.4kb? Squeezing graphics into that is
| possible but very difficult for anything larger than an icon.
| If the browser supports vector graphics of some flavor you
| could do a lot so long as the shapes aren't overly complex.
| xwdv wrote:
| Ah, 1.4kb for a single packet.
|
| But the initial round trip can be 14kb, i.e. 10 packets. Under
| that, the user will not need any more round trips.
| [deleted]
| kirbys-memeteam wrote:
| It's... just text on a page. Weird.
| ollybee wrote:
| One of the linked examples, zenofpython.org, reliably crashes
| Chrome on my phone, although not when rehosting the same content
| on my own server. Can anyone reproduce that?
| poleguy wrote:
| Confirmed on android. Instant crash.
| Orangeair wrote:
| I got that too. And on desktop it just never loads, pretty
| strange.
| turtlebits wrote:
| HTML is just text and it isn't inherently hard to stay under 1k.
| What is this other than a weird way to flex?
|
| The linked article is bigger than 1kb, and the <1kb site is just
| a list of links...
| username223 wrote:
| That's pretty hacky, but still clever.
|
| Out of curiosity, I recently wrote a little Elisp function to
| compute the "markup overhead" on a typical NY Times article, i.e.
| the number of characters in the main HTML page versus the number
| in a text rendering of it. It turns out that the page is 98.5%
| overhead. That doesn't even count the pointless images, ads, and
| tracking scripts that would also get pulled in by a normal
| browser. Including those, loading a simple 1000-word article
| probably incurs well over 99% overhead. Wow!
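|
| A rough way to reproduce that kind of measurement (not the
| original Elisp; a crude Node sketch where the tag stripping is
| only an approximation of a real text rendering):
|
|     // overhead.mjs - run with: node overhead.mjs <url>
|     // Compares raw HTML size against a naive text extraction.
|     const url = process.argv[2];
|     const html = await (await fetch(url)).text();
|     const text = html
|       .replace(/<script[\s\S]*?<\/script>/gi, " ")
|       .replace(/<style[\s\S]*?<\/style>/gi, " ")
|       .replace(/<[^>]*>/g, " ")   // drop remaining tags
|       .replace(/\s+/g, " ");
|     const overhead = 100 * (1 - text.length / html.length);
|     console.log(`markup overhead: ${overhead.toFixed(1)}%`);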
| anthonyhn wrote:
| Nice, always refreshing to see these small web page projects.
| Shameless plug, but I recently started a search engine [0] with
| the goal of generating search result pages that are only a few
| KB in size, backward compatible (HTML 4), and take only 1 HTTP
| request per page (no images, inlined CSS, base64-encoded
| favicon). It's surprising how big the page sizes are for the
| popular search engines; you would think these pages would be
| small, but a Google search result page can be over 1 MB in size
| and take over 70 requests.
|
| [0] https://simplesearch.org
| bArray wrote:
| Some comments here mentioned that these pages were essentially
| just text. I wanted to create something that would be useful,
| showcasing some basic HTML, CSS and JS.
|
| Here's an entry I hacked together:
| https://coffeespace.org.uk/colour.htm
|
| It comes in at 1015 bytes, converts HTML colours into their
| shortened form (e.g. #00F for blue), and displays the colour
| visually.
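|
| The conversion itself is tiny; a hedged sketch of the rule (not
| the actual code on the page): a 6-digit hex colour can be
| shortened only when each channel repeats its nibble.
|
|     // "#0000ff" -> "#00f"; anything else is returned unchanged.
|     function shorten(hex) {
|       const m = /^#([0-9a-f])\1([0-9a-f])\2([0-9a-f])\3$/i.exec(hex);
|       return m ? "#" + m[1] + m[2] + m[3] : hex;
|     }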
| timxor wrote:
| Can you please add me to the club?
|
| My site is 0.306 kB.
|
| http://superimac.com
| demindiro wrote:
| Staying under 1KiB is pretty hard when plain text alone easily
| takes more than 1KiB.
|
| Checking my own site with `find . -name '*.md' -exec wc -c {} + |
| sort -h` I find that 30 Markdown files are under 1KiB and 40 are
| over 1KiB. The largest file is an 18,266-byte post, which is
| still quite small compared to most other blogs AFAICT. It also
| excludes the possibility of including images with more than a
| dozen pixels.
| tekinosman wrote:
| I had a look at https://cv.tdarb.org/. You could reduce it even
| further.
|
| By:
|   - removing quotes around attribute values
|   - replacing href values with the most frequent characters
|   - sorting content alphabetically
|   - foregoing paragraph and line-break tags for a preformatted tag
|
| I was able to bring it from 730 bytes (330 with compression
| enabled) down to 650 bytes (313 bytes after compressing with
| Brotli). Rewording the text might get you even more savings. Of
| course I wouldn't use this in production.
|
| Here it is: https://jsbin.com/cefuliqadi/edit?html,output
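|
| A rough sketch of the quote-removal and <pre> tricks (made-up
| markup, not the actual cv.tdarb.org content):
|
|     <!-- before: quoted attributes, one <p> per line -->
|     <p><a href="mail.html">Email</a></p>
|     <p><a href="work.html">Work</a></p>
|
|     <!-- after: unquoted attributes, newlines inside one <pre> -->
|     <pre><a href=mail.html>Email</a>
|     <a href=work.html>Work</a></pre>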
| skyfaller wrote:
| Why not eliminate quotes in production, if you know the value
| doesn't need quotes? That's still valid, it's optional HTML:
| https://meiert.com/en/blog/optional-html/
|
| Sorting content alphabetically and that sort of thing to
| improve compression may be silly code golfing and impractical
| for page content, but on the other hand I don't see that it
| costs you anything (aside from time experimenting with it) when
| applied to the <head> / metadata.
| https://www.ctrl.blog/entry/html-meta-order-compression.html
|
| I think that both of these methods could be used in production,
| and I intend to do so when possible.
| eska wrote:
| I tried that too but decided that saving a few bytes is not
| worth the parser restarting, so I adhered to strict XHTML for
| fast page load times.
| tekinosman wrote:
| Yes, I don't think it's worth it other than as an exercise in
| byteshedding.
| chrismorgan wrote:
| Not worth the... what? I'm not sure what you're talking about
| or thinking of, but I think you're wrong. Parser restarting
| is purely when _speculative_ parsing _fails_, and there's
| nothing here that can trigger speculative parsing, or
| failures in it.
|
| If you're using the HTML parser (e.g. served with content-type
| text/html), activities like including the html/head/body
| start and end tags and quoting attribute values will have a
| negligible effect. It takes you down slightly different
| branches in the state machines, but there's very little to
| distinguish between them, one way or the other. For example,
| consider quoting or not quoting attribute values: start at
| https://html.spec.whatwg.org/multipage/parsing.html#before-a...
| and see that the difference is very slight; depending on
| how it's implemented, double-quoted _may_ have simpler
| branching than unquoted, or may be identical; and if it
| happens to be identical, then omitting the quotes will
| probably be faster because there are two fewer characters
| being lugged around. But I would be mildly surprised if even
| a synthetic benchmark could distinguish a difference on
| browsers' parser implementations. Doing things the XHTML way
| will not speed your document parse up.
|
| As for the difference achieved by using the XML parser (serve
| with content-type application/xhtml+xml), I haven't seen any
| benchmarks and don't care to speculate about which would be
| faster.
| luzifer42 wrote:
| AWS also likes to play this game:
| https://docs.aws.amazon.com/elasticloadbalancing/latest/APIR... :
| " MessageBody: ... Maximum length of 1024."
|
| I have implemented some fixed error pages for my company,
| including its logo as an SVG, all below 1KiB.
| xani_ wrote:
| > Building a website that actually serves useful content while
| squeezing its page size under 1,024 bytes is no easy feat.
|
| Narrator: It was an extremely easy feat.
|
| Just make a spec-invalid webpage and skip all the heads, bodies,
| htmls and the rest of it.
| hawski wrote:
| It is a valid page. Check it out with the W3C validator. It only
| gets a warning for a missing lang attribute on the html tag.
| tobyhinloopen wrote:
| Omitting many kinds of tags is perfectly valid in HTML5. Most
| of my websites feature no explicit body or head tags. The html
| open tag is "required" because of the lang attribute.
|
| You also don't need to quote attribute values, and you don't
| need closing tags for many HTML elements.
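|
| For example, something along these lines is still a complete,
| valid HTML5 document (illustrative sketch only):
|
|     <!doctype html>
|     <html lang=en>
|     <meta charset=utf-8>
|     <title>tiny</title>
|     <p>Hello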
| samatman wrote:
| 1kB is charming, the site shows the limitation: it's not even
| very much text, there's limited room in the medium to leave one's
| mark.
|
| Not a thing wrong with haiku, but for a whole art movement, we'll
| get more mileage out of a one-trip website.
|
| Ignoring a bunch of caveats I won't get into, a normal TCP packet
| is no larger than 15kB, for easy transit across Ethernet: the
| header is 40 bytes, leaving 1460 for data. Allowing for a
| reasonable HTTP response header, we're in the 12-13kB range for
| the file itself.
|
| That's enough to get a real point across, do complex/fun stuff
| with CSS, SVG, and Javascript, and it isn't arbitrary: in
| principle, at least, the whole website fits in a single TCP
| packet.
| tomxor wrote:
| You forgot the HTTP header... which is variable in size, but
| unfortunately usually quite large.
|
| They can easily be in the 500 to 1000 byte range, taking up
| most of the first TCP packet, e.g. this HN page has a 741-byte
| header. I suppose if you control the web server you could
| feasibly trim this down to the bare minimum for a simple static
| page - not sure what that would be.
| sjsdaiuasgdia wrote:
| Think you got your math off there. MTU of 1500 bytes = 1.5kB,
| not 15kB.
| diocles wrote:
| I believe the parent comment was thinking of 1 round trip
| time.
|
| Typically the TCP initial congestion window size is set to 10
| packets (RFC 6928), hence ten packets can be sent by the
| server before waiting for an ACK from the client.
|
| So a website under 15kB or so (minus TLS certs and the like)
| loads with the minimum latency possible given the other network
| factors.
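|
| (Worked out: 10 segments x ~1460 bytes of MSS = ~14,600 bytes,
| which is where the usual "keep it under ~14 kB" rule of thumb
| comes from.)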
| remram wrote:
| I think they might be referring to the initial TCP "window
| size", which is how much data can be sent before the
| recipient acknowledges it (in multiple packets but without
| round trips).
| Karrot_Kream wrote:
| Ethernet MTU is 1500 bytes (not 15000 bytes or 15 Kbyte),
| assuming non-jumbo frames. TCP MSS tends to be 1460 bytes. But
| then, opening a TCP connection requires 3 packets anyway (so
| 4380 bytes). TLS connections usually take another 2-3 packets
| (depending on your TLS version and parameters), so the total
| payload just to establish the connection is 5840-7300 bytes over
| the wire. If we take the handshake as 50% overhead (which is
| quite sad, mind you), then we can transmit a 7300-byte (7.12 KB)
| website, which can definitely make a pretty decent website.
|
| Optimizing for 1 kB can be a fun creative exercise, but I think
| it's practically a bit meaningless. It's better to target
| something like a 28.8 Kbps connection and try to get the page
| to load under a second (including connection handshake < 20.1
| KB), which is more than enough to have a rich web experience.
| jaimehrubiks wrote:
| The last website in the club crashes my mobile Chrome.
| Kiro wrote:
| Same. I wonder what causes it.
| oynqr wrote:
| Works for me. JIT disabled.
| palijer wrote:
| Same! Interestingly, it only crashes Chrome on Android, but
| Firefox handles it fine.
|
| https://zenofpython.org/
| numlock86 wrote:
| By today's standards I consider anything below 500kB and with
| fewer than 6 server requests already pretty minimal, yet those
| are still numbers that can achieve a lot and still look so
| "modern"/"normal" that you'd have to look into the dev console
| to really appreciate the effort that has been put into these.
| Whenever I see websites that use 5MB PNGs for photos and make
| over 30 requests spread across multiple seconds, I just question
| the general state of web development these days.
|
| I recently stumbled upon a site that had it all: webp/avif
| everywhere, minified CSS, even ditching unused classes from the
| frameworks it used, CSS data:-embedded and subsetted fonts (I
| think it even used a recent version of FontAwesome 6, but the
| WOFF2 was still only 2kB in size because they only used about
| two dozen logos), only one request each for CSS and JavaScript
| (everything concatenated and with nice cache policies), and the
| site was still usable/viewable even without either of them if
| you wanted to. Everything was even automated in their deployment
| pipeline. It only came to my attention because they wrote an
| article about it. I can't find it in my history, but things like
| that stick in your head for a while.
| iLoveOncall wrote:
| > By today's standards I consider anything below 500kB and with
| fewer than 6 server requests already pretty minimal, yet those
| are still numbers that can achieve a lot and still look so
| "modern"/"normal" that you'd have to look into the dev console
| to really appreciate the effort that has been put into these
|
| 500kB honestly doesn't require much effort at all. My WordPress
| blog with a popular theme and a few plugins has almost every
| article below 500kB, despite looking like any modern blog and
| having at least 1 image per post.
|
| Actually, if I remove the Facebook share button, they drop to
| almost 250kB. Time to remove it I guess.
| nlitened wrote:
| Honest question--do actual real people use Facebook share
| buttons? Have they ever?
|
| To me it looks like it has always been a fairytale made up by
| Facebook to spread their analytics scripts all over the internet.
| Wistar wrote:
| Coincidentally, I used the fb share button this morning but
| it is a very rare occurrence.
| iLoveOncall wrote:
| I don't know, my blog is small, but I don't think so.
| Especially since it's a blog about programming I'm sure
| it's not the kind of stuff you'll share on Facebook...
|
| I removed it today after seeing the impact on page size.
|
| I guess it works for some niches like news, online quizzes,
| etc.
| hammyhavoc wrote:
| Sure they do, but do a meaningful number of people use it
| for niche geek blogs? No. Major news sites? Sure.
| jbreckmckye wrote:
| For small websites I wrote an observables microlibrary in one
| afternoon. It's called Trkl, and minified & gzipped the code is
| about 400 bytes: https://github.com/jbreckmckye/trkl
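|
| For a sense of why the core can be so small, here is a hedged
| sketch of a minimal observable (not Trkl's actual API - see the
| repo for that):
|
|     // An observable is just a getter/setter with subscribers.
|     function observable(value) {
|       const subs = [];
|       function accessor(next) {
|         if (arguments.length === 0) return value;  // read
|         value = next;                              // write
|         subs.forEach(fn => fn(value));             // notify
|         return value;
|       }
|       accessor.subscribe = fn => subs.push(fn);
|       return accessor;
|     }
|
|     const name = observable("world");
|     name.subscribe(v => console.log("hello " + v));
|     name("HN");  // logs "hello HN"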
| cosmodisk wrote:
| From time to time I visit websites that my i7 16GB Dell XPS
| struggles to process. Bloody hell, 20 years ago that kind of
| power would have powered a small supercomputer, and now it
| can't run a news website.
| jrockway wrote:
| "i7" isn't a particularly useful measure of computer
| performance. It can refer to over 12 years' worth of
| processors, including the ultra-low-power version of the
| first model, a 1.07GHz processor. That thing probably has
| trouble loading Emacs.
| bbarnett wrote:
| _That thing probably has trouble loading Emacs._
|
| And with those historic words, the vi vs emacs debate was
| finally won.
| netr0ute wrote:
| > That thing probably has trouble loading Emacs.
|
| I don't know about that, because my RISC-V SBC with a
| 1-core in-order 1GHz CPU and no GPU can load Emacs GUI just
| fine.
| giantrobot wrote:
| I love the "ackshually..." replies. The point of the exercise is
| to get a marginally useful web page to fit in 1kB. A landing page
| that fits into 1kB is pretty impressive. You could fit even
| more, e.g. some bare-minimum CSS, into 1kB with compression.
|
| The whole point of the 1MB club and other such efforts is to
| show what can be done without the equivalent of multiple copies
| of Doom[0] worth of JavaScript to display what is just a static
| landing page.
|
| There are completely legitimate uses for "web apps", things that
| are actual useful applications that happen to be built in a
| browser. No one is saying that web apps aren't a totally valid
| means of delivering software.
|
| The problem is every website being written using the same
| frameworks Facebook and Google use for their web apps, to build
| sites that could easily just be some static HTML[1]. We have
| devices in our pockets that would have been considered
| supercomputers a few decades ago. I have more RAM on my phone
| than I had hard drive storage on my first PC. My cellular
| connection is orders of magnitude faster, with orders of
| magnitude better latency, than my first dial-up Internet
| connection.
|
| Despite those things, the average web page loads slow as shit on
| my phone or laptop. They're making dozens to hundreds of
| requests, loading unnecessarily huge images, tons of JavaScript,
| autoplay videos, and troves of advertising scripts that only
| waste power and bandwidth. I find the web almost unusable without
| an ad blocker and even then I still find it ridiculous how poorly
| most sites perform.
|
| I _love_ waiting at a blank page while it tries to load a
| pointless custom font or refuses to draw if there's heavy load
| on some third-party API server. I also absolutely adore trying
| to use some bloated page when I'm on shitty 4G in between some
| buildings or on the outskirts of town.
|
| It would be nice if I _didn't_ need the latest and greatest
| phone or laptop to browse the web. It would be nice if web pages
| rediscovered _progressive enhancement_. Add JavaScript to
| improve some default useful version of a page. You don't need
| to load some HTML skeleton that only loads a bunch of JavaScript
| just to load a JSON document that contains the actual content of
| a page.
|
| [0] https://www.wired.com/2016/04/average-webpage-now-size-origi...
|
| [1] https://idlewords.com/talks/website_obesity.htm
| quickthrower2 wrote:
| 1kb for the entire site would be impressive! You'd probably just
| do a single page of ASCII text written in shorthand for that.
|
| They actually mean 1kb per page, which is pretty slick and decent
| even on dialup.
| richrichardsson wrote:
| This is confusing me about the 1MB club. On the site under
| "submit" it says:
|
|     The two rules for a web page to qualify as a member:
|     - Total website size (not just transferred data) must not
|       exceed 1 megabyte
|     - The website must contain a reasonable amount of content /
|       usefulness in order to be added - no sites with a simple
|       line of text, etc.
|
| The github repo just says:
|
|     An exclusive members-only club for web pages weighing less
|     than 1 megabyte
|
| So which is it? Are sites with multiple pages under 1MB (but
| then the total for all pages exceeds 1MB) allowed, or must the
| entire site weigh in less than 1MB?
| xani_ wrote:
| It seems really silly, as this is an extremely low bar to
| clear; you can get there by accident if you "just" use plain
| HTML/CSS.
|
| Then again, in the age of JS frameworks, maybe it is an
| achievement for a new developer who was gaslit into thinking
| 500MB of deps to make a simple site is normal.
| 1776smithadam wrote:
| Why not shave a few more bytes by removing the double quotes
| around single-token attribute values?
|
| Before: <link rel="icon" href="data:,">
|
| After: <link rel=icon href="data:,">
| alexalx666 wrote:
| On what kind of connection would you feel the difference
| between 1kb and 1Mb? One of my projects for 2022 is a proxy
| that strips any website down to 1Mb in one of several standard
| layouts.
| dna_polymerase wrote:
| You don't need the quotes in many cases.
|
| Instead of:
|
| <link rel="icon" href="data:,">
|
| Try
|
| <link rel=icon href="data:,">
| thegeekpirate wrote:
| <link href=data: rel=icon> will work just fine ;)
|
| Another fun trick is using <!doctypehtml> since the spec says
| to pretend a space is there if not present for whatever reason
| (https://html.spec.whatwg.org/multipage/parsing.html#parse-er...)
| tomcam wrote:
| Well damn. That is an oddity.
| skyfaller wrote:
| Cute, but I just ran both your suggestions through the HTML
| validator at https://validator.w3.org/nu/ and neither of them
| validated.
|
| The first error read "Bad value data: for attribute href on
| element link: Premature end of URI."
|
| The 2nd error read "Missing space before doctype name."
|
| Depending on the context these hacks may still be useful, but
| I personally think that both production sites and code
| golfing should require valid HTML.
| PuffinBlue wrote:
| I managed under 1.5KB for a page that has an actual function, not
| just a test:
|
| http://www.captiveportal.co.uk
|
| And yes, that's supposed to be a non-https link.
|
| I think the entire site, including favicon, might be under 5KB.
| You can check here:
|
| https://github.com/josharcheruk/CaptivePortal
| cocoflunchy wrote:
| Well, the function is not in the content so your website could
| even be completely empty. I would say https://cv.tdarb.org/ has
| more function.
| PuffinBlue wrote:
| Fair critique.
| cafeinux wrote:
| Interestingly, I surprised myself browsing the website you
| posted and others, clicking on every link, reading each new
| page before going back to finish reading the page I was
| coming from, jumping from link to link just as I remember
| doing 20 years ago. That is something I don't do anymore.
| Sure, I sometimes click on some links when I'm reading
| something, but I usually do it with a middle click (opens the
| link in a new tab) and continue reading the first article
| before closing it and looking at the new tab. And by that
| point I have usually lost interest in the content I opened a
| few minutes before and just close the tab without reading. I
| was wondering why I usually do this and why I didn't this
| time, and I realise that the reasons I open links in new tabs
| and don't consume them directly are:
|   - opening a link in a second tab gives it enough time to
|     load completely, since everything is so bloated;
|   - a website messing with your tab history or redirecting
|     you 6 times before allowing you to get the content you
|     were waiting for means that going back to my previous
|     article will be a pain in the ass, and I'd rather juggle
|     two tabs than quintuple-click that back button just to
|     find my previous article.
|
| Anyway, that didn't happen here, because I subconsciously
| knew that every link would load before I could even think of
| it, and that none would make going back a step a pain in the
| ass, and that was refreshing and maybe even made me
| nostalgic. But more than anything else, it allowed me to read
| with more focus than I remember having in the last few years.
| So yeah, I love that "bare-bones" design.
|
| (PS: I also realise my comment is so long it could have been
| its own blog post. Maybe I should start one...)
| nicbou wrote:
| example.com also works for that
| Aissen wrote:
| Is it really needed when browsers (and Linux distros!) have
| their own captive portal detection pages? (e.g.
| http://detectportal.firefox.com/ or
| http://connectivity-check.ubuntu.com)
| wongarsu wrote:
| On desktop the automatic detection is pretty reliable. On
| mobile I find myself opening http://example.com with some
| regularity (the most common edge case is when a network with a
| captive portal is configured to auto-connect and I don't have a
| browser open).
| pxx wrote:
| Were you aware of http://neverssl.com before making this?
| Though I guess your page is slightly smaller (687 bytes vs 1900
| bytes compressed). http://captive.apple.com is even smaller
| though.
| PuffinBlue wrote:
| I didn't know about neverssl.com. I think I would have still
| made captiveportal.co.uk even if I did because it was fun :-)
| [deleted]
| PostOnce wrote:
| https://www.captiveportal.co.uk works and doesn't redirect to a
| non-https link. At this point it's hard for normal users to go
| to an http:// link without their browser overriding them.
|
| neverssl.com solves this by redirecting to a random subdomain
| (for some reason that isn't clear to me near midnight).
|
| A .co.uk equivalent is a great idea though, if it can be made
| accessible to users with hostile browsers.
| oynqr wrote:
| Most browsers still don't have forced HTTPS enabled by default,
| so without HSTS there is nothing preventing plain HTTP.
| sitzkrieg wrote:
| Both Firefox and Chrome do for me, trying HTTPS first even
| when I type http://, particularly if HTTPS was ever used for a
| domain in the past, which drives me nuts for local network
| domains. The only way I can make them stop is to clear history.
| PuffinBlue wrote:
| I'll add it to the to-do list. The current hosting won't be
| around after next year, so it's getting moved in about 6
| months to somewhere I have more control and can probably
| implement the changes needed.
| avhon1 wrote:
| The purpose of the random subdomain is to ensure that the
| browser doesn't just show a cached version of the page.
| ehPReth wrote:
| SSL/TLS/https (the padlock symbol) prevents this.
| This site will never use those technologies.
|
| yet it has https? strange haha
| PuffinBlue wrote:
| The site is hosted on Fastmail - I think they must have
| enabled https, or perhaps I missed that it was active when I
| put the site there.
| tobyhinloopen wrote:
| You can remove the quotes around attribute values, they're
| optional
| tomxor wrote:
| Shameless plug
|
| I made an entire organ synthesizer in under 1024 bytes of html/js
| (use your keyboard):
|
| https://js1024.fun/demos/2022/23
|
| source / instructions / background:
|
| https://github.com/ThomasBrierley/js1024-mini-b3-organ-synth
|
| The JavaScript was significantly reduced in size by using
| regpack, which is a regex-based dictionary compression algorithm
| targeting very small self-decompressing JavaScript. Writing for
| regpack takes some thinking because you have to make the code as
| naturally self-similar as possible, which often means
| consciously avoiding more immediate space-saving hacks in order
| to make longer sequences of characters identical. E.g. you would
| normally avoid duplicating an expression - unless it's very
| short and only used a couple of times, you'd store it in a
| variable - but with regpack the character cost of defining the
| variable is often larger than simply duplicating the expression
| verbatim (even if it's used a lot).
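|
| A contrived sketch of that trade-off (hypothetical code, not
| from the entry): the factored version is shorter before
| compression, but the repetitive one hands the packer a long
| repeated substring to turn into a single dictionary reference.
|
|     let t = 1;  // placeholder "time" value, just for illustration
|
|     // conventional golfing: factor the repeated expression out
|     let s = Math.sin(t * 440), a = s * 0.5, b = s * 0.25;
|
|     // regpack-friendly: repeat the expression verbatim, so the
|     // substring "Math.sin(t * 440)" recurs and compresses well
|     let c = Math.sin(t * 440) * 0.5, d = Math.sin(t * 440) * 0.25;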
|
| This might be worse than the LZ-style compression used in HTTP -
| I haven't tested - but this was written for a code-golfing
| competition where source size counts, not transmitted size...
| Then again, conceptually this should lend itself to any
| dictionary-based compression, so zipping the original source
| (minus the whitespace) may also work out very well.
|
| Here's a web interface that includes terser and regpack:
| https://xem.github.io/terser-online/
|
| Note that it's usually best to hand-minify for regpack; terser
| is only convenient for removing whitespace, and advanced
| automated JS compression libraries usually cause the size to
| inflate when regpack is the final stage.
|
| [edit]
|
| zipping the source comes in at 952 bytes - so it _appears_ this
| technique applies when targeting dictionary compression in
| general, including the commonly used compression in HTTP.
| bheadmaster wrote:
| I just wanted to say this is really impressive :)
|
| The latency is almost zero. I've always thought of "browser
| applications" as these heavyweight mammoths where each
| keystroke needs at least a couple hundred milliseconds to
| process.
|
| Good to know these kinds of things are still possible. Browser
| synthesizers have become my new point of interest :)
| tomxor wrote:
| Thanks :) I was pleasantly surprised with the latency too;
| admittedly the browser's Web Audio engine is doing all the
| heavy lifting. I noticed that it seems to sacrifice audio
| quality for speed if you push it too far. E.g. my struggle was
| keeping the number of simultaneous oscillators low enough: if
| you ask it to simulate too many, clipping starts happening all
| over the place, since there is going to be some limit based on
| the CPU... I don't have much experience with audio programming,
| but for this reason I expect additive synthesis is probably not
| efficient enough for anything that isn't as simple as an organ.
| timxor wrote:
| Nice username!
| tomxor wrote:
| haha, snap.
| squarefoot wrote:
| Just wow! And you implemented working drawbars too, really
| impressive!
| jeromenerf wrote:
| Anticlimactically boils down to 100 words on a page.
| cuu508 wrote:
| Let's go right to the 100-byte club. Here's a Sierpinski
| triangle in 32 bytes:
| https://www.pouet.net/topic.php?which=12091&page=1#c568712
|
| Edit: another version, 59 bytes:
| https://twitter.com/jonsneyers/status/1375828696846721031
___________________________________________________________________
(page generated 2022-09-09 23:01 UTC)