[HN Gopher] An interactive guide to the Fourier transform (2012)
       ___________________________________________________________________
        
       An interactive guide to the Fourier transform (2012)
        
       Author : uticus
       Score  : 167 points
       Date   : 2023-12-15 10:29 UTC (12 hours ago)
        
 (HTM) web link (betterexplained.com)
 (TXT) w3m dump (betterexplained.com)
        
       | alberto_ol wrote:
       | previous submission with 79 comments
       | 
       | https://news.ycombinator.com/item?id=27229836
        
         | dang wrote:
         | Thanks! Macroexpanded:
         | 
         |  _An Interactive Guide to the Fourier Transform (2012)_ -
         | https://news.ycombinator.com/item?id=27229836 - May 2021 (79
         | comments)
         | 
         |  _An Interactive Guide to the Fourier Transform_ -
         | https://news.ycombinator.com/item?id=10635075 - Nov 2015 (18
         | comments)
         | 
         |  _An Interactive Guide To The Fourier Transform_ -
         | https://news.ycombinator.com/item?id=4948082 - Dec 2012 (25
         | comments)
        
       | moritonal wrote:
       | Reading this I realise Bartosz Ciechanowski's work has ruined my
       | definition of "interactive" for blogs.
        
         | nimishk wrote:
         | Exactly what I was thinking!
        
       | Isamu wrote:
       | Adding epicycles to the circular orbits - an example before
       | Fourier!
       | 
       | https://en.m.wikipedia.org/wiki/Deferent_and_epicycle
        
       | earlymodernlvr wrote:
       | Request for someone to make an intuitive explanation of why the
        | Fourier Transform is (almost) its own inverse. I know the math
       | proof from taking analysis, but the formula is too pretty and
       | symmetrical for the explanation to be so technical.
       | 
       | Same for why it preserves L2 norm.
        
         | duped wrote:
         | Same answer for both. It's an orthogonal transform, aka a
         | change of basis. You're conceptually rotating the
         | function/series to an equivalent one that's orthogonal to the
         | original. The magnitude/energy hasn't changed (Parseval's
         | theorem is a more succinct definition). And to perform the
         | inverse transform you need to conceptually rotate it back to
         | the original, which should mirror the original transform very
         | nicely.
         | 
          | If it _didn't_ preserve energy then there wouldn't necessarily
         | be an inverse transform, since that implies information was
         | lost.
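          |
          | A quick numerical check of the energy claim (a minimal
          | numpy sketch; with numpy's unnormalized FFT convention the
          | factor of N shows up explicitly):
          |
          |     import numpy as np
          |
          |     x = np.random.randn(256)    # arbitrary real signal
          |     X = np.fft.fft(x)           # its DFT
          |
          |     # Parseval: sum |x|^2 == (1/N) * sum |X|^2
          |     lhs = np.sum(np.abs(x) ** 2)
          |     rhs = np.sum(np.abs(X) ** 2) / len(x)
          |     print(np.allclose(lhs, rhs))   # True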
        
           | earlymodernlvr wrote:
           | I really appreciate this reply, since this is something I've
           | always been curious about. If you have time, I would really
           | appreciate it if you could elaborate on this point (maybe
           | with some equations), but you've already given me a ton to
           | think about, thank you!
           | 
           | Also, is the new basis orthogonal to the original, or just
           | another orthonormal basis? I don't see why it would be
           | orthogonal to the original.
        
         | AC_8675309 wrote:
         | It is a least squares fit of sine waves. (A^T)A = I because
         | sines are orthogonal.
         | 
         | Use Euler's eqn to convert e^jw into sin + cos and just work
         | through the algebra.
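          |
          | A rough numpy check of that orthogonality (a sketch; for
          | the complex exponentials it's the conjugate transpose, and
          | with 1/sqrt(N) normalization the product is exactly the
          | identity):
          |
          |     import numpy as np
          |
          |     N = 8
          |     n = np.arange(N)
          |     # rows are exp(-2*pi*i*k*n/N) / sqrt(N)
          |     A = np.exp(-2j * np.pi * np.outer(n, n) / N)
          |     A = A / np.sqrt(N)
          |     print(np.allclose(A.conj().T @ A, np.eye(N)))   # True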
        
       | chasil wrote:
       | It would also be nice to have an understanding of its relation to
       | the Laplace transform, something more than saying that the real
       | component goes to zero.
        
         | segfaultbuserr wrote:
          | +1. There are already plenty of pop-sci explanations of the
          | frequency domain and the Fourier transform, but does anyone
          | know if there's a similar physical interpretation or
          | visualization of the Laplace transform and the S domain?
        
         | sheepshear wrote:
         | Maybe this? https://youtu.be/iP4fckfDNK8
         | 
         | It shows how the FT is a 2D slice of the 3D LT in the s-domain.
        
       | sorenjan wrote:
       | The short explanation of the DFT that I like the most is that it
        | projects the signal vector onto the vector room with the
        | exponential functions as basis vectors. Then you can see how
        | much of the signal each basis vector, corresponding to a
        | frequency, explains.
       | 
       | It's intuitive to see if you start with a 2D vector space (the
        | regular Euclidean plane) and a 2D vector, and then you can expand
       | the definition of vector spaces to include orthogonal functions
       | as bases.
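        |
        | A quick sketch of that picture (assuming numpy): each DFT bin
        | is just the inner product of the signal with one basis vector.
        |
        |     import numpy as np
        |
        |     N = 64
        |     x = np.random.randn(N)
        |     n = np.arange(N)
        |     k = 5                                  # one frequency bin
        |     e_k = np.exp(-2j * np.pi * k * n / N)  # its basis vector
        |     print(np.allclose(np.dot(e_k, x), np.fft.fft(x)[k]))
        |     # True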
        
         | sampo wrote:
         | > vector room
         | 
          | In Swedish, "rum" means both a space and a room. In English,
         | "vector space" is used.
        
         | amelius wrote:
         | Minor point, it's not a projection because you don't lose
         | dimensions.
        
           | nh23423fefe wrote:
           | nit: the identity matrix is a projection
           | 
            | P^2 = P is the definition, not "doesn't lose dimensions".
        
             | amelius wrote:
             | True, but missing the point as you would not normally refer
             | to the identity matrix as a projection.
        
             | mrfox321 wrote:
             | Nit to your nit, which is incorrect w.r.t. parent reply:
             | 
             | A Fourier transform is not a projection, it's a change of
             | basis represented by a unitary transformation.
        
               | ndriscoll wrote:
               | The w Fourier coefficient F(w) is the dot product of f
               | with an exponential function, `e_w * f`, and is in that
               | sense a projection. The inverse Fourier transform writes
               | the original function as a sum of the projected
               | components: `f = sum_w (e_w * f) e_w = sum_w F(w) e_w`.
               | This is exactly how writing an "arrow" style 2- or 3-D
               | vector as a sum of orthogonal projections works.
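                |
                | A tiny numpy sketch of that sum-of-projections
                | picture (using numpy's DFT conventions):
                |
                |     import numpy as np
                |
                |     N = 32
                |     f = np.random.randn(N)
                |     n = np.arange(N)
                |     # column w of e is the vector e_w
                |     e = np.exp(2j * np.pi * np.outer(n, n) / N)
                |     # F[w] = sum_n f[n] exp(-2j pi w n / N)
                |     F = np.fft.fft(f)
                |     # f = (1/N) sum_w F[w] e_w
                |     f_rebuilt = (e @ F) / N
                |     print(np.allclose(f, f_rebuilt))  # True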
        
           | anvuong wrote:
            | Is that the math terminology? Because the way I think
            | about it, it's still a projection, just with 0 residuals.
        
       | kkapelon wrote:
        | It is super easy to find resources on HOW the FT works. But I
        | find it very difficult to find resources on WHY we need it and
        | WHERE it is useful.
        | 
        | Does anybody have some good sources that explain the practical
        | applications and how it is useful in real-world usage?
        
         | CaptainFever wrote:
         | From TFA:
         | 
         | Stop. Here's where most tutorials excitedly throw engineering
         | applications at your face. Don't get scared; think of the
         | examples as "Wow, we're finally seeing the source code (DNA)
         | behind previously confusing ideas".
         | 
         | If earthquake vibrations can be separated into "ingredients"
         | (vibrations of different speeds & amplitudes), buildings can be
         | designed to avoid interacting with the strongest ones.
         | 
         | If sound waves can be separated into ingredients (bass and
         | treble frequencies), we can boost the parts we care about, and
         | hide the ones we don't. The crackle of random noise can be
         | removed. Maybe similar "sound recipes" can be compared (music
         | recognition services compare recipes, not the raw audio clips).
         | 
         | If computer data can be represented with oscillating patterns,
         | perhaps the least-important ones can be ignored. This "lossy
          | compression" can drastically shrink file sizes (and is why
          | JPEG and MP3 files are much smaller than raw .bmp or .wav
          | files).
         | 
         | If a radio wave is our signal, we can use filters to listen to
         | a particular channel. In the smoothie world, imagine each
         | person paid attention to a different ingredient: Adam looks for
         | apples, Bob looks for bananas, and Charlie gets cauliflower
         | (sorry bud).
         | 
         | The Fourier Transform is useful in engineering, sure, but it's
         | a metaphor about finding the root causes behind an observed
         | effect.
        
           | kkapelon wrote:
           | Yes I saw that. But it still hides the "meaty" part. Where
           | are the articles that explain those processes?
           | 
           | For example I would love to find an article that starts with
           | "let's make a wav file smaller". And then somewhere in the
           | middle it just says "and here we will use FT to achieve X".
        
             | rescbr wrote:
             | It's not an article, but I found The Scientist and
             | Engineer's Guide to Digital Signal Processing book to be
             | very comprehensive. There are a couple of chapters on
             | applications, but not much code.
             | 
             | http://www.dspguide.com/pdfbook.htm
        
             | AlotOfReading wrote:
             | Here's a pretty good lecture on exactly that subject:
             | https://www.rose-hulman.edu/~bryan/invprobs/jpegtalk2.pdf
             | 
             | Honestly though, the Fourier transform is useful anywhere
             | that it's easy to think in terms of frequency instead of
             | time. 90% of use cases in the real world come about from
              | trying to do something that's easy to express in terms
             | terms of signal frequency and difficult to express in terms
             | of signal time/space. You can just use the Fourier
             | transform to convert between them efficiently.
             | 
             | To give a morbid, but non-obvious example, I once did some
             | work with a charity that was concerned with migrant deaths.
             | They wanted to figure out where the migrants most at risk
             | were working to approach the farmers hiring them directly.
             | We got a dataset of when migrant bodies were found and did
             | an FFT (among other processing) to find the periodicity of
             | the crops that they were coming to harvest. There's only a
             | few major crops and they tend to have distinct growing
             | periods, so this is enough information to pinpoint certain
             | crops like strawberries.
        
         | sizzzzlerz wrote:
         | Your cell phone wouldn't exist without the Fourier Transform,
         | or the discrete fourier transform, to be correct. Image
        | compression is another application, albeit a 2-dimensional
         | version. Software defined radios, or SDR, are completely
         | dependent on the DFT. Radar processing. Earthquake analysis.
         | The list goes on and on. Basically, our technological society
         | would not exist in its current form without the fourier
         | transform. To me, it is one of the key mathematical algorithms
         | of the 20th century.
        
           | kkapelon wrote:
           | > Your cell phone wouldn't exist without the Fourier
           | Transform, or the discrete fourier transform, to be correct
           | 
           | Yes that is great to know. But where can I find an article
           | that explains how exactly FT helps in my cell phone? What
           | exactly do we do with FT in a cell phone?
        
             | rjeli wrote:
             | For example, to create a JPEG, part of the process is
             | removing the "high frequency" parts of your image, since
             | those take the most information to store. Here, high
             | frequency refers to noise, or any large difference between
             | neighboring pixels, as opposed to low frequency parts,
             | which are averages over larger groups of pixels. So, at the
             | extreme, if you average the whole image, you only have to
             | store one pixel, so it's obviously less data to store, vs
             | storing info about every single pixel. JPEG (and other
             | lossy compression formats) tries to find a good middle
             | ground between storing every pixel perfectly and storing
             | just one pixel.
             | 
             | So, how do you remove the high frequency parts? You apply a
             | Fourier transform to the image (in this case, it's a
             | "Discrete Cosine Transform", which is extremely similar to
             | a DFT and has no differences for the purpose of this
             | explanation) and get a 2d array. This 2d array has the low
             | frequency parts in the upper left corner, and the high
             | frequency parts everywhere else (to the right and down for
             | horizontal and vertical frequencies). So your compression
             | algorithm will simply zero out the high frequency parts so
             | you don't have to store them. In the simplest case, this is
             | equivalent to a simple blur of the image, but there are
             | some heuristics about how much to remove (zero out) to
             | minimize image degradation.
             | 
             | To decode the image, you take the 2d array and apply the
             | inverse FT to get the original image (now slightly blurry
             | because it's been compressed).
             | 
              | More details here:
              | https://en.wikipedia.org/wiki/JPEG#Discrete_cosine_transform
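              |
              | A toy sketch of that idea using scipy's DCT on a whole
              | image (real JPEG works on 8x8 blocks and quantizes the
              | coefficients, but the principle is the same):
              |
              |     import numpy as np
              |     from scipy.fft import dctn, idctn
              |
              |     img = np.random.rand(64, 64)  # stand-in image
              |     C = dctn(img, norm='ortho')   # to frequencies
              |     C[16:, :] = 0   # zero out high vertical freqs
              |     C[:, 16:] = 0   # ... and high horizontal freqs
              |     out = idctn(C, norm='ortho')  # blurrier pixels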
        
           | cmehdy wrote:
           | We went from mostly fearing vibrations in a mechanical world
           | to harnessing them in a chemical and electrical world. It
           | entirely reshaped our relationship to the world and perhaps
           | so fast that our brains haven't entirely caught up yet. I
           | wonder how many well controlled oscillations of mostly
           | electrical charges happened between me swyping this and you
           | hearing it in your head. We rely on them instead of fearing
           | them now.
        
         | adamnemecek wrote:
         | Making convolution faster.
        
           | kkapelon wrote:
           | This is circular logic.
           | 
            | So what real-world problems do we have right now that
            | depend on making convolution faster?
        
             | adamnemecek wrote:
              | Literally everything. The list of things that are not
              | some type of convolution is really short. Name 5 concepts
              | and I'll try to connect them to convolution.
        
         | jezzamon wrote:
         | Disclaimer: I wrote this, but it's a little more from an
         | engineering POV than a Math POV, and covers some applications:
         | jezzamon.com/fourier
         | 
         | Hacker news folks seem to enjoy it :)
         | 
         | Another thought on the "why": many things in the real world
         | operate on frequencies, sound being the most obvious but also
          | electrical signals, mechanical systems like springs, etc.
          | Doing the analysis by representing a signal as a bunch of
          | frequencies makes more sense, and the Fourier transform lets
          | us do that.
        
         | photochemsyn wrote:
         | Here's a nice example with the Fast Fourier Transform (an
          | algorithm for computing the Discrete Fourier Transform more
         | efficiently):
         | 
         | Source: PCA and Fourier Analysis (2010) J Banfelder, Weill
         | Cornell Medical College
         | 
         | > "A beautiful example of how this knowledge can be used in
         | medicine is found in the cochlear implant. This device is used
         | in patients with inner ear damage. The entire mechanical
         | transduction mechanism is bypassed when the device is
         | implanted. Instead, a microphone worn on the outer ear records
         | sound that is digitized and sent to a signal processor. Here an
         | FFT and an array of bandpass filters are applied. Results are
         | passed to the implanted device, which electrically stimulates
         | the neurons in the cochlea. Typical devices divide the
         | frequency range of 0 to 4 kHz into about 15 or 20 bands, and
         | stimulate neurons accordingly. However, profoundly deaf
         | patients have recovered their hearing and have been able to
         | understand speech even when as few as five bands are used."
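          |
          | A very loose sketch of that idea (assuming numpy; just
          | summing FFT energy in 16 bands over 0-4 kHz, nothing like
          | a real implant's processor):
          |
          |     import numpy as np
          |
          |     fs = 8000                          # sample rate, Hz
          |     t = np.arange(fs // 4) / fs        # 0.25 s of audio
          |     snd = np.sin(2 * np.pi * 440 * t)  # stand-in sound
          |
          |     spec = np.abs(np.fft.rfft(snd)) ** 2
          |     freqs = np.fft.rfftfreq(len(snd), 1 / fs)
          |     bands = np.linspace(0, 4000, 17)   # 16 bands
          |     energy = [spec[(freqs >= lo) & (freqs < hi)].sum()
          |               for lo, hi in zip(bands[:-1], bands[1:])]
          |     # 'energy' would drive the 16 stimulation bands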
         | 
         | See also:
         | 
         | https://allsignalprocessing.com/lessons/the-four-fourier-rep...
        
         | duped wrote:
         | I don't think there's a satisfying concise answer to this
         | question. You're asking to boil down multiple semesters of
         | engineering and mathematics courses to explain something that
         | isn't simple.
         | 
         | The "why" we need it is because it's a convenient mathematical
         | tool to simplify complex problems dealing with
         | sequences/series. It decomposes sequences of numbers (*) into
         | another sequence of numbers that represents a summation of
         | repeating patterns in the original sequence.
         | 
         | The "where" we use it is anywhere that knowing about which
         | patterns show up in sequence of numbers might be more
         | convenient than looking at the original sequence.
         | 
         | There are other nice properties, too. There's a mathematical
         | operation called convolution that's very useful in domains
         | operating with sequences of numbers (machine learning, control
         | systems, audio and image processing, etc). It happens that
         | convolution in the original signal's domain is equivalent to
         | multiplication in the fourier domain. Why that's useful is that
         | convolution is an O(N^2) algorithm but we can compute the
          | Fourier transform in O(N log N) complexity (**) and
         | multiplication is constant. We can even do things like
         | deconvolution (taking an output signal that we know was
         | convolved with something else) and extract one of the inputs.
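          |
          | For instance, a minimal numpy sketch of that trick (this is
          | the circular variant mentioned in the ** footnote):
          |
          |     import numpy as np
          |
          |     N = 256
          |     x = np.random.randn(N)
          |     h = np.random.randn(N)
          |
          |     # direct circular convolution, O(N^2)
          |     direct = np.zeros(N)
          |     for n in range(N):
          |         for m in range(N):
          |             direct[n] += x[m] * h[(n - m) % N]
          |
          |     # same result: multiply spectra, O(N log N)
          |     fast = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real
          |     print(np.allclose(direct, fast))   # True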
         | 
         | Another nice property of the Fourier transform is that most
         | sequences of numbers do not have lots of energy (meaning
          | magnitude) in "high frequency" (meaning rapidly repeating
         | patterns in the original domain). The FT of a sequence of
         | numbers will naturally compact most of the information of the
          | sequence into a few places. We can exploit this for lossy
         | compression (***).
         | 
         | Ultimately, it's a way of taking information that is hard to
          | grok and transforming it to a domain that's more meaningful
          | and where operations/analysis are easier. Both for humans and
          | machines.
         | 
         | * And it works on continuous functions too, but let's not get
         | into that
         | 
         | ** Technically this is a variant called "circular" convolution
         | 
         | *** The FT is not the most convenient tool for this job, so
         | very closely related transforms like the DCT can be used, which
          | have the same computational advantages as the FT.
        
         | gwbas1c wrote:
         | Some of the more well-known examples are:
         | 
          | Low-bitrate audio compression (MP3) uses Fourier transforms.
         | 
         | Image and video compression use Fourier transforms.
         | 
         | Less-well-known:
         | 
          | AC electricity has to run at (almost) exactly 60 Hz / 50 Hz.
          | The power plants use Fourier transforms to measure the exact
         | frequency. (And then use the measurement to adjust
         | accordingly.)
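          |
          | A toy version of that measurement (a numpy sketch, nothing
          | like a real grid monitor):
          |
          |     import numpy as np
          |
          |     fs = 1000                         # samples per second
          |     t = np.arange(fs) / fs            # one second
          |     v = np.sin(2 * np.pi * 50.2 * t)  # "mains" at 50.2 Hz
          |
          |     spec = np.abs(np.fft.rfft(v))
          |     freqs = np.fft.rfftfreq(len(v), 1 / fs)
          |     # peak bin ~50 Hz (resolution 1 Hz for 1 s of data)
          |     print(freqs[np.argmax(spec)])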
         | 
         | Some telecommunications techniques may use Fourier transforms:
          | e.g., the dial-up modems used in the 1990s used Fourier
         | transforms to interpret the analog signal and determine what
         | the bits were.
         | 
         | This article explains using a Fourier transform to remove the
         | "dots" from an image that was printed in a book:
         | https://matzjb.se/2015/08/08/smoothing-a-halftone-photo-usin...
        
       | ndngmfksk wrote:
        | This article incorrectly labels the DFT as a Fourier
       | transform. To people who use these tools, the distinction is
       | important and misnomers like this will make learning harder for
       | people starting out. The article should be updated to correct
       | this error.
        
       | sizzzzlerz wrote:
       | Just a nit. The pair of equations the author showed at the
       | beginning of the article are not the equations of the Fourier
       | Transform and its inverse. The Transform is a continuous function
       | operating on an infinite input. The equation for the Transform
       | involves the use of the integral taken over +/- infinity. What is
        | shown, using the summation operator, is a discrete form of the
        | Transform where the input is a limited time series.
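        |
        | To make the continuous version concrete, here is a rough
        | sketch (my own example): approximating the FT integral of a
        | Gaussian with a plain Riemann sum and comparing against the
        | known closed form.
        |
        |     import numpy as np
        |
        |     # FT of exp(-pi t^2) is exp(-pi f^2), with the
        |     # convention X(f) = integral x(t) exp(-2j pi f t) dt
        |     dt = 0.01
        |     t = np.arange(-50, 50, dt)
        |     x = np.exp(-np.pi * t ** 2)
        |
        |     f = 1.3
        |     approx = np.sum(x * np.exp(-2j * np.pi * f * t)) * dt
        |     print(approx.real, np.exp(-np.pi * f ** 2))  # ~0.0049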
       | 
       | A good alternative explanation can be found on Grant Sanderson's
       | 3 Blue 1 Brown channel
       | https://youtu.be/spUNpyF58BY?si=uqq2OOSATYcWmaG8
        
         | keithalewis wrote:
          | The Fourier transform can be defined for any locally compact
          | abelian group. The integers modulo n are one such group.
        
           | penguin_booze wrote:
           | Given that monads are monoids in the category of
           | endofunctors, I'm perfectly fine with it.
        
       | white_beach wrote:
        | here's another recent book:
        | https://github.com/vadim-za/math-intuition-book
        
       | kuharich wrote:
       | Past comments: https://news.ycombinator.com/item?id=4948082,
       | https://news.ycombinator.com/item?id=10635075
        
       | evereverever wrote:
        | There was an old Macintosh application that had an interactive
       | FFT. I remember seeing it at the Haus Der Musik in Vienna and
       | have always looked for it.
        
       | kaleidawave wrote:
       | Just seen this, was playing with complex Fourier series
        | yesterday. Here's < 100 LOC of Python that calculates the
        | coefficients and draws the circle animation:
       | https://gist.github.com/kaleidawave/bdaf8649e7917152b6cdd624...
       | 
       | Mind blowing that there might be a solar system out there with a
       | collection of orbiting objects whose final object could have
        | a square orbit.
        
         | mrfox321 wrote:
         | A square orbit would require infinite acceleration.
          | Specifically, a Dirac delta.
        
           | zaik wrote:
           | Approximate square.
        
       | jameshart wrote:
       | I don't like the casting of the frequency domain view as the
       | 'recipe' and the time domain view as the 'product'. The point of
       | Fourier is that you can switch between these perspectives
       | losslessly - they contain equivalent information. The 'smoothie'
       | metaphor of 'unmixing' the smoothie to get the ingredients, and
       | then blending it to get the smoothie back conjures the impression
       | that Fourier transformation is some sort of entropy reversing
       | process, which is misleading.
       | 
       | A time domain function has a frequency domain interpretation -
       | that doesn't mean the frequency domain function is what 'made'
       | the time domain function. It's a chicken and an egg - both make
       | each other.
       | 
       | Just like a function has a derivative, and you can recover the
        | original function (modulo its y-offset) by taking its
       | antiderivative - that doesn't make the derivative the 'recipe'
       | for the function.
        
         | diracs_stache wrote:
         | Agreed, in image processing the time-domain representation
         | doesn't always carry significance when spatial extent is
         | pertinent
        
         | nextaccountic wrote:
         | the best analogy is a basis change in a vector space: you can
         | have the same data (the same vector) viewed in different ways
          | if you look at it using a different basis. for example, in
         | physics, the numerical value of the coordinates of an object
         | change in different frames of reference
         | 
         | and this is not just an analogy: in the vector space of
         | functions the fourier transform is indeed a basis change (to be
         | more precise, a rotation). and from this arises the fractional
         | fourier transform, which is a halfway change: if the fourier
         | transform is a 180 degrees rotation, a fractional transform is
         | something in between
         | 
         | https://en.m.wikipedia.org/wiki/Fractional_Fourier_transform
         | 
         | and this also explains why the fourier transform is the inverse
          | of itself: two 180 degree rotations get you to the same place
         | you were before
        
           | nicwilson wrote:
            | The FT is a 90 degree rotation, not a 180: the FT of the FT
            | of a function is the mirror image about the origin, not the
            | function itself.
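            |
            | Easy to check numerically (a numpy sketch; numpy's
            | unnormalized FFT also introduces a factor of N):
            |
            |     import numpy as np
            |
            |     x = np.random.randn(8)
            |     # the DFT applied twice, divided by N
            |     y = np.fft.fft(np.fft.fft(x)) / len(x)
            |     mirrored = np.roll(x[::-1], 1)   # x[(-n) % N]
            |     print(np.allclose(y, mirrored))  # True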
        
             | meindnoch wrote:
             | 90 degree rotation? That would imply that the fourier
             | transform is orthogonal to the original function.
        
           | meindnoch wrote:
           | >the fourier transform is the inverse of itself
           | 
           | Not exactly. Almost.
        
       | gnarlouse wrote:
       | https://youtu.be/spUNpyF58BY?si=dM8J8Df5U7DTV9ls
       | 
       | This is the definitive, the last Fourier transform guide you'll
       | ever need. It's so intuitive and simple to understand. I watched
       | this video *once* six years ago, and I can still rebuild the
       | Fourier formula from memory.
       | 
       | If only this had been around during my digital signal processing
       | coursework in undergrad.
        
         | jameshart wrote:
         | 3b1b's video is excellent, agreed.
         | 
         | But I also want to give a shout out to this video:
         | https://youtu.be/ToMyB5Hk06w?si=yJLDbb82JireH4Q9
         | 
         | It does a really solid job of building the intuition for the
         | way the underlying mathematical machinery works, using 'inner
         | products' as its mode of thinking rather than complex analysis
         | - I think it's a great complement to Grant Sanderson's
         | explanation and for some people I think it's probably more
         | useful.
        
       | 2muchcoffeeman wrote:
       | I find all these Fourier explanations bad. The basic idea is very
       | simple:
       | 
       | Fourier takes a function, and converts it into the reciprocal
       | domain.
       | 
        | So if your x axis is time, t, then Fourier gives you 1/t.
        | What's 1/t if t is some unit of time? Frequency.
       | 
        | If you want a nice intuitive example of this (hand waving
       | begins) a lens will give you a Fourier transform on its back
        | focal plane if all the light coming in is parallel.
       | 
        | Don't quote me on that; I need to double check the precise
       | conditions.
       | 
       | But you can send laser light through an image printed on
       | transparency and manually apply filters to clean up small
       | artefacts.
        
       | dr_dshiv wrote:
       | One of the guys he thanks is Steve Lehar, who is a hero of mine.
       | Steve had incredible intuitions about Harmony and Resonance and
       | was crushed down by normies in psychology. But, indeed, the world
       | is made of waves and resonance/harmony does govern pretty much
       | everything! He says it better than me:
       | 
       | https://scholar.google.com/citations?user=UmJanU0AAAAJ&hl=en
        
       ___________________________________________________________________
       (page generated 2023-12-15 23:00 UTC)