[HN Gopher] Functions Are Vectors (2023)
___________________________________________________________________
Functions Are Vectors (2023)
Author : azeemba
Score : 141 points
Date : 2025-07-06 15:18 UTC (7 hours ago)
(HTM) web link (thenumb.at)
(TXT) w3m dump (thenumb.at)
| pvg wrote:
| Discussion at the time
| https://news.ycombinator.com/item?id=36921446
| nyrikki wrote:
| The one place where I think the previous discussion lost
| something important, at least for me, was with functions.
|
| The popular lens is the porcupine concept, but infinite
| dimensions for functions is often more effective when thought
| of as around 8:00 in this video.
|
| https://youtu.be/q8gng_2gn70
|
| While that video obviously is not fancy, it will help with
| building an intuition about fixed points.
|
| Explaining how the _dimensions_ are points needed to describe a
| function in a plane, and not so much about orthogonal
| dimensions.
|
| Specifically with fixed points and non-expansive mappings.
|
| Hopefully this helps someone build intuitions.
| olddustytrail wrote:
| > infinite dimensions for functions is often more effective
| when thought of as around 8:00
|
| I guess it works if you look at it sideways.
| chongli wrote:
| I see this a lot with math concepts as they begin to get more
| abstract: strange visualizations to try to build intuition. I
| think this is ultimately a dead-end approach which misleads
| rather than enlightens.
|
| To me, the proper way of continuing to develop intuition is
| to abandon visualization entirely and start thinking about
| the math in a linguistic mode. Thus, continuous functions
| (perhaps on the closed interval [0,1] for example) are
| vectors precisely because this space of functions meets the
| criteria for a vector space:
|
| * (+) vector addition where adding two continuous functions
| on a domain yields another continuous function on that domain
|
| * (.) scalar multiplication where multiplying a continuous
| function by a real number yields another continuous function
| with the same domain
|
| * (0) the existence of the zero vector which is simply the
| function that maps its entire domain of [0,1] to 0 (and we
| can easily verify that this function is continuous)
|
| We can further verify the other properties of this vector
| space which are:
|
| * associativity of vector addition
|
| * commutativity of vector addition
|
| * identity element for vector addition (just the zero vector)
|
| * additive inverse elements (just multiply f by -1 to get -f)
|
| * compatibility of scalar multiplication with field
| multiplication (i.e. a(bf) = (ab)f, where a and b are real
| numbers and f is a function)
|
| * identity element for scalar multiplication (just the number
| 1)
|
| * distributivity of scalar multiplication over vector
| addition (so a(f + g) = af + ag)
|
| * distributivity of scalar multiplication over scalar
| addition (so (a + b)f = af + bf)
|
| So in other words, instead of trying to visualize an
| infinite-dimensional space, we're just doing high school
| algebra with which we should already be familiar. We're just
| manipulating symbols on paper and seeing how far the rules
| take us. This approach can take us much further when we
| continue on to the ideas of normed vector spaces (abstracting
| the idea of length), sequences of vectors (a sequence of
| functions), and Banach spaces (giving us convergence and the
| existence of limits of sequences of functions).
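The axiom checklist above can even be spot-checked mechanically. A minimal Python sketch, purely illustrative: the representation of functions as callables and all names are mine, not from the comment, and sampling a few points is of course not a proof.

```python
# Spot-check of the vector space operations on continuous functions
# f: [0,1] -> R, represented as plain Python callables.

def add(f, g):
    """(+) pointwise addition of two functions."""
    return lambda x: f(x) + g(x)

def scale(a, f):
    """(.) scalar multiplication by a real number a."""
    return lambda x: a * f(x)

zero = lambda x: 0.0   # (0) the zero vector: maps all of [0,1] to 0

f = lambda x: x * x
g = lambda x: 1.0 - x

for x in [0.0, 0.25, 0.5, 0.75, 1.0]:        # sample points of [0,1]
    assert add(f, g)(x) == add(g, f)(x)       # commutativity
    assert add(f, zero)(x) == f(x)            # additive identity
    assert add(f, scale(-1.0, f))(x) == zero(x)              # inverses
    assert scale(2.0, scale(3.0, f))(x) == scale(6.0, f)(x)  # a(bf)=(ab)f
    assert add(scale(2.0, f), scale(2.0, g))(x) == scale(2.0, add(f, g))(x)
```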
| ajkjk wrote:
| Funny, I agree that visualizations aren't that useful after
| a point, but when you said "start thinking about the math
| in a linguistic mode" I thought you were going to describe
| what I do, but then you described an entirely different
| thing! I can't learn math the way you described at all:
| when things are described by definitions, my eyes glaze
| over, and nothing is retained. I think the way you are
| describing filters out a large percentage of people who
| would enjoy knowing the concepts, leaving only the people
| whose minds work in that certain way, a fairly small subset
| of the interested population.
|
| My third way is that I learn math by learning to "talk" in
| the concepts, which is I think much more common in physics
| than pure mathematics (and I gravitated to physics because
| I loved math but can't stand learning it the way math
| classes wanted me to). For example, thinking of functions
| as vectors went kinda like this:
|
| * first I learned about vectors in physics and
| multivariable calculus, where they were arrows in space
|
| * at some point in a differential equations class (while
| calculating inner products of orthogonal hermite
| polynomials, iirc) I realized that integrals were like
| giant dot products of infinite-dimensional vectors, and I
| was annoyed that nobody had just told me that because I
| would have gotten it instantly.
|
| * then I had to repair my understanding of the word
| "vector" (and grumble about the people who had overloaded
| it). I began to think of vectors as the N=3 case and
| functions as the N=infinity case of the same concept.
| Around this time I also learned quantum mechanics where
| thinking about a list of binary values as a vector ( |000>
| + |001> + |010> + etc, for example) was common, which made
| this easier. It also helped that in mechanics we created
| larger vectors out of tuples of smaller ones: a spatial
| vector always has N=3 dimensions, a pair of spatial vectors
| is a single 2N = 6-dimensional vector (albeit with
| different properties under transformations), and that is
| much easier to think about than a single vector in R^6. It
| was also easy to compare it to programming, where there was
| little difference between an array with 3 elements, an
| array with 100 elements, and a function that computed a
| value on every positive integer on request.
|
| * once this is the case, the Fourier transform, Laplace
| transform, etc are trivial consequences of the model. Give
| me a basis of orthogonal functions and of course I'll write
| a function in that basis, no problem, no proofs necessary.
| I'm vaguely aware there are analytic limitations on when it
| works but they seem like failures of the formalism, not
| failures of the technique (as evidenced by how most of them
| fall away when you switch to doing everything on
| distributions).
|
| * eventually I learned some differential geometry and Lie
| theory and learned that addition is actually a pretty weird
| concept; in most geometries you can't "add" vectors that
| are far apart; only things that are locally linear can be
| added. So I had to repair my intuition again: a vector is a
| local linearization of something that might be nonlinear
| macroscopically, and the linearity is what makes it
| possible to add and scalar-multiply it. And also that there
| is functionally no difference between composing vectors
| with addition or multiplication, they're just notations.
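The "integrals are giant dot products" realization in the second bullet can be sketched numerically. A toy Python example (the setup and tolerances are mine): sample two functions finely, and the weighted dot product of the sample vectors approaches the L2 inner product.

```python
# Sketch of "integrals are like giant dot products of
# infinite-dimensional vectors": the inner product
# <f, g> = integral of f(x) g(x) dx over [0, 1] is approximated by a
# dot product of finely sampled vectors.

N = 100_000                               # number of sample points
dx = 1.0 / N
xs = [(i + 0.5) * dx for i in range(N)]   # midpoints of [0, 1]

f = [x for x in xs]                       # f(x) = x, sampled
g = [x * x for x in xs]                   # g(x) = x^2, sampled

# The dot product of the sample vectors, weighted by dx:
inner = sum(fi * gi for fi, gi in zip(f, g)) * dx

# Exact value: integral of x^3 over [0, 1] = 1/4.
assert abs(inner - 0.25) < 1e-6
```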
|
| At no point in this were the axioms of vector spaces (or
| normed vector spaces, Banach spaces, etc) useful at all for
| understanding. I still find them completely unhelpful and
| would love to read books on higher mathematics that omit
| all of the axiomatizations in favor of intuition.
| Unfortunately the more advanced the mathematics, the more
| formalized the texts on it get, which makes me very sad. It
| seems very clear that there are two (or more) distinct ways
| of thinking that are at odds here; the mathematical
| tradition _heavily_ favors one (especially since Bourbaki,
| in my impression) and physics is where everyone who can't
| stand it ends up.
| chongli wrote:
| _I can't learn math the way you described at all: when
| things are described by definitions, my eyes glaze over,
| and nothing is retained. I think the way you are
| describing filters out a large percentage of people who
| would enjoy knowing the concepts, leaving only the people
| whose minds work in that certain way, a fairly small
| subset of the interested population._
|
| If you told me this in the first year of my math degree I
| would have included myself in that group. I think you're
| right that a lot of people are filtered out by higher
| math's focus on definitions and theorems, although I
| think there's an argument to be made that many people
| filter themselves out before really giving themselves the
| chance to learn it. It took me another year or two to
| begin to get comfortable working that way. Then at some
| point it started to click.
|
| I think it's similar to learning to program. When I'm
| trying to write a proof, I think of the definitions and
| theorems as my standard library. I look at the conclusion
| of the theorem to prove as the result I need to obtain
| and then think about how to build it using my library.
|
| So for me it's a linguistic approach but not a natural
| language one. It's like a programming language and the
| proofs are programs. Believe it or not, this isn't a
| hand-wavey concept either, it's a rigorous one [1].
|
| [1] https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_co
| rrespon...
| Tainnor wrote:
| > When I'm trying to write a proof, I think of the
| definitions and theorems as my standard library. I look
| at the conclusion of the theorem to prove as the result I
| need to obtain and then think about how to build it using
| my library.
|
| fwiw, this is exactly the thing that you do when you're
| trying to formally prove some theorem in a language like
| Lean.
| chongli wrote:
| I do want to learn theorem proving in Lean just for a
| hobby at some point. I haven't found a great resource for
| it though.
| Tainnor wrote:
| Have you seen: https://leanprover-community.github.io/mathematics_in_lean/
| chongli wrote:
| I hadn't seen that. Thanks!
| MalbertKerman wrote:
| > and I was annoyed that nobody had just told me that
| because I would have gotten it instantly.
|
| Right?! In my path through the physics curriculum, this
| whole area was presented in one of two ways. It went
| straight from "You don't need to worry about the details
| of this yet, so we'll just present a few conclusions that
| you will take on faith for now" to "You've already deeply
| and thoroughly learned the details of this, so we trust
| that you can trivially extend it to new problems." More
| time in the math department would have been awfully
| useful, but somehow that was never suggested by the
| prerequisites or advisors.
| ajkjk wrote:
| oh, my point was the opposite of that. The math
| department was totally useless for learning how anything
| made sense. I only understood linear algebra when I took
| quantum mechanics for instance. The math department
| couldn't be bothered to explain anything in any sort of
| useful way; you were supposed to prove pointless theorems
| about things you didn't understand.
| MalbertKerman wrote:
| I did get a lot of that in the lower level math courses,
| where it kinda felt like the math faculty were grudgingly
| letting in the unwashed masses to learn some primitive
| skills to _apply_ [spit] to their various fields, and
| didn't really give a shit if anybody understood anything
| as long as the morons could repeat some rituals for
| moving _x_ around on the page. I didn't really
| understand integrals until the intermediate classical
| mechanics prof took an hour or two to explain what the
| hell we had been doing for three semesters of calculus.
|
| But when I did go past the required courses and into math
| for math majors, things got a lot better. I just didn't
| find that out until I was about to graduate.
| Tainnor wrote:
| > So I had to repair my intuition again: a vector is a
| local linearization of something that might be nonlinear
| macroscopically, and the linearity is what makes it
| possible to add and scalar-multiply it. And also that
| there is functionally no difference between composing
| vectors with addition or multiplication, they're just
| notations.
|
| Except none of this is true of vectors in general,
| although it might be true of very specific vector spaces
| in physics that you may have looked at. Matrices or
| continuous functions form vector spaces where you can add
| any vectors, no matter how far apart. Maybe what you're
| referring to is that differentiability allows us to
| locally approximate nonlinear problems with linear
| methods but that doesn't mean that other things aren't
| globally linear.
|
| I also don't understand what you mean by "no difference
| between composing vectors with addition or
| multiplication", there's obviously a difference between
| adding and multiplying functions, for example (and vector
| spaces in which you can also multiply are another
| interesting structure called an algebra).
|
| That's the problem if you just go from intuition to
| intuition without caring about the formalism. You may end
| up with the wrong understanding.
|
| Intuition is good when guided by rigour. Terence Tao has
| written about this:
| https://terrytao.wordpress.com/career-advice/theres-more-
| to-...
|
| The vector space axioms in the end are nothing more than
| saying: here's a set of objects that you can add and
| scale and here's a set of rules that makes sure these
| operations behave like they're supposed to.
| tsimionescu wrote:
| > I see this a lot with math concepts as they begin to get
| more abstract: strange visualizations to try to build
| intuition. I think this is ultimately a dead-end approach
| which misleads rather than enlightens.
|
| Isn't this how people arrived at most of these concepts
| historically, how the intuition arose that these are
| meaningful concepts at all?
|
| For example, the notion of a continuous function arose from
| a desire to explicitly classify functions whose graph
| "looks smooth and unbroken". People started with the visual
| representation, and then started to build a formalism that
| explains it. Once they found a formalism that was
| satisfying for regular cases, they could now apply it to
| cases where the visual intuition fails, such as functions
| on infinite-dimensional spaces. But the concept of a
| continuous function remains tied to the visual idea,
| fundamentally that's where it comes from.
|
| Similarly with vectors, you have to first develop an
| intuition of the visual representation of what vector
| operations mean in a simple to understand vector space like
| Newtonian two-dimensional or three-dimensional space. Only
| after you build this clean and visual intuition can you
| really start understanding the formalization of vectors,
| and then start extending the same concepts to spaces that
| are much harder or impossible to visualize. But that
| doesn't mean that vector addition is an arbitrary operation
| labeled "+"; vector addition is a meaningful concept for
| spatial vectors, one that you can formally extend to other
| operations if they follow certain rules while retaining
| many properties of the two-dimensional case.
| Scene_Cast2 wrote:
| Same thing in video form explained by a different person -
| https://youtu.be/mhEFJr5qvLo
| malwrar wrote:
| So cool! This is the first time I've ever read about a math idea
| and felt a deep pull to know more.
| skybrian wrote:
| It seems like mentioning some of the applications at the
| beginning would motivate learning all these definitions.
| almostgotcaught wrote:
| > "The material is not motivated." Not motivated? Judas just
| stick a dagger in my heart. This material needs no motivation.
| Just do it. Faith will come. He's teaching you analysis. Not
| selling you a used car. By the time you are ready to read this
| book you should not need motivation from the author as to why
| you need to know analysis. You should just feel a burning in
| you chest that can only be quenched by arguments involving an
| arbitrary sequence {x_n} that converges to x in X.
|
| https://www.amazon.com/review/R23MC2PCAJYHCB
| skybrian wrote:
| Not sure what I'm supposed to get from that. I guess some
| people care a little too much about math and have trouble
| relating to others?
| almostgotcaught wrote:
| you're supposed to get that the cynical lens you're
| applying here doesn't fit - if you aren't intrinsically
| motivated to read this stuff then it's not for you. which
| is fine btw because (functional) analysis isn't a required
| class.
| TheRealPomax wrote:
| If you need "practical applications" for some part of math
| to have value to you, then large parts of math will not be
| for you. That's fine, but that's also something you should
| accept and internalize: math is already its own
| application, we dig through it in order to better
| understand it, and that understanding will (with rather
| advanced higher education) be applicable to other fields,
| which _in turn_ may have practical uses.
|
| Those practical uses are someone else's problem to solve
| (even if they rely on math to solve them), and they can
| write their own web pages on how functions as vectors help
| solve specific problems in a way that's more insightful
| than using "traditional" calculus, and get those upvoted on
| HN.
|
| But _this_ link has a "you must be this math to ride"
| gate, it's not for everyone, and that's fine. It's a world
| wide web, there's room for all levels of information. You
| need to already appreciate the problems that you
| encountered in non-trivial calculus to appreciate this
| interpretation of what a function even is and how to
| exploit the new power that gives you.
| skybrian wrote:
| I don't see any such "math gate" on this link. Also,
| _this_ math _does_ have practical applications, but
| they're not mentioned until very late in the article.
|
| My suggestion is that briefly mentioning them up front
| might be nice. I didn't mean to start a big argument
| about it.
| almostgotcaught wrote:
| i'll never fathom why people on hn treat a post as an
| auto-invite for unsolicited feedback.
| LegionMammal978 wrote:
| Yet some parts of math are 'preferred' over others, in
| that most 'serious' mathematicians would rather read 100
| pages about functional analysis than 100 pages of
| meandering definitions from some rando trying to solve
| the Collatz conjecture.
|
| Some people would like to have a filter for what to spend
| their time on, better than "your elders before you have
| deemed these ideas deeply important". One such filter is
| "Can these ideas tell us nontrivial things about other
| areas of math?" That is, "Do they have applications?"
|
| Short of the strawman of immediate economic value, I
| don't think it's wrong to view a subject with light
| skepticism if it seemingly ventures off into its own
| ivory tower without relating back to anything else. A few
| well-designed examples can defuse this skepticism.
| ethan_smith wrote:
| This perspective is crucial for understanding signal
| processing, machine learning, and quantum mechanics. Viewing
| functions as vectors enables practical techniques like Fourier
| transforms and kernel methods that underlie many modern
| technologies.
| sixo wrote:
| The genre of this article is not pedagogical, really. One
| usually learns these techniques in the course of a particular
| field like physics, electrical engineering, or theoretical
| chemistry. _This_ article is best thought of as "a story
| you've seen before, but told from the beginning / ground up,
| with a lot of the connections to other topics and examples laid
| out for you". For that purpose, it's excellent, perhaps the
| best I've ever seen. It might also whet the appetite of a
| novice, but it's not really for that.
| gizmo686 wrote:
| The first paragraph and table of contents both mention
| applications.
| skybrian wrote:
| Yes, so it does. Perhaps I read too quickly.
| tempodox wrote:
| Oh, my. Alice, meet rabbit hole.
| MalbertKerman wrote:
| The jump from spherical harmonics to eigenfunctions on a general
| mesh, and the specific example mesh chosen, might be the finest
| mathematical joke I've seen this decade.
| sixo wrote:
| Would you explain the joke for the rest of us?
| xeonmc wrote:
| Spherical Harmonics approximating Spherical Cows?
| dark__paladin wrote:
| assume spherical cow
| MalbertKerman wrote:
| It's quietly reversing the traditional "We approximate the
| cow to be a sphere" and showing how the spherical math can,
| in fact, be generalized to solutions on the cow.
| sixo wrote:
| oh. I did not interpret that blob as a cow. Thanks.
| a3w wrote:
| Nice: the variable l and m values can allow you to get orbitals
| from chemistry.
|
| (This is where I learned at least half of the math on this page:
| theoretical chemistry.)
| xeonmc wrote:
| also known as Applied Quantum Mechanics.
| ttoinou wrote:
| Isn't this the opposite way? Vectors are functions whose input
| space is a set of discrete dimensions. Let's not pretend going
| from natural numbers to reals is "simple"; the real numbers are
| a fascinating, non-obvious mathematical discovery. And the
| passage from a few numbers to all the natural numbers (aleph0)
| is also non-obvious. So basically we have two aleph passages to
| transform N-D vectors into functions over the reals.
| xeonmc wrote:
| Vectors are not necessarily discrete-domained. Anything that
| satisfies the vector space properties is a vector.
| ttoinou wrote:
| I agree but I'm operating under the assumption of the article:
| Conceptualizing functions as infinite-dimensional vectors
| lets us apply the tools of linear algebra to a vast landscape
| of new problems
| layer8 wrote:
| Linear algebra isn't limited to discrete-dimensional vector
| spaces. Or what do you mean?
| ttoinou wrote:
| See my other comment sibling.
|
| And he's starting from the assumption vectors are finite
| (cf. the article)
| Sharlin wrote:
| He does not _assume_ anything! Any assumption is in your
| head only. Of course he starts from the specific type of
| vector spaces that's the most familiar to readers. But
| then he shows that there's nothing that requires a vector
| space to have a finite, or even countably infinite,
| dimension. What matters are the axioms.
| gizmo686 wrote:
| Vectors are an abstract notion. If you have two sets and two
| operations that satisfy the definition of a vector space, then
| you have a vector space; and we refer to elements of the vector
| set as "vectors" within that vector space.
|
| The observation here is that the set of real-valued functions,
| combined with the set of real numbers and the natural notions
| of function addition and multiplication by a real number,
| satisfies the definition of a vector space. As a result, all
| the results of linear algebra can be applied to real-valued
| functions.
|
| It is true that any vector space is isomorphic to a vector
| space whose vectors are functions. Linear algebra does make a
| lot of usage of that result, but it is different from what the
| article is discussing.
| ttoinou wrote:
| I agree but we're using functions for different things here.
| Yes some specific families of functions can be treated as
| vector spaces. In this article it seems like the author is
| pretending to take all real->real functions and treating them
| as if they are a vector space, whatever the _content_ of the
| functions, quote: we've built a vector space
| of functions
|
| and later he admits it is impossible: Ideally,
| we could express an arbitrary function f as a linear
| combination of these basis functions. However, there are
| uncountably many of them--and we can't simply write down a
| sum over the reals. Still, considering their linear
| combination is illustrative:
|
| They are uncountable because there are continuum-many of them
| 998244353 wrote:
| The set of all real->real functions is still a vector
| space.
|
| This vector space also has a basis (even if it is not as
| useful): there is an (uncountably infinite) subset of
| real->real functions such that every function can be
| expressed as a linear combination of a _finite_ number of
| these basis functions, in _exactly one way_.
|
| There isn't a clean way to write down this basis, though,
| as you need to use Zorn's lemma or equivalent to construct
| it.
| ttoinou wrote:
| I'd love to read more about that, he's not talking about
| that at all in this article though
| gizmo686 wrote:
| It is not required for vector spaces to have a basis. As it
| turns out, the claim that every vector space has a basis is
| equivalent to the axiom of choice, which seems well beyond
| the scope of the article.
|
| However, the particular vector space in question (functions
| from R to R) does have a basis, which the author describes.
| That basis is not as useful as a basis typically is for
| finite dimensional (or even countably infinite dimensional)
| vector spaces, but it still exists.
| ttoinou wrote:
| But the article talks about vectors as a sequence of
| reals having a basis, then extends that to infinite
| sequences of reals. The author is playing on multiple
| definitions of vector to produce a "woah, that's cool"
| effect, and that's bad maths.
| Sharlin wrote:
| There is only one definition of "vector space" (up to
| isomorphism anyway), and that's what the author uses.
| You'll note that he doesn't talk about bases at all, the
| assumption of a basis is entirely in your mind. The
| entire point of the article is that the R-R function
| space is a vector space. A vector space is not required
| to have a basis, but assuming the axiom of choice, every
| vector space does have (at least) one, including that of
| R-R functions.
| sixo wrote:
| A few questions occur to me while reading this, which I am far
| from qualified to answer:
|
| - How much of this structure survives if you work on "fuzzy" real
| numbers? Can you make it work? Where I don't necessarily mean
| "fuzzy" in the specific technical sense, but in _any_ sense in
| which a number is defined only up to a margin of error /length
| scale, which in my mind is similar to "finitism", or "automatic
| differentiation" in ML, or a "UV cutoff" in physics. I imagine
| the exact definition will determine how much vectorial structure
| survives. The obvious answer is that it works like a regular
| Fourier transform but with a low-pass filter applied, but I
| imagine this might not be the only answer.
|
| - Then if this is possible, can you carry it across the analogy
| in the other direction? What would be the equivalent of "fuzzy
| vectors"?
|
| - If it isn't possible, what similar construction on the fuzzy
| numbers would get you to the obvious endpoint of a "fourier
| analysis with a low pass filter pre-applied?"
|
| - The argument arrives at fourier analysis by considering an
| orthonormal diagonalization of the Laplacian. In linear algebra,
| SVD applies more generally than diagonalizations--is there an
| "SVD" for functions?
| xeonmc wrote:
| I'd guess that it would be factored as "nonlinearity", which
| might be characterized as some form of harmonic distortion,
| analogous to clipping nonlinearity of finite-ranged systems?
|
| Perhaps some conjugate relation could be established between
| finite-range in one domain and finite-resolution in another, in
| terms of the effect such nonlinearities have on the spectral
| response.
| sitkack wrote:
| A fuzzy vector is a Gaussian? Thinking of what it would be in
| 1, 2, 3 and n dimensions.
| sfpotter wrote:
| 1. Numerical methods for solving differential and integral
| equations are algorithms for solving algebraic equations
| (vector solutions) that arise from discretizing infinite-
| dimensional operator equations (function solutions). When we
| talk about whether these methods work, we usually do so in
| terms of their consistency and stability. There is a multistage
| process that happens here: we start by talking about the well-
| posedness of the original equation (e.g. the PDE), then the
| convergence of the mathematical discretization, and then
| examine what happens when we try to program this thing on a
| computer. Usually what happens is these algorithms will get
| implemented "on top" of numerical linear algebra, where
| algorithms like Gaussian elimination, and different iterative
| solvers, have been studied very carefully from the perspective
| of floating point rounding errors etc. This kind of subsumes
| your concern about "fuzzy" real numbers. Remember that in
| double precision, if the number "1.0" represents "1 meter",
| then machine epsilon is atomic scale. So, frequently, you can
| kind of assume the whole process "just works"...
|
| 2/3. I'm not really sure what you mean by these questions...
| But if you want to do "fourier analysis with a filter
| preapplied", you'd probably just work within some space of
| bandlimited functions. If you only care about N Fourier modes,
| any time you do an operation which exceeds that number of
| modes, you need to chop the result back to down to size.
|
| 4. In this context, it's really the SVD of an operator you're
| interested in. In that regard, you can consider trying to
| extend the various definitions of the SVD to your operator,
| provided that you carefully think about all spaces involved. I
| assume at least one "operator SVD" exists and has been studied
| extensively... For instance, I can imagine trying to extend the
| variational definition of the SVD... and the algorithms for
| computing the SVD probably make good sense in a function space,
| too...
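The "chop the result back down to size" idea in point 2/3 can be sketched concretely. This toy Python example is my own setup, not sfpotter's: a naive DFT, a band limit K, and a pointwise squaring that creates out-of-band modes which are then truncated.

```python
import cmath
import math

# Toy sketch of "working within a space of bandlimited functions":
# keep only Fourier modes |k| <= K; after an operation that creates
# higher modes (here, pointwise squaring), chop back into the band.

N, K = 32, 3                     # grid size and band limit

def dft(samples):
    """Naive discrete Fourier transform, normalized by 1/N."""
    n = len(samples)
    return [sum(s * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, s in enumerate(samples)) / n
            for k in range(n)]

def chop(coeffs, kmax):
    """Zero every mode outside the band |k| <= kmax."""
    n = len(coeffs)
    return [c if (k <= kmax or n - k <= kmax) else 0.0
            for k, c in enumerate(coeffs)]

xs = [2 * math.pi * i / N for i in range(N)]
f = [math.cos(3 * x) for x in xs]    # mode 3: inside the band

# Squaring f produces cos(6x) content, i.e. mode 6 > K...
coeffs = dft([v * v for v in f])
assert abs(coeffs[6]) > 0.1          # the out-of-band mode appeared

# ...so we chop the result back into the band after the operation.
chopped = chop(coeffs, K)
assert all(abs(chopped[k]) < 1e-9 for k in range(K + 1, N - K))
```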
| woopsn wrote:
| Convolution with a Dirac delta will give you an exact sample of
| f(0), and in principle a whole signal could be constructed as a
| combination of delayed delta signals - but we can't realize an
| exact delta signal in most spaces, only approximations.
|
| As a result we get finite resolution and truncation of the
| spectrum. So "Fourier analysis with pre-applied lowpass filter"
| would be analysis of sampled signals, the filter determined by
| the sampling kernel (delta approximator) and properties of the
| DFT.
|
| But so long as the sampling kernel is good (that is the actual
| terminology), we can form f exactly as the limit of these fuzzy
| interpolations.
|
| The term "resolution of the identity" is associated with the
| fact that delta doesn't exist in most function spaces and
| instead has to be approximated. A good sampling kernel
| "resolves" the missing (convolutional) identity. I like
| thinking of the term also in the sense that these operators
| behave like the identity if it were only good up to some
| resolution.
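The delta-approximation idea above can be sketched numerically. This is my own toy setup (a normalized box kernel, not a "good" sampling kernel in the technical sense): convolving f against a narrow kernel approximately samples f(0), since the kernel is even.

```python
import math

# An exact delta doesn't live in the function space, but a narrow
# normalized kernel approximates it, and integrating f against it
# approximately recovers f(0).

dx = 1e-3
xs = [i * dx for i in range(-2000, 2001)]      # grid on [-2, 2]
f = [math.exp(x) for x in xs]                  # f(x) = e^x, so f(0) = 1

eps = 0.01                                     # kernel half-width
raw = [1.0 if abs(x) <= eps else 0.0 for x in xs]   # box kernel
total = sum(raw) * dx
delta_eps = [r / total for r in raw]           # normalized: integral = 1

# Discrete version of the pairing: integral of f(x) delta_eps(x) dx.
sample = sum(fv * dv for fv, dv in zip(f, delta_eps)) * dx

assert abs(sample - 1.0) < 1e-3                # close to f(0) = 1
```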
| gizmo686 wrote:
| You can replace the real numbers with the rational numbers and
| maintain all of the vector structure.
|
| If you wanted something more quantized, you can pick some
| length unit, d, and replace the real numbers with {... -2d, -d,
| 0, d, 2d,... }. This forms a structure known as a "ring" with
| the standard notion of addition, subtraction, and
| multiplication (but no notion of division). Using this instead
| of R does lose the vector structure, but is still an example of
| a slightly more general notion of a "module". Many of the
| linear algebra results for vector spaces apply to modules as
| well.
|
| > If it isn't possible, what similar construction on the fuzzy
| numbers would get you to the obvious endpoint of a "fourier
| analysis with a low pass filter pre-applied?"
|
| If that is where you want to end up, you could pretty much
| start there. If you take all real-valued functions and apply a
| Fourier analysis with a low pass filter to each of them, the
| resulting set still forms a vector space. Although I don't see
| any particular way of arriving at this vector space by
| manipulating functions pre Fourier transform.
| simpaticoder wrote:
| The author asserts vectors are functions, specifically a function
| that takes an index and returns a value. He notes that as you
| increase the number of indices, a vector can contain an arbitrary
| function (he focuses on continuous, real-valued functions).
|
| It's fun to simulate one thing with another, but there is a
| deeper and more profound sense in which vectors are functions in
| Clifford Algebra, or Geometric Algebra. In that system, vectors
| (and bi-vectors...k-vectors) are themselves meaningful operators
| on other k-vectors. Even better, the entire system generalizes to
| n dimensions, and describes complex numbers, 2-d vectors,
| quaternions, and more, essentially for free. (Interestingly, the
| primary operation in GA is "reflection", the same operation you
| get in quantum computing with the Hadamard gate)
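The "vector as a function of an index" view from the first paragraph is easy to make concrete: a length-n vector is a map from {0, ..., n-1} to the reals, and the vector-space operations are pointwise, exactly as for functions. A small editorial sketch:

```python
# A vector viewed as a function from indices to values. A function
# on [0,1] is the same construction with an uncountable index set.
def as_function(v):
    return lambda i: v[i]

v = [3.0, 1.0, 4.0]
f = as_function(v)
print(f(2))  # indexing is just function application

# Pointwise addition is identical for both views: no part of this
# definition cares whether the index set is finite or continuous.
def add(f, g):
    return lambda i: f(i) + g(i)

g = add(f, as_function([1.0, 1.0, 1.0]))
print(g(0))
```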
| layer8 wrote:
| Well, yeah, function spaces are an example of vector spaces:
| https://en.wikipedia.org/wiki/Vector_space#Function_spaces
| dang wrote:
| This previous thread was also good: _Functions are vectors_ -
| https://news.ycombinator.com/item?id=36921446 - July 2023 (120
| comments)
| EGreg wrote:
| Only functions on a finite domain are vectors.
|
| Functions on a countable domain are sequences.
| ttoinou wrote:
| Why is this being downvoted ? Could a downvoter elaborate ?
| teiferer wrote:
| Because it makes little sense.
|
| Vector spaces can have infinite dimension, so the "only" in
| the first sentence does not belong there.
|
| The second sentence is also odd. How do you define
| "sequence"? Are there no finite sequences?
| ttoinou wrote:
| I think it is "vector" taken in the way the author wrote
| about it / showed illustrations in the article.
|
| For the second sentence, he's right, we could also write
| (wrongly) an article titled "Functions are Sequences" and
| (try to) apply what we know about dealing with countable
| sequences to functions
| jschveibinz wrote:
| An engineering, signal processing extension/perspective:
|
| An infinite sequence approximates a general function, as
| described in the article (see the slider bar example). In signal
| processing applications, functions can be considered (or forced)
| to be bandlimited so a much lower-order representation (i.e.
| vector) suffices:
|
| - The subspace of bandlimited functions is much smaller than
|   the full L^2 space
|
| - It has a countable orthonormal basis (e.g., shifted sinc
|   functions)
|
| - The function can be written as (with sinc functions):
|
| x(t) = \sum_{n=-\infty}^{\infty} f(nT) \cdot \text{sinc}\left(
| \frac{t - nT}{T} \right)
|
| - This is analogous to expressing a vector in a finite-
| dimensional subspace using a basis (e.g. sinc)
|
| Discrete-time signal processing is useful for comp-sci
| applications like audio, SDR, trading data, etc.
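The sampling formula above can be sketched numerically with a truncated sum (an editorial illustration; the signal, sample period, and index range are assumptions):

```python
import numpy as np

# Truncated Shannon reconstruction:
#   x(t) = sum_n x(nT) * sinc((t - nT)/T)
# Exact for signals band-limited below 1/(2T), up to the error
# from cutting the infinite sum off at finitely many terms.
T = 0.5                                    # Nyquist covers f < 1 Hz
x = lambda t: np.sin(2 * np.pi * 0.3 * t)  # 0.3 Hz: safely band-limited

n = np.arange(-200, 201)                   # truncated index range
samples = x(n * T)

def reconstruct(t):
    # np.sinc is the normalized sinc: sin(pi*u)/(pi*u)
    return np.sum(samples * np.sinc((t - n * T) / T))

for t in [0.123, 1.7, -2.4]:
    print(t, x(t), reconstruct(t))
```

The reconstructed values match the original signal between the sample points, which is the sense in which the countable sinc basis spans this subspace.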
| QuesnayJr wrote:
| Full $L_2$ also has a countable orthonormal basis. Hermite
| functions are one example.
| 77pt77 wrote:
| Any basic linear algebra course should talk about this, at least
| in the finite dimensional case.
|
| Polynomials come to mind.
| ttoinou wrote:
| Finite degree polynomials are vectors yes. Polynomials is a
| typical example you study when learning about linear algebra.
| Doesn't say anything about real functions in general though, I
| don't think any linear algebra course should make the analogies
| made in this article, that'd be confusing
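The polynomial example both commenters mention is worth spelling out: polynomials of degree below n are coefficient vectors in R^n, and adding or scaling coefficients is the same operation as adding or scaling the polynomials as functions. A brief editorial sketch:

```python
import numpy as np

# Polynomials of degree < 4 as coefficient vectors in R^4,
# coefficients listed lowest degree first.
p = np.array([1.0, 0.0, 2.0, 0.0])   # 1 + 2x^2
q = np.array([0.0, 3.0, 0.0, 1.0])   # 3x + x^3

def evaluate(coeffs, x):
    return sum(c * x**k for k, c in enumerate(coeffs))

x = 1.5
# Coefficient-wise operations agree with pointwise operations on
# the polynomial functions: the correspondence is linear.
print(evaluate(p + q, x), evaluate(p, x) + evaluate(q, x))
print(evaluate(2.5 * p, x), 2.5 * evaluate(p, x))
```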
| mouse_ wrote:
| I love the prerequisites section. Every technical blog post
| should start with this.
| bmitc wrote:
| I will need to read through the rest of the article later, but
| the initial intuition building is a bit sloppy. None of those
| vectors drawn in the initial examples belong to the same vector
| space. Vectors need to emanate from the same origin to be
| considered as part of the same vector space.
| ttoinou wrote:
| The author seems to be a great educator and computer scientist,
| much respect to his work. But from what I can gather, although
| I'd love to study more infinite-sized matrices, he proved /
| showed nothing in this article. What he wrote is not true at
| all; these are only analogies, not rigorous maths. Functions
| are not vectors. But finite polynomials are vectors, yes; this
| is trivial.
| gizmo686 wrote:
| https://thenumb.at/Functions-are-Vectors/#proofs
|
| It's not a particularly interesting proof, but the author does
| prove that real valued functions are vectors. The bulk of the
| article is less about proofs, and more about showing how the
| above result is useful.
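The linked proof is that real-valued functions with pointwise addition and scalar multiplication satisfy the vector-space axioms. A numerical spot-check of a few of those axioms (an editorial illustration, not a proof):

```python
import math

# Pointwise operations on real-valued functions.
add = lambda f, g: (lambda x: f(x) + g(x))
scale = lambda a, f: (lambda x: a * f(x))
zero = lambda x: 0.0  # the additive identity function

f, g = math.sin, math.exp
for x in [0.0, 1.0, -2.5]:
    # commutativity of addition
    assert add(f, g)(x) == add(g, f)(x)
    # scalar multiplication distributes over addition
    assert math.isclose(scale(3.0, add(f, g))(x),
                        add(scale(3.0, f), scale(3.0, g))(x))
    # the zero function is an additive identity
    assert add(f, zero)(x) == f(x)
print("axioms hold at the sampled points")
```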
| ttoinou wrote:
| Vectors in the way he talks about in the beginning. With
| indices (and then extending to "In higher dimensions, vectors
| start to look more like functions!"). Of course if you use
| the general meaning of every word, vectors are functions and
| functions are vectors, and this article shouldn't then have
| anything interesting to talk about. how the
| above result is useful
|
| It doesn't seem useful at all to me, the examples in the
| article are not that interesting. On the contrary it is more
| confusing than anything to apply linear algebra to real
| valued functions.
___________________________________________________________________
(page generated 2025-07-06 23:00 UTC)