[HN Gopher] Everything is a linear model
___________________________________________________________________
Everything is a linear model
Author : nopipeline
Score : 100 points
Date : 2024-02-18 16:00 UTC (6 hours ago)
(HTM) web link (danielroelfs.com)
(TXT) w3m dump (danielroelfs.com)
| SubiculumCode wrote:
| I find it irritating that the article mentions repeated measures,
| but does not try them, much less a mixed effects model. Yes, they
| are linear models with more parameters, but doing it in lm would
| be a special kind of madness
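|
| A minimal sketch of the contrast in Python's statsmodels; the
| column names (score, condition, subject) and the CSV file are
| made up for illustration:
|
|     import pandas as pd
|     import statsmodels.formula.api as smf
|
|     df = pd.read_csv("repeated_measures.csv")  # hypothetical
|
|     # The "madness" route: one dummy per subject in plain OLS.
|     ols_fit = smf.ols("score ~ condition + C(subject)",
|                       data=df).fit()
|
|     # Mixed-effects model: a random intercept per subject.
|     mixed_fit = smf.mixedlm("score ~ condition", data=df,
|                             groups=df["subject"]).fit()
|     print(mixed_fit.summary())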
| simulo wrote:
| I already knew https://lindeloev.github.io/tests-as-linear/,
| linked in the article, which is also great.
| A bit meta on the widespread use of linear models: "Transcending
| General Linear Reality" by Andrew Abbott, DOI:10.2307/202114
| ivan_ah wrote:
| Here is the Python port of tests-as-linear developed by George
| Ho (eigenfoo): https://github.com/minireference/tests-as-
| linear/blob/bugfix...
|
| I'm linking to my fork of it because I've added some fixes and
| filled in some of the missing parts (e.g. Welch's t-test).
| LifeIsBio wrote:
| I read this article when I was in grad school 5 years ago.
| Absolutely love it and talk about it to this day.
|
| It really makes me frustrated about the ways I was introduced
| to statistics: brute-force memorization of seemingly arbitrary
| formulas.
| btdmaster wrote:
| I thought nonlinearity was essential for making a larger model
| better than a smaller one? So important that tom7 made a
| half-joking demo of it: https://yewtu.be/watch?v=Ae9EKCyI1xU
| epgui wrote:
| Linear models don't need everything to be linear.
| iamcreasy wrote:
| I presume you are implying that a linear model only mandates a
| linear relationship between the predictors and the regression
| coefficients?
| stdbrouw wrote:
| A linear relationship between any transformation of the
| outcome and any transformation of the predictor variables
| -- so the function is linear but the relationship between
| predictors and outcome can take on almost any shape.
| iamcreasy wrote:
| Ah, I missed 'the transformation of outcome' in my mind.
| Thanks for clearing it up.
| hackerlight wrote:
| Linear models are a linear combination of possibly non-
| linear regressors. The linearity is strictly in the
| parameters, not in whatever you're adding up.
|
| A neural network can be pedantically referred to as a
| linear model of the form y = a + b*neural_network, for
| example. Here, y is a linear model (even though
| neural_network isn't).
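|
| A quick numpy sketch of linearity-in-parameters; the sin(x)
| regressor and the coefficients are arbitrary:
|
|     import numpy as np
|
|     rng = np.random.default_rng(0)
|     x = np.linspace(0, 10, 200)
|     y = 2.0 + 0.5*x + 3.0*np.sin(x) + rng.normal(0, 0.3, x.size)
|
|     # sin(x) is a nonlinear regressor, but the model
|     # y = a + b*x + c*sin(x) is linear in (a, b, c).
|     X = np.column_stack([np.ones_like(x), x, np.sin(x)])
|     coef, *_ = np.linalg.lstsq(X, y, rcond=None)
|     print(coef)  # approximately [2.0, 0.5, 3.0]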
| dist-epoch wrote:
| Well, you can create a non-linear model by piece-wise combining
| multiple linear models.
|
| The famous ReLU non-linearity is just that - two linear
| functions joined.
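|
| A minimal sketch of that trick as an ordinary linear model: use
| ReLU hinges max(0, x - k) at made-up knots k as regressors and
| fit with least squares.
|
|     import numpy as np
|
|     x = np.linspace(-3, 3, 301)
|     y = np.abs(x)  # a kinked target: two linear pieces
|
|     knots = [-2.0, -1.0, 0.0, 1.0, 2.0]  # arbitrary choices
|     X = np.column_stack([np.ones_like(x), x] +
|                         [np.maximum(0.0, x - k) for k in knots])
|     coef, *_ = np.linalg.lstsq(X, y, rcond=None)
|     print(np.abs(X @ coef - y).max())  # ~0: |x| = -x + 2*relu(x)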
| nyrikki wrote:
| Same thing with any feed-forward network: they are all
| piecewise linear with respect to their inputs.
|
| Layers reduce resource requirements and make some patterns
| easier or even practical to find, but any feed-forward ANN
| used for supervised learning could be represented as a
| parametric linear regression.
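|
| A numpy check of the piecewise-linear claim: inputs sharing one
| ReLU activation pattern see the exact same affine map (the
| weights are random and purely illustrative):
|
|     import numpy as np
|
|     rng = np.random.default_rng(1)
|     W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
|     W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)
|
|     def net(x):
|         return W2 @ np.maximum(0.0, W1 @ x + b1) + b2
|
|     x = rng.normal(size=2)
|     mask = (W1 @ x + b1) > 0          # activation pattern at x
|     A = W2 @ (W1 * mask[:, None])     # local linear map
|     c = W2 @ (b1 * mask) + b2
|     eps = 1e-4 * rng.normal(size=2)   # small: same region
|     print(net(x + eps) - (A @ (x + eps) + c))  # ~0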
|
| Unsupervised learning, which tends to use clustering, is
| harder to visualize, but it is the same thing.
|
| You still have ANNs with binary output, which can be viewed
| through the lens of deciders; they have to have unique
| successor and predecessor functions.
|
| Really this is just set shattering, which relates to the finite
| VC dimension required for something to be PAC-learnable.
|
| But the title here mistakes the map for the territory. It
| isn't that 'everything is a linear model' but that linear
| models are the preferred, most practical form.
|
| Efforts to leverage spiking neural networks, which are a more
| realistic model of cortical neurons and have continuous output
| (or, more correctly, output in the computable reals), tend to
| run into problems like riddled basins.
|
| https://arxiv.org/abs/1711.02160
|
| Obviously, defining the rectified linear unit's derivative at
| 0 to be 1 resolves the differentiation problem, but many
| functions may not be so simple.
|
| Perhaps a useful lens is how TSP with a discrete Euclidean
| metric is NP-complete while the continuous version is NP-hard.
|
| But it isn't that everything is linearizable, but rather that
| linearized problems tend to be the most practical.
| tnecniv wrote:
| Nonlinear things start looking like linear things again in very
| high dimensions
| nyrikki wrote:
| Only when your dimensions are truly independent, and that is a
| stretch. Really what you are saying is that you are more
| likely to find a field for your problem, and fields don't
| exist in more than 2 dimensions.
|
| Consider predator-prey with fear and refuge, which is
| indeterminate, not due to a lack of precision but due to a
| topological feature where >=3 open sets share the same
| boundary set.
|
| https://www.sciencedirect.com/science/article/abs/pii/S09600.
| ..
|
| General relativity, with three spatial and one temporal
| dimension, is another. One lens to consider this through is
| that rotations are hyperbolic due to the lack of independence
| from the time dimension.
|
| Quantum mechanics would have been much more difficult if it
| didn't have two exit basins, which is similar to ANNs and
| linear regressions having binary output.
|
| (Some exceptions exist with orthogonal dimensions, like EM.)
| dboreham wrote:
| Hmm...I thought everything was an Eigenfunction.
| rzzzt wrote:
| Applying basic syllogism, Eigenfunction is securities fraud.
| ggm wrote:
| If you bundle up enough securities of dubious value, you
| create less-dubious value higher than the sum of the parts.
| jwilber wrote:
| Another fun "stats X is really Y":
|
| Estimating the Area Under the Curve (AUC) metric is equivalent
| to the Wilcoxon-Mann-Whitney test!
|
| https://rmets.onlinelibrary.wiley.com/doi/abs/10.1256/003590...
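|
| A quick numeric check of the equivalence, assuming continuous
| (tie-free) scores:
|
|     import numpy as np
|     from scipy.stats import mannwhitneyu
|     from sklearn.metrics import roc_auc_score
|
|     rng = np.random.default_rng(0)
|     y = np.repeat([0, 1], [100, 80])
|     s = np.concatenate([rng.normal(0, 1, 100),
|                         rng.normal(1, 1, 80)])
|
|     auc = roc_auc_score(y, s)
|     u = mannwhitneyu(s[y == 1], s[y == 0]).statistic
|     print(auc, u / (100 * 80))  # identical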
| t_mann wrote:
| Statistics is more than hypothesis testing, but you'll get
| surprisingly far without straying too far from linear models - I
| remember a Stats prof saying 'most of classical Statistics is GLM
| [0]'
|
| [0] https://en.wikipedia.org/wiki/Generalized_linear_model
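|
| For instance, logistic regression is the same kind of linear
| predictor pushed through a logit link; a toy statsmodels sketch
| with made-up coefficients:
|
|     import numpy as np
|     import statsmodels.api as sm
|
|     rng = np.random.default_rng(0)
|     x = rng.normal(size=500)
|     p = 1 / (1 + np.exp(-(0.5 + 2.0 * x)))  # inverse logit
|     y = rng.binomial(1, p)
|
|     fit = sm.GLM(y, sm.add_constant(x),
|                  family=sm.families.Binomial()).fit()
|     print(fit.params)  # roughly [0.5, 2.0]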
| lanstin wrote:
| Which means it's really all just finding hyperplanes that are
| near the data.
| sebastianavina wrote:
| "Classification of mathematical problems as linear and
| nonlinear is like classification of the Universe as bananas and
| non-bananas. "
|
| and everything turns around the same principles. For example
| dynamical models and PID controls.
|
| yet solving a banana, is the only thing we really know how to
| do. So we end up fitting everything in our banana models.
| Tommah wrote:
| Linear algebra was the first math class I took in undergrad.
| I thought the next one was going to be non-linear algebra!
| But it wasn't.
| whatshisface wrote:
| I disagree with the implication that linearity is an unnatural
| concept: it appears whenever the changes being studied are
| small relative to the key parameters that determine the
| system. _Every_ system is linear for small perturbations. Even
| logic gates: with negative feedback they can form passable
| inverting amplifiers. In a place as big as the universe it is
| rather common for two things to be very different in scale and
| yet still interact.
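|
| A quick numeric illustration with the pendulum's
| sin(theta) ~= theta:
|
|     import numpy as np
|
|     theta = np.array([0.3, 0.1, 0.03, 0.01])
|     # relative error of the linear model shrinks like theta**2
|     print(np.abs(np.sin(theta) - theta) / np.sin(theta))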
| notjoemama wrote:
| So, Lady Finger, not Cavendish. Got it.
| szundi wrote:
| Interesting perspective to see the world from, thanks
| 3abiton wrote:
| It's true, until it's not. It's easy to make these claims, but
| would you bet your money on it?
| ofrzeta wrote:
| I thought everything was a power function.
| smitty1e wrote:
| I thought everything was a graph.
| optimalsolver wrote:
| Nah, it's all just ifs and for-loops:
|
| https://www.reddit.com/media?url=https%3A%2F%2Fi.redd.it%2F4...
| sva_ wrote:
| (2022)
| chmaynard wrote:
| RSS feed: https://danielroelfs.com/blog/index.xml
___________________________________________________________________
(page generated 2024-02-18 23:00 UTC)