[HN Gopher] Graph Theory and Linear Algebra [pdf]
___________________________________________________________________
Graph Theory and Linear Algebra [pdf]
Author : sebg
Score : 170 points
Date : 2022-02-24 15:23 UTC (7 hours ago)
(HTM) web link (www.math.utah.edu)
(TXT) w3m dump (www.math.utah.edu)
| vermarish wrote:
| I opened this pdf expecting to see a section on total
| unimodularity (TU).
|
| For example, if an undirected graph is bipartite, its node-edge
| incidence matrix is TU. Any linear program whose constraint
| matrix is TU (and whose right-hand side is integral) is
| guaranteed to have an integral optimal vertex solution. This
| means the corresponding integer program can be relaxed to a
| linear program without losing anything. Thus, the problems of
| bipartite matching and maximum flow can be easily solved using
| Simplex.
|
| Some of the interesting results are described here:
| https://en.wikipedia.org/wiki/Unimodular_matrix#Examples_of_...
|
| That wikipedia page does not do a great job at presenting the
| optimization connection. These slides are better:
| http://eaton.math.rpi.edu/faculty/Mitchell/courses/matp6620/...
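| To make the optimization connection concrete, here is a minimal
| sketch (the toy graph and solver choice are mine, not from the
| slides): maximum bipartite matching solved as a plain LP with
| scipy. Because the incidence matrix is TU, a simplex-type solver
| returns a 0/1 optimum with no integrality constraints at all.
|
|   import numpy as np
|   from scipy.optimize import linprog
|
|   # Bipartite graph: left = {0,1,2}, right = {3,4,5}.
|   edges = [(0, 3), (0, 4), (1, 4), (2, 4), (2, 5)]
|   n_vertices, n_edges = 6, len(edges)
|
|   # Node-edge incidence matrix: A[v, e] = 1 iff edge e touches v.
|   A = np.zeros((n_vertices, n_edges))
|   for e, (u, v) in enumerate(edges):
|       A[u, e] = 1
|       A[v, e] = 1
|
|   # Maximize sum(x) s.t. A x <= 1, 0 <= x <= 1 (linprog minimizes).
|   res = linprog(c=-np.ones(n_edges), A_ub=A,
|                 b_ub=np.ones(n_vertices),
|                 bounds=[(0, 1)] * n_edges, method="highs-ds")
|   print(res.x)      # 0/1 vector picking a maximum matching
|   print(-res.fun)   # matching size, here 3.0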
| dpflan wrote:
| Perhaps I was wrong to expect an explicit discussion of spectral
| graph theory? Anyway, these are always interesting topics to me.
|
| On a related note: Gilbert Strang (MIT) has a relatively new
| class in the area of Linear Algebra (of course!) and learning
| from data: "Matrix Methods in Data Analysis, Signal Processing,
| and Machine Learning" >
| https://ocw.mit.edu/courses/mathematics/18-065-matrix-method...
|
| With _Part IV: Special Matrices_ discussing the Laplacian and
| Distance matrices as noted in the posted paper. [TOC:
| https://math.mit.edu/~gs/learningfromdata/dsla_toc.pdf]
| hwers wrote:
| The graph Laplacian also has cool applications in computer
| graphics, if anyone is curious (about further connections).
| Think of the mesh of an object as a graph and compute its
| Laplacian. (You get natural unsupervised segmentations of the
| mesh, for instance, if you treat it as a spectral clustering
| problem.) Look up resources by Keenan Crane. If you've played
| with 'smooth normals' in Blender, I believe it uses techniques
| like that as well.
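| A rough sketch of that spectral view, on a toy graph rather than
| a real mesh (the graph and the sign threshold are my own
| choices): the sign pattern of the Fiedler vector of the graph
| Laplacian already gives an unsupervised two-way segmentation.
|
|   import numpy as np
|
|   # Two triangles joined by a single bridge edge.
|   edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
|   A = np.zeros((6, 6))
|   for u, v in edges:
|       A[u, v] = A[v, u] = 1
|
|   L = np.diag(A.sum(axis=1)) - A   # graph Laplacian
|   eigvals, eigvecs = np.linalg.eigh(L)
|   fiedler = eigvecs[:, 1]          # eigenvector of 2nd-smallest eigenvalue
|   print(fiedler > 0)               # sign split recovers the two triangles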
| actually_a_dog wrote:
| You are most definitely not wrong. I would have expected some
| mention of the spectrum of a graph beyond simply defining what
| the eigenvalues are.
|
| Incidentally, one interesting thing about the graph spectrum is
| that it's actually _not_ a very interesting invariant. Among
| other things, almost all trees are cospectral, as proven in the
| following paper by Allen Schwenk in 1973:
| https://www.researchgate.net/publication/245264768_Almost_al...
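| For a concrete, hand-checkable illustration of cospectrality (my
| example, not from Schwenk's paper, and not trees): the star
| K_{1,4} and the disjoint union C_4 + K_1 are non-isomorphic
| graphs on 5 vertices with identical adjacency spectra.
|
|   import numpy as np
|
|   def adjacency(n, edges):
|       A = np.zeros((n, n))
|       for u, v in edges:
|           A[u, v] = A[v, u] = 1
|       return A
|
|   star = adjacency(5, [(0, 1), (0, 2), (0, 3), (0, 4)])
|   cycle_plus_point = adjacency(5, [(0, 1), (1, 2), (2, 3), (3, 0)])
|
|   print(np.linalg.eigvalsh(star))              # approx [-2, 0, 0, 0, 2]
|   print(np.linalg.eigvalsh(cycle_plus_point))  # approx [-2, 0, 0, 0, 2]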
| enriquto wrote:
| > Among other things, almost all trees are cospectral
|
| This does not really mean that the spectrum is not
| interesting, does it? Only that all trees have nearly the
| same spectrum. Sure, there's not much variability inside the
| class of trees, but almost no graphs are trees. After all,
| the matrix associated to a tree is basically a permutation of
| the identity.
| actually_a_dog wrote:
| You're right, of course. I should have said that the
| spectrum isn't all that useful for telling two graphs of
| the same order apart. Certain eigenvalues are frequently
| useful by themselves ( _e.g._ the largest, second largest,
| and smallest eigenvalues often contain some information).
|
| But, speaking of information, consider this: the adjacency
| matrix of a simple, loopless graph of order n is a
| symmetric n x n matrix with all zeroes on the diagonal. That
| means the adjacency matrix contains (literally) (n^2 - n)/2
| bits of information, while the number of connected graphs of
| order n as n -> infinity grows exponentially [0].
| That implies that for the purpose of telling two graphs of
| the same order apart, the spectrum is of fairly limited
| utility, which is what I was thinking of when I wrote that
| it was not an interesting invariant.
|
| ---
|
| [0]: https://users.cecs.anu.edu.au/~bdm/papers/BCM_Connected1.pdf
| Py-o7 wrote:
| I don't understand why you are linking to something about
| _labelled graphs_ when people are (presumably) trying to
| distinguish between graphs _up to isomorphism_, i.e. after
| modding out the labeling.
| dls2016 wrote:
| > Perhaps I was wrong to expect an explicit discussion of
| spectral graph theory?
|
| Agreed... if you're going to write down the Laplacian at least
| demonstrate the null space <-> connected component
| relationship.
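| For what it's worth, that relationship is a two-liner to check
| numerically (the toy graph below is mine): the multiplicity of
| the eigenvalue 0 of L = D - A equals the number of connected
| components.
|
|   import numpy as np
|
|   # A triangle {0,1,2} plus a disjoint edge {3,4}: two components.
|   A = np.zeros((5, 5))
|   for u, v in [(0, 1), (1, 2), (0, 2), (3, 4)]:
|       A[u, v] = A[v, u] = 1
|
|   L = np.diag(A.sum(axis=1)) - A
|   eigvals = np.linalg.eigvalsh(L)
|   print(np.sum(np.isclose(eigvals, 0)))  # 2 zero eigenvalues, one per component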
|
| Edit: looks like maybe an undergraduate paper?
| dpflan wrote:
| Oh, ah, nice find, yeah, if you go down the URL:
|
| https://www.math.utah.edu/~gustafso/s2017/2270/projects-2017.
| ..
|
| Projects from a LA class at UofU.
| vlan121 wrote:
| Ow man, tomorrow exam :/
| nonrandomstring wrote:
| A nice compact guide of some important topics. Thanks for
| posting.
| azxd123 wrote:
| Can someone please explain to me why there are 17 paths of
| length 4 between v1 (1) and itself?
| Syzygies wrote:
| If you're checking that 17 isn't a typo, do a screen grab and
| doodle on the graph. For each vertex, write down how many ways
| you can get there in two steps from v1. There are the same
| number of ways to get back, so you sum the square of these
| numbers. That's the dot product of (3,1,1,1,1,2) with itself,
| 9+1+1+1+1+4 = 17.
|
| The (1,1) entry of A^2 times A^2 (that is, of A^4) is exactly
| this calculation. Matrix multiplication is a dance one sees
| everywhere. It's crippling to teach young minds that matrix
| multiplication is just composing linear functions, when the
| dance shows up in so many other ways.
|
| Path counting is for example at the heart of automata theory in
| computer science. For a finite state machine, the question
| becomes "is there a path?" rather than "how many paths?" Think
| of 0 as false, and any positive integer 1, 2, 3, ... as true,
| and one sees this same dance in automata theory.
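| A tiny script makes the dance explicit (the 4-vertex graph here
| is made up, not the one from the pdf): powers of the adjacency
| matrix count walks, and thresholding the same product answers
| the automata-style "is there a path?" question.
|
|   import numpy as np
|
|   # Adjacency matrix of an undirected graph on 4 vertices.
|   A = np.array([[0, 1, 1, 0],
|                 [1, 0, 1, 1],
|                 [1, 1, 0, 0],
|                 [0, 1, 0, 0]])
|
|   walks4 = np.linalg.matrix_power(A, 4)
|   print(walks4[0, 0])   # 7 closed walks of length 4 at vertex 0
|
|   # Boolean-semiring view: any positive count means "reachable".
|   print(walks4 > 0)     # which pairs are joined by a walk of length 4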
| galaxyLogic wrote:
| What I'm interested in learning more about is hypergraphs. They
| seem closer to many real-world structures than basic graphs.
|
| Yes, social networks are graphs, but think about electrical
| circuits. Those are hypergraphs. See the picture of an
| electrical circuit in:
|
| https://en.wikipedia.org/wiki/Hypergraph
|
| How does Linear Algebra deal with Hypergraphs?
| igorkraw wrote:
| One trivial way is to embed your hyperedges and nodes in a
| bipartite graph. Other ways exist, but they're outside my
| working memory right now.
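| A minimal sketch of that embedding (the toy hypergraph is mine):
| store the hypergraph as a vertex-by-hyperedge incidence matrix
| H; the bipartite "incidence graph" then has the block adjacency
| matrix [[0, H], [H^T, 0]], and all the usual linear algebra
| applies to it.
|
|   import numpy as np
|
|   n_vertices = 4
|   hyperedges = [{0, 1, 2}, {1, 3}, {0, 3}]
|
|   H = np.zeros((n_vertices, len(hyperedges)))
|   for j, e in enumerate(hyperedges):
|       for v in e:
|           H[v, j] = 1
|
|   zero_vv = np.zeros((n_vertices, n_vertices))
|   zero_ee = np.zeros((len(hyperedges), len(hyperedges)))
|   A_bipartite = np.block([[zero_vv, H], [H.T, zero_ee]])
|   print(A_bipartite.shape)   # (7, 7): 4 vertex nodes + 3 hyperedge nodes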
| aghilmort wrote:
| awesome - spent a couple of years deep in this space for my
| dissertation, using graph theory to speed up linear algebra
| computations; lots of room to extend our discrete result into
| approximate / continuous computation and boost AI, GPUs, etc.
|
| a similar result came out around the same time using category
| theory as its basis; our work was intentionally simpler / more
| applied, partly to make it more accessible from an algorithmic
| perspective and also to put it into practice, etc.
|
| * www - https://scholar.afit.edu/etd/2632/
|
| * pdf - https://www.sagemath.org/files/thesis/augeri-thesis-2008.pdf
| orange3xchicken wrote:
| That's really cool. These days similar strategies based on
| graph coarsening and vertex ordering are really popular for
| improving the sparsity pattern of preconditioners, e.g.
| incomplete LU decompositions for iterative linear solvers.
| aghilmort wrote:
| yeah, there's so much room to extend this work in terms of
| graph coarsening / sparsifying, especially for progressive /
| approximate approaches
|
| there's lots of overlap with entropy binning as well, and likely
| some sort of combo for a win -- figure out "max frequency" with
| some sort of FFT / wavelet-like meta technique, or somehow bin
| edges based on entropy contribution, etc.
|
| or, lots of pathways to make it continuous / numerical in a
| JPEG-like way
|
| basically JPEGs for graphs -- with big impacts in AI / ML
| computes
| aghilmort wrote:
| p.s. my user name from that work -- aghilmort is canonical
| alpha sort / scrabble sort of algorithm and logarithm
| voldacar wrote:
| If that was interesting, you might like the book "Linear Algebra
| Methods in Combinatorics"
|
| https://people.cs.uchicago.edu/~laci/CLASS/HANDOUTS-COMB/BaF...
| activatedgeek wrote:
| A question for the more informed:
|
| "What are recent developments in linear algebra?"
|
| Perhaps, we can broaden the scope and include
| numerical/randomized linear algebra.
|
| I come from a machine learning background, and the most recent
| directions I am aware of are in applications of Krylov subspace
| methods to very large numbers of data points and low-precision
| training (both in the context of Gaussian processes).
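| For reference, the basic Krylov workhorse I have in mind is
| plain conjugate gradients: it solves a sparse SPD system using
| only matrix-vector products (toy 1-D Laplacian below, just for
| illustration).
|
|   import numpy as np
|   from scipy.sparse import diags
|   from scipy.sparse.linalg import cg
|
|   n = 1000
|   A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
|   b = np.ones(n)
|
|   x, info = cg(A, b)                      # info == 0 means converged
|   print(info, np.linalg.norm(A @ x - b))  # small residual norm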
| Syzygies wrote:
| I teach linear algebra in a pure math department. We like to
| think that freeing people to be able to daydream about linear
| algebra is our goal, that there's more to it than what function
| calls to make in MATLAB.
|
| Still, you could throw out our course and spend a semester
| teaching people how to really think about singular value
| decomposition. It's everywhere, critical for example in machine
| learning, yet pure math types think it's "applied".
|
| Riemannian geometry begins with an inner product at each point
| (though what's amazing about O'Neill's text is that he slips a
| categorical definition past you at every step). As another
| route up the mountain, one could now understand parallel
| transport as a local version of the singular value
| decomposition, and work out the whole theory that way. This
| requires a fluid, intuitive grasp of singular values, rather
| than the textbook definition. We should teach that
| understanding.
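| To give one concrete handle on "really think about SVD" (my own
| toy check, random data, nothing to do with O'Neill): the
| truncated SVD gives the best low-rank approximation in the
| Frobenius norm (Eckart-Young), and the error is exactly the
| discarded singular values.
|
|   import numpy as np
|
|   rng = np.random.default_rng(0)
|   M = rng.standard_normal((50, 30))
|
|   U, s, Vt = np.linalg.svd(M, full_matrices=False)
|   k = 5
|   M_k = (U[:, :k] * s[:k]) @ Vt[:k, :]  # best rank-5 approximation
|   print(np.linalg.norm(M - M_k))        # Frobenius error ...
|   print(np.sqrt(np.sum(s[k:] ** 2)))    # ... equals sqrt of discarded s_i^2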
| WoahNoun wrote:
| Depends on how you define "linear algebra." Representation
| theory, Lie groups, and Grassmannians can all be considered
| "linear algebra" and are active areas of math research, but it
| probably isn't what you are looking for.
| activatedgeek wrote:
| Fair enough.
|
| I don't quite care about obscure and beautiful theoretical
| problems. I am looking for new developments that have shown
| promise to make big practical impact.
|
| For what it's worth, I am no linear algebra definition police
| so feel free to list what you think is reasonably relevant.
| ogogmad wrote:
| This document has mistakes.
|
| The consequences of Theorem 2.2 are wrong as stated. Substitute k
| = n, and count the number of eigenvalues they say exist.
|
| Caught another mistake:
|
| Section 3.2: _First, an observation: the column sums of Q(G) are
| all zero_ is wrong! Contradicts their definition of Q(G).
___________________________________________________________________
(page generated 2022-02-24 23:00 UTC)