[HN Gopher] Turing's topological proof that every written alphabet is finite
___________________________________________________________________
Turing's topological proof that every written alphabet is finite
(2010)
Author : 082349872349872
Score : 128 points
Date : 2024-07-23 17:43 UTC (1 day ago)
(HTM) web link (divisbyzero.com)
(TXT) w3m dump (divisbyzero.com)
| ainoobler wrote:
| Interesting argument. This assumes that cognition must also be
| happening on a compact manifold, which seems like a reasonable
| assumption, but the conclusion is somewhat counterintuitive: it
| means there are only finitely many personality types and ways of
| thinking.
| Vecr wrote:
| That's how the math works out in general unless you think some
| kind of soul exists.
| ainoobler wrote:
| It all follows from compactness so if the cognitive manifold
| is not compact then the conclusion is not true.
| andoando wrote:
| At this point I kind of do. The matter that goes into forming
| your body and brain is somewhat special, having accumulated
| the properties it has over billions of years of traveling
| through the universe.
|
| Once you die it decomposes and goes on its way, and forms
| something new.
|
| It's not a "soul" in the same sense, but still pretty trippy.
| Smaug123 wrote:
| Is that counterintuitive? There's a finite number of electrons
| in a human brain (which fits in a space of size 1m^3), and
| information takes time to propagate across any distance, and
| humans die before 150 years; this all gestures at there being
| not only finitely many personality types and ways of thinking,
| but finitely many _human mind states_.
| ainoobler wrote:
| I guess that means reincarnation is real.
| Smaug123 wrote:
| Assuming you're not trolling: nope, that would be an
| empirical question of how many brain states ever _actually
| arise_ in the world during the time humans exist, and how
| much space and time it takes to maintain a given brain
| state. The universe is currently expected to stop being
| able to support human life at some point in the future, so
| the pigeonhole principle argument requires some physical
| parameters to be known before you can apply it; and even if
| the pigeonhole principle did apply, you still can't know
| by that argument whether any _given_ brain-state is
| repeated (and if so, how many times). Things which you can
| deduce using only mathematics, and no physical data, are
| necessarily _extremely_ weak statements about reality.
| Y_Y wrote:
| If you are a mental clone but not causally linked, is
| that still reincarnation?
| ainoobler wrote:
| The universe is mathematical, all we can ever know about
| it is mathematical.
| Smaug123 wrote:
| A very strong empirical statement about the nature of
| reality, and a strong statement about the nature of
| mathematics, neither of which everyone agrees with! (What
| will your reply be if we discover a halting oracle at the
| centre of the galaxy? _Mathematics_ doesn't forbid that:
| the Church-Turing thesis is an _empirical_ statement, and
| it's the empirical _scientific law of induction_ that
| tells us we're extremely unlikely to find a halting
| oracle.)
|
| But you've missed the point: even assuming that
| mathematics can perfectly model the universe, you cannot
| use weak physical hypotheses ("human brains are finite,
| there is an injection from human mind-states into human
| brains") and fully general mathematics (the pigeonhole
| principle) to derive such specific strong truths as "a
| particular human mind-state is repeated in a predictable
| way in the universe". You can _at best_ derive general
| truths such as "at least one human mind-state is
| repeated at least once, somewhere", if the universe
| happens to have its physical parameters set in such a way
| that the pigeonhole principle holds.
| throwaway43567 wrote:
| We do have
| https://en.m.wikipedia.org/wiki/Boltzmann_brain
|
| That may appear every 10^(10^50) years according to
|
| https://en.m.wikipedia.org/wiki/Timeline_of_the_far_future
| Y_Y wrote:
| But there may yet be more brain states than possible
| thoughts among the finitely many humans.
| Smaug123 wrote:
| Oh, 100%, it seems _extremely_ unlikely that multiple brain
| states in this sense don't code for the same thought! If
| nothing else, you could only avoid that with an absurdly
| precise definition of "brain"; any practical definition of
| "brain" is very likely to include matter that is physically
| irrelevant to at least _some_ thoughts. I hedged with
| "1m^3" because the argument goes through even if you take
| the entire body to be the brain.
| tempfile wrote:
| Who said this had anything to do with cognition at all? I think
| Turing's argument goes through with humans replaced by automata
| and eyes by cameras. It's just that perception has finite
| resolution.
| ainoobler wrote:
| I made the obvious connection. It's an original thought from
| me and no one else.
| aiauthoritydev wrote:
| Well, this is true only if you think of cognition and thought
| as a purely biological process totally dependent on the state
| of the brain.
| ainoobler wrote:
| I don't think that. I'm just explaining what follows
| logically if the cognitive manifold is compact.
| nxobject wrote:
| Just a reading note to myself: the key assumption is that there
| is a "resolving limit" \epsilon such that symbols that are
| "closer and smaller" than \epsilon are indistinguishable. The
| implicit assumption there is that you can resize symbols while
| keeping them the same.
| montecarl wrote:
| Isn't it even easier to just say: given a fixed area of paper
| and a finite printing resolution, there is only a finite number
| of possible symbols?
|
| Say you have a 1 in by 1 in area of paper and a printing
| resolution of 300 DPI; then there are 300*300 total dots. If the
| printer is monochrome and each dot is either black or white then
| there are 2^(300^2) possible symbols.
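The count above is easy to sanity-check in a few lines of Python (a sketch; the 1 in by 1 in patch and the 300 DPI figure are the numbers from the comment above):

```python
import math

dots = 300 * 300          # dots in a 1 in x 1 in patch at 300 DPI
symbols = 2 ** dots       # each dot is black or white

# Finite, but astronomically large: count its decimal digits two ways.
digits = math.floor(dots * math.log10(2)) + 1
assert len(str(symbols)) == digits
print(digits)  # 27093 decimal digits
```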
| grvbck wrote:
| Next week on Show HN: I made a fractal font with unlimited
| scaling
| perihelions wrote:
| You can take a glyph from that font, cut up and rearrange it,
| and come out with two copies of the original size. Saves
| money on printer ink
| ashton314 wrote:
| This is a deep joke and hilarious. Thank you for sneaking
| in a Banach-Tarski reference here
| myhf wrote:
| assuming you are willing to spend an infinite amount of
| time doing the cutting
| byteknight wrote:
| I smell a supertask
| tempfile wrote:
| Turing actually addressed this in his argument!
|
| > If these sets are restricted to be measurable
|
| measurable sets are not Banach-Tarski-able :)
| kazinator wrote:
| The closed-form math formulas giving the glyphs of the font
| have to be written on a square piece of paper with limited
| resolution ...
| daedalus_j wrote:
| The Minds from The Culture scifi series have something like
| this... The top level glyph is the base meaning and you can
| encode essentially infinite depth of meaning beyond that...
| Of course you have to go multidimensional at some point, so
| you can't really print it out beyond a few layers of
| resolution...
| flir wrote:
| This reminds me of a design for analogue TV from the late
| 80s/early 90s. Instead of the electron gun scanning in
| horizontal lines, it follows a space-filling, non-crossing,
| self-similar curve. Want to upgrade the resolution? Just
| build a TV that renders the curve one generation deeper,
| and stuff more data into the analogue broadcast signal.
| Older TVs can still render the new signal, maybe with a
| slightly higher noise floor.
|
| (Anyone know what I'm talking about? I assume it was
| obsoleted by digital solutions, but I'd like to know if
| there were any serious flaws in the design).
| namanyayg wrote:
| The concept you're talking about is a Hilbert Curve. But
| this is the first time I've heard it applied to analogue
| TV.
|
| https://en.m.wikipedia.org/wiki/Space-filling_curve
| flir wrote:
| Yeah, in my memory the pattern was specifically a Hilbert
| Curve but rotated 45deg (that can't be right, can it?
| That would just make things harder). I must have seen it
| between 1987 and 1997.
|
| Ah! Found this:
| https://www.ripcorddesigns.com/blog-news/who-needs-rastas-an...
| - there's even a patent! I was searching on space-filling
| curve, not Hilbert curve.
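For anyone who wants to experiment, here is the standard iterative index-to-coordinate conversion for a Hilbert curve (a sketch of the textbook algorithm, not anything taken from the patent above). Consecutive indices always land on adjacent grid cells, which is what makes the "render the curve one generation deeper" upgrade path work:

```python
def d2xy(n, d):
    """Map index d (0 <= d < n*n) along the Hilbert curve to (x, y)
    on an n-by-n grid; n must be a power of two."""
    x = y = 0
    s, t = 1, d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:          # rotate/flip the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

path = [d2xy(4, d) for d in range(16)]
# Each step moves exactly one cell, and all 16 cells are visited.
assert all(abs(ax - bx) + abs(ay - by) == 1
           for (ax, ay), (bx, by) in zip(path, path[1:]))
```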
| FredPret wrote:
| I think arcade games worked like that. Looked it up and
| oscilloscopes do, too:
|
| https://en.m.wikipedia.org/wiki/Vector_monitor
| Smaug123 wrote:
| That works if you're fixed to a grid, but the argument is much
| more general: it allows symbols to be of arbitrary size (as
| long as they're still within the unit square), in any
| orientation, formed by continuous or (not-too-pathological)
| discontinuous penstrokes, etc.
| Y_Y wrote:
| It doesn't, however, allow for non-closed symbols. I can
| imagine a spiralling brushstroke that gets fainter as it
| approaches the centre. Maybe the proof can be strengthened,
| and it certainly won't pass the distinguishability criterion,
| but we must be rigorous, here if anywhere.
| dinkelberg wrote:
| Why should it not allow for your spiralling brushstroke?
| You can divide the spiral into the strokes corresponding to
| each turn around the center. We can assume that each of
| these strokes is measurable (otherwise we would not need
| the construction of the spiral). Then the entire spiral is
| just a countable union of those measurable strokes, thus it
| is measurable itself.
| Y_Y wrote:
| For the proof given in TFA they use the assumption that
| the symbol is compact and hence closed. I argue that the
| symbol needn't be closed, it may still be measurable.
| clintonc wrote:
| > here if anywhere
|
| :-D
|
| We can play a lot of games with "what is a symbol", but
| compactness pervades many of the models that we use to
| describe reality. The crux of the argument is not
| necessarily that the symbols themselves are compact as
| sets, but that the *space of possible descriptions* is
| compact. In the article, the space of descriptions is
| (compact) subsets of a (compact) two-dimensional space,
| which (delightfully) is compact in the appropriate
| topology.
|
| In your example, the symbols themselves could instead be
| modeled as a function f:[0,1]^2 -> [0,1] that is "upper
| semicontinuous", which when appropriately topologized is
| seen to be compact; in particular, every infinite sequence
| must have a subsequence that converges to another upper
| semicontinuous function.
|
| Much of the fun here comes from the Tychonoff theorem,
| which says that arbitrary products of compact spaces are
| compact. Since the *measurement space* is compact, the
| topology of the domain is not as important, as long as the
| product topology on the function space is the appropriate
| one. (Mystically, it almost always is.)
| Y_Y wrote:
| The proof defines F(X) as the set of compact subsets of X
| (here the unit square). The compactness of F(X) follows
| (necessarily?) from the compactness of its elements, so we
| need to find a topology where all our allowable symbols
| are compact, and this is not the standard topology if you
| allow the spiral. If you take another topology
| (equivalently space of symbol descriptions) then you
| again must show that this compactness still corresponds
| to the desired notion of "distinguishability".
|
| My topology is rusty and I don't genuinely doubt the
| validity of the argument, but I'm having fun trying to
| poke holes.
| gus_massa wrote:
| The proof doesn't assume that they are formed by not-too-
| pathological penstrokes.
|
| <wrong>The idea is that you should measure the amount of
| black and white ink needed to change a symbol into another
| symbol, and if the total amount of ink is less than e then they
| are indistinguishable. (Where e is some constant you must
| choose for the whole system.)</wrong>
|
| I think that every horrible-totally-pathological ink splash
| has a nice indistinguishable version, but my real analysis is
| a bit rusty, and there are a few horrible things that I may
| have forgotten.
|
| Edit: see comment below.
| Smaug123 wrote:
| By "not-too-pathological" I intended at least to include
| the requirement "measurable".
| gus_massa wrote:
| I just noticed that I used the wrong metric. The article
| uses the Hausdorff metric and I used the measure of the
| symmetric difference.
|
| And the article assumes that the sets are compact, so they
| are measurable as you say. Anyway, compact sets can be
| quite pathological (but not as pathological as non-
| measurable sets).
| tel wrote:
| It also suggests the argument generalizes to symbols as
| non-compact sets.
| clintonc wrote:
| I made a comment elsewhere on this thread that explains
| that symbols themselves being compact isn't so important,
| but that the set of descriptions of the symbols must be
| compact. For example, if the description of the symbol is
| not the symbol itself as a set, but a map f:[0,1]^2 ->
| [0,1] that describes the "intensity" of ink at each
| point, then the natural conclusion is that the
| description of a symbol must be upper semicontinuous,
| which makes the set of descriptions compact.
| ertgbnm wrote:
| Yes, it's easier and not too different from Turing's argument.
| However, Turing made this proof in 1936, you know, before the
| concept of pixels existed.
| Y_Y wrote:
| Not far off though, "pix" was in use by at least 1932, and
| "pixel" by 1956[0].
|
| [0] https://en.wikipedia.org/wiki/Pixel
| kragen wrote:
| nyquist's sampling theorem is from 01915. western union
| started offering wirephoto service in 01921, and the
| associated press started mass distribution of news photos in
| 01935. these aren't discrete pixels but they did demonstrate
| that nyquist's sampling theorem applies to images too
| eru wrote:
| If you want to use that sampling theorem, you have to bring
| up frequencies. Explaining those and how they apply to
| printer's ink is arguably more complicated than what Turing
| did; especially for his contemporary target audience.
|
| The argument from pixels is more natural to us these days,
| that's why we think it's simpler.
|
| (I had similar reactions to many other older style proofs
| when I was studying math.)
| kragen wrote:
| fair point
| tempfile wrote:
| Approximating an arbitrary shape by covering it with tiny
| squares is not exactly new. The method of exhaustion goes
| back to 200BC.
| throwawayffffas wrote:
| The Planck length has been a thing since 1899.
| kmoser wrote:
| The article ends by noting that even with colored ink, the
| number of distinguishable symbols is finite. Indeed, if there
| is a limit to the number of values held by each of the
| variables, that would limit what you could write.
|
| If we're looking for theoretical ways around that constraint,
| you could allow inks that change over time, so you would have
| to observe each character for a certain amount of time in order
| to see what it was representing as its ink changed (or didn't).
| codeflo wrote:
| If the observation time is finite, and the time resolution of
| the observation is limited, then you'd still only have a
| finite number of distinguishable symbols, wouldn't you?
| kmoser wrote:
| That's true if the time resolution is limited.
| Theoretically there could be no limit. It would not make
| for a very _practical_ alphabet, but it would still be an
| unlimited alphabet.
| passion__desire wrote:
| For some reason, I am thinking about the green-blue and grue-
| bleen paradox.
|
| https://en.wikipedia.org/wiki/New_riddle_of_induction
|
| The "New Riddle of Induction," proposed by Nelson Goodman in
| 1955, challenges our understanding of inductive reasoning and
| the formation of scientific hypotheses. Goodman introduced
| the concept through his famous "grue" example. Imagine a
| property "grue," defined as "green up to time t, and blue
| thereafter." All emeralds examined before time t are both
| green and grue. The riddle asks: Why is it rational to
| predict that future emeralds will be green rather than grue?
|
| This paradox highlights the problem of choosing between
| competing hypotheses that equally fit past observations. It
| questions our basis for preferring "natural" predicates (like
| green) over "artificial" ones (like grue) in inductive
| reasoning. Goodman's riddle challenges the idea that
| induction is based solely on observed regularities. It
| suggests that our choice of predicates in forming hypotheses
| is influenced by factors beyond mere observation, such as
| simplicity, familiarity, or projectibility.
|
| The New Riddle of Induction has significant implications for
| philosophy of science, epistemology, and artificial
| intelligence. It raises questions about the foundations of
| scientific prediction, the nature of natural kinds, and the
| role of language in shaping our understanding of the world.
|
| Related :
|
| https://x.com/eshear/status/1812926436623413285
|
| There are crystal structures that simply won't form anymore,
| even though they did a few decades ago. Real life Ice-9?
| Could make an incredible sci-fi book.
| mhink wrote:
| That link is absolutely fascinating, and would definitely
| make for a great sci-fi concept!
| kazinator wrote:
| You also have to argue for finite color resolution, or else a
| color noise floor that masks the difference between similar
| colors.
|
| If we have 300 dpi, we can still code unlimited amounts of
| information if the color space is continuous, and noise-free.
|
| (The article mentions the human eye limitations, but we could
| use technological instrumentation to extract info from a print;
| the human eye doesn't limit what we can code on a BluRay disc,
| e.g.)
| refulgentis wrote:
| Print resolution is far higher than 300 DPI, floor for the
| cheapest is 1000, most will do 4K. (not a correction, I am only
| sharing this because I think of this often because it seems
| leverage-able for something unique, and HN is a good place to
| share that sort of thing, in case someone is inspired)
| snthpy wrote:
| I totally agree with your argument. However just to throw out
| an idea for a possible counterexample (as mathematicians like
| to do):
|
| What if you used aperiodic Penrose tiles and could detect
| intensity? Would it be possible to encode anything in that?
| There would be no repetition but when would you hit limits of
| discernability with all the overwriting?
| snthpy wrote:
| I mean the number of tiles is finite but what if the alphabet
| was the encoded in the geometrical arrangement of the tiles?
| snthpy wrote:
| Ignore me. That's no different to having infinite length
| strings with a finite alphabet.
| bamboozled wrote:
| I've thought about melodies in music too. Obviously, the number
| of melodies possible is extremely vast. But in 4/4 time, there
| must be a limit?
| SideburnsOfDoom wrote:
| Not only is there a limit, they have been enumerated. And
| since this is the music industry, copyrighted.
|
| https://www.hypebot.com/hypebot/2020/02/every-possible-melod...
| seanhunter wrote:
| The number of notes per bar is not really finite given you
| can make any a:b polyrhythm and you can make any subdivision
| you like. For example, see the famous falling coin rhythm
| based on the Fibonacci series in Bartok's "Music for Strings,
| Percussion and Celeste"[1]
|
| [1] If you want a look at what "Pyramids on Mars"-style
| musical analysis is like, Erno Lendvai's book about Bartok is
| pretty awesome. Since Bartok never really wrote or said
| anything about his compositional methods it's very hard to
| prove anything conclusively.
| tempfile wrote:
| Turing's argument says "conditionally compact" but the article
| talks only about "compact". They are not the same; "conditionally
| compact" means every sequence has a Cauchy subsequence, while
| (sequentially) compact means every sequence has a convergent
| subsequence. If the metric space Turing mentions is complete,
| then I guess they're the same. Is that obvious?
| dbfclark wrote:
| Convergent sequences are always Cauchy; for metric spaces,
| compactness and sequential compactness are the same.
| Smaug123 wrote:
| I think the question is more like:
|
| > Turing says that a certain space, the space of all compact
| subsets of [0,1]^2 endowed with the metric "integral of
| minimal distance required to transform {1 ink at each point
| of P1} u {infinite amount of ink at (2, 0)} into {1 ink at
| each point of P2} u {infinite amount of ink at (2,0)}", is
| conditionally-compact. How is that related to the article's
| argument?
|
| This is _not_ obvious, I think. The article has moved away
| from Turing's "integral of the distance we have to transfer
| ink", instead using "maximum distance we have to transfer any
| ink", and I don't have a great intuition for whether this is
| a legit transformation of the argument. (I'm sure both proofs
| are correct, but it's not obvious to me that they are the
| _same_ proof.)
| tempfile wrote:
| Yes and yes, but _conditional_ compactness is different, and
| Cauchy sequences are not always convergent. That's why I
| mentioned completeness.
| bubblyworld wrote:
| Given any compact metric space $(M, d)$ (which in the article
| is the closed unit square), the set of non-empty compact
| subsets of $M$ under the Hausdorff metric $h$ is also a compact
| metric space.
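For concreteness, the Hausdorff distance between two finite point sets (a crude stand-in for two inked symbols; this is an illustrative sketch, not the article's construction) can be computed directly:

```python
def hausdorff(A, B):
    """Hausdorff distance between two finite sets of 2-D points:
    the farthest any point of one set is from the other set."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    d_ab = max(min(dist(a, b) for b in B) for a in A)
    d_ba = max(min(dist(a, b) for a in A) for b in B)
    return max(d_ab, d_ba)

# A horizontal stroke, and the same stroke with one stray blot:
stroke = [(x / 10, 0.0) for x in range(11)]
blot = stroke + [(0.5, 0.3)]
# The stray blot dominates the distance, even though the two sets
# share almost all their points.
print(hausdorff(stroke, blot))
```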
| tempfile wrote:
| Are the symbols assumed to be compact? I guess their closures
| obviously are, and the closure of a symbol should be
| indistinguishable from the symbol.
| bubblyworld wrote:
| Yeah, seems like Turing _defines_ a symbol to be a compact
| subset of the unit square. Whether that makes sense or not
| to you is up to interpretation I guess, but like you say
| the closure should be more or less the same as the symbol
| for "reasonable" symbols.
|
| You then need to assume that each symbol has some variation
| in how it's written, I think, which you can think of as an
| open set in the symbol-space (which is compact under $h$ as
| per my previous comment). The collection of all these opens
| forms a cover of the symbol-space, which by compactness
| must have a finite sub-cover (Turing's "alphabet").
|
| A very elegant bit of philosophy imo =)
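The finite-subcover step sketched above can be written out; this is a paraphrase of the comment's argument (not Turing's original wording), with X the unit square and h the Hausdorff metric:

```latex
\[
  \mathcal{F}(X) = \{\, S \subseteq X : S \text{ non-empty, compact} \,\},
  \qquad (\mathcal{F}(X), h) \text{ compact.}
\]
\[
  \mathcal{F}(X) \subseteq \bigcup_{S \in \mathcal{F}(X)}
    B_h\!\left(S, \tfrac{\epsilon}{2}\right)
  \;\Longrightarrow\;
  \mathcal{F}(X) \subseteq \bigcup_{i=1}^{N}
    B_h\!\left(S_i, \tfrac{\epsilon}{2}\right)
  \quad \text{(finite subcover).}
\]
```

Any two symbols in the same ball are within epsilon of each other and hence indistinguishable, so at most N symbols are pairwise distinguishable: a finite alphabet.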
| thriftwy wrote:
| I don't agree. What if the next symbol's meaning conditionally
| depends on the previous ones?
|
| For example, in "1937" the third glyph is the digit three, but
| in VAZ (Cyrillic "ВАЗ") the third glyph is the Cyrillic letter
| Ze (З). They look the same in writing.
|
| It would not be hard to devise a symbolic system where new
| symbols are context dependent, and there are infinitely many of
| these depending on the context. The context will be governed by a
| finite number of rules. The glyphs will not change meaning in a
| specific text.
|
| Human language is an example of such system. If you treat words
| as a whole as symbols, you can have infinitely many different
| words as they become defined via the specific text, but obviously
| you can't have infinitely many different words in a single text
| because they will stop being mutually intelligible at some point.
| Smaug123 wrote:
| Of course you can have infinitely many different _semantics_ ,
| but this is a question about available _syntaxes_. (Trivial
| proof: for each n, take the semantics S_n which assigns to the
| symbol "X" the meaning "integer n", just as English usually
| assigns to the symbol "3" the meaning "integer 3".)
| thriftwy wrote:
| I'm not sure I follow. Natural languages usually have very
| simple syntax. The Greeks had great literature while still in
| "boustrophedon with no whitespace" phase.
|
| The number of possible symbol meanings may be unbound. And
| there may be symbol definition rules so that any effective
| symbol set used to write a text is still plausibly an
| alphabet. Chinese actually does fit the bill if you are
| allowed to define new valid characters according to best
| practices.
| Smaug123 wrote:
| You're talking entirely about the meaning of symbols,
| right. That's not a question Turing addressed at all, and
| the answer is trivially "there are infinitely many possible
| semantics". Turing only addressed the question of how many
| physically distinct symbols there could be, and came up
| with the result "finitely many".
| tgv wrote:
| The alphabet itself would still be finite, like our vocabulary.
| thriftwy wrote:
| Our vocabulary is only statistically finite. The number of
| words in a dictionary is finite; the number of words used
| across all speakers is not.
| taejo wrote:
| The Turing machine has a memory, it can absolutely cope with
| symbols meaning something different depending on what appears
| before them. The trouble is that it can't distinguish between
| infinitely many distinct symbols on the paper tape. Turing here
| shows that a human can't do that either, so that is not a way
| in which humans have more computational power than a Turing
| machine.
| thriftwy wrote:
| Not being able to distinguish infinitely many symbols on a
| single tape does not mean there is a bound on the number of
| meanings that may be assigned to symbols.
| k92mueller wrote:
| I have no idea about topology, so sorry if my question is stupid:
| Assume we are limited to the unit square (0<=x<=1 and 0<=y<=1).
| Now I map every real number in [0,1] to a symbol by simply
| defining the symbol for the number r as the point x=r, y=r. This
| results in an uncountably infinite number of symbols. Now it's
| clearly not possible to describe every one of those new symbols
| with a finite number of symbols from a finite alphabet (because
| otherwise this would also be possible for real numbers). Where
| is my intuition wrong?
| ainoobler wrote:
| How can you tell the difference between (r,r) and (r+e,r+e)?
| The argument in the article assumes that there is an e such
| that it is impossible to tell the difference between r and r+e.
| So this means that the entire square can be covered by squares
| of side length e and since the unit square is compact this
| means that there are only finitely many e squares required to
| cover the entire unit square. Variation within the e squares is
| imperceptible so this means there are only finitely many
| symbols that can be perceived to be different.
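The counting step of that covering argument is elementary; a minimal sketch (the epsilon here is an arbitrary illustrative value, not something from the article):

```python
import math

def cover_count(eps):
    """Number of eps-by-eps squares needed to tile the unit square."""
    per_side = math.ceil(1 / eps)
    return per_side ** 2

# If variation below eps = 0.05 is imperceptible, finitely many
# (here 20 x 20 = 400) cells already cover the whole square.
print(cover_count(0.05))
```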
| constantcrying wrote:
| >we are limited to the unit square (0<=x<=1 and 0<=y<=1). Now I
| map every real number in [0,1] to a symbol by simply defining
| the symbol for the number r as x=r and y=r.
|
| Every symbol has measure zero and so every symbol is identical.
| kazinator wrote:
| > _The human eye cannot tell two symbols apart when they are too
| similar._
|
| That leads to a simple information-theory based argument. There
| is a limited spatial resolution and noise floor, and so the
| amount of information that can be coded is finite.
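That bound is easy to state concretely; a sketch with made-up numbers (the 300x300 grid echoes an earlier comment, and the 16 intensity levels are a hypothetical noise floor, not a measured value):

```python
import math

def patch_capacity_bits(cells, levels):
    """Information a patch can carry: `cells` resolvable positions,
    each distinguishing `levels` intensity values above the noise."""
    return cells * math.log2(levels)

# 300 x 300 resolvable dots, noise floor leaving 16 levels per dot:
bits = patch_capacity_bits(300 * 300, 16)
print(bits)  # 360000.0, i.e. at most 2^360000 distinct symbols
```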
| schoen wrote:
| I feel like this fact from information theory also requires its
| own proof, and one strategy for that is likely to be similar to
| Turing's!
| fiforpg wrote:
| From a quick reading of the footnote, it seems that Turing uses
| optimal transport distance rather than the Hausdorff distance.
| Convergence in the latter implies convergence in the former, but
| not the other way round.
| ridiculous_fish wrote:
| Why do we need the property that X is compact? Say we define X
| (the square) to exclude the boundary: X = (0,1)x(0,1). This is no
| longer a compact set, but it seems silly for the proof to now
| fail.
|
| Is boundedness sufficient?
| NotYourLawyer wrote:
| "Farthest" point doesn't work if you drop compactness.
| meroes wrote:
| Isn't this just taking a common sense concept and adapting it to
| the concepts and language of Turing machines and also topology?
|
| While yes it shows these are good tools, capable of
| confirming/proving these intuitions and showing exactly how these
| specific systems achieve that, is there ever any concern we are
| making too many redundant concepts? That may lead to confusion
| and obfuscate the search for novel ideas, which I think won't
| always happen through said tools. It's not like these kind of
| intuitions were on shaky grounds or could be disproven.
|
| I wonder if anyone shares these opinions?
| bmitc wrote:
| What is common sense about it?
|
| > is there ever any concern we are making too many redundant
| concepts
|
| I need to read the contextual work, but I think it's actually
| the opposite here. Turing is a mathematician and is
| transferring a "foreign" notion into common mathematical
| concepts. It is a recasting of the concept so that it can be
| used. It isn't done just to do it.
| citizen_friend wrote:
| Math is all about taking a notion from life that you might
| consider "common sense" and using it to solve a problem. You
| have to prove carefully that it works, though.
|
| This is Turing's work, where he translates the idea of a
| human "computer" into a mathematical machine.
| lmm wrote:
| > While yes it shows these are good tools, capable of
| confirming/proving these intuitions and showing exactly how
| these specific systems achieve that, is there ever any concern
| we are making too many redundant concepts? That may lead to
| confusion and obfuscate the search for novel ideas, which I
| think won't always happen through said tools.
|
| On the contrary, coming up with good general abstractions
| reduces the number of concepts you have to keep in your head
| and makes it much more possible to search for novel ideas.
| Imagine trying to figure out whether a given computing device
| was different to other computing devices without having the
| general concept of a Church-Turing computer - you wouldn't even
| get started.
| motohagiography wrote:
| non-anything question: Is this text/sheet an analogy for a
| thing's relationship to its substrate, or does the proof define a
| relationship or category of "thing-and-substrate?"
| taejo wrote:
| Turing's goal here is to justify why a Turing machine captures
| the "intuitive notion of a computation". To his audience, a
| "computer" is a person with a stack of paper and a pen,
| following an arbitrary procedure. Turing shows here that, even
| though a person can potentially draw infinitely many drawings
| on a sheet of paper, that doesn't give them any more
| computational power than writing symbols from a finite alphabet
| into a grid of squares.
|
| The machine he introduces also writes symbols onto paper with a
| pen and reads them. So he really is talking about pens and
| sheets of paper, it's not an analogy for something else.
| andybak wrote:
| I made a joke:
|
| > Prior to his seminal work Turing published a lesser known 1935
| paper: "On Caffeine with an Application to the
| Anfangsverzögerungsproblem"
|
| I think it might be a little too niche.
| defrost wrote:
| The tapeworms that consumed your characters are still moving
| their heads backwards and forwards on whether to upvote,
| downvote, or move on.
|
| It's impossible to say when they'll settle, so strap in for an
| indeterminate wait.
| thih9 wrote:
| > We cannot tell at a glance whether 9999999999999999 and
| 999999999999999 are the same.
|
| Off topic - some of us can, iOS underlines one of these for me,
| treating it as a phone number...
| ithkuil wrote:
| BTW I noticed that the clickhouse CLI has an interesting
| visualization trick: wide numbers have each third digit
| rendered with an underline style. That's as effective as a
| thousands separator "," but you can copy-paste the number into
| another program without having to edit the commas away.
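A rough imitation of that trick (assumptions: an ANSI-capable terminal, and that "each third digit" means every third digit counting from the right; the exact ClickHouse rendering may differ):

```python
import re

UL, RESET = "\x1b[4m", "\x1b[0m"  # ANSI underline on/off

def mark_thousands(n):
    """Underline every third digit (counting from the right), so digit
    groups stand out without inserting separator characters that
    would have to be stripped before pasting elsewhere."""
    s = str(abs(n))
    out = []
    for i, ch in enumerate(s):
        r = len(s) - 1 - i           # digit position from the right
        out.append(UL + ch + RESET if r % 3 == 2 else ch)
    return ("-" if n < 0 else "") + "".join(out)

# Stripping the escape codes recovers the bare number, which is the
# point of the trick: nothing to edit away after copy-paste.
assert re.sub(r"\x1b\[\d+m", "", mark_thousands(1234567)) == "1234567"
```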
| nyc111 wrote:
| Neither finite nor infinite is defined. I don't know what those
| words mean. So the assumption of an infinite alphabet doesn't
| make sense to me.
| oefrha wrote:
| Finite and infinite are well defined in every set theory. You
| shouldn't approach a topological proof without acquainting
| yourself with the most basic set theoretic foundation.
| colinwilyb wrote:
| "I am assuming that the reader is familiar with the terms metric,
| metric space, topological space, and compact set."
|
| On behalf of the math-declined folks, I wish math articles had
| subtitles, references, and definitions built-in.
|
| I wonder if a browser extension for Greek letter definitions
| would be possible?
| ycombinete wrote:
| The problem is that the definitions in such footnotes would
| themselves be full of terms that one also needs defined.
|
| For example here's the first few sentences from the Wikipedia
| page on Compact Sets:
|
| _In mathematics, specifically general topology, compactness is
| a property that seeks to generalize the notion of a closed and
| bounded subset of Euclidean space.[1] The idea is that a
| compact space has no "punctures" or "missing endpoints", i.e.,
| it includes all limiting values of points. For example, the
| open interval (0,1) would not be compact because it excludes
| the limiting values of 0 and 1, whereas the closed interval
| [0,1] would be compact._
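| A worked open-cover check makes that example concrete (standard
| textbook material, not from the article):

```latex
A space $K$ is \emph{compact} if every open cover of $K$ has a
finite subcover. The interval $(0,1)$ fails this: the open cover
\[
  \mathcal{U} = \left\{ \left(\tfrac{1}{n}, 1\right) : n \ge 2 \right\}
\]
has no finite subcover, since any finite subfamily reaches down only
to some $\tfrac{1}{N}$ and misses every point of
$\left(0, \tfrac{1}{N}\right]$. The closed interval $[0,1]$, by
contrast, is closed and bounded in $\mathbb{R}$, hence compact by the
Heine--Borel theorem.
```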
| tzs wrote:
| I'd like to see a series of sites or interactive ebooks that
| work as described below, which would address that problem. I
| can't actually build such a site or ebook because (1) I'm not
| sufficiently mathematically sophisticated and (2) my front
| end skills aren't up to it.
|
| Each one would be devoted to one interesting major
| mathematical theorem, such as the prime number theorem. The
| initial view would be a presentation of the theorem as it
| would be presented if it were a new discovery being published
| in an appropriate journal for the relevant field.
|
| At any point in that view you can pick a term or a step in
| the proof and ask for it to be expanded. There are two
| expansion options.
|
| 1. More detail. You use this when you understand what is
| being said but just don't see how they got from A to B. It
| adds in the missing details.
|
| 2. Background. You use a background expansion when you need
| to know more about some term or theorem that is being used.
|
| I'll give an example of how these might work later.
|
| The details or background you get from either of those
| expansions can themselves be expanded. Background expansion
| should work all the way down to pre-college mathematics.
|
| Ideally if you started at the initial view of the prime
| number theorem site without having had any college
| mathematics and did background expansions all the way down
| you would end up learning the necessary calculus and complex
| analysis and number theory to understand the initial journal-
| level proof.
|
| You would _not_ learn all of the calculus or complex analysis
| or number theory one would normally learn in those courses.
| You would just learn the parts necessary for the top level
| proof.
|
| I think it would be possible to choose a selection of
| interesting major theorems to do these sites for such that
| their combined background expansions would include everything
| that you'd get in a normal college mathematics degree
| program.
|
| I think many people would find that a more interesting way to
| learn the material from a college mathematics degree program
| than the normal series of courses that each covers one
| subject. By getting it via background expansions, everything
| you learn is tied to a specific application, which can be more
| motivating.
|
| Here's an example of what expansions might do.
|
| There's a theorem from Liouville that says that if a is a
| real root of an irreducible polynomial with integer
| coefficients of degree v >= 2, and p and q are any integers
| (q != 0) then there is a constant C > 0 which does not depend
| on p and q such that |a-p/q| > C/q^v
|
| In Gelfond's book "Transcendental and Algebraic Numbers" he
| says (I'm changing the names of some of his variables because
| I don't want to deal with typing greek letters and some of
| his terminology for clarity):
|
| > The proof of this is quite straightforward. Suppose a is a
| real root of an irreducible equation f(x) =
| a0 x^v + ... + av = 0,
|
| > where all of the ai (i = 0, 1, ..., v) are integers. Then,
| using the mean value theorem, we get |f(p/q)|
| = |a-p/q| |f'(z)| >= 1/q^v; z = a + t(p/q-a), |t| <= 1,
|
| > from which the Liouville theorem follows directly.
|
| Directly for Gelfond maybe. It certainly does not follow
| directly for me! I understand everything he's saying, but the
| steps between some of the things are a little too big for me.
| I'd need to use a details expansion (maybe more than once).
| What that should give is something along the lines of how
| Wikipedia proves that theorem, which is the first lemma in
| the "Liouville numbers and transcendence" section of their
| article on Liouville numbers [1].
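| For what it's worth, the details expansion at this step might read
| something like the following (my reconstruction of the standard
| argument, not Gelfond's own text):

```latex
Since $f$ is irreducible of degree $v \ge 2$, it has no rational
root, so $f(p/q) \ne 0$. Clearing denominators,
\[
  q^v f(p/q) = a_0 p^v + a_1 p^{v-1} q + \cdots + a_v q^v
\]
is a nonzero integer, hence $|f(p/q)| \ge 1/q^v$. Because
$f(a) = 0$, the mean value theorem gives
\[
  f(p/q) = f(p/q) - f(a) = \left(\tfrac{p}{q} - a\right) f'(z)
\]
for some $z$ between $a$ and $p/q$. If $|a - p/q| \le 1$, then
$z \in [a - 1,\, a + 1]$, so with
$M = \max_{|z - a| \le 1} |f'(z)|$,
\[
  \left| a - \tfrac{p}{q} \right| = \frac{|f(p/q)|}{|f'(z)|}
    \ge \frac{1}{M q^v},
\]
and any $C < \min(1, 1/M)$ also covers the case $|a - p/q| > 1$.
```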
|
| If I didn't know what the mean value theorem is (or the
| extreme value theorem, which shows up after expansion to a
| Wikipedia level of detail) that would be time for a
| background expansion.
|
| [1] https://en.wikipedia.org/wiki/Liouville_number#Liouville_
| num...
| nyc111 wrote:
| "In this paper Turing gives a topological argument that every
| written alphabet must be finite."
|
| I searched Turing's paper and the word "alphabet" does not
| appear. I guess the author interprets Turing's use of "symbol" as
| an "alphabet". Can anyone familiar with Turing's paper clarify?
___________________________________________________________________
(page generated 2024-07-24 23:11 UTC)