[HN Gopher] Emit-C: A time travelling programming language
___________________________________________________________________
Emit-C: A time travelling programming language
Author : doppp
Score : 100 points
Date : 2024-11-12 15:10 UTC (4 days ago)
(HTM) web link (github.com)
(TXT) w3m dump (github.com)
| unquietwiki wrote:
| Submitted to r/altprog; I love a language that can murder
| variables, heh.
| roarcher wrote:
| This one can murder its own grandfather.
| delaaxe wrote:
| Can it? How is a child of a variable defined?
| cyanmagenta wrote:
| I am going to be harsh here, but I think it's necessary: I don't
| think anyone should use Emit-C in production. Without proper
| tooling, including a time-traveling debugger and a temporal
| paradox linter, the use case just fails compared to more
| established languages like Rust.
| doormatt wrote:
| No one is claiming this is necessary. It's a toy language built
| for fun.
| mygrant wrote:
| Woosh
| porcoda wrote:
| Given how often people around here seriously say things like
| the top-level comment being responded to, an explicit /s is
| almost necessary, since it can be hard to distinguish satire
| from the usual cynical, dismissive comments.
| fragmede wrote:
| I downvote explicit /s's on principle. If you have to add
| it, you're not doing it right (imo).
| jprete wrote:
| I take it you've never heard of Poe's Law?
| fragmede wrote:
| https://hn.algolia.com/?dateRange=all&page=0&prefix=false
| &qu...
|
| err, I mean. I have not! What's that?
| jprete wrote:
| Touche.
| croes wrote:
| Your link is just a long version of /s
| cardanome wrote:
| Explicit sarcasm markers are an important accessibility
| tool for neurodivergent people who might otherwise have a
| hard time figuring out whether something is meant to be
| sarcastic.
| fragmede wrote:
| It is as an extremely neurodivergent person that I reject
| explicit sarcasm markers.
| cardanome wrote:
| Well, good for you that you don't need them. Other people
| like and/or need them, though.
| soulofmischief wrote:
| And other people don't like them aesthetically or
| conceptually.
|
| It's great how we can all make our own decisions when
| communicating, and can ignore judgement from people who
| say that they are "necessary". You're treating fragmede
| like they brought up "/s" as some sort of virtue
| signaling, when they were pushing back on porcoda for
| attempting to impose controversial new-age grammar.
| fragmede wrote:
| Who said I didn't need them? The problem is that people in
| the real world don't have /s markers, often aren't actually
| your friends, and can actually mean you harm. Part of my
| neurodivergence leads to me being trusting of people when I
| really should not be (aka I'm gullible), and this sends me
| down the wrong path sometimes. Since I'm neurodivergent and
| have trouble with this, the way for me to get better at it
| is to practice harder at things that others find easy, not
| to demand and expect the whole world to change to suit my
| needs.
| sgarland wrote:
| Same and same.
| atoav wrote:
| Nope, you don't add /s because you think the sarcasm is
| generally not recognizable otherwise; you add it for
| accessibility and to avoid spawning useless debates.
| 
| Not all people in all situations will read a thing as
| sarcastic -- e.g. your response to a post might be
| sarcastic, but the three comments next to it, which were
| not there when you wrote it, might produce a context in
| which it would be read as serious by most people, without
| either you or your readers being at fault. But _you_
| have the power to clarify, and at least on this platform
| an unneeded /s has more benefits than disadvantages IMO.
| keybored wrote:
| > Given how often people around here seriously say things
| like the top-level comment being responded to, an explicit
| /s is almost necessary, since it can be hard to distinguish
| satire from the usual cynical, dismissive comments.
|
| This is what satire is.
| lolinder wrote:
| I thought the same as you until I read TFA. In the
| context of a clearly satirical programming language I'm
| happy to accept untagged satire in the comments.
| hathawsh wrote:
| Please ELI5... I know there's a joke in there, but I'm
| missing it.
| skavi wrote:
| The humor lies in the inherent absurdity of the critique
| itself. Obviously no one will use this in production.
| There's nothing especially clever you're missing.
| ramon156 wrote:
| I was confused at the rust part, which also made me
| realize that it was part of a joke.
| 9dev wrote:
| The temporal paradox linter could have given it away too
| :)
| pcblues wrote:
| I think a syntax highlighter and in-line documentation for
| future language features before they are created is also
| necessary. I'll stick with more established languages, too.
| Time in a single direction is already hard.
| beefnugs wrote:
| This was the first message sent back by the machines. Early
| versions considered propaganda before violence. But by 3460
| our last hope was planting enough time-loop bugs in their
| code that their processors ran slow enough for us to get
| single suicide bombers into their cooling vents.
| seanhunter wrote:
| The good news is we can hopefully expect future users to send
| those back to us along with the compiler version from the
| language once it's more established. Unless perhaps I'm
| misunderstanding TFA?
| fasa99 wrote:
| I do wonder how this pertains to "blast processing" in the
| original Sega Genesis
|
| Some say it was a temporal distortion field around the 68000
| increasing clock speed by 900%
|
| Some say it was the code name for programmers working 140
| instead of 80 hours a week writing assembly.
|
| Some say it is something to do with streaming realtime pixel
| data to the graphics chip for high-color graphical image
| rendering
|
| If you ask me, though, time distortion
| greenhat76 wrote:
| I don't think anyone's arguing this should be used in
| production.
| deadbabe wrote:
| For the _real_ computer scientists out here, what would time
| complexity notation be like if time travel of information
| were possible (i.e. doing big expensive computations and
| sending the result back into the past)?
| openasocket wrote:
| Surprisingly, there is prior work on this!
| https://www.scottaaronson.com/papers/ctchalt.pdf . Apparently
| a Turing machine with time travel can solve the halting
| problem.
| fragmede wrote:
| With time travel, isn't the halting problem trivially
| solvable? You start the program, and then just jump to after
| the end of forever and see if the program terminated.
| HeliumHydride wrote:
| I think you can only time travel a finite time.
| MadnessASAP wrote:
| That's why you instead specify that if & when the program
| halts it travels back in time to let you know it has.
| Thus you would know immediately after starting the
| computation if it's going to halt, how long it'll take,
| and what the result is.
|
| Of course you should still carry out the computation to
| prevent a paradox.
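| 
| A sketch of that protocol in Haskell, with entirely made-up
| time-travel primitives (nothing like sendToPast exists, of
| course; this is fiction under the thread's own rules):
| 
|     -- Hypothetical primitives; pure fiction for the sketch.
|     sendToPast :: a -> IO ()           -- deliver a value to t = 0
|     sendToPast = undefined
| 
|     receiveFromFuture :: IO (Maybe a)  -- mail from future runs
|     receiveFromFuture = undefined
| 
|     -- Start the program; if and when it halts, it phones back
|     -- to the start time. So at t = 0 an answer waiting means
|     -- "halts" and silence means "never halts".
|     halts :: IO () -> IO Bool
|     halts prog = do
|       report <- receiveFromFuture
|       case (report :: Maybe ()) of
|         Just () -> pure True
|         Nothing -> pure False
|       -- to prevent the paradox, the machine still runs it:
|       --   prog >> sendToPast ()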
| JadeNB wrote:
| > With time travel, isn't the halting problem trivially
| solvable? You start the program, and then just jump to
| after the end of forever and see if the program terminated.
|
| Some programs won't halt even after forever, in the sense
| of an infinite number of time steps. For example, if you
| want to test properties of (possibly infinite) sets of
| natural numbers, there's no search strategy that will go
| through them even in infinite time.
| 
| (Footnote that I'm assuming -- I think reasonably, but who
| knows what CSists have been up to? -- a model of computation
| that allows the performance of countably, but not
| uncountably, many steps.)
| eddd-ddde wrote:
| But if you are at the present, and don't receive a future
| result immediately, can't you assume it never halts?
| Otherwise you would have received a result.
| mgsouth wrote:
| I don't think so. That's assuming the program will always
| be in a frame of reference which is temporally unbounded.
| If, for example, it fell into a black hole it would,
| IIUC, never progress (even locally) beyond the moment of
| intercepting the event horizon.
| pino999 wrote:
| It goes like this: we have two observers with a computer.
| A stays outside a black hole. B goes over the event horizon.
| Seen from A, B is infinitely time dilated; from A's
| standpoint it takes forever for B to reach the singularity.
| B reaches it in finite proper time.
| 
| A starts the computation. If it halts, A sends the result;
| otherwise A sends nothing.
| 
| B sees the result in finite time. If it doesn't, the
| program didn't halt.
| 
| If time is discrete, it won't fly, I think. This works
| because there is no smallest time unit in GR.
| 
| We are working with different types of infinities. The
| further B falls in, the less time A's computational steps
| take from B's perspective. Sort of like Zeno's paradox. It
| is easy to map all natural numbers between 0 and 1 on the
| real line. Just not 1 to 1.
| 
| There are more problems.
| 
| How to get the information out, and how to survive the
| divergent blue shift, is somewhat unclear. B cannot talk
| back. But still a cool find.
| JadeNB wrote:
| > It is easy to map all natural numbers between 0 and 1
| on the real line. Just not 1 to 1.
|
| For the usual meaning of the term, you certainly can
| construct a 1-to-1 (that is, injective) map N \to [0, 1]
| (for example, n \mapsto 10^(-n)); the natural numbers
| just can't be mapped _onto_ [0, 1] (that is, the map
| can't be surjective). That's the opposite of the problem
| we have: it's saying you can losslessly encode a countable
| amount of information in an uncountable amount of space;
| but I'm saying conversely that you can't perform an
| uncountable number of steps in a countably infinite
| amount of time.
| PittleyDunkin wrote:
| Surely we already have this: the jump just takes forever.
| pcblues wrote:
| If you shift the computational result back in time to the
| same time you started it, your O notation is just zero, and
| quite scalable. Actually it would open the programming
| market up to more beginners, because they could brute force
| any computation without caring about the time dimension.
| Algorithm courses will go broke, and the books will all end
| up in the remainder bin. Of course, obscure groups of
| purists will insist on caring about the space AND time
| dimensions of complexity, but no one will listen to them
| anymore.
| ordu wrote:
| I don't think it will work like that. It is necessary to
| run the computation to get the result, even if you transfer
| the result back in time. So the world will end up running a
| lot of computations while already knowing their future
| results. These computations will be a new kind of tech
| debt: people will choose between adding one more task to
| the existing supercomputer (and slowing all the tasks
| there), or creating/leasing another supercomputer to finish
| some tasks early and finally forget about them.
| neeleshs wrote:
| There will also be a cult that says this is all borrowing
| time from the future and we will run out of time eventually.
| deadbabe wrote:
| You definitely have to be capable of performing the
| computation to get the result, you can't just get something
| from nothing. You just don't have to actually exist in the
| timeline where you process the work.
| JadeNB wrote:
| I whimsically imagine some version of bi-directional Hoare
| logic.
| seanmcdirmid wrote:
| Going back into the past requires checkpointing it, which
| can be very expensive, but maybe cheaper with the right
| kind of time-indexed data structures underlying your
| programs. So you just wind back time and begin fixing data
| structures with your backward change until you are
| consistent at head time again.
| 
| But uhm, if you actually want the future to affect the past
| formally, that is going to be something really different,
| so I'm not really sure.
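| 
| A toy version of the checkpointing half, in Haskell (all
| names made up for illustration; this is not emiT's or Time
| Warp's actual machinery): keep every tick, so a backward
| change means rewinding to an old state, editing it, and
| replaying the steps back to head time.
| 
|     newtype Timeline a = Timeline [a]   -- newest state first
| 
|     begin :: a -> Timeline a
|     begin x = Timeline [x]
| 
|     step :: (a -> a) -> Timeline a -> Timeline a
|     step f (Timeline (v:vs)) = Timeline (f v : v : vs)
|     step _ (Timeline [])     = error "no timeline to extend"
| 
|     -- Rewind k ticks, apply an edit in the past, then
|     -- replay the given steps so head time is consistent
|     -- with the new history again.
|     rewindFix :: Int -> (a -> a) -> [a -> a] -> Timeline a
|               -> Timeline a
|     rewindFix k edit replay (Timeline vs) =
|       foldl (flip step) edited replay
|       where
|         edited = case drop k vs of
|           (v : older) -> Timeline (edit v : older)
|           []          -> error "rewound past the beginning"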
| twwwt wrote:
| I'm waiting for the day when we have all agreed that what is
| done here is _not_ time travelling. Rather, it is a
| simulation of time travelling, or pseudo time travelling, if
| you will. At the same time, I'm afraid this will never
| happen: it is already too late, since we have missed the
| point in time when there was a chance to call it right. For
| that, we would need _real_ time travelling.
| seanmcdirmid wrote:
| Oh wow, this reminds me of Jefferson's time warp (virtual time),
| but that was more for dealing with inconsistencies brought about
| by concurrent processing.
|
| https://dl.acm.org/doi/10.1145/37499.37508
|
| I wrote a paper with Jonathan Edwards around the concept of
| managed time a while back also:
|
| https://www.microsoft.com/en-us/research/publication/program...
|
| But this is more like time traveling as an explicit
| first-class thing, rather than just a way to make things
| consistent wrt concurrency or live programming. I don't
| think first-class time travel has been explored much yet.
| Mathnerd314 wrote:
| There is the TARDIS monad in Haskell
| https://hackage.haskell.org/package/tardis-0.5.0/docs/Contro...
| It doesn't have the multiple timelines or killing features - it
| just deadlocks if there is a paradox or the timeline is
| inconsistent.
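| 
| A small example of the kind of thing it enables (using the
| package's documented getFuture/sendPast API): pair each
| element of a list with the sum of everything after it, in
| a single forward pass, because the suffix sums travel
| backwards in time.
| 
|     import Control.Monad (forM)
|     import Control.Monad.Tardis
| 
|     suffixSums :: [Int] -> [(Int, Int)]
|     suffixSums xs = flip evalTardis (0, ()) $
|       forM xs $ \x -> do
|         later <- getFuture    -- sum of the elements to come
|         sendPast (x + later)  -- include x for earlier steps
|         return (x, later)
| 
| So suffixSums [1,2,3] is [(1,5),(2,3),(3,0)] -- and, as you
| say, make the backwards state depend strictly on itself and
| it deadlocks.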
| anonzzzies wrote:
| Should've waited for April 1st and presented this as a
| serious language, with a web framework, leftpad, and an npm
| competitor that's always changing in unproductive and
| unpredictable ways, where published modules travel forward
| and backward in time at random every time you install one,
| just like actual npm module devs behave. Then I might start
| using it instead of node/js/npm.
| eterevsky wrote:
| Slightly related: time travel has implications for
| computational complexity:
| https://arxiv.org/abs/quant-ph/0309189,
| https://www.scottaaronson.com/papers/ctc.pdf.
| svilen_dobrev wrote:
| can you send something into the _future_? so it awaits
| silently there? in either a particular timeline.. or in
| _any_ timeline?
| 
| invokes quite some bitemporal associations..
| 
| btw, speaking of time-warping, anyone know what's the
| situation with Dynamic Time Warping [1], i.e. a non-linear
| timeline (think wow-and-flutter of a tape player, or an LP
| player)? Last time I checked it was computationally
| prohibitive to try to use it for something like
| sliding-window search (~convolution) over megabytes of data.
| 
| [1] https://mlpy.sourceforge.net/docs/3.5/dtw.html
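| 
| for reference, the textbook recurrence is quadratic per
| comparison, which is what makes naive sliding-window search
| blow up -- a minimal sketch in Haskell:
| 
|     import Data.Array
| 
|     -- classic O(n*m) DTW distance between two series
|     dtw :: [Double] -> [Double] -> Double
|     dtw xs ys = table ! (n, m)
|       where
|         n  = length xs
|         m  = length ys
|         ax = listArray (1, n) xs
|         ay = listArray (1, m) ys
|         table = listArray ((0, 0), (n, m))
|                   [ cell i j | i <- [0 .. n], j <- [0 .. m] ]
|         cell 0 0 = 0
|         cell _ 0 = 1 / 0    -- +Infinity: unmatchable prefix
|         cell 0 _ = 1 / 0
|         cell i j = abs (ax ! i - ay ! j)
|                  + minimum [ table ! (i - 1, j)
|                            , table ! (i, j - 1)
|                            , table ! (i - 1, j - 1) ]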
| mgsouth wrote:
| Two weeks ago [edit: local time] we submitted a request for some
| enhancements we really need for our term final paper, but haven't
| heard anything. We realize this is a volunteer project, but our
| whole final grade hinges on this, so we're going to try
| submitting the requests earlier...
|
| * Can we have a multi-verse?
|
| * We really need to be able to kill a variable but have it linger
| on for a while. This is to allow revenge, and...
|
| * Our whole paper is about Bob/Alice going back in time,
| shooting his/her grandfather, who lingers long enough to sire
| their father's brother Tom, who marries their mother instead,
| who gives birth to Alice/Bob, who, although now their own
| cousin, is still a murderous beast, goes back in time and
| shoots at their grandfather, misses, kills Tom instead, thus
| their original father marries their mother, and Bob/Alice is
| born. Thus demonstrating local paradoxes with meta-consistency
| and explaining much in our current timeline. We're gonna get
| an A+ for sure.
|
| * We suggest storing the metaverse as a directed cyclic graph of
| the operations and not saving state. To collapse the metaverse
| (in the quantum physics sense, not in emiT's oh-no-everything-is-
| impossible sense) simply apply the graph to a set of pre-
| conditions, and as you process each node apply its state
| constraints to the metaverse state. Handling the node cycles is
| normally seen as a challenge, but there's some sample code posted
| on HN in March, 2028 that should make it a breeze.
___________________________________________________________________
(page generated 2024-11-16 23:01 UTC)