[HN Gopher] An Epic future for SPJ
___________________________________________________________________
An Epic future for SPJ
Author : Smaug123
Score : 239 points
Date : 2021-11-06 17:20 UTC (5 hours ago)
(HTM) web link (discourse.haskell.org)
(TXT) w3m dump (discourse.haskell.org)
| tweedledee wrote:
| A huge loss to, and a massive vote of no confidence in, MSR
| Cambridge.
|
| Is Epic worse than Facebook? I can't imagine much of an upside to
| preying on human frailties in order to make games addictive to
| extract money from kids.
| ksec wrote:
| >Is Epic worse than Facebook? I can't imagine much of an upside
| to preying on human frailties in order to make games addictive
| to extract money from kids.
|
| What?
| otabdeveloper4 wrote:
| Have you seen kids these days? Video games are an addictive
| substance like cigarettes or alcohol.
|
| (Yes, I know you played Mario Brothers as a child and turned
| out fine. So did I. But the stuff we had back in the day is
| extremely tame and weak compared to the hard stuff they
| peddle today.)
| ModernMech wrote:
| I dunno, they called EverQuest "EverCrack" for a reason.
| That was over two decades ago.
| tsimionescu wrote:
| That's still in the area of D&D - social rewards
| encouraging large amounts of time. Modern psychological
| tricks used to encourage addiction and over-spending,
| especially in kids and neuro-atypical people, are newer,
| and were a "great" contribution from the likes of Zynga
| or King.
| Mikeb85 wrote:
| Probably a reference to the fact that Epic makes a ton of
| money from Fortnite.
| ksec wrote:
| But Fortnite is not a pay-to-win game. Unless all games
| are "preying on human frailties".
|
| Not only that, Epic invested every single dollar they made
| from Fortnite back to improving Unreal Engine.
| DonaldPShimoda wrote:
| > But Fortnite is not a pay-to-win game.
|
| No, but that's not the point.
|
| The "addictive" factor comes from micro-transactions with
| randomized returns on investment ("loot boxes"). Kids get
| sucked into chasing the most valuable skins for
| characters, weapons, etc. This either results in them
| spending an inordinate amount of time playing the game
| (increasing odds of paying money) or just paying money
| directly to purchase in-game loot boxes that have
| fractional chances of dropping the rarest gear.
|
| It's not "all games" that are problematic in this way; it
| is games which are monetized through micro-transactions
| with loot boxes. It's essentially legal gambling for
| kids, and it's absolutely a problem that we should be
| taking more seriously.
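| (To make "fractional chances" concrete: at a hypothetical
| 1% drop rate, the expected number of boxes to pull before
| seeing the item is 1/0.01 = 100, and about half of players
| would need 69 or more pulls.)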
| redxdev wrote:
| Didn't Fortnite remove loot boxes in 2019?
| https://arstechnica.com/gaming/2019/01/fortnite-puts-an-
| end-...
|
| There's still something to be said about the addictive
| nature of microtransactions, but loot boxes haven't been
| relevant to the discussion as far as Fortnite goes for a
| few years.
| tsimionescu wrote:
| Pay-to-win or not is irrelevant.
|
| Advertising and game design techniques to encourage
| vulnerable people to spend inordinate amounts of time and
| money on a game are the real problem. This leads to the
| current situation of most hugely successful games today,
| where the vast majority of players spend nothing or next
| to it, while a small minority of "whales" literally spend
| thousands of dollars monthly.
| Twisol wrote:
| > But Fortnite is not a pay-to-win game.
|
| That depends on how you define "win". As a gacha player
| myself (Genshin Impact), a lot of people roll because of
| FOMO and because they want to own a thing, not
| necessarily because they're going to "win".
|
| Heck, I main Yoimiya, and she's not exactly part of the
| meta. Maybe I'm a sucker (no, I definitely am), but I'm
| not paying to "win" in the usual sense. There's plenty of
| exploitation to be had without attaching a competitive
| advantage.
| Mikeb85 wrote:
| Didn't say I agreed. But there's an argument to be made
| that by making the game addictive, keeping players _in_
| the game encourages them to spend money on skins and
| whatever else they sell.
|
| I'm personally pro-entertainment, pro-culture, etc...
| Tons of things we spend money on in life are entertainment
| and, frankly, useless. But it's fun.
| DethNinja wrote:
| It is pure speculation, but I don't believe Epic's Verse language
| will be a functional programming language.
|
| Verse is more intended for Blueprint people and there is no way
| people working on the Blueprint code will be able to move into
| functional programming.
|
| Also, I don't get the investment in the Verse language itself. Why
| not just fund LuaJIT? Do we really need another scripting
| language when Lua nailed so many things right?
|
| I guess Epic will craft a pretty fast scripting language and try
| to lock game developers to their own ecosystem.
| dllthomas wrote:
| I don't know, Simon has been big on "spreadsheets as functional
| programming" for a bit. Something blueprinty built with that
| point of view might be both "functional" and approachable.
| pjmlp wrote:
| Visual languages can be quite functional, especially when they
| map onto IC-like modules.
| tinco wrote:
| Blueprint is already basically Haskell, I've done both and the
| similarities are striking. I think the world is ready for a
| proper visual programming language that plays nice with git,
| compiles down to native code and has all the nice abstractions
| that Haskell offers and none of the downsides of a scripting
| language. No doubt SPJ is one of the people who can help make
| that dream a reality, in no small part because his presence
| alone will attract the best and brightest to Epic.
| WoahNoun wrote:
| It may not be FP, but I imagine it will be heavily type-
| dependent, and Haskell has one of the best type systems of any
| programming language.
| shrimpx wrote:
| Bizarre move. AFAIK he does whatever he wants at MSR. Maybe he's
| sick of that environment's politics; maybe he's sick of academic
| research/the ICFP crowd and wants to work on "real world" stuff.
| I can't imagine it's about money.
| twic wrote:
| MSR is having pretty substantial layoffs at the moment. Some
| kind of major reorientation, with whole projects being
| shut down.
| nickkell wrote:
| I would've also assumed his job was perfect, but he's the best
| judge of that!
|
| This is basically what Erik Meijer did isn't it? He worked at
| Microsoft and produced some really nice things that had a big
| impact, like reactive extensions for example.
| shrimpx wrote:
| Yeah. Simon Marlow also left MSR years ago, for Facebook.
| friedman23 wrote:
| I wonder what the new language is going to look like. One thing
| I'm interested in is declarative programming languages. I feel
| like a declarative programming language would actually be a
| pretty good fit for games.
| willvarfar wrote:
| I always remember SPJ's chart in a short jokey video "Haskell is
| useless" https://youtu.be/iSmkqocn0oQ
|
| Really made me see things more clearly.
| ksec wrote:
| I really like his chart about usefulness and Unsafe
| programming. And that was in 2011! We have seen in the past 10
| years how programming has been moving in the direction he was
| pointing.
| graffitici wrote:
| I guess Rust is a language that made the move to Safe and
| Useful, following the top arrow?
| WJW wrote:
| Rust prevents a few types of unsafety, mostly relating to
| memory and concurrency. There are many more on which Rust
| does not do particularly well though. Specifically, Rust
| allows arbitrary side effects from basically any part of the
| code. You can always write to stdout or a logfile, or open up
| new sockets, or delete files from the filesystem and there
| will be nothing in the type signature to even warn you about
| this.
| discreteevent wrote:
| >write to stdout or a logfile, or open up new sockets, or
| delete files from the filesystem
|
| Do those things come under the category of unsafe? Why
| would we be programming if it weren't to do those kinds of
| things? If I buy a shovel, at some point I'll probably want
| to dig a hole with it.
| colonwqbang wrote:
| The point is not that you shouldn't do these things. The
| point is that Rust does not provide any tools to help you
| do it safely. Mutation is also a necessary thing in
| programming, which can be unsafe if done incorrectly.
| Rust has many built in rules for keeping mutation safe
| and requires labelling functions that mutate. For side
| effects, there is not much of a safety net in Rust.
| TheDong wrote:
| Absolutely. If you watch the YouTube video linked in this
| thread, SPJ makes that very comment, that a program with
| zero effects is useless.
|
| However, the goal of Haskell (and indeed any language
| trying to be safe in the way SPJ means) is to be able to
| have most code be safe/effect-free, and then to have
| effects be very carefully controlled and limited in their
| use. Things like the IO monad mean that, in fact, many
| parts of Haskell code can't do IO at all.
|
| We obviously do want some sort of effect in the end, but
| the idea is that it's safer to contain those effects in
| very limited places, and not allow effects to happen
| literally anywhere at any time.
|
| Note, "unsafe" in the SPJ video was specifically about
| effects, while "unsafe" in Rust terminology is mostly
| about memory safety, so those two terms really don't mean
| the same thing, and to be honest that can make
| communication less clear. I don't know what "category of
| unsafe" meant in your comment really.
| pyrale wrote:
| The basic idea isn't that you want to avoid such
| activities, but that you want to know which functions do
| it, so that it is easier to reason about your program.
| WJW wrote:
| If you'd watched the video from the grandparent comment,
| you'd have seen that those kind of uncontrolled side
| effects are exactly what Simon Peyton Jones was talking
| about in the video when talking about "safe" versus
| "unsafe" languages.
|
| But in any case, yes those things I mentioned _can_ fall
| under the category of "unsafe". In the category of
| memory safety, you want to be sure (with compiler
| guarantees!) that no thread will secretly free some
| allocated memory while you are still using it. This is
| something the borrow checker can give you in rust. There
| are no builtin guarantees in Rust that this fancy new
| library you imported won't do `DROP TABLE users`, or open
| a socket to https://haxorz.com and pass on all the
| environment variables. There are other languages in which
| you can be sure that a function cannot open any sockets
| unless it specifically has "can open sockets" encoded in
| its type.
| discreteevent wrote:
| I get that. (Another way to do it at run time is
| capabilities). What I don't like is calling this
| "unsafe". We know that use after free is never something
| anyone intended. We don't know that about opening a
| socket. If we take the attitude that any effect is unsafe
| then soon we will feel we have to control every one of
| them. If I have to control everything someone else does
| then I might as well do it myself (i.e. you eventually
| start to leak the implementation details and lose
| flexibility). Call it contracts or capabilities or
| something but not unsafe.
| chriswarbo wrote:
| Use-after-free is bad. "Unsafe" isn't the same as bad;
| unsafe means "could be bad". The reason that is a useful
| definition for unsafe, is that it allows us to define
| safe as "definitely not bad".
|
| In Haskell, a function like 'Int -> String' is safe
| (definitely not bad). A function like 'Int -> IO String'
| is unsafe (it might be bad; we hope not). If it were
| possible to specify "bad" via the type system (like the
| type of use-after-free) then we would want that to be a
| type error (like it is in Rust).
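|
| A minimal sketch of that distinction in Haskell (the
| function names here are made up for illustration):
|
|     -- Safe: the type guarantees nothing bad can happen.
|     greet :: Int -> String
|     greet n = "hello, player " ++ show n
|
|     -- Unsafe in SPJ's sense: the IO in the type warns us
|     -- this might touch files, sockets, etc.
|     greetFromDisk :: Int -> IO String
|     greetFromDisk n = do
|       template <- readFile "greeting.txt"  -- an effect!
|       pure (template ++ show n)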
| feanaro wrote:
| The part you missed is "always". The unsafe part is being
| able to do this from any function, instead of only from
| functions explicitly marked as being able to perform
| effects.
| amelius wrote:
| It lacks a GC however, which Haskell has for a reason.
| eklavya wrote:
| It's funny that I laughed at his joke about "the
| computer would just produce heat" when somehow the whole of
| cryptocurrency is now literally doing just that.
| Mikeb85 wrote:
| Good video, never seen it before. Thanks! Also love the fact he
| doesn't take himself or programming things too seriously
| (Haskell in the useless category is great).
| Jare wrote:
| A long time coming since Tim Sweeney's 2005 talk referred to
| functional concepts and things like STM https://www.st.cs.uni-
| saarland.de/edu/seminare/2005/advanced...
| servytor wrote:
| Tim Sweeney did a presentation[0] at MIT's CSAIL on programming
| languages that I thought was really interesting (it mentions
| Haskell quite a bit), titled "The Next Mainstream Programming
| Language: A Game Developer's Perspective."
|
| [0]:
| https://web.archive.org/web/20120113030236/http://groups.csa...
| zemo wrote:
| On slide 57 he takes the following position:
|
|     Memory model - Garbage collection should be the only option
|
| an interesting position to see Sweeney take up, since so many
| people in games reflexively say that garbage collection is
| totally non-viable whenever it comes up.
|
| but he also says:
|
|     By 2009, game developers will face... CPU's with:
|     - 20+ cores
|
| MacBook Pros, Razer Blades, the Xbox Series X, and the PS5 all
| use 8-core CPUs, so the prediction of an ever-escalating number
| of CPU cores seems to have not really panned out. One thing
| that falls well outside these predictions is dedicated ML
| chips like the M1's Neural Engine.
|
| The presentation isn't explicitly dated, but the sample game
| is Gears of War (which came out in 2006), later on he says
| "by 2009" as if it were not yet 2009, and the file name is
| "sweeney06games.pdf", so I'm gonna say this is almost
| certainly a presentation given in 2006. At the time this was
| given, the iPhone hadn't even come out yet.
|
| It's an interesting presentation. I wonder what Sweeney would
| say today about this presentation, about how much of it still
| rings true, and about which things he would second-guess
| nowadays.
| nabla9 wrote:
| Even with garbage collection, you can still manage memory
| manually on top of the GC by creating a pool and handling
| allocation yourself.
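|
| A minimal sketch of that idea in Haskell (hypothetical
| names; a real pool would add bounds, growth, etc.):
|
|     import Data.IORef
|
|     -- A free-list pool living on top of the GC'd heap:
|     -- objects are recycled instead of re-allocated.
|     newtype Pool a = Pool (IORef [a])
|
|     newPool :: [a] -> IO (Pool a)
|     newPool xs = Pool <$> newIORef xs
|
|     acquire :: Pool a -> IO (Maybe a)
|     acquire (Pool ref) = atomicModifyIORef' ref $ \xs ->
|       case xs of
|         []     -> ([], Nothing)
|         (y:ys) -> (ys, Just y)
|
|     release :: Pool a -> a -> IO ()
|     release (Pool ref) x =
|       atomicModifyIORef' ref (\xs -> (x:xs, ()))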
| throwawayapples wrote:
| > By 2009, game developers will face... CPU's with: 20+ cores
|
| It seems like he was off by a matter of a few months at most.
|
| I have a quad AMD Opteron(tm) Processor 6172 (Mar 2010) 2U
| server with _48_ hardware cores. This motherboard can also
| take the 16-core CPUs which were introduced the following
| year, which would put it at 64 cores and 128 threads.
|
| The 6-core Opteron CPUs were introduced in June 2009, so a
| quad server would be 24 cores, or 48 threads.
|
| Of course, that's not cores in a _single_ CPU, if that's
| even what the prediction was referring to, but that
| probably wouldn't matter very much from a gaming dev
| perspective anyway. It was probably a pretty good guess, all
| things considered.
| Hendrikto wrote:
| > a quad server would be 24 cores
|
| That's not usually what people game on, though. Even today,
| 20 core CPUs are the exception for gaming rigs.
| pjmlp wrote:
| Most of those people aren't aware that Unreal Blueprints and
| Unreal C++ do have GC.
|
| They just see C++ and don't look into the details.
|
| Besides that, given that most engines now have a scripting
| layer, even if deep down on the engine GC is forbidden, the
| layer above it, used by the game designers, will make use of
| it.
|
| It is like boasting about C and C++ performance, while
| forgetting that 30 years ago they weren't welcome on game
| consoles, or on the 8/16-bit home micros, for games.
|
| My phone is more powerful than any Lisp Machine or Xerox PARC
| workstation, it can spare some GC cycles.
| chc4 wrote:
| Most people's mental model of garbage collection is still
| just stop-the-world mark-and-sweep, which is extremely old.
| Game developers hate GC because of unpredictable latency
| pauses that can't be bounded and eat into the frame budget
| - but there are tons of new, modern GC implementations
| that have no pauses, do collection and scanning
| incrementally on a separate thread, have extremely low
| overhead for write barriers, and only have to scan a
| minimal subset of changed objects to recompute liveness,
| etc., that are probably a great choice for a new
| programming language for a game engine. Game data models
| are famously highly interconnected and have non-trivial
| ownership, and games already use things like deferred
| freeing of objects (via arena allocators) to avoid doing
| memory management on the hot path, which a GC could do
| automatically for everything.
| hn_throwaway_99 wrote:
| After being a Java programmer for the greater part of a decade
| and then spending a bunch of years doing Objective-C/Swift
| programming, I don't really understand why the
| Automatic Reference Counting approach hasn't won out over
| all others. In my opinion it's basically the best of all
| possible worlds:
|
| 1. Completely deterministic, so no worrying about pauses or
| endlessly tweaking GC settings.
|
| 2. _Manual_ ref counting is a pain, but again ARC basically
| makes all that tedious and error prone bookkeeping go away.
| As a developer I felt like I had to think about memory
| management in Obj C about as much as I did for garbage
| collected languages, namely very little.
|
| 3. True, you need to worry about stuff like ref cycles, but
| in my experience I had to worry about potential memory
| leaks in Obj C about the same as I did in Java. I.e. I've
| seen Java programs crash with OOMs because weak references
| weren't used when they were needed, and I've seen similar
| in Obj C.
| flohofwoe wrote:
| If you check ARC code in a profiler, there's a shocking
| amount of time spent in the retain/release calls:
|
| https://floooh.github.io/2016/01/14/metal-arc.html
|
| There are ways around this overhead at least in Metal,
| but this requires at least as much effort as not using
| ARC to begin with.
| Twisol wrote:
| I think the M1 CPU lowers some of the ARC retain/release
| to silicon -- it doesn't remove the problem, but it does
| seem to reduce the relative overhead significantly.
|
| https://news.ycombinator.com/item?id=25203924
|
| > this requires at least as much effort as not using ARC
| to begin with.
|
| Designing a brand new CPU architecture certainly counts
| as "at least as much effort", yes. ^_^
|
| P.S. While I'm here, your "handles are the better
| pointers" blog post is one of my all-time favorites. I
| appreciate you sharing your experiences!
| fulafel wrote:
| Compared to other, state of the art GC strategies, it's
| slow and has bad cache behavior esp. for multithreading.
| Also, not deterministic perf wise.
| hn_throwaway_99 wrote:
| Can you explain these or point me to any links, I'd like
| to learn more.
|
| > has bad cache behavior for multithreading
|
| Why is this? Doesn't seem like that would be something
| inherent to ref counting.
|
| > Also, not deterministic
|
| I always thought that with ref counting, when you
| decrement a ref count that goes to 0 that it will then
| essentially call free on the object. Is this not the
| case?
| b3morales wrote:
| > Also, not deterministic
|
| > not deterministic _perf wise._
|
| was what parent wrote (emphasis added), I assume
| referring to the problem that when an object is
| destroyed, an arbitrarily large number of other objects
| -- a subset of the first object's members, recursively --
| may need to be destroyed as a consequence.
| lallysingh wrote:
| Bad cache behavior: you're on core B, and the object is
| used by core A and in A's L2 cache. Just by getting a
| pointer to the object, you have to mutate it. Mutation
| invalidates A's cache entry for it and forces it to load
| into B's cache.
|
| Determinism: you reset a pointer variable. Once in a
| while, you hold the last reference and now have to free
| the object. That takes more instructions and causes cache
| invalidation.
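|
| A sketch of the mechanics in Haskell (toy types, not how
| any real RC runtime is written): note that even a purely
| read-only user has to write to the shared count.
|
|     import Data.IORef
|
|     -- Every handle shares one mutable count; merely taking
|     -- or dropping a reference writes to it, so the cache
|     -- line holding the count bounces between cores.
|     data RC a = RC { count :: IORef Int, value :: a }
|
|     retain :: RC a -> IO (RC a)
|     retain rc = do
|       atomicModifyIORef' (count rc) (\n -> (n + 1, ()))
|       pure rc
|
|     release :: RC a -> IO () -> IO ()
|     release rc free = do
|       n <- atomicModifyIORef' (count rc) (\n -> (n - 1, n - 1))
|       if n == 0 then free else pure ()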
| hn_throwaway_99 wrote:
| > Bad cache behavior: you're on core B, and the object is
| used by core A and in A's L2 cache. Just by getting a
| pointer to the object, you have to mutate it. Mutation
| invalidates A's cache entry for it and forces it to load
| into B's cache.
|
| Thank you! This was the response that made the cache
| issues clearest to me.
| dllthomas wrote:
| > Why is this? Doesn't seem like that would be something
| inherent to ref counting.
|
| Once I know I can read the data, it usually doesn't
| matter that another thread is also reading it. Reference
| counting changes that because we both need to write to
| the count every time either of us takes or drops a
| reference to the data, and in the latter case we need to
| know what's happened on the other core, too. This means a
| lot more moving of changing data between processor cores.
|
| > > Also, not deterministic
|
| > I always thought that with ref counting, when you
| decrement a ref count that goes to 0 that it will then
| essentially call free on the object. Is this not the
| case?
|
| That's my understanding, but is that "deterministic" as
| we mean it here? It's true that the same program state
| leads to the same behavior, but it's _non-local_ program
| state, and it leads to you doing that work - potentially
| a lot of work, if you (eg) wind up freeing all of a huge
| graph structure - at relatively predictable places in
| your code.
|
| There are good workarounds (free lists, etc) but "blindly
| throw language level reference counting at everything"
| isn't a silver bullet (or maybe even a good idea) for
| getting low-latency from your memory management.
| fulafel wrote:
| Writing to the refcount field even for read-only access
| dirties cache lines and causes cache-line ping-pong in
| the case of multi-core access, where you also need to use
| slower, synchronised refcount updates so as not to
| corrupt the count. Other GC strategies don't require
| dirtying cache lines when accessing the objects.
|
| Determinism: the time taken is not deterministic because
| (1) malloc/free, which ARC uses but other GCs usually
| don't, are not deterministic - both can do arbitrary
| amounts of work like coalescing or defragmenting
| allocation arenas, or performing system calls that
| reconfigure process virtual memory - and (2) cascading
| deallocations occur as refcount-0 objects trigger
| refcount decrements and deallocation of other objects.
| [deleted]
| chrisseaton wrote:
| Isn't ARC absolutely worst-case for most multi-threading
| patterns? You'll thrash objects between cores just when
| you _reference_ them. Every object becomes a mutable
| object!
| tsimionescu wrote:
| On the contrary, as others have pointed out, automatic
| reference counting seems to be the _worst_ of all worlds.
|
| 1. Automatic reference counting adds a barrier every time
| you pass around an object, even if only for reading,
| unlike both manual memory management and tracing GC.
|
| 2. ARC still requires thought about ownership, since you
| have to ensure there are no cycles in the ownership graph
| unlike tracing/copying/compacting GC.
|
| 3. ARC still means that you can't look at a piece of code
| and tell how much work it will do, since you never know
| if a particular piece of code is holding the last
| reference to a piece of data, and must free the entire
| object graph, unlike manual memory management. In the
| presence of concurrency, cleanup is actually non-
| deterministic, as 'last one to drop its reference' is a
| race.
|
| 4. ARC doesn't solve the problem of memory fragmentation,
| unlike arena allocators and compacting/copying GC.
|
| 5. ARC requires expensive allocations, it can't use cheap
| bump-pointer allocation like copying/compacting GC can.
| This is related to the previous point.
|
| 6. ARC cleanup still takes time proportional to the
| number of unreachable objects, unlike tracing GC
| (proportional to number of live objects) or arena
| allocation (constant time).
|
| Reference counting is a valid strategy in certain pieces
| of manually memory managed code (particularly in single-
| threaded contexts), but using it universally is almost
| always much worse than tracing/copying/compacting GC.
|
| Note that there are (soft) realtime tracing GCs, but this
| can't be achieved with ARC.
| twic wrote:
| > there are tons of new, modern GC implementations that
| have no pauses
|
| There are no GC implementations that have _no_ pauses. You
| couldn 't make one without having prohibitively punitive
| barriers on at least one of reads and writes.
| bazooka_penguin wrote:
| Pretty sure Unreal Engine 4 and 5 have garbage collection
| built into the runtime.
| LegitShady wrote:
| He was off by about a decade, but that is mostly Intel's
| fault. About a decade later we have Intel chips coming out
| with 10 cores and AMD with 16 cores. We're getting there.
| eecc wrote:
| > MacBook Pros, Razer Blades, the XBox Series X, and the PS5
| all use 8-core CPUs, so the prediction of an ever-escalating
| number of CPU cores seems to have not really panned out.
|
| You seem to forget the GPU; how many cores does that have?
| (But really the question is: isn't parallelism the default
| mode when working with GPUs?)
| caslon wrote:
| GPUs have existed since long before 2009. Sorry, but the
| guy who wrote Unreal Tournament of all people definitely
| did not make a basic mistake or generalization like you're
| insinuating.
| a_lost_needle wrote:
| Well, the RTX 3090 apparently has 10496... But I'm not
| aware of how to use them to optimize garbage collection.
| [deleted]
| seanmcdirmid wrote:
| He wrote an article of the same variety on, I think, GameSpy.
| Back in 2000, no less; it really influenced the direction I
| took in grad school: https://m.slashdot.org/story/9565 (the
| actual article seems to be lost, maybe I can dredge up a web
| archive link).
| kreetx wrote:
| Recently listened to the Haskell Interlude podcast with Lennart
| Augustsson, where he mentions at the end that he's at Epic
| Games. Looks like they are investing heavily in Haskell-like
| things now.
|
| [1] https://haskell.foundation/podcast/2/
| armchairhacker wrote:
| Might be similar to and looks like it's going to replace
| https://skookumscript.com
| __s wrote:
| He announced he was leaving MSR only two months ago:
| https://news.ycombinator.com/item?id=28473733
| losvedir wrote:
| Wow! Didn't see that coming, but very interesting. When he says
| "the metaverse", does he mean Epic is working on one of their
| own? Or is it the one facebook is working on, and they have some
| sort of partnership?
| YetAnotherNick wrote:
| I think Epic is working on VR games, and saying "metaverse"
| sounds cooler.
| theptip wrote:
| When you say "the one FB is working on" - the stated goal is
| for there to be one Metaverse, as there is one Internet.
|
| Of course that requires protocol interop (as did the Internet
| with TCP/IP and HTTP) and who knows if it will happen for the
| Metaverse.
| kibwen wrote:
| Metaverses are less fundamental than the internet, and
| there's no need for there to be exactly one of them (other
| than lucrative proprietary lock-in, anyway). Metaverses are
| application-level concerns, and will be built on top of the
| existing internet infrastructure. In the same way that a web
| browser is a portal to HTTP content, a metaverse application
| will be a portal to whatever application-level protocols
| arise to support them.
| dagmx wrote:
| Have either Tim Sweeney or Mark Zuckerberg said that there's
| plans for only a single metaverse?
|
| I think it's been pretty clear that they're going to work on
| their own metaverse products. Epic seems to be building on
| Fortnite, while Meta has Horizon.
|
| I don't believe either have stated compatibility as a goal.
| wmf wrote:
| They're each going to build their own metaverse _and_
| they're planning for that to be the one and only metaverse.
| They're planning to kill each other.
| glennpratt wrote:
| Seems pretty clear companies are talking about "the metaverse"
| as a broad concept.
|
| https://tech.fb.com/connect-2021-our-vision-for-the-metavers...
|
| https://www.epicgames.com/site/en-US/news/announcing-a-1-bil...
|
| https://www.matthewball.vc/about
| gpm wrote:
| Epic has been talking about the metaverse for years... e.g. it
| came up a lot in their Apple lawsuit, and searching on google
| turns this up from mid 2019:
| https://twitter.com/TimSweeneyEpic/status/115784180038569574...
|
| So far it seems like they sort of vaguely want to turn Fortnite
| into it... but I haven't really seen much that I would call
| concrete.
| Barrin92 wrote:
| >So far it seems like they sort of vaguely want to turn
| Fortnite into it... but I haven't really seen much that I
| would call concrete.
|
| I always thought their Travis Scott event was a pretty good
| example of something you could call early metaverse-ish,
| going beyond the usual scope of the game
|
| https://www.youtube.com/watch?v=wYeFAlVC8qU
| bangonkeyboard wrote:
| An early look at Verse syntax:
| https://www.twitch.tv/videos/840713360?t=01h06m24s
| atarian wrote:
| Timestamp?
| polio wrote:
| It's in the query param: 01h06m24s
| dharmaturtle wrote:
|     CurrentRound^ : int = 0
|     for (i = 1..NumberOfRounds):
|         CurrentRound := i
|         if(!SetupPlayersAndSpawnPoints()?):
|             ShutdownGame()
|             return
|         else:
|             Players.PutAllInStasisAllowEmotes()
|             ResetMatch()
|
| Looks nonfunctional as heck to me. Also it isn't expression-
| based - at least that's what the bare `return` says to me.
|
| Honestly just looks like a dialect of Python. I wonder what SPJ
| will do with it...
| keyle wrote:
| Ugh. Failing to see why Lua isn't a better choice than this.
| Even a subset of Lua augmented with that funky ^/? stuff
| could achieve what you want in terms of safety.
|
| Feels like "the whole everything is looking old we got new
| keywords! Metaverse! We need new languages...".
|
| The older I get, the more I feel like languages are
| generational, just like the developers who come into the
| market. They don't want to write Java; even Python is
| getting old. We need fresh new languages every 5-10 years,
| which frankly just feels like the same walls with a fresh
| coat of paint.
| shaunxcode wrote:
| If it were up to me: make sure it has the primitives required
| to write things in a "mostly functional" way if the
| programmer desires/when the problem fits the space. Not sure
| the whole list but maybe starting with nice syntax for
| lambdas and built in immutable collections? Maybe a macro
| system for introducing syntax? OK, you caught me: just make
| it a Lisp with Python syntax and types.
| FuckedCompany wrote:
| Metaverse? Really? Holy hypeverses Batman
| dochtman wrote:
| Anyone know more information about Verse itself?
| bmitc wrote:
| Sounds like Epic is hiring some pretty epic people. From the
| article, Dan Piponi works there as Principal Mathematician. As
| far as titles go, that's a dream one. I've really enjoyed some of
| Dan Piponi's writing and blog posts.
| agumonkey wrote:
| I wonder how fast the cultural view of Haskell will shift now.
| Lots of young minds will be exposed to Haskell/pure FP in a very
| exciting and positive light, meaning they'll probably sip all of
| it without questioning it. Pretty good, but maybe a little bad
| too.
| thewakalix wrote:
| What are you talking about?
| agumonkey wrote:
| Haskell is still niche, but if the gaming community gets wind
| of what Epic is using for its systems, it might get
| interested in a vastly different way of looking at
| Haskell and pure FP. This might bring in a new (and
| vast) crowd, one that might come with an a priori
| appreciation of Haskell (because of trust in the Epic
| brand), and that often causes a religious view of whatever
| idea is learned.
| tsimionescu wrote:
| I don't think you should assume SPJ was brought in to
| transition Epic to Haskell.
| zepto wrote:
| Much as I am suspicious of Epic, this kind of move is _way_
| better than their silly stunt with the App Store.
|
| The way to beat Apple is to develop better technology than them.
| It will take billions of dollars to do this, but Epic has the
| money.
| exdsq wrote:
| > I also expect to work with Dan Piponi on data-centre-scale
| transactional memory
|
| This sounds really interesting, I learned about STM in Haskell
| while working on Cardano & it seemed so obvious - I am sure there
| will be some seminal outcomes of this too.
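|
| For anyone curious, a minimal sketch of why STM feels so
| obvious (toy account-transfer example; the names are made up):
|
|     import Control.Concurrent.STM
|
|     -- The whole block commits atomically or retries;
|     -- no locks, no torn updates.
|     transfer :: TVar Int -> TVar Int -> Int -> STM ()
|     transfer from to amount = do
|       balance <- readTVar from
|       check (balance >= amount)  -- block until funds exist
|       writeTVar from (balance - amount)
|       modifyTVar' to (+ amount)
|
|     main :: IO ()
|     main = do
|       a <- newTVarIO 100
|       b <- newTVarIO 0
|       atomically (transfer a b 40)
|       readTVarIO b >>= print  -- prints 40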
___________________________________________________________________
(page generated 2021-11-06 23:00 UTC)