[HN Gopher] Outrun: Execute local command using processing power...
       ___________________________________________________________________
        
       Outrun: Execute local command using processing power of another
       Linux machine
        
       Author : ingve
       Score  : 307 points
       Date   : 2021-03-18 16:08 UTC (6 hours ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | jll29 wrote:
       | Clever idea, kudos!
       | 
       | For people who work mostly on a laptop but have access to various
        | fast, stationary machines, Outrun lets them use their CPUs and
       | GPUs without tirades of "scp -r" or "rsync".
       | 
       | (Of course you should still rsync to your backup machines, but
       | that's not necessarily the same box as your compute server.)
        
       | piperswe wrote:
       | It would be neat if Outrun automatically spun up a(n)
       | (EC2|DigitalOcean|GCE|etc) instance to run the task on then
       | destroyed it when it finished.
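        | 
        | A rough sketch of that wrapper with the AWS CLI (the AMI,
        | instance type, and the pip-based install of outrun are all
        | assumptions here):

            ID=$(aws ec2 run-instances --image-id ami-XXXXXXXX \
                  --instance-type c5.9xlarge --key-name mykey \
                  --query 'Instances[0].InstanceId' --output text)
            aws ec2 wait instance-running --instance-ids "$ID"
            IP=$(aws ec2 describe-instances --instance-ids "$ID" \
                  --query 'Reservations[0].Instances[0].PublicIpAddress' \
                  --output text)
            ssh "ubuntu@$IP" sudo pip3 install outrun   # outrun on both ends
            outrun "ubuntu@$IP" ffmpeg -i input.mp4 output.mkv
            aws ec2 terminate-instances --instance-ids "$ID"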
        
       | superasn wrote:
        | Really love it when program authors actually explain what is
        | happening behind the scenes like this, and even go into detail
        | about how they made the small things work.
       | 
        | I may or may not use this program, but I definitely learned
        | another cool thing about Linux just now, and that will most
        | certainly be useful to me sometime in the future in an
        | altogether different context.
       | 
       | Thanks for the software and the explanation!
        
         | penguin_booze wrote:
         | Seconded. I was pleasantly surprised to see the 'how it works'
         | section.
         | 
         | Kudos to the author, and I hope I'll see more of this
         | everywhere.
        
       | tener wrote:
        | Very interesting concept, mostly thanks to the very generic
        | approach it takes. I wonder if one could implement something
        | similar using some lightweight virtualization instead?
        
       | Topgamer7 wrote:
       | Ah yes Overv, I used to frequent facepunch forums and see the
       | interesting crap he came up with. I don't read the replacement
       | dev forum enough :(.
        
         | Overv wrote:
         | Good times, browsing the WAYWO threads every day to see what
         | people were creating. Where do the devs hang out nowadays?
        
       | not2b wrote:
       | It's very impressive that the developer made this work, but it
       | seems it would always be outperformed by simply having a set of
       | uniform Linux servers and common NFS mounts serving as a compute
       | farm, which is how people currently solve the problem.
        
         | justaguy88 wrote:
          | Not everyone has a farm; some people just have a beefy machine
          | somewhere in the other room whose power they'd love to tap
          | while they work from the laptop on the couch.
        
           | guenthert wrote:
           | Why don't they work on the machine in the other room using
           | the laptop as a terminal?
        
             | sirtaj wrote:
             | So you can just put the laptop in your bag and go somewhere
             | else and still have the same environment, just slower. No
             | need to sync filesystems and packages etc.
        
       | dima55 wrote:
       | Does anybody know how/if this deals with /dev and /proc and /sys
       | and all those? Usually you need to separately bind-mount these in
       | your chroot.
        
         | Overv wrote:
         | They are indeed bind-mounted separately in the chroot on top of
         | the local file system:
         | https://github.com/Overv/outrun/blob/1e86c429c60022421de68f2...
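          | 
          | Outside of outrun, the generic recipe for that preparation
          | looks roughly like this (illustrative only; $ROOT is a
          | hypothetical mount point, not outrun's actual path):

              ROOT=/tmp/remote_root
              mount --bind /dev  "$ROOT/dev"     # device nodes
              mount --bind /sys  "$ROOT/sys"
              mount -t proc proc "$ROOT/proc"    # fresh proc for the chroot
              chroot "$ROOT" /bin/sh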
        
           | dima55 wrote:
           | Ah. The follow-up question would be "how do you do that
           | without root access?" But I now see that the docs say root
           | access is required. I was hoping you'd have some magic way
           | that didn't require it. Thanks for replying
        
       | high_byte wrote:
        | Nice, I like this project. But there's a problem: you assume
        | here that my binary would run on the target machine, which is
        | why I just have a shared filesystem and that's it. I see
        | potential in the AI industry if you could send microtasks to
        | the cloud; that would be amazingly simple.
        
         | Overv wrote:
         | One way to approach this would be to use QEMU binary
         | translation, but it would probably ruin any performance
         | benefits.
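          | 
          | For reference, user-mode emulation is simple to try if you can
          | accept the slowdown (Debian/Ubuntu package name; dynamically
          | linked binaries also need a sysroot via -L):

              apt install qemu-user-static
              qemu-aarch64-static ./some_arm64_binary       # static binary
              qemu-aarch64-static -L /usr/aarch64-linux-gnu ./other_binary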
        
       | tsujp wrote:
        | Very cool application of `chroot` et al. here. Would something
        | like this be possible with remote direct memory access? That is,
        | for lack of a better word, more "cleanly"? I assume not, since
        | you'd still end up transferring the binary etc., which the
        | README on their repo already explains is something they _don't_
        | do.
        
       | BluePen8 wrote:
        | Sadly, everything that wastefully consumes too many of my CPU
        | cycles is usually some bloated graphical app.
       | 
       | At my last job simply watching the logs in TravisCI via Google
       | Chrome would slow my MacBook to a crawl.
       | 
       | At my new job my MacBook is fine until I join a Zoom meeting,
       | especially if I try and screen share.
       | 
       | Anything I actually call from the terminal is optimized enough to
       | never be a bother. If I worked in a space where it wasn't (big
       | data, video transcoding, etc), I'd probably just run a powerful
       | cloud instance and sync my code there.
        
         | johncena33 wrote:
         | Can we please stop hijacking threads with unrelated or barely
         | related rants?
        
       | brettproctor wrote:
       | I could see this being really useful for the team I'm on,
       | specifically for doing docker builds of rust binaries. The build
        | always has to start from scratch, and being able to easily use a
        | remote high-powered machine could really speed this up. Thanks for
       | sharing!
        
         | lewisinc wrote:
         | Builds don't "have" to start from scratch. If you're building
         | in a docker context you can mount a volume containing build
         | artifacts (target directory). Or `sccache` if you want those
          | artifacts stored in a remote store such as S3. I'm sure there
          | are other solutions as well, but not clearing build artifacts
          | in between builds would be a simpler win than maintaining
          | separate, shared infrastructure.
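          | 
          | With BuildKit (DOCKER_BUILDKIT=1), a cache mount is just a
          | couple of lines; the paths here assume a workdir of /app:

              # syntax=docker/dockerfile:1.2
              RUN --mount=type=cache,target=/usr/local/cargo/registry \
                  --mount=type=cache,target=/app/target \
                  cargo build --release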
        
         | kccqzy wrote:
          | Docker image builds have caching! You can learn to leverage
          | it without using a third-party tool.
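          | 
          | For Rust specifically, the usual trick is to give dependencies
          | their own layer so they're only rebuilt when Cargo.toml
          | changes (a common sketch, not the only way to do it):

              FROM rust:1.50 AS builder
              WORKDIR /app
              COPY Cargo.toml Cargo.lock ./
              # build a dummy main so the dependency layer can be cached
              RUN mkdir src && echo 'fn main() {}' > src/main.rs \
                  && cargo build --release
              COPY src ./src
              RUN touch src/main.rs && cargo build --release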
        
         | nucleardog wrote:
         | It's relatively easy with Docker to just point your local
         | docker tooling at a remote daemon.
         | 
          | That way, during the build process you're only transferring
          | the directory around your Dockerfile (the build context) to
          | the daemon, and it takes it from there.
          | 
          | Versus this, which would be doing a lot of file I/O across
          | the wire during the build process.
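          | 
          | e.g. (supported since Docker 18.09, given ssh access to the
          | host running the remote daemon):

              export DOCKER_HOST=ssh://user@buildbox
              docker build -t myimage .   # context uploads once, build runs remotely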
        
         | fenollp wrote:
         | Then you'd be interested by https://github.com/denzp/cargo-
         | wharf And DOCKER_HOST, and BuildKit!
         | 
         | And if any of this makes sense:
         | https://fenollp.github.io/faster-rust-builds-docker_host
        
       | kevinmgranger wrote:
       | See also: plan9's `cpu` and `exportfs`
        
         | spijdar wrote:
          | Yeah, surprised more people haven't mentioned this; it's
          | very plan9-esque. Although as a side note for the viewers at
         | home, because plan9 exposed all interfaces as files, the remote
         | cpu server could use all resources on the client machine. Also,
         | because of plan9's exclusive use of static linking and
         | supporting file servers with many arch binaries "superimposed",
         | the "server needs to be same architecture" req could be
         | relaxed.
        
           | MisterTea wrote:
           | > Also, because of plan9's exclusive use of static linking
           | and supporting file servers with many arch binaries
           | "superimposed", the "server needs to be same architecture"
           | req could be relaxed.
           | 
            | Plan 9 was designed from the get-go to easily build and run
            | on different architectures. Static linking helps, and you do
            | all your communication over 9P, so you don't care if the
            | machine is ARM, POWER, x86, MIPS, etc. The protocol is
            | platform independent and you just RPC everything. So instead
            | of poorly bolting a security library onto your program, you
            | let someone experienced with security write a server and let
            | it do the hard stuff. Then you statically link a little C
            | lib into your code that just talks to the server.
            | 
            | And file servers are just little micro services running on
            | your computer. Plan 9 is more cloud ready and distributed
            | than any poor excuse of an OS nowadays.
        
       | reaperducer wrote:
       | R.I.P., Xgrid: https://en.wikipedia.org/wiki/Xgrid
        
       | mbreese wrote:
       | This is a clever use of fuse/sshfs and chroots to allow you to
       | run programs on remote computers. However, for many workloads,
       | isn't the network I/O going to outweigh the benefits of remote
       | CPU cycles?
       | 
        | I really think this is a clever hack, but I'm not sure how
        | _useful_ it is. Is it really good for the transcoding example? I
        | could see that being good if the remote machine has a better GPU,
        | but I'm struggling to see how this could work for other use
        | cases. Maybe compiling (which was one of the main use cases
        | behind Sun's SGE)?
       | 
       | I do most of my work on HPC clusters, so moving compute out to
       | remote nodes is something I'm very interested in... and I'd love
       | to learn more about what kinds of workflows the authors had in
       | mind.
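        | 
        | (For anyone wondering what the mechanism looks like, it's
        | conceptually close to the following run on the remote box,
        | minus outrun's caching layer and special-filesystem mounts:

            mkdir /tmp/laptop_root
            sshfs me@laptop:/ /tmp/laptop_root -o ro
            chroot /tmp/laptop_root ffmpeg -i input.mp4 output.mkv

        | though outrun uses its own FUSE filesystem rather than plain
        | sshfs.)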
        
         | chris_wot wrote:
         | I regularly build LibreOffice. It would be great to use a
         | faster computer.
         | 
         | Edit: yeah, I posted too fast. I am not their use-case.
        
         | fuzzer37 wrote:
          | I can totally imagine wanting to work on a thin client on your
          | home network over a 1 Gbps network connection. Hell, if you
          | optimized specifically for this and ran a 10 Gbps line right
          | from your thin client to the server, it would be blazing fast.
          | 
          | Another application I could see is putting a loud server in a
          | closet somewhere and just running network cables to it. Make a
          | 4-GPU render node, then just send jobs to it from your thin
          | MacBook.
        
           | jayd16 wrote:
           | The thing is you need a client fat enough for normal use but
           | too thin to run the command in question.
           | 
           | For the workflow you describe, why not just ssh or RDP into
           | the beefy machine from the thin client? Wouldn't that make
           | more sense?
        
             | zozbot234 wrote:
              | ssh and RDP don't easily extend to live workload
              | migration or other forms of location transparency.
        
             | fuzzer37 wrote:
             | Well for starters, no scp command at the beginning and end
             | of each job. I'm sure you could work out an alternative
             | solution, this is just one such solution.
        
           | stefan_ wrote:
            | A 4K 60 Hz 30-bit-color display requires ~15 Gbps to feed.
            | The thin client thing is such a meme; all the computing
            | requirements continuously scale - there is nothing
            | "seamless" about them.
        
             | alpaca128 wrote:
              | A ton of tasks don't require any graphical data streaming;
              | in fact, aside from gaming, most resource-intensive
              | applications can be run in the terminal and thus easily
              | offloaded to another machine. Text editors are already
              | getting distributed code checkers via LSP; it wouldn't
              | surprise me if in a couple years we could run the whole
              | backend of an IDE on another computer while the local part
              | is more lightweight than most websites.
             | 
             | Then you only need to drive a 4k screen for displaying a
             | desktop and maybe video playback, which can already be
             | achieved with a passively cooled Raspberry Pi.
        
               | herbstein wrote:
                | > it wouldn't surprise me if in a couple years we could
                | run the whole backend of an IDE on another computer
                | while the local part is more lightweight than most
                | websites.
                | 
                | Check out the official SSH/Docker/WSL Remote Development
                | extensions for VSCode.
        
               | taeric wrote:
               | Not surprisingly, emacs has been able to transparently do
               | this for a long time, now.
               | 
               | Granted, this all falls flat when your network connection
               | is garbage. :(
        
               | TeMPOraL wrote:
               | Also could use some more polish around handling SSH
               | connections; I often manage to lock up TRAMP when I step
               | away from the computer for 5 minutes. It's something that
               | can probably be fixed with the right
               | ssh_config/sshd_config, but I can't for the life of me
               | figure out the right incantations.
               | 
               | For now, I just habitually do M-x tramp-cleanup-all-
               | connections whenever I stop actively working with remote
               | files, and let Emacs reconnect automatically when I save
               | a remote file or refresh a remote folder listing.
               | 
               | (Incidentally, half of my use of TRAMP went away once I
               | managed to configure a 256-color terminal on my sidearm;
               | with that, Emacs in terminal mode looks almost
               | indistinguishable from a GUI session.)
        
               | taeric wrote:
               | For me, I'm just fighting rural networking. Ever since I
               | setup control master for my ssh, I have not had too much
               | issue with the ssh.
               | 
               | I still like running emacs locally.
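                | 
                | For anyone hunting the incantation, something like this
                | in ~/.ssh/config covers both the multiplexing and the
                | idle drops (host name is a placeholder):

                    Host buildbox
                        ControlMaster auto
                        ControlPath ~/.ssh/cm-%r@%h:%p
                        ControlPersist 10m
                        ServerAliveInterval 30
                        ServerAliveCountMax 3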
        
             | mrmuagi wrote:
              | I think if you were to send the raw frames, sure, but
              | wouldn't they just send the delta between frames (only the
              | areas which changed), with some sort of compression
              | algorithm and parameters, like VNC does?
        
             | junipertea wrote:
              | I agree that computing requirements scale as we go, but
              | surely you don't need 4K at 60 Hz to browse the web? I
              | regularly play games using Steam Link now, and while it's
              | not the full quality and framerate, it's just convenient
              | (and cheaper than having a full desktop PC in every room).
              | Thin computing seems much more seamless and workable now
              | than it did in my college years with Sun thin clients.
        
             | [deleted]
        
           | liuliu wrote:
            | The described method doesn't support cross-platform execution
            | (at least not without some qemu hackery).
            | 
            | I can totally see how I would use this to transcode videos
            | from my NAS using my main workstation CPU, since both are
            | connected through a 10 Gbps switch (need to figure out how
            | to integrate this with the Plex transcoding flow).
        
         | AdamJacobMuller wrote:
         | > for many workloads, isn't the network I/O going to outweigh
         | the benefits of remote CPU cycles?
         | 
         | Definitely, but, there are also many workloads (and situations)
         | where it doesn't and this works great.
        
           | ddtaylor wrote:
           | One that comes to mind is intense build jobs.
        
         | baybal2 wrote:
         | > Maybe compiling (which was one of the main use-cases behind
         | Sun's SGE)?
         | 
          | The problem is that your task will still be limited by the
          | power of a single machine.
          | 
          | You need a means to run a task across more than one server,
          | and preferably a great many.
          | 
          | It's possible, but completely impractical, to run a shared
          | memory system across a network.
          | 
          | If somebody figures out how to do that in a way that's usable
          | in practice, without a complete rewrite of the program, that
          | will be really impressive, and ACM medal material.
        
           | stonogo wrote:
           | ScaleMP, SGI UV, and HP Superdome all disagree with you.
           | Arguably, plain old RDMA qualifies under this phrasing.
        
             | ericbarrett wrote:
             | I took the parent comment as meaning on a regular
             | Ethernet+IP network. SGI UV and HP Superdome are semi-
             | exotic hardware architectures. ScaleMP requires Infiniband,
             | which is at least 10 times, if not 100, the price of
             | Ethernet; it also requires fiber optic cable for runs > 5
             | meters.
        
               | zozbot234 wrote:
               | I thought modern variations of Ethernet (Gbps+ range)
               | were competing with traditional uses of Infiniband
               | nowadays.
        
               | nonameiguess wrote:
               | I should add to my above comment about the IC doing this
               | kind of thing in Kubernetes on EC2 instances that they
               | utilize the Amazon Elastic Fabric Adapter, which bypasses
               | the ethernet NIC to avoid all of the queueing delay you
               | can't tolerate in applications that need to be basically
               | unaware that some threads are local and some are remote.
               | And obviously they make sure your servers are in the same
               | or physically nearby racks.
        
             | Jouvence wrote:
              | UV and Superdome are custom hardware for huge NUMA boxes,
              | so not a great comparison. ScaleMP is definitely valid
              | though - a real shame it is stuck behind a license fee;
              | it would be interesting to experiment with.
        
             | baybal2 wrote:
              | Even hardware RDMA is snail-slow for launching a non-
              | adapted piece of software over a network on shared memory.
              | 
              | HPC programs which do use shared memory were written with
              | network latency in mind from the start.
        
           | nonameiguess wrote:
           | It's not trivial, but it's certainly possible. <Un-named IC
           | customer> has been doing level 1 image formation processing
           | for decades using shared memory across a network. They're
            | even doing it in Kubernetes now, utilizing libraries from
            | Sandia National Laboratories that use a volume mount of an
            | in-memory filesystem into running containers, so you can
            | scale your compute cluster up and down as your memory load
            | varies. This basic setup is powering the entire US
            | geointelligence enterprise. They call it the Tumbo
            | Framework. There's a press release alluding to it here:
            | https://www.sandia.gov/news/publications/labnews/articles/20...
        
         | jvanderbot wrote:
         | Sometimes I like to transcode videos I recorded on my windows
         | machine during a gaming session to post to our group chat.
         | 
         | If I have to do those steps locally, I can't be gaming. That's
         | not cool.
         | 
          | Still, it kinda makes more sense to just fuse/smb the videos
          | and have the remote processing node pull, process, and push.
        
         | ipsum2 wrote:
         | Compilation is a good example of where using remote machines
         | can help substantially:
         | https://docs.bazel.build/versions/master/remote-execution.ht...
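          | 
          | In .bazelrc that's a couple of flags (the endpoint is a
          | placeholder for whatever remote-execution service you run):

              build --remote_executor=grpc://buildfarm.example.com:8980
              build --remote_cache=grpc://buildfarm.example.com:8980
              build --jobs=200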
        
         | jxy wrote:
          | For a typical HPC cluster, there is a shared filesystem with
          | exactly the same file structure across the cluster over a high
          | bandwidth network, which basically replaces "a clever use of
          | fuse/sshfs and chroots" with something far better. OP's project
          | is clearly useless in an HPC setting.
          | 
          | On the other hand, OP's project is exactly a Linux/POSIX
          | version of what Plan 9 was designed to do and has been doing.
        
           | mbreese wrote:
            | _> project is clearly useless in an HPC setting_
           | 
           | Which is why I'm curious about the applications they had in
           | mind. I know that my workflow isn't typical for most people,
           | so hopefully others have better uses in mind.
           | 
          | For me, I just submit ad hoc jobs to SLURM and call it a day.
          | Not everyone has access to an HPC cluster or is comfortable
          | setting such a system up for a small home cluster.
           | 
           | Anything (like this project) that makes HPC-like processing
           | more accessible, I'm interested in.
        
             | Overv wrote:
             | When I developed this I was very much thinking of using
             | this to run one-off compute heavy tasks, e.g. compressing a
             | video, running scientific computations or rendering
             | something in Blender. You would just rent a VM for a few
             | hours with the hardware you need, set it up in a few
             | minutes, run your job there and delete it again.
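              | 
              | So the workflow is roughly (VM address is a placeholder):

                  outrun root@203.0.113.7 ffmpeg -i in.mp4 out.mp4
                  outrun root@203.0.113.7 blender -b scene.blend -f 1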
        
         | kelnos wrote:
         | > _for many workloads, isn 't the network I/O going to outweigh
         | the benefits of remote CPU cycles?_
         | 
         | Yes, the README addresses this:
         | 
         | > _File system performance remains a bottleneck, so the most
         | suitable workloads are computationally constrained tasks like
         | ray tracing and video encoding. Using outrun for something like
         | git status works, but is not recommended._
        
         | npsimons wrote:
          | It reminds me of GNU Parallel[0], so I have to imagine it fits
          | some use cases. I stumbled upon Parallel when I was trying to
          | do user-level "distributed" computing at the day job, running
          | hundreds of simulations in a shorter amount of time. I never
          | really did get it working (I had a hard enough time just
          | ginning up a Python program to orchestrate test cases across
          | local cores; a full run of tests still took 3.5 hours), and
          | they're currently trying to get a SLURM cluster set up while
          | I'm debugging NVidia binary drivers and overloading our GitLab
          | instance.
         | 
         | [0] - https://www.gnu.org/software/parallel/
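          | 
          | For the curious, Parallel's remote mode looks like this
          | (--trc transfers each input, returns the named output, and
          | cleans up afterwards):

              parallel -S server1,server2 --trc {.}.mkv \
                  'ffmpeg -i {} {.}.mkv' ::: *.mp4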
        
       | nikhilgk wrote:
       | Can this be used for building a convergent, dockable phone? What
       | I mean is - if I have, say, a Pine Phone, can I build a dock with
       | its own CPU and RAM, so that when I dock the phone, I get the
       | extra horsepower from the dock, but without the dock, everything
       | still works, albeit a little slower?
        
       | seiferteric wrote:
        | Hmm, interesting idea. I was working on something similar a
        | couple weeks ago after all the "Actually Portable Executable"
        | stuff. My idea was to be able to run local programs, like your
        | favorite fancy shell, on a remote machine or container that does
        | not have them installed (think lightweight containers you need
        | to work in). The idea was to have a parent program that runs
        | your shell or other client program under ptrace to intercept and
        | proxy all syscalls to a small client on the remote
        | machine/container. So the code would be running locally, but all
        | of the syscalls would be running remotely. I actually got it
        | somewhat working, but gave up when I realized the difficulty of
        | memory and file access. Files in particular were hard, since I
        | couldn't disambiguate whether a file access was for a "local" or
        | "remote" file. Also, in the past I did something similar for
        | Python programs: https://github.com/seiferteric/telepythy
        
       | michaelpb wrote:
        | Nice! This is a little like one-off High Throughput Computing.
       | Could be useful for both media processing, and also I imagine
       | some stuff with large data sets as well.
       | 
       | A long time ago I contributed to HTCondor, which permits long-
       | running processes to be automatically shuffled between N
       | computers: https://research.cs.wisc.edu/htcondor/
        
       | phaedrus wrote:
       | I wonder if this works for the Linux version of Dwarf Fortress?
       | It would be an ideal workload - requires a lot of CPU but doesn't
        | do a whole lot of disk or screen I/O. It would be a great way to
        | run DF on a laptop.
        
         | Overv wrote:
         | I love this idea! I've never played Dwarf Fortress, but I just
         | tested it on a fresh vps by setting [SOUND:NO] and
         | [PRINT_MODE:TEXT] in df/data/init/init.txt and it seems to work
         | fine!
        
       | trollied wrote:
       | I'm waiting for the Slashdot Beowulf Cluster comments to start
       | rolling in...
        
       | jacquesm wrote:
       | on -n somenode /bin/bash
       | 
       | It's interesting how stuff that is totally common and considered
       | so normal it isn't even mentioned is special elsewhere and can be
       | seen as a novelty. The future not being distributed equally also
       | holds for the IT world.
        
       | jksmith wrote:
        | So arbitrage it for money. This is the future that threatens the
        | need for centralized power. Get a smart washing machine to run
        | Linux. Buy cycles (pun) when it's idle.
        
       | generalizations wrote:
       | I wish plan9 had taken off. This is such cool functionality that
       | comes built into plan 9.
        
         | MisterTea wrote:
          | Plan 9 is still plenty alive and kicking. My profile has all
          | the links you need to get started. I recommend 9front for PC
          | hardware, and either 9front or Miller's Pi image (based on
          | vanilla Plan 9), which supports the Pi's wifi.
          | 
          | cpu/rcpu provides the remote cpu session, which can act like
          | ssh in a way. Plan 9 doesn't do "remote" control but instead
          | imports/exports resources directly. Everything is done through
          | 9P. Very zen OS.
        
       | rcarmo wrote:
       | Genius stuff. Almost Plan9-like in spirit.
        
       | hojjat12000 wrote:
        | This is very clever. I know that if you are using this solution
        | a lot for the same purpose (let's say rendering), then it would
        | be beneficial to come up with a more efficient setup for your
        | particular application.
        | 
        | But this could be very useful for ad hoc, spur-of-the-moment
        | things. Let's say I have a few servers that I have access to.
        | Whenever I need to do something computation-heavy, it could be
        | something different every time: sometimes transcoding a video,
        | sometimes compiling a library in Rust. I just outsource it to
        | the servers and keep using my laptop for other stuff.
        
         | mbreese wrote:
         | _> a more efficient setup for your particular application_
         | 
          | Yeah, once you get past ad hoc applications, one common way to
          | handle this is to set up a cluster/batch scheduler. This lets
          | your jobs be run on an available node. You don't get the same
          | chroot/fuse filesystem voodoo, but once you have a cluster,
          | an NFS mount for applications is a common setup.
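          | 
          | e.g. with SLURM the ad hoc version becomes a one-liner
          | (partition name is site-specific):

              srun --partition=compute --cpus-per-task=16 \
                  ffmpeg -i in.mp4 out.mkv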
        
         | [deleted]
        
       | ACAVJW4H wrote:
       | Somewhat related: https://github.com/StanfordSNR/gg
        
         | hrez wrote:
          | I thought Stanford's gg was along the same lines as well. Here
          | is a link to the USENIX paper:
          | https://stanford.edu/~sadjad/gg-paper.pdf
        
       | dekhn wrote:
        | Reminds me of MOSIX and Condor, which used different mechanisms
        | to run commands remotely.
        
       | nimbius wrote:
       | how is this different than distcc?
        
         | bityard wrote:
         | According to the README, it's different in almost every way
         | imaginable.
        
       | mrami wrote:
       | This reminds me of playing with openMosix and ClusterKnoppix back
       | in the day. The kernel would take care of migrating the processes
       | and balancing the load automagically...
        
         | eeZah7Ux wrote:
         | It's pretty crazy that we don't have something similar now. And
         | possibly with better security.
        
         | marcodiego wrote:
         | Maybe popcornlinux http://popcornlinux.org/ can give us that
         | back.
        
         | jaboutboul wrote:
          | Ah man, OpenMosix. What a great piece of software that was.
          | Many people might not know, but the developer behind OpenMosix
          | founded a company around it called Qlusters, which was the
          | predecessor to Xen/XenSource, which he also founded; he then
          | went on to found KVM/Qumranet. To say he left a mark is an
          | understatement.
        
       | Abishek_Muthian wrote:
       | Well done.
       | 
        | I've frequently thought over the past decade that consumer-level
        | distributed computing is the biggest casualty of closed,
        | proprietary software/hardware.
        | 
        | Consider compressing several terabytes of disjoint data to a zip
        | file using the compute resources of a PC, smartphone, tablet,
        | etc. at the same time. I've seen some subtle attempts to make
        | use of distributed computing in the consumer space, like Apple's
        | Compressor for video encoding.
        | 
        | Of course, distributed computing has been a staple in scientific
        | research for a long time. I personally use dask and modin for
        | some data exploration activities when I feel the urge for some
        | distributed computing. I wanted to check out Julia's distributed
        | computing capabilities, but it required a similar setup for all
        | nodes, whereas I'm interested in cross-platform (architecture)
        | setups only.
        
         | nonameiguess wrote:
          | This is sort of a plot point in Silicon Valley, where the Pied
          | Piper team is trying to figure out where to physically store
          | files they keep as a remote backup/sync service for a mobile
          | app, and they end up going with a fully distributed peer-to-
          | peer solution that uses no servers at all and just stores file
          | shards directly on the phones of all of their users - like the
          | CERN shared CPU time project, but for disk space.
        
           | Abishek_Muthian wrote:
           | Thanks, I haven't watched that show. I'll try to watch that
           | episode or season; don't know how that show is structured.
           | 
            | Common network storage access removed most of the headache
            | with distributed computing in my explorations on that
            | subject (OP is using a common network share to execute
            | programs as well).
        
         | astrange wrote:
         | Distributed computing is not super useful for video encoding
         | (although it's certainly kind of useful) because there's
         | overhead to ship all the raw video around and put it back
         | together again. If you have multiple files to encode, you might
         | as well just run one on each machine.
        
       | FeepingCreature wrote:
       | Oh damn, I actually just needed this today. Nice.
        
       | mrozbarry wrote:
       | Just noticed in the readme:
       | 
       | > No need to first install the command on the other machine.
       | 
       | and
       | 
       | > It must be installed on your own machine and any machines that
       | you'll be using to run commands. On those other machines you must
       | make sure to install it globally, such that it can be started
       | with a command like ssh user@host outrun.
       | 
       | Not sure if the readme is just out of date, or if I'm
       | misunderstanding the initial statement about the _other machine_.
        
         | ddtaylor wrote:
         | In this context "the command" is something like ffmpeg, whereas
         | both machines do need the outrun program itself installed.
        
         | tsar_nikolai wrote:
         | You need to install outrun on the remote machine but you don't
         | need to install the target command on the remote machine.
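          | 
          | i.e. something like this (assuming the pip package name; the
          | remote install must be globally visible per the README):

              pip3 install outrun                          # local machine
              ssh user@server sudo pip3 install outrun     # remote machine
              outrun user@server ffmpeg -i in.mp4 out.mkv  # ffmpeg only local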
        
       | codezero wrote:
       | Ok, now someone combine this with Actually Portable Executables
       | just for kicks.
       | 
       | https://justine.lol/ape.html
        
       | marcodiego wrote:
        | I tried an openMOSIX cluster back in the day when it was
        | fashionable. Cluster-Knoppix could transform a bunch of old PCs
        | into an instant SSI cluster with almost zero configuration
        | needed. Though I know it is not the most efficient way, I miss
        | the simplicity of it.
        | 
        | A shell script that simply launches a new process on the most
        | idle node of a network would be enough to get back some of that
        | experience. Live migration would be even better, almost perfect
        | for that. I hope some day I can easily build a Beowulf cluster
        | out of my ARM SBCs with an experience similar to the old days of
        | Cluster-Knoppix.
        
         | zozbot234 wrote:
         | Linux is getting features for process checkpointing and
         | migration in the mainline kernel, see the CRIU (checkpoint and
         | restore in userspace) patchset. It needs proper namespacing of
         | all system resources, but we're not far from that with all the
         | container-enabling features that Linux has.
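          | 
          | The userspace side is usable today; a sketch (restoring on a
          | different machine additionally needs an identical file layout
          | and matching resources):

              criu dump -t 12345 -D /tmp/ckpt --shell-job --leave-stopped
              rsync -a /tmp/ckpt/ otherbox:/tmp/ckpt/
              ssh otherbox criu restore -D /tmp/ckpt --shell-job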
        
         | bloopernova wrote:
         | I remember back in the early 2000s thinking how cool it was
         | that I had distributed C compilers helping my Gentoo installs
         | along.
         | 
         | I really wanted to set up a university computer lab such that
         | when computers are idle, their CPU cycles are donated to some
          | massively parallel simulations. It worked great on a small
          | scale, but I moved on before we implemented it on a larger
          | scale.
        
           | tyrrvk wrote:
            | You just described HTCondor! Not the parallel part per se...
           | 
           | https://research.cs.wisc.edu/htcondor/
        
           | astrange wrote:
            | Well, that's just F@H/R@H/BOINC. But I think there are some
            | ethical considerations before you sign up all your computers
            | for that, or at least you should check the electricity cost.
        
         | JoshTriplett wrote:
         | What would you consider the current state-of-the-art in Open
         | Source SSI cluster software? The vast majority of solutions
         | seem to have gone fallow.
        
           | marcodiego wrote:
           | Popcorn linux: http://popcornlinux.org/
        
       ___________________________________________________________________
       (page generated 2021-03-18 23:00 UTC)