[HN Gopher] Show HN: OK-Robot: open, modular home robot framewor...
       ___________________________________________________________________
        
       Show HN: OK-Robot: open, modular home robot framework for pick-and-
       drop anywhere
        
       Hi all, excited to share our latest work, OK-Robot, which is an
       open and modular framework to perform navigation and manipulation
        with a robot assistant in practically any home without having to
       teach the robot anything new! You can simply unbox the target
       robot, install OK-Robot, give it a "scan" (think a 60 second iPhone
       video), and start asking the robot to move arbitrary things from A
        to B. We already tested it out in 10 home environments in New York
        City, and one environment each in Pittsburgh and Fremont.  We built
        everything on top of the current best machine learning models, so
        things don't quite work perfectly all the time, and we are hoping
        to build it out together with the community! Our code is open:
       https://github.com/ok-robot/ok-robot and we have a Discord server
       for discussion and support: https://discord.gg/wzzZJxqKYC If you
       are curious what works and what doesn't work, take a quick look at
       https://ok-robot.github.io/#analysis or read our paper for a
       detailed analysis: https://arxiv.org/abs/2401.12202  P.S.: while
       the code is open the project unfortunately isn't fully open source
       since one of our dependencies, AnyGrasp, has a closed-source,
        educational license. Apologies in advance, but we used it since it
        was the best grasping model we could get access to!  Would
       love to hear more thoughts and feedback on this project!
        
       Author : MahiShafiullah
       Score  : 318 points
       Date   : 2024-02-23 17:23 UTC (5 hours ago)
        
 (HTM) web link (ok-robot.github.io)
 (TXT) w3m dump (ok-robot.github.io)
        
       | mdfriefeld wrote:
       | congrats on the awesome work!
        
         | MahiShafiullah wrote:
         | Thank you!
        
       | dlivingston wrote:
       | That's very cool. I have almost no experience with robotics, so
       | excuse the silly questions:
       | 
       | - How does it know what objects are? Does it use some sort of
       | realtime object classifier neural net? What limitations are there
       | here?
       | 
       | - Does the robot know when it can't perform a request? I.e. if
       | you ask it to move a large box or very heavy kettlebell?
       | 
       | - How well does it do if the object is hidden or obscured? Does
       | it go looking for it? What if it must move another object to get
       | access to the requested one?
        
         | fishbotics wrote:
         | Disclaimer: I'm not one of the authors, but I work in this
         | area.
         | 
         | You basically hit the nail on the head with these questions.
         | This work is super cool, but you named a lot of the limitations
         | with contemporary robot learning systems.
         | 
         | 1. It's using an object classifier. It's described here
         | (https://github.com/ok-robot/ok-robot/tree/main/ok-robot-
          | navi...), but if I understand it correctly, they are basically
          | using a ViT model (an image classification model) to do some
          | labeling of images and project them onto a voxel
         | grid. Then they are using language embeddings from CLIP to pair
         | the language with the voxel grid. The limitations of this are
         | that if they want this to run on the robot, they can't use the
         | super huge versions of these models. While they could use a
         | huge model on the cloud, that would introduce a lot of latency.
         | 
         | 2. It almost certainly cannot identify invalid requests. There
         | may be requests that are not covered by their language
         | embeddings, in which case the robot would maybe do nothing. But
         | it doesn't appear that this system has any knowledge of
         | physics, other than the hardware limitations of the physical
         | controller.
         | 
          | 3. Hidden? Almost certainly wouldn't work. The voxel labeling
          | relies on visual input, so without visual info an object's
          | voxels can't be labeled. Also, as far as I can tell, it
         | doesn't appear to have very complex higher-order reasoning
         | about, say, that a fork is in a drawer, which is in a kitchen,
         | which is often in the back of a house. Partially obscured? That
         | would be subject to the limitations of the visual classifier,
         | so it might work. ViT is very good, but it probably depends on
         | how obscured the object is.
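The pairing step described above, matching a language embedding against per-voxel image embeddings, amounts to a cosine-similarity lookup. A minimal sketch (not the project's actual code; the arrays here are random stand-ins for real CLIP features):

```python
import numpy as np

def best_voxel(voxel_embeddings, query_embedding):
    """Return the index of the voxel whose embedding best matches the query.

    voxel_embeddings: (N, D) array, one CLIP-style feature per voxel.
    query_embedding:  (D,) array, CLIP-style text feature for the query.
    """
    v = voxel_embeddings / np.linalg.norm(voxel_embeddings, axis=1, keepdims=True)
    q = query_embedding / np.linalg.norm(query_embedding)
    return int(np.argmax(v @ q))  # highest cosine similarity wins

# Toy demo with random stand-ins for real CLIP features.
rng = np.random.default_rng(0)
voxels = rng.normal(size=(1000, 512))
query = voxels[42] + 0.01 * rng.normal(size=512)  # near-duplicate of voxel 42
print(best_voxel(voxels, query))  # → 42
```

In a real system the (N, D) matrix comes from the vision model and the query vector from the text encoder; the lookup itself is cheap compared to running those models, which is where the on-robot model-size limitation bites.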
        
           | ativzzz wrote:
           | > While they could use a huge model on the cloud, that would
           | introduce a lot of latency.
           | 
            | With all the recent work to make gen. AI faster (see groq for
           | LLM & fal.ai for stable diffusion), I wonder if the latency
           | will become low enough to make this a non-issue or at least
           | good enough
        
             | devmor wrote:
             | If AI/ML home systems become significantly common for
             | consumers before the onboard technology is capable, I could
              | see home caching appliances for LLMs.
             | 
             | Like something that sits next to your router (or more
             | likely, routers that come stock with it).
        
               | ativzzz wrote:
               | Does a robot that moves things in a home need this? The
               | challenging decisions are (off the top of my head):
               | 
                | 1. what am I picking up? - this can be AI in the cloud as
                | it does not need to be real time
                | 
                | 2. how do I pick it up? - this can be AI in the cloud as
                | it does not need to be real time - the robot can take its
                | time picking the object up
                | 
                | 3. after pickup, where do I put the object? Localization
                | while moving probably needs to be done locally, but
                | identifying where to put it down can be done via the
                | cloud; again, no rush
                | 
                | 4. how do I put the object down? Again, the robot can
                | take its time
               | 
               | You can see in the video the robot pauses before
               | performing the actions after finding the object in its
               | POV, so real time isn't a hard req for a lot of these
        
           | futhey wrote:
           | The cool thing is that there are solutions to all of these
           | problems, if the more basic problems can be solved more
           | reliably to prove the underlying technology works.
        
         | MahiShafiullah wrote:
          | User fishbotics already answered a lot of these questions
         | downstream, but just confirming it here as an author of the
         | project/paper:
         | 
         | > - How does it know what objects are? Does it use some sort of
         | realtime object classifier neural net? What limitations are
         | there here?
         | 
         | We use Lang-SAM (https://github.com/luca-medeiros/lang-segment-
         | anything) to do most of this, with CLIP embeddings
         | (https://openai.com/research/clip) doing most of the heavy
         | lifting of connecting image and text. One of the nice
         | properties of using CLIP-like models is that you don't have to
         | specify the classes you may want to query later, you can just
         | come up with them during runtime.
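The "no fixed class list" property mentioned above boils down to comparing one image feature against text features computed at query time. A toy sketch with random stand-in features (a real system would get them from CLIP's image and text encoders):

```python
import numpy as np

def classify_open_vocab(image_feat, text_feats, labels):
    """Pick whichever runtime-chosen label best matches the image feature.

    Nothing is fixed in advance: `labels` and `text_feats` can be anything
    the user comes up with at query time.
    """
    sims = text_feats @ image_feat / (
        np.linalg.norm(text_feats, axis=1) * np.linalg.norm(image_feat))
    return labels[int(np.argmax(sims))]

rng = np.random.default_rng(1)
feats = rng.normal(size=(3, 64))               # stand-ins for text embeddings
labels = ["takis", "kettlebell", "mug"]        # invented at query time
image = feats[2] + 0.05 * rng.normal(size=64)  # image resembling "mug"
print(classify_open_vocab(image, feats, labels))  # → mug
```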
         | 
         | > - Does the robot know when it can't perform a request? I.e.
         | if you ask it to move a large box or very heavy kettlebell?
         | 
         | Nope! As it is right now, the models are very simple and they
          | don't try to do anything fancy. However, that's why we opened
          | up our code! So the community can build smarter robots on top of
         | this project that can use even more visual cues about the
         | environment.
         | 
         | > - How well does it do if the object is hidden or obscured?
         | Does it go looking for it? What if it must move another object
         | to get access to the requested one?
         | 
         | It fails when the object is hidden or obscured in the initial
         | scan, but once again we think it could be a great starting
         | point for further research :) One of the nice things, however,
          | is that we take full 3D information into consideration, and so
         | even if some object is visible from only some of the angles,
         | the robot has a chance to find it.
        
       | khnov wrote:
        | It is open source but still costs nearly $25k. Why is it that
        | expensive?
        
         | RIMR wrote:
         | The software is open source. The hardware is proprietary and
         | protected by patents.
        
           | dheera wrote:
           | I don't think patents are what is making this specific
           | hardware expensive. Rather it's just a lack of market and
           | supply chain scaling.
        
         | kscottz wrote:
         | Since when does open source mean cheap?
         | 
         | Labor isn't free. Building custom PCBs and hardware in low
         | quantity isn't cheap. Building, calibrating, and testing robots
         | isn't cheap.
        
           | yjftsjthsd-h wrote:
            | Now in all fairness, open source _tends_ to mean cheap_er_
           | because it _does_ reduce how much has to be invented in-
           | house, and also (sometimes) because it lets you crowd source
           | free labor. In software, that can lead to stuff getting
           | completely built for free (or close) because the base costs
           | are low and mostly consist of labor that some people might be
            | willing to do for free. In _hardware_, it's likely that open
           | source still reduces the costs, but... you can make a
           | thousand copies of a library for free; making a thousand
           | copies of a part is never going to be free.
        
         | TaylorAlexander wrote:
         | It's a low volume product that has to support the salaries of
         | the engineers who create and maintain the product.
        
           | syedkarim wrote:
           | I'm always surprised by how difficult it is for people to
           | understand this.
        
             | TaylorAlexander wrote:
             | Yep. I was recently looking at building an art project that
             | required a gas valve that can freely rotate while under
             | pressure.
             | 
             | If you need one gas line, you can get a swivel for a normal
             | shop hose reel for $15. If you need two gas lines on the
             | same axis, the part is similar but way lower volume, so you
             | have to go to a specialty supplier and the price is $350.
             | 
             | The business that makes hose reel swivels makes lots of
             | high volume parts, has lots of competition, and needs to
             | charge close to cost to sell them. The business that makes
             | specialty gas swivels for industry that offers multiple gas
             | lines in one swivel, lots of different options, and makes
             | them higher quality needs to charge a lot more to keep
             | their business operational.
        
         | claytonwramsey wrote:
          | By robot standards, $25k is not bad. Most mobile-manipulator
         | robots cost 5 digits or more, mostly due to the small market,
         | high materials and engineering cost, and general headaches of
         | robot building.
        
         | nico wrote:
         | Where's the price? Do you have a link to the product page?
         | 
         | Thank you
        
           | MahiShafiullah wrote:
           | Here is the page for Hello Robots: https://hello-robot.com/
        
       | jamesdwilson wrote:
       | I'd love to see this be usable as potentially a mower and or
       | vacuum/mop with different swappable components.
        
         | bozhark wrote:
         | Make it micro.
         | 
         | I want mini robots cleaning dust and debris, silently and out
         | of my way. I don't want macro bots getting in my way
        
           | observationist wrote:
           | I agree, micro bots would be best to handle the dirty jobs.
        
             | jamesdwilson wrote:
              | okay, so we all agree a Matryoshka-doll-like system,
              | similar to SD cards and microSD cards, is appropriate then.
        
           | mikeegg1 wrote:
            | That's what Zorg thought in _The Fifth Element_.
        
         | happytiger wrote:
          | Not to counterpoint, but just for the sake of discussion, I
          | kind of want the opposite, maybe for similar reasons. I want
          | flexible robots that can replace my human labor. I don't want
          | robots that are obligate specialists.
         | 
         | Laundry, cooking, dishes, sweeping, vacuuming, and other
         | constantly recurring tasks are what I would love to see
         | automated not just a "robot that sweeps" like the market has
         | been trying to sell me.
         | 
         | Ever since I read _the second shift_ book about the unpaid
         | extra 40 hour week women work doing domestic tasks I've dreamed
         | of robots replacing that for humanity. It's a massive cost to
         | people individually and humanity overall, and kind of a silent
         | epidemic.
         | 
         | It's crazy but freeing up half of humanity from the drudge work
         | of daily chores is one of the most obvious disruptive
         | technology plays. I rarely hear people put the robot revolution
         | in this context, but I very much think we should start doing
         | so.
         | 
         | Here's a good overview for the uninitiated:
         | 
         | https://www.americanprogress.org/article/unequal-division-la...
        
           | sonofhans wrote:
           | I applaud -- for real -- your ideas and feelings here. I've
           | had similar thoughts my whole life, growing up reading golden
           | age science fiction.
           | 
           | But I worry very much that tools like this will be used
           | primarily to increase corporate profits and reduce money
           | spent on humans, rather than remove drudgery from people's
           | lives and allow them to do things more aligned with their
           | goals and natures.
           | 
           | E.g., if we make a cleaning robot, hotels will replace half
           | their staff -- what will these people do for a living? Work
           | in an AI sweatshop, categorizing images of child abuse?
           | 
           | Old-school science fiction often proposed that we'd be
           | entering a new age of art and leisure, as robots and AI take
           | over menial tasks. In fact today I think we're seeing AI and
           | robots -- in part -- taking jobs from humans, and in order to
           | provide entertainment and economic leverage to richer humans.
           | 
           | It's making me reevaluate all that old science fiction, as it
           | seemed to require an invisible 90% of the population
           | basically working for the AIs so that the AIs can curate a
           | great life for a stratospherically-wealthy minority.
        
             | aidenn0 wrote:
             | > Old-school science fiction often proposed that we'd be
             | entering a new age of art and leisure, as robots and AI
             | take over menial tasks. In fact today I think we're seeing
             | AI and robots -- in part -- taking jobs from humans, and in
             | order to provide entertainment and economic leverage to
             | richer humans.
             | 
             | It was also predicted in the mid 20th century that rising
             | productivity would create a shorter work-week; instead we
             | have figured out how to prevent workers from being
             | compensated for higher productivity.
             | 
             | https://www.epi.org/productivity-pay-gap/
        
             | KerrAvon wrote:
             | I don't think you should reevaluate it in that context.
             | Golden age science fiction assumed what we seem to be now
             | calling AGI and still don't know how to create. What we're
             | now calling artificial intelligence (thanks to OpenAI) is
             | effectively an advanced version of autocomplete with
             | infinite computing power behind it. It's incredibly
             | inefficient, and if we ever build AGI we'll look back at AI
             | like people looking back at the earliest manual typewriters
             | without shift keys or lowercase.
             | 
             | For golden age sci fi theories of human work vs leisure to
             | actually take hold, we need universal basic income, or some
             | other monetary theory that allows us to value other people
             | for being alive rather than solely for being feudal slaves
             | of deranged billionaires.
             | 
             | "Hotel maid" as a job really shouldn't exist when robots
             | can do it better and more consistently (which isn't true
                | yet). At that point, not before, should the job be
                | considered beneath human dignity. But we definitely need
                | an answer for
             | what happens to the newly undignified human.
        
               | dreamworld wrote:
               | Dignity should be intrinsic, not a result of labor. Of
               | course, labor is today necessary, (and in a way will
               | always be necessary by someone), so working is indeed
               | dignified to the extent it helps other people.
               | 
                | I think chores aren't necessarily terrible drudgery.
                | But having a robot as an option, you could do them as a
                | sort of hobby if and when you want. That seems nice.
               | 
               | I think we also will need to develop maturity to deal
               | with our free time, but it's probably not the disaster
                | I've seen many claim (that we lose meaning) -- maybe
               | their way to cope with an unfair world? or my way to cope
               | with laziness.
               | 
               | The main thing is how to protect ourselves from rulers
               | when we aren't necessary for labor. It seems like a
               | difficult but solvable problem. Being able to choose how
               | much to work (and play) is the dream!
        
           | yjftsjthsd-h wrote:
           | I agree that generalist robots would be _better_ , but
           | building them is really hard (which we know, because we've
           | been trying to build them for decades now). So I think
           | piecemeal robots are the happy-enough medium that we can
           | build to start automating away work today (while we hopefully
           | keep working on the general case).
        
       | bshah_ wrote:
        | The failure analysis is super well done, nice work! Curious
        | what qualifies as a hardware failure, e.g. there are 5 trials
        | where the "Realsense gave bad depth", and how that's determined.
        
         | MahiShafiullah wrote:
         | Thanks! We collect all the data and analyze it post-facto to
         | see what may have caused the failure. For example, on the 5
         | trials you mentioned, the Realsense gave wrong depth on
         | transparent or semi-transparent objects, and so the pointcloud
         | generated from the robot's head camera was simply wrong.
        
       | Geisterde wrote:
        | I like the presentation of this: here's 10 different environments
       | and multiple videos of each.
        
       | alx__ wrote:
       | This is rad. I would totally buy a 25k robot if I could train it
       | to fold and put away my laundry (serious)
        
         | riedel wrote:
          | You might have to pair this one with a second one for folding [1]
         | 
         | [1] https://pantor.github.io/speedfolding/
        
           | MahiShafiullah wrote:
           | In fact, Hello Robot already shared a teleoperated demo of
           | folding shirts!
           | https://www.youtube.com/watch?v=QtG8nJ78x2M&t=180s But yes, a
           | second arm is needed.
        
         | modeless wrote:
         | I would buy a $100k robot if it could do the laundry and the
         | dishes and the cooking and clean up after the kids. In a
         | heartbeat.
        
       | fnordpiglet wrote:
       | This is remarkable and could be life changing for the disabled,
       | elderly, gamers, or profoundly lazy and their caretakers.
        
         | marci wrote:
         | I forgot where I saw that, but generally, improving things for
         | people with disabilities improves things for everyone, like
         | making sidewalks wheelchair friendly helps parents with a
         | stroller, or people carrying heavy stuff, walking with a cane,
         | young children on bicycles, people who can't see well...
        
           | salviati wrote:
           | I heard this from Anna Martelli Ravenscroft in her
           | presentation "Diversity as a Dependency" [0]
           | 
           | [0] https://www.youtube.com/watch?v=wOpdDxJzNkw
        
           | sandworm101 wrote:
           | >> improving things for people with disabilities improves
           | things for everyone
           | 
           | Everything has its limits. Many years ago I was involved in
           | building a series of staircases in a rock climbing area
           | inside a park. There were about a hundred steps in a handful
           | of orientations to get from the parking lot over a rocky hill
           | to the small valleys behind. The project was primarily to
           | prevent trail erosion and falls. These steps weren't going to
           | even have handrails. (Think 2x6 framed boxes filled with dirt
           | and bolted to the rock.) Then someone in government said if
           | we wanted to use donated money inside a park we would have to
           | somehow make the project wheelchair accessible. All stop.
           | Project over. No stairs were built. Access trail remained a
           | mess.
           | 
           | We were going to replicate these stairs from another climbing
           | area in BC. There is no way to make such a thing wheelchair
           | accessible.
           | 
           | https://sonnybou.ca/ssbou2001/skaha01.jpg
        
             | developerDan wrote:
             | It's so frustrating that city leaders can't even try to use
             | common sense. Where I live a parking requirement blocked a
             | restaurant from being built and our city council publicly
             | acknowledged that there isn't enough space for parking and
             | a building, but "that's the law" so they blocked it. Lazy
             | idiots.
        
             | fnordpiglet wrote:
             | In the US? I assume ADA was the kicker. A lot of folks even
             | in government don't realize the ADA isn't unthinking. If
             | the activity or environment doesn't lend itself to
             | accessibility it's not required. Cutting a wheel chair ramp
             | into a mountain face is a good example where the ADA
             | wouldn't apply because it's impractical given the
                | environment to do so. Even national parks only offer a
                | subset of activities that are ADA compliant.
        
               | sandworm101 wrote:
               | No, it wasn't an ADA thing. It was a purely local thing.
               | The local authority had adopted some resolution that no
               | further "development" would happen before they added some
               | sort of accessibility. So we couldn't move forwards even
               | using donated money. We could repair things but not make
               | substantive improvements.
               | 
               | Rock climbing areas tend to be inaccessible or at least
               | very rough terrain. Ironically, a vertical rock surface
               | can be made accessible. There are actually many disabled
               | climbers out there. But with a mixed dirt/rock/scree
               | slope you basically need to install a mile-long ramp.
        
               | fnordpiglet wrote:
               | I guess pointing at the cliff and saying that's the
               | accessible route doesn't fly eh? It's an inclined slope -
               | just very inclined. And yes there are tons of disabled
               | climbers.
        
               | sandworm101 wrote:
               | We generally understand that disabled people have a right
               | to access the spaces that everyone else does. But
               | climbing/caving is different, different than most any
               | other activity: Access to space is controlled by ability.
               | I have stood on ledges that are impossible to get to
               | without a certain set of skills. If there was a ladder or
               | a staircase, sitting on that ledge would mean nothing. We
               | can make a pool or athletic field accessible, but making
               | such a remote ledge on a cliff accessible to people
               | without those abilities isn't possible without destroying
               | the nature of that ledge. So there is always going to be
               | conflict.
        
           | jetrink wrote:
           | I've heard this called the curb cut effect. (It's a subject
           | right in 99% Invisible's wheelhouse and there is a good
           | episode about it that mostly focuses on the history of
           | literal curb cuts.)
           | 
           | 1. https://en.wikipedia.org/wiki/Curb_cut_effect
           | 
           | 2. https://99percentinvisible.org/episode/curb-cuts/
        
           | Iv wrote:
           | ... daleks
        
         | MahiShafiullah wrote:
         | Thank you! A large motivation behind this line of home-robot
         | work for me is thinking about the elderly, people with
         | disabilities, or busy parents who simply don't have enough time
         | to do it all. I am personally hopeful that we can teach AI to
         | take the jobs that no one wants rather than the jobs that
         | everyone wants :)
        
       | taco_emoji wrote:
       | I know nothing about robotics, but can someone ELI5 why the robot
       | makes so many extraneous movements? E.g. the video that shows it
       | moving Takis from the desk to the nightstand, it approaches the
       | desk, and then the arm mechanism moves all the way down (an
       | unnecessary maneuver), then rises again before reaching the level
       | needed to pick up the Takis.
        
         | throwup238 wrote:
         | A lot of those movements are there to zero out the axes so that
         | each movement starts from a known good position and orients
         | itself against the camera. Usually there's a switch that, for
         | example, senses when the body goes all the way to the bottom,
         | which is the origin for the whole positioning system. Several
         | other movements are for safety since it doesn't have a bunch of
          | cameras or really complex logic for collision avoidance, so it
          | resets to a smaller profile between moves.
         | 
          | Since even precise motors accumulate small errors over many
          | movements, re-homing like this is a best practice before
          | starting a new movement. Humans instead have a complex hand-eye
          | coordination system that has been trained all our lives (and
          | some people are better at it than others).
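A homing routine like the one described is conceptually simple. Here is a toy sketch with an invented `Axis` class (not Stretch's real API): creep toward a limit switch, then declare that position the origin so accumulated errors are wiped out:

```python
class Axis:
    """Toy single-axis model: position drifts between sessions, and a
    limit switch at the bottom gives an absolute reference."""

    def __init__(self):
        self.position = 7.3    # unknown leftover position from earlier moves
        self._switch_at = 0.0  # physical location of the limit switch

    def limit_switch_pressed(self):
        return self.position <= self._switch_at

    def home(self):
        # Creep toward the switch, then declare that point the origin:
        # every later move now starts from a known-good position.
        while not self.limit_switch_pressed():
            self.position -= 0.1
        self.position = 0.0
        return self.position

axis = Axis()
print(axis.home())  # → 0.0
```

With encoders on every motor (as suggested in the reply below), this full sweep could be replaced by a one-time zeroing at wake-up, at extra hardware cost.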
        
           | leptons wrote:
           | >A lot of those movements are there to zero out the axes so
           | that each movement starts from a known good position and
           | orients itself against the camera. Usually there's a switch
           | that, for example, senses when the body goes all the way to
           | the bottom, which is the origin for the whole positioning
           | system.
           | 
           | Ideally the "zeroing" should be done once when the robot
           | "wakes up" or only once in a while, and there should be
           | digital encoders on all motors, the position should _always_
           | be known within a tiny margin of error, and not enough to
            | cause a problem for positioning. At least that's how I'd do
            | it; I'm not sure how they built this thing.
        
             | MahiShafiullah wrote:
              | It's always a trade-off! You could have more accurate but
              | more expensive sensors and motors, or cheaper motors with
              | no sensors and higher accumulated
             | errors. Since this is more of a research project than a
             | product, we went for a cheap robot with the slower-but-
             | more-accurate approach.
        
       | chubs wrote:
       | A friend is working on a slightly related project, I'm curious
       | how they map out the room in voxels, anyone care to suggest how
       | this is done?
        
         | MahiShafiullah wrote:
          | The mapping process can be done with any RGB-D camera; we use
          | an iPhone Pro, but any Apple device with ARKit should work.
          | Once we have a sequence of RGB-D images with associated camera
          | poses, we can just backproject the pixels (and any associated
          | information, like CLIP embeddings) into voxels using the depth.
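The backprojection step can be sketched with a standard pinhole camera model. This toy version works in the camera frame only and uses made-up intrinsics; the real pipeline would also apply each frame's camera pose and attach CLIP features to the voxels:

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy, voxel_size):
    """Back-project a depth image into occupied voxel indices.

    depth: (H, W) array of metric depths; fx, fy, cx, cy: pinhole intrinsics.
    Returns the unique (ix, iy, iz) integer voxel coordinates that are hit.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx                       # pinhole model
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return np.unique(np.floor(points / voxel_size).astype(int), axis=0)

# Toy demo: a flat wall 2 m away fills a thin slab of voxels.
depth = 2.0 * np.ones((4, 4))
voxels = backproject(depth, fx=10.0, fy=10.0, cx=2.0, cy=2.0, voxel_size=0.25)
print(voxels.shape)  # → (9, 3)
```

Every voxel here shares the same depth index, as expected for a fronto-parallel wall; any per-pixel feature (like a CLIP embedding) can be carried along the same way and averaged into the voxel it lands in.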
        
       | btbuildem wrote:
       | I've been watching this project for a while now, great progress!
       | 
       | I envision an integration with a mobility aid (eg, a wheelchair)
       | for someone with limited control over their limbs. Imagine a
       | "smart" exoskeleton that can help with otherwise impossible tasks
       | -- it could be a game-changer for so many people.
        
       | k4rli wrote:
       | It's cool but what's the point for a normal person? Useful for
       | warehouses and manufacturing but I don't see myself ever needing
       | such things
        
         | polygamous_bat wrote:
         | Are elderly or disabled people "normal" in your book? Do you
         | see yourself or your loved ones growing old someday?
        
         | MahiShafiullah wrote:
         | A large motivation behind this line of home-robot work for me
         | is thinking about the elderly, people with disabilities, or
         | busy parents who simply don't have enough time to do it all. I
         | am personally hopeful that we can teach AI to take the jobs
         | that no one wants rather than the jobs that everyone wants :)
        
       | rsync wrote:
       | I very much want a stabilized platform vehicle that I can send
       | point-to-point with a payload on it.
       | 
       | So, a gyro-stabilized platform like a segway that I can send back
       | and forth from point A to point B on a not-terrible-but-rough
       | (walking path) route.
       | 
       | I have tried to stay abreast of the options in the past and have
       | never seen anything that matches this ... does anyone know if
       | there is anything new that matches this use-case ?
       | 
       | (the use-case is a tray of drinks and hors d'oeuvres that needs
       | to go from one part of a property to another without spilling ...
       | needs to be minimally all-terrain)
        
         | thebruce87m wrote:
         | You've maybe seen these already in restaurants?
         | https://www.pudurobotics.com/product/detail/bellabot
         | 
         | Not sure I've seen them take drinks though, but definitely
         | food.
        
       | owenpalmer wrote:
       | Isn't this the same as dobb-e?
       | 
       | https://dobb-e.com/
        
         | MahiShafiullah wrote:
         | No, although it has some of the same people on the team (aka
         | I'm the first author there, and my advisor is advising both
         | projects :) )
         | 
         | The primary difference is that this is zero-shot (meaning the
         | robot needs 0 (zero!) new data in a new home) but has only two
         | skills (pick and drop); where Dobb-E can have many skills but
         | will need you to give some demonstrations in a new home.
        
         | ohnit wrote:
         | The projects look related and have an author in common. Both
          | are mentioned on the website for the robot that they used:
         | 
         | https://hello-robot.com/stretch-embodied-ai
        
       | westmeal wrote:
       | Is the title a reference to OK computer or is it just something
       | you all came up with?
        
         | MahiShafiullah wrote:
          | The title has multiple meanings; some credit definitely should
          | go to OK Computer/Radiohead, but also "OK Google" for
         | controlling a home assistant, open-knowledge (OK) models, etc.
        
       | CodeWriter23 wrote:
       | Back in the day, my friend would lament not having "closetgrep"
       | to find the needed thingy stored in an overfilled closet.
        
         | drivers99 wrote:
         | I bought 40 little cardboard boxes (VATTENTRAG - they are
         | pretty cheap, and shallow enough so you don't have to go
         | digging too much) from IKEA and started putting what was in
         | each one in text files so I could literally do that (grep for
         | things). I still need to catalog 38 out of the 40 boxes though
         | so I'm reconsidering my strategy.
        
       ___________________________________________________________________
       (page generated 2024-02-23 23:00 UTC)