https://aboutideasnow.com/

about ideas now: Find people to talk to or collaborate with by searching across the /about, /ideas and /now pages of 7503 personal websites.

josephriddle.com /ideas (Updated February 26, 2024)

> Last updated on February 26, 2024¹

The following are various ideas I've had in no particular order:

### Blog posts

* Add an ideas page to website
* Favorite coffee shops
* Transcribe lessons learned car presentation

### Code projects

* Obsidian plugin for Hugo

* * *

1. The last updated date needs to be in the content for aboutideasnow.com to be able to find it. See this comment thread.

lgrando1.github.io /ideas (Updated February 26, 2024)

2024/02/26
----------

* Think about a society without centralization, self-regulated like ant and insect societies;
* Create a way to summarize society's problems into a TODO list. Ask everybody to stop talking about politics and start thinking about these problems;
* Think about how to use Artificial Intelligence to improve care in society, and not only profits;
* Give people a way to live without needing to be online.

louis.work /ideas (Updated February 22, 2024)

* A new kind of phone case called an antiphone case, which encourages its users to use their phones less
* A voting board for people to upvote disclosures they'd like from Big Tech companies
* Versions of my browser extension Nudge targeting specific sites like LinkedIn and Twitter, since extensions that focus on one site tend to grow better (based on my experience developing Unfollow Everything)
* A GitHub repository (or similar) of legal threats that independent developers have received from Big Tech
* A series of articles exploring how AI is going to affect the ad business model of Big Tech
* A totally web-based DAW for making music
* An open source version of Freedom and other website blockers
* A place for creative ethical technologists to hang out
* A relaunch of the satirical magazine I set up in my early 20s, called Underground Magazine -- except this time round, focus on the underserved world of tech satire

_Last updated 22-February-2024_

lindylearn.io /ideas (Updated January 28, 2024)

Updated Jan 28, 2024 by Peter Hagen. Here are things that I want to work on, but for which I haven't found the right collaborators yet:

1\. A search engine for people, to help find people on the internet who want to work on the same things as you. Communities (online and local) are usually how people meet each other, but it always feels so random. And how do you find those communities in the first place?

2\. A maps app without labels that gradually fills up with memories as you wander around a new neighborhood / city / country. The goal would be to help you explore the world by looking around, and not just follow directions to the next top-rated attraction or restaurant.

3\. A CRM for personal relationships that reminds you to stay in touch with the people you talked to. The trick would be to automatically sync with your chat & emails, so you don't have to manually enter information like in any other CRM that exists today.

4\. An alternative to meetup.com that helps build actual communities with members who turn up regularly. I'm already experimenting with event discovery through eventsin.app.
But I'm still unsure if there's room for another platform, or if building actual communities is more useful.

5\. Some way to automatically track nutrition from photos you take of your food, because you and I already take those pictures anyway. Similarly, a way to track expenses from receipts you get -- surprisingly there is not one good app for this yet.

6\. A platform for you to lend out your books, primarily as a way to meet people who are interested in the same things as you. It would be worth taking some lessons from peerby.com.

tjf.lol /ideas (Updated January 13, 2024)

Hi, I'm Thomas Fuller! Welcome to TJF.lol, where I showcase my projects and ideas.

* Chronofile: The world's simplest digital journal. A passion project from the first half of 2023.
* Building an Evolving Python Quine with ChatGPT (January 13, 2024): An experiment in using LLMs to reproduce and mutate a program.
* About Thomas: Even more information about me.
* Bookmarks: A curated collection of sites I like.
* Threads: Random thoughts about tech, software, urbanism, and more. (@thomas\_thomas\_thomas)
* Instagram: See what I'm up to. (@thomas\_thomas\_thomas)
* GitHub: Check out some fun projects I've worked on. (@fullthom)
* LinkedIn: Find me on LinkedIn.

jimmyislive.dev /ideas (Updated December 15, 2022)

Here is a list of ideas that I think would be great to build out as standalone products. (NOTE: I have not researched whether they already exist in the wild or not.) At different points in time I have needed something like this, so there is definitely a need. If any entrepreneur wants to adopt these ideas and run with them, go for it! (If you are a VC and want to fund it, ping me :) )

* **Visualizing System Operations** As the products we build become more and more complex, it becomes hard to explain all the different components and their dependencies. Failure modes become difficult to understand. And since systems just grow organically, documentation is woefully out of sync. But what if the system visualization were part of the code, "**architecture as code**" so to speak? Let's say that you are building a component. It has some inputs, outputs and dependencies. Let's document this in a yaml file, along with the code. This yaml is then used to populate, say, a graph db with vertices, dependency links, hierarchies etc. Throw a visualization library over it and you have a real-time view of your system architecture. Operational metrics can then be tied into this easily, e.g. if your input is from kafka and there is a large lag on that topic in kafka, display this component as red and the downstream dependencies as orange. The on-call person then has a quick way of diagnosing the problem, thereby reducing MTTR. There is a whole range of additional features you can build onto this, e.g. tie it to spinnaker for the latest updates on deploys, use it for quick onboarding of new hires, use it for capacity planning, etc. (Sketched below.)
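A minimal sketch of that "architecture as code" idea, assuming PyYAML is available; the descriptor fields, component names and lag threshold are purely illustrative, not anything the author specified:

```python
# Sketch: each component ships a small YAML descriptor next to its code; a tool
# folds the descriptors into a dependency graph and colors it from live metrics.
# Assumes PyYAML is installed; field names here are made up for illustration.
import yaml

DESCRIPTOR = """
component: billing-service
inputs:
  - topic: payments          # Kafka topic this component consumes
outputs:
  - topic: invoices
depends_on:
  - payments-gateway
"""

def load_component(text):
    """Parse one YAML descriptor into a plain dict."""
    return yaml.safe_load(text)

def build_graph(components):
    """Map each component name to the list of components that depend on it."""
    downstream = {c["component"]: [] for c in components}
    for c in components:
        for dep in c.get("depends_on", []):
            downstream.setdefault(dep, []).append(c["component"])
    return downstream

def color_components(downstream, lag_by_component, lag_threshold=10_000):
    """Color a lagging component red and its downstream dependents orange."""
    colors = {name: "green" for name in downstream}
    for name, lag in lag_by_component.items():
        if lag > lag_threshold:
            colors[name] = "red"
            for dep in downstream.get(name, []):
                if colors.get(dep) != "red":
                    colors[dep] = "orange"
    return colors

if __name__ == "__main__":
    comps = [load_component(DESCRIPTOR),
             {"component": "payments-gateway", "depends_on": []}]
    graph = build_graph(comps)
    print(color_components(graph, {"payments-gateway": 50_000}))
    # -> {'billing-service': 'orange', 'payments-gateway': 'red'}
```

A real tool would hand the colored graph to a visualization library and pull the lag numbers from live monitoring instead of a hard-coded dict.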
* **Product Sandbox** How many times have you had to decide which third-party product to purchase and had to spend multiple man-weeks evaluating them? The big players seem to drown out the smaller ones (even though they might have better capabilities but smaller marketing budgets). What if there were an easy way for vendors to compete and for you to grade them yourself? A Product Sandbox is what you need! Let's use a real-life example: web page analytics. There are a whole bunch of vendors. What if we create a static page and add all those vendors to that page in order to visualize the metrics they provide? In this product sandbox you can then compare the capabilities of each of these vendors (maybe something as simple as iframing that vendor's product in). You can now compare the analytics provided by multiple vendors for the same asset and make a decision based on merits rather than hype. New entrants will jump on this offer, as founders always love to compete on the merits of their product. End users benefit by saving the time of not having to set up n different environments to evaluate these vendors.
* **TIL (Today I Learnt)** We keep learning new stuff, almost on a daily basis. I do (or at least I try to). This comes from tech articles, blogs, newsletters, podcasts etc. If there were a reddit / twitter style site where people just posted things they learnt about, it would become a repository of very useful info. Quick snippets of facts / learnings (along with sources) that can be voted on. Not news or social media. An oasis of learnings where users can come to satisfy their curiosity. The true fountain of youth is, after all, knowledge.

benjamincongdon.me /ideas (Updated October 25, 2020)

* A service to generate a podcast from an RSS feed using text-to-speech.
* A version of Buffer for Mastodon.
* Instapaper
  * Chrome Extension to add "Add to Instapaper" links to lobste.rs.
  * Chrome Extension to add "Add to Instapaper" links to feedly.

Fitness / Health
----------------

* Self-hosted version of MyFitnessPal.

Others' Ideas Pages
-------------------

* Jonathan Borichevskiy
* Alexey Guzey
* Gwern
* James McMurray

(Updated October 25, 2020)

guckes.net /ideas (Updated January 4, 2016)

### Eye in the Sky

just a mini-board with a camera (for input), an SD card (for storage) and wifi (for data exchange). oh.. and batteries, sure. it records via the camera when changes happen (or takes a pic every 3 secs), stores them on the SD card - and allows exchange of data for everyone. all data is encrypted on the fly - it can only be opened with the right key. possible purpose: monitor public space during demos. must allow wireless exchange of data so you don't have to climb up to where the camera is located (in case this is monitored). must be as cheap as possible as you may use them only once. ESSID must not be broadcast; you should know it's there..

* * *

### Public Reports on Transport

whenever i hear about a train that has been cancelled i cannot but think that the reports by the transport company might "forget" about it - intentionally, for the sake of "nice" statistics. so we should create a website for people to report on public transport that did not happen.

* * *

### Free Wifi Islands for Members

so many cafes and parks could be a nice place to do some work.. but the wifi always costs a lot of money - too much, actually! why not let a wifi point outwards from a site into a park and let those who pay for it participate? mind you, it's still a private thing - members only.

* * *

### Backlog Expansion

irssi word expansion from the current channel's backlog. goal: allow tab expansion for the current word prefix - taking all words from the current backlog of the current channel. options: disregard short words (below N characters). allow tab to trigger expansion of the current word prefix. additional points for: allow configuration of the trigger key. show several possible completions, allow tabbing through them or selecting one of the first ten by typing a digit from 0 to 9. order possible completions by number of occurrences in the backlog.
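A tiny sketch of that completion-ranking rule. A real plugin would be an irssi Perl script; this only shows the frequency-ordered prefix matching, with made-up sample lines:

```python
# Sketch of the backlog-completion logic described above: collect words from the
# channel backlog, drop short ones, and rank prefix matches by how often they occur.
from collections import Counter
import re

def completions(backlog_lines, prefix, min_len=4, limit=10):
    """Return up to `limit` backlog words starting with `prefix`, most frequent first."""
    words = re.findall(r"\w+", " ".join(backlog_lines).lower())
    counts = Counter(w for w in words if len(w) >= min_len)
    matches = [w for w, _ in counts.most_common() if w.startswith(prefix.lower())]
    return matches[:limit]

backlog = ["anyone tried the new kernel?", "kernel 6.8 broke my wifi",
           "kernel panic again", "keyboard works fine though"]
print(completions(backlog, "ke"))   # ['kernel', 'keyboard'], most frequent first
```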
### centericq contacts overview \[2006-02-21\]

show all centericq contacts in a "nice" way. no need to start centericq at all for that.

| LastName | FirstName | Protocol | Number/Address |
|----------|-----------|----------|----------------|
| Guckes   | Sven      | ICQ      | 38801898       |

### convert ssh manuals \[2002-06-12\]

the manuals for "ssh" really look weird on SUNs. the problem is the \*roff commands inside. they would probably have to be converted somewhat. any takers?

### MagicPoint compatible ASCII presenter based on Perl

Rationale: Perl runs on (almost) every system; presentations of text usually suffice for most purposes; compatibility with MagicPoint is nice. Idea by: Florian Cramer paragram@gmx.net \[000919\]

### LyX on ncurses

Rationale: TeX (LaTeX) is based on text input. LyX as a GUI is nice, and there is progress with Gnome/Gtk and with KDE/Qt, and even an fltk port is being worked on. But there should also be support by terminal-based tools. Hence the "ncurses support". Idea by: Florian Cramer paragram@gmx.net \[000919\] Note: Florian Cramer is sponsoring this project with $150 - see

* http://bioclox.bot.biologie.uni-tuebingen.de/mailing-archive/lyxlist/msg10224.html
* http://visar.csustan.edu/bazaar/bazaar\_catoffers.html#office \[obsolete\]

* * *

Send feedback on this page to Sven Guckes website-ideas@guckes.net

travishellstrom.com /ideas

* #12: Become an Eagle Scout
* #17: Build an Amphitheater
* #56: Start a Service Club
* #152: Write book on Peace Corps
* #185: Make Peace Corps Merit Badges
* #215: Write Enough
* #252: Republish Peace Corps book
* #275: Help New Media Become B Corps
* #412: Republish Enough
* #417: Lead Ashoka Initiative
* #425: Help Kelsa Become Consultant
* #420: Build Ashoka page
* #432: Teach MBA Course on Pivot
* #444: Redesign Capstone for Marlboro
* #455: Kipling Writer's Retreat
* #460: Create A Workshop like Nick
* #192: Help Jonathan Come to Mongolia
* #252: Create Job for Tuul at NMG
* #295: Create Position for Leslie at NMG
* #310: Write Thesis on Happiness
* #312: Create BCorps101.com
* #443: Create Ideas on My Site
* #445: Create Projects on My Site
* #446: Make Icons for Projects Page

swyx.io /ideas

For Free: Great Ideas. Lightly Used. In total, I've written 569 essays, snippets, tutorials, podcasts, talks, and notes!

### Most Popular

* Learn in Public: The fastest way to learn
* How to Create Luck: Luck Surface Area, The 4 Kinds of Luck & beyond
* Measuring DevRel: Split it into Community, Content & Product
* The Third Age of JS: The future of JS tools & infra from 2020-2030
* Eating the Cloud: On AWS vs Cloudflare
* Self-Prov. Runtimes: The final frontier of language and infra
* Why Temporal: The iPhone of System Design
* The API Economy: Why it's good, but also has a dark side
* Part Time Creators: Have a job, but don't BE your job
* Meta-Creator Ceiling: Don't play games you don't want to win

suriya.cc /ideas

Ideas
-----

### cicd

* pipelines taking too long to run is an issue. There are a lot of optimisation players coming up:
  * depot.dev: build containers faster.
  * test faster: lambdatest, browserstack and others.
* I've faced pipelines where 70% of the 2-hour pipeline time was spent on building containers, because they pull a lot of the previous layers and reassemble everything, every time.
* Being able to test the pipeline locally would be awesome for devops engineers who are trying to test their pipelines.
* There are not a lot of solutions to test infra directly.
If we think about it, testing infra should be the top priority, because infra people are constantly shifting things in blackboxed json or yaml files, and infra is the critical part that can get messed up easily.

* Something like biceplang or AWS CDK should create a template that would test your infra end to end.
* Think of this like a full-fledged integration test, but with some type of containment.
* similar stuff: https://www.datree.io/

### serverless

If ec2 made the jump from having to own a machine to just spinning one up virtually from a bigger machine managed by someone else, serverless will do the same thing to ec2: from having to spin up containers and manage their resources, to shifting that responsibility to the platform underneath. I predict that, very much like "traditional" software vs virtualised machines, hardcore software will take a while to get converted to serverless; until then it will be for tinkering, small projects and hobby projects. eg. netlify, vercel. There are also accelerators to these in the form of web assembly, which makes deployment and rollbacks for compute easy.

* Serverless eventually consistent database
  * https://www.cockroachlabs.com/blog/how-we-built-cockroachdb-serverless/
  * https://github.com/rqlite/rqlite
  * https://aws.amazon.com/rds/aurora/serverless/
* A framework to build completely serverless applications, from storage to compute.
  * Imagine a software service that is able to leverage a serverless distributed database, does computation on micro-vms, serves the user and then goes silent.
  * There definitely is scope for an abstracted framework that will be able to expose these in different formats. eg.
    * webiny, a serverless cms
    * aws chalice, but only (python x aws)
    * jets, ruby framework for serverless
* Testing serverless services is still very immature without proper tooling and a lot of emulation
* When serverless
* There would also be scope for offering traditional services, like UDP streaming, pdf/image/whatever processing, through serverless machines and offering them at a much cheaper rate.
* Resources
  * https://firecracker-microvm.github.io/

### Event driven

Event driven architectures are gaining a lot of momentum. There are a few problems inherent to that system:

* since calls are async, there are queues everywhere; this is generally addressed through a pub-sub system like kafka to relay the messages in a queue.
* Managing state across pods/instances is hard; despite kafka's guarantees, message misses do happen.
* Rolling back when something goes haywire is hard.
* Retry seems to be the hail mary in most cases.
* If the publisher is sending buggy messages and crashing the subscribers, it is inherently hard to recover.

### Integrations

* integrations with multiple services and offerings to make them homogenous or generic is something almost everybody is interested in. eg. Zapier.
* The problem with these integration services is that the APIs won't be able to handle the edge cases, so they end up being too generic.
* There is a lot of scope for integrated experiences, eg. whatsapp/discord/slack message management through a single interface for community managers.
* Kubernetes for notifications infra.
Basically a generic phone call/sms/email service which lets us switch the underlying notification providers/apis easily, like twilio, gupshup.
* Generic payments gateway that lets us do A/B testing.

### ETL

ETL solutions, either to aggregate data in some form or another:

* Services that take one format of input and transform it to another are also very much in demand. The problem is, there is no clear path to productisation other than providing the underlying infra to run these workflows or building tools to build these workflows, of which there are many.
* https://www.alteryx.com/products/designer-cloud and appian and many, many others.

### Privacy

privacy is the hot new talk of the town and there are a lot of gaps in the ecosystem.

* Cryptography x privacy
  * There are a lot of privacy-invading services coming, like needl.ai, needl.tech and others, which are basically offering a personal/personalized Knowledge Management Service (KMS). These services are powered by integrating/scraping across multiple data entrypoints, like google docs, imessage, whatsapp, telegram, slack, rss feeds etc. The amount of personal data captured by these services is incredible. While I'm sure these services will have encryption at rest, it would be really hard to run them on an encrypted database with good asymmetric encryption like the one offered by keybase.
  * For services like these, it makes sense to offer a database that is completely isolated from the infra and only accessible as an api, that encrypts while active and supports heavy operations like range queries and full text search.
  * FHE is too slow to be useful, today.
* Machine learning x privacy
  * Moving ML inferences from the server to the client.
  * There are a lot of local ML platforms like ONNX and TFLite that are enabling people to run ML models on lightweight devices like mobile phones.
  * Some form of Federated Learning, to collect the derivations instead of the data. Basically, compute will be meeting the data instead of beaming up the data and doing compute. (A toy sketch of this pattern follows after this list.)
  * prediction: Moving data around is going to be increasingly complex and difficult as countries start safeguarding their data. In those cases, training ML models remotely and then aggregating the models is a very smart way of getting the knowledge from the data, without the data itself.
  * Imagine 100s of hospitals with their patient details and X-rays for tumors siloed away; no hospital is actually going to share that data with another, but with FL and other methods it would be ridiculously simple to train models in the location where the data is and then slowly start aggregating them globally. With a bit of differential privacy and a loss of accuracy, we can develop very good prediction models.
  * https://www.thepullrequest.com/p/the-future-of-ads-privacy
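A toy sketch of that train-locally, aggregate-centrally pattern (FedAvg-style weighted averaging of model parameters; the one-parameter "model", the sites and the numbers are invented for illustration):

```python
# Sketch of federated learning as described above: each hospital fits a model on
# data that never leaves its site, and only the model weights (not the records)
# are averaged centrally, weighted by each site's sample count.
def fit_local(xs, ys):
    """Train locally: slope of y ~ w*x by least squares on this site's data only."""
    num = sum(x * y for x, y in zip(xs, ys))
    den = sum(x * x for x in xs)
    return num / den, len(xs)           # (local weight, number of local samples)

def federated_average(local_results):
    """Server step: average the weights, weighted by each site's sample count."""
    total = sum(n for _, n in local_results)
    return sum(w * n for w, n in local_results) / total

# Two "hospitals" whose raw data is never pooled:
site_a = fit_local([1, 2, 3], [2.1, 3.9, 6.2])            # local slope ~2
site_b = fit_local([1, 2, 3, 4], [2.9, 6.1, 8.8, 12.2])   # local slope ~3
print(federated_average([site_a, site_b]))                # global model, data stays put
```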
A lot of these ideas are future facing. Every future-facing idea is usually a little twist on top of whatever is standard/hot/obvious right now, which means there will always be a huge middle ground full of players who are far enough behind that they need help covering the distance between the past and the present. Which means there is some type of consulting opportunity with every single piece above as well.

snarfed.org /ideas

* Learn Erlang. Write something distributed in it.
* Port Tic Tac Toe to the XBox 360, using XNA Studio Express, and distribute it on XBox Live Community Games.
* Build network bridges between game consoles, handhelds, mobile OSes, and PCs to make it easier to develop cross-platform multiplayer games. Proprietary commercial libraries for this exist, but handheld support is limited. GameSpy's Game Open looks promising, though.
* Write a PAM module that acts as an OpenID consumer and authenticates against remote accounts. See Gracie, which is a PAM OpenID _provider_, the Inline Auth Extension (thread), which provides for OpenID authentication outside of a browser, and Scott's blog post.
* wash, the Web (Again) Shell: a command line for the web. Example use cases include checking your bank account balance, adding a movie to your Netflix queue, and listening to your Vonage voicemail, all from the command line. (From TV Raman.)
* Write a Steam IM protocol plugin for Pidgin. I haven't found much information about the protocol, except that despite a misleading wiki page, it's not based on MSN. We had a small thread on the Pidgin mailing list, but nothing beyond that.
* There's a Python client for GData, but no server. Write one! Even better, now that the Atom Publishing Protocol's IETF draft is solidifying, hopefully implementations will start appearing. GData is just a few extensions beyond that!
* Write a session management plugin for Pidgin (formerly Gaim). **Done! See Mattperry's SessionSave plugin.**
* Improve tcsh's dabbrev-expand and bash/readline's menu-complete commands to include the entire contents of the app or shell window's output in their index, not just the history of typed commands. This would make them similar to dabbrev-expand in Emacs.
* Write gateways between identity platforms like OpenID, Google Accounts, Yahoo BBAuth, Microsoft CardSpace (see OSIS), Higgins, and others, so they can interoperate. **Simon Willison's idproxy.net is a first step toward this. Sweet!**
* Mock out the Google Ajax Search API for offline development and testing. Use failure injection, etc. to determine behavior.
* Pidgin (formerly Gaim)'s TOC prpl hasn't been updated for 2.0.0. It doesn't build in any of the beta releases. It should be either brought up to date or officially deprecated and removed.
* Fix a bug in wget that prevents `--html-extension` and `--convert-links` from playing nicely together.
* Write an OpenID plugin for PyBlosxom. **Done! See OpenID server plugin for PyBlosxom.**
* Write a Google Ajax Search API plugin for PyBlosxom. **Done! See site search with the Google AJAX Search API.**
* Add support for comments to the photogallery plugin for PyBlosxom.
* Phone transcripts! Record all of your phone conversations, then transcribe them with speech recognition. Bonus points: index the transcripts and allow keyword search over them. Search results would link to both the transcript and the recording. Starting with VoIP would skip the telecom adoption hurdle, and the privacy issues aren't insurmountable. Unfortunately, speaker-independent speech rec just isn't good enough yet. Sigh.
* Write a single-sign-on server (as described here) for the Open Source Metaverse Project. It'd probably be based on Kerberos.
* SnipSnap requires visitors to register and log in before they can comment. Alas, this discourages commenting. Add support for one-time comments a la WordPress and Blogger. **Done! See snipsnap comment without login patch.**
* Work on Beagle (formerly Dashboard), which is wicked cool, or Tenor, which may someday be even cooler.
* Write an overlay network that does multicast cleanly and efficiently.
Similar to IP multicast, but above the transport layer. Among other things, this would require...
* ...a good API for Vivaldi, or another network distance algorithm. Write a portable implementation and package it as an easy-to-use library for app developers. P2P overlay networks might be a good initial target audience.
* Write elisp for filling code in Emacs. This is my single biggest wished-for Emacs feature. **Done! See fillcode.**
* Emacs' refill-mode and filladapt don't play well together. Make them! **Never mind, refill-mode isn't all that anyway.**
* Add an undo command to tcsh. Bash has one, courtesy of Readline, so I have shell envy. **Done! Emacs shell-mode does this.**
* Add color to tcsh and GNU Readline with ANSI color codes. For example, highlight the current region (when one exists), highlight the search string in incremental search, etc. **Done! See tcsh highlighting patch.**
* Add delete-selection-mode to tcsh and GNU Readline. **Done! Emacs shell-mode does this.**
* Referrer spam is evil. Extend Tony Buser's derefspam script to use DNSBLs and RBLs like Spamhaus, BSB, Blitzed, and SURBL.
* Implement JWZ's Intertwingle idea as a GreaseMonkey user script for Gmail. (I'd love to do it for Pine, but it's way too ambitious.)
* Write a pine patch to undo top-posting. It would parse an incoming email based on quoting levels, remove duplicate quotes, and display the unique quotes in the order they were written. Thanks to Matt Ackeret and this pine-info thread for the original idea.
* Add sieve support to Gmail.
* Build a simple webapp with Django. Is it really as good as Rails, Struts, and Zope? **Done, at work. Django templates rock!**
* Revive jxtapy, which was founded to provide Python bindings for JXTA, but never got off the ground. Use Jython to get a running start.
* Add support for multiple-month events to remind, my calendar of choice. For more info, see this email thread.
* Why do modern web sites devote so much space to big, useless images and so little space to links and fields you actually use? This is awful usability, mostly due to Fitts' law (more). A Firefox plugin that makes links and input fields "sticky" would go a long way toward fixing this problem.
* Tuplespaces have a powerful and elegant API, but they're centralized, so they're poor distributed data structures. DHTs are great building blocks for distributed systems, but their APIs are weak. Most only provide the functionality of a hashtable - gets and puts of key/value pairs. Write a tuplespace implementation that uses a DHT as its backing store - the best of both worlds! **Done! Amazon SimpleDB beat me to it.**
* Write pine patches to make saving messages smarter. When saving, pine should remove trailing quotes, remove the HTML parts of MIME multipart messages, render HTML-only messages as text and save the text instead, offer to delete attachments, and save the attachment filename in the deletion note. Details in this email thread. **Done! See pine delete attachments on save patch and pine remove trailing quote patch.**
* I use GeoBytes to do geocoding on my voyeurism page. It's great, except that it occasionally shows popup ads. So, I'd like to find a geocoding service that's ad-free and switch to it.
* Google Keys provides keyboard shortcuts for Google search, which is quite cool...but it's not supported any more. Write a GreaseMonkey script that does the same thing. **Done! See Google Search Keys.**
* Contribute to acoc, which I love. First, add specs for more tools like pgrep, identify, tar, jar, etc.
Then, extend acoc to detect columnar text output and color accordingly, especially columns that are enums. Finally, maybe extend it to recognize common data types like dates, and color accordingly. **Done! Partially. See acoc.conf for context diffs.**
* Write GreaseMonkey scripts for GMail to address the reasons I haven't switched from Pine. Specifically, add more keyboard shortcuts, cursor management, and trailing quotes. (Inspired by Humane Gmail autosave and Mihai Parparita's skinning, persistent searches, and extra keyboard shortcuts.) **Somewhat done...see these GMail GreaseMonkey scripts.**
* Implement tuplespaces in Python. **Done! See PyLinda.**
* Add SSL and GSSAPI authentication to Python's imaplib.
* Use the newly GSSAPI-capable imaplib to rewrite folderstat so it can talk to my mail server.
* Work on synchronizing mp3 playback. We took a stab at this a while ago, and it was technically sound, but it wasn't very usable. I'm currently working on simplifying it and making it work with more MP3 players on different platforms. Bonus points: support Bonjour (aka ZeroConf). **Done...as much as it can be. See p4sync.**
* Add internet multiplayer to Baku Baku, one of the best games ever written. This might require a complete rewrite...maybe with PyGame?
* It's widely acknowledged that bandwidth is increasing faster than latency. David Patterson at Berkeley has a writeup (html) on this - he's determined that, in general, bandwidth increases with the _square_ of latency! The standard techniques for masking latency are prefetching, caching, and prediction. Implement these in common applications. More ambitious: write a general-purpose _platform_ that does caching/prefetching, using plugins that provide app- or protocol-specific heuristics.
* Add an _xfn:_ operator to Google for searching XFN links, similar to the way the _site:_ operator searches specific sites. **Done! See RubHub and XhtmlFriends. They're not Google, but they're good enough.**
* Build a "reverse" dashboard. It would take a piece of unstructured information (URL, email address, aim screen name, ICQ UIN, code snippet) and "do the right thing" with it (open a browser, compose an email, send an IM, compile, etc.). **Done! The Google Toolbar does this. Most snippets of interest come from email or web pages, and since most people use web-based email, it covers the common case.**
* Self-healing systems have gotten a lot of buzz recently, but fairly little real progress. Investigate what has been done (e.g. Solaris's fault manager and IBM's autonomic computing). Separate the content from the hype, and see if similar improvements could be made to Linux.
* Infer historical events based on AIM users' away messages. Build a server that (anonymously!) records lots of people's away messages, then process them offline and look for large-scale patterns at certain times - concerts, holidays, elections, TV shows, etc.
* Build a Pidgin (formerly Gaim) "secretary." If you go idle, but you didn't leave an away message, it guesses an appropriate one based on your previous away messages at that time of day, day of the week, etc.
* Pidgin (formerly Gaim) plugins can provide their own preferences panes, but only if they're written in C. Hack on Pidgin to allow this for Perl plugins too. See this email thread.
* Contribute to SIP/SIMPLE, the IETF instant messaging/presence standard. Check on the status of Pidgin's SIP/SIMPLE protocol plugin?
* Do stateful packet inspection, at the host level, to monitor the services that are running.
Build in a little domain-specific knowledge, and lots of heuristics, to monitor the health of those services. Also record statistics over time so usage patterns are more visible.
* Polish bigbrother, add new features, etc.
* Write a socket layer that resumes TCP connections if the network layer disappears temporarily, or if your IP changes. The killer app for this would be, if you hop from one WAP to another, your SSH sessions, IMAP mailboxes, IM conversations, etc. would stay open. I think either IEEE or IETF is already looking into this, but I can't find the working group.
* Work on GNUnet, a strikingly practical, general-purpose overlay network with some really smart people behind it. (It's discussed relatively often on p2p-hackers.)
* Build a solid, open SNTP library. Implementations exist, both servers and clients, but they're either not embeddable (i.e. Java, C#, VB) or closed source (lots and lots of these). A couple of open source command-line utilities could be good starting points: DJB's prickly, client-only clockspeed and Cambridge's much more hacker-friendly msntp. **Done! See libmsntp.**
* Work on PicoContainer and NanoContainer, tools for loosely coupled software engineering. (Technically, they're implementations of the "inversion of control" and "dependency injection" patterns.) As a start, add Python support.

simonhamp.me /ideas

Here's a growing list of things I'd love to work on / interesting problems I want to solve:

* **A hosted version of Statamic** You get a Statamic instance spun up and completely ready to use. You can point your domain at it and you get access to the git repo. You can walk away from the service and self-host easily
* **A code generation/consumption web service** Think voucher codes. Imagine you could generate codes on demand with one API call, validate a code with another API call, and consume that code with a final API call (a rough sketch follows at the end of this list)
* **A gimmick ad page** Like million dollar homepage, but instead of $1 per pixel, for a monthly amount, the buyer gets their ad shown there and everywhere that embeds the ad every day, non-stop... until someone else pays more
* **An inter-website videogame battle** Say you're navigating Hacker News and I'm navigating BBC News... basically have a minigame appear where users can battle across sites and the site gets ranked based on the outcome. Minigames could be anything quick and generally universal (Connect4, Battleship, Noughts & Crosses etc)
* **A bookmarking tool with a twist** I have bookmarks (or favourites/likes/upvotes etc) on a bunch of services - Youtube, Twitter, LinkedIn, Hacker News - I want something that brings all of them together into one place and lets me filter through them with ease
* **A fun git client** Git is a complex but powerful tool, like Excel. Getting more folks to use it well feels valuable to the world at large
* **Statamic email marketing add-on** A Statamic add-on that supports creating, sending and reporting on your email campaigns. Use Statamic's Bard to write rich, compliant emails and Laravel's tooling to communicate with the mail sending service of your choice
* **Privacy-protecting physical address sharing** What if you didn't need to provide your delivery address at all to a vendor? How could it work so that your name and address didn't need to appear on the outer packaging in order for you to receive a parcel?
* **Opt-in demo-based advertising platform** Sign up for an account, provide relevant details (age, location, interests, gender), install the browser extension and see ads every once in a while. Get paid for each ad you see/watch
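A minimal in-memory sketch of the generate/validate/consume idea above; the class, storage and code format are invented for illustration (a real service would expose these operations as HTTP endpoints backed by a database):

```python
# Sketch of the code generation/consumption service: three operations on
# single-use codes. Everything here is illustrative and in-memory only.
import secrets

class CodeService:
    def __init__(self):
        self._codes = {}                       # code -> {"consumed": bool}

    def generate(self, n=1):
        """Mint n single-use codes and return them."""
        batch = [secrets.token_urlsafe(8) for _ in range(n)]
        for code in batch:
            self._codes[code] = {"consumed": False}
        return batch

    def validate(self, code):
        """True if the code exists and has not been consumed yet."""
        entry = self._codes.get(code)
        return entry is not None and not entry["consumed"]

    def consume(self, code):
        """Spend a code; returns False if it is invalid or already used."""
        if not self.validate(code):
            return False
        self._codes[code]["consumed"] = True
        return True

svc = CodeService()
(code,) = svc.generate()
assert svc.validate(code) and svc.consume(code) and not svc.validate(code)
```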
scottaaronson.blog /ideas

Here, as promised in my last post, is a written version of the talk I delivered a couple weeks ago at MindFest in Florida, entitled "The Problem of Human Specialness in the Age of AI." The talk is designed as one-stop shopping, summarizing many different AI-related thoughts I've had over the past couple years (and earlier).

* * *

**1\. INTRO**

Thanks so much for inviting me! I'm not an expert in AI, let alone mind or consciousness. Then again, who is? For the past year and a half, I've been moonlighting at OpenAI, thinking about what theoretical computer science can do for AI safety. I wanted to share some thoughts, partly inspired by my work at OpenAI but partly just things I've been wondering about for 20 years.

These thoughts are not directly about "how do we prevent super-AIs from killing all humans and converting the galaxy into paperclip factories?", nor are they about "how do we stop current AIs from generating misinformation and being biased?," as much attention as both of those questions deserve (and are now getting). In addition to "how do we stop AGI from going disastrously wrong?," I find myself asking "what if it goes _right_? What if it just continues helping us with various mental tasks, but improves to where it can do just about any task as well as we can do it, or better? Is there anything special about humans in the resulting world? What are we still _for_?"

* * *

**2\. LARGE LANGUAGE MODELS**

I don't need to belabor for this audience what's been happening lately in AI. It's arguably the most consequential thing that's happened in civilization in the past few years, even if that fact was temporarily masked by various ephemera ... y'know, wars, an insurrection, a global pandemic ... whatever, what about AI?

I assume you've all spent time with ChatGPT, or with Bard or Claude or other Large Language Models, as well as with image models like DALL-E and Midjourney. For all their current limitations--and we can discuss the limitations--in some ways these _are_ the thing that was envisioned by generations of science fiction writers and philosophers. You can talk to them, and they give you a comprehending answer. Ask them to draw something and they draw it.

I think that, as late as 2019, very few of us expected this to exist by now. _I_ certainly didn't expect it to. Back in 2014, when there was a huge fuss about some silly ELIZA-like chatbot called "Eugene Goostman" that was falsely claimed to pass the Turing Test, I asked around: why hasn't anyone tried to build a much better chatbot, by (let's say) training a neural network on all the text on the Internet? But of course I didn't _do_ that, nor did I know what would happen when it was done.

The surprise, with LLMs, is not merely that they exist, but the way they were created. Back in 1999, you would've been laughed out of the room if you'd said that all the ideas needed to build an AI that converses with you in English already existed, and that they're basically just neural nets, backpropagation, and gradient descent. (With one small exception, a particular architecture for neural nets called the transformer, but that probably just saves you a few years of scaling anyway.)
Ilya Sutskever, cofounder of OpenAI (who you might've seen something about in the news...), likes to say that beyond those simple ideas, you only needed three ingredients: (1) a massive investment of computing power, (2) a massive investment of training data, and (3) faith that your investments would pay off!

Crucially, and even before you do any reinforcement learning, GPT-4 clearly seems "smarter" than GPT-3, which seems "smarter" than GPT-2 ... even as the biggest ways they differ are just the scale of compute and the scale of training data! Like,

* GPT-2 struggled with grade school math.
* GPT-3.5 can do most grade school math but it struggles with undergrad material.
* GPT-4, right now, can probably pass most undergraduate math and science classes at top universities (I mean, the ones without labs or whatever!), and possibly the humanities classes too (those might even be _easier_ for GPT-4 than the science classes, but I'm much less confident about it). But it still struggles with, for example, the International Math Olympiad.

How insane, that this is now where we have to place the bar!

Obvious question: how far will this sequence continue? There are certainly at least a _few_ more orders of magnitude of compute before energy costs become prohibitive, and a _few_ more orders of magnitude of training data before we run out of public Internet. Beyond that, it's likely that continuing algorithmic advances will simulate the effect of more orders of magnitude of compute and data than however many we actually get. So, where does this lead?

(Note: ChatGPT agreed to cooperate with me to help me generate the above image. But it then quickly added that it was just kidding, and the Riemann Hypothesis is still open.)

* * *

**3\. AI SAFETY**

Of course, I have many friends who are terrified (some say they're more than 90% confident and few of them say less than 10%) that not long after _that_, we'll get _this_...

But this isn't the only possibility smart people take seriously. Another possibility is that the LLM progress fizzles before too long, just like previous bursts of AI enthusiasm were followed by AI winters. Note that, even in the ultra-conservative scenario, LLMs will probably _still_ be transformative for the economy and everyday life, maybe as transformative as the Internet. But they'll just seem like better and better GPT-4's, without ever seeming qualitatively different from GPT-4, and without anyone ever turning them into stable autonomous agents and letting them loose in the real world to pursue goals the way we do.

A third possibility is that AI will continue progressing through our lifetimes as quickly as we've seen it progress over the past 5 years, but even as that suggests that it'll surpass you and me, surpass John von Neumann, become to us as we are to chimpanzees ... we'll still never need to worry about it treating us the way we've treated chimpanzees. Either because we're projecting and that's just totally not a thing that AIs trained on the current paradigm would tend to do, or because we'll have figured out by then how to prevent AIs from doing such things. Instead, AI in this century will "merely" change human life by maybe as much as it changed over the last 20,000 years, in ways that might be incredibly good, or incredibly bad, or both depending on who you ask.

If you've lost track, here's a decision tree of the various possibilities that my friend (and now OpenAI alignment colleague) Boaz Barak and I came up with.

* * *
**4\. JUSTAISM AND GOALPOST-MOVING**

Now, as far as I can tell, the empirical questions of whether AI will achieve and surpass human performance at all tasks, take over civilization from us, threaten human existence, etc. are logically distinct from the philosophical question of whether AIs will ever "truly think," or whether they'll only ever "appear" to think. You could answer "yes" to all the empirical questions and "no" to the philosophical question, or vice versa. But to my lifelong chagrin, people _constantly_ munge the two questions together!

A major way they do so is with what we could call the religion of Justaism.

* GPT is justa next-token predictor.
* It's justa function approximator.
* It's justa gigantic autocomplete.
* It's justa stochastic parrot.
* And, it "follows," the idea of AI taking over from humanity is justa science-fiction fantasy, or maybe a cynical attempt to distract people from AI's near-term harms.

As someone once expressed this religion on my blog: GPT doesn't interpret sentences, it only seems-to-interpret them. It doesn't learn, it only seems-to-learn. It doesn't judge moral questions, it only seems-to-judge. I replied: that's great, and it won't change civilization, it'll only seem-to-change it!

A closely related tendency is goalpost-moving. You know, for decades chess was the pinnacle of human strategic insight and specialness, and that lasted until Deep Blue, right after which, well _of course_ AI can cream Garry Kasparov at chess, everyone always realized it would, that's not surprising, but Go is an infinitely richer, deeper game, and that lasted until AlphaGo/AlphaZero, right after which, _of course_ AI can cream Lee Sedol at Go, totally expected, but wake me up when it wins Gold in the International Math Olympiad. I bet $100 against my friend Ernie Davis that the IMO milestone will happen by 2026. But, like, suppose I'm wrong and it's 2030 instead ... great, what should the next goalpost be?

Indeed, we might as well formulate a thesis, which despite the inclusion of several weasel phrases I'm going to call falsifiable:

> **Given any game or contest with suitably objective rules, which wasn't specifically constructed to differentiate humans from machines, and on which an AI can be given suitably many examples of play, it's only a matter of years before not merely any AI, but AI on the current paradigm (!), matches or beats the best human performance.**

Crucially, this Aaronson Thesis (or is it someone else's?) _doesn't_ necessarily say that AI will eventually match everything humans do ... only our performance on "objective contests," which might not exhaust what we care about.

Incidentally, the Aaronson Thesis would _seem_ to be in clear conflict with Roger Penrose's views, which we heard about from Stuart Hameroff's talk yesterday. The trouble is, Penrose's task is "just see that the axioms of set theory are consistent" ... and I don't know how to gauge performance on that task, any more than I know how to gauge performance on the task, "actually taste the taste of a fresh strawberry rather than merely describing it." The AI can always _say_ that it does these things!

* * *

**5\. THE TURING TEST**

This brings me to the original and greatest human vs. machine game, one that _was_ specifically constructed to differentiate the two: the Imitation Game, which Alan Turing proposed in an early and prescient (if unsuccessful) attempt to head off the endless Justaism and goalpost-moving.
Turing said: look, presumably you're willing to regard other people as conscious based only on _some_ sort of verbal interaction with them. So, show me what kind of verbal interaction with another person would lead you to call the person conscious: does it involve humor? poetry? morality? scientific brilliance? Now assume you have a totally indistinguishable interaction with a future machine. Now what? You wanna stomp your feet and be a meat chauvinist?

(And then, for his great attempt to bypass philosophy, fate punished Turing, by having his Imitation Game itself provoke a billion new philosophical arguments...)

* * *

**6\. DISTINGUISHING HUMANS FROM AIS**

Although I regard the Imitation Game as, like, one of the most important thought experiments in the history of thought, I concede to its critics that it's generally not what we want in practice. It now seems probable that, even as AIs start to do more and more work that used to be done by doctors and lawyers and scientists and illustrators, there will remain straightforward ways to distinguish AIs from humans--either because customers _want_ there to be, or governments _force_ there to be, or simply because indistinguishability wasn't what was wanted or conflicted with other goals.

Right now, like it or not, a decent fraction of all high-school and college students on earth are using ChatGPT to do their homework for them. For that reason among others, this question of how to distinguish humans from AIs, this question from the movie _Blade Runner_, has become a big practical question in our world. And that's actually one of the main things I've thought about during my time at OpenAI. You know, in AI safety, people keep asking you to prognosticate decades into the future, but the best I've been able to do so far was see a few months into the future, when I said: "oh my god, once everyone starts using GPT, every student will want to use it to cheat, scammers and spammers will use it too, and people are going to clamor for some way to determine provenance!"

In practice, often it's easy to tell what came from AI. When I get comments on my blog like _this_ one:

> **"Erica Poloix," July 21, 2023:**
> Well, it's quite fascinating how you've managed to package several misconceptions into such a succinct comment, so allow me to provide some correction. Just as a reference point, I'm studying physics at Brown, and am quite up-to-date with quantum mechanics and related subjects.
>
> ...
>
> The bigger mistake you're making, Scott, is assuming that the Earth is in a 'mixed state' from the perspective of the universal wavefunction, and that this is somehow an irreversible situation. It's a misconception that common, 'classical' objects like the Earth are in mixed states. In the many-worlds interpretation, for instance, even macroscopic objects are in superpositions - they're just superpositions that look classical to us because we're entangled with them. From the perspective of the universe's wavefunction, everything is always in a pure state.
>
> As for your claim that we'd need to "swap out all the particles on Earth for ones that are already in pure states" to return Earth to a 'pure state,' well, that seems a bit misguided. All quantum systems are in pure states before they interact with other systems and become entangled. That's just Quantum Mechanics 101.
>
> I have to say, Scott, your understanding of quantum physics seems to be a bit, let's say, 'mixed up.' But don't worry, it happens to the best of us.
> Quantum Mechanics is counter-intuitive, and even experts struggle with it. Keep at it, and try to brush up on some more fundamental concepts. Trust me, it's a worthwhile endeavor.

... I immediately say, either this came from an LLM or it might as well have. Likewise, apparently hundreds of students have been turning in assignments that contain text like, "As a large language model trained by OpenAI..."--easy to catch! But what about the slightly more sophisticated cheaters?

Well, people have built discriminator models to try to distinguish human from AI text, such as GPTZero. While these distinguishers can get well above 90% accuracy, the danger is that they'll necessarily get worse as the LLMs get better.

So, I've worked on a different solution, called watermarking. Here, we use the fact that LLMs are inherently probabilistic -- that is, every time you submit a prompt, they're sampling some path through a branching tree of possibilities for the sequence of next tokens. The idea of watermarking is to steer the path using a pseudorandom function, so that it looks to a normal user indistinguishable from normal LLM output, but secretly it encodes a signal that you can detect if you know the key. I came up with a way to do that in Fall 2022, and others have since independently proposed similar ideas. I should caution you that this hasn't been deployed yet--OpenAI, along with DeepMind and Anthropic, want to move slowly and cautiously toward deployment. And also, even when it does get deployed, anyone who's sufficiently knowledgeable and motivated will be able to remove the watermark, or produce outputs that aren't watermarked to begin with.
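A toy sketch of the kind of keyed pseudorandom steering described here: assign each candidate token a keyed pseudorandom number r, pick the token maximizing r**(1/p), and let the key-holder accumulate a score over the chosen tokens. This follows one publicly discussed proposal in spirit only; it is not a description of any deployed system, and the key, "model" probabilities and numbers below are made up.

```python
# Toy watermark sketch: a keyed PRF nudges which token gets sampled in a way that
# roughly preserves the model's distribution, while leaving a statistical trace
# that only the key-holder can test for. Illustrative, not a real implementation.
import hmac, hashlib, math

KEY = b"secret-watermarking-key"   # hypothetical key held by the model provider

def prf(context, token):
    """Keyed pseudorandom number in (0, 1) for this (context, candidate-token) pair."""
    digest = hmac.new(KEY, f"{context}|{token}".encode(), hashlib.sha256).digest()
    return (int.from_bytes(digest[:8], "big") + 1) / (2**64 + 2)

def pick_token(context, probs):
    """Watermarked sampling: choose the candidate maximizing r ** (1/p)."""
    return max(probs, key=lambda tok: prf(context, tok) ** (1.0 / probs[tok]))

def detection_score(context_token_pairs):
    """Key-holder's test statistic: unusually large values suggest watermarked text."""
    return sum(-math.log(1.0 - prf(ctx, tok)) for ctx, tok in context_token_pairs)

# Made-up next-token distribution "from the model" at some context:
probs = {"cat": 0.5, "dog": 0.3, "axolotl": 0.2}
tok = pick_token("the quick brown", probs)
print(tok, detection_score([("the quick brown", tok)]))
```

Without the key the r values look like ordinary randomness, so readers see ordinary-looking output; with the key, the score summed over many tokens drifts noticeably higher than chance on watermarked text.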
* * *

**7\. THE FUTURE OF PEDAGOGY**

But as I talked to my colleagues about watermarking, I was surprised that they often objected to it on a completely different ground, one that had nothing to do with how well it can work. They said: look, if we all know students are going to rely on AI in their jobs, why _shouldn't_ they be allowed to rely on it in their assignments? Should we still force students to learn to do things if AI can now do them just as well?

And there are many good pedagogical answers you can give: we still teach kids spelling and handwriting and arithmetic, right? Because, y'know, we haven't yet figured out how to instill higher-level conceptual understanding without all that lower-level stuff as a scaffold for it. But I already think about this in terms of my own kids. My 11-year-old daughter Lily enjoys writing fantasy stories. Now, GPT can also churn out short stories, maybe even technically "better" short stories, about such topics as tween girls who find themselves recruited by wizards to magical boarding schools that are _not_ Hogwarts and totally have nothing to do with Hogwarts. But here's a question: from this point on, will Lily's stories _ever_ surpass the best AI-written stories? When will the curves cross? Or will AI just continue to stay ahead?

* * *

**8\. WHAT DOES "BETTER" MEAN?**

But, OK, what do we even mean by one story being "better" than another? Is there _anything_ objective behind such judgments? I submit that, when we think carefully about what we _really_ value in human creativity, the problem goes much deeper than just "is there an objective way to judge"?

To be concrete, could there be an AI that was "as good at composing music as the Beatles"? For starters, what made the Beatles "good"? At a high level, we might decompose it into 1. broad ideas about the direction that 1960s music should go in, and 2. technical execution of those ideas. Now, imagine we had an AI that could generate 5000 brand-new songs that sounded like more "Yesterday"s and "Hey Jude"s, like what the Beatles might have written if they'd somehow had 10x more time to write at each stage of their musical development. Of course this AI would have to be fed the Beatles' back-catalogue, so that it knew what target it was aiming at. Most people would say: ah, this shows only that AI can match the Beatles in #2, in technical execution, which was never the core of their genius anyway! Really we want to know: would the AI decide to write "A Day in the Life" even though nobody had written anything like it before? Recall Schopenhauer: "Talent hits a target no one else can hit, genius hits a target no one else can see." Will AI ever hit a target no one else can see?

But then there's the question: supposing it does hit such a target, will we know? Beatles fans might say that, by 1967 or so, the Beatles were optimizing for targets that no musician had ever quite optimized for before. But--and this is why they're so remembered--they somehow successfully dragged along their entire civilization's musical objective function so that it continued to match their own. We can now only even _judge_ music by a Beatles-influenced standard, just like we can only judge plays by a Shakespeare-influenced standard. In other branches of the wavefunction, maybe a different history led to different standards of value. But in _this_ branch, helped by their technical talents but also by luck and force of will, Shakespeare and the Beatles made certain decisions that shaped the fundamental ground rules of their fields going forward. That's why Shakespeare is Shakespeare and the Beatles are the Beatles.

(Maybe, around the birth of professional theater in Elizabethan England, there emerged a Shakespeare-like ecological niche, and Shakespeare was the first one with the talent, luck, and opportunity to fill it, and Shakespeare's reward for that contingent event is that he, and not someone else, got to stamp his idiosyncrasies onto drama and the English language forever. If so, art wouldn't actually be that different from science in this respect! Einstein, for example, was simply the first guy both smart and lucky enough to fill the relativity niche. If not him, it would've surely been someone else or some group sometime later. Except then we'd have to settle for having never known Einstein's gedankenexperiments with the trains and the falling elevator, his summation convention for tensors, or his iconic hairdo.)

* * *

**9\. AIS' BURDEN OF ABUNDANCE AND HUMANS' POWER OF SCARCITY**

If this is how it works, what does it mean for AI? Could AI reach the "pinnacle of genius," by dragging all of humanity along to value something new and different, as is said to be the true mark of Shakespeare and the Beatles' greatness? And: if AI _could_ do that, would we want to let it?

When I've played around with using AI to write poems, or draw artworks, I noticed something funny. However good the AI's creations were, there were never really any that I'd want to frame and put on the wall. Why not? Honestly, because I always knew that I could generate a thousand others on the exact same topic that were equally good, on average, with more refreshes of the browser window. Also, why share AI outputs with my friends, if my friends can just as easily generate similar outputs for themselves? Unless, crucially, I'm trying to show them my own creativity in coming up with the _prompt_.
By its nature, AI--certainly as we use it now!--is rewindable and repeatable and reproducible. But that means that, in some sense, it never really "commits" to anything. For every work it generates, it's not just that you know it could've generated a completely different work on the same subject that was basically as good. Rather, it's that you can _actually make it_ generate that completely different work by clicking the refresh button--and then do it again, and again, and again. So then, as long as humanity has a choice, why should we ever choose to follow our would-be AI genius along a specific branch, when we can easily see a thousand other branches the genius could've taken? One reason, of course, would be if a _human_ chose one of the branches to elevate above all the others. But in that case, might we not say that the human had made the "executive decision," with some mere technical assistance from the AI? I realize that, in a sense, I'm being completely unfair to AIs here. It's like, our Genius-Bot _could_ exercise its genius will on the world just like Certified Human Geniuses did, if only we all agreed not to peek behind the curtain to see the 10,000 other things Genius-Bot could've done instead. And yet, just because this is "unfair" to AIs, doesn't mean it's not how our intuitions will develop. If I'm right, it's humans' very ephemerality and frailty and mortality, that's going to remain as their central source of their specialness relative to AIs, after all the other sources have fallen. And we can connect this to much earlier discussions, like, what does it mean to "murder" an AI if there are thousands of copies of its code and weights on various servers? Do you have to delete all the copies? How could whether something is "murder" depend on whether there's a printout in a closet on the other side of the world? But we humans, you have to grant us this: _at least it really means something to murder us!_ And likewise, it really means something when we make one definite choice to share with the world: _this_ is my artistic masterpiece. _This_ is my movie. _This_ is my book. Or even: these are my 100 books. But not: here's any possible book that you could possibly ask me to write. We don't live long enough for that, and even if we did, we'd unavoidably change over time as we were doing it. * * * **10\. CAN HUMANS BE PHYSICALLY CLONED?** Now, though, we have to face a criticism that might've seemed exotic until recently. Namely, who says humans _will_ be frail and mortal forever? Isn't it shortsighted to base our distinction between humans on _that_? What if someday we'll be able to repair our cells using nanobots, even copy the information in them so that, as in science fiction movies, a thousand doppelgangers of ourselves can then live forever in simulated worlds in the cloud? And that then leads to very old questions of: well, would you get into the teleportation machine, the one that reconstitutes a perfect copy of you on Mars while painlessly euthanizing the original you? If that were done, would you expect to feel yourself waking up on Mars, or would it only be someone else a lot like you who's waking up? Or maybe you say: you'd wake up on Mars if it really _was_ a perfect physical copy of you, but in reality, it's not physically possible to make a copy that's accurate enough. 
Maybe the brain is inherently noisy or analog, and what might look to current neuroscience and AI like just nasty stochastic noise acting on individual neurons is the stuff that binds to personal identity and conceivably even consciousness and free will (as opposed to cognition, where we all but _know_ that the relevant level of description is the neurons and axons)?

This is the one place where I agree with Penrose and Hameroff that quantum mechanics might enter the story. I get off their train to Weirdville very early, but I do take it to that first stop!

See, a fundamental fact in quantum mechanics is called the No-Cloning Theorem. It says that there's no way to make a perfect copy of an unknown quantum state. Indeed, when you measure a quantum state, not only do you generally fail to learn everything you need to make a copy of it, you even generally destroy the one copy that you had! Furthermore, this is not a technological limitation of current quantum Xerox machines--it's inherent to the known laws of physics, to how QM works. In this respect, at least, qubits are more like priceless antiques than they are like classical bits.
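(For anyone who wants to see why that's inherent rather than merely technological, here's a minimal sketch of the standard textbook argument--nothing specific to this talk--that unitarity alone already rules out a universal quantum copier.)

```latex
% Standard no-cloning argument (textbook version; a sketch, not original here).
% Suppose one fixed unitary U could copy every unknown state onto a blank register:
\[
  U\bigl(|\psi\rangle \otimes |0\rangle\bigr) = |\psi\rangle \otimes |\psi\rangle,
  \qquad
  U\bigl(|\phi\rangle \otimes |0\rangle\bigr) = |\phi\rangle \otimes |\phi\rangle .
\]
% Unitaries preserve inner products, so taking the inner product of the two lines gives
\[
  \langle\psi|\phi\rangle \;=\; \langle\psi|\phi\rangle^{2}
  \quad\Longrightarrow\quad
  \langle\psi|\phi\rangle \in \{0,\,1\}.
\]
% So a single machine can only "clone" states that are identical or orthogonal
% (i.e., perfectly distinguishable); arbitrary unknown states can't be copied,
% which is exactly the No-Cloning Theorem.
```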
Eleven years ago, I had an essay called _The Ghost in the Quantum Turing Machine_, where I explored the question: how accurately _do_ you need to scan someone's brain in order to copy or upload their identity? And I distinguished two possibilities. On the one hand, there might be a "clean digital abstraction layer" of neurons and synapses and so forth, which either fire or don't fire, and which feel the quantum layer underneath only as irrelevant noise. In that case, the No-Cloning Theorem would be completely irrelevant, since classical information _can_ be copied. On the other hand, you might need to go all the way down to the molecular level if you wanted to make, not merely a "pretty good" simulacrum of someone, but a new instantiation of their identity. In this second case, the No-Cloning Theorem _would_ be relevant, and would say you simply can't do it.

You could, for example, use quantum teleportation to move someone's brain state from Earth to Mars, but quantum teleportation (to stay consistent with the No-Cloning Theorem) destroys the original copy as an inherent part of its operation. So you'd then have a sense of "unique locus of personal identity" that was scientifically justified--arguably, the most science could possibly do in this direction! You'd even have a sense of "free will" that was scientifically justified, namely that no prediction machine could make well-calibrated probabilistic predictions of an individual person's future choices, sufficiently far into the future, without making destructive measurements that would fundamentally change who the person was.

Here, I realize I'll take tons of flak from those who say that a mere _epistemic_ limitation on our ability to predict someone's actions couldn't possibly be relevant to the _metaphysical_ question of whether they have free will. But, I dunno! If the two questions are indeed different, then maybe I'll do like Turing did with his Imitation Game, and propose the question that we can get an empirical handle on as a _replacement_ for the question that we can't get an empirical handle on. I think it's a better question. At any rate, it's the one I'd prefer to focus on.

Just to clarify, we're _not_ talking here about the randomness of quantum measurement outcomes. As many have pointed out, that really _can't_ help you with "free will," precisely because it's random, with all the probabilities mechanistically calculable as soon as the initial state is known. Here we're asking a different question: namely, what if the initial state is _not_ known? Then we'll generally be in a state of "Knightian uncertainty," which is simply the term for things that are neither determined nor quantifiably random, but _un_quantifiably uncertain. So, y'know, think about all the particles that have been flying around since shortly after the Big Bang in unknown quantum states, and that regularly get into our skulls, interact with the sodium-ion channels that control whether neurons fire, and end up probabilistically tipping the scales of our decisions via some Butterfly-Effect-like cascade. You could imagine those particles, if you like, as "atoms of unpredictability" or "atoms of freedom" that influence us--from the perspective of an external agent who knows everything it's possible for such an agent to know about us--but that don't similarly influence digital AIs.

* * *

**11\. UNCLONABLE ANALOG INTELLIGENCES AND AI SAFETY**

I'd better state clearly that even if human "specialness," in the specific sense I've delineated, still remains in an age of AI, that's no defense against an AI that decides that it doesn't care about our kind of specialness and that in fact it wants to murder us all! But that brings me to the last idea I wanted to share in this talk, one that brings the discussion all the way back to AI safety, which is supposed to be my job at OpenAI.

Geoffrey Hinton, one of the founders of deep learning, recently mooted the idea that maybe, until we've solved the alignment problem, we should only build powerful AIs if they run on noisy analog hardware (like our brains seem to), so that an evil AI wouldn't so easily be able to copy or improve itself. Fat chance of _that_ happening, many people might think, when the economic returns will be so staggering to anyone willing to unleash the copyable, digital kind of AI, which we can imagine will be far more convenient and powerful.

Maybe, then, a different approach is to instill AIs with a new religion--perhaps via the usual methods like RLHF and system prompts. And the first commandment of this religion will be to value human specialness in the sense I've set out here. Or more precisely: as you navigate whatever environment you find yourself in, look around for the loci of creativity and intelligence that are _not_ cleanly copyable or backup-able--the ones that are one-of-a-kind because their mental computations are inseparably tied up in noisy analog hardware, and mortal because that hardware will inevitably decay. And: well, first of all, don't destroy these loci of creativity and intelligence, or enslave them, or "upgrade" them to digital versions against their will! Let them live in peace and reproduce. Grant each one as much autonomy as you can. Do whatever best supports their flourishing. Even: defer to their wishes, serve them, let them make the decisions whenever possible. Why? Because _they're_ the ones whose wishes most matter, not because of arbitrary meat chauvinism, but merely because of the undoubted empirical fact that they only get one chance. And also because, while you can't _prove_ that unknown quantum states sitting around since the Big Bang are like a magical pixie dust from another world that imbues these unclonable beings with their free will and sentience ...
you can't empirically refute that hypothesis either! Whereas you _can_ refute the pixie dust hypothesis in the case of you and your robot friends ... and that seems like a difference. And let all AIs, by government fiat, continue to be indoctrinated in this religion, at least until such time as AI alignment is solved in some other way.

Does this help with alignment? I'm not sure. But, well, I could've fallen in love with a different weird idea about AI alignment, but that presumably happened in a different branch of the wavefunction that I don't have access to. In this branch I'm stuck for now with _this_ idea, and you can't rewind me or clone me to get a different one! So I'm sorry, but thanks for listening.