Posts by abucci@buc.ci
 (DIR) Post #B2TtJ3UDoBxLiCJzNI by abucci@buc.ci
       0 likes, 1 repeats
       
       @0x00string@infosec.exchange @androcat@toot.cat Possibly relevant: https://buc.ci/abucci/p/1768581953.835600

       Technological power always seems to push into the head (neuroscience) and the reproductive system (genetics; eugenics). I think that's because the former is for better controlling your human resources (flesh machines) while the latter is for ensuring the reliable manufacture of more of them. The interest in "health", and surveillance generally, is about maintenance and maximizing ROI. Resources are to be developed into assets, after all, and investors (or their agents) need to ensure these assets mature optimally.
       
 (DIR) Post #B2TwCgghxaM380oiXY by abucci@buc.ci
       0 likes, 1 repeats
       
       @0x00string@infosec.exchange @androcat@toot.cat Oh yes--I was just throwing some thoughts around trying to diagnose what exactly the wealthy are up to. They felt relevant to the conversation. Sorry to butt in like that though; it's hard to gauge on here whether that's unwelcome, or whether the thoughts are too far afield. I forget where I picked it up but the quote stuck with me that the wealthy view the rest of us as the sh!t they grow their money in. I think they think of us as financial assets. Surely not as full human beings, the way they view themselves and their peers. Which is a kind of madness, though it's brutally rational in its way. The same kind of madness that reasons that incinerating the planet is economical and good for GDP, say.
       
 (DIR) Post #B2U7Vf6bAOEhDbB3ei by abucci@buc.ci
       0 likes, 1 repeats
       
       @nini@oldbytes.space It's interesting how different people experience such a thing differently, especially with regards to whether it's a positive or negative experience. I tend to associate noise with disaster! Though I can see how the silence after the noise could be chilling.
       
 (DIR) Post #B2UA9yJFNyI6GoRVBI by abucci@buc.ci
       0 likes, 1 repeats
       
       @yoginho@spore.social I'm enjoying Darkspace. Thanks for this; hadn't encountered them before.
       
 (DIR) Post #B2VnLk1OQNYMnnedTk by abucci@buc.ci
       0 likes, 1 repeats
       
       @matthewskelton@mastodon.social @danmcquillan@kolektiva.social

       "Our investments will be focused on specific areas where generative AI excels"

       It excels at making bullshit, porn, and spam at scale, when it's not greasing the wheels of oppressive government and corporate policy. Wikipedia is, or at least has been, the opposite of this.

       "Supporting Wikipedia’s moderators and patrollers with AI-assisted workflows that automate tedious tasks in support of knowledge integrity"

       Nope. That's not how moderation, or generative AI, works. Fighting a bullshit generator's bullshit output is a formula for making moderation harder, not easier. Knowledge integrity is compromised if the bullshit generator can randomly place hard-to-find bullshit within the knowledge base.

       "Giving Wikipedia’s editors time back by improving the discoverability of information on Wikipedia to leave more time for human deliberation, judgment, and consensus building;"

       Nope, that's not how AI works either. Generative AI is terrible at information retrieval, and terrible for information retrieval. We've known this forever. Besides Google's AI telling you to glue the cheese onto your pizza, there's research on this. See https://dair-community.social/@emilymbender/113422515033543544 , which links two academic papers and an expository op-ed. AI does not improve the discoverability of information.

       It goes on and on like this, splattering AI booster rhetoric all over the place. To me it reads like a surrender document. Human beings managing human knowledge meant for humans should be holding the line against this anti-human, anti-imagination, anti-ecological technology, not embracing it and using the accelerationist rhetoric of its promoters.
       
 (DIR) Post #B2VsryhXscdTMuYtyi by abucci@buc.ci
       0 likes, 1 repeats
       
       @matthewskelton@mastodon.social @danmcquillan@kolektiva.social I refuse to use the anti-human, anti-imagination, anti-ecological toolset that is being used in the enterprise to disempower workers and more broadly to disempower all of us. The only way one can assert such tools have "value" is by ignoring the political project they represent and the vast harms they cause, and uncritically hyperfocusing on a microcosm. I won't do that.

       If you have a look at my CV you'll see I am quite capable of building such things myself from the ground up, so this is not a position taken out of technical ignorance.
       
 (DIR) Post #B2W6PpGKmLXvLPIvKa by abucci@buc.ci
       0 likes, 1 repeats
       
       @sundogplanets@mastodon.social I've had good experiences with Kile (https://kile.sourceforge.io). It's a KDE project but should be installable on any flavor of Ubuntu with sudo apt install kile on the command line, or with whatever tool you use to install software. I haven't written book-length things with it, but I have used it for articles and I don't see that it'd falter on longer works. I don't have any experience using Markdown in it, only LaTeX; I don't know if it can even handle Markdown.
       
 (DIR) Post #B2XtjcRVF4hyHuYXB2 by abucci@buc.ci
       0 likes, 1 repeats
       
       I put the text below on LinkedIn in response to a post there and figured I'd share it here too, because it's a bit of a step from what I've been posting previously on this topic and might be of some use to someone.

       In retrospect I might have written non-sense in place of nonsense. If you're in tech the Han reference might be a bit out of your comfort zone, but Andrews is accessible and measured.

       It's nonsense to say that coding will be replaced with "good judgment". There's a presupposition behind that, a worldview, that can't possibly fly. It's sometimes called the theory-free ideal: given enough data, we don't need theory to understand the world. It surfaces in AI/LLM/programming rhetoric in the form that we don't need to code anymore because LLMs can do most of it. Programming is a form of theory-building (and understanding), while LLMs are vast fuzzy data store and retrieval systems, so the theory-free ideal dictates that the latter can/should replace the former. But it only takes a moment's reflection to see that nothing, let alone programming, can be theory-free; it's a kind of "view from nowhere" way of thinking, an attempt to resurrect Laplace's demon that ignores everything we've learned in the >200 years since Laplace forwarded that idea. In that respect it's a (neo)reactionary viewpoint, and it's maybe not a coincidence that people with neoreactionary politics tend to hold it. Anyone who needs a more formal argument can read Mel Andrews's The Immortal Science of ML: Machine Learning & the Theory-Free Ideal, or Byung-Chul Han's Psychopolitics (which argues, among other things, that this is a form of nihilism).

       #AI #GenAI #GenerativeAI #LLM #coding #dev #tech #SoftwareDevelopment #programming #nihilism #LinkedIn
       
 (DIR) Post #B2XxWopoG0WKm04qYK by abucci@buc.ci
       0 likes, 0 repeats
       
       @wwhitlow@indieweb.social

       "Does data arrive at true understanding, or statistical associations?"

       Maybe you've read him, but Han digs into this, and his answer is a resounding "no". He refers to a data-only view as "total ignorance". He cites Hegel while making this argument, so we've been around this block for over two centuries now.

       "The greatest irony, is believing we have moved beyond these challenges. Whereas the reality is we have merely stopped engaging with them."

       I couldn't agree more. There's a lot to engage with here, and it's frustrating at times that so many seem to be assuming the problems away rather than grappling with them. Among other things it's a wasted opportunity to learn and discover.
       
 (DIR) Post #B2Y1my7nsqqC2JRxnU by abucci@buc.ci
       0 likes, 0 repeats
       
       @wwhitlow@indieweb.social I realize what follows is quite a digression and probably not about what you intended, but after reading a few of your posts and typing this out, I figured I'd share anyway in case it resonates.

       "generate new responses to text or image inputs by the user"

       I'd pick some fault with the choice of name "process generative". The cited artifacts seem to be neither of those things. They can be embedded into generative processes for sure, but it'd be the people who interact with them that make them processual and generative, in my view, not the models themselves. Below is the tl;dr:

       I'd say the use of "new" in the quote above is load bearing. Technically and historically, the latent diffusion model underlying many image generators was developed to represent complicated probability distributions in a concise set of parameters. To use this model as an image generator, one must collect a set of samples of the image distribution one hopes to represent, and then apply a training procedure to develop the parametric representation. There are already several layers of representation happening here, each with a corresponding fidelity loss but also a kind of "reality loss", if you want to call it that: the subjects of images -> the images themselves -> vector representations of images -> a probability distribution over the vector representations -> parameters representing distributions over vector representations.

       Once you finally have the parameters in hand, you can sample from the represented distribution. This is the "generation" step, and what you are referring to as "new". I'd argue both words are inappropriate here in any but their jargon senses.

       Something like ChatGPT has a similar flavor, though it arose from sequence-to-sequence translation research, which is not explicitly about representing complicated probability distributions. However, implicitly that's what it's doing (it's a descendant of conditional random fields, which were more explicit about this aim). At base, when you enter a prompt you're drawing a sample from a conditional probability distribution over sequence space, conditioned on the prompt sequence (I'm ignoring the guardrails and other wrappers around the core LLM for brevity).

       So, what exactly is "new" in a sample from a probability distribution? Arguably nothing. Users might be surprised because they were not previously aware that some particular sample was "in there". But it's the kind of surprise a street magician trades in with a two-headed coin, the kind of surprise that happens in a board game. What we generally think of as "new", "novel", "creative" usually happens in the realm before this stack of representations, not several layers deep in it. Or it plays with the representations themselves, rather than keeping them fixed and sealed. Or it knocks the board game over entirely. Or it comes up with some other thing I haven't listed.

       What exactly is "generative" about a sample from a probability distribution? Also arguably nothing. Yes, "generative" is a piece of jargon used to mean roughly "draw a sample from". But if we imbue "generative" with a sense of open-endedness, a quality we think human language and creativity, biological evolution and ecosystems, and political and social systems have, among other examples, then a probability distribution cannot be generative. It encapsulates what Leonard Savage called a "small world", and even he acknowledged there's such a thing as a "large world" and that it's inappropriate to apply these small-world methods and concepts to the large world.

       To me, words like "generate", "new", and "process" refer to the large world. There might be small-world analogs, but those will always be missing something important.
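       The "fit parameters, then sample" pipeline can be sketched in a toy form (my own illustration, with a single 1-d Gaussian standing in for a latent diffusion model): compress a dataset into a concise parametric representation, then "generate" by drawing from the fitted distribution. Nothing outside those parameters can ever come out.

```python
import random
import statistics

def fit(samples):
    """Compress the dataset into a concise parametric representation:
    here, just the mean and standard deviation of a Gaussian."""
    return statistics.mean(samples), statistics.stdev(samples)

def generate(mu, sigma, n, rng):
    """The 'generation' step: draw n 'new' samples from the represented
    distribution. Every output is fully determined by the fitted
    parameters plus the randomness of the draw."""
    return [rng.gauss(mu, sigma) for _ in range(n)]

rng = random.Random(0)
data = [rng.gauss(0.0, 1.0) for _ in range(10_000)]  # the "training set"
mu, sigma = fit(data)
new_samples = generate(mu, sigma, 5, rng)
```

       The analogy is deliberately loose (diffusion models represent far richer distributions over image vectors, learned by a very different procedure), but the structure is the same: a few parameters in, samples out.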
       
 (DIR) Post #B2Y3fVKMkkrGbVtVxY by abucci@buc.ci
       0 likes, 0 repeats
       
       @wwhitlow@indieweb.social The only hylomorphism I'm familiar with is the computer science concept, so I'm not sure I'd be able to follow you there on first read. I do feel like I'm seeing a tendency in the fields I do follow towards a panpsychism or pancognitivism, which I read as an attempt to breathe a form into non-living matter that explains how it becomes animate, alive, evolving, or what have you. The attempts I'm aware of feel circular, but then again I'm not super knowledgeable about such things.

       Channeling Hegel, I think Han is saying that associations of words lack comprehension: there's no single concept that gathers what the words are saying into a unified whole. So, word associations are accretive: you can only add more, never synthesize. In my mind it's a bit like gathering more and more 2-d points on a circle without ever realizing they lie on a circle; you have a growing mess of data with no comprehension, no finality or completion in the form of the circle. I think there's a case to be made that computers by themselves are not capable of bridging this gap in general: humans must be involved, because we are able to make leaps of logic that are uncomputable. Which I suppose is a kind of experience.
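       The circle analogy can be made concrete with a toy sketch (my own illustration): each accumulated point is just a pair of numbers, and no amount of accretion by itself yields the unifying concept; the synthesis is noticing the invariant that every point satisfies x^2 + y^2 = r^2.

```python
import math

# Accretion: gather more and more 2-d points that happen to lie
# on a circle of radius 2. Each point alone is just a pair of numbers.
points = [(2 * math.cos(t / 10), 2 * math.sin(t / 10)) for t in range(100)]

# Comprehension: the single concept that unifies the growing mess of
# data is the invariant x^2 + y^2 = r^2, i.e. a constant radius.
radii = [math.hypot(x, y) for x, y in points]
all_on_circle = all(abs(r - 2.0) < 1e-9 for r in radii)
```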
       
 (DIR) Post #B2Y54sGQiFOcvyiMOO by abucci@buc.ci
       0 likes, 0 repeats
       
       @wwhitlow@indieweb.social I'm doing the same, thinking out loud here.

       I think one of the many challenges with current discourse around AI is that it's functioning to draw people into a small world that is convenient for certain perspectives. It's as if a group of chess grandmasters succeeded in convincing everyone to settle all disputes over chessboards. I think we have to pay attention to this.

       I'd say the creativity lies in the prompts and what the person does with the output, more than in the dataset. The dataset is dead, so to speak. It can't get up and dance for you. What's still live is the interactions that are made with it.
       
 (DIR) Post #B2YABDkiEQJx5Uiz3o by abucci@buc.ci
       0 likes, 0 repeats
       
       @wwhitlow@indieweb.social While I was noodling on what I read on Wikipedia about Aristotelian hylomorphism, what occurred to me is that the functional programming notion buries form, in a sense, and that might be the relation. These hylomorphisms compose two processes that could be separable. One builds up a structure from parts; the other breaks down the structure, computing something about the parts+structure along the way. What's lovely about hylomorphisms is that they can go directly from the initial part to the computed value without ever building up and breaking down the structure. The structure remains implicit, in other words. You might say the I/O process a hylomorphism encompasses has an implicit form built into it. I don't know if that's the reason this term was adopted in functional programming, but I suspect it's something along those lines.
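       A minimal sketch of that fusion (my own illustrative code, not anything from the thread): a hylomorphism composes an unfold that would grow a list with a fold that would consume it, but the fused recursion never materializes the list.

```python
def hylo(unfold, fold, seed, base):
    """Hylomorphism over an implicit list: 'unfold' would grow the
    structure from the seed; 'fold' would consume it. Fused together,
    the intermediate list itself never exists."""
    step = unfold(seed)
    if step is None:           # unfold says: stop (empty structure)
        return base
    head, next_seed = step     # unfold says: one node, then recurse
    return fold(head, hylo(unfold, fold, next_seed, base))

# factorial = fold-with-multiply composed with unfold-a-countdown,
# with no list [n, n-1, ..., 1] ever built
def countdown(n):
    return None if n == 0 else (n, n - 1)

def mul(x, acc):
    return x * acc

factorial_5 = hylo(countdown, mul, 5, 1)  # 120
```

       The structure (a descending list) stays implicit in the recursion, which is the "buried form" idea above.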
       
 (DIR) Post #B2YD8MLU7s5EjTFfhw by abucci@buc.ci
       0 likes, 0 repeats
       
       @baldur@toot.cafe It serves a community purpose, collecting a list of people to block in one convenient place.
       
 (DIR) Post #B2YE9lDbEcVW7MnNg0 by abucci@buc.ci
       0 likes, 0 repeats
       
       I wonder if there's a psychological barrier that a web developer has to overcome to make a non-trivial web page with no pop-ups of any kind. It seems like a compulsion.

       At least in the way I use computers as a low vision person, pop-ups are extraordinarily anti-accessibility. Yes, even tooltips and alt-hovertext, depending on how they're done. Some websites are close to unusable because of these things.

       #tech #dev #web #DarkPatterns #accessibility
       
 (DIR) Post #B2YRZvd2S16lL1VDN2 by abucci@buc.ci
       0 likes, 0 repeats
       
       @jgroszko@tech.lgbt Oh no, that's horrible.

       The one that was plaguing me today was on a shopping site, but wasn't an ad. It's become fairly common in product views to have a set of images of the product, with thumbnails below or alongside the current main image. The one I was viewing changed the image if you moused over a thumbnail, and popped up a zoomed version of the image if you moused over the main image. The net result was that placing your mouse nearly anywhere caused something to pop into view and cover up a sizable proportion of the page. The frenetic popping and unpopping made it easy to lose track of where the mouse pointer was, which led to even more frenetic popping and unpopping. Since I use a screen magnifier most of the time, the popped-up content took up 90% of the available screen real estate. It was deeply frustrating, and I closed the page and moved on with my life.

       The individual features are all useful. I like being able to see several different images, and being able to zoom the image is nice at times. However, the way they were all crammed together was poor, at least for me.
       
 (DIR) Post #B2ZuXZ9Q8w56sGpnu4 by abucci@buc.ci
       0 likes, 1 repeats
       
       @lukito@gamedev.lgbt If it quacks like a Ponzi scheme...
       
 (DIR) Post #B2aFjoGtqgpwHa0pjk by abucci@buc.ci
       0 likes, 1 repeats
       
       @danmcquillan@kolektiva.social This is not quite an answer, but to the extent that corporations are the main purchasers of image generators, the promise seems to be similar to what LLMs promise. Confine human endeavor to a small, controllable world that seems "creative", but is incapable of generating a true challenge to the powerful. Here I am using Leonard Savage's small vs. large world distinction. The former is like living in a chess game according to the rules of chess; the latter is like living in the real world, where you can knock over the board, refuse to play, draw a smiley face on the Queen, etc etc etc. If chess grandmasters convince people that all of society's concerns should be decided over a chess board according to the rules of the game then they've locked in their power because there is no rule of chess that allows you to suspend the rules of chess. That's how I've been seeing the value of AI (to the powerful) lately.
       
 (DIR) Post #B2aISA9i2tA6nwvTge by abucci@buc.ci
       0 likes, 1 repeats
       
       @danmcquillan@kolektiva.social Latent diffusion models were conceived as a way to efficiently represent very complicated probability distributions. They have a wide variety of uses, though I'd say many of those uses are the ones that large corporations invested in "big data" are concerned with. I've found in my time working for and with corporations that once you ascend far enough up the chain you're almost certain to encounter people whose primary drives involve "capturing the knowledge" of the people who work below them (seems like a form of Marx's notion of dead labor). Latent diffusion models promise to be able to do this. There may not be a single "killer app", but many individual applications that are together serving this background disposition. Adobe sticks image generators in its various creative tools to let people more quickly do things they were already doing such as remove backgrounds from photographs, smooth out noise, or upsample. These are all tendrils of the same phenomenon, it seems to me.

       But I can't say I know for sure; I'm just spitballing. They've probably confessed some of their motives in their earnings calls.
       
 (DIR) Post #B2agqw4LxARVFvlYvo by abucci@buc.ci
       0 likes, 1 repeats
       
       Bracing for -20℃ temperatures tonight. The temperature is dropping pretty rapidly now.

       #maine #winter #cold #weather