[HN Gopher] Google Colab will soon introduce AI coding features
___________________________________________________________________
Google Colab will soon introduce AI coding features
Author : todsacerdoti
Score : 152 points
Date : 2023-05-17 16:03 UTC (6 hours ago)
(HTM) web link (blog.google)
(TXT) w3m dump (blog.google)
| greatpostman wrote:
| I really think the current paradigm of literally typing if-else
| logic all day into programs, and then getting paid huge money to
| do so, will go away. Programming is going to be much higher-level
| and accessible, though still complex. It will take five years or
| so.
|
| Edit:
|
| My prediction is essentially that:
|
| You give it a set of requirements for an api, with edge
| conditions written in plain English. Test cases are provided. And
| the vanilla api is generated. We aren't that far from this. It's
| going to happen. Programmers may go in and tweak the generated
| code. When the requirements change, you pass in the old code as
| context.
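|
| (To make the workflow concrete, the input might be nothing more
| than a short plain-English spec plus example tests. This sketch
| is hypothetical; every name in it is invented, and a reference
| implementation is included only so the snippet runs - the body
| is what the model would be asked to generate:)
|
|     import pytest
|
|     # Requirements (plain English): normalize_username
|     # lowercases, trims whitespace, and rejects empty results.
|     def normalize_username(raw: str) -> str:
|         name = raw.strip().lower()
|         if not name:
|             raise ValueError("empty username")
|         return name
|
|     # Test cases provided up front, edge conditions included.
|     def test_normalize_username():
|         assert normalize_username("  Alice ") == "alice"
|         assert normalize_username("BOB") == "bob"
|         with pytest.raises(ValueError):
|             normalize_username("   ")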
|
| Complex software will require human coding. But most programmers
| are in denial about how complex the code they're writing actually
| is.
|
| Requirements gathering is notoriously tricky, but you won't need
| engineers for it. Welcome to the age of the PM.
| nemothekid wrote:
| > _You give it a set of requirements for an api, with edge
| conditions written in plain English. Test cases are provided.
| And the vanilla api is generated._
|
| Sometimes I wonder where some of the posters here work or if
| I'm working in a dystopia.
|
| > _You give it a set of requirements for an api with edge
| conditions written in plain English_
|
| This part is the job! If my job was 100% writing logic, it
| would be infinitely easier. Defining the requirements,
| evaluating the tradeoffs, discovering the edge conditions is
| where the bulk of my time goes. The only time someone did this
| for me was when I was a junior developer. Maybe I'm
| overestimating things, but I find it hard to believe that most
| engineers pulling huge salaries are just shuffling around
| fields on a JSON API. Do you really need AI to expose a CRUD
| interface to Postgres?
|
| Edit:
|
| This idea that LLMs will replace engineers (or lawyers, or any
| traditionally "skilled" field) is hype. It's the same mistake
| that the customer makes when he's shocked that he gets a bill
| for $10,000 for replacing a screw in his car engine. You are
| conflating the actual physical labour requirements of the job
| (sitting down and coding) with the actual knowledge value that
| is being used when you do the job.
|
| For example, take a look at Redis. Redis is a great codebase,
| especially for those that want to learn C. It's simple - there
| are exceedingly few mind-bending, hardcore engineering
| algorithms in Redis. Antirez is an amazing software engineer,
| but Redis is not the fastest database, nor is it the most
| durable. But what you see is the meticulous application of
| engineering tradeoffs; there are things Redis does incredibly
| well and things it deliberately doesn't, a balance made easier
| by the overall architecture of the code. How would you even
| begin to prompt this to an LLM? Again, the code _isn't_
| complex, but the _engineering_ is, and the act of turning those
| ideas and communicating them, either to an LLM or to a C
| compiler, _is_ engineering.
|
| No one comes home from a long day and says "Honey, I'm tired I
| spent all day typing JSON schemas and function signatures".
| fdgsdfogijq wrote:
| Someone non-technical can do the requirements gathering.
| nemothekid wrote:
| Again, where do you guys work? Maybe let's not use software
| engineering. If I asked you today to do requirements
| gathering for building a bridge across the Mississippi
| river, could you do it? Don't you think you would have to
| learn about materials, regulations, land procurement,
| tensile and compression strengths and all the tradeoffs of
| the above? And once you learned all of that and you can
| finally present the requirements, would it be fair to call
| you non-technical?
| treyg wrote:
| I have to strongly disagree with this. If someone non-
| technical does the requirements gathering, the requirements
| will be incomplete. Developers will then either assume
| their own requirements or, if they're thorough and good at
| their job, check these assumptions with the business.
| Either way, this wastes the company's time. Either the
| assumed requirements will require re-work or the company
| has paid two employees to do the same job that should've
| been done by one.
| Workaccount2 wrote:
| It will be a technical person, and they will be paid
| $65k a year to do it.
| fdgsdfogijq wrote:
| The requirements are in the plain English doc.
| SkyPuncher wrote:
| In my experience, it's a complete myth that requirement
| gathering can be done in a box.
|
| "Requirement gathering" is almost always influenced by
| technical capabilities. All but the simplest projects
| require back and forth with guesstimates about the direction
| and approach.
| fdgsdfogijq wrote:
| This is because it's difficult to decouple the code from
| the requirements. And the full requirements are never
| contained within a doc. So you need an engineer to look
| at the code to see how it's done. This is going to change.
| umanwizard wrote:
| Not true at any job I've ever worked at.
| mdorazio wrote:
| > or if I'm working in a dystopia
|
| Going to guess you're both young and working at a tech
| company as opposed to a big company that also does some tech
| things.
|
| Prior to the rise of Agile and the decline of waterfall-
| style development, the role of Business Analyst was very
| popular. It still is at companies that need to do
| waterfall development, but less so these days. The Business
| Analyst (BA) role is semi-technical and requires some subject
| matter expertise, but generally doesn't do much actual
| writing of code. Instead it's focused on requirements
| gathering, wireframe creation, test case creation & running,
| bug logging, and status reporting. I know you're probably
| saying "that's at least half of my job!" and you're right,
| but now go and look at the difference in pay between a BA and
| a developer. It turns out that if you remove the "actually
| writes code" part of the job, it's not worth nearly as much
| and I think that's what the comment above is trying to
| express.
| nextworddev wrote:
| The bifurcation of programmer careers will intensify.
|
| Think of it like hedge fund traders vs stay-at-home retail
| traders.
|
| The closer you are to the core AI stack, the more you will get
| paid - kind of like hedge fund traders.
|
| If you are just copy pasting ChatGPT mostly, then you will get
| disintermediated at some point.
| greatpostman wrote:
| I disagree with this as well. The number of people
| legitimately working on LLMs right now is probably only in
| the hundreds. One or a few huge models will do all the
| programming automation. But yes, high paying developer jobs
| will exist. The rest will not pay as well, and with much
| worse job security. This field is going to change very fast.
| cj wrote:
| > This field is going to change very fast
|
| It will be slowed by bureaucracy.
|
| Example: To be SOC 2 compliant, you need to have change
| management controls in place. How do humans manage change
| management if an AI is doing everything for the humans?
| Would humans still be doing code reviews? (AI might be so
| good that code reviews are obsolete, but compliance /
| regulations may still require humans in the loop.)
|
| There's also a -huge- subset of software that is extremely
| mission critical or heavily regulated by compliance where
| all current compliance frameworks assume there's a human in
| the loop. Ripping the human out of the loop would require
| re-writing the regulations / standards (which I guess AI
| could help with) but I think the change will come slower
| than you're predicting.
|
| Change won't be slower because of any technological barrier
| necessarily (this is debatable), but because it will
| absolutely take a very long time for humans to fully trust
| AI and its output.
|
| The era we're in currently feels like where Tesla was a few
| years ago: really cool concepts and proof that self-driving
| "works", but so many edge cases that a complete roll-out is
| limited. The original promise of "everything will be self-
| driving in 5 years" seemed achievable, but 5 years later it
| hasn't been achieved, and self-driving remains simply an
| assistive technology with many limitations.
| greatpostman wrote:
| There is way too much money to be made in this for
| bureaucracy to get in the way.
| nextworddev wrote:
| Yes - but here's the catch.
|
| Orgs that are agile and unbothered by middle managers
| will thrive by adopting change, while the slow ones will
| lose their competitive edge.
|
| Eventually even the slow ones will have to adopt change.
| weatherlite wrote:
| > Complex software will require human coding. But most
| programmers are in denial about how complex the code they're
| writing actually is.
|
| I think it's just a matter of time till all software becomes
| complex, e.g. a codebase worked on by a few dozen people for 10
| years will have a non-trivial level of complexity. Not
| disagreeing or anything, the machines might be able to do this
| well, we'll see. It will have to improve by a lot, like almost
| no hallucinations. But there could be many shades in between.
| Instead of completely replacing developers in 5 years, which I
| personally find unlikely, it could replace 50% of them.
| adammarples wrote:
| Thinking logically isn't going out of fashion any time soon.
| Excel lets you fit increasingly higher-order polynomials to a
| time-series graph, but you still need someone to tell you
| that's not a good forecast.
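|
| (A quick illustration of the point with made-up data; all the
| numbers here are invented for the example:)
|
|     import numpy as np
|
|     rng = np.random.default_rng(0)
|     t = np.arange(30)                        # 30 time steps
|     y = 0.5 * t + rng.normal(0, 2, size=30)  # noisy linear trend
|
|     # A degree-9 polynomial fits the history better than a line...
|     line = np.polyfit(t, y, 1)
|     poly = np.polyfit(t, y, 9)
|     print(np.abs(np.polyval(line, t) - y).mean())  # bigger residuals
|     print(np.abs(np.polyval(poly, t) - y).mean())  # smaller residuals
|
|     # ...but extrapolates absurdly far off the recent trend.
|     print(np.polyval(line, 60))  # plausible continuation
|     print(np.polyval(poly, 60))  # typically huge / nonsensical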
| beoberha wrote:
| What do you mean by "higher level"? AI code generation tools
| are extremely useful to people who know how to code, but how are
| they useful if you can't understand the code they spit out? They
| are far from bug-free, and prompting them to handle complex
| business logic is tricky. If the higher-level paradigm is
| simply English, then I don't see it. If something else, I'm
| curious what you foresee.
| greatpostman wrote:
| You give it a set of requirements for an api, with edge
| conditions written in plain English. Test cases are provided.
| And the vanilla api is generated. We aren't that far from
| this. It's going to happen. Complex software will require
| human coding. But most programmers are in denial about how
| complex the code they're writing actually is.
| zitterbewegung wrote:
| Since people are bad at describing what they want, there will be
| some kind of person that deals with the business and makes the
| computer work.
|
| Already we have prompt engineering. We could have said the same
| thing about people who write in assembly language, about each
| higher-level language, and about programs that are designed to
| be low-code. I see this as a new type of spreadsheet, but
| scaling these up or using these programs isn't going to make
| coders / software engineers / data scientists go away; in fact,
| historically we have seen the opposite.
| dpistole wrote:
| "paid huge money to literally write if/else"
|
| "most programmers are in denial about how complex the code
| they're writing actually is"
|
| I can taste the salt
| StefanWestfal wrote:
| A possibility, and I think the way we work will evolve as well.
| But coming from a more "numerical" background, I can imagine a
| different route. Twenty or more years ago, people who wanted
| to process a larger amount of data needed to understand low-
| level details, compilers, C/C++/Fortran, mathematical details,
| and so on. Today, we have JAX, scikit-learn, and many more
| tools. But these tools did not make the old 'numerical' people
| jobless; instead, their jobs evolved. Today, we have more data
| scientists than ever. You can create your own app faster than
| years ago, including hosting, persistent storage, load
| balancing, ... And again, we have more web developers than
| ever. The same goes for jobs like DevOps and other jobs that I
| probably don't even know about. The level of abstraction got
| higher: you still need to know what the algorithm is doing, but
| you do not need to implement it again. The point is the field
| will evolve,
| and right now, it may be the biggest jump ever, but that does
| not mean jobs will go away. We might end up in a situation like
| self-driving, where we are really close but still missing the
| last bit and need human intervention. I hope LLMs will solve
| the tasks that have been solved many times before, like
| bootstrapping a CRUD app, and we can focus on the edge cases
| and niche problems.
| analog31 wrote:
| Who will write in plain English? Granted I'm not in software
| development, but work in an adjacent area. Communicating
| requirements is already a hard, unsolved problem.
|
| In an adjacent discussion about ChatGPT, we can look forward
| to when nobody learns to write in plain English any more.
| brigadier132 wrote:
| Here's a different 10-year prediction. AI will become good
| enough to be useful for programmers but not good enough to run
| on its own and will remain an assistant. Given that there is a
| shortage of programmers, more software will be written than
| ever. More software will beget more software engineers. Because
| the output of software engineers will rise, each individual
| software engineer will become more valuable to their business
| and salaries will rise.
| adventured wrote:
| 10-15 years out AI will be good enough to wipe out at least
| half of all software developers on the planet (that is,
| people writing actual code). That's not a risky prediction,
| that's very easily going to be the case.
|
| Those people have nowhere to go to match what they're earning
| now. Maybe support review roles (which won't pay particularly
| well), where they approve decisions by the AI that are held
| up for human approval.
|
| The bottom half of software developers won't have AI
| assistants. The AI will have them as human assistants
| (required by corporations for control/safety/oversight
| purposes).
|
| The ~50%-25% bracket will build software using AI tools and
| will rarely write the actual code.
|
| In the top ~25% bracket (in terms of skill) you'll have
| software developers that are still paid very well and they'll
| directly write code, although not always. That group will be
| the only one remaining that is paid like today's software
| developers get paid.
|
| Software developer in the future will most commonly mean
| someone who builds software via AI tools (with the AI writing
| nearly all of the actual code). Human software developers
| will be glorified prompt wizards (with required degrees; it
| won't be a great job).
|
| For the median software developer, the peak has already been
| reached (in terms of pay and job security).
|
| Emerging market software developers will be hammered before
| their economies can fully benefit from the relatively high
| pay of the industry (from off-shoring work from big tech).
|
| The golden run is over for the bottom 3/4 of software
| developers. Prepare for it. Get used to it. In the developed
| world the ladder up and out of the middle class via software
| development is going to go away (and quickly).
|
| To regularly write code in the future you'll have to be damn
| good. Good enough, and knowledgeable enough, to be better at
| what you're doing than an AI with a handler (human
| assistant). You'll be writing the AI systems that govern
| everything and it'll be increasingly regulated, with more
| government licensing (plausibly formal AI engineer licensing
| and actual accountability, because the risks will go way up).
| boringPragma wrote:
| [dead]
| brigadier132 wrote:
| > 10-15 years out AI will be good enough to wipe out at
| least half of all software developers on the planet
|
| This reminds me of self driving car predictions. It also
| ignores that "building" software encompasses many different
| tasks.
| adventured wrote:
| That's actually a great supporting point for why AI will
| wipe most developers out.
|
| What you're referring to is one of the AI systems that I
| mention that the top ~25% will still write code for
| directly. It's a governing AI system.
|
| Most software development doesn't involve tasks that can
| very easily kill people with N thousand pounds of fast
| moving metal.
|
| Most software development is trivial by comparison in
| terms of challenge/complexity. It'll be wiped out
| accordingly. Why would you need anything more than a
| human handler to sign off on AI development as it goes
| down a path? It'll be able to build drastically faster
| than a median developer can and it can do it without
| getting tired (its productivity won't implode after 4-6
| hours). All you'll need are some human handlers to
| approve key decisions during the process of development
| (to ensure you get to the end product that is desired).
| yamazakiwi wrote:
| We have a lot of stages to go through before we achieve
| the level of development you're describing, and I don't
| think you can confidently argue 10-15 years vs 5-10 years
| vs 20-30 years.
|
| As with most technology job predictions, it won't happen
| the way most predict. Yes what defines a developer/swe
| will change, but our history has shown us that we are
| likely to see an increase of jobs in technology for the
| long term. I remain optimistic that most people will find
| new roles as they emerge.
| nemothekid wrote:
| > _10-15 years out AI will be good enough to wipe out at
| least half of all software developers on the planet (that
| is, people writing actual code). That's not a risky
| prediction, that's very easily going to be the case._
|
| How is this _not_ a risky prediction?
|
| 1. Are you talking about AI (i.e. AGI) or LLMs? If you
| think we will have AGI in 10 - 15 years maybe you are
| right, but AGI is always just around the corner.
|
| 2. LLMs don't seem to be the magic multiplying force people
| are insinuating. GPT-3 has been around for almost 3 years,
| and while great (I was an early adopter) it's just another
| tool for me.
|
| Over the past 10 - 15 years software has gotten _easier_ to
| develop. And making software easier to develop has just gone
| toward serving greater and greater demand. The invention of C
| didn't reduce the number of engineers because it was
| simpler than ASM. The invention of Python didn't reduce the
| number of Java engineers. Every year CPUs get faster and
| faster (at an almost exponential rate), and yet software
| somehow manages to get slower and slower. No other tool in
| the short history of software development ever did anything
| close to "_wipe out at least half of all software
| developers_", despite newer tools becoming easier and
| easier.
| adventured wrote:
| Absolutely not AGI.
|
| I'm talking about AI - made up of multiple modules that
| focus on different aspects - that can comprehensively
| build software products, with nothing more than human
| sign-offs to get from start to finish. Nothing even
| remotely close to the difficulty of building AGI (which I
| don't think will happen in the next 30 years at least).
|
| In this scenario the AI has human assistants that sign
| off on decisions before the AI can proceed further,
| before it can continue writing more code. The human
| developers that build this system will include a judgment
| for checkpoints, for the AI to judge when it thinks it's
| necessary to ask permission from a human to proceed (hey
| human, do you want me to go this way or that way?). The
| AI will occasionally present the human prompt clicker to
| choose from multiple viable development paths (for
| example overnight it'll build three different approaches
| to solving a problem after you leave work at 5pm; when
| you come in in the morning your task will be to pick the
| one you think is best, and the AI will proceed from
| there). I think a pattern of: AI development ->
| checkpoint approval by human -> continued AI development,
| will be the superior way to utilize AI to build software
| (rather than letting it get too far down the wrong path
| by trying to let it build without oversight most or all
| of the way).
|
| The human decision making process at checkpoints will
| become by far the slowest part of building software. The
| AI will spend most of its time waiting for someone to
| sign off on a path.
| [deleted]
| weatherlite wrote:
| What you describe is true of pretty much all professions,
| even doctors who don't use their hands. 15 years is a lot
| in the current pace of things. Also, society will be
| completely transformed if this transpires, retraining will
| be completely normal. Not saying it's gonna be easy, I'm
| just saying this is much bigger than software developers.
| StefanWestfal wrote:
| Isn't that the nature of tech? In the past most programmers
| needed to focus on low-level details, while today most devs
| knit together libraries and services, and yet there are more
| than ever and salaries are higher than ever. I think nobody
| that enters tech expects that in 20 years we'll "code" as we
| do today, but we will still build stuff and need to solve
| problems... and there are enough problems to solve.
| adventured wrote:
| That's pretty much the case. We keep building layers and
| abstracting away.
|
| Being a prompt wizard won't pay as well as directly
| writing code for the AI systems. They're different
| layers. There won't be more people directly developing
| software than there are today in the US market, but there may
| be more overall jobs in and around the process (i.e. the tech
| industry will continue to expand to more people in terms of
| employment; benefits will weaken, median pay will fall).
|
| Most software developers will be prompt wizards, with
| required degrees that say they know how to be effective
| prompt wizards. Then there will be a lot of supporting
| roles, oversight roles.
|
| More jobs in the industry, lower skill levels, less pay.
| StefanWestfal wrote:
| I think we push complexity forward. I agree in the sense
| that the pure dev part will require less bandwidth for
| most but the free bandwidth allows us to push complexity
| forward into different domains. My father still needed to
| punch card to code and now we can setup an app with a few
| clicks world wide that uses NN to solve a task and that
| all by ourself. So demand will be high for cross domain
| knowledge like Fullstack/ML + Domain X.
| adventured wrote:
| If you're a high skill programmer that has domain
| knowledge re AI, you'll do very well in the future,
| whether 5 or 20 years out.
|
| We simultaneously won't need and won't want the majority
| of software developers that exist today (the sheer number
| of them) writing code in the future. That would be a bad
| outcome.
|
| They're going to end up more valuable as prompt wizards
| and checkpoint decision makers, because the AI will be
| drastically better at writing code (in all respects) than
| they could ever be. And they're going to get paid less
| because more people will be able to do it, software
| development will become a lot less intimidating as a
| field. It'll be more mass market as a field, akin to
| being a nurse (4.2 million registered nurses in the US).
| boringPragma wrote:
| [dead]
| [deleted]
| tudorw wrote:
| Having played a bit with coding with AI, I like this view.
| It's powerful, but context will always be an issue; turning
| real-world problems into working code is a skill in its own
| right, and that will not change. Superpowered, yes. For me
| it's all the 'dull' parts of typing stuff out that I look
| forward to missing. An early study on impact in the
| workplace, a 2-year study of a call centre, found a large
| improvement for less skilled staff, as knowledge of best
| practice from those really excelling at customer service was
| rapidly propagated to juniors; seniors saw a much smaller
| increase in performance metrics. Now there is no excuse for
| me not to get AI to write my comments, a bunch of tests, and
| some APIs and database interconnects, etc. I've always taken
| a modular, iterative approach: create a working basic model
| with a good foundation, then extend it, keeping it working as
| I build up to the final deliverable. Tempting just to go and
| sit in a cave for a couple of months and come back when the
| tools are a little more refined :)
| LamaOfRuin wrote:
| Jesus, this just triggered another nightmare scenario I
| should have thought of earlier. People are going to have it
| write and/or comment a function whose purpose is non-trivial
| and not straightforward. Whether the comment actually says
| what it should, and whether the functionality matches, will
| be checked imperfectly or not at all. There will be no
| indication of what was written by AI, so the only option is
| to assume that both were written competently and with the
| same purpose. When they don't actually match, there's no way
| to know which is wrong except to check all the other code
| that interacts with it and figure out what has been relying
| on the originally intended behavior in ways that subtly
| break, and what has been working only because of the
| incorrect implementation that existed.
|
| This is absolutely something that already happens with
| fully human developers, but it seems likely to be much more
| frequent, and not caught as soon, with AI assistance.
|
| This also seems like a failure mode that could go
| pathologically wrong on a regular basis for TDD types.
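|
| (A toy example of the kind of mismatch meant here; the
| function and its docstring are entirely made up:)
|
|     def is_expired(token_expiry, now):
|         """Return True if the token is still valid."""  # says "valid"...
|         return token_expiry < now                       # ...returns "expired"
|
|     # Whether a caller is wrong depends on whether they trusted
|     # the docstring or read the body - exactly the ambiguity
|     # described above.
|     print(is_expired(100, 50))  # False: "not yet expired" per the code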
| occamrazor wrote:
| An AI may be more likely to hallucinate the wrong
| comment, but also much less likely to forget to update
| the comment when the code changes. The net result could
| be better comments.
| lvl102 wrote:
| People who get paid well in this business get paid for the big
| picture stuff. Architecture. Writing good code was never that
| valuable. My two cents.
| frozenport wrote:
| Wow you were doing that?
|
| And I'm sitting around here architecting code, rewriting
| graphs, and gathering input from stakeholders.
| danwee wrote:
| That's what they thought 50 years ago. Programming is no longer
| "mov eax,1" and "int 0x80". Programming nowadays is much more
| accessible than it was 50 years ago... but still only a few
| (programmers) do it. AI is not going to change that (i.e.,
| neither my manager nor my mother is suddenly going to "program"
| anything using high-level constructs).
| boredemployee wrote:
| You may be right, you may be wrong.
|
| Only time will tell (and I think it will tell really really
| soon).
| rustyminnow wrote:
| I find that this comment adds absolutely nothing to the
| conversation. It could be posted under every comment that
| says <regex>"AI is (not )?going to revolutionize (the
| world|everything|this industry|my job)"</regex>
|
| Anybody making a prediction about the future might be right
| or might be wrong, and we won't know until it happens. This
| is just a snarky way of saying "nah dude you're wrong"
| andai wrote:
| No, they also said they think we'll find out "really
| really soon", which seems highly likely.
| rustyminnow wrote:
| They may be right, they may be wrong.
|
| Only time will tell :)
| boredemployee wrote:
| thank you for knowing how to interpret texts
| edgyquant wrote:
| It could be posted under anything. I could claim that the
| earth is going to turn into a ball of cheese and then
| write off everyone with "only time will tell."
| rustyminnow wrote:
| Yes, but how SOON will time tell? For earthen ball of
| cheese I predict it will tell "pretty soon". But who
| knows? Only time will tell.
| boredemployee wrote:
| Well... how about reading my text and trying to interpret
| it against the world around us? In the sense that things
| are going so fast that we will soon have the answer, and
| most probably _some_ jobs will be replaced, yes (and that
| it won't take another 50 years, as implied in the parent
| comment, to have the definitive answer).
| rustyminnow wrote:
| You make a good argument (in this comment). I think
| things are moving really fast and some jobs will be
| replaced. But even though AI moves extremely fast, the
| industry as a whole moves much slower. It will take years
| to integrate and leverage these things. How many
| companies are still running legacy java applications? How
| many still run Cobol? Things will change eventually, but
| AI won't destroy a majority of jobs for many years.
| boredemployee wrote:
| Wow you're so smart my guy
| rustyminnow wrote:
| Thanks :)
| pphysch wrote:
| Yeah, generating some Python with Codey is just adding
| another layer to an already substantial stack.
|
| Natural language is translated into Python, which is compiled
| into bytecode, which is run by an interpreter written in C,
| which is...
|
| We still need experts at every layer.
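|
| (You can even inspect one of those layers from Python itself -
| the bytecode the CPython compiler produces, which the
| C-implemented interpreter loop then executes:)
|
|     import dis
|
|     def greet(name):
|         return f"hello, {name}"
|
|     # Disassemble the function to its bytecode instructions.
|     dis.dis(greet)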
| zelag wrote:
| Who is typing if else logic all day into programs? Not even as
| a junior did I do this.
| greatpostman wrote:
| Programming is essentially if else logic
| sorokod wrote:
| Is that what they call "full stack" programming?
| weatherlite wrote:
| "full stack" programming is actually incredibly hard
| since you can no longer push new items once the stack is
| full. If this happens you better go the route of
| "stackoverflow" programming.
| sorokod wrote:
| Totally valid use of an if statement right there.
| abeppu wrote:
| I think this is a sad vision of the future not because a lot of
| programmers will be forced to do something else, but because it
| seems like this represents a fundamental failure for formal
| methods. We're hurtling towards a world where we automatically
| generate low-quality, buggy code, produced by stochastic
| parrots who mimic code written in possibly a different era by
| humans tackling different problems.
|
| Forget about being employed producing software in a world where
| these are the norms, I don't think I'd want to be a software
| _user_.
|
| I'd love for programming to be higher level and accessible, but
| I wish the process were:
|
| - write a _specification_ that describes functionality,
| invariants, etc
|
| - generate signatures compatible with that specification
|
| - interactively seek implementation-relevant information from
| the designer, e.g.:
|   - "I see Users can have an unbounded collection of Bars,
|     which can each reference an unbounded number of Wugs. How
|     many Bars do you suppose a typical User would have? How
|     many Wugs per Bar and how many Bars will reference each
|     Wug? How large is a Wug typically?"
|   - "Which of these DB queries do you expect to be run
|     most/least frequently?"
|   - "This API method allows for pagination. How often do you
|     expect a user to page beyond k?"
|
| - generate tests
|
| - generate code aligning with signatures, _and which cause
| tests to pass_ (i.e. program synthesis from examples)
|
| - static analysis / abstract interpretation / model checking to
| confirm that some invariants are respected
|
| - explicitly reporting which invariants or properties were not
| able to be confirmed automatically which a human may need to
| confirm via an ad-hoc analysis
|
| Software is one of the domains where we can actually do a lot
| of sophisticated automated reasoning, but the current trend of
| ML code generation as text completion ignores basically all of
| that, and this seems like a giant waste.
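|
| (You can approximate a slice of this pipeline today by writing
| the specification as executable properties and letting a
| generator hunt for counterexamples. A minimal sketch using the
| hypothesis library; the function and its invariants are
| invented for illustration:)
|
|     from hypothesis import given, strategies as st
|
|     # Spec: a discount never yields a negative total, never
|     # raises the price, and 0% is the identity.
|     def apply_discount(price_cents: int, percent: int) -> int:
|         percent = min(max(percent, 0), 100)  # clamp to [0, 100]
|         return price_cents - (price_cents * percent) // 100
|
|     @given(st.integers(min_value=0, max_value=10**9),
|            st.integers(min_value=-50, max_value=200))
|     def test_discount_invariants(price, percent):
|         result = apply_discount(price, percent)
|         assert 0 <= result <= price               # bounded by [0, price]
|         assert apply_discount(price, 0) == price  # 0% is identity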
| seydor wrote:
| Alexa build me a Google (or is it the other way around?)
|
| As more people use automated program-making, most of the code will
| be low-level spaghetti in procedural languages, because it is
| easier for the model to reason with (without needing huge
| buffers to know about abstractions). We will see a lot of
| generated Javascript, PHP, even a return of Win32. Who cares if
| they are tedious for humans to read and fix; now the AI can do
| all kinds of advanced search/replace in the code. And maybe at
| some point they'll be generating machine code directly.
| falcor84 wrote:
| I'm tired of Google announcing something in the future; wake me
| up when it's actually out.
| [deleted]
| tomkaos wrote:
| I always learn about Google products too soon or too late: when
| the new product is not yet available, and when they announce
| they're killing it.
| xena wrote:
| Are you kidding? Google's main product is waiting lists.
| akiselev wrote:
| There's Google Wait, Wait+, GWait, Hangwait, Wate, Wait4it,
| Google Weight List, Alphawait, and last but not least GWait
| Hangwait.
|
| Am I missing any?
| dekhn wrote:
| Google Finally. They're the ones who release everything at
| google: "Google Finally releases GoogleGargle"
| golergka wrote:
| There's actually 3 different products named Google Wait.
| barumrho wrote:
| wait until you gotta wait for the waitlist.
| davidhs wrote:
| The Brits love it.
| cperry wrote:
| we debated this; this is going to be a sequential rollout based
| on availability/capacity, and it seemed better to broadcast it
| was coming earlier b/c we're not going to have a "launch event"
| moment.
| reaperducer wrote:
| So it's a soft opening. Like an out-of-town preview.
| AbrahamParangi wrote:
| Arguably it would be good to not announce till it's actually
| available, because even once released, expectations are going
| to be extremely high due to Copilot.
| anotherpaulg wrote:
| (Shamelessly spreading the word about my open source tool)
|
| You can do GPT-4 powered coding chats in your terminal today with
| aider. It's not free in the sense that you need a GPT-4 API key,
| which OpenAI charges you for. But it's a pretty great way to
| collaborate with AI on code.
|
| https://github.com/paul-gauthier/aider
| calny wrote:
| This looks great, thanks for sharing. Esp interested in:
|
| > Complex Multi-file Change with Debugging: A complex code
| change involving multiple source files and debugging.[0]
|
| [0] https://aider.chat/examples/complex-change.html
| anotherpaulg wrote:
| Thanks for checking out the chat transcripts!
|
| Ya, that one is some real work I was doing on the tool itself
| by using the tool. I use aider extensively while I code now.
| Aider has authored 150+ commits in that repo in the last week
| or so.
|
| Many of the other transcripts are more "toy problems" to help
| convey the experience of working with aider to folks
| unfamiliar with the tool.
| worldsayshi wrote:
| Neat! This interface looks simpler than most alternatives I've
| seen lately.
|
| Do you know how it deals with source files that are larger than
| the context window?
| anotherpaulg wrote:
| Thanks for checking it out.
|
| Right now aider doesn't even try to deal with files larger
| than the context window. For gpt-4, that's 8k tokens or about
| 32 kbytes, which is pretty reasonable.
|
| According to the data in [1], the average source file on
| GitHub is less than 14KB. It's worth noting that they
| explicitly discarded small files with less than 10 lines. So
| the true average is probably much lower.
|
| Regardless, I haven't found the gpt-4 context window issue to
| be problematic in practice yet. You do need to be careful
| about how many files you "add to the chat" at once. But
| that's not hard. For sure there are cases where you would
| need to refactor a large file before you could use it with
| aider.
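|
| (If you want to check whether a given file will fit, here's a
| rough sketch with the tiktoken library; the file path and the
| reserve budget are assumptions:)
|
|     import tiktoken
|
|     enc = tiktoken.encoding_for_model("gpt-4")
|
|     def fits_in_context(path, window=8192, reserve=2048):
|         # Leave room for the system prompt and the model's reply.
|         text = open(path, encoding="utf-8", errors="replace").read()
|         return len(enc.encode(text)) <= window - reserve
|
|     print(fits_in_context("app.py"))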
|
| I am certainly interested in adding context window management
| features to aider. I have previously explored a bunch of
| approaches for this with gpt-3.5-turbo. I shared some notes
| about these experiments previously on HN [2].
|
| [1] https://hoffa.medium.com/400-000-github-
| repositories-1-billi...
|
| [2] https://news.ycombinator.com/item?id=35441666
| panarky wrote:
| For chat, it's common to use GPT to summarize previous turns
| so they consume fewer tokens, while not losing the context
| completely.
|
| Seems like that could work even better for code, where gpt
| could compactly summarize what functions do without
| memorizing all the code.
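|
| (A minimal sketch of that idea against the 2023-era openai
| Python package; the prompt wording and the keep_last threshold
| are made up:)
|
|     import openai
|
|     def compress_history(messages, keep_last=4):
|         # Fold all but the most recent turns into one summary note.
|         old, recent = messages[:-keep_last], messages[-keep_last:]
|         if not old:
|             return messages
|         resp = openai.ChatCompletion.create(
|             model="gpt-3.5-turbo",
|             messages=[{"role": "user",
|                        "content": "Briefly summarize this conversation:\n"
|                        + "\n".join(m["content"] for m in old)}])
|         summary = resp["choices"][0]["message"]["content"]
|         return ([{"role": "system",
|                   "content": "Summary so far: " + summary}] + recent)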
| dang wrote:
| > "soon introduce"
|
| This looks like an announcement of an announcement, which isn't
| on topic for HN.
| https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...
|
| On HN, there's no harm in waiting for the actual thing:
| https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...
| candiodari wrote:
| I'm sure it'll be available in 180 countries: the US, the
| United States, America, New York, Washington, Wisconsin, ...
|
| (sorry frustrated bard-non-user in the EU speaking ...)
| pram wrote:
| Don't worry. Bard isn't very good, you're not missing out on
| much.
| jsnell wrote:
| I don't think that's the case. This is the official
| announcement of a product, with details on how the product
| works, and I doubt they'll be writing anything particularly
| more meaty when they proceed through various rollout stages.
|
| The way I've understood the "announcement of an announcement"
| rule in the past is that it is for reports/rumors that a company
| is going to release something, or a company sending a teaser
| for a future event where they'll announce something.
|
| The standard has never been that the product needs to already
| be generally available to be on-topic. If that were the
| standard, when could we discuss, say, a new iPhone? Not when
| it's talked about on the big Apple stage. Or, as another
| example, consider yesterday's discussion of the changes to the
| Google account deletion policy. No accounts will be deleted for
| 9 months; was it an announcement of an announcement?
| dang wrote:
| Ok, I've rolled back that penalty.
| https://news.ycombinator.com/item?id=35978216 confirms what
| you've said.
|
| The comments in this thread are nearly entirely generic,
| though.
| omeze wrote:
| Free of charge, unless they don't like what sort of programs you
| run! Google bans certain open source AI models in Colab, like
| Pygmalion. See
| https://colab.research.google.com/github/PygmalionAI/gradio-...
| dymk wrote:
| What's the problem here? They get to choose how they offer
| their free service?
| omeze wrote:
| The problem for users (not Google) is that it's an arbitrarily
| enforced restriction. Of course companies can do whatever
| they want. Google could ban all programs that import PyTorch
| on their free tier, too. Totally their choice. It's also my
| choice to mention that people should look into alternatives
| like Replit if they're worried about a cool notebook they
| made on Colab randomly getting them in Google TOS trouble.
| jsnell wrote:
| Does Replit give free access to GPUs for any use case? The
| documentation[0] has screenshots that imply that GPUs cost
| "cycles", which are a virtual currency you buy with real
| money[1].
|
| [0] https://docs.replit.com/power-ups/gpus
|
| [1] https://docs.replit.com/cycles/about-cycles
| balls187 wrote:
| The animated gif for the first example leaves a lot to be
| desired.
|
| The UX: Click a button, type in a prompt, click another button.
| The speed just to generate 2-3 lines of code.
|
| And AI-generated code with no basic error handling.
| pphysch wrote:
| That's the free UX. Paid users get a seamless Copilot-style
| autocomplete experience.
|
| Clever use of intentional UX friction IMO. It does cost money
| to run these models.
| balls187 wrote:
| And, free users are providing training data back to Google
| for free.
|
| Certainly Google should monetize; but not by making the UX
| artificially bad.
| renewiltord wrote:
| The way that I do this right now is use Copilot in IntelliJ for
| notebooks that I execute on a remote server. This does the job.
| nlstitch wrote:
| Yeah something about being vendor locked by Google into the
| "Democratisation of ML" does not sound appealing at all.
| assane101 wrote:
| How is it vendor lock-in if you are writing Python code you can
| pull out any time?
| [deleted]
| meghan_rain wrote:
| > Democratizing machine learning for everyone: Anyone with an
| internet connection can access Colab, and use it free of charge.
| Millions of students use Colab every month to learn Python
| programming and machine learning. Under-resourced groups
| everywhere access Colab free of charge, accessing high-powered
| GPUs for machine learning applications.
|
| Ugh, cringe. Just say this is a panic swing at an existential
| threat (OpenAI) and you're trying to commoditize them.
| cperry wrote:
| lol been working on this since 2021; I have a small team and we
| do our best.
| striking wrote:
| Deep respect for reading the comments and showing up for your
| team.
| [deleted]
| ztgasdf wrote:
| Would be nice to try, until I got banned yesterday for trying to
| connect two Colab runtimes together via SOCKS5. Turns out
| connecting to _any_ proxy at all immediately suspends your Colab
| usage for "suspected abusive activity"!
| [deleted]
| junglistguy wrote:
| [dead]
| aorth wrote:
| Was it also trained on my/our open-source code repositories and
| Creative Commons StackOverflow answers?
| https://stackoverflow.com/help/licensing
|
| Does it provide any attribution?
| oli5679 wrote:
| I find VS-code running notebooks with Colab plugin installed
| pretty helpful for this type of thing already.
|
| (1) run notebook in VS-code
| https://code.visualstudio.com/docs/datascience/jupyter-noteb...
|
| (2) install Github Copilot extension
| https://marketplace.visualstudio.com/items?itemName=GitHub.c...
|
| and then it can quite often achieve what you want with just a
| comment or some previous code snippets.
|
| Then if you load in pandas and a DataFrame you want to analyse,
| it quite often suggests the next stage of your data-analysis
| work, either right away or as you write the first step in some
| chain of pandas methods.
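|
| (The kind of thing that works well in practice: write a comment
| like the one below and Copilot will typically fill in something
| close to the rest. The file and column names are invented:)
|
|     import pandas as pd
|
|     df = pd.read_csv("sales.csv")
|
|     # group by region, sum revenue, and show the top 5 regions
|     top5 = (
|         df.groupby("region")["revenue"]
|           .sum()
|           .sort_values(ascending=False)
|           .head(5)
|     )
|     print(top5)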
| hn_throwaway_99 wrote:
| > I find VS-code running notebooks with Colab plugin installed
| pretty helpful for this type of thing already.
|
| I think you meant "with the _Github Copilot_ plugin installed".
| oli5679 wrote:
| You're completely right. It was a typo that caused confusion.
| BudaDude wrote:
| I got excited for a moment about running Colab in VS Code. I
| was sad to see it was a typo.
| sva_ wrote:
| Hint: VS Code Jupyter even allows connecting to remote kernels,
| so you can access some remote GPU from within the editor.
|
| I think Colab doesn't support this, but Paperspace Gradient for
| example does.
|
| https://docs.paperspace.com/gradient/notebooks/notebooks-rem...
| indigodaddy wrote:
| Not free for gpt-4, but there is also cursor.so (no
| affiliation/never used but looked interesting when I stumbled on
| it last week)
| version_five wrote:
| "We won't charge you for providing us with training data"
|
| Also, I'm sure they have ways around it, but I'd imagine colabs
| are a poor source of "good" code to use for further model
| training, both because of the kind of code you write in notebooks
| and the demographic that would make up colab users. It sort of
| fits with the idea that autocomplete might be good at writing
| short functions that do some specific thing, but not much help
| actually writing a full program.
| [deleted]
| neets wrote:
| Train the AI by using AI, things are getting pretty META
| nextworddev wrote:
| Is this another "beta" feature? It says it's available today, but
| I don't see it.
|
| Source: currently a Colab pro subscriber based in the U.S., but
| don't see the AI features in my Colab notebook
| cperry wrote:
| not yet available; it'll roll out sequentially in the coming
| weeks as we sort out capacity and polish.
| nailer wrote:
| How far away is the VS code plugin?
| neom wrote:
| It specifically says in multiple areas of the article that it's
| _not_ available today.
|
| "Google Colab will soon introduce..."
|
| "Access to these features will roll out gradually in the coming
| months, starting with our paid subscribers in the U.S. and then
| expanding into the free-of-charge tier. We'll also expand into
| other geographies over time. Please follow @googlecolab on
| Twitter for announcements about new releases and launches."
| barbazoo wrote:
| If you're not paying for it, you're the product
| sharemywin wrote:
| Your data is the product; they couldn't give two shits about
| you. They take care of pigs before they slaughter them.
| astrange wrote:
| Google doesn't need to care about "your data" particularly,
| they show you ads.
| astrange wrote:
| You can pay for Colab.
___________________________________________________________________
(page generated 2023-05-17 23:00 UTC)