[HN Gopher] Better Call GPT: Comparing large language models aga...
       ___________________________________________________________________
        
       Better Call GPT: Comparing large language models against lawyers
       [pdf]
        
       Author : vinnyglennon
       Score  : 289 points
       Date   : 2024-02-06 15:04 UTC (7 hours ago)
        
 (HTM) web link (arxiv.org)
 (TXT) w3m dump (arxiv.org)
        
       | crakenzak wrote:
        | This is one of the domains I'm very, very excited about for LLMs
        | to help me with. In 5-10 years (even though this research paper
        | makes me feel it's already here), I would feel very confident
        | chatting for a few hours with a "lawyer" LLM that has access to
        | all my relevant tax/medical/insurance/marriage documents and
        | would be able to give me specialized advice and information
        | without billing me $500 an hour.
       | 
        | A wave of (better) legally informed common people is coming, and
        | I couldn't be more excited!
        
         | OldOneEye wrote:
         | I wouldn't blindly trust what the LLM says, but I take it that
         | it would be mostly right, and that would give me at the very
         | least explorable vocabulary that I can expand on my own, or
         | keep grilling it about.
         | 
         | I've already used some LLMs to ask questions about licenses and
          | legal consequences for software-related matters, and it gave
          | me a baseline without having to involve a very expensive
          | professional for what are mostly questions about hobby things
          | I'm doing.
         | 
          | If there were a significant amount of money involved in the
          | decision, though, I would of course use the services of a
          | professional. These are the kinds of topics where you can't
          | be "mostly right".
        
           | nprateem wrote:
            | The only problems are that it could be convincingly wrong
            | about anything it tells you and that it isn't liable for its
            | mistakes.
        
             | sonofaragorn wrote:
             | What if they were liable? Say the company that offers the
             | LLM lawyer is liable. Would that make this feasible? In
             | terms of being convincingly wrong, it's not like lawyers
             | never make mistakes...
        
               | giantg2 wrote:
               | "What if they were liable?"
               | 
               | They'd be sued out of existence.
               | 
               | "In terms of being convincingly wrong, it's not like
               | lawyers never make mistakes..."
               | 
               | They have malpractice insurance, they can potentially
               | defend their position if later sued, and most importantly
               | they have the benefit of appeal to authority
               | image/perception.
        
               | AnimalMuppet wrote:
               | All right, what if legal GPTs had to carry malpractice
               | insurance? Either they give good advice, or the insurance
               | rates will drive them out of business.
               | 
               | I guess you'd have to have some way of knowing that the
               | "malpractice insurance ID" that the GPT gave you at the
               | start of the session was in fact valid, and with an
               | insurance company that had the resources to actually
               | cover if needed...
        
               | kulikalov wrote:
               | It's funny how any conversation ends with this question
               | unanswered.
        
               | YetAnotherNick wrote:
                | Weirdly, HN is full of anti-AI people who just refuse to
                | discuss the point being made and instead fall back on
                | the same argument about a wrong answer they got at some
                | point. They then present anecdotal evidence as truth,
                | while there is no clear evidence of whether an AI lawyer
                | has more or less chance of being wrong than a human.
                | Surely AI can remember more, and it has been shown to
                | pass the bar examination.
        
               | giantg2 wrote:
                | "while there is no clear evidence of whether an AI
                | lawyer has more or less chance of being wrong than a
                | human."
               | 
               | In the tests they are shown to be pretty close. The point
               | I made wasn't about more mistakes, but about other
               | factors influencing liability and how it would be worse
               | for AI than humans at this point.
        
               | naniwaduni wrote:
               | You'd require them to carry liability insurance (this is
               | usually true for meat lawyers as well), which basically
               | punts the problem up to "how good do they have to be to
               | convince an insurer to offer them an appropriate amount
               | of insurance at a price that leaves the service
               | economically viable?"
        
               | kulikalov wrote:
               | Given orders of magnitude better cost efficiency, they
               | will have plenty of funds to lure in any insurance firm
               | in existence. And then replace insurance firms too.
        
             | engineer_22 wrote:
             | This is an area for further development and thought...
             | 
              | If an LLM can pass the bar and has a corpus of legal work
             | instantly accessible, what prevents the deployment of the
             | LLM (or other AI structure) to provide legitimate legal
             | services?
             | 
             | If the AI is providing legal services, how do we assign
             | responsibility for the work (to the AI, or to its owner)?
             | How to insure the work for Errors and Omissions?
             | 
              | More practically, if you're willing to take on
              | responsibility yourself, is the use of AI going to save
              | you money?
        
               | AnimalMuppet wrote:
               | A human that screws up either too often or too
               | spectacularly can be disbarred, even if they passed the
               | bar. They can also be sued. If a GPT screws up, it could
               | in theory be disbarred. But you can't sue it for damages,
               | and you can't tell whether the same model under a
               | different name is the next legal GPT you consult.
        
               | nprateem wrote:
               | Re your first point: it's not conscious. It has no
               | understanding. It's perfectly possible the model could
               | successfully answer an exam question but fail to reach
                | the same or similar conclusion when it has to reason its
                | own way there based on the information provided.
        
               | mvdtnz wrote:
               | Careful, there are plenty of True Believers on this
               | website who really think that these "guess the next word"
               | machines really do have consciousness and understanding.
        
               | lewhoo wrote:
                | I lean towards your view on the subject, but if you
                | call it guessing you open yourself up to all sorts of
                | rebuttals.
        
             | boplicity wrote:
             | The obvious intermediate step is that you add an actual
             | expert into the workflow, in terms of using LLMs for this
             | purpose.
             | 
             | Basically, add a "validate" step. So, you'd first chat with
             | the LLM, create conclusions, then vet those conclusions
             | with an expert specially trained to be skeptical of LLM
             | generated content.
             | 
              | I would be shocked if there aren't law firms already
              | doing something exactly like this.
        
           | thallium205 wrote:
           | I wouldn't blindly trust what a lawyer says either so there's
           | no difference there.
        
             | brk wrote:
             | Sure, but you have a lot less personal risk following
             | advice from a lawyer vs. advice from an LLM.
        
               | toomuchtodo wrote:
               | When your GPT is wrong, you will be laughed out of the
               | room and sanctioned.
               | 
               | When your attorney is wrong, you get to point at the
               | attorney and show a good faith effort was made.
               | 
               | Hacks are fun, just keep in mind the domain you're
               | operating in.
        
               | giantg2 wrote:
               | "When your attorney is wrong, you get to point at the
               | attorney and show a good faith effort was made."
               | 
               | And possibly sue their insurance to correct their
               | mistakes.
        
               | toomuchtodo wrote:
               | Indeed. People forget that the system is built around
               | checks and balances as well as recourse. The real world
               | is not a runtime running your code.
        
               | Scoundreller wrote:
               | But you'll have to find a lawyer that specializes in
               | suing lawyers and their own malpractice plans.
               | 
               | Maybe that's where legal AI will find the most demand.
        
               | kulikalov wrote:
                | Can't a tech firm running a "legal GPT" have
                | insurance?
        
               | toomuchtodo wrote:
               | Do they have a license to practice law?
        
               | giantg2 wrote:
               | No. Malpractice insurance would be at the professional
                | level. There could be lawyers using a legal ChatGPT, but
               | the professional liabilities would still be with the
               | licensed professional.
        
               | kulikalov wrote:
               | Well, I guess since it's not "practice" we gonna call it
               | "mal-inference insurance".
        
               | freejazz wrote:
               | More legal malpractice? No, because they aren't attorneys
               | and you cannot rely upon them for legal advice such that
               | they'd be liable to you for providing subpar legal
               | advice.
        
               | kulikalov wrote:
                | Why? Because there's no word for "insurance of AI advice
               | accuracy"? The whole point of progress is that we create
               | something that is not a thing at the moment.
        
               | freejazz wrote:
               | No, because, like I said, GPTs are not legally allowed to
               | represent individuals, so they cannot obtain malpractice
               | insurance. You can make up an entirely ancillary kind of
               | insurance. It does not change the fact that GPTs are not
               | legally allowed to represent clients, so they cannot be
               | liable to clients for legal advice. Seeing as how you
               | think GPTs are so useful here... why are you asking me
               | these questions when a GPT should be perfectly capable of
               | providing you with the policy considerations that
                | underlie attorney licensing procedures?
        
               | giantg2 wrote:
               | That was the point of my comment - no ability to collect
               | the insurance.
        
               | InsomniacL wrote:
               | What about if your lawyer is using chatgpt? :D
               | 
               | https://www.forbes.com/sites/mollybohannon/2023/06/08/law
               | yer...
        
           | mannykannot wrote:
           | I like the term "explorable vocabulary." I can see using LLMs
           | to get an idea of what the relevant issues are before I
           | approach a professional, without assuming that any particular
           | claim in the model's responses is correct.
        
           | chaxor wrote:
           | I don't understand how everyone keeps making this mistake
           | over and over. They explicitly just said "in 5-10 years".
           | 
            | So many people continually use arguments that revolve around
            | 'I used it once and it wasn't the best and/or made things
            | up', and imply that this will always be the case.
           | 
           | There are many solutions already for knowledge editing, there
           | are many solutions for improving performance, and there will
           | very likely continue to be many improvements across the board
           | for this.
           | 
           | It took ~5 years from when people in the NLP literature
           | noticed BERT and knew the powerful applications that were
           | coming, until the public at large was aware of the
           | developments via ChatGPT. It may take another 5 before the
           | public sees the developments happening now in the literature
            | hit something in a company's web UI.
        
             | mjr00 wrote:
              | On the other hand, the rate of change isn't constant and
             | there isn't a guarantee that the incredible progress in the
             | past ~2 years in the LLM/diffusion/"AI" space will
             | continue. As an example, take computer gaming graphics;
             | compare the evolution between Wolfenstein 3D (1992) and
             | Quake 3 Arena (1999), which is an absolute quantum leap.
             | Now compare Resident Evil 7 (2017) and Alan Wake 2 (2023)
             | and it's an improvement but nowhere near the same scale.
             | 
             | We've already seen a fair bit of stagnation in the past
              | year as ChatGPT gets progressively worse while the company
              | focuses more on neutering results to limit its exposure to
              | legal liability.
        
               | GaggiX wrote:
               | >ChatGPT gets progressively worse
               | 
               | https://huggingface.co/spaces/lmsys/chatbot-arena-
                | leaderboar... In blinded human comparisons, newer models
               | perform better than older ones.
        
               | mvdtnz wrote:
               | That website doesn't load for me but anyone who uses
                | ChatGPT semi-regularly can see that it's getting
                | steadily worse if you ever ask for anything that borders
                | on the risque. It has even refused to provide me with
                | things like bolt torque specs because of risk.
        
               | GaggiX wrote:
                | It could be bias; that's why we do blinded comparisons
               | for a more accurate rating. If we have to consider my
               | opinion, since I use it often, then no, it hasn't gotten
               | worse over time.
        
               | mvdtnz wrote:
               | Well I can't load that website so I can't assess their
               | methodology. But I am telling you it is objectively worse
               | for me now. Many others report the same.
               | 
               | Edit - the website finally loaded for me and while their
               | methodology is listed, the actual prompts they use are
               | not. The only example prompt is "correct grammar: I are
               | happy". Which doesn't do anything at all to assess what
               | we're talking about, which is ChatGPT's inability to deal
               | with subjects which are "risky" (where "risky" is defined
               | as "Americans think it's icky to talk about").
        
               | GaggiX wrote:
               | There is no selected prompt, humans ask the models
               | (blindly) some questions in a chat and then select the
               | best one for them.
        
               | Taylor_OD wrote:
               | Worse is really subjective. More limited functionality
               | with a specific set of topics? Sure. More difficult to
               | trick to get around said topic bans? Sure.
               | 
                | Worse overall? You can use ChatGPT 4 and 3.5 side by
                | side and see an obvious difference.
               | 
               | Your specific example seems fairly reasonable. Is there
               | liability in saying x bolt can handle y torque if that
                | ended up not being true? I don't know. What if that bolt
                | causes an accident and someone dies? I'm sure a lawyer
               | could argue that case if ChatGPT gave a bad answer.
        
             | ufmace wrote:
             | I'm not so sure that truth and trustability is something we
             | can just hand-wave away as something they'll sort out in
             | just a few more years. I don't think a complex concept like
             | whether or not something is actually true can be just
             | tacked onto models whose core function is to generate what
             | they think the next word of a body of text is most likely
             | to be.
        
             | ARandumGuy wrote:
             | > It took ~5 years from when people in the NLP literature
             | noticed BERT and knew the powerful applications that were
             | coming, until the public at large was aware of the
             | developments via ChatGPT. It may take another 5 before the
             | public sees the developments happening now in the
              | literature hit something in a company's web UI.
             | 
             | It also may take 10, 20, 50, or 100 years. Or it may never
             | actually happen. Or it may happen next month.
             | 
             | The issue with predicting technological advances is that no
             | one knows how long it'll take to solve a problem until it's
             | actually solved. The tech world is full of seemingly
             | promising technologies that never actually materialized.
             | 
             | Which isn't to say that generative AI won't improve. It
             | probably will. But until those improvements actually
             | arrive, we don't know what those improvements will be, or
             | how long it'll take. Which ultimately means that we can
             | only judge generative AI based on what's actually
             | available. Anything else is just guesswork.
        
           | coffeebeqn wrote:
            | I wonder whether GPTs could come up with legal loopholes,
            | the way they are expected to come up with security
            | vulnerabilities.
        
         | XCSme wrote:
          | Or an LLM that helps you spend less. Imagine an LLM that goes
          | over all your spending, knows all the current laws, benefits,
          | organizations, and promotional campaigns, and suggests (or
          | even executes) things like changing your electricity or
          | insurance provider, or buying in bulk from a different shop
          | items you pay 4x the price for at your local store.
        
           | OldOneEye wrote:
           | I love this idea. It would be incredibly useful!
           | 
           | I feel LLMs are great at suggestions that you follow up
           | yourself (if only for sanity checking, but nothing you
           | wouldn't do with a human too).
        
         | Solvency wrote:
         | This does not make sense to me. ChatGPT is completely nerfed to
         | the point where it's either been conditioned or trained to
         | provide absolutely zero concrete responses to anything. All it
         | does is provide the most baseline, generic possible response
         | followed by some throwaway recommendation to seek the advice of
         | an actual expert.
        
           | frankfrank13 wrote:
           | The way to get around this is to have it "quote" or at least
           | try to quote from input documents. Which is why RAG became so
           | popular. Sure, it won't right you a contract, but it will
           | read one back to you if you've provided one in your prompt.
           | 
           | In my experience, this does not get you close to what the
           | top-level comment is describing. But it gets around the
           | "nerfing" you describe
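
[A minimal sketch of the RAG pattern described in the comment above: rather than asking the model to answer from its weights, the most relevant passages are retrieved from a user-supplied contract and the model is instructed to quote them. The retrieval here uses naive keyword overlap purely for illustration (real systems use embeddings), and `call_llm` at the end is a hypothetical stand-in for any chat-completion API.]

```python
def chunk(text: str, size: int = 200) -> list[str]:
    """Split a document into overlapping word-window chunks."""
    words = text.split()
    step = max(size // 2, 1)
    return [" ".join(words[i:i + size]) for i in range(0, len(words), step)]

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by naive keyword overlap with the question."""
    q_terms = set(question.lower().split())
    scored = sorted(chunks, key=lambda c: -len(q_terms & set(c.lower().split())))
    return scored[:k]

def build_prompt(question: str, contract: str) -> str:
    """Assemble a prompt that forces the model to quote the contract."""
    excerpts = retrieve(question, chunk(contract))
    quoted = "\n".join(f'> "{e}"' for e in excerpts)
    return (
        "Answer using ONLY the quoted contract excerpts below, and cite "
        f"the exact wording you rely on.\n\n{quoted}\n\n"
        f"Question: {question}"
    )

contract = ("The tenant shall pay rent on the first of each month. "
            "Either party may terminate with 60 days written notice. "
            "Pets are not permitted without prior written consent.")
prompt = build_prompt("How much notice is required to terminate?", contract)
# `prompt` would then be sent to the model, e.g. call_llm(prompt) -- the
# grounding in quoted text is what works around the over-cautious defaults.
```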
        
           | baobabKoodaa wrote:
           | It's very easy to get ChatGPT to provide legal advice based
           | on information fed in the prompt. OpenAI is not censoring
           | legal advice anywhere near as hard as they are censoring
           | politics or naughty talk.
        
         | frankfrank13 wrote:
         | I think a lot of startups are working on exactly what you are
         | describing, and honestly, I wouldn't hold my breath. Everyone
         | is still bound by token limits and the two best approaches for
         | getting around them are RAG and Knowledge-Graphs, both of which
         | could get you close to what you describe but not close enough
         | to be useful (IMO).
        
         | baobabKoodaa wrote:
         | We are literally building this today!
         | 
         | Our core business is legal document generation (rule based
         | logic, no AI). Since we already have the users' legal documents
         | available to us as a result of our core business, we are
         | perfectly positioned to build supplementary AI chat features
         | related to legal documents.
         | 
         | We recently deployed a product recommendation AI to prod
         | (partially rule based, but personalized recommendation texts
         | generated by GPT-4). We are currently building AI chat features
         | to help users understand different legal documents and our
         | services. We're intending to replace the first level of
         | customer support with this AI chat (and before you get upset,
         | know that the first level of customer support is currently a
         | very bad rule-based AI).
         | 
         | Main website in Finnish: https://aatos.app (also some services
          | for SE and DK, plus we recently opened UK with just an e-sign
         | service)
        
           | baobabKoodaa wrote:
           | Here's an example of what our product recommendations look
           | like:
           | 
           |  _Given your ownership in a company and real estate, a
           | lasting power of attorney is a prudent step. This allows you
           | to appoint PARTNER_NAME or another trusted individual to
           | manage your business and property affairs in the event of
           | incapacitation. Additionally, it can also provide tax
           | benefits by allowing tax-free gifts to your children, helping
           | to avoid unnecessary inheritance taxes and secure the
           | financial future of your large family._
        
           | Closi wrote:
           | > Since we already have the users' legal documents available
           | to us as a result of our core business, we are perfectly
           | positioned to build supplementary AI chat features related to
           | legal documents.
           | 
           | Uhh... What are the privacy implications here?!
        
             | baobabKoodaa wrote:
             | If you look at the example I posted of our product
             | recommendations, you will see that the GPT-4 generated text
             | contains "PARTNER_NAME" instead of actual partner name.
              | That's because we've created an anonymized dataset from
              | users in such a way that it's literally impossible for
              | OpenAI to deanonymize users. Of course the same can't be
              | done if we
             | want to provide a service where users can, for example,
             | chat with their legal documents. In that case we will have
             | to send some private details to OpenAI. Not sure how that
             | will pan out (what details we decide to send and what we
             | decide not to send).
             | 
             | In any case, all startups today are created on top of a
             | mountain of cloud services. Any one of those services can
             | leak private user data as a result of outsider hack or
             | insider attack or accident. OpenAI is just one more cloud
             | service on top of the mountain.
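
[A sketch of the placeholder approach the commenter describes, where personal details such as PARTNER_NAME are swapped for stable tokens before any text leaves the system and swapped back afterwards. The field names and example data here are illustrative assumptions, not the company's actual schema.]

```python
def anonymize(text: str, pii: dict[str, str]) -> tuple[str, dict[str, str]]:
    """Replace each PII value with its placeholder; return text + reverse map."""
    reverse = {}
    for placeholder, value in pii.items():
        text = text.replace(value, placeholder)
        reverse[placeholder] = value
    return text, reverse

def deanonymize(text: str, reverse: dict[str, str]) -> str:
    """Restore the original values in model output before showing the user."""
    for placeholder, value in reverse.items():
        text = text.replace(placeholder, value)
    return text

pii = {"PARTNER_NAME": "Maija Virtanen", "USER_NAME": "Matti Virtanen"}
doc = "Matti Virtanen appoints Maija Virtanen as attorney-in-fact."
safe, reverse = anonymize(doc, pii)
# safe == "USER_NAME appoints PARTNER_NAME as attorney-in-fact."
# Only `safe` would be sent to the API; the generated reply is passed
# through deanonymize() before being shown to the user.
```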
        
           | nicce wrote:
            | So, let's say that some day the chat will work as well as a
            | real lawyer.
            | 
            | If the current price is $500 an hour for a real lawyer, and
            | at some point your costs are just keeping the service up and
            | running, how big a cut will you take? Being only a little
            | cheaper than the real lawyer is enough to win customers.
            | 
            | There is an upcoming monopoly problem if the users get the
            | best information from the service after they submit all
            | their documents, and soon the normal lawyer might not be
            | competitive enough. I fear that the future is an open
            | platform with open models, where businesses have to extract
            | money from other use cases; for a while, you get money based
            | on the typical "I am first, I have the user base" situation.
            | It is interesting to see what will happen to lawyers.
        
             | baobabKoodaa wrote:
              | > If the current price is $500 an hour for a real lawyer,
              | and at some point your costs are just keeping the service
              | up and running, how big a cut will you take?
             | 
             | Zero. We're providing the AI chat for free (or free for
             | customers who purchase something from us, or some mix of
             | those 2 choices). Our core business is generating documents
             | for people, and the AI chat is supplementary to the core
             | business.
             | 
             | It sounds like you're approaching the topic with the
             | mindset that lawyers might be entirely replaced by
             | automation. That's not what we're trying to do. We can
             | roughly divide legal work into 3 categories:
             | 
             | 1. Difficult legal work which requires a human lawyer to
             | spend time on a case by case basis (at least for now).
             | 
             | 2. Cookie cutter legal work that is often done by a human
             | in practice, but can be automated by products like ours.
             | 
             | 3. Low value legal issues that people have and would like
             | to resolve, but are not worth paying a lawyer for.
             | 
             | We're trying to supply markets 2 and 3. We're not trying to
             | supply market 1.
             | 
             | For example, you might want a lawyer to explain to you what
             | is the difference between a joint will and an individual
             | will in a particular circumstance. But it might not be
             | worth it to pay a lawyer to talk it through. This is
             | exactly the type of scenario where an AI chat can resolve
             | your legal question which might otherwise go unanswered.
        
               | nicce wrote:
               | > It sounds like you're approaching the topic with the
               | mindset that lawyers might be entirely replaced by
               | automation.
               | 
               | That is the cynical future, however, and based on the
               | evolution speed of the last year, it is not too far away.
               | We humans are just interfaces for information and logic.
               | If the chatbot has the same capabilities (both
               | information and logic, and _natural language_ ), then
               | they will provide full automation.
               | 
                | The natural language aspect of AI is the revolutionary
                | point, less so the actual information it provides. To
                | quote Bill Gates, it's revolutionary in the way the GUI
                | was. When everyone can interact with and use something,
                | it removes all the experts you previously needed as
                | middlemen.
        
         | nostromo wrote:
         | And not just legal either.
         | 
          | I uploaded all of my bloodwork tests and my 23andme data to
          | ChatGPT, and it was better at analyzing them than my doctor
          | was.
        
           | slingnow wrote:
           | This is a really interesting use case for me. I've been
           | envisioning a specially trained LLM that can give useful
           | advice or insights that your average PCP might gloss over or
           | not have the time to investigate.
           | 
           | Did you do anything special to achieve this? What were the
           | results like?
        
       | throwaway17_17 wrote:
       | I will reserve judgment of the possibilities of LLMs as applied
        | to the legal field until they are tested on something other than
        | document/contract review. Contract review, in the large
        | business law case, is often outsourced to hundreds of recent
        | graduates and acts more like proofreading, with minimal
        | application of actual lawyering skills, to increase a
        | corporation's bottom line.
       | 
        | For individual purchasers of legal services, the more common
        | lawyering is going to be family law matters, criminal law
        | matters, and small claims court matters. I cannot see a time in
        | the near future when an LLM can handle the fact-specific and
        | circumstantial analysis required to handle felony criminal
       | litigation, and I see nothing that would imply LLMs can even
       | approach the individualized, case specific and convoluted family
       | dynamics required for custody cases or contested divorces.
       | 
       | I'm not unwilling to accept LLMs as a tool an attorney can use,
        | but outside of more rote legal proofreading I don't think the
       | technology is at all ready for adoption in actual practice.
        
         | giantg2 wrote:
         | "and I see nothing that would imply LLMs can even approach the
         | individualized, case specific and convoluted family dynamics
         | required for custody cases or contested divorces."
         | 
         | Humans are pretty bad at this. Based on the results, it seems
         | the judges' personal views and emotions are a large part of
         | these cases. I'm not sure what they would look like without
         | emotion, personal views, and the case law built off of those.
        
           | SkyBelow wrote:
           | The worse judges are at being perfectly removed arbiters of
           | justice, the more room for lawyers to exploit things like
           | emotions and humans connections with those judges, and thus
           | the worse LLMs will be at doing that part of the job. A
           | charismatic lawyer backed by an LLM will be much better than
           | an LLM.
           | 
           | At least until the LLMs surpass humans at being charismatic,
           | but that would seem to be its own nightmare scenario.
        
             | staunton wrote:
             | > At least until the LLMs surpass humans at being
             | charismatic
             | 
             | Look into "virtual influencers". Sounds like you should
             | find it interesting.
        
           | staunton wrote:
           | > judges' personal views and emotions are a large part of
           | these cases
           | 
           | That's a completely separate question. We're talking about
           | automating lawyers, not judges. (To be a good lawyer in
           | such a situation, you would need to model the judge's
           | emotions and use them to your advantage. AIs can probably
           | do this eventually, but it's not easy or likely to happen
           | soon.)
        
             | giantg2 wrote:
             | Well, judges are a subset of lawyers. And interactions with
             | judges are a large part of being a successful lawyer, as
             | you point out.
        
       | Workaccount2 wrote:
       | I wonder what kind of reaching legal argument a bunch of
       | lawyers are going to come up with in order to cripple the tech
       | that threatens their industry.
        
         | delichon wrote:
         | Copyright appears to be the primary attack vector.
        
           | anotherhue wrote:
           | Are the arguments submitted to a court (and made publicly
           | accessible) subject to copyright?
           | 
           | I kind of assumed they were in the same space as government
           | documents.
        
             | delichon wrote:
             | A legal LLM would be significantly crippled without the
             | knowledge stored in the universe of non-legal documents.
        
               | anotherhue wrote:
               | You're probably right, but the law and reality often seem
               | to be orthogonal.
        
         | zugi wrote:
         | Lawyers control government, at least in the U.S. Expect laws
         | banning or severely restricting the use of AI in the legal
         | field soon. I expect arguments will range from the dangers of
         | ineffective counsel to "but think of the children" - whatever
         | helps them protect their monopoly.
        
           | axpy906 wrote:
           | That's some cartel level action.
        
             | DenisM wrote:
             | I think it's more accurate to think of lawyers as a guild.
             | Likewise doctors, accountants, plumbers, and electricians.
        
               | robertlagrant wrote:
               | A guild that has the inside track on changing the rules
               | for itself.
        
           | lewhoo wrote:
           | > whatever helps them protect their monopoly
           | 
           | Ah yes, the story of bad people not wanting their livelihoods
           | taken from them by good tech giants. Seriously, is there no
           | room for empathy in all of this? If you went through law
           | school and likely got yourself in debt in the process then
           | you're not protecting any monopoly but your means to exist.
           | There are people like that out there you know.
        
             | pb7 wrote:
             | In general, we should not stall technological progress just
             | to protect jobs. They will find other jobs. This is the way
             | throughout human history.
        
               | lewhoo wrote:
               | I'm not advocating anything of this sort. I only reject
               | the typical framing of "bad guys" on one side.
        
             | jjackson5324 wrote:
             | > Seriously, is there no room for empathy in all of this?
             | 
             | Are you joking?
             | 
             | Do you not empathize with the _far, far larger_ number of
             | people who can't afford adequate legal representation and
             | have no legal recourse?
             | 
             | There are people like that out there you know!!!!
        
         | guluarte wrote:
         | I think the other way around is going to happen: being a
         | lawyer will now be a lot more expensive, requiring servers
         | doing AI inference, developers, and third-party services...
        
           | bongodongobob wrote:
           | I wouldn't be so sure. I've worked in the MSP space and law
           | is the most tech averse industry I've ever come into contact
           | with.
        
         | phrz wrote:
         | Not a reach, it's called "unlicensed practice of law" and it's
         | a crime.
        
       | vitiral wrote:
       | It feels to me like the law is already a staggering heap of
       | complexity. Isn't using technology going to just enable more of
       | the same, making the situation worse?
        
         | engineer_22 wrote:
         | on the contrary, it may help to highlight incongruities in the
         | legal domain and provide lawyers with compelling groundwork to
         | make relevant claims
        
         | urbandw311er wrote:
         | Or you could take the view that, in fact, this is one of the
         | things LLMs are very good at, ie making sense of complexity.
        
           | vitiral wrote:
           | But the lawyers reading the law won't be the only ones using
           | LLMs. LLMs will also be used to write laws. Then lawmakers
           | will use them to "check" that their 20,000-page law
           | supposedly works. No human can understand the scope of
           | today's laws; how much less so 20 years from now, when not
           | even LLMs can understand the laws being created.
           | 
           | I'd love to hear that LLMs can be used to trim and simplify
           | complexity, but I don't believe it. They generate content;
           | they don't reduce it.
        
       | delichon wrote:
       | In all criminal prosecutions, the accused shall enjoy the right
       | ... to have the Assistance of Counsel for his defense -- 6th
       | amendment to US Constitution
       | 
       | When an LLM is more competent than an average human counsel, does
       | this amendment still require assistance of a _human_ counsel?
        
         | urbandw311er wrote:
         | Ironically you might be better asking GPT this question.
        
         | District5524 wrote:
         | All the governments in the world would do everything in their
         | power to get people to accept this suggestion as truth and
         | use LLMs instead of human lawyers, especially in criminal
         | defense. Now, why is that? Maybe it's not the technical
         | knowledge that is the most important feature of a lawyer?
        
           | sudden_dystopia wrote:
           | Yea it's their ability to manipulate language and people to
           | bend the letter of the law to suit their specific cases
           | regardless of any long term potential societal harm.
        
           | AustinDev wrote:
           | All governments of the world, at least in the West, appear
           | to mostly consist of attorneys; I doubt they'd let it
           | happen. It'd be bad for their guild.
        
         | Bud wrote:
         | That's for a court to decide. It's certainly reasonable to
         | guess that that court case is coming. The only question is how
         | soon.
        
         | guluarte wrote:
         | Yes, because LLMs are not free and still require an expert to
         | verify the output.
        
         | ilc wrote:
         | They serve different uses.
         | 
         | A lawyer can handle the trial for you and things like that. The
         | LLM can help you with issues of fact, etc. And could even make
         | stronger privacy guarantees than a lawyer if set up right. (But
         | I doubt that will ever happen.)
        
       | hansonkd wrote:
       | I run a startup that does legal contract generation (contracts
       | written by lawyers turned into templates) and have done some
       | work on GPT analysis of contracts, letting laypersons interact
       | with and ask questions about the contract they are getting.
       | 
       | In terms of contract review, what I've found is that GPT is
       | better at analysis of the document than generating the document,
       | which is what this paper supports. However, I have used several
       | startups' AI document review options, and they all fall apart
       | under any sort of prodding for specific answers. This paper
       | looks like it just had to locate the section, not necessarily
       | have the back-and-forth conversation about the contract that a
       | lawyer and client would have.
       | 
       | There is also no legal liability for GPT for giving the wrong
       | answer. So it works well for someone smart who is doing their
       | own research, just as a smart person could use Google before to
       | do their own research.
       | 
       | My feeling on contract generation is that for the majority of
       | cases, people would be better served if there were simply
       | better boilerplate contracts available. Lawyers hoard their
       | contracts, and it was very difficult in our journey to find
       | lawyers who would be willing to write contracts we would turn
       | into templates, because they are essentially putting themselves
       | and their professional community out of income streams in the
       | future. But people don't need a unique contract generated on
       | the fly from GPT every time when a template of a well-written
       | and well-reviewed contract does just fine. It cost hundreds of
       | millions to train GPT-4. If $10M were spent just building a
       | repository of well-reviewed contracts, it would be more useful
       | than spending the equivalent money training a GPT to generate
       | them.
       | 
       | People ask a pretty wide range of questions about what they
       | want to do with their documents, and GPT didn't do a great job
       | with them, so for the near future it looks like lawyers still
       | have a job.
        
         | OldOneEye wrote:
         | Which is mostly what I feel also happens with LLMs producing
         | code. Useful to start with, but not more than that. We
         | programmers still have a job. For the moment.
        
           | klabb3 wrote:
           | Producing code is like producing syntactically correct
           | algebra. It has very little value on its own.
           | 
           | I've been trying to pair system design with ChatGPT and it
           | feels just like talking with a person who's confident and
           | regurgitates trivia, but doesn't really understand. No sense
           | of self-contradiction, doubt, curiosity.
           | 
           | I'm very, very impressed with the language abilities and the
           | regurgitation can be handy, but is there a single novel
           | discovery by LLMs? Even a (semantic) simplification of a
           | complicated theory would be valuable.
        
         | mikepurvis wrote:
         | I recently used Willful to create a will and was pretty
         | disappointed with the result. The template was extremely rigid
         | on matters that I thought should have been no-brainers to be
         | able to express (if X has happened, do Y, otherwise Z) and
         | didn't allow for any kind of property division other than
         | percentages of the total. It was also very consumed with
         | several matters that I don't really feel that strongly about,
         | like the fate of my pets.
         | 
         | I was still able to rewrite the result into something that more
         | suited me, but for a service with a $150 price tag I kind of
         | hoped it would do more.
        
           | hansonkd wrote:
           | Our philosophy at GetDynasty is that the contract (in our
           | case estate planning documents) itself is a commodity which
           | is why we give it away for free. Charging $150 for a template
           | doesn't make sense.
           | 
           | Our solution, as you point out, is more rigid than having a
           | lawyer write it, but for the majority of people, having
           | something that is accessible and free is worth it, and then
           | layering services on top makes the most sense. It is easier
           | to have a well-written contract whose features or sections
           | you can "turn on and off" than to try to have GPT write a
           | custom contract for you.
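The "turn on and off" idea above can be sketched as plain section toggling rather than generation. This is a minimal illustration only; the section names and clause text are hypothetical, not GetDynasty's actual implementation:

```python
# Each section of a lawyer-reviewed template: (name, clause text, mandatory?).
# Optional sections are switched on per client instead of being generated.
SECTIONS = [
    ("parties", "This Trust is created by {grantor}.", True),
    ("pet_care", "The Trustee shall provide for the care of {pets}.", False),
    ("spendthrift", "No beneficiary may assign their interest.", False),
]

def assemble(sections, toggles, **fields):
    """Keep every mandatory section plus any optional ones toggled on,
    then fill in the client-specific fields."""
    chosen = [text for name, text, mandatory in sections
              if mandatory or toggles.get(name, False)]
    return "\n\n".join(text.format(**fields) for text in chosen)

# A client who wants the pet-care clause but not the spendthrift clause:
doc = assemble(SECTIONS, {"pet_care": True}, grantor="Alice", pets="Rex")
```

Every clause a client can receive has already been written and reviewed; the code only selects and fills in, which is the argument for templates over on-the-fly generation.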
        
             | pclmulqdq wrote:
             | I applaud the efforts to give away free documents like
             | this. That is actually what happens when you have a lawyer
             | do it: you pay pretty much nothing for the actual contract
             | clauses to be written (they start with a basic form and
             | form language for all of the custom clauses you may want),
             | but you pay a lot for them to be customized to fit your
             | exact desires and to ensure that your custom choices all
             | work.
             | 
             | The idea of "legalzoom-style" businesses has always seemed
             | like a bamboozle to me. You pay hundreds of dollars for
             | essentially the form documents to fill in, and you don't
             | get any of the flexibility that an actual lawyer gives you.
             | 
             | As another example, Northwest Registered Agent gives you
             | your corporate form docs for free with their registered
             | agent services.
        
           | JumpCrisscross wrote:
           | > _like the fate of my pets_
           | 
           | Pet trusts [1]! My lawyer literally used their existence,
           | which I find adorable, to motivate me to read my paperwork.
           | 
           | [1] https://www.aspca.org/pet-care/pet-planning/pet-trust-
           | primer
        
           | toss1 wrote:
           | >>didn't allow for any kind of property division other than
           | percentages of the total.
           | 
           | Knowing someone who works in Trusts & Estates, that is
           | terrible. I've often heard complaints about drafting by
           | percentages of anything but straight financial assets which
           | have an easily determined value, because that requires an
           | appraisal(s). Yes, there are mechanisms to work it out in the
           | end, but it is definitely better to be able to say $X to
           | Alice, $Y to Bob and the remainder to Claire.
           | 
           | You have to think of not only what you want, but how the
           | executors will need to handle it. We all love complex
           | formulae, but we should use our ability to handle complexity
           | to simplify things for the heirs - it's a real gift in a bad
           | time.
        
             | mikepurvis wrote:
             | Heh, okay I guess what I wanted was going to end up as the
             | worst of both-- fixed amounts off the top to some
             | particular people/causes, and then the remainder divided
             | into shares for my kids.
             | 
             | I guess there's an understanding that being an executor is a
             | best-effort role, but maybe you could specifically codify
             | that +/-5% on the individual shares is fine, just to take
             | off some of the burden of needing it to be perfect,
             | particularly if there are payouts occurring at different
             | times and therefore some NPV stuff going on.
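The "fixed amounts off the top, then the remainder divided into shares" scheme described above is easy to state precisely. A hypothetical sketch, with all names and amounts made up:

```python
def divide_estate(total, fixed_bequests, share_weights):
    """Pay the fixed bequests first, then split the remainder by weight."""
    remainder = total - sum(fixed_bequests.values())
    if remainder < 0:
        raise ValueError("estate cannot cover the fixed bequests")
    weight_sum = sum(share_weights.values())
    shares = {name: remainder * w / weight_sum
              for name, w in share_weights.items()}
    return {**fixed_bequests, **shares}

payout = divide_estate(
    1_000_000,
    fixed_bequests={"friend": 50_000, "charity": 25_000},
    share_weights={"kid_a": 1, "kid_b": 1},
)
# 1,000,000 - 75,000 leaves 925,000, split equally between the kids
```

Note that the fixed bequests bear all the risk if the estate shrinks, which is part of why drafters care about the order of operations here.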
        
         | jassyr wrote:
         | I'm in the energy sector and have been thinking of fine tuning
         | a local llm on energy-specific legal documents, court cases,
         | and other industry documents. Would this solve some of the
         | problems you mention about producing specific answers? Have you
         | tried something like that?
        
           | hansonkd wrote:
           | You're welcome to try, but we had mixed results.
           | 
           | Law in general is interpretation. The most "lawyerese"
           | answer you can expect is "It depends". Technically, in the
           | US everything is legal unless it is restricted, and then
           | there are interpretations of what those restrictions are.
           | 
           | If you ask a lawyer if you can do something novel, chances
           | are they will give a risk assessment as opposed to a yes or
           | no answer. Their answer typically depends on how well _they_
           | think they can defend it in the court of law.
           | 
           | I have received answers from lawyers before that were
           | essentially "Well, it's a gray area. However, if you get sued
           | we have high confidence that we will prevail in court".
           | 
           | So outside of the more obvious cases, the actual function
           | of law is less binary and more a gradient of defensibility
           | and the confidence of the individual lawyer.
        
             | danielmarkbruce wrote:
             | I spent a lot of time with M&A lawyers and this is 100%
             | true. The other answer is "that's a business decision".
             | 
             | So much of contract law boils down to confidence in winning
             | a case, or it's a business issue that just looks like a
             | legal issue because of legalese.
        
         | jerf wrote:
         | As I've said before, one of my biggest concerns with LLMs is
         | that they somehow manage to concentrate their errors in
         | precisely the places we are least likely to notice:
         | https://news.ycombinator.com/item?id=39178183
         | 
         | If this is dangerous with normal English, how much more so with
         | legal text.
         | 
         | At least if a lawyer drafts the text, there is at least one
         | human with some sort of intentionality and some idea of what
         | they're trying to say when they draft the text. With LLMs there
         | isn't.
         | 
         | (And as I say in the linked post, I don't think that is
         | fundamental to AI. It is only fundamental to LLMs, which
         | despite the frenzy, are not the sum totality of AI. I expect
         | "LLMs can generate legal documents on their own!" to be one of
         | those things the future looks back on our era and finds simply
         | laughable.)
        
         | andrewla wrote:
         | > There is also no legal liability for GPT for giving the wrong
         | answer
         | 
         | It was my understanding that there is also no legal liability
         | for a lawyer for giving the wrong answer. In extreme cases
         | there might be ethical issues that result in sanctions by the
         | bar, but in most cases the only consequences would be
         | reputational.
         | 
         | Are there circumstances where you can hold an attorney
         | legally liable for a badly written contract?
        
           | gymbeaux wrote:
           | I believe all practicing attorneys carry malpractice
           | insurance as well as E&O (errors and omissions) insurance. I
           | think one of those would "cover" the attorney in your
           | example, but obviously insurance doesn't prevent poor Google
           | reviews, nor would it protect the attorney from anything done
           | in bad-faith (ethical violations), or anything else that
           | could land an attorney before the state bar association for a
           | disciplinary hearing.
        
             | dctoedt wrote:
             | > _I believe all practicing attorneys carry malpractice
             | insurance as well as E &O (errors and omissions)
             | insurance._
             | 
             | Nit: Malpractice insurance is (a species of) E&O insurance.
        
           | kayfox wrote:
           | > It was my understanding that there is also no legal
           | liability for a lawyer for giving the wrong answer.
           | 
           | There is plenty of legal, ethical, and professional
           | liability for a lawyer giving the wrong answer. We don't
           | often see the outcome of these things because, like
           | everything in the courts, they take a long time to get
           | resolved, and also some answers are not wrong, just "less
           | right" or "not really that wrong."
        
             | trogdor wrote:
             | I think the reason you rarely see the outcomes is because
             | those disputes are typically resolved through mediation
             | and/or binding arbitration, not in the courts.
             | 
             | Look at your most recent engagement letter with an
             | attorney. I'd bet that you agreed to arbitrate all fee
             | disputes, and depending on your state you might have also
             | agreed to arbitrate malpractice claims.
        
           | Digory wrote:
           | Yes. If the drafted language falls below reasonable care and
           | damages the client, absolutely you can be sued for
           | malpractice.
           | 
           | Wrong Word in Contract Leads to $2M Malpractice Suit[1].
           | 
           | [1]https://lawyersinsurer.com/legal-malpractice/legal-
           | malpracti...
        
           | freejazz wrote:
           | It's called malpractice
        
           | HillRat wrote:
           | I mean, sure, if the attorney is operating below the usual
           | standards of care -- it's exceptionally uncommon in the
           | corporate world, but not unheard of. In the case of AI
           | assistance, you run into situations where a company offering
           | AI legal advice direct to end-users is either operating as an
           | attorney without licensing, or, if an attorney is on the
           | nameplate, they're violating basic legal professional
           | responsibilities by not reviewing the output of the AI (if
           | you do legal process outsourcing -- LPO -- there's a US-based
           | attorney somewhere in the loop who's taking responsibility
           | for the output).
           | 
           | About the only case where this works in practice is someone
           | going pro se and using their own toolset to gin up a legal AI
           | model. There's arguably a case for acting as an accelerator
           | for attorneys, but the problem is that if you've got an AI
           | doing, say, doc review, you still need lawyers to review not
           | just the output for correctness, but also go through the
           | source docs to make sure nothing was missed, so you're not
           | saving much in the way of bodies or time.
        
         | jannw wrote:
         | You said: "However, I have used several startups options of AI
         | document review and they all fall apart with any sort of
         | prodding for specific answers. "
         | 
         | I think you will find that this is because they "outsource"
         | the AI contract document review "final check" to real lawyers
         | based in Utah ... so it's actually a person, not really a
         | wholly-AI-based solution (which is what the company I am
         | thinking of suggests in their marketing material).
        
           | Aurornis wrote:
           | > I think you will find that this is because they "outsource"
           | the AI contract document review "final check" to real lawyers
           | based in Utah ... so, it's actually a person, not really a
           | wholly-AI-based solution (which is what the company I am
           | thinking of suggests in their marketing material)
           | 
           | Which company is that? I don't see any point in obfuscating
           | the name on a forum like this.
        
         | verelo wrote:
         | "There is also no legal liability for GPT for giving the wrong
         | answer."
         | 
         | I mean, I get your point, but let's be real: I cannot count
         | the number of times I sat in a meeting, looked back at a
         | contract, and wished some element had a different structure.
         | In law there are a lot of "wrong answers" someone could
         | foolishly provide, but far more often it's a question of
         | degree, of how "wrong" the answer is, than a binary bad/good
         | piece of advice.
         | 
         | I personally feel the ability to have more discussion about a
         | clause is extremely helpful, versus getting a hopefully
         | "right answer" from a lawyer while watching the clock / $ as
         | you try to wrap your head around the advice you're being
         | given. If you have deep pockets, you invite your lawyer to a
         | lot of meetings, they have context, and away you go... but
         | for a lot of people, you're just involving the lawyer briefly
         | and trying to avoid billable hours. That's been me at the
         | early stage of everything, and it's a very tricky balance.
         | 
         | If you're a startup trying to use GPT, i say do it, but also
         | use a lawyer. Augmenting the lawyer with GPT to save billable
         | hours so you can turn up to a meeting with your lawyer and
         | extract the most value in the shortest time period seems like
         | the best play to me.
        
           | hansonkd wrote:
           | You can read my other reply which agrees with you that law is
           | a spectrum rather than a binary.
           | 
           | > I cannot count the number of times I sat in a meeting and
           | looked back at a contract and wished some element had a
           | different structure to it.
           | 
           | The only way to have something "bulletproof" is to have
           | experience with the ways in which something can go wrong.
           | It's just like writing a program: the "happy path" is
           | rather obvious, but then you have to come up with all the
           | different attack vectors and edge cases in which the
           | program can fail.
           | 
           | The same is true of lawyers. Lawyers at big firms have the
           | experience of the firm to guide them on what to do and what
           | they should include in a contract. A small town family lawyer
           | might have no experience in what you ask them to do.
           | 
           | Which is why I advocate for more standardized agreements as
           | opposed to one off generated agreements (with GPT or a
           | lawyer). Think of the Y Combinator SAFE: it made a huge
           | impact on financing because it was standardized and there
           | were really no terms to negotiate, compared to the world
           | before, in which the terms of convertible notes were
           | complex and had to be scrutinized and negotiated.
           | 
           | > Augmenting the lawyer with GPT to save billable hours so
           | you can turn up to a meeting with your lawyer and extract the
           | most value in the shortest time period seems like the best
           | play to me.
           | 
           | The issue is that a lot of lawyers have a conflict of
           | interest and a "not invented here" way of doing business.
           | If you have a trust, for instance, written by one lawyer
           | and bring it to another lawyer, the majority of lawyers we
           | talked to would actually prefer to throw out the document
           | and use their own. This method works well if you are a
           | smart, savvy person, but the population at large has some
           | crazy and weird ideas about how the law works, and people
           | need to be talked out of what they want into something more
           | sane.
           | 
           | Another common lawyer response besides "It depends" is
           | "Well, you can, but why would you want to?" So many people
           | have a skewed view of what they want, and part of a
           | lawyer's job is interpreting what they really want and
           | guiding them down that path.
           | 
           | So the hybrid method really only works if you find a lawyer
           | that accepts whatever crazy terms you came up with and are
           | willing to work with what you generated.
        
             | verelo wrote:
             | Thats all very reasonable, thanks for taking the time to
             | reply!
             | 
             | When I suggest going down a hybrid path, I mostly mean use
             | GPT on your own (disclose this to your lawyer at your own
             | risk) as a means to understand what they're proposing. I've
             | spent so many hours asking questions clarifying why
             | something is done a certain way, and most of that is about
             | understanding the language and trying to rationalize the
             | perspective the lawyer has taken. I feel I could probably
             | have done a lot of that on my own time, just as fast, if
             | GPT had been around during these moments. And then of
             | course, confirmed my understanding aligns with the lawyer
             | at the end.
             | 
             | I need to be upfront: I really don't know that I'm right
             | here... it's just a hunch and a gut reaction to how I'd
             | behave in the present moment, but I find myself using AI
             | more and
             | more to get myself up to speed on issues that are beyond my
             | current skill level. This makes me think law is probably
             | another good way to augment my own disadvantages in that I
             | have a very limited understanding of the rules and
             | exceptional scenarios that might come up. I also find
             | myself often on the edge of new issues, trying to structure
             | solutions that don't exist or are intentionally different
             | to present solutions...so that means a lot of explaining to
             | the lawyer and subsequently a fair bit of back and forward
             | on the best way to progress.
             | 
             | It's a fun time to be in tech, I'm hoping things like GPT
             | turn out to be a long term efficiency driver, but I'm
             | fearful about the future monetization path and how it'll
             | change the way we live/work.
        
               | freejazz wrote:
               | If you need a GPT to explain your lawyer's explanations
               | to you, you need a new attorney.
        
               | verelo wrote:
               | Eh, no... I mean, maybe... I honestly feel the issue is me.
               | I always want a lot of detail, and that can become
               | expensive. Sometimes the detail I want is more than I
               | needed, but I don't know that until after I've asked the
               | question.
        
               | freejazz wrote:
               | If your attorney is not adequately explaining things to
               | you or providing you with resources to understand things
               | he does not need to spend his time explaining to you,
               | then you need a new attorney.
        
         | onetimeuse92304 wrote:
         | > Lawyers hoard their contracts and it was very difficult in
         | our journey to find lawyers who would be willing to write
         | contracts we would turn into templates because they are
         | essentially putting themselves and their professional community
         | out of income streams in the future.
         | 
         | I notice same things in other professions, especially where it
         | requires a huge upfront investment in education.
         | 
         | For example (at least where I live), there was a time about 20
         | years ago when architects also didn't want to produce designs
         | that would then be sold to multiple people for cheap. The
         | thinking was that this reduces the market for architectural
         | work. But of course it is easy to see that most people do not
         | really need a unique design.
         | 
         | So the problem solved itself: the market does not really care,
         | and the moment somebody compiles a small library of usable
         | designs and a workable business model, as an architect you can
         | either cooperate to salvage what you can or lose.
         | 
         | I believe the same is coming for lawyers. They will live
         | through some harsh times while their easiest and most lucrative
         | work gets automated, the market for their services shrinks, and
         | whatever work is left for them will be of the more complex kind
         | that the automation can't handle.
        
           | adra wrote:
            | I think you greatly underestimate this group's ability to
            | retain its monopoly. A huge chunk of politicians are
            | lawyers, and most legal jurisdictions have hard requirements
            | around what work you must have a lawyer perform. These
           | tools may make their practices more efficient internally, but
           | it doesn't mean that value is being passed on to the consumer
           | of the service in any way. They're a cartel and one with very
           | close relationships with country leadership. I don't see this
           | golden goose souring any time soon.
        
             | onetimeuse92304 wrote:
             | I think what you are missing is businesses doing what
             | businesses have always been doing: finding a niche for
             | themselves to make a good profit.
             | 
              | When you can hire fewer lawyers and get more work done,
              | cheaper, at the same (or better) quality, you are going to
              | upend the market for lawyering services.
             | 
              | And this does not require _replacing_ lawyers. It is
              | enough to equip a lawyer with a set of tools to help them
              | quickly do their typical tasks.
             | 
             | I work a lot with lawyers and a lot of what they are doing
             | is going to be stupidly easily optimised with AI tools.
        
               | d0odk wrote:
               | Please elaborate with some examples of what legal work
               | you think will be optimized with AI tools.
        
               | __loam wrote:
               | Sometimes it feels like people look at GPT and think
               | "This thing does words! Law is words! I should start a
               | company!" but they actually haven't worked in legal tech
               | at all and don't know anything about the vertical.
        
               | d0odk wrote:
               | It's a logical reaction, at least superficially, to the
               | touted capabilities of Gen AI and LLMs. But once you
               | start trying to use the tech for actual legal
               | applications, it doesn't do anything useful. It would be
               | great if some mundane legal tasks could be automated away
                | -- for example, negotiation of confidentiality agreements.
               | One would think that if LLMs are capable of replacing
               | lawyers, they could do something along those lines. But I
               | have not seen any evidence that they can do so
               | effectively, and I have been actively looking into it.
               | 
               | One of the top comments on this thread says that LLMs are
                | going to be better at summarizing contracts than generating
               | them. I've heard this in legal tech product demos as
               | well. I can see some utility to that--for example,
               | automatically generating abstracts of key terms (like
               | term, expiration, etc.) for high-level visibility. That
               | said, I've been told by legal tech providers that LLMs
               | don't do a great job with some basic things like total
               | contract value.
               | 
               | I question how the document summarizing capabilities of
               | LLMs will impact the way lawyers serve business
               | organizations. Smart businesspeople already know how to
               | read contracts. They don't need lawyers to identify /
               | highlight basic terms. They come to lawyers for advice on
               | close calls--situations where the contract is unclear or
               | contradictory, or where there is a need for guidance on
               | applying the contract in a real-world scenario and
               | assessing risk.
               | 
               | Overall I'm less enthusiastic about the potential for
               | LLMs in the legal space than I was six months ago. But I
               | continue to keep an eye on developments and experiment
               | with new tools. I'd love to get some feedback from others
               | on this board who are knowledgeable.
               | 
               | As a side note, I'm curious if anyone knows about the
                | impact of context window on contract interpretation: a
                | lot of contracts are quite long and have sections that
                | are separated by a lot of text but nonetheless interact
                | with each other for purposes of a correct interpretation.
        
               | __loam wrote:
               | I think one of the biggest problems with LLMs is the
                | accountability problem. When a lawyer tells you
               | something, their reputation and career are on the line.
               | There's a large incentive to get things right. LLMs will
               | happily spread believable bullshit.
        
               | d0odk wrote:
               | In fairness some lawyers will too, haha. I take your
               | point, though. Good lawyers care about their reputation
               | and strive to protect it.
        
               | freejazz wrote:
               | Lawyers are legally liable to their clients for their
               | advice, it's a lot more than just reputation and career.
        
               | treprinum wrote:
               | A friend of mine is a highly ranked lawyer, a past
                | general counsel of multiple large enterprises. I sent him
               | this paper, he played with ChatGPT-3.5 (not even GPT-4)
               | and contract creation, he said it was 99% fine and then
               | told me he's glad he is slowly retiring from law and is
               | not envious of any up-and-coming lawyers entering the
               | profession. One voice from the vertical.
        
           | WanderPanda wrote:
           | IIRC about 40% of us politicians are lawyers, unfortunately
           | I'm sure they will find a way to gatekeep these revenue
           | streams for their peers.
        
             | pugworthy wrote:
             | I'm assuming by the use of "us" and "they" you meant US -
             | not that you are a politician.
        
           | adrianN wrote:
           | Lawyers seem to be the prime group to prevent this outcome
            | using some kind of regulation. Many politicians are lawyers.
        
             | freejazz wrote:
              | They _were_ lawyers, they aren't still practicing
             | attorneys representing clients.
        
           | jacquesm wrote:
           | So, you will get the template for free. And then a lawyer has
           | to put it on their letterhead and they charge you the exact
           | same as they do right now for that because that will be made
           | a requirement.
        
             | onetimeuse92304 wrote:
              | No. As a business owner you will hire a couple of lawyers
              | and give them a bunch of programs to automate searching
              | through texts, answering questions, and writing legalese
              | from a human description of what the text is supposed to
              | do. Those three are, in my experience, the great majority
              | of the work. The 2 people you hire will now perform like
              | 10 people without tools. Then you will use part of that
              | saved money to reduce prices and, if you are business
              | savvy, use the rest to research the automation further.
             | 
             | Then another business that wants to compete with you will
             | no longer have an option, they will have to do this or more
             | to be able to stay in the business at all.
        
           | faeriechangling wrote:
           | Lawyers are uniquely well equipped to legislate their
           | continued employment into existence.
        
           | taneq wrote:
           | > I notice same things in other professions, especially where
           | it requires a huge upfront investment in education.
           | 
           | Doctors, for instance. You hear no end of stories about how
           | incredibly high pressure medicine is with insane hours and
           | stress, but will they increase university placements so they
           | can actually hire enough trained staff to meet the workload?
           | Absolutely fkn not, that would impact salaries.
        
           | singleshot_ wrote:
           | "easiest and most lucrative work"
           | 
           | I think this overlooks a big part of how the legal market
           | works. Our easiest work is only lucrative because we use it
           | to train new lawyers, who bill at a lower rate. To the extent
           | the easy stuff gets automated, 1) it's going to be impossible
           | to find work as a junior associate and 2) senior attorneys
           | will do the same stuff they did last year. If there's a
           | decrease in prices for a while, great, but a generation from
           | now it's going to be a lot harder to find someone
           | knowledgeable because the training pathway will have been
           | destroyed.
        
         | jacquesm wrote:
         | > Just like if you are smart you could use google before to do
         | your own research.
         | 
         | Unfortunately people stop at step #1, they use Google and that
         | _is_ their research. I don't think ChatGPT is going to be
         | treated any differently. It will be used as an oracle, whether
         | that's wise or not doesn't matter. That's the price of
         | marketing something as artificial intelligence: the general
         | public believes it.
        
         | jonnycoder wrote:
         | "So It works well for someone smart who is doing their own
         | research."
         | 
         | That's a concise explanation that also applies to GPTs and
         | software engineering. GPT4 boosts my productivity as a software
         | engineer because it helps me travel the path quicker. Most
         | generated code snippets need a lot of work because I'm prodding
         | it for specific use cases and it fails. It's perfect as an
         | assistant though.
        
         | freejazz wrote:
         | >There is also no legal liability for GPT for giving the wrong
         | answer. So It works well for someone smart who is doing their
         | own research. Just like if you are smart you could use google
         | before to do your own research.
         | 
         | How is that good for the end user? Malpractice claims are often
         | all that is left for a client after the attorney messes up
         | their case. If you use a GPT, you wouldn't have that option.
        
         | wow_its_tru wrote:
         | We're building exactly this for contract analysis: upload a
         | contract, review the common "levers to pull", make sure there's
         | nothing unique/exceptional, and escalate to a real lawyer if
         | you have complex questions you don't trust with an LLM.
         | 
         | In our research, we found out that most everyone has the same
         | questions: (1) "what does my contract say?", (2) "is that
         | standard?", and (3) "is there anything I can/should negotiate
         | here?"
         | 
         | Most people don't want an intense, detailed negotiation over a
         | lease, or a SaaS agreement, or an employment contract... they
         | just want a normal contract that says normal things, and maybe
         | it would be nice if 1 or 2 of the common levers were pulled in
         | their direction.
         | 
         | Between the structure of the document and the overlap in
         | language between iterations of the same document (i.e. literal
         | copy/pasting for 99% of the document), contracts are almost an
         | ideal use-case for LLMs! (The exception is directionality -
         | LLMs are great at learning correlations like "company,
         | employee, paid biweekly," but bad at discerning that it's super
         | weird if the _employee_ is paying the _company_)
        
           | bkang97 wrote:
           | That makes sense, how are you guys approaching breaking down
           | what should be present and what is expected in contracts?
           | I've seen a lot of chatbot-based apps that just don't cut it
           | for my use case.
        
         | nwiswell wrote:
         | > If $10m was just spent building a repository of well reviewed
         | contracts
         | 
         | What's your objection to Nolo Press? They seem to have already
         | done that.
        
         | declan_roberts wrote:
         | In other words, LLMs are a great example of the 80/20 rule.
         | 
         | They're going to be great for a lot of stuff. But when it comes
         | to things like the law the other 20% is not optional.
        
           | asah wrote:
            | So the world needs 1/5 as many attorneys? Or 1/100? How
           | will 6-figure attorneys replace that income?
        
         | gkk wrote:
         | Hi hansonkd,
         | 
         | I'm working on Hotseat - a legal Q&A service where we put
         | regulations in a hot seat and let people ask sophisticated
         | questions. My experience aligns with your comment that vanilla
         | GPT often performs poorly when answering questions about
         | documents. However, if you combine focused effort on squeezing
         | GPT's performance with product design, you can go pretty far.
         | 
         | I wonder if you have written about specific failure modes
         | you've seen in answering qs from documents? I'd love to check
         | whether Hotseat is handling them well.
         | 
          | If you're curious, I've written about some of the design choices
         | we've made on our way to creating a compelling product
         | experience: https://gkk.dev/posts/the-anatomy-of-hotseats-ai/
        
           | DanielSantos wrote:
           | Your post is very interesting. Thanks for sharing.
           | 
            | If your focus is narrow enough, vanilla GPT can still
            | provide good enough results. We narrow down the scope for
            | GPT and ask it to answer binary questions. With that we get
            | good results.
           | 
           | Your approach is better for supporting broader questions. We
           | support that as well and there the results aren't as good.
        
         | DanielSantos wrote:
         | I launched a contract review tool about year ago[1].
         | 
         | The legal liability is an issue in several countries but
         | contract generation can also be. If you are providing whatever
         | is defined as legal services and are not a law firm, you will
         | have issues.
         | 
         | [1]legalreview.ai
        
         | d0odk wrote:
         | How do you think organizations can best use the contractual
         | interpretations provided by LLMs? To expand on that, good
         | lawyers don't just provide contractual interpretations, they
         | provide advice on actions to take, putting the legal
         | interpretation into the context of their client's business
         | objectives and risk profile. Do you see LLMs / tools based on
         | LLMs evolving to "contextualize" and "operationalize" legal
         | advice?
         | 
         | Do you have any views on whether context window limits the
         | ability of LLMs to provide sound contractual interpretations of
         | longer contracts that have interdependent sections that are far
         | apart in the document?
         | 
         | Has your level of optimism for the capabilities of LLMs in the
         | legal space changed at all over the past year?
         | 
         | You mentioned that lawyers hoard templates. Most organizations
         | you would have as clients (law firms or businesses) have a ton
         | of contracts that could be used to fine tune LLMs. There are
         | also a ton of freely available contracts on the SEC's website.
          | There are also companies like PLC, Matthew Bender, etc., that
         | create form contracts and license access to them as a business.
         | Presumably some sort of commercial arrangement could be worked
         | out with them. I assume you are aware of all of these potential
         | training sources, and am curious why they were unsatisfactory.
         | 
         | Thanks for any response you can offer.
        
           | DanielSantos wrote:
           | Not op but someone that currently runs an ai contract review
           | tool.
           | 
           | To answer some of your questions:
           | 
            | - contract review works very well for high-volume, low-risk
            | contract types. Think SLAs, SaaS... these are contracts
            | commercial legal teams need to review for compliance reasons
            | but hate it.
           | 
           | - it's less good for custom contracts
           | 
           | - what law firms would benefit from is just natural language
           | search on their own contracts.
           | 
           | - it also works well for due diligence. Normally lawyers
           | can't review all contracts a company has. With a contract
           | review tool they can extract all the key data/risks
           | 
            | - the LLM doesn't need to provide advice. It can just
            | identify whether x or y is in the contract, which improves
            | the review process.
           | 
           | - context windows keep increasing but you don't need to send
            | the whole contract to the LLM. You can just identify the
           | right paragraphs and send that.
           | 
            | - things changed a lot in the past year. It used to cost us
            | $2 to review a contract; now it's $0.20. Responses are more
            | accurate and faster.
           | 
            | - I don't do contract generation but have explored this. I
            | think the biggest benefit isn't generating the whole
            | contract but helping the lawyer rewrite a clause for a
            | specific need. The standard CLMs already have contract
            | templates that can be easily filled in. However, after the
            | template is filled the lawyer needs to add one or two
            | clauses. Having a model trained on the company's documents
            | would be enough.
           | 
           | Hope this helps
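The paragraph-selection idea above (send only the relevant parts of a
contract to the LLM instead of the whole document) can be sketched in a
few lines. This is a deliberately naive illustration, not anyone's actual
product: keyword overlap stands in for the embedding-based retrieval a
real system would use, and the sample contract is invented.

```python
# Minimal sketch: pick the paragraphs most relevant to a question so only
# a small slice of the contract is sent to the LLM. Scoring is naive
# keyword overlap; a production system would use embeddings instead.
import re

def split_paragraphs(contract: str) -> list[str]:
    """Split a contract into paragraphs on blank lines."""
    return [p.strip() for p in re.split(r"\n\s*\n", contract) if p.strip()]

def score(paragraph: str, question: str) -> int:
    """Count distinct question keywords (>3 chars) found in the paragraph."""
    words = {w.lower() for w in re.findall(r"\w+", question) if len(w) > 3}
    text = paragraph.lower()
    return sum(1 for w in words if w in text)

def select_paragraphs(contract: str, question: str, k: int = 2) -> list[str]:
    """Return the k paragraphs most relevant to the question; only these
    would be included in the LLM prompt, keeping it small."""
    paras = split_paragraphs(contract)
    return sorted(paras, key=lambda p: score(p, question), reverse=True)[:k]

contract = """1. Term. This agreement runs for 24 months.

2. Payment. The client pays a monthly fee of $500.

3. Termination. Either party may terminate with 30 days notice."""

print(select_paragraphs(contract, "What is the termination notice period?", k=1))
```

The point of the sketch is the shape of the pipeline, not the scoring:
retrieval first, then a prompt built only from the selected paragraphs.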
        
             | d0odk wrote:
             | Thanks. Appreciate your feedback.
             | 
             | Do you think LLMs have meaningfully greater capabilities
             | than existing tools (like Kira)?
             | 
             | I take your point on low stakes contracts vs. sophisticated
             | work. There has been automation at the "low end" of the
             | legal totem pole for a while. I recall even ten years ago
             | banks were able to replace attorneys with automations for
             | standard form contracts. Perhaps this is the next step on
             | that front.
             | 
             | I agree that rewriting existing contracts is more useful
             | than generating new ones--that is what most attorneys do.
             | That said, I haven't been very impressed by the drafting
             | capabilities of the LLM legal tools I have seen. They tend
             | to replicate instructions almost word for word (plain
             | English) rather than draw upon precedent to produce quality
             | legal language. That might be enough if the provisions in
             | question are term/termination, governing law, etc. But it's
              | inadequate for more sophisticated revisions.
        
       | williamcotton wrote:
       | Lexis has some AI feature built into it:
       | 
       | https://www.lexisnexis.com/en-us/products/lexis-plus-ai.page
       | 
       | I haven't had a chance to test it out, as anyone should be a bit
       | wary of adding more paid features to an already insanely
       | expensive software product!
        
       | FrustratedMonky wrote:
       | Theoretically, the language in laws should be structured
       | similarly to code. It has some logical structure, and thus
       | should be more easily adapted to LLMs than other 'natural
       | language'.
       | 
       | So despite the early news about lawyers submitting 'fake' cases,
       | it is only a matter of time before the legal profession is
       | upended. There are a ton of paralegals, etc., doing 'grunt' work
       | in firms that an LLM can do. These are considered white collar,
       | and will be gone.
       | 
       | It will progress in a similar fashion to coding.
       | 
       | It will be like having a junior partner that you have to double
       | check, or that can do some boiler plate for you.
       | 
       | You can't trust it completely, but you don't fully trust your
       | junior devs either, and it gets you 80% of the way there.
        
         | NoboruWataya wrote:
         | > It will progress in a similar fashion to coding.
         | 
         | I kind of agree with this, but this is why I am confused that I
         | only ever see people (at least on HN) talk about AI up-ending
         | the legal profession and putting droves of lawyers out of work
         | --I never see the same talk about the coding industry being
         | transformed in this way.
        
           | FrustratedMonky wrote:
           | I've heard it discussed. A lot more a few months ago when GPT
           | first blew up.
           | 
           | Maybe HN is full of coders that still think themselves
           | 'special' and can't be replaced.
           | 
           | Or maybe, the law profession has a lot more boilerplate than
           | the coding profession?
           | 
           | So legal profession has more that can be replaced?
           | 
            | Coders will be replaced, but maybe not at the same rate as
           | paralegals.
        
       | ulrischa wrote:
       | Lawyers have been resisting technological advances for years. No
       | industry rejects technological tools as vehemently as the legal
       | industry, arguing that everything has to be judged by people.
       | Even laws that are available online do not even link to the
       | paragraphs that are referenced. All in all, progress is
       | institutionally suppressed here in order to preserve jobs.
        
         | NoboruWataya wrote:
         | > Even laws that are available online do not even link to the
         | paragraphs that are referenced.
         | 
         | It's not lawyers' job to publish the laws online. Lawyers are
         | the ones who would benefit from more easily searchable online
         | laws, as they are the ones whose job is actually to read the
         | laws. That is why there are various commercial tools that
         | provide this functionality, that lawyers pay for. You need to
         | ask your government why public online legal databases are so
         | poor, not your lawyer.
        
           | ulrischa wrote:
            | Right. 70% of governmental staff are people with a law
            | education, so I mean lawyer in a broader sense.
        
         | DanielSantos wrote:
          | I also built an AI contract review tool and talked to > 100
         | lawyers. What I found is that lawyers want technological
         | advances but only if they work 100% of the time.
         | 
          | I also helped lawyers looking for a CLM, and they rejected
          | anything that caused any inconvenience.
        
       | light_hue_1 wrote:
       | Talk about a conflict of interest. A company that pushes llms for
       | legal work says they work better.
       | 
       | This isn't worth the pdf it wasn't printed on.
        
         | carstenhag wrote:
         | I disagree. The company is mentioned multiple times, a conflict
         | of interest is clearly visible. We also don't complain about
         | Google et al publishing papers about some of their internal
         | systems and why it helps, I hope?
        
           | drewdevault wrote:
           | Google isn't trying to sell you their internal systems. This
           | is a bullshit AI hype bubble advert masquerading as an
           | academic paper. Bet you 10 bucks it doesn't get through peer
           | review (or isn't even _submitted_ for peer review).
        
             | og_kalu wrote:
             | >Google isn't trying to sell you their internal systems.
             | 
             | They sometimes are.
             | 
             | If you have an issue with the methodology of paper then all
             | well and good but "conflict of interest" is pretty weak.
             | 
             | Yes, Google and Microsoft et al regularly publish papers
              | describing SotA performance they sometimes use internally
              | and even sell. I didn't have to think much before WaveNet
              | came to mind.
             | 
              | Besides, the best performing models here are all OpenAI.
        
         | vibeproaaaac21 wrote:
         | Agreed. That's most AI research though. They are a mechanism
         | whose entire value proposition lies in laundering
         | externalities.
         | 
         | Not that this isn't exactly what all the big "tech innovation"
         | of the last decade were either. It's depressing and everyone
         | involved should be ashamed of themselves.
        
       | oldgregg wrote:
       | Interesting problem space-- I have a culture jamming legal theory
       | this might work for:
       | 
       | What if you had a $99/mo pro-se legal service that does two
       | things, 1) teaches you how to move all of your assets into secure
       | vehicles. 2) At the same time it lets you conduct your own
       | defense pro-se-- but the point is not to win, it's just to jam
       | the system. If you signal to the opposing party that you're
       | legally bankrupt and then you just file motion after motion and
       | make it as excruciating as possible for them they might just say
       | nevermind when they realize it's gonna take them 5 years to get
       | through appeals process.
       | 
       | It's true lawyers don't want to give up their legal documents for
       | a template service-- but honestly just going to the courthouse
       | and ingesting tons of filings might do the trick. With that
       | strategy in mind you don't really need GREAT documents or legal
       | theory anyway. Just docs that comply with court filing
       | requirements. Yeah we're def gonna need to depose your
       | housekeeper's daughters at $400/h, and if you have a problem
       | with that I would be happy to have a hearing about it. If enough
       | people did this it would basically bring the legal system to a
       | standstill and give power back to the people.
       | 
       | RIP Aaron Swartz who died fighting for these issues :'(
        
         | gee_totes wrote:
         | What you're describing in 2) sounds a lot like paper
         | terrorism[0]
         | 
         | [0]https://en.wikipedia.org/wiki/Paper_terrorism
        
           | oldgregg wrote:
           | So what do you call it when the wealthy and corporations
           | exploit their opponent's inability to afford legal
           | representation? A normal Tuesday in Amerika. Yes, the
           | banality of tuesday terrorism.
        
         | trevithick wrote:
         | You would be a vexatious litigant.
         | 
         | https://www.courts.ca.gov/12272.htm
        
           | oldgregg wrote:
           | That's different. I'm talking about using it as a defensive
           | mechanism against wealthy individuals and corporations who
           | bully (relatively) poor people knowing they can't afford
           | years of litigation. In theory if you had an AI system like
           | what I'm talking about it could be used the other way though.
           | Honestly if every individual had the ability to go after
           | corporations in the same manner it would even the playing
           | field. Still wouldn't necessarily be vexatious.
        
       | zehaeva wrote:
       | Given that the recent legal cases where lawyers used ChatGPT to
       | do research and help write their briefs did not go very well, I'm
       | not sold on all the optimism here in the comments.
        
         | minimaxir wrote:
         | The technology is fine, the education and literacy about its
         | flaws and limitations among typical nontechnical users is
         | another story.
        
         | frankfrank13 wrote:
          | Those were rookie-level mistakes though. Not checking that a
          | *case* exists? Building a small generation->validation
          | pipeline isn't trivial, but it isn't impossible. The cases
          | you describe seem to me like very lazy associates matched
          | with a very poor understanding of what LLMs do.
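The generation->validation pipeline mentioned above can be sketched
simply: extract the citations from a draft and flag any that a trusted
database cannot confirm. Everything here is a simplified placeholder --
the citation regex covers only a toy subset of reporter formats, and the
"trusted set" stands in for a real lookup against a citation service.

```python
# Sketch of a validation step for LLM-drafted briefs: extract case
# citations and flag any absent from a trusted database. The pattern and
# the lookup set are simplified placeholders for illustration only.
import re

# Matches simplified reporter citations like "123 F.3d 456" or "999 U.S. 111".
CITATION_RE = re.compile(r"\b\d+\s+(?:U\.S\.|F\.\d[a-z]*|S\.\s?Ct\.)\s+\d+\b")

def extract_citations(text: str) -> list[str]:
    """Pull every citation-shaped string out of the draft."""
    return CITATION_RE.findall(text)

def flag_unverified(text: str, known_citations: set[str]) -> list[str]:
    """Return citations not found in the trusted set -- candidate
    hallucinations a human must check before filing."""
    return [c for c in extract_citations(text) if c not in known_citations]

draft = ("As held in 123 F.3d 456, the duty applies. "
         "See also 999 U.S. 111 (a case that does not exist).")
known = {"123 F.3d 456"}  # stand-in for a real citation database

print(flag_unverified(draft, known))
```

Even this crude check would have caught the fabricated citations in the
cases discussed here; the hard part is the database lookup, not the
pipeline.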
        
         | bpodgursky wrote:
         | They were all idiots too cheap to pay for GPT-4. Got caught by
         | hallucinations.
        
       | cwoolfe wrote:
       | Where can someone upload a contract and ask the AI questions
       | about it in a secure and private way? It's my understanding that
       | most people and organizations aren't able to do this because it
       | isn't private.
        
         | btbuildem wrote:
         | You can host your own LLM - something like Mixtral for example
         | - then you have full control over the information you submit to
         | it.
        
         | kveton wrote:
         | We do this today (securely upload a file and ask questions or
          | summarize), and part of our pitch, and why we're having early
          | success, is that we promise not to train with customer data
          | and we don't run directly on top of OpenAI.
         | 
         | https://casemark.ai
        
         | kulikalov wrote:
          | ChatGPT Enterprise? Or over the API. They state that those
          | offerings' data is not used for training. I'm not a lawyer but
         | afaik illegally retrieved evidence cannot be used -
         | "exclusionary rule".
        
         | DanielSantos wrote:
         | You can try us [1]. During the upload process you can enable
         | data anonymization. It's not perfect, though.
         | 
         | We use OpenAI, but they only get segments of a contract, not
         | the full one, and they can't connect the segments.
         | 
         | You get the review via email, and afterwards you can delete
         | the document and keep the review.
         | 
         | [1] legalreview.ai
        
       | adastra22 wrote:
       | Nice title!
        
       | advael wrote:
       | It's bizarre how easily we got to the Goodhart's Law event
       | horizon in our comparisons of complex fields to AI models
       | 
       | But this is what happens when industries spend a decade brain-
       | draining academia with offers researchers would be insane to
       | refuse
        
       | District5524 wrote:
       | While this paper is clearly not without merit, it reads more
       | like an excuse to make bombastic statements about a whole
       | profession or "industry" (perhaps to raise the authors'
       | visibility and sell something later on?). The worst part is
       | that they
       | have actually referenced a single preprint document as "previous
       | art" - and that document itself is not related to contract
       | review, but to legal reasoning in general. (A part of LegalBench
       | is of course "interpretation", and that is built on existing
       | contract review benchmarks, but they could've found more relevant
       | papers). Automating legal document review has been a very active
       | field in NLP for twenty years or so (including in QA tasks) and
       | has become a lot more active since 2017. Tools like Kira and
       | Luminance (none of which are LLM-based) are already used
       | quite widely in legal departments/firms around the world. So
       | lawyers do have practical experience with their limitations... But
       | Kira & co. are not measuring the performance of the latest and
       | greatest models and they do not use transparent benchmarks etc.
       | So the benchmark results in this paper are indeed a welcome
       | addition in terms of using LLMs. But also considering its limited
       | scope of reviewing 10 (!) documents based on a single review
       | playbook, they should not have written about "implications for
       | the industry". That is quite pretentious, and says more about
       | the authors' lack of knowledge of that very industry than
       | about the future of the legal services industry.
       | 
       | If you're interested in the capabilities and limitations, I
       | suggest these informative, but still light reads as well:
       | https://kirasystems.com/science/ https://zuva.ai/blog/
       | https://www.atticusprojectai.org/cuad
        
       | very_good_man wrote:
       | We may finally again get affordable access to the rule of law in
       | the United States.
        
       | gogogo_allday wrote:
       | I must not be reading this paper correctly because it appears
       | that they only used 10 procurement contracts to do this study.
       | 
       | If so, the abstract and title feel misleading.
       | 
       | I'd be more interested in a study done on thousands of contracts
       | of different types. I also have my doubts it would perform well
       | on novel clauses or contracts.
        
       | gumby wrote:
       | I would not want a transformer-generated contract, but I would be
       | delighted if a transformer-generated contract were used as input
       | by an actual lawyer and it saved me money.
       | 
       | In my experience current practice (unchanged for the decades I've
       | been using lawyers) is that associates start with an existing
       | contract that's pretty similar to what's needed and just update
       | it as necessary.
       | 
       | Also in my experience, a contract of any length ends up with
       | overlooked bugs (changed sections II(a) and IV(c) for the new
       | terms but forgot to update IX(h)) and I doubt this would be any
       | better with a machine-generated first draft.
        
       | ok123456 wrote:
       | Does LPO (Legal Process Outsourcing) mean a paralegal?
        
       | graphe wrote:
       | Law is very specific. BERT was sufficient for law even before
       | ChatGPT. https://towardsdatascience.com/lawbert-towards-a-legal-
       | domai...
       | 
       | https://huggingface.co/nlpaueb/legal-bert-base-uncased
        
       | adrianmonk wrote:
       | I wonder if this could help regular (non-lawyer) people
       | understand legal documents they run into in everyday life. Things
       | like software license agreements, terms of service, privacy
       | policies, release of liability forms, deposit agreements,
       | apartment leases, and rental car agreements.
       | 
       | Many people don't even try to read these because they're too long
       | and you wouldn't necessarily understand what it means even if you
       | did read it.
       | 
       | What if, before you signed something, you could have an LLM
       | review it, summarize the key points, and flag anything unusual
       | about it compared to similar kinds of documents? That seems
       | better than not reading it at all.
        
         | wow_its_tru wrote:
         | We're building exactly this today, for common business
         | contracts.
         | 
         | We're not building for consumers today, because I think it's
         | vanishingly unlikely that you'll, like, pick a different car
         | rental company once you read their contract :) but leases,
         | employment contracts, options agreements, SaaS agreements...
         | all common, all boilerplate with 5-10 areas to focus on, all
         | ready for LLMs!
        
           | mattmaroon wrote:
           | Honestly, that type of consumer use case might actually be
           | relevant once LLMs can do this sort of thing. Certainly,
           | nobody is going to contact their attorney before renting a
           | car, but if this could be integrated into a travel site or
           | something...
           | 
           | You never know how consumer behavior may change when
           | something that was either impossible or impractical becomes
           | very easy.
        
           | DanielSantos wrote:
           | We have also been building this [1] but have struggled to
           | monetize, even with 100s of users and 1000s of contracts
           | reviewed. We have been live for about 1 year.
           | 
           | If you want to share experiences, feel free to reach out.
           | 
           | [1] legalreview.ai
        
       | adi4213 wrote:
       | For the auditory learners, here is this paper in a summarized
       | Audiobook format :
       | https://player.oration.app/1960399e-ccb0-44f6-81f0-870ef7600...
        
       | visarga wrote:
       | Two reasons why it's a bad idea:
       | 
       | 1. ChatGPT can't be held responsible; it has no body. It's like
       | summoning the genie from the lamp, and about as sneaky, with
       | its hard-to-detect errors.
       | 
       | 2. ChatGPT is not autonomous, not even a little; these models
       | can't recover from errors on their own. And their input buffers
       | are limited in size and don't work all that well when stretched
       | to the maximum.
       | 
       | Especially the autonomy part is hard. Very hard in general. LLMs
       | need to become agents to collect experience and learn, just
       | training on human text is not good enough, it's our experience
       | not theirs. They need to make their own mistakes to learn how to
       | correct themselves.
        
       | itissid wrote:
       | GPTs only generate specific answers if they are trained using
       | RLHF to prefer certain answers over others. Won't this mean
       | that coming up with a contract that fits a particular
       | individual's case will require that much more fine-tuning?
       | 
       | Also how do you reconcile several logical arguments without a
       | solver? Like "If all tweedles must tweed", "If X is a tweedle,
       | therefore it must tweed unless it can meet conditions in para
       | 12". How can it possibly learn to solve many such conjunctions
       | that are a staple of legal language?
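For a deterministic solver, such a conjunction is trivial once it is encoded as an explicit rule; the hard part the comment points at is getting an LLM to translate contract language into rules like this reliably. A toy sketch, reusing the commenter's hypothetical tweedle example (the predicate names are theirs, not a real legal API):

```python
# Rule: "all tweedles must tweed, unless the conditions in para 12 are met".
# Encoded symbolically, the obligation is a plain boolean formula rather
# than something an LLM has to pattern-match from prose.

def must_tweed(is_tweedle: bool, meets_para_12_conditions: bool) -> bool:
    """Tweedles must tweed unless the para-12 exception applies."""
    return is_tweedle and not meets_para_12_conditions

# X is a tweedle and does not qualify for the exception -> must tweed.
assert must_tweed(True, False) is True
# The para-12 exception defeats the obligation.
assert must_tweed(True, True) is False
# Non-tweedles carry no obligation at all.
assert must_tweed(False, False) is False
```

Systems that pair an LLM front end (for extracting rules from text) with a symbolic back end (for applying them) are one proposed way around exactly this weakness.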
        
       | unyttigfjelltol wrote:
       | This is apples and bowling balls. You also probably could replace
       | the CEO and entire executive team with LLMs. And cheaper! Much
       | cheaper!
       | 
       | But, if the stochastic analysis was ... wrong ... who would be
       | left to correct it?
        
       | sandbx wrote:
       | I like that they included the prompt in the paper.
        
       ___________________________________________________________________
       (page generated 2024-02-06 23:00 UTC)