[HN Gopher] Answering Legal Questions with LLMs
       ___________________________________________________________________
        
       Answering Legal Questions with LLMs
        
       Author : hugodutka
       Score  : 148 points
       Date   : 2024-04-29 14:01 UTC (8 hours ago)
        
 (HTM) web link (hugodutka.com)
 (TXT) w3m dump (hugodutka.com)
        
       | beeboobaa3 wrote:
       | No thank you, let's not.
        
       | kevingadd wrote:
       | Don't sections of regulations reference each other, and reference
       | other regulations? This article says they only insert snippets of
       | the section they believe to be directly relevant to the legal
        | question. It seems to me that this automatically puts the bot in
        | a position where it lacks some of the information it needs to
        | construct an informed answer. Or are the laws in some regions
       | drafted in a "stand-alone" way where each section is fully
       | independent by restating everything?
       | 
        | This feels like they've built an AI that justifies itself with
        | shallow quotes instead of a deep understanding of what the law
        | means in context.
        
         | hugodutka wrote:
          | You're right that sections reference each other, and sometimes
          | reference other regulations. By creating the "plan for the
          | junior lawyer", the LLM can reference multiple related sections
          | at the same time. In the second step of the example plan in the
          | post there's a reference to "Articles 8-15", meaning 8 articles
          | that should be analyzed together.
          | 
          | The system is indeed limited in that it cannot reference other
          | regulations. We've heard from users that it's a problem too.
        
       | MontagFTB wrote:
       | A tool like this should live in service to the legal profession.
        | As with Copilot, without a human verifying, improving, and
        | maintaining the work, it is risky (possibly negligent) to
        | provide this service to end users.
        
         | tagersenim wrote:
         | My number one request is still: "please rewrite this legal
         | answer in simple language with short sentences." For this, it
         | is amazing (as long as I proofread the result). For actual
         | answers, eh...
        
           | SoftTalker wrote:
           | I assume you're in the legal profession? Do you find, as you
           | proofread, that you want to insert caveats and qualifiers
           | into the "simple" language and end up with something like the
           | original legalese?
           | 
           | Legal language is what it is in large part because simple
           | short sentences are too imprecise to express the detail
           | needed.
        
             | aeonik wrote:
              | Indeed, a corollary in computer science is the reasoning
              | behind why using the Go programming language is, in
              | general, a major mistake.
              | 
              | Simple vocabularies, while attractive, will inevitably
              | fail to properly describe systems of any sufficient
              | complexity without long chains of hard-to-follow
              | low-level logic.
             | 
             | Any system of sufficient complexity necessitates the use of
             | more complex vocabulary to capture all the nuances of the
             | system. See: Legalese, medical jargon, chemical naming
             | schemes, the existence of mathematics itself, etc..
        
         | zitterbewegung wrote:
          | This service might have been better with a larger context
          | window, but given the accuracy required for legal document
          | writing, the inaccuracy of RAG systems is too high.
          | 
          | Also, people have actually used LLMs for legal work in
          | practice and it didn't go that well. Human-in-the-loop
          | systems assume users will find and correct errors, but that
          | won't happen once you release the product.
         | 
         | https://qz.com/chat-gpt-open-ai-legal-cases-1851214411
        
           | mediaman wrote:
           | The systems described in the article don't sound like RAG at
           | all.
           | 
           | RAG systems have a much lower propensity to hallucinate, and
           | generate verifiable citations from the source material.
           | 
           | Though I think they're better as research aides than "write
           | the final product for me."
        
         | a13n wrote:
          | At some point computers will be able to provide better,
          | cheaper, and faster legal advice than humans. No human can
          | fit all of the law in their head, and humans don't always
          | offer 100% accurate advice. Not everyone can afford a
          | lawyer.
        
           | simonw wrote:
           | Planes with autopilots can fly cheaper and "better" than
           | human pilots. We still have human pilots.
           | 
           | I want a lawyer who can do their work more effectively
           | because they have assistance from LLM-powered tools.
           | 
           | I might turn to LLM-only assistance for very low-stakes legal
           | questions, but I'd much rather have an LLM-enhanced
           | professional for the stuff that matters.
        
             | kristiandupont wrote:
             | On the other hand, would you rather have a math professor
             | calculate the square root of a large number for you, or
             | would you use a calculator?
        
               | goatlover wrote:
               | I'd rather have a math professor teach me why I'm doing a
               | square root of a large number. Same thing applies to
               | lawyers. All this talk about automating away complex
               | professions is just that. LLMs are tools people use, not
               | lawyers, doctors or professors.
        
           | freejazz wrote:
           | > No human can fit all of the law in their head,
           | 
           | Good thing there is no need to!
        
       | tagersenim wrote:
       | Many laws, especially GDPR, can only be interpreted in
       | conjunction with a lot of guidelines (WP29 for example),
       | interpretations by the local Data Protection Authority, decisions
       | by local and European courts, etc.
       | 
        | Given all of this information, I think the bot would be able
        | to formulate an answer. However, the bot first needs to know
        | what information is needed.
       | 
       | If a lawyer has to feed the bot certain specific parts of all of
       | these documents, they might as well write the answer down
       | themselves.
        
         | Workaccount2 wrote:
          | I'm surprised Gemini 1.5 isn't getting more attention.
          | Despite being marginally worse than the leaders, it's still
          | solid, and you can dump 975,000 (!) tokens into it and
          | still have ~75,000 to play with.
          | 
          | I've been using it lately for microcontroller coding, and I
          | can just dump the whole 500-page MCU reference manual into
          | it before starting, and it gives tailored code for the
          | specific MCU I am using. Total game changer.
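          | 
          | The whole workflow is only a few lines (a rough sketch
          | using the Python google-generativeai client; the API key,
          | file name, and question are placeholders):
          | 
          |     import google.generativeai as genai
          | 
          |     genai.configure(api_key="YOUR_KEY")
          |     model = genai.GenerativeModel("gemini-1.5-pro-latest")
          | 
          |     # ~500 pages of manual, pre-extracted to plain text
          |     manual = open("mcu_reference_manual.txt").read()
          | 
          |     resp = model.generate_content(
          |         [manual,
          |          "Using only this manual, write C code that "
          |          "configures TIM2 as a 1 kHz PWM output."])
          |     print(resp.text)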
        
           | lionkor wrote:
           | Is the resulting (C) code maintainable, unit testable, do you
            | understand it? If your answer is "I'll just ask Gemini
            | to explain it", I will laugh sarcastically and then sob
            | for the poor people around the hardware you program for.
        
             | Workaccount2 wrote:
              | I haven't had an issue (at least not more than what is
              | expected). I am also an EE, not an SWE. I use it for
             | internal test systems and it has saved me tons of time that
             | I would have had to spend combing the reference manual.
             | 
             | As I am sure you know, embedded code often has terrible
             | portability and requires lots of "check the 500 page
             | datasheet" to get stuff working properly.
        
               | lionkor wrote:
               | Interesting, thank you!
        
       | 2099miles wrote:
        | Unintuitive LLM-only RAG?
        
         | balphi wrote:
          | I think it's unintuitive relative to the standard
          | implementation of RAG today (e.g. vector-based similarity
          | search).
        
       | anonylizard wrote:
        | GPT-4 also cannot solve full programming problems, and
        | frequently makes large errors even with a small, focused
        | context, as in GitHub Copilot Chat.
        | 
        | However, it is still extremely useful and productivity-
        | enhancing when combined with the right workflow and UI.
        | Programming is a large enough industry that Microsoft is
        | building this out in VSCode. I don't think the legal industry
        | has a similar tool.
        | 
        | Also, I think programmers are far more attuned to radical
        | changes. They see the constant leaps in performance, and are
        | jumping in to use the AI tools, because they know what could
        | be coming next with GPT-5. Lawyers are generally risk averse
        | and not prone to hype, so they are far less eager customers
        | for these new tools.
        
         | arethuza wrote:
         | Lawyers can also be held professionally liable if they get
         | things wrong.
        
           | withinboredom wrote:
            | I'll probably get downvoted to oblivion, but I wish it
            | were the same for software engineers (not programmers or
            | developers, though -- but people who explicitly label
            | themselves, or let themselves be labeled, as "engineers").
        
             | pylua wrote:
             | The industry would have to change drastically for that to
             | occur. I don't think it would be a bad thing, but it would
             | be a fundamental shift and drastically raise the cost.
        
             | photonthug wrote:
              | We all know that can't happen until and unless software
              | engineers get the ability to say "no" to random changes
              | in deadlines or specifications. We're talking about an
              | industry that invented agile so it could _skip_ the spec.
        
               | withinboredom wrote:
               | You already have the ability to say "no", unless someone
               | is holding a gun to your head while you write code -- but
               | most of us don't work in Swordfish type environments.
               | 
               | If you are worried about getting fired for saying "no",
               | create a union and get some actual worker's rights; at
               | least in the US, unions have far more rights than
               | workers.
        
       | vouaobrasil wrote:
        | The next step after this is more complicated laws, because
        | lawyers can now use LLMs, and thus laws even more opaque to
        | ordinary folk, who will have to use LLMs to understand
        | anything. It's an even more fragile system that will
        | undoubtedly favour those who can wield the most powerful LLM,
        | or in other words, the rich and the corporations.
        | 
        | This is another example of technology making things
        | temporarily easier, until the space is filled with an equal
        | dose of complexity. It is Newton's third law for
        | technological growth: if technology exerts a force to make
        | life simpler, society will push back with an equal force in
        | the opposite direction to make it even more complex.
        
         | sumeruchat wrote:
          | Lmao don't make up laws like that please. If anything, my
          | guess is that LLMs will make laws simpler and without
          | loopholes, and rich people won't be able to hire lawyers to
          | gain a competitive advantage in exploiting legal loopholes.
        
           | vouaobrasil wrote:
           | Isn't it true though, at least in terms of the amount of
           | information we have to wade through these days? Haven't hard
           | drives gotten larger, and why? Because technology makes it
           | possible. It's funny that you are laughing, but it would be
           | even better if you made a serious argument against me.
           | 
           | To be honest, I am posing a serious possibility: are we
           | really sure that AI will cause a democratization of
           | knowledge? I mean, our society is valuable and keeps us
           | alive, so shouldn't we be at least asking the question?
           | 
           | It seems like even questioning technology around here is
           | taboo. What's wrong with discussing it openly? I think it's
           | rather naive to believe that technology will make life
           | simpler for the average person. I've lived long enough to
           | know that many inventions, such as the internet and
           | smartphone, have NOT made life easier at all for many,
           | although they bring superficial conveniences.
        
             | sumeruchat wrote:
              | Look, I think it's true that technology makes life
              | worse in many ways, but making the legal system more
              | complex is not one of those things, in my opinion.
              | 
              | There is nothing wrong with your position; it's just
              | that you are trying to make weak generalizations driven
              | primarily by your emotions and anecdotal experience,
              | not data.
        
               | vouaobrasil wrote:
               | Well, I look forward to an easy refutation then.
        
               | sumeruchat wrote:
               | Well yes. This makes life better because
               | 
                | 1) This tech makes it easy for anyone to file a
                | complex legal complaint about a situation and then
                | send it to the right legal department for almost free.
                | 
                | 2) You can now ask a complex legal question and get a
                | response for almost free. (Example: is reckless
                | driving a crime here? What is the fine? The LLM looks
                | up your coordinates and the laws there, then gives
                | you the exact response.)
                | 
                | 3) Even if you don't know the language, this tech
                | translates the laws for you and gives you an expert
                | analysis for almost free.
               | 
               | I easily see this as a win for the less powerful.
        
               | freejazz wrote:
                | > 1) This tech makes it easy for anyone to file a
                | complex legal complaint about a situation and then
                | send it to the right legal department for almost free.
                | 
                | That's the assumption.
                | 
                | > 2) You can now ask a complex legal question and get
                | a response for almost free. (Example: is reckless
                | driving a crime here? What is the fine? The LLM looks
                | up your coordinates and the laws there, then gives
                | you the exact response.)
                | 
                | I don't think that's a 'complex legal question'.
                | 
                | > 3) Even if you don't know the language, this tech
                | translates the laws for you and gives you an expert
                | analysis for almost free.
               | 
               | That's not what expert means.
        
         | efitz wrote:
          | In the US, the vast majority of legislators are lawyers.
          | Lawyers have their own "unions" (e.g. the American Bar
          | Association).
         | 
         | I can definitely see this kind of protectionism occurring.
         | 
          | OTOH, I also see potential for a proliferation of law firms
          | offering LLM-driven online services for specific scenarios,
          | or tech firms (LegalZoom etc.) offering similar services
          | and hiring a lawyer on staff to ensure that they can't be
          | sued for providing unlicensed legal advice.
         | 
         | In other words it might compete with lawyers at the low end,
         | but big law could co-opt it to take advantage of efficiency
         | increases over hiring interns and junior lawyers.
        
         | ed_balls wrote:
          | You can solve it by assigning a complexity score to a law.
          | If the law increases complexity you need a supermajority to
          | pass it; otherwise a simple majority is OK.
        
           | lpribis wrote:
           | How would you define "complexity score"? The complexity of
           | options trading regulation should not be subject to the same
           | complexity threshold as (eg) public intoxication laws.
        
             | ed_balls wrote:
              | It's quite hard; it would be a mixture of references,
              | conditions, and the size of all legislation.
              | 
              | >The complexity of options trading regulation should not be
              | subject to the same complexity threshold as (eg) public
              | intoxication laws
              | 
              | Why not? The point is to make it a bit harder to pass
              | more complex laws, not to stop them. Say your parliament
              | has 500 seats. To pass a new complexity-increasing law
              | you need 251 votes, a majority of all seats. For laws
              | that reduce complexity you only need a majority of the
              | MPs present, e.g. if 400 are in, you need 201 votes.
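              | 
              | As a toy sketch of the rule (the hard, hand-wavy part
              | is the complexity metric itself, which this skips):
              | 
              |     def votes_needed(seats: int, present: int,
              |                      adds_complexity: bool) -> int:
              |         # Complexity-increasing laws need a majority
              |         # of ALL seats; simplifying laws only need a
              |         # majority of the MPs present.
              |         base = seats if adds_complexity else present
              |         return base // 2 + 1
              | 
              |     print(votes_needed(500, 400, True))   # 251
              |     print(votes_needed(500, 400, False))  # 201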
        
       | bdw5204 wrote:
       | Using LLMs to understand laws seems like about as bad an idea as
       | using them to write legal documents:
       | 
       | https://apnews.com/article/artificial-intelligence-chatgpt-f...
        
       | efitz wrote:
        | This was an excellent article describing how they broke down
        | a complex task that an LLM was bad at into a series of steps
        | that the LLM could excel at. I think that this approach is
        | probably broadly applicable across law (and perhaps medicine).
        
       | avidiax wrote:
       | Is there perhaps a training data problem?
       | 
        | Even if the LLM were trained on the entire legal case law
        | corpus, legal cases are not structured in a way that an LLM
        | can follow. They reference distant case law as the reason for
        | a ruling, and they likely don't explain specifically how the
        | presented evidence meets various bars. There are also cross-
        | cutting legal concepts like spoliation that obviate the need
        | for evidence or deductive reasoning in some areas.
       | 
       | I think a similar issue likely exists in highly technical areas
       | like protocol standards. I don't think that an LLM, given 15,000
       | pages of 5G specifications, can tell you why a particular part of
       | the spec says something, or given an observed misbehavior of a
       | system, which parts of the spec are likely violated.
        
       | bschmidt1 wrote:
        | I saw a RAG demo from a startup that allows you to upload a
        | patient's medical docs, then the doctor can ask it questions
        | like:
       | 
       | > what's the patient's bp?
       | 
       | even questions about drugs, histories, interactions, etc. The AI
       | keeps in mind the patient's age and condition in its responses,
       | when recommending things, etc. It reminded me of a time I was at
       | the ER for a rib injury and could see my doctor Wikipedia'ing
       | stuff - couldn't believe they used so much Wikipedia to get their
       | answers. This at least seems like an upgrade from that.
       | 
       | I can imagine the same thing with laws. Preload a city's,
       | county's etc. entire set of laws and for a sentencing, upload a
       | defendant's criminal history report, plea, and other info then
       | the DA/judge/whoever can ask questions to the AI legal advisor
       | just like the doctor does with patient docs.
       | 
       | I mention this because RAG is perfect for these kinds of use
       | cases, where you really can't afford the hallucination - where
       | you need its information to be based on specific cases - specific
       | information.
       | 
       | I used to think AI would replace doctors before nurses, and
       | lawyers before court clerks - now I think it's the other way
       | around. The doctor, the lawyer - like the software engineer -
       | will simply be more powerful than ever and have lower overhead.
       | The lower-down jobs will get eaten, never the knowledge work.
        
         | mdgrech23 wrote:
          | I've 100% found AI to be super helpful in learning a new
          | programming language or refreshing on one I haven't used in
          | a while. Hey, how do I do this thing in Gleam? What's
          | Gleam's equivalent of y? I turn to it first instead of
          | forums/StackOverflow/Google now, and I'd say I only need to
          | turn to other sources maybe 5% of the time.
        
           | jacobr1 wrote:
            | I think that is right. The sweat spot is twofold: 1) a
            | replacement for general search on a topic where you have
            | limited familiarity, which can give you an answer to a
            | concise question or a starting point for more
            | investigation, or 2) power-user use cases where subject
            | matter expertise already exists: elaboration or
            | extrapolation from a clear starting point to a clear end
            | state, such as translation or contextualized exposition.
            | 
            | The problem comes with thinking you can bridge both of
            | those use cases - vague task descriptions to final
            | output. The work described in the article, getting an LLM
            | itself to break down a task, seems to work sometimes but
            | struggles in many scenarios. Products that can define
            | their domain narrowly enough, embed enough domain
            | knowledge into the system, and ask for feedback at the
            | right points are going to be successful; more generalized
            | systems will need to act more like tools rather than
            | complete solutions.
        
             | itronitron wrote:
             | Is "the sweat spot" where you want to be though?
        
               | wizzwizz4 wrote:
               | Absolutely. If you're not sweating, you're not forcing
               | your prey to stop for rest, and the ruminant you're
               | chasing will outpace you.
        
           | bschmidt1 wrote:
           | Absolutely, I can't imagine doing Angular without an LLM
           | sidekick.
           | 
           | Curiosity + LLM = instant knowledge
        
             | Taylor_OD wrote:
              | Yup. Entirely replaced the "soft" answers online like
              | Stack Overflow for me. Now it's the LLM, and if that
              | isn't good enough, then right to the docs. I actually
              | read documentation more often now because it's pretty
              | clear when I'm trying to do something common (LLMs
              | handle this well) vs uncommon (LLMs often do not handle
              | this well).
        
             | goatlover wrote:
              | That's a weird thing to say considering people were
              | doing Angular just fine before ChatGPT made LLMs
              | popular only 15 months ago.
        
           | hparadiz wrote:
           | I found this to be the case recently when I built something
           | new in a framework I hadn't used before. The AI replaced
           | Google most of the time and I learned the syntax very fast.
        
         | sdesol wrote:
         | > I used to think AI would replace doctors before nurses, and
         | lawyers before court clerks - now I think it's the other way
         | around.
         | 
          | I've come to this conclusion as well. AI is a power tool
          | for those that know what questions to ask, and will become
          | a crutch for those that don't. My concern is with the
          | latter, as I think they will lose the ability to develop
          | critical thinking skills.
        
         | spmurrayzzz wrote:
         | > I mention this because RAG is perfect for these kinds of use
         | cases, where you really can't afford the hallucination - where
         | you need its information to be based on specific cases -
         | specific information.
         | 
          | I think it's worth cautioning here that even with attempted
          | grounding via RAG, this does not completely prevent the
          | model from hallucinating. RAG can and does help improve
          | performance somewhat there, but fundamentally the model is
          | still autoregressively predicting tokens and sampling from
          | a distribution. And thus, it's going to predict incorrectly
          | some of the time, even if it's less likely to do so.
          | 
          | I think it's certainly a worthwhile engineering effort to
          | address the myriad issues involved, and I'd never say this
          | is an impossible task, but currently I continue to urge
          | caution when I see the happy path socialized to the degree
          | it is.
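          | 
          | To make the sampling point concrete (a toy numpy
          | illustration; the numbers are made up and say nothing
          | about any real model):
          | 
          |     import numpy as np
          | 
          |     rng = np.random.default_rng(0)
          | 
          |     # Toy next-token distribution after grounding: the
          |     # correct token dominates, but the wrong tokens keep
          |     # a nonzero share of the probability mass.
          |     # index 0 = correct token; 1 and 2 = wrong tokens
          |     logits = np.array([6.0, 2.0, 1.0])
          |     probs = np.exp(logits) / np.exp(logits).sum()
          | 
          |     draws = rng.choice(3, size=100_000, p=probs)
          |     print(probs.round(4))       # [0.9756 0.0179 0.0066]
          |     print((draws != 0).mean())  # ~2.4% sampled wrong
          | 
          | Grounding pushes the distribution toward the right answer;
          | it doesn't zero out the rest.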
        
         | JohnFen wrote:
         | > It reminded me of a time I was at the ER for a rib injury and
         | could see my doctor Wikipedia'ing stuff
         | 
         | To be honest, I'm much more comfortable with a doctor looking
         | things up on wikipedia than using LLMs. Same with lawyers,
         | although the stakes are lower with lawyers.
         | 
          | If I knew my doctor was relying on LLMs for anything beyond
          | the trivial (RAG or not), I'd lose a lot of trust in that
          | doctor.
        
           | sandworm101 wrote:
           | >> Same with lawyers, although the stakes are lower with
           | lawyers.
           | 
            | Doctors and lawyers appear to be using LLMs in
            | fundamentally different ways. Doctors appear to use them
            | as consultants. The LLM spits out an opinion and the
            | doctor decides whether to go with it or not. Doctors are
            | still writing the drug prescriptions. Lawyers seem to be
            | submitting LLM-generated text to courts without even
            | editing it, which is like the doctor handing the
            | prescription pad to the robot.
        
             | elicksaur wrote:
              | That's just the highly publicized failures of lawyers.
              | There are likely lawyers also using them discerningly
              | and doctors using them unscrupulously, just not as
              | publicized.
             | 
             | If a doctor wrote the exact prescription an LLM outputs,
             | how would anyone other than the LLM provider know?
        
             | parpfish wrote:
             | I'm less concerned about how trained professionals use LLMs
             | than I am about untrained folks using them to be a DIY
             | doctor/lawyer.
             | 
             | Luckily doctoring has the safeguard that you will need a
             | professional to get drugs/treatments, but there isn't as
             | much of a safety net for lawyering
        
               | sandworm101 wrote:
               | >> safety net for lawyering
               | 
                | There are some nets, but they aren't as official. The
                | lawyer version of a doctor's prescription pad is the
                | ability to send threatening letters on law firm
                | letterhead. Lawyers are also afforded privileges in
                | jails and prisons, things like non-monitored phone
                | calls, that aren't made available to non-lawyers.
        
               | parpfish wrote:
                | but there's no safety net for things that are outside
                | the justice system (e.g., "is this a fair contract?")
                | or things that aren't in the justice system _yet_
                | (e.g., "am I allowed to cut down my neighbor's tree
                | if it blocks my view?")
        
               | sandworm101 wrote:
                | Just as there are no safety nets for people who want
                | to perform surgery on themselves or take de-worming
                | meds instead of getting a vaccination.
        
             | Der_Einzige wrote:
              | It was garbage lawyers doing that. Straight up a Cooley
              | Law graduate (worst law school in America - the same
              | one Michael Cohen attended).
              | 
              | The good lawyers are using LLMs without being detected
              | because they don't submit the output verbatim without
              | verification.
        
           | nyrikki wrote:
            | Automation bias plus the LLM failure mode (competent,
            | confident, and inevitably wrong) will absolutely cost
            | lives.
            | 
            | I am a fan of ML, but simplicity bias and the fact that
            | hallucinations are an intrinsic feature of LLMs are
            | problematic.
            | 
            | ML is absolutely appropriate and will be useful for
            | finding new models in medicine, but it is dangerous and
            | negligent to use blindly; even quantification is often
            | not analytically sufficient in this area.
        
           | bschmidt1 wrote:
           | That's fair, and although I disagree, I at least like that
           | the debate has evolved from _doctors vs LLMs_ to _Wikipedia
           | vs LLMs_.
           | 
           | When we accept that AI is not replacing knowledge workers,
           | the conversation changes to a more digestible and debatable
           | one: Are LLMs useful tools for experts? And I think the
           | answer will be a resounding: _Duh_
        
             | JohnFen wrote:
             | > When we accept that AI is not replacing knowledge workers
             | 
             | I don't accept this, personally. These tools will
             | absolutely be replacing workers of many types. The only
             | questions are which fields and to what degree.
             | 
             | > Are LLMs useful tools for experts?
             | 
             | I didn't think this was a question in play. Of course they
             | can be, once experts figure out how to use them
             | effectively. I thought the question was whether or not the
             | cost/benefit ratio is favorable overall. Personally, I'm
             | undecided on that question because there's not nearly
             | enough data available to do anything but speculate.
        
               | bschmidt1 wrote:
               | > These tools will absolutely be replacing workers of
               | many types
               | 
               | Yeah I agree with that, that's why I specified knowledge
               | workers. I don't think it's bad if cashiers get replaced
               | by self-checkout or if receptionists get replaced by
               | automated agents on either end.
               | 
               | Emergency/police dispatchers - obviously increased
               | sensitivity that makes it a special case, but I still
               | think AI can eventually do the job better than a human.
               | 
               | Driving cars - not yet, at least not outside specific
               | places, but probably eventually, and definitely for known
               | routes.
               | 
               | Teaching yoga - maybe never, as easy as it would be to
               | do, some people might always want an in-person experience
               | with a human teacher and class.
               | 
               | But importantly - most knowledge workers can't be
               | displaced by AI when the work entails solving problems
               | with undocumented solutions that the AI could not have
               | trained on yet, or any work that involves judgment and
               | subjectivity, or that requires a credential (doctor to
               | write the prescription, engineer to sign off on the
               | drawing) or security clearance, authorizations, etc.
               | There's a lot of knowledge work it can't touch.
        
               | JohnFen wrote:
               | > that's why I specified knowledge workers.
               | 
               | I don't think all knowledge workers are immune. Some will
               | be, but companies are going to shed as much payroll as
               | their customers will tolerate.
               | 
               | > I don't think it's bad if cashiers get replaced by
               | self-checkout or if receptionists get replaced by
               | automated agents on either end.
               | 
               | Well, it's bad for those workers. And, personally, I'd
               | consider it bad for me. Having to use self-checkout is a
               | much worse experience than human cashiers. Same with
               | replacing receptionists (and etc.) with automated agents.
               | 
               | When people bring up these uses for LLMs, it sounds to me
               | like they're advocating for a world that I honestly would
               | hate to be a part of as a customer. But that's not really
               | about LLMs as much as it's about increasing the rate of
               | alienation in a world where we're already dangerously
               | alienated from each other.
               | 
               | We need more interpersonal human interactions, not less.
        
               | bschmidt1 wrote:
               | > shed as much payroll as their customers will tolerate
               | 
                | How much will they tolerate with all the new
                | competition it's creating? If you haven't noticed
                | the big boys are trying to ban it lol. What will it
                | do to Netflix when I can say:
                | 
                |     make a Tarantino film of a cyberpunk future
                |     starring Bruce Willis and Uma Thurman in their
                |     prime with a cameo of Ariana Grande who plays
                |     the crime lord's daughter
                | 
                | Or:
                | 
                |     make a very long film that covers every detail
                |     of every Harry Potter book
                | 
                | And just get like a 34 hour film that you watch when
                | you have time. Do that with the thousands of years of
                | books already written, in every film style - you
                | could produce infinite content.
               | 
               | Like stock photo sites, soon the film streaming sites
               | will have to either get on top of text-to-video or go the
               | way of Blockbuster. That's just one consumer example to
               | illustrate the potential - AI applies similarly to a lot
               | more boring areas of technology too. A lot of big
               | companies are afraid of AI, and not for the reasons the
               | news is saying.
               | 
               | > cashiers
               | 
               | Disagree on this. Nobody should have to wear a goofy
               | uniform and do the work of a machine. AI is a liberator
               | if it eliminates miserable, low-paying jobs that don't
               | empower individuals financially or intellectually. If all
               | 25 million service worker jobs were eliminated by AI
               | today, I don't think those people would never work
               | another job - they'd just have a better one the same way
               | a service worker in 2024 has better conditions than one
               | in 1924.
               | 
               | > would hate the world / need more interpersonal
               | 
               | Agree on interpersonal, people need to appreciate each
               | other more, look to and help each other and stop
               | worshiping billionaires and brands to start. Buy each
               | other's products, not those of international corps, etc.
               | But if executed well, I see AI as greatly aiding
               | communities by enhancing the creative and productive
               | output of individuals, freeing us up to focus on our
               | human qualities and our businesses. Yes it helps
               | businesses do things easier, but it also helps
               | individuals get up on that level of competition.
        
           | tivert wrote:
           | > To be honest, I'm much more comfortable with a doctor
           | looking things up on wikipedia than using LLMs. Same with
           | lawyers, although the stakes are lower with lawyers.
           | 
            | Yeah, a Wikipedia-using doctor could at least fix the
            | errors on Wikipedia they spot.
        
         | epcoa wrote:
          | Obviously, no idea why your doc was using Wikipedia so
          | much, but in general the fair baseline to compare isn't
          | Wikipedia; it's mature, professionally reviewed material
          | like UpToDate, DynaMed, AMBOSS, etc. that do have clinical
          | decision support tools and purpose-built calculators and
          | references. Of course they're all working on GenAI stuff.
          | (Not to mention professional wikis like LITFL, EMCrit,
          | IBCC.)
         | 
          | An issue with these products is access and expense (wealthy
          | institutions easily have access, poorer ones do not), but
          | that seems like a problem that is no better with the
          | newfangled tech.
         | 
         | GIGO is a bigger problem. The current state of tech cannot
         | overcome a shitty history and physical, or outright missing
         | data/tests due to factors unrelated to clinical decision
         | making. I surmise that is a bigger factor than the incremental
         | conveniences of RAG, but I could very well be full of crap.
        
           | guidzgjx wrote:
           | "wealthy institutions easily have access, poorer ones do
           | not),"
           | 
           | Everything you said is agreeable except that statement. The
           | institution's wealth doesn't trickle down to the docs, who
           | pay out of pocket for many of these tools.
        
             | epcoa wrote:
              | Not sure how this is disagreeable; it's just relaying
              | an easily verifiable fact. In the US, any decent
              | academic-affiliated institution or well-funded private
              | one will have
             | institutional memberships to one or more of these products.
             | I've never paid out of pocket for either UpToDate or
             | Dynamed, for instance, but obviously not everyone has that
             | benefit, especially on a global level.
             | 
             | > The institution's wealth doesn't trickle down to the docs
             | 
              | As a general statement that's just nonsense. Richer
              | institutions provide better equipment, for one, and will
             | often pay for personal equipment memberships like POCUS
             | (and that tends to be more segmented to the top
             | institutions), training, and of course expenses for
             | conferences.
        
         | stult wrote:
         | > Preload a city's, county's etc. entire set of laws
         | 
         | You would also need to load an enormous amount of precedential
         | case law, at least in the US and other common law
         | jurisdictions. Synthesizing case law into rules of law
         | applicable to a specific case requires complex analysis that is
         | frequently sensitive to details of the factual context, where
          | LLMs' lack of common sense can lead them to make false
          | conclusions, particularly in situations where the available,
         | on-point case law is thin on the ground and as a result
         | directly analogous cases are not available.
         | 
         | I don't see the utility at the current performance level of
         | LLMs, though, as the OP article seems to confirm. LLMs may
         | excel in restating or summarizing black letter or well-
         | established law under narrow circumstances, but that's a
         | vanishingly small percentage of the actual work involved in
         | practicing law. Most cases are unremarkable, and the lawyers
         | and judges involved do not need to conduct any research that
         | would require something like consulting an AI assistant to
         | resolve all the important questions. It's just routine, there's
         | nothing special about any given DUI case, for example. Where
         | actual research is required, the question is typically
         | extremely nuanced, and that is precisely where LLMs tend to
         | struggle the most to produce useful outputs. LLMs are also
         | unlikely to identify such issues, because they are issues for
         | which sufficient precedent does not exist and therefore the LLM
          | will by definition have to engage in extrapolative,
          | creative analysis rather than simply reproducing ideas or
          | language from its training set.
        
           | bschmidt1 wrote:
           | > You would also need to load an enormous amount of
           | precedential case law
           | 
           | Very easily done. Is that it?
           | 
           | > lack of common sense, false conclusions
           | 
            | The AI tool doesn't replace the judge/DA/etc.; it's just
            | a very useful tool for them to use. Check out the "RAG-
            | based learning" section of this app I built
            | (https://github.com/bennyschmidt/ragdoll-studio); there's
            | a video that shows how you can effectively load new
            | knowledge into it (I use LlamaIndex for RAG). For
            | example, past cases that set legal precedents, and other
            | information you want to be considered. It creates a
            | database of the files you load in, so it's not making
            | those assumptions like an LLM without RAG would. I think
            | a human would be more error-prone than an LLM with a
            | vector DB of specific data + a querying engine.
           | 
           | > I don't see the utility
           | 
           | Then you are not paying attention or haven't used LLMs that
           | much. Maybe you're unfamiliar with the kind of work it's good
           | at.
           | 
           | > actual work involved in practicing law
           | 
           | This is what it's best at, and what people are already using
           | RAG for: Reading patient medical docs, technical
           | documentation, etc. this is precisely what humans are bad at
           | and will offload to technology.
           | 
           | > actual research is required
           | 
           | You have not tried RAG.
           | 
           | > LLMs struggle to produce useful outputs
           | 
           | You have not tried RAG.
           | 
           | > LLMs are unlikely to identify issues
           | 
           | You have not tried RAG.
           | 
           | > the LLM by definition is creative analysis
           | 
           | You have not tried RAG.
           | 
           | You can load an entire product catalog into LlamaIndex and
           | the LLM will have perfect knowledge of pricing, inventory,
           | etc. This specific domain knowledge of inventory allows you
           | to have the accurate, transactional conversations that a
           | regular LLM isn't designed for.
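            | 
            | To be concrete, the core of it is a few lines (a sketch
            | assuming LlamaIndex's 0.10-style API; the folder and the
            | question are placeholders):
            | 
            |     from llama_index.core import (
            |         SimpleDirectoryReader, VectorStoreIndex)
            | 
            |     # e.g. a folder of statutes and precedent docs
            |     docs = SimpleDirectoryReader(
            |         "./case_law").load_data()
            |     index = VectorStoreIndex.from_documents(docs)
            | 
            |     engine = index.as_query_engine(similarity_top_k=5)
            |     resp = engine.query("Which precedents here cover "
            |                         "sentencing for a first DUI?")
            |     print(resp)
            | 
            |     # the retrieved passages, for human verification
            |     for n in resp.source_nodes:
            |         print(n.score, n.node.metadata.get("file_name"))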
        
             | cess11 wrote:
             | I'm not so sure human judgement is as comparable to medical
             | terminology or technical manuals as you think it is.
             | 
             | How did you come to this conclusion?
        
               | bschmidt1 wrote:
               | Maybe I wasn't that clear, but I did say in my original
               | post:
               | 
                |  _I used to think AI would replace doctors before nurses,
                | and lawyers before court clerks - now I think it's the
                | other way around. The doctor, the lawyer - like the
                | software engineer - will simply be more powerful than
                | ever and have lower overhead. The lower-down jobs will
                | get eaten, never the knowledge work._
               | 
               | Yet you and a few other people insist I'm saying "AI will
               | replace human judgment" - why? I'm saying the doctor
               | isn't replaced, the lawyer, the software engineer, etc.
               | aren't replaced. It's more like the technician just got a
               | better technical manual, not like they are replaced by
               | it.
        
             | freejazz wrote:
             | >You can load an entire product catalog into LlamaIndex and
             | the LLM will have perfect knowledge of pricing, inventory,
             | etc. This specific domain knowledge of inventory allows you
             | to have the accurate, transactional conversations that a
             | regular LLM isn't designed for.
             | 
             | Aren't we talking about caselaw? You didn't really respond
             | to the point, which distinguished caselaw from information
             | like a product catalog. And rather rudely at that.
        
               | bschmidt1 wrote:
                | Rudely? Ha - they misrepresented my point about RAG
                | tooling not replacing lawyers into a straw man about
                | replacing lawyers - I never said that; I said the
                | opposite.
                | 
                | Secondly, it's obvious they have not used RAG, or they
                | wouldn't say things like "inaccurate responses" etc. RAG
                | is as accurate as any database (because it is a
                | database). It puts all the information from your uploaded
                | files into a database and reads from that. The commenter
                | fundamentally misunderstands the technology and likely
                | hasn't even used it - yet feels the need to comment on it
                | like an expert. It's not like using ChatGPT, and in any
                | case it's not in lieu of a lawyer anyway; that was just a
                | straw man argument that goes counter to my actual post.
               | 
               | I did respond to the points about accuracy and legal
               | precedents. Unlike the other false statements that were
               | made, these are legitimate concerns a lot of people share
               | about whether or not LLM tooling should be used by legal
               | professionals.
               | 
               | Is ChatGPT sufficient to replace a lawyer? No.
               | 
               | Is ChatGPT sufficient as a legal advice tool that a
               | lawyer might use on a case-by-case basis or generally?
               | No.
               | 
               | Could the same LLM technology be used except on a body of
               | specific case documents to surface information through a
               | convenient language interface to a legal expert? Yes.
               | It's as safe as SQL.
               | 
               | The point about pricing and inventory is that, unlike an
               | LLM, RAG involves retrieval of specific facts from a
               | document (or collection of documents) - the language is
               | more for handling your query and matching it to that
               | information. None of the points he made about
               | inaccuracies and insufficient answers, etc. or replacing
               | lawyers apply.
        
             | Der_Einzige wrote:
              | I have tried a lot of RAG and can tell you that no LLM,
              | including Gemini 1.5 with its 1.5 million token context,
              | will be anywhere near as good at longer context lengths
              | as at shorter context lengths.
              | 
              | Appending huge numbers of tokens to the prompt often
              | leads to the system prompt or user instructions being
              | ignored, and since API-based LLM vendors are terrified
              | of jailbreaks, they won't give you the ability to
              | "emphasize" or "upweight" tokens (despite this being
              | perfectly possible), since you could easily upweight a
              | token to overwhelm the DPO alignment lobotomization
              | that most models go through - so no easy fix for this
              | is coming from OpenAI/Anthropic et al.
        
         | remram wrote:
         | > couldn't believe they used so much Wikipedia to get their
         | answers. This at least seems like an upgrade from that
         | 
          | I don't know if I would even agree with that. Wikipedia
          | doesn't invent/hallucinate answers when confused, and all
          | claims can be traced back to a source. It has the
          | possibility of fabricated information from malicious
          | actors, but even that seems like a step up from LLMs
          | trained on random data (including fabrications), which also
          | add their own hallucinations.
        
           | bongoman42 wrote:
            | Unfortunately, there's plenty of wrong information on
            | Wikipedia, and the sources don't always say what the
            | article is claiming. Another issue is that all sources
            | are not created equal, and you can often find a source to
            | back you up regardless of what you might want backed up.
            | This is especially true in politicised issues like
            | autism, and even in things that might appear
            | uncontroversial, like vaccines and so on.
        
             | ikesau wrote:
              | There's arbitrary "accuracy lowering" vandalism done by
              | (I suspect) bots that alters dates by a few
              | days/months/years, changes the middle initial of
              | someone, or randomizes the output in an example
              | demonstrating how a cipher works.
              | 
              | It can be hard to spot if no one's watching the
              | article. Puts me in a funk whenever I catch it.
        
               | marcosdumay wrote:
                | Some people edit chemistry articles, replacing the
                | reactions with stuff that doesn't make any sense or
                | can't possibly work. Some people change the
                | descriptions of CS algorithms, removing
                | pre-conditions or random steps, or adding a wrong
                | intermediate state. And, maybe worst of all, somebody
                | vandalizes the math articles, changing the
                | explanations into abstract nonsense that nobody who
                | doesn't already know their meaning can ever
                | understand.
        
         | lolinder wrote:
         | > I can imagine the same thing with laws. Preload a city's,
         | county's etc. entire set of laws and for a sentencing, upload a
         | defendant's criminal history report, plea, and other info then
         | the DA/judge/whoever can ask questions to the AI legal advisor
         | just like the doctor does with patient docs.
         | 
          | This has been tried already, and it hasn't worked out well
          | so far for NYC [0]. RAG can help avoid complete
          | hallucinations, but it can't eliminate them altogether,
          | and as others have noted, the failure mode for LLMs when
          | they're wrong is that they're _confidently_ wrong. You
          | can't distinguish between confident-and-accurate bot legal
          | advice and confident-but-wrong bot legal advice, so a savvy
          | user would just avoid bot legal advice altogether.
         | 
         | [0] https://arstechnica.com/ai/2024/03/nycs-government-
         | chatbot-i...
        
         | barrenko wrote:
        | We have a kind of popular legal forum in my country, and I'm
        | convinced that if I managed to scrape it properly and format
        | QA pairs for fine-tuning, it would make a kick-ass legal
        | assistant (paralegal?). Supply it with some actual laws and
        | codification via RAG, and voila. Just need to figure out how
        | to take no liability.
        
         | sqeaky wrote:
        | If the court AI were a cost-cutting measure used before real
        | courts were involved, and appeals to a conventional court
        | could be made, then I think it could be done with current
        | tech. Courts in the US are generally overworked, and I think
        | many would see an AI arbiter as preferable to one-sided plea
        | agreements.
        
         | akira2501 wrote:
         | > It reminded me of a time I was at the ER for a rib injury and
         | could see my doctor Wikipedia'ing stuff
         | 
         | When was this and what country was it in?
         | 
         | > The doctor, the lawyer - like the software engineer - will
         | simply be more powerful than ever
         | 
          | I love that LLMs exist, and this is what people see as the
          | "low hanging fruit." You'd expect that if these models had
          | any real value, they would be used in any other walk of
          | life first; the fact that they're targeted at these
          | professions, to me, highlights that they are not currently
          | useful and that the owners are hoping to recoup their
          | investments by shoving them into the highest-value
          | locations.
         | 
         | Anyways.. if my Doctor is using an LLM, then I don't need them
         | anymore, and the concept of a hospital is now meaningless. The
         | notion that there would be a middle ground here adds additional
         | insight to the potential future applications of this
         | technology.
         | 
          | Where did all the skepticism go? It's all wannabe marketing
          | here now.
        
           | Terretta wrote:
            | > _Anyways.. if my Doctor is using an LLM, then I don't
            | need them anymore, and the concept of a hospital is now
            | meaningless._
           | 
           | Let's test out this "if A then B therefore C" on a few other
           | scenarios:
           | 
           | - If your lawyer is using a paralegal, you don't need your
           | lawyer any more, and the concept of a law firm is now
           | meaningless.
           | 
           | - If your home's contractor is using a day laborer, you don't
           | need your contractor any more, and the concept of a
           | construction company is meaningless.
           | 
           | - If your market is using a cashier, you don't need the
           | manager any more, and the concept of a supermarket is
           | meaningless.
           | 
           | It seems none of these make much sense.
           | 
           | As long as we've had vocations, we've had apprentices to
           | masters of craft, and assistants to directors of work.
           | 
           | That's "all" an LLM is: a secretary pool speed typist with an
           | autodidact's memory and the domain wisdom of an intern.
           | 
           | The part of this that's super valuable is the lateral
           | thinking connections through context, as the LLM has read
           | more than any master of any domain, and can surface ideas and
           | connections the expert may not have been exposed to. As an
           | expert, however, they can guide the LLM's output, iterating
           | with it as they would their assistant, until the staff work
           | is fit for use.
        
           | bschmidt1 wrote:
           | > When was this and what country was it in?
           | 
           | San Francisco in 2019.
           | 
           | > if LLMs had value they would be used elsewhere first
           | therefore they are not currently useful
           | 
           | I don't see how this logically follows. LLMs are already used
           | and will continue to displace tooling (and even jobs) in
           | various positions, whether it's cashiers, medical staff, legal
           | staff, auto shops, police (field work and dispatch), etc. The
           | fact they don't immediately displace knowledge workers is:
           | 
           | 1) A win for knowledge workers: you just got a free and
           | open-source tool that makes you more valuable
           | 
           | 2) Not indicative of a lack of value; it looks more like
           | LLMs finding product-market fit
           | 
           | > the concept of a hospital is now meaningless
           | 
           | That's like saying you won't go to an auto shop that does
           | research, or hire a developer who uses a coding assistant.
           | Why not? They'd just be better, more informed.
        
         | cogman10 wrote:
         | > I used to think AI would replace doctors before nurses, and
         | lawyers before court clerks - now I think it's the other way
         | around.
         | 
         | Nurses don't just read numbers from charts. Part of their
         | duties might be grabbing a doc when numbers are bad, but a
         | lot of the work of nursing is physical: administering drugs,
         | running tests, setting up and maintaining equipment for
         | measurements.
         | Suggesting a nurse would be replaced by AI is almost like
         | suggesting a mechanic would be replaced by AI before the
         | engineer would.
        
       | liampulles wrote:
       | Key point here is that the implementation combines an LLM summary
       | with DIRECT REFERENCES to the source material:
       | https://hotseatai.com/ans/does-the-development-and-deploymen...
       | 
       | That seems to me a sensible approach, because it gives lawyers
       | the context to make it easy to review the result (from my limited
       | understanding).
       | 
       | I wonder if much of what one would want couldn't be achieved
       | by analyzing and storing the text embeddings of legal
       | paragraphs in a vector database, and then finding the top N
       | closest results given the embedding of a legal question (see
       | the sketch below). Then it's no longer a question of an LLM
       | making stuff up, but more of a semantic search.
        
         | still_grokking wrote:
         | That would work better and more efficient.
         | 
         | But than there's no "AI" in there. So nobody would like to
         | throw money on it currently.
        
         | Terr_ wrote:
         | The un-solved problem is how to ensure users _actually verify
         | the results_ , since human laziness is a powerful factor.
         | 
         | In the long run, perhaps the most dangerous aspect of LLM tech
         | is how much better it is at faking a layer of metadata which
         | humans automatically interpret as trustworthiness.
         | 
         | "It told me that cavemen hunted dinosaurs, but it said so in a
         | _very_ articulate and kind way, and I don 't see why the
         | machine would have a reason to lie about that."
        
           | heycosmo wrote:
           | I would like to see solutions (for professionals) that ditch
           | the whole generative part altogether. If it's so good at
           | finding references or identifying relevant passages in large
           | corpora, just show the references. As you said, the "answer"
           | only entices laziness and injects uncertainty.
        
             | gowld wrote:
             | Yes. The most "exciting" part is the worst part of the
             | whole system, that contributes negatively.
        
             | Terr_ wrote:
             | I think the important product-design issue here (which may
             | be sabotaged by the Attract Investor Cash issue) is that
             | labor-savings can backfire when:
             | 
             | 1. It takes longer to verify/debug/fix specious results
             | than to just do it manually.
             | 
             | 2. Specious results are not reliably checked, and
             | something eventually explodes in a very bad way.
        
           | anon373839 wrote:
           | Perhaps the system should be designed to equivocate on any
           | conclusions, while prioritizing display of the source
           | material. "Source X appears to state a rule requiring 2%
           | shareholders to report abc, but I can't say whether it
           | applies: [Block quote Source X]."
        
             | Terr_ wrote:
             | That would be nice, but I cynically suspect it's not
             | something LLMs are constitutionally able to provide.
             | 
             | Since they don't actually model facts or contradictions,
             | adding prompt-text like "provide alternatives" is in effect
             | more like "add weight to future tokens and words that
             | correlate to what happened in documents where someone was
             | asked to provide alternatives."
             | 
             | So the _linguistic forms_ of cautious equivocation are easy
             | to evoke, but reliably getting the logical content might be
             | impossible.
        
       | yieldcrv wrote:
       | It's 2024 and people are still just realizing that LLMs need
       | subtasks, and that "you're prompting it wrong" is the answer
       | to everything.
       | 
       | Maybe "prompt engineering" really is the killer job.
        
       | nocoiner wrote:
       | _We've learned that the combination of high latency, faulty
       | reasoning, and limited document scope kills usage. No lawyer
       | wants to expend effort to ask a detailed question, wait 10
       | minutes for an answer, wade through a 2-pages-long response, and
       | find that the AI made an error._
       | 
       | Nor does any lawyer want to have that same experience with a
       | junior associate (except substitute "two hours" for "10
       | minutes"), yet here we are.
        
       | georgeecollins wrote:
       | I wonder if this "AI will replace your job" is like "AI will
       | drive your car" in that where once something can solve 95% of the
       | problem the general public assumes the last 5% will come very
       | quickly.
       | 
       | Rodney Brooks used to point out that self-driving was perceived
       | by the public as happening very quickly, when he could show early
       | examples in Germany from the 1950s. We all know this kind of AI
       | has been in development a long time and it keeps improving. But
       | people may be overestimating what it can do in the next five
       | years -- like they did with cars.
        
         | barrenko wrote:
         | The last 5% recursively turns into 95% of a new whole, and
         | so on ad nauseam. But at some point it will fold...
        
         | akira2501 wrote:
         | I'd say that's its only value. This is all an obvious open
         | threat against the labor market and is designed to depress
         | wages.
         | 
         | If your business can be "staffed" by an LLM, then it will not
         | be competitive, and you will no longer exist. This is not a
         | possible future in a capitalist market.
        
       | helpfulmandrill wrote:
       | Naively I wonder if the tendency towards "plausible bullsh*t"
       | could be a problem here? Making very convincing legal
       | arguments that rest on precedents that don't exist, etc.
        
         | sorokod wrote:
         | wonder no more [1]
         | 
         |  _In a cringe-inducing court hearing, a lawyer who relied on
         | A.I. to craft a motion full of made-up case law said he "did
         | not comprehend" that the chat bot could lead him astray._
         | 
         | [1] https://www.nytimes.com/2023/06/08/nyregion/lawyer-
         | chatgpt-s...
        
       | cess11 wrote:
       | EU law is case driven, and besides the text of cases you also
       | need to know the books interpreting them, general legal
       | principles that might be applicable and hermeneutic traditions.
       | 
       | They are clearly a long way from a tool that can compete with a
       | human lawyer.
        
       | daft_pink wrote:
       | I would say that it's getting better at answering those
       | questions. I have a list of difficult legal research questions
       | that I worked on at work, and Gemini Pro and Claude Opus are
       | definitely way better than GPT-3, 3.5, and 4.
       | 
       | I believe it will eventually get there and give good advice.
        
         | giobox wrote:
         | What is the situation regarding LLM access to the major
         | repositories of case law and legislation at places like
         | Westlaw/LexisNexis? Those are really basic information sources
         | for lawyers around the world and access is tightly controlled
         | (and expensive!), but it's enormously common for lawyers and
         | law students to need subscriptions to those services.
         | 
         | I'm just curious because I can't imagine either Westlaw or
         | LexisNexis giving up control of access to this information
         | without a fight, and a legal LLM that isn't trained on these
         | sources would be... questionable; they are key sources.
         | 
         | The legislation text can probably be obtained through other
         | channels for free, but the case law records those companies
         | hold are just as critical, especially in Common Law legal
         | systems; the text of the legislation alone isn't enough to
         | gain an understanding of the law in most Common Law systems.
         | 
         | EDIT: Looks like Westlaw is trying its own solution, which
         | is what I would have guessed:
         | https://legal.thomsonreuters.com/en/products/westlaw-edge
        
       | anonu wrote:
       | > We preprocess the regulation so that when a call contains a
       | reference to "Annex III," we know which pages to put into the
       | "junior lawyer's" prompt. This is the LLM-based RAG I mentioned
       | in the introduction.
       | 
       | Is this RAG or just an iteration on more creative prompt
       | engineering?
        
         | pstorm wrote:
         | This is RAG. They are _retrieving_ specific info to
         | _augment_ the _generation_.
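         | 
         | In sketch form, that retrieval step might look something
         | like this (the page map, regex, and llm() callable are
         | hypothetical stand-ins, not the post's actual code):
         | 
         |     import re
         | 
         |     # Hypothetical output of the preprocessing step:
         |     # the regulation text behind each named reference.
         |     REFERENCE_PAGES = {
         |         "Annex III": "text of the Annex III pages...",
         |         "Article 6": "text of the Article 6 pages...",
         |     }
         | 
         |     REF = re.compile(r"\b(Annex [IVXLC]+|Article \d+)\b")
         | 
         |     def answer(task, llm):
         |         """Retrieve the pages behind each reference in
         |         the task, then augment the prompt with them
         |         before generation."""
         |         found = [REFERENCE_PAGES[r]
         |                  for r in REF.findall(task)
         |                  if r in REFERENCE_PAGES]
         |         context = "\n\n".join(found)
         |         return llm("Relevant regulation text:\n"
         |                    f"{context}\n\nTask: {task}")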
        
       | sandworm101 wrote:
       | Don't be too worried about LLM arms races. Law is not as
       | complicated as it seems on TV. Having access to a better LLM
       | isn't going to somehow give you access to the correct incantation
       | necessary to dismiss a case. The vast majority, like 99.99% of
       | cases, turn on completely understood legal issues. Everyone knows
       | everything.
        
         | Foobar8568 wrote:
         | "Can you freely use information from a website?" is a simple
         | question, yet... We have LLMs.
        
         | aorloff wrote:
         | Perhaps, but a lot of lawyering is very expensive. If that
         | turns out to not be so expensive, the practice is going to
         | change.
         | 
           | Right now the court system works at a snail's pace, because
           | it expects that expensive lawyering happens slowly. If that
           | assumption starts to change, the ineffectiveness of the
           | courts, due to their lack of modernization, will really gum
           | up the system, because they are nowhere near prepared for a
           | world in which lawyering is cheap and fast.
        
           | philipwhiuk wrote:
           | Nah, courts are already hugely backlogged.
        
       | balphi wrote:
       | How are you using regex to end the while loop? Are you detecting
       | a specific substring or is it something more complex?
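       | 
       | For example, I could imagine something like this (the sentinel
       | pattern and llm_step() are pure guesses on my part):
       | 
       |     import re
       | 
       |     # Guessed sentinel: stop once the model emits a
       |     # final-answer marker in its output.
       |     DONE = re.compile(r"^FINAL ANSWER:", re.MULTILINE)
       | 
       |     def llm_step(transcript):
       |         """Stand-in for one model call in the loop."""
       |         return transcript + "\nFINAL ANSWER: 42"  # toy
       | 
       |     response = ""
       |     while not DONE.search(response):
       |         response = llm_step(response)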
        
       | w10-1 wrote:
       | Yes, law applies rules to facts.
       | 
       | No, connecting the facts and rules will not give you the answer.
       | 
       | Lawyers are only required when there are real legal issues:
       | boundary cases, procedural defenses, countervailing leverage...
       | 
       | But sometimes legal heroes like Witkins drag through all the
       | cases and statutes, identifying potential issues and
       | condensing them into summaries. New lawyers use these as a
       | starting point for their investigations.
       | 
       | So a Law LLM first needs to be trained on Witkins to understand
       | the language of issues, as well as the applicable law.
       | 
       | Then somehow the facts need to be loaded in a form recognizable
       | as such (somewhat like a doctor translating "dizziness" to
       | "postural hypotension" with some queries). That would be an
       | interesting LLM application in its own right.
       | 
       | Putting those together in a domain-specific way would be a great
       | business: target California Divorce, Texas product-liability
       | tort, etc.
       | 
       | Law firms changed from pipes to pyramids in the 1980s as firms
       | expanded their use of associates (and started the whole
       | competition-to-partnership). This could replace associates, but
       | then you'd lose the competitiveness that disciplines associates
       | (and reduce buyers available for the partnership). Also,
       | corporate clients nurture associates as potential replacements
       | and redundant information sources, as a way of managing their
       | dependence on external law firms. For LLMs to have a sizable
       | impact on law, you'd need to sort out the transaction cost
       | economics features of law firms, both internally and externally.
        
       | ei625 wrote:
       | As with software developers, their value isn't just in having
       | technical knowledge.
        
       | spdustin wrote:
       | I've always felt that a "smart" person isn't smart because they
       | know everything, but because they know _how to find the answers_.
       | Smart users of LLMs will use the output as an opportunity to
       | learn how to think about their problem, and smart implementations
       | of LLMs will guide the user to do so.
       | 
       | I'm not saying that every interaction must be Socratic, but
       | that the LLM should neither be, nor present itself as, the
       | answer.
        
       | jrm4 wrote:
       | Yup. As a lawyer and IT instructor, the "killer" application
       | really is "knowledgeable literate human-like personal
       | librarian/intern"
       | 
       | When they can do the following, we'll really be getting
       | somewhere.
       | 
       | "If I'm interpreting this correctly, most sources say XXXXXX,
       | does that sound right? If not, please help correct me?"
        
       ___________________________________________________________________
       (page generated 2024-04-29 23:01 UTC)