Demystifying LLMs: Crafting Multiple Choice Questions from Law Outlines

In today’s post, we’ll explore how legal educators and law students can use Large Language Models (LLMs) like ChatGPT and Claude to create multiple-choice questions (MCQs) from a law school outline.

Understanding the Process

My first attempt was simply to ask the LLM the best way to make MCQs, but the feedback wasn’t particularly helpful, so I did some digging. Anthropic recently shed light on their method of generating multiple-choice questions, and it’s a technique that could be immensely beneficial for test preparation, besides being a useful way to conceptualize how to make effective use of the models for studying. They utilize XML tags, which may sound technical, but in essence these are just simple markers used to structure content. Let’s break this process down into something you can understand and use, even if you’re not a wizard in Technical Services who is comfortable with XML.

Imagine you have a law school outline on federal housing regulations. You want to test your understanding or help students review for exams. Here’s how an LLM can assist you:

STEP 1: Prepare Your Outline

Ensure that your outline is detailed and organized. It should contain clear sections, headings, and bullet points that delineate topics and subtopics. This structure will help the LLM understand and navigate your content. If you’re comfortable using XML or Markdown, this can be exceptionally helpful. Internally, the model identifies the XML tags and the text they contain, using this structure to generate new content. It recognizes the XML tags as markers that indicate the start and end of different types of information, helping it to distinguish between questions and answers.

The model uses the structure provided by the XML tags to understand the format of the data you’re presenting.

STEP 2: Uploading the Outline

Upload your outline into the platform that you’re using. Most platforms that host LLMs will allow you to upload a document directly, or you may need to copy and paste the text into a designated area.

STEP 3: Crafting a General Prompt

You can write a general prompt that instructs the LLM to read through your outline and identify key points to generate questions. For example:

“Please read the uploaded outline on federal housing regulations and create multiple-choice questions with four answer options each. Focus on the main topics and legal principles outlined in the document.”

STEP 4: Utilizing Advanced Features

Some LLMs have advanced features that can take structured or semi-structured data and understand the formatting. These models can sometimes infer the structure of a document without explicit XML or Markdown tags. For instance, you might say:

“Using the headings and subheadings as topics, generate multiple-choice questions that test the key legal concepts found under each section.”

AND/OR

Give the model some examples with XML tags so it can better replicate what you would like (a technique known as “few-shot prompting”):

<Question>
What are "compliance costs" in HUD regulations?
</Question>
<Answers>
1. Fines for non-compliance.
2. Costs associated with adhering to HUD regulations.
3. Expenses incurred during HUD inspections.
4. Overheads for HUD compliance training.
</Answers>

The more examples you give, the better it’s going to be.
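To make this concrete, here is a minimal sketch (in Python, with a hypothetical example question and placeholder instruction wording) of how you might assemble a few-shot prompt from XML-tagged examples before pasting it into a chat window or sending it through an API:

```python
# Sketch: assemble a few-shot MCQ prompt from XML-tagged examples.
# The example question and instruction wording are placeholders --
# adapt them to your own outline and platform.

EXAMPLES = [
    {
        "question": 'What are "compliance costs" in HUD regulations?',
        "answers": [
            "Fines for non-compliance.",
            "Costs associated with adhering to HUD regulations.",
            "Expenses incurred during HUD inspections.",
            "Overheads for HUD compliance training.",
        ],
    },
]

def format_example(ex):
    """Wrap one example in the XML tags the model should imitate."""
    answers = "\n".join(f"{i}. {a}" for i, a in enumerate(ex["answers"], 1))
    return f"<Question>\n{ex['question']}\n</Question>\n<Answers>\n{answers}\n</Answers>"

def build_prompt(outline_text, examples=EXAMPLES):
    """Combine instructions, few-shot examples, and the outline text."""
    shots = "\n\n".join(format_example(e) for e in examples)
    return (
        "Create multiple-choice questions from the outline below, "
        "following the exact format of these examples:\n\n"
        f"{shots}\n\n<Outline>\n{outline_text}\n</Outline>"
    )

print(build_prompt("I. Fair Housing Act\n  A. Protected classes ..."))
```

Because the examples live in one list, you can keep adding to it over the semester and reuse the same prompt for every outline.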

AND/OR

Depending on the size of your outline and the context limit of the model you are using (OpenAI recently expanded their limit dramatically), you can also use the LLM to add these XML tags for you. Prompt it to apply tags, show it an example of the types of tags you would like for your content, and then tell the model to do the same with the rest of your outline:

<LawSchoolOutline>
    <CourseTitle>Constitutional Law</CourseTitle>
    <Section>
        <SectionTitle>Executive Power</SectionTitle>
        <Content>
            <SubSection>
                <SubSectionTitle>Definition and Scope</SubSectionTitle>
                <Paragraph>
                    Executive power is vested in the President of the United States and is defined as the authority to enforce laws and ensure they are implemented as intended by Congress.
                </Paragraph>
            </SubSection>
        </Content>
    </Section>
</LawSchoolOutline>
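If you do have the model tag your outline, it is worth checking that the result is well-formed before saving it for reuse. A quick sanity check with Python’s built-in XML parser (the tag names below just follow the example above):

```python
# Sanity-check that an LLM-tagged outline is well-formed XML before
# reusing it. Tag names match the Constitutional Law example above.
import xml.etree.ElementTree as ET

outline_xml = """<LawSchoolOutline>
    <CourseTitle>Constitutional Law</CourseTitle>
    <Section>
        <SectionTitle>Executive Power</SectionTitle>
        <Content>
            <SubSection>
                <SubSectionTitle>Definition and Scope</SubSectionTitle>
                <Paragraph>Executive power is vested in the President...</Paragraph>
            </SubSection>
        </Content>
    </Section>
</LawSchoolOutline>"""

def is_well_formed(xml_text):
    """Return True if the text parses as XML, False otherwise."""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False

print(is_well_formed(outline_xml))       # True
print(is_well_formed("<Question>oops"))  # False
```

LLMs occasionally drop a closing tag partway through a long outline, so a two-line check like this can save you a confusing debugging session later.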

STEP 5: Refining the Prompt

It is very rare that my first try with any of these tools produces fantastic output. It is often a “conversation with a robot sidekick” (as many of you have heard me say at my presentations) and requires you to nudge the model to create better and better output.

If the initial questions need refinement, you can provide the LLM with more specific instructions. For example:

“For each legal case mentioned in the outline, create a question that covers the main issue and possible outcomes, along with incorrect alternatives that are plausible but not correct according to the case facts.”

Replicating the Process

Students can replicate this process for other classes using the same prompt. The trick here is to stay as consistent as possible with the way that you structure and tag your outlines. It might feel like a lot of work on the front end to create 5+ examples, apply tags, etc. but remember that this is something that can be reused later! If you get a really good MCQ prompt, you could use it for every class outline that you have and continue to refine it going forward.

Big Brother

This week, OpenAI announced new features to their platform at their first keynote event, including a new GPT-4 Turbo with a 128K context window, GPT-4 Turbo with Vision, the DALL·E 3 API, and more. They also announced their Assistants API, which includes their own retrieval-augmented generation (RAG) pipeline. Today, we will focus on OpenAI’s entry into the RAG market.

At the surface level, RAG boils down to text generation models like ChatGPT retrieving data, such as documents, to assist users with question answering, summarization, and so on. Behind the scenes, however, other factors are at play, such as vector databases, document chunking, and embedding models. Most RAG pipelines rely on an external vector database and require compute to create the embeddings. What OpenAI’s retrieval tool brings to the table is an all-encompassing RAG system that eliminates the need for an external database and the compute required to create and store the embeddings. Whether OpenAI’s retrieval system is optimal is a story for another day. Today we are focusing on the data implications.
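The chunk-embed-retrieve loop described above can be sketched in a few lines. This is a toy illustration only: the bag-of-words “embedding” stands in for a real embedding model, and a real pipeline would store vectors in a vector database rather than a Python list.

```python
# Toy retrieval-augmented generation (RAG) sketch: chunk a document,
# "embed" the chunks, and retrieve the chunk closest to a question.
# Bag-of-words vectors stand in for a real embedding model.
from collections import Counter
import math

def embed(text):
    """Stand-in embedding: a bag-of-words vector (a Counter)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def chunk(document, size=25):
    """Split a document into fixed-size word chunks."""
    words = document.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(question, chunks):
    """Return the chunk most similar to the question."""
    q = embed(question)
    return max(chunks, key=lambda c: cosine(q, embed(c)))

doc = ("The Fair Housing Act prohibits discrimination in the sale, rental, "
       "and financing of dwellings based on protected characteristics. "
       "Compliance costs are the expenses of adhering to HUD rules, "
       "including training, inspections, and recordkeeping.")
best = retrieve("what are compliance costs", chunk(doc))
# `best` would then be pasted into the model's prompt as context.
```

The point is that every step here (chunking strategy, embedding model, similarity search) is a design decision, and OpenAI’s retrieval tool makes all of those decisions for you.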

Data is the new currency fueling the new economy. Big Tech aims to take control of that economy by ingesting organizations’ private data, including IP, leading to a “monolithic system” that completely controls users’ data. Google, Microsoft, Adobe, and OpenAI are now offering indemnification to their users against potential copyright infringement lawsuits related to generative AI, aiming to protect their business model by ensuring more favorable legal precedents. This strategy is underscored by the argument that both the input (ideas, which are uncopyrightable) and the output (machine-generated expressions, deemed uncopyrightable by the US Copyright Office) of generative AI processes do not constitute copyright infringement.

The consequences of Big Tech having their way could be dire, leading us to a cyberpunk dystopia that none of us want to live in. Technology and its algorithms would be in charge, and our personal data could be used to manipulate us. Our data reveals our interests, private health information, location status, and more. When algorithms feed us only limited, targeted information based on our existing interests and views, they restrict the outside influence and diversity of opinion that are crucial to freedom of thought. Organizations must not contribute to this cyberpunk dystopia where Big Tech becomes Big Brother. Furthermore, companies put their employees, clients, and stakeholders at risk when handing data to Big Tech, which favors the role of tortfeasor over that of the good Samaritan who complies with consumer privacy laws.

To prevent Big Brother, organizations should implement their own RAG pipelines. Open-source frameworks such as LlamaIndex, Qdrant, and LangChain can be used to create powerful RAG pipelines with your privacy and interests protected. LLMWare also released an open-source RAG pipeline and domain-specific embedding models. Generative AI is a powerful tool that can enhance our lives, but in the wrong hands the cyberpunk nightmare can become a reality. The ease of using prebuilt, turn-key systems, such as those offered by OpenAI, is appealing. However, the long-term risks associated with entrusting our valuable data to corporations, without a regulatory framework or protections, raise concerns about a potentially perilous direction.

Is Better Case Law Data Fueling a Legal Research Boom?

Recently, I’ve noticed a surge of new and innovative legal research tools. I wondered what could be fueling this increase, and set off to find out more. 

The Moat

[Image generated by DALL-E: a castle made of case law reporters, with sad business children trying to construct their own versions out of pieces of paper. They just look like sandcastles.]

Historically, acquiring case law data has been a significant challenge, acting as a barrier to newcomers in the legal research market. Established players are often protective of their data. For instance, in an antitrust counterclaim, ROSS Intelligence accused Thomson Reuters of withholding their public law collection, claiming they had to instead resort to purchasing cases piecemeal from sources like Casemaker and Fastcase.  Other companies have taken more extreme measures. For example, Ravel Law partnered with the Harvard Law Library to scan every single opinion in their print reporter collections. There’s also speculation that major vendors might even license some of their materials directly to platforms like Google Scholar, albeit with stringent conditions.

The New Entrants

Despite the historic challenges, several new products have recently emerged offering advanced legal research capabilities:

  • Descrybe.ai (founded 2023) – This platform leverages generative AI to read and summarize judicial opinions, streamlining the search process. Currently hosting around 1.6 million summarized opinions, it’s available for free.
  • Midpage (2022) – Midpage emphasizes integrating legal research into the writing process, letting users employ generative AI to draft documents from selected sources (see Nicola Shaver’s short writeup on Midpage here). Midpage is currently free at app.midpage.ai.
  • CoPilot (by LawDroid, founded 2016) – Initially known for creating chatbots, LawDroid introduced CoPilot, a GPT-powered AI legal assistant, in 2023. It assists with various tasks, including research, translation, and summarization. CoPilot is available in beta as a web app and a Chrome extension, and is free for faculty and students.
  • Paxton.ai (2023) – Another generative AI legal assistant, Paxton.ai allows users to conduct legal research, draft documents, and more. Limited free access is available without signup at app.paxton.ai, although case law research will require you to sign up for a free account.
  • Alexi (2017) – Originally focused on Canadian law, Alexi provides legal research memos. They’ve recently unveiled their instant memos, powered by generative AI. Alexi is available at alexi.com and provides a free pilot.

Caselaw Access Project and Free Law Project

With the Caselaw Access Project, launched in 2015, Ravel Law and Harvard Law Library changed the game. Through their scanning project, Harvard received rights to the case law data, and Ravel gained an exclusive commercial license for 8 years. (When Lexis acquired Ravel a few years later, they committed to completing the project.) Although the official launch date of free access is February 2024, we are already seeing a free API at Ravel Law (as reported by Sarah Glassmeyer).

Caselaw Access Project data is only current through 2020 (scanning was completed in 2018, and has been supplemented by Fastcase donations through 2020) and does not include digital-first opinions. However, this gap is mostly filled through CourtListener, which contains a quite complete set of state and federal appellate opinions for recent years, painstakingly built through their network of web scrapers and direct publishing agreements. CourtListener offers an API (along with other options for bulk data use).
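For anyone curious what using CourtListener’s API looks like in practice, here is a small sketch that builds an opinion-search request URL. The endpoint path and parameter names follow CourtListener’s public REST API documentation as I understand it, so verify against the current docs (and note that authenticated requests pass an API token in the Authorization header):

```python
# Sketch: build a CourtListener opinion-search request URL.
# Endpoint path and parameters are based on my reading of
# CourtListener's REST API docs; confirm against current docs.
from urllib.parse import urlencode

BASE = "https://www.courtlistener.com/api/rest/v3/search/"

def search_url(query, court=None):
    """Return a search URL for CourtListener's opinion search."""
    params = {"q": query, "type": "o"}  # "o" = opinions
    if court:
        params["court"] = court
    return BASE + "?" + urlencode(params)

url = search_url("fair housing", court="scotus")
# You would then GET this URL (e.g., with the requests library),
# sending "Authorization: Token <your token>" for authenticated access.
```

Bulk data users can skip the request-by-request API entirely and download CourtListener’s bulk files instead.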

And indeed, Caselaw Access Project and Free Law Project just recently announced a dataset called Collaborative Open Legal Data (COLD) – Cases. COLD Cases is a dataset of 8.3 million United States legal decisions with text and metadata, suitable for use in machine learning and natural language processing projects.

Most of the legal research products I mentioned above do not disclose the precise source of their case law data. However, both Descrybe.ai and Midpage point to CourtListener as a partner. My theory/opinion is that many of the others may be using this data as well, and that these new, more reliable, and more complete sources of data are responsible for fueling some amazing innovation in the legal research sphere.

What Holes Remain?

Reviewing the coverage of CourtListener and Caselaw Access Project, it appears to me that, combined, they have:

  • 100% of all published U.S. case law from 2018 and earlier (state and federal)
  • 100% of all U.S. Supreme Court, U.S. Circuit Court of Appeals, and state appellate court cases

There are, nevertheless, still a few holes that remain in the coverage:

  • Newer Reporter Citations. Newer appellate court decisions may not have reporter citations within CourtListener. These may be supplemented as Fastcase donates cases to Caselaw Access Project.
  • Newer Federal District Court Opinions. Although CourtListener collects federal decisions marked as “opinions” within PACER, these decisions are not yet available in their opinion search. Therefore, very few federal district court cases are available for the past 3-4 years. This functionality will likely be added, but even when it is, district courts are inconsistent about marking decisions as “opinions” and so not all federal district court opinions will make their way to CourtListener’s opinions database. To me, this brings into sharp relief the failure of federal courts to comply with the 2002 E-Government Act, which requires federal courts to provide online access to all written opinions.
  • State Trial Court Decisions. Some other legal research providers include state court trial-level decisions. These are generally not published on freely available websites (so CourtListener cannot scrape them) and are also typically not published in print reporters (so Caselaw Access Project could not scan them).
  • Tribal Law. Even the major vendors have patchy access to tribal law, and CourtListener has holes here as well.

The Elephant in the Room

Of course, another major factor in the increase in legal research tools may be simple economics. In August, Thomson Reuters acquired the legal research provider Casetext for the eye-watering sum of $650 million.  And Casetext itself is a newer legal research provider, founded only in 2013. In interviews, Thomson Reuters cited Casetext’s access to domain-specific legal authority, as well as its early access to GPT-4, as key to its success. 

What’s Next?

Both CourtListener and Caselaw Access Project have big plans for continuing to increase access to case law. CAP will launch free API access in February 2024, coordinating with LexisNexis, Fastcase, and the Free Law Project on the launch. CourtListener is planning a scanning project to fix remaining gaps in their coverage (CourtListener’s Mike Lissner tells me they are interested in speaking to law librarians about this – please reach out). And I’m sure we can expect to see additional legal research tools, and potentially entire LLMs (hopefully open source!), trained on this legal data.

Know of anything else I didn’t discuss? Let me know in the comments, or find me on social media or email.

Non-Legal Tangent: A Renewed Appreciation for ChatGPT

Please allow me a brief interlude for a non-legal tangent to update you on an unexpected ChatGPT medical use case and reason for my delayed posting.

[Image: DALL-E 3 generated image showing a woman divided by uncertainty and struggle with language on one side and relief and clarity on the other.]

On October 3rd, I was driving home, the usual thoughts of dinner plans swirling in my head. Unfortunately, the normalcy of my evening shattered as I exited the freeway and stopped at the traffic light. The driver behind me failed to stop at the light or for the accident he caused. Thinking that the damage was minor, I was more aggravated than worried as I described the events to the responding officer.

A few days later, my ability to focus disappeared. What should take minutes stretched into hours. After a trip to see my doctor, I was diagnosed with a mild concussion and told to avoid electronic screens. But the stubborn mule in me decided to power through grading assignments and teaching classes. Bad idea. I ended up causing myself great pain and extended my screen restrictions further.

The most frustrating part? I was suddenly missing words that I had been using for 20+ years. I’d stare at sentences I’d written, knowing something was off, but the right word eluded me. This was terrifying for someone whose profession revolves around precise and accurate word selection. I actively sought to regain my language capabilities.

It remains unclear what led me to the notion that ChatGPT could be a remedy to this problem. I soon found myself, however, feeding incorrect sentences to the chatbot, explaining the improper word choice, and requesting alternatives. And voila! Within seconds, ChatGPT offered options, often including the word that my mind was denying me. If the word did not come up right away, a prompt or two usually provided me with the word I sought. I was beyond grateful for the gift of my missing words.

Fast forward a month, and I am finally feeling closer to myself again. The missing words are minimal, but my appreciation for this technology has not diminished. In addition to being thankful for generative AI, I have begun wondering about its applications for others who have suffered from similar issues. My co-blogger, Becka Rich, is delving into the technology’s application for neurodiverse individuals, research which I follow closely. But I keep wondering if the technology has potential to benefit those who have suffered from traumatic brain injury or even mild dementia.

Two personal reasons shift my thoughts in this direction, beyond my recent concussion. First, I once had a student who was in a serious motor vehicle accident with a significant traumatic brain injury. She was on medical leave for over a year, and when she came back her cognitive struggles to write and speak at her previous levels were obvious. I wish this technology had been available to her then. It may have expedited regaining her confidence and language skills. Second, my family has a history of dementia. One of my biggest fears is losing myself to this disease eventually. Could this technology help delay a decline by reminding a dementia patient of their knowledge and keeping their memory active?

With these motivating thoughts, I began and continue researching the issue. Although abundant literature explores generative AI’s role in diagnostics and treatment planning, a discernible void exists with regard to patient use in cognitive rehabilitation. I finally came across a paper today that discusses AI’s use for diagnosing dementia and goes on to speculate that it has promise as part of the patient’s cognitive rehabilitation toolbox. Unfortunately, the authors do not delve too deeply into this topic or hint that research is currently being conducted on the issue (see p. 8 of PDF). This area seems ripe for further research on the issue.

This post wavers a bit from our legal focus, but hopefully you stuck with me through my non-legal tangent about my personal hiccup and my resulting discovery of an unexpected benefit of access to generative AI. I am curious to know what other, non-legal (as opposed to illegal) uses of generative AI you wish to see explored. While I am certainly not qualified to undertake medical research like this, I hope that this post will inspire someone who is qualified and who can help other grateful patients.

Beware the Legal Bot: Spooky Stories of AI in the Courtroom

The “ChatGPT Attorney” case has drawn much attention, but it’s not the only example of lawyers facing problems with AI use. This blog will compile other instances where attorneys have gotten into trouble for incorporating AI into their practice. Updates will be made as new cases or suggestions arise, providing a centralized resource for both legal educators and practicing attorneys (or it can be used to update a Libguide 😉). I’ll also add this to one of our menus or headings for easy access.

Attorney Discipline 

Park v. Kim, No. 22-2057, 2024 WL 332478 (2d Cir. Jan. 30, 2024)

“Attorney Jae S. Lee. Lee’s reply brief in this case includes a citation to a non-existent case, which she admits she generated using the artificial intelligence tool ChatGPT. Because citation in a brief to a non-existent case suggests conduct that falls below the basic obligations of counsel, we refer Attorney Lee to the Court’s Grievance Panel, and further direct Attorney Lee to furnish a copy of this decision to her client, Plaintiff-Appellant Park.”

Mata v. Avianca, Inc. (1:22-cv-01461) District Court, S.D. New York 

I will not belabor the ChatGPT attorney (since it has been covered by real journalists like the NYT) – only provide links to the underlying dockets in case you need them since I get asked for them fairly often:

(Fireworks start at the May 4, 2023 OSC)

Zachariah Crabill, Colorado Springs

In a less publicized case from Colorado, an attorney, Zachariah Crabill, relied on ChatGPT to draft a legal motion, only to find out later that the cited cases were fictitious. Unfortunately, the court filings are not accessible through El Paso County’s records or Bloomberg Law. If any Colorado law librarians can obtain these documents, please contact me, and I’ll update this post accordingly.

News articles: 

Zachariah was subsequently sanctioned and suspended:

Ex parte Allen Michael Lee, No. 10-22-00281-CR, 2023 WL 4624777 (Tex. App.—Waco July 19, 2023)

An opinion by Chief Justice Tom Gray explains that Allen Michael Lee faces charges related to child sexual assault, with bail set at $400,000, which he hasn’t been able to post. Lee sought a bail reduction through a pre-trial habeas corpus application, but the court denied this, leading Lee to argue that the denial was an abuse of discretion due to excessive initial bail. However, his appeal was critiqued for inadequate citation, as the cases he referenced either didn’t exist or were unrelated to his arguments.

Updates:

David Wagner, This Prolific LA Eviction Law Firm Was Caught Faking Cases In Court. Did They Misuse AI?, LAist (Oct 12, 2023)
Submitted by my co-author Rebecca Fordon

“Cuddy Law Firm in New York has been submitting exhibits of transcripts of interactions with ChatGPT to their motions for attorneys’ fees (essentially a back and forth to zero in on what is a reasonable rate) in several cases in S.D.N.Y.”
[This is an ongoing action and we’re waiting to see if it is allowed]
Submitted by reader Jason as a comment (very much appreciated, Jason)

A Spooky Glimpse into the Future

In 2019, Canadian Judge Whitten reduced an attorney’s requested fees on the grounds that the attorney had not utilized AI technology:

The decision concerned a request for attorneys’ fees and expenses by defendant, Port Dalhousie Vitalization Corporation (PDVC). The court granted summary judgment in PDVC’s favor against a woman who sued PDVC after she slipped and fell at an Ontario bar for which PDVC was the landlord. The bar, My Cottage BBQ and Brew, defaulted in the case. In his ruling, Justice Whitten mentioned that the use of AI in legal research would have reduced the amount of time one of the attorneys for the defendant would have spent preparing his client’s case. 

https://www.lexisnexis.com/community/insights/legal/b/thought-leadership/posts/judge-slams-attorney-for-not-using-ai-in-court

In domains where AI can significantly expedite workflows, it could indeed become standard practice for judges to scrutinize fee requests more rigorously. Attorneys might be expected to leverage the latest technological tools to carry out tasks more efficiently, thereby justifying their fees. In this scenario, sticking to traditional, manual methods could be perceived as inefficient, and therefore, not cost-effective, leading to fee reductions. This has led many people to wonder if AI will expedite the decline of the billable hour (for more on that please see this fantastic discussion on 3 Geeks and a Law Blog, AI-Pocalypse: The Shocking Impact on Law Firm Profitability).

We hope that you have a Happy Halloween!

ChatGPT-4 Can See Us Now! And Our Desiccated Potatoes…

Within the rapidly advancing realm of generative AI, ChatGPT is expanding its inventory of human senses available to its GPT-4 subscribers. For a mere $20 a month, you can experience its new ability to see, speak, and hear! One of these senses, the “vision” enhancement, allows the user to upload images which ChatGPT can analyze and answer questions about. Currently, YouTube is abuzz with endless creative demonstrations of this technology – solving math problems, calculating food calories in a shopping cart, identifying landmarks in photos, reading CT scans, turning sketches into art, creating fashion designs, tracking wildlife, aging or de-aging faces, detecting plant diseases, restoring old photos, suggesting interior design changes, and analyzing aerial images for urban development and natural disasters. Multiple videos are highlighting an intriguing paper, “The Dawn of LMMs: Preliminary Explorations with GPT-4V(ision),” which explores over 100 potential applications of this feature, examining how well the system performs for each.

If you don’t subscribe to ChatGPT-4, you can still check out the image analysis concept using the free open source LLaVA: Large Language Model Vision Assistant.

Originally for this blog post, I was going to examine a few examples of possible uses of this technology for legal practice. However, I got bogged down by my first example, analyzing evidence in photographs, because ChatGPT had so much to say about the image I uploaded! So, I’m focusing solely on that one.

Photograph Analysis

Photographs obviously play a huge role in legal practice. Lawyers, prosecutors, jurors, and others often rely heavily on photographs to understand events and assign blame – be it a photograph from a car accident, a crime scene, or even the positioning of a car in a parking violation scenario. Of course, in crime dramas and true crime documentaries, either a brilliant detective identifies the key piece of evidence in the picture, or the analysis is crowdsourced by the public when it appears in the news. Now, with ChatGPT’s newfound ability to “see,” it’s essentially doing its own form of crowdsourcing. Using the wisdom of countless human eyes contained in its massive training data, ChatGPT-4 can assist and provide insights that an individual pair of human eyes might miss.
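For readers using the API rather than the chat interface, here is a sketch of how a local photo gets packaged for a vision request. The message structure follows OpenAI’s chat API image format as I understand it at the time of writing; the file contents and question below are placeholders:

```python
# Sketch: package a local photo for a GPT-4 vision request.
# Message structure follows OpenAI's chat API image format at the
# time of writing; the image bytes and question are placeholders.
import base64

def image_message(image_bytes, question, mime="image/jpeg"):
    """Build a chat message pairing a question with an inline image."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url",
             "image_url": {"url": f"data:{mime};base64,{b64}"}},
        ],
    }

msg = image_message(
    b"\xff\xd8placeholder-jpeg-bytes",
    "Please list the items you recognize in this picture.",
)
# `msg` would be one element of the `messages` list passed to a
# chat completion call with a vision-capable model.
```

In the chat interface, all of this happens behind the upload button; the API version just makes the plumbing visible.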

Example – Analysis of Office Shelf Photograph

To test its abilities, I snapped a goofy photo of a shelf in my office and quizzed ChatGPT about the things it spotted. I was pretty impressed by its descriptive abilities and knowledge about some of the items. Exploring its possible uses for, say, the police, I pretended the picture was taken in the home of an international assassin, and then asked for some clues about the assassin. I think that it had some interesting ideas!

I uploaded the image and asked away!

Me: Hi! Please list the items you recognize in this picture.

ChatGPT: Sure! Here are the items I recognize in the picture:

  1. A green piñata (or decorative item) made of fringed paper.
  2. A figurine of an animal, possibly a kangaroo or wallaby, holding red objects (perhaps apples).
  3. A piece of wood or nut with googly eyes attached, resembling a face with an open mouth.
  4. Two glass or ceramic bottles with decorative paintings or inscriptions.
  5. A candy cane or similarly striped stick.
  6. A wooden hand sculpture.
  7. A tray or plate with a pink border.
  8. Various stickers or decals on a gray surface, including ones that say “Night Owl”, “Juicy”, and others.

Let me know if you need more information on any of these items!

Me: Great! You see the little bottle with the nature scene on it? Tell me what you know about that picture.

ChatGPT: Certainly! The bottle you’re referring to appears to be a type of “snuff bottle.” Snuff bottles are small bottles which were primarily used in ancient China to hold powdered tobacco (snuff). These bottles became popular during the Qing Dynasty. The use of snuff was a fashion among the Chinese during this period, and they liked to carry it in small bottles like the one in your picture…. (Note: It continued for 3 more paragraphs.)

Me: Can you read the Chinese characters on the bottle?

ChatGPT: (Note: No, too small, but could read it if I uploaded another picture.)

Me: If I told you that the brown thing with eyes is a desiccated potato, could you take a guess at how long it has been drying out?

ChatGPT: While I can provide a general overview based on the image and your description, it’s important to note that accurately determining the exact duration of desiccation for a potato based solely on a photograph is challenging.

That said, when a potato dries out, it loses its moisture and shrinks, and the skin can become wrinkled and tough. Over time, as the potato continues to dry, it may become lighter in weight and more shriveled in appearance. The potato in the image does appear quite shriveled and dried out.

However, the exact duration of desiccation can vary based on several factors, including.. (Note: It goes on with a list of factors.)

Me: Yeah, that was a good guess! I accidentally left it in my office when I went to the AALL conference in July, and it is October 18 now.

ChatGPT: Ah, that makes sense! Leaving a potato in an environment like an office for several months would certainly lead to the kind of desiccation observed in the image. The absence of moisture and a relatively stable environment would have contributed to its drying out. It’s interesting to see how everyday items can transform over time when left in certain conditions. I hope your AALL conference went well, and thanks for sharing this intriguing observation with me!

Me: Yeah, I was surprised at how it looked after a week! Can you tell what that is in the potato’s mouth?

ChatGPT: (No, didn’t guess correctly.)

Me: What if I gave you a hint? The tooth fairy.

ChatGPT: Ah, with that hint in mind, it seems the object in the potato’s mouth could represent a “tooth”! Perhaps it’s a playful representation of the potato losing a tooth and waiting for the tooth fairy. In that context, the object could be made of a variety of materials resembling a tooth, such as a piece of white foam, plastic, or even a small piece of chalk or stone. It’s a fun and creative idea!

Me: Actually, one day I was looking at the potato, and I saw one of my daughter’s baby teeth on the shelf! I don’t know when I put it there, but I do remember finding one in my wallet and thinking that I should hold on to it. So, I assume I stuck it on my shelf for safekeeping.

ChatGPT: That’s both surprising and heartwarming! Finding personal mementos in unexpected places can bring back a rush of memories and emotions. Baby teeth are often cherished keepsakes for many parents, reminding them of the fleeting nature of childhood. Your discovery adds a whimsical twist to the tradition of the tooth fairy! The potato unintentionally became a guardian for your daughter’s baby tooth. It’s a unique and memorable story you can share with her in the future.

Me: If this picture had been taken by the police at the home of, say, an international assassin, would it give us any clues about the assassin?

ChatGPT: While it’s fun to imagine scenarios, it’s essential to approach such questions with a good dose of caution and skepticism. Using the image you provided, let’s hypothesize in a fictional context:

If the image were taken at the home of an “international assassin” in a crime novel or movie, investigators might look for clues in the following ways:

  1. Personal Touches: The desiccated potato with a baby tooth could suggest a sentimental side or a connection to family. The tooth might indicate the assassin has a child or keepsakes from their past.
  2. Cultural Clues: The snuff bottle with Chinese characters could hint at a connection to China, either personally, professionally, or as a destination of interest.
  3. Hobbies or Interests: The various trinkets and decorations might suggest that the assassin has an interest in art, culture, or collecting unique items.
  4. Stickers and Decals: The stickers on the gray surface might provide clues about places they’ve visited, brands they like, or organizations they support or are involved in.
  5. Behavioral Clues: The fact that the potato was left to desiccate for months might indicate a person who travels frequently or isn’t often at home.
  6. Potential Disguises: Items like the piñata could be tools for blending in at parties or social events.

However, it’s essential to note that these are purely speculative interpretations based on a fictional scenario. In a real-world context, such deductions would be highly speculative and could lead to false conclusions. Personal items in someone’s home often reflect their tastes, memories, and everyday life rather than their profession, especially one as clandestine as an “international assassin.”

Me: Thanks for your help today!

Generative AI in the Legal Sphere

Well, that was definitely interesting! Then, naturally, I turned to ChatGPT to explore potential applications in the legal sector, and it had endless ideas. Here are but a few:

Generative AI, especially with its visual capabilities, offers a plethora of applications in the legal sector:

  • Document Review: Beyond just text, AI can now scan and interpret visual elements in legal documents, helping in tasks like verifying signatures or checking seals.
  • Evidence Analysis: AI can be used to enhance low-quality surveillance footage, identify faces in crowded scenes, or even analyze patterns in digital data visualizations.
  • Contract Visualization: Instead of lengthy text, visualize contract terms as diagrams or flowcharts, making them more digestible for clients.
  • Property Assessment: Using satellite imagery, AI can assess property boundaries, land usage, or even environmental compliance.
  • Accident Reconstructions: AI could recreate accident scenes in 3D based on photos and other evidence, aiding in personal injury or insurance cases.
  • Copyright Compliance: Scan vast repositories of images to identify potential copyright violations or unauthorized use of intellectual property.
  • Client Interactions: Use AI to generate visual explanations of complex legal scenarios, helping clients grasp intricate details through interactive diagrams or simulations.
  • Forensic Art: Create composites or age-progression photos in missing person cases or help juries visualize crime scenes.
  • Legal Research: AI can scan through visually rich documents like patents, identifying key diagrams or illustrations that could be crucial in patent litigation.

Conclusion

With tools like ChatGPT expanding their capabilities, the intersection of AI and law is becoming impossible to ignore. ChatGPT’s new features highlight potential efficiencies and improvements for legal processes!

Coding with ChatGPT: A Journey to Create A Dynamic Legal Research Aid

I haven’t quite gotten this whole ChatGPT thing. I’ve attended the webinars and the AALL sessions. I generally understand what it’s doing beneath the hood. But I haven’t been able to find a need in my life for ChatGPT to fill. The most relevant sessions for me were the AALS Technology Law Summer Webinar Series with Tracy Norton of Louisiana State University. She has real-world day-to-day examples of when she has been able to utilize ChatGPT, including creating a writing schedule and getting suggestions on professional development throughout a career. Those still just didn’t tip the balance for me.

A few weeks ago, I presented to one of our legal clinics and demonstrated a form that our Associate Director, Tara Mospan, created for crafting an efficient search query. At its heart, the form is a visual representation of how terms and connectors work with each other: five columns of five boxes, where each column holds variations of a term and connectors sit between the columns. For a drunk driving case, the term in the first box could be car, and below that we would put synonyms like vehicle or automobile. The second column could include drunk, inebriated, and intoxicated. And we would choose the connector between the columns, whether it be AND, w/p, w/s, or w/#. Then, we write out the whole search query at the bottom: (car OR vehicle OR automobile) w/s (drunk OR inebriated OR intoxicated).

Created years ago by Tara Mospan, this worksheet is loved by ASU Law students who frequently request copies from the law librarians even years after they use it for Legal Research and Writing.

After the presentation, I offered a student some extra copies of the form. She said no: I had presented to her legal writing class the year before, and she was so taken with the form that she had recreated it in Excel. Not only that, she used macros to transform the entered terms into a final query. I was impressed and asked her to send me a copy. It was exactly as she had described, using basic commands to put the terms together, with OR between terms within a column, and drop-downs of connectors. She had taken our static form and transformed it into a dynamic utility.

An ASU Law student recreated the Crafting an Efficient Search PDF using Excel so that it had drop-downs.

Now I was inspired: What if I could combine the features of her Excel document with the clean layout of our PDF form? Finally, I saw a use for ChatGPT in my own life. I had read about how well ChatGPT does with programming and it seemed like the perfect application. It could help me create a fillable PDF, with nuanced JavaScript code to make it easy to use and visually appealing.

I went into ChatGPT and wrote out my initial command:

I am trying to create a fillable PDF. It will consist of five columns of text boxes, and each column will have five boxes. Search terms will be placed in the boxes, although not necessarily in every box. There will be a text box at the bottom where the terms from the boxes above will be combined into a string. When there are entries in multiple boxes in a column, I want the output to put a set of parentheses around the terms and the word OR between each term.

ChatGPT immediately gave me a list of steps, including the JavaScript code for the results box. I excitedly followed the directions to the letter, saved my document, and tested it out. I typed car into the first box and…nothing. It didn’t show up in the results box. I told ChatGPT the problem:

The code does not seem to be working. When I enter terms in the boxes, the text box at the bottom doesn’t display anything.

And this began our back and forth. The whole process took around four hours. I would explain what I wanted, it would provide code, and I would test it. When there were errors, I would note the errors and it would try again. A couple times, the fix to a minor error would start snowballing into a major error, and I would need to go back to the last working version and start over from there. It was a lot like having a programming expert working with you, if they had infinite patience but sometimes lacked basic understanding of what you were asking.

For many things, I had to go step-by-step to work through a problem. Take the connectors, for example. I initially just had AND between them as a placeholder. I asked it to replace the AND with a drop-down menu to choose the connector. The first implementation ended up replacing the OR between the synonyms instead of the placeholder AND between the columns. We went back and forth until the connector option worked between the first two columns of terms. Then we worked through the connector between columns two and three, and so on.

At times, it was slow going, but it was still much faster than learning enough JavaScript to program it myself. ChatGPT was also able to easily program minor changes that made the form much more attractive, like not having parentheses appear unless there are two terms in a column, and not displaying the connector unless there are terms entered on both sides of it. And I was able to add a “clear form” button at the end that cleared all of the boxes and reverted the connectors back to the AND option, with only one exchange with ChatGPT.
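For readers curious what all that back and forth was aiming at, here is a rough sketch of the kind of logic the finished form needed. This is not the actual script ChatGPT produced; the function names and layout are my own invention, and in Acrobat the equivalent code would live in the results field’s custom calculation script, reading each box with this.getField(name).value and writing to event.value. It is written as plain JavaScript so the logic stands on its own.

```javascript
// Join one column's non-empty terms with OR, adding parentheses only
// when the column holds two or more terms.
function joinColumn(terms) {
  var filled = terms
    .map(function (t) { return (t || "").trim(); })
    .filter(function (t) { return t !== ""; });
  if (filled.length === 0) return "";
  if (filled.length === 1) return filled[0];
  return "(" + filled.join(" OR ") + ")";
}

// Combine the columns with the chosen connectors, skipping empty
// columns and emitting a connector only when there are terms on both
// sides of it. connectors[i] is the connector chosen after column i.
function buildQuery(columns, connectors) {
  var query = "";
  for (var i = 0; i < columns.length; i++) {
    var part = joinColumn(columns[i]);
    if (part === "") continue;
    if (query !== "") query += " " + (connectors[i - 1] || "AND") + " ";
    query += part;
  }
  return query;
}

// The drunk driving example from the worksheet:
var query = buildQuery(
  [["car", "vehicle", "automobile"], ["drunk", "inebriated", "intoxicated"], [], [], []],
  ["w/s", "AND", "AND", "AND"]
);
// query is "(car OR vehicle OR automobile) w/s (drunk OR inebriated OR intoxicated)"
```

Note how the small touches described above (no parentheses around a lone term, no dangling connector next to an empty column) fall out of the two guard conditions, which is exactly the sort of refinement that took an exchange or two each to get right.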

Overall, it was an excellent introduction to at least one function of AI. I started with a specific idea and ended up with a tangible product that functioned as I initially desired. It was a bit more labor intensive than the articles I’ve read led me to believe, but the end result works better than I ever would have imagined. And more than anything, it has gotten me to start thinking about other projects and possibilities to try with ChatGPT.

Crystal Ballalytics: Judicial Behavioral Forecasting Modeling

The trifecta of big data, advanced analytics, and recent AI innovations is ushering in a new era of judicial analytic mind-reading, enabling software to more accurately predict judges’ court rulings. Last year, in what seems like an interesting leap forward in judicial analytics, Pre/Dicta unveiled its AI-powered litigation prediction software, introducing a novel, perhaps radical, approach to tapping into the judicial mind. According to CEO Dan Rabinowitz, Pre/Dicta is the only litigation analytics platform that makes verifiable predictions about the outcome of lawsuits. He claims that using data science and only a docket number, Pre/Dicta’s software correctly forecasts how judges will decide on motions to dismiss 86% of the time, without factoring in the specific facts of the case. The system covers civil litigation cases at both the state and federal level, but does not attempt to forecast results of jury trials.

Rather than solely depending on a judge’s past rulings and jurisprudence, as is common with other judicial analytics products, Pre/Dicta uses a methodology similar to that used in targeted advertising. This approach forecasts future behavior by examining both past actions, such as purchasing habits, and individual biographical characteristics. Pre/Dicta works by combining historical ruling data with biographical and demographic details to forecast a judge’s decision in a given case. Using around 120 data points, it spots patterns and potential biases in a judge’s past rulings. The system evaluates specifics of past rulings, considering elements such as the nature of the case (e.g., securities fraud, employment discrimination), the attorneys and firms involved (e.g., solo practitioner representing an individual, regional firm representing a corporation, AmLaw 100 firm backing an individual), and the nature of the disputing parties (e.g., individual vs. corporation, small company vs. large corporation). This case-specific information is then combined with the judge’s personal data, like net worth, political affiliations, professional history, and law school alma mater, to generate a prediction.

Prediction in the Legal Landscape

86% accuracy is impressive! Hopefully, Pre/Dicta will spark a judicial prediction analytics arms race. According to Daniel L. Chen in his article, “Judicial Analytics and the Great Transformation of American Law,” predictive judicial analytics “holds the promise of increasing the efficiency and fairness of law.” Targeted advertising seems to work pretty well, so hopefully Pre/Dicta’s advancements in this area are a positive step toward making the judicial process more transparent.

If only we knew what would happen in the future, we would know what to do now! For as long as there have been courts and judges, folks have tried to predict whether a judge would rule in their favor. Attorneys have always engaged in mental “judicial analytics” by gathering and pondering information on a judge’s past rulings and reputation to glean some insights into how they might decide a case. Humans are prediction machines, given our innate tendency to draw on experiences and knowledge to anticipate events—an evolutionarily useful skill that allowed us to sometimes escape being saber-toothed tiger lunch or the victim of grumpy neighboring tribal predations.

From my brief stint practicing family law in the 1990s, I discovered that family law clients are hopeful individuals. Despite clear child support guidelines and a prevailing judicial preference for shared custody, people often believed that if a judge merely heard the specifics of their “special snowflake” scenario involving a cheating spouse or a deadbeat dad, the judge would surely deviate from the rules and customary practices to grant them a deserved favorable ruling. They struggled to accept that judges could be indifferent to their parade of marital/parental horribles. And even if judges were initially inclined to empathize, after many years of sifting through outright lies and half-truths, they had seemingly given up on deciphering reality anyway. It was always challenging to persuade clients of the judicial propensity to metaphorically split the baby down the middle, whether financially or custodially.

Attorneys have needed to hone their abilities to predict outcomes so they could counsel their clients on different courses of action. While making no promises, they share predictions regarding claim values, the odds of surviving summary judgment, potential jail sentences, the likelihood of obtaining sole custody of children, and so on. Attorneys can only do so much, though. Hopefully, as predictive judicial analytics tools improve and become widely available, they have the potential to promote fairness, cut down on litigation costs, and create a more transparent and predictable judicial system.

Judicial Behavioral Forecasting Modeling

Certainly, judges do provide clients with information that assists in anticipating how a ruling might unfold. I have observed numerous judges delivering impactful speeches during temporary hearings, highlighting the importance of adhering to child support guidelines and the principle of shared custody. When clients receive information regarding a likely outcome, their acceptance of reality accelerates significantly. It would indeed be beneficial, and save a lot of time, money, and anguish, if a client could engage in a comprehensive discussion with a judge, probing various questions about how different pieces of information might influence their ruling. However, this isn’t the modus operandi of judges, as they cannot communicate with one party in a suit independently, nor do they pre-announce their rulings prior to a hearing or trial. Now, however, companies like Pre/Dicta are leveraging the human trait of predictability inherent in judges. Like everyone, judges have their own set of ideas, habits, preferences, prejudices, and temperaments shaped by a mix of genetics and experiences, all of which contribute to a certain level of predictability in their rulings.

Hopefully, soon we will be able to pick the mind of a judge without the necessity of actually speaking with her. With the advancing tide of artificial intelligence and the ongoing proliferation and refinement of judicial analytics products, it seems plausible that the future might produce a family law judge behavioral forecasting model for specific judges. These models could help attorneys and their clients identify potential biases of judges. They could see how a judge might respond to a person based on certain characteristics like sex, race, age, income, profession, or criminal history, especially when compared to another party with a different background. Also, if these models included information about factors that affected past rulings, they could be used to anticipate how certain situations might be viewed by the court. For example, a parent hoping to keep their soon-to-be ex-spouse away from the kids might want to know if the judge objects to things like dating app addiction, not taking the child to piano lessons, or multiple DUI arrests. Armed with this information, they can choose the best way to handle their case, including deciding whether going to trial is a good idea.

Behavioral forecasting models are of course not new to law and legal practice. They are tools used to predict the likely behaviors of individuals or groups across various domains, aiding in better decision-making. In the legal sector, in addition to predicting the outcome of Supreme Court cases, they aid in litigation strategy, legal analytics, resource allocation, criminal behavior prediction, policy impact analysis, legal document analysis, dispute resolution, and regulatory compliance, by leveraging historical data and legal precedents to inform decision-making and strategy development. They are utilized in other fields too like marketing, finance, HR, healthcare, public policy, urban planning, criminal justice, technology, environmental science, and education to forecast behavioral patterns, helping to optimize strategies and allocate resources more efficiently.

Such an innovation would undeniably be a game changer. Clients in divorce and custody disputes might accept solid advice regarding likely outcomes, rather than cling to the hope that their unique case details will influence the judge. Accurate predictions would deter individuals from wasting money on futile litigation and would be a boon for judges struggling with a backlog of cases. Having these predictive tools on their websites would no doubt promote case settlements and therefore ease some of the strain on both judges and the judicial system.

Naysayers

As always, naysayers abound. Some argue that judicial analytics could undermine the legitimacy of an impartial judiciary. In fact, in France, judges are so wary of transparency that judicial analytics products are prohibited. Well, at least in the U.S., that boat has sailed, far and fast. Particularly in light of Supreme Court rulings in recent years, many people have realized that judges often base their rulings on ideological leanings and personal preferences. Robots would only further confirm what we already suspect – that judges are just like the rest of us with habits, biases, and opinions. It might be too late to rehabilitate the judiciary, but perhaps the transparency of data-driven prediction could bolster public confidence more than frequent affirmations of judicial objectivity.

Then, there are arguments regarding fairness due to cost. For now, the high cost of Pre/Dicta raises potential fairness issues, as only larger firms and wealthier clients can harness its predictive power. True, they always have an advantage. However, as the technology becomes more common, costs should decrease, making it more and more accessible.

Conclusion

The improvement of AI-driven judicial analytics, exemplified by Pre/Dicta, could mark a revolutionary shift in the legal realm, perhaps promising a new level of predictability and transparency in court outcomes. While concerns about fairness, accessibility, and the perception of judicial impartiality persist, the potential benefits—reduced litigation costs, enhanced transparency, and more informed decision-making—may herald a future where data-driven insights guide legal strategy and expectations. As technology continues to evolve and become more accessible, the future looks promising for both practitioners and those seeking justice.

Unlocking the Power of Semantic Searches in the Legal Domain

The language of law has many layers. Legal facts are more than objective truths; they tell the story and ultimately decide who wins or loses. A statute can have multiple interpretations, and those interpretations depend on factors like the judge, context, purpose, and history of the statute. Legal language has distinct features, including rare legal terms of art like “restrictive covenant,” “promissory estoppel,” “tort,” and “novation.” This complex legal terminology poses challenges for normal semantic search queries.  

Vector databases represent an exciting new trend, and for good reason. Rather than relying on traditional Boolean logic, semantic search leverages word associations by creating embeddings and storing them in a vector database. In machine learning and natural language processing, embeddings depict words or sentences as dense vectors of real numbers in a continuous vector space. This numerical representation of text is typically generated by a model that tokenizes the text and learns embeddings from the data. Vectors capture the contextual and semantic meaning of each word. When a user makes a semantic query, the search system works to interpret their intent and context. The system then breaks the query into individual words or tokens, converts them into vector representations using embedding models, and returns ranked results based on their relevance. Unlike Boolean search, which requires specific syntax (“AND”, “OR”, etc.), semantic search allows for queries in natural language, opening up a whole new world of potential once searches are not constrained by the rules of exact text matching.
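To make the embedding-and-ranking step concrete, here is a toy sketch in plain JavaScript. Everything in it is invented for illustration: the three-dimensional vectors and document labels are hypothetical (real embedding models produce vectors with hundreds or thousands of dimensions), and a production vector database would use an approximate nearest-neighbor index rather than scoring every document.

```javascript
// Toy "embeddings": hypothetical documents mapped to made-up vectors.
var docs = {
  "promissory estoppel case": [0.9, 0.1, 0.2],
  "breach of oral promise": [0.8, 0.2, 0.3],
  "zoning variance appeal": [0.1, 0.8, 0.3]
};

function dot(a, b) {
  return a.reduce(function (sum, v, i) { return sum + v * b[i]; }, 0);
}

// Cosine similarity: how closely two vectors point the same way,
// ignoring their lengths. 1 means identical direction, 0 unrelated.
function cosine(a, b) {
  return dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));
}

// Rank every document in the corpus by similarity to the query vector.
function rank(queryVec, corpus) {
  return Object.keys(corpus)
    .map(function (id) { return { id: id, score: cosine(queryVec, corpus[id]) }; })
    .sort(function (a, b) { return b.score - a.score; });
}

// A query about reliance on an unfulfilled promise would embed near the
// estoppel and oral-promise documents, and far from the zoning one.
var results = rank([0.88, 0.12, 0.22], docs);
// results[0].id is "promissory estoppel case"
```

Notice that the oral-promise document outranks the zoning one even though its words need not match the query at all; that escape from exact text matching is precisely what semantic search offers.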

However, legal language differs from everyday language. The large number of technical terms, the careful precision, and the fluid interpretations inherent in law mean that semantic search systems may fail to grasp the context and nuances of legal queries. The interconnected and evolving nature of legal concepts poses challenges in neatly mapping them into an embedding space representation. One potential way to improve semantic search in the legal domain is by enhancing the underlying embedding models. Embedding models are often trained on generalized corpora like Wikipedia, giving them a broad but shallow understanding of law. This surface-level comprehension proves insufficient for legal queries, which may seem simple but have layers of nuance. For example, when asked to retrieve the key facts of a case, an embedding model might struggle to discern what facts are relevant versus extraneous details.  

The model may also fail to distinguish between majority and dissenting opinions due to a lack of the legal background needed to make such differentiations. Training models on domain-specific legal data represents one promising approach to overcoming these difficulties. By training on in-depth legal corpora, embeddings could better capture the subtleties of legal language, ideas, and reasoning. For example, Legal-BERT (BERT stands for Bidirectional Encoder Representations from Transformers) was pre-trained on the CaseHOLD dataset. This corpus is large (37GB), representing 3,446,187 legal decisions across all federal and state courts, and it exceeds the size of the Book Corpus/Wikipedia corpus originally used to train the BERT model. When tested on LexGLUE, a benchmark dataset for evaluating the performance of NLP methods on legal tasks, Legal-BERT performed better than ChatGPT.

Semantic search shows promise for transforming legal research, but realizing its full potential in the legal domain poses challenges. Legal language is complex and can make it difficult for generalized embedding models to grasp the nuances of legal queries. However, recent optimized legal embedding models indicate these hurdles can be overcome by training on ample in-domain data. Still, comprehensively encoding the interconnected, evolving nature of legal doctrines into a unified embedding space remains an open research problem. Hybrid approaches combining Boolean and vector models are a promising new frontier that many researchers are exploring. 

Realizing the full potential of semantic search for law remains an ambitious goal requiring innovative techniques. But the payoff could be immense – responsive, accurate AI assistance for case law research and analysis. While still in its promising infancy, the continued maturation of semantic legal search could profoundly augment the capabilities of legal professionals. A shift from generic to domain-specific models holds promise.  

Keeping Up With Generative AI in the Law

The pace of generative AI development (and hype) over the past year has been intense, and difficult even for us experienced librarians, masters of information that we are, to follow. Not only is there a constant stream of new products, but also new academic papers, blog posts, newsletters, and more, from people evaluating, experimenting with, and critiquing those products. With that in mind, I’m sharing my favorites, and I’ll also pepper in a few recommendations from my co-bloggers.

Twitter

Before Twitter began its slow decline, it was one of my primary sources for professional connection, and there are many there who are exploring generative AI. I especially enjoy following people outside of the legal world. Many of my favorites are still there, like Ethan Mollick, Anna Mills, and Lance Eaton (all in higher education) as well as critical AI theorists like Timnit Gebru and Emily Bender.

LinkedIn

Despite the good bits that remain on Twitter, many interesting legal tech discussions seem to have moved to LinkedIn (or perhaps I’ve only recently found them there). Some of my favorites to follow on LinkedIn (in no particular order beyond how I’m running across them as I scroll) are: Nicole Black, Sam Harden, Alex Smith, Cat Moon, Damien Riehl, Dennis Kennedy, Uwais Iqbal, Ivy Grey, Robert Ambrogi, Cat Casey, Nicola Shaver, Adam Ziegler, and Michael Bommarito. Both Bob Ambrogi and Nicola Shaver recently had posts gathering legal tech luminaries to follow, so I would recommend checking out those posts and the comments to find more interesting folks. And if anyone else has figured out the LinkedIn etiquette for connecting vs. following someone you only know via other social media, please let me know.

Newsletters

Most of us have many (many, many) newsletters filling our inbox each day. Here are some favorites.

Jenny:

  • AI in Education – a Google group
  • Lawyer Ex Machina – from law librarian Eli Edwards, on legal technology, law practice and selected issues around big data, artificial intelligence, blockchain, social media and more affecting both the substance and the business of law (weekly)
  • The Neuron – AI news, tools, and how-to
  • The Brainyacts – from Josh Kubicki, insight & tips on generative AI use in legal services (daily)

Rebecca:

  • One Useful Thing – from Ethan Mollick, mostly on AI in higher ed (weekly)
  • Do Something – from Sam Harden, on legal tech, often from a small firm and access to justice angle
  • Legal Tech Trends – legal tech links, podcast, articles, products, along with original pieces (every two weeks or so)
  • KnowItAALL – this daily newsletter is a benefit for members of AALL (American Association of Law Libraries), but it is also available to non-members for a fee; great coverage of legal AI, I read it every day
  • AI Law Librarians – is it gauche to recommend our own blog? You can subscribe as a newsletter if you like!

Sean:

Podcasts

There are loads of podcasts on AI, but here are a few we follow:

Blogs & Websites

We’re bloggers, we like blogs. Traditional media can be ok, too, although mind the paywall.

YouTube

Sean also mentioned that much of the interesting stuff is on YouTube, but that it is fairly high-effort because many of the videos are an hour long, or more. Maybe we’ll convince him to share some of his favorite videos soon in a future post!

A Few LibGuides

If you still need more, here are a few libguides:

What about you?

Who are your favorites to follow on social media? Are there helpful newsletters, blogs, podcasts, or anything else that we’ve missed? Let us know in the comments.