Kim Kardashian Points Finger at ChatGPT for Law Exam Struggles: ‘Our Study Sessions End in Arguments’
Kim Kardashian vs. ChatGPT: When Celebrity Study Sessions Turn Into Arguments
Kim Kardashian described a pattern that sounds familiar to anyone who has tried to outsource thinking to an AI assistant: a frantic study session, a confident answer, and then an argument with the screen after discovering the guidance was wrong. In a Vanity Fair lie detector interview, she acknowledged using ChatGPT for “legal advice,” snapping photos of questions and pasting them into the chatbot. The reveal didn’t unfold as a tech triumph. She said it “made [her] fail tests,” that she would “yell at it,” and that the bot turned into a “frenemy.” The operator deemed the disclosure truthful, which makes the moment more than a meme; it’s a case study in how celebrity, technology, and legal education collide under pressure.
That interview setup mattered. Teyana Taylor, Kardashian’s All’s Fair co-star, pushed with real-world prompts: life advice? dating advice? a chatbot as a friend? The answers were no, no, and sort of—until the conversation pivoted to law. Kardashian’s portrait of generative AI was strangely human: sometimes supportive, sometimes smug, occasionally moralizing about “trusting your instincts.” It’s comedy on camera, but it’s also the uncomfortable truth about tools that can sound authoritative while drifting from verified doctrine. When a high-profile learner leans on AI during test preparation, the stakes aren’t just grades; the stakes are credibility, public perception, and a blueprint millions of students might copy.
Consider the psychology of these AI-fueled cram sessions. A camera-ready personality is no armor against uncertainty when rules of evidence or property law get slippery. The temptation is obvious: paste a tough hypo, get a neat rule statement, move on. Yet law exams reward precise issue spotting, clear rule articulation, and targeted application. A chatbot that generalizes or hallucinates a case name can seduce a newcomer into confident error. That’s how a celebrity can exit a glamorous set and walk into an avoidable mistake. And when the answer is wrong, the instinct to “argue back” at the bot makes emotional sense—even if the transcript of the argument isn’t earning points on the grading rubric.
The frenemy metaphor captures the paradox. Tools that promise speed regularly slow learners down if they require constant verification. Worse, they risk teaching the wrong habits: deferring to phrasing over analysis, prioritizing fluency over accuracy, and trusting “sound” over “source.” Kardashian’s on-camera frustration, then, reads less like gossip and more like a flashing caution sign for anyone mixing online learning with a professional gatekeeping exam. The performance is entertaining. The lesson is urgent.
The frenemy problem in a nutshell
Why do smart people end up in battles with a chatbot? A few patterns repeat. When prompts are vague, outputs grow speculative. When time is short, scrutiny fades. When a system sounds confident, criticism softens. And in law, where nuance decides outcomes, those micro-errors accumulate. The result is a spiral: ask, accept, discover, dispute. The celebrity context magnifies it, but the pattern is universal.
- 📸 Snap-and-paste workflows invite shallow analysis, not deep reading.
- ⚖️ “Authoritative” tone can mask missing citations or misapplied rules.
- ⏳ Cramming compresses verification time, amplifying small mistakes.
- 🤖 Over-trusting an AI assistant creates false confidence on exam day.
- 💬 Arguing with a bot burns energy better spent outlining or practicing hypos.
| Scenario 🤔 | Perceived Benefit ✅ | Hidden Risk ⚠️ | Better Move 💡 |
|---|---|---|---|
| Photo of essay prompt dropped into ChatGPT | Fast “issue list” | Missed sub-issues; generic rules | Outline issues first, then ask targeted questions |
| Accepting a clean-sounding rule | Time saved | No citation; wrong jurisdiction | Cross-check with bar outline or treatise |
| Arguing with the bot post-mistake | Emotional relief | Lost study minutes | Log error, refine prompt, move on |
In short, the argument might be entertaining, but the exam grader never sees it—only the answer sheet.

Inside the Law Exam Pressure Cooker: Why AI Study Partners Can Backfire
The California bar exam is engineered to stress-test analysis. It blends five one-hour essays, a 90-minute performance test, and 200 multiple-choice questions under time pressure. That structure punishes vague reasoning and rewards organized application. It also exposes how an AI assistant can mislead: an elegant paragraph that doesn’t track the call of the question, a rule that’s 80% right, or a misread fact that detonates an entire analysis. When Kardashian said the tool “made [her] fail tests,” the phrasing was dramatic—but the mechanism is ordinary. Misaligned help, multiplied by minutes and modules, equals lost points.
Look at the components. Essays ask for targeted doctrine across subjects like Contracts, Torts, Evidence, and Constitutional Law. The performance test challenges task execution—drafting a memo, a brief, or a client letter—using only the provided library. The MBE demands precision: choose the single best answer among plausible distractors. Each format stresses different cognitive muscles, and AI’s greatest weakness shows up where the exam is strongest: in granular application and careful reading under time limits.
Imagine a property essay on future interests. A chatbot outlines rules on vested remainders but muddles the Rule Against Perpetuities with a jurisdictional variant. The student writes a beautiful paragraph… about the wrong rule. Or consider a performance test: the bot “remembers” doctrines outside the file, contaminating the task with extra-textual law—an automatic penalty. On the MBE, a confident but shallow explanation nudges the learner to pick the tempting distractor. These are not edge cases; they’re daily realities in test preparation.
Where the cracks form
Failure modes repeat with eerie consistency. Compression of time leads to acceptance of answers. Lack of citations allows hallucinations to slip through. And the seductive cadence of AI outputs can mask analytical gaps. The solution isn’t abstinence—it’s design. But before design, there must be awareness.
- 🧩 Essays: AI may overgeneralize rules, erasing crucial exceptions.
- 🗂️ Performance Test: Any reliance on outside law is a scoring pitfall.
- 🎯 MBE: Overconfident explanations can legitimize the wrong choice.
- 📚 Sources: Missing citations make cross-checking harder and slower.
- 🕐 Timing: Long outputs encourage passive reading instead of active outlining.
| Exam Part 📘 | Format 🧪 | AI Risk 🤖 | Mitigation 🛡️ |
|---|---|---|---|
| Essays | 5 x 1-hour | Generic rules; missed sub-issues | Use IRAC scaffolds; demand pinpoint exceptions |
| Performance Test | 1 x 90-minute task | Injecting outside law | Prompt: “Use only materials in the file/library” |
| MBE | 200 multiple choice | Confident wrong explanations | Ask for analysis of each option with rule cites |
When the clock is ticking, even minor misdirection compounds. That’s why the pressure cooker matters: it reveals whether a tool helps thinking or merely imitates it.
From Baby Bar to Backyard Moments: A 2025 Timeline the Internet Can’t Stop Debating
Public memory can be messy, especially when a celebrity lives multiple lives at once: trainee advocate, producer, and star of Hulu’s All’s Fair. The narrative of Kardashian’s journey includes iconic beats: repeated attempts at the first-year exam (“baby bar”), a breakthrough pass, and a widely shared backyard graduation-style celebration in 2025. Those posts triggered applause and eye-rolls in equal measure—some cheered a nontraditional route through legal education, others questioned whether the milestone equaled bar passage. Meanwhile, the lie detector episode landed, putting her ChatGPT habit under a microscope and reframing the celebration with a fresh twist: even with cameras and gowns, the grind continues.
Context helps. California allows law study via apprenticeship; the early “baby bar” filters candidates. Kardashian’s persistence—multiple shots before a pass—became an Internet morality tale: resilience wrapped in glamour. Then came the study buddy drama: screenshots, group chats, and a “toxic friends” joke endorsed by Teyana Taylor. It’s comedic setup, yet it telegraphs the same tension any bar candidate faces: how to harness online learning without getting burned by it.
Layer in the entertainment cycle. All’s Fair, a drama about an all-female law firm led by Kardashian alongside Naomi Watts, Glenn Close, and Niecy Nash-Betts, is slated to premiere in early November. Red carpets, interviews, and publicity stills (think London’s Leicester Square) color the legal storyline with Hollywood sheen. Viewers see robes, filings, power lunches—and then a viral clip about an argument with a chatbot. The split-screen is the point: image versus infrastructure, narrative versus notebooks.
Milestones that shaped the debate
Each checkpoint carries a different lesson. The baby bar saga screams about stamina. The backyard cap-and-gown moment refracts how modern achievement is packaged—and how the Internet reacts. The lie detector confirms that the AI study habit isn’t a rumor. Together, they trace a path that many students will recognize even without a Hulu premiere on the calendar.
- 🎓 Persistence on early exams reframed failure as feedback.
- 📱 Social celebrations magnified scrutiny and support in equal measure.
- 🧪 Lie detector confirmation made the AI storyline unavoidable.
- 🎬 On-screen legal roles blurred lines with off-screen study life.
- 🤝 “Frenemy” language resonated with anyone burned by a tool they still use.
| Moment 🗓️ | What Happened ⭐ | Public Reaction 📣 | Lesson for Students 📘 |
|---|---|---|---|
| Baby bar attempts | Pass after multiple tries | Respect for resilience | Iterate, don’t capitulate |
| Backyard graduation vibe | Ceremony-style celebration | Debate about milestones | Define success criteria clearly |
| Lie detector interview | AI usage confirmed | Curiosity and critique | Verify every AI output |
Timeline aside, the takeaway is stable: strategy beats spectacle when the exam clock starts.
Better Study Sessions with AI: A Practical Playbook for Test Preparation
Good news for anyone watching this saga and taking notes: AI can help, provided the workflow honors the exam. Think of a fictional bar candidate—Ari, a paralegal in Los Angeles—who studies nights and weekends. Ari treats the bot like a junior research assistant, not a professor. He starts with an issue outline, drafts a rule in his own words, then asks the model to critique missing exceptions with citations to a bar outline he pastes in. For the performance test, he pastes only the file/library and commands the model to ignore outside law. On the MBE, he asks for analysis of each option and then checks against a trusted explanation bank. The habit is simple: human first, machine second.
To avoid the Kardashian-style “frenemy” spiral, structure matters. Prompts should be explicit about jurisdiction, scope, and the call of the question. Outputs must be audited. Errors are logged in a “gotchas” document so that the same trap doesn’t spring twice. And no answer survives without a cross-check against a commercial outline, hornbook, or state bar resource. That blend of skepticism and systems design turns a chaotic partner into a dependable one.
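To make that structure concrete, here is a minimal sketch in Python of what a scoped prompt template and a “gotchas” log might look like. Everything in it is a hypothetical illustration of the playbook above: the names build_study_prompt and log_gotcha, the log format, and the prompt wording are assumptions, not a real study tool or any chatbot’s official API.

```python
from datetime import date

def build_study_prompt(question: str, jurisdiction: str, subject: str,
                       pasted_outline: str) -> str:
    """Wrap a practice question in the scope constraints described above:
    pin the jurisdiction, name the subject, and demand pinpoint citations
    to the supplied outline rather than to outside sources."""
    return (
        f"Jurisdiction: {jurisdiction}. Subject: {subject}.\n"
        "Answer ONLY from the outline excerpt below. Cite the section "
        "number for every rule you state. If the outline does not cover "
        "an issue, say so instead of guessing.\n\n"
        f"--- OUTLINE EXCERPT ---\n{pasted_outline}\n\n"
        f"--- QUESTION ---\n{question}"
    )

def log_gotcha(log_path: str, topic: str, ai_claim: str, correction: str) -> None:
    """Append one verified AI miss to the running 'gotchas' file."""
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(f"{date.today()} | {topic} | claimed: {ai_claim} | "
                f"corrected: {correction}\n")
```

The design choice is deliberate: the constraints travel with every question, so a rushed 2 a.m. session gets the same jurisdiction lock and citation demand as a careful one, and every confirmed miss feeds the next prompt.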
A friction-tested workflow
Here’s a blueprint refined by tutors who have watched hundreds of candidates fumble—and then fix—their test preparation with AI. The point is not to ban tools; it’s to bind them with process. That protects attention, safeguards performance under pressure, and keeps the “argument with the bot” from becoming the night’s main event.
- 🧠 Pre-brief yourself: list issues and the precise call before querying the bot.
- 📍 Set scope: specify jurisdiction, time period, and allowed sources.
- 🔎 Demand receipts: ask for pinpoint cites to your uploaded outline or excerpted text.
- 📝 Audit outputs: highlight rules, underline application, strike fluff.
- 📈 Track errors: maintain a log of AI misses to fortify future prompts.
| Action 🎬 | Do ✅ | Don’t ❌ | Reason 🧭 |
|---|---|---|---|
| Essay prompts | Write IRAC yourself, then ask for critique | Paste and submit the bot’s essay | Grader rewards your reasoning, not generic prose |
| Performance Test | Limit to file/library, enforce “no outside law” | Invite general doctrine | Outside law can tank your score |
| MBE review | Request option-by-option analysis | Accept one-line answers | Distractors demand granular comparisons |
One more lever: rhythm. Batch work in 25–50 minute sprints that end with a brief closing review: Did the tool answer the call? Did you confirm the rule? Did you fix the errors you found? That micro-closing ritual keeps the relationship productive and the study session on track.
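For anyone who likes the ritual automated, here is a tiny sketch of that closing checklist in Python. The questions and behavior are illustrative assumptions drawn from the sprint routine above; a paper checklist or any to-do app would serve just as well.

```python
# The three closing questions from the sprint ritual described above.
REVIEW_QUESTIONS = [
    "Did the tool answer the call of the question?",
    "Did you confirm the rule against a trusted outline?",
    "Did you log and fix every error the tool made?",
]

def close_sprint() -> bool:
    """Ask each review question; a sprint only closes clean if all pass."""
    return all(
        input(question + " [y/n] ").strip().lower() == "y"
        for question in REVIEW_QUESTIONS
    )

if __name__ == "__main__":
    if close_sprint():
        print("Sprint closed clean. Take the break.")
    else:
        print("Fix the gaps before starting new material.")
```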
Video study is a supplement, not a substitute. Use it to sharpen technique, then return to deliberate practice where points are earned.
Pop Culture Meets Legal Education: What Kim Kardashian’s AI Argument Teaches Everyone
When pop culture collides with professional training, the audience gets a crash course in incentives. Kardashian’s dust-ups with ChatGPT make for irresistible clips, but the real story is how quickly habits spread. Students emulate what they see—especially when the person modeling the behavior has one name and a global audience. It’s why educators increasingly address AI head-on in syllabi and bar prep curricula: the technology isn’t leaving; the question is whether its use is disciplined or performative. The line between those two determines whether the “frenemy” becomes a force multiplier.
There’s also a branding angle. All’s Fair dramatizes legal life, while the off-screen narrative highlights the unglamorous mechanics of memorization, outlining, and feedback. That juxtaposition is powerful. It reframes success not as a montage but as a method. For viewers bingeing episodes and planning their own bar strategy, the message is clear: entertainment can spark interest, but only a calibrated system delivers results under time pressure.
Institutions are responding. Law schools and apprenticeships now publish AI usage guidelines; some require citation of AI assistance, others limit use to brainstorming. Bar prep companies embed model graders, while state bars reiterate that the exam rewards human reasoning under constraints. The shared aim is to prevent the Kardashian-style argument with a bot from becoming the default coping mechanism.
Stakeholders and smart moves
Different actors control different levers. Students can design prompts and verification loops. Instructors can assign AI-aware practice sets. Platforms can reduce hallucinations with retrieval and clear caveats. Media can cover the story without glamorizing shortcuts. Each contribution trims the error budget that ruins an otherwise passable performance.
- 🧑‍🎓 Students: build outlines first, then consult tools with targeted asks.
- 👩‍🏫 Educators: grade for citations and application depth, not flourish.
- 🏢 Platforms: add “jurisdiction locks” and document-only modes.
- 📰 Media: highlight process, not just punchlines, in coverage.
- ⚖️ Regulators: clarify ethical bounds of AI use in exams and clinics.
| Who 👥 | Risk 🔥 | Countermove 🧯 | Outcome 📊 |
|---|---|---|---|
| Student | Overtrusting confident outputs | Verification checklists and error logs | Higher accuracy, calmer sessions |
| Instructor | Invisible AI dependence | Require sources and reasoning steps | Transparent work, better feedback |
| Platform | Hallucinated doctrine | Retrieval from approved materials | Grounded, auditable answers |
Handled well, the spectacle becomes a service announcement: tools don’t pass exams—systems do.
Panels increasingly reach the same conclusion: integrate AI as a critique partner, never as an answer machine. That shift turns a headline into a habit change.
Did Kim Kardashian really blame ChatGPT for law exam struggles?
Yes. In a Vanity Fair lie detector segment, she said she used ChatGPT for legal questions and that it led to wrong answers during tests, prompting her to ‘yell at it.’ The operator indicated she was telling the truth.
Can ChatGPT be used safely for bar exam prep?
Yes—if it’s constrained. Set jurisdiction and scope, demand citations to approved materials, verify every output, and never paste AI text as your final answer. Use it as a critique tool, not a shortcut.
What exactly is on the California bar exam?
The exam includes five one-hour essays, one 90-minute performance test, and 200 multiple-choice questions (MBE). Precision, timing, and disciplined application of rules drive scoring.
Why do celebrity study habits impact regular students?
High-visibility habits spread quickly. When a public figure narrates AI-heavy studying, millions take cues. That’s why educators and platforms emphasize responsible, verifiable workflows.
How can a study session avoid turning into an argument with a bot?
Outline first, ask precise questions, require sources, and log errors. If an output conflicts with your materials, stop debating and verify with trusted outlines or treatises.