I’m putting together this week’s roundup a little differently. Instead of vague “news you might care about,” I’m focusing on what’s actually happening, why it matters, and what you can do about it—especially if you’re a parent, teacher, or anyone who’s ever had to explain to a kid why 911 isn’t a joke.
This issue covers three stories: (1) teens using AI prank imagery to trigger real emergency responses, (2) what’s changing around ChatGPT “deleted chats” privacy, and (3) lawmakers pushing for answers after Medicaid eligibility systems reportedly kicked people off due to technical problems.
AI Fuels Dangerous Prank
One of the scariest parts of this story is how “easy” it seems to be for pranksters to cause real-world harm. According to reporting tied to police warnings, teens (and others) have been circulating fake AI images—often framed as “homeless people” or people in distress—then getting parents or other adults to call 911 based on the image.
Here’s what the prank usually looks like, step by step:
- Step 1: Create or find fake AI imagery. The images are designed to look convincing at a glance—enough to trigger concern.
- Step 2: Package it as an urgent “real” situation. The post/message implies something is happening right now (or that it “must be reported”).
- Step 3: Push the call to an adult. The teen doesn’t always call 911 themselves. They might send it to a parent, guardian, or family group chat and let the adult make the call.
- Step 4: Emergency responders get dispatched to a false emergency. That wastes time and resources—time that could be needed for a real call.
And yes, officials are warning that this kind of behavior doesn’t just “clog the system.” It can create dangerous situations when real emergencies are competing for attention. Even if the prank doesn’t end in injury, it still pulls officers, dispatchers, and EMS away from genuine calls.
What I’d watch for as a parent (or anyone dealing with kids sending “concern” posts):
- Ask for specifics, not just a picture. “Where exactly is it? What’s the address? What’s happening right now?” If the teen can’t answer, that’s a red flag.
- Don’t call 911 based on AI images alone. If you’re unsure, call your local non-emergency line first. In many places, dispatchers can tell you whether it’s appropriate to escalate.
- Talk about consequences before the next “viral” challenge. I know it’s tempting to treat pranks like harmless fun, but 911 misuse can lead to serious legal trouble, especially when it causes dispatches or delays.
If you want the original reporting details, start with the Gizmodo link above. It’s the best quick way to see what police are warning about and how the “AI homeless” prank is being described publicly.
No More Chat Logs (Mostly)
This one matters if you’ve ever clicked “delete” and then wondered what “deleted” really means. Ars Technica reports that OpenAI has gained permission to stop saving conversations from deleted ChatGPT chats—though the story also notes that some “deleted” chats may still be reviewed in certain situations, like copyright-related issues.
In my experience, people don’t delete chats because they want to be sneaky—they delete because they don’t want their prompts and outputs lingering around. So this is a real privacy improvement, even if it’s not a perfect “nothing is ever kept” promise.
Practical takeaways:
- If privacy is your priority, assume you still shouldn’t paste anything you wouldn’t want reviewed in edge cases. That includes personal identifiers, sensitive documents, or private medical/legal info.
- Use separate accounts or safer workflows for sensitive tasks. For example, draft content in a private doc first, then paste only the parts you want the model to work on.
- Keep an eye on policy updates. Privacy terms can change, and the “mostly” in this story is important.
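If you want a lightweight version of the “draft privately, paste selectively” habit, here’s a minimal sketch in Python that scrubs a few common identifiers before you paste text into a chat. The regex patterns are illustrative only—real PII detection is much harder than three patterns—so treat this as a seatbelt, not a guarantee:

```python
import re

# Illustrative patterns only: these catch common formats, not every variant.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach me at jane.doe@example.com or 555-867-5309."))
# → Reach me at [EMAIL] or [PHONE].
```

Run your draft through something like this first, then paste only the scrubbed result into the model.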
Read the Ars Technica piece for the full breakdown, including what’s changing and what exceptions still apply.
Medicaid Systems Are Under Fire
Last up is another story with real consequences: lawmakers are asking questions about Medicaid eligibility systems that reportedly removed people from coverage due to technical issues.
According to CBS News reporting, the focus is on contractors tied to eligibility determinations—specifically whether system problems caused people to lose access to care, and whether the contractor’s priorities were aligned with doing the job correctly (instead of maximizing contract value).
Why this hits hard: Medicaid isn’t a “nice to have.” When eligibility systems malfunction, people can lose coverage and face delays in care, and correcting errors takes time that families may not have.
If you’re directly affected or supporting someone who is:
- Document everything. Screenshots, letters, dates, and case numbers help when you need to appeal or escalate.
- Ask about the appeal process right away. Don’t wait for the “system to fix itself.”
- Escalate when deadlines loom. Eligibility decisions often come with time limits.
For the names, context, and what lawmakers are demanding, the CBS News link above is the right starting point.
I’m not doing the “here’s 7 tools, good luck” thing. For each one, here’s what it’s for, what it outputs, and any obvious limitations you should know about before you bet your project on it.
- Mocha – Builds full-stack applications using AI (front + back end). Great if you want a working prototype fast. Example workflow: you describe the app idea → you get a runnable project structure with UI routes and backend endpoints. Constraint: like any “one-shot” builder, you’ll likely need to tighten edge cases and permissions once you’re testing with real data.
- Kollegio AI – Helps students match schools and scholarships, then gives essay advice. Example workflow: you provide academic profile + target interests → it suggests matching programs/scholarships → it generates feedback for your essay drafts. Constraint: always verify deadlines and requirements directly with the schools—AI can miss small but important details.
- ReplyZen – Generates reply ideas for social comments with product links that fit the thread. Example workflow: you paste the comment + context → it suggests 3–5 response options → you pick one and attach a relevant link. Constraint: don’t let it write “salesy” replies—use it for tone and structure, then edit for authenticity.
- devlo – Turns basic explanations into an app, then launches it with real-time viewing and visual quality checks. Example workflow: you outline the feature set → it generates the app → you review it live as it’s built. Constraint: if your requirements are ambiguous, you’ll get something that’s “close” instead of “exact,” so be ready to iterate.
- SheetAI – Adds AI help to Google Sheets—content generation and automation without wrestling formulas. Example workflow: you describe what you want (“summarize these rows,” “generate outreach emails,” “classify categories”) → it produces outputs in cells. Constraint: double-check formulas/labels; AI can infer wrong categories if your sheet headers aren’t clear.
- X-Pilot – Converts course concepts into teaching videos and lesson materials. Example workflow: you provide lesson notes or bullet points → it generates a video script and teaching flow → you get a ready-to-record plan. Constraint: the best results come from giving it structured outlines; vague notes produce vague lessons.
- isFake.ai – Detects likely AI-generated content using pattern indicators like heatmaps and trust scores. Example workflow: upload an image → it highlights suspicious regions → it outputs a confidence/trust score. Constraint: no detector is perfect—use it as a “second opinion,” not a final verdict.
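To make the “second opinion, not a final verdict” point concrete, here’s a tiny Python sketch that combines a detector’s trust score with the context checks from the prank story earlier. The thresholds and field names are hypothetical—this is not isFake.ai’s actual API, just the shape of the reasoning:

```python
# Hypothetical triage: trust_score is 0.0 (likely AI-generated) to 1.0
# (likely authentic). Thresholds are illustrative, not a product's real API.
def triage(trust_score: float, has_exact_location: bool, has_eyewitness: bool) -> str:
    red_flags = 0
    if trust_score < 0.4:        # detector leans "AI-generated"
        red_flags += 1
    if not has_exact_location:   # no address means nothing to dispatch to anyway
        red_flags += 1
    if not has_eyewitness:       # nobody has actually seen it in person
        red_flags += 1
    # Require at least two independent red flags before treating it as fake,
    # so the detector alone never decides the outcome.
    if red_flags >= 2:
        return "verify via non-emergency line"
    return "gather details, then decide"

print(triage(0.2, False, False))  # → verify via non-emergency line
```

The design point: the detector score is just one of three signals, so a false positive from the detector can’t trigger the “fake” path by itself.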
If you want a prompt that’s actually useful (and not just filler), try this one:
Prompt: “Act as a school safety coordinator. Write a short, parent-friendly guide (about 250–350 words) explaining how AI-generated images can be used for harmful pranks that trigger emergency calls. Include: (1) warning signs parents should watch for, (2) what to do instead of calling 911 immediately, (3) a simple script parents can use to talk to teens, and (4) a checklist parents can keep on their phone.”
Quick example output idea: “If a teen sends an ‘urgent’ image with no address or real-time details, pause. Use your local non-emergency line to verify. Ask for the exact location, what’s happening now, and whether there’s a live video or eyewitness report. Then talk about consequences: emergency resources are limited, and false calls can delay help for someone who actually needs it.”