✦ AI Safety · Senior Digital Guide
I learned AI at 65. Nobody warned me about the real risks.
This is the guide I wish I had — written from personal experience, not theory.
📌 Bottom Line First — Your AI Safety Checklist
AI is safe to use. But only if you know where the real dangers are.
What to NEVER do
- Share Social Security / SIN numbers with AI
- Enter bank account or credit card details
- Trust urgent AI-generated messages asking for money
- Believe everything AI tells you without verifying
- Use the same password everywhere
What to ALWAYS do
- Use AI as a tool — you stay in control
- Verify medical / financial / legal info with professionals
- Use strong, unique passwords for each account
- Slow down when something feels urgent or suspicious
- Tell a trusted person if something seems wrong
The Phone Call That Made Me Realize the Risk
It happened about two years after I started using AI regularly.
A friend called me, shaken. She had received an email that looked exactly like it was from her bank. The logo was perfect. The language was professional. The email warned her that her account had been accessed by an unauthorized party and asked her to click a link immediately to secure it.
She clicked the link. She entered her details. Within 48 hours, her account had been drained.
That email was not written by a person. It was written by AI.
I thought about that call for days. I had been enthusiastically recommending AI tools to everyone around me. I had been talking about how much easier they made daily life. But I had not been talking about the other side of this technology — the ways that the same tools that help us can also be used against us.
This guide is my attempt to fix that. Everything here comes from real experience, real mistakes, and real conversations with people who were hurt by not knowing these things.
“AI is not dangerous the way a weapon is dangerous. It is dangerous the way fire is dangerous — extraordinarily useful, and genuinely harmful when you do not understand how it works.”— JongWoo, 65, Canada
- $3B+ lost to AI-assisted scams targeting seniors annually
- 60+ is the age group most targeted by digital fraud
- 90% of AI scams are preventable with basic awareness
Recommended illustration: “The two faces of AI — helper and threat”
A clean split infographic: “AI used FOR you” on the left (writing assistance, health info, travel planning, family connection icons) vs “AI used AGAINST you” on the right (fake emails, voice cloning, deepfake calls, phishing icons). Both sides equal in size — the message is balance and awareness, not fear. Flat design, navy and gold palette, English labels.

Part 1 — How Scammers Use AI Against You
The first thing every senior needs to understand is this: the same AI tools you use to write emails, plan trips, and answer questions are available to criminals. And criminals have been using them — aggressively — to target people over 60.
Here is why seniors are disproportionately targeted. It is not because older people are less intelligent. It is because seniors tend to be more trusting, often have more savings, and are sometimes less familiar with how sophisticated digital deception has become.
Knowing how the attacks work is your first and most powerful defense.
The 6 Most Common AI-Powered Scams Targeting Seniors
Scam #1 — AI Phishing Emails
Emails written by AI that perfectly mimic your bank, Amazon, Medicare, or the CRA/IRS. No spelling errors. Professional tone. Impossible to distinguish from the real thing by appearance alone.
Scam #2 — Voice Cloning
Scammers record 3–10 seconds of your grandchild’s voice from social media, then use AI to clone it. They call you sounding exactly like your grandchild, claiming to be in trouble and needing money urgently.
Scam #3 — Fake AI Assistants
Pop-ups or websites offering a “free AI assistant” that asks you to download software — actually malware — or to share personal information to “set up your account.”
Scam #4 — Romance Scams
AI-powered chatbots that maintain convincing romantic conversations for weeks or months, building emotional trust before asking for money for a “crisis” or “emergency.”
Scam #5 — Tech Support Fraud
AI-generated pop-ups claiming your computer has a virus, with a phone number to call. The “technician” (or AI voice) asks for remote access to your device or payment to “fix” a problem that does not exist.
Scam #6 — Investment AI Bots
Convincing AI chatbots on social media or messaging apps that claim to offer exclusive investment opportunities with “guaranteed returns.” They are always fake. No legitimate investment guarantees returns.
⚠️ The Universal Warning Sign
Any message — email, text, phone call, or AI chat — that creates urgency, demands secrecy, requests payment by gift card, wire transfer, or cryptocurrency, or asks for your personal identification numbers is almost certainly a scam.
Legitimate organizations — banks, government agencies, healthcare providers — do not operate this way. When in doubt: hang up, close the tab, and call the organization directly using a number you already know.
Recommended illustration: “Recognizing AI scams before they strike”
A warm editorial illustration showing an older person receiving a suspicious phone call. Their expression shows alertness and calm skepticism — they are not panicked, they are aware. In a thought bubble above them: the key warning signs they are checking (urgency, secrecy, unusual payment). The phone is held at arm’s length, not clutched in fear. Watercolor style, navy and gold tones, empowering rather than frightening.

Recommended infographic: A clean visual checklist titled “Is This a Scam?” with a simple decision tree. Starting question: “Does this message make you feel rushed or scared?” → Yes: red flag. “Does it ask for gift cards, wire transfer, or crypto?” → Yes: red flag. “Does it ask you to keep it secret?” → Yes: red flag. Flat design, red and green indicators, English labels, clear and readable at any age.

Part 2 — Using AI Safely: The Rules I Follow Every Day
Here is something important to say clearly: AI tools like ChatGPT are genuinely safe for everyday use. Hundreds of millions of people use them without incident. The risks come from specific behaviors — and those behaviors are entirely avoidable.
These are the rules I follow. I developed them through experience, through conversations with security experts, and unfortunately through watching people I know get hurt when they did not know these things.
🔐 My Personal AI Safety Rules — Non-Negotiable
- ✗ Never enter identification numbers. Social Security Number, Social Insurance Number (Canada), Medicare number, passport number — none of these ever go into any AI tool. Ever.
- ✗ Never enter financial details. Bank account numbers, credit card numbers, routing numbers, PINs, passwords. Not even partial numbers.
- ✗ Never share other people’s private information. If asking AI for help with a letter about someone else, use their first name only. No addresses, phone numbers, or personal details.
- ✓ Always verify medical, legal, and financial advice. AI gives a starting point for understanding. A doctor, lawyer, or financial advisor gives the final word. Always.
- ✓ Always use official websites directly. If AI tells you to visit a website, type the address into your browser yourself rather than clicking a link — especially for banking and government sites.
- ✓ Always slow down when something feels urgent. Urgency is the scammer’s most powerful tool. A legitimate message can wait five minutes while you verify it independently.
What Is Safe to Share with AI
✓ These are completely safe to share with AI
General descriptions of your situation (“I am a retired person in my 60s”), first names only when asking for help with personal messages, general location like your city or country for travel planning, health topics and symptoms for general understanding (not diagnosis), and any factual questions you would ask a knowledgeable friend.
Think of AI like a public library reference desk. It is completely appropriate for general help. It is not the place for your private identification or financial details.
Part 3 — Protecting Your Accounts and Devices
I used the same password for everything for years. My email, my bank, my social media accounts — all the same password. I thought it was smart because I could always remember it.
It was one of the most dangerous things I was doing.
When hackers break into one website — and it happens to large websites regularly — they try that same username and password combination on every other major site. If you use the same password everywhere, one breach exposes everything.
Password Security — The Non-Negotiable Foundation
- Use a different password for every important account — especially email, banking, and government services
- Make passwords long rather than complicated: “BlueMountain Sunrise2024” is harder to crack than “P@ss1”
- Consider a password manager — apps like 1Password or Bitwarden store all your passwords securely behind one master password
- Enable two-factor authentication (2FA) on your email and banking accounts — this adds a second verification step even if your password is stolen
Email Safety — Where Most Attacks Begin
The majority of successful scams begin with an email. AI has made phishing emails — fake emails designed to steal your information — essentially indistinguishable from real ones in terms of writing quality and appearance.
The only reliable defense is behavioral, not visual.
📧 Email Safety Rules
- ✗ Never click links in emails about your bank, government benefits, or any financial account. Go to the website directly by typing the address yourself.
- ✗ Never download attachments from unexpected emails, even if they appear to come from someone you know. Their account may have been hacked.
- ✓ Check the actual email address carefully — not the display name. Scammers use addresses like “service@amazon-support-help.com” instead of the real Amazon address.
- ✓ When an email concerns something important, call the organization directly using a number from their official website — never a number provided in the email.
Recommended illustration: “Building your digital shield — daily safety habits”
A calm editorial illustration of an older person at a tidy desk reviewing their email thoughtfully — not anxiously — with a small checklist visible on a notepad beside them. The scene conveys organized, confident digital citizenship rather than fear. Morning light, warm tones. In the background, a strong door with a solid lock as a gentle metaphor for digital security. Watercolor style, navy and gold.

Recommended infographic: A clean educational comparison of weak and strong passwords. Left column “Weak”: same password everywhere, short, obvious. Right column “Strong”: unique per site, long phrase, password manager icon. Center: a simple lock that goes from open to closed. English labels, flat design, navy and green palette. Educational, not intimidating.

Part 4 — Verifying What AI Tells You
This is the part that surprised me most when I started using AI regularly.
ChatGPT is remarkably helpful. It is also capable of being confidently, fluently, completely wrong.
AI language models do not “know” things the way a human expert knows things. They generate text based on patterns in their training data. Most of the time, this produces accurate, helpful responses. Sometimes it produces plausible-sounding information that is simply incorrect.
This is called “hallucination” in the AI field. And it is one of the most important things any new AI user needs to understand.
When to Always Verify — No Exceptions
⚠️ Critical Verification Rule
Before acting on any AI-provided information about your health, your medications, your legal rights, your financial decisions, or your government benefits — verify with a qualified professional. AI is a research starting point. A doctor, lawyer, or financial advisor is the finishing point.
- Medical symptoms and diagnoses — AI can help you understand terms and prepare questions; only your doctor can diagnose you
- Medication interactions — always verify with your pharmacist, especially when starting a new medication
- Legal rights and obligations — laws vary by region and change over time; AI may give you outdated information, or rules that apply in a different jurisdiction than yours
- Financial advice and tax guidance — always confirm with a qualified advisor before making significant financial decisions
- Any specific facts, statistics, or quotes — AI can confidently state inaccurate information; verify important facts from original sources
How to Spot When AI May Be Wrong
Over time I have developed a sense for when to be more cautious with AI responses. Here are the patterns I watch for.
💡 Signs to verify before trusting
Be extra careful when AI gives very specific numbers, dates, or statistics without citing a source. Be cautious when the answer seems too neat or too certain for a genuinely complex topic. Always cross-check when the information involves your personal health, money, or legal situation. And pay attention when your own experience or instinct says something feels off — trust that feeling.
Part 5 — What to Do If Something Goes Wrong
It happens to careful, intelligent people. If you believe you have been the victim of an AI-assisted scam, or if you have accidentally shared information you should not have, the most important thing is to act quickly and without shame.
Shame keeps people from getting help. Getting help is what limits the damage.
Immediate Steps If You’ve Been Scammed
1. Stop all contact with the scammer immediately. Do not send more money, even if they pressure you or threaten you. Threats are part of the script.
2. Contact your bank right away. Call the number on the back of your card. Explain what happened. Ask them to freeze suspicious transactions and review recent activity.
3. Change your passwords. Start with email — email is the key to everything else. Then banking, then other important accounts.
4. Report it. In Canada: Canadian Anti-Fraud Centre at 1-888-495-8501 or antifraudcentre.ca. In the US: Federal Trade Commission at reportfraud.ftc.gov. Your local police as well.
5. Tell someone you trust. You do not have to handle this alone. A trusted family member or friend can help you navigate the next steps and provide practical and emotional support.
Recommended illustration: A warm, uplifting editorial scene showing an older person using AI confidently — writing an email, planning a trip on a map, video calling family — all while a gentle visual shield or protective light suggests safety awareness. The scene is peaceful and empowered, not anxious or guarded. Soft watercolor style, green and gold tones, morning light, a sense of capability and calm mastery.

Recommended infographic: A clean four-quadrant layout titled “Your AI Safety Framework.” Top-left: “What to SHARE” (general questions, first names, situations). Top-right: “What to NEVER SHARE” (ID numbers, passwords, financial details). Bottom-left: “When to VERIFY” (health, legal, financial info). Bottom-right: “When to STOP and CHECK” (urgency, secrecy, unusual payments). Flat design, English labels, navy and gold, clear and memorable.

A Final Word — Stay Curious, Stay Safe
I want to end with something important.
This guide is not meant to make you afraid of AI. I use AI every single day. It has genuinely enriched my life — helping me write, learn, connect, and create things I never thought I would create at this age.
The risks are real, but they are manageable. The same way you learned to drive a car and understood its risks without giving up on driving — you can learn to use AI and understand its risks without giving up on the extraordinary things it makes possible.
The people who are most at risk are not the people who use AI. They are the people who encounter AI — in scam emails, in fraudulent phone calls, in fake websites — without understanding how it works.
Knowledge is your protection. And you now have more of it than you did an hour ago.
“I am not afraid of AI. I am informed about it. That is a very different thing — and it makes all the difference.”— JongWoo
Summary & Key Tips
Everything you need to remember — in one place.
- AI is safe when you know the rules. The risks are specific and avoidable. Most people who are harmed by AI-related scams simply did not know what to watch for. Now you do.
- Never share personal identification or financial information with any AI tool. This is the single most important rule. Social Security numbers, bank details, passwords — none of these ever belong in a ChatGPT conversation.
- Urgency is always a red flag. Scammers use urgency and fear because it bypasses rational thinking. When something feels rushed or threatening — slow down. A legitimate message can wait five minutes.
- Always verify health, legal, and financial information with professionals. AI is an exceptional research starting point. Your doctor, lawyer, or financial advisor is where important decisions get finalized.
- Use unique passwords for each important account. One password for everything is the single most dangerous habit in digital life. A password manager makes using unique passwords effortless.
- Voice cloning is real — and targeting seniors. If you receive a call from a “grandchild” or “family member” in crisis asking for money, hang up and call them directly on the number you already have. Then confirm with them before doing anything.
- If something goes wrong, act fast and ask for help. Contact your bank immediately, change your passwords, report it to the relevant fraud authority, and tell someone you trust. Speed limits the damage. Shame does not protect you.
Written by JongWoo · 2025
#AISafety #SeniorDigitalSafety #ChatGPTSafety
