Hackers Don’t Break In, They Log In: Social Engineering, AI, and “Polite Paranoia” You Can Use Today

Social engineering is how most “hacks” really happen

The weak point is humans, not firewalls. In this friendly, plain-English guide we unpack what social engineering is, what ethical hackers like Rachel Tobac actually do, and how AI (voice cloning, agentic calls, deepfakes) is changing the game. You’ll learn a simple threat-modeling approach and a “polite paranoia” checklist (passkeys, number matching, and out-of-band checks) to keep your team and family safe.

Intro (coffee in hand ☕)

If you think hacking is a hoodie and a terminal, you’re only seeing half the movie. Most real-world breaches start with a phone call, a text, or a friendly email that nudges a human to open the door. That’s social engineering, and it’s wildly effective. In this post, we’ll unpack what social engineering is, what ethical hackers actually do, how AI is supercharging both offense and defense, and the practical habits (my “polite paranoia” playbook) you can roll out at home or across your company.

I’m writing this after watching an excellent interview with social engineer Rachel Tobac, who showed—live—how easy it is to spoof a caller ID, clone a voice, and trick a friend into giving up a security answer. Chills. We’ll use lessons from that chat plus current research to give you a friendly, no-jargon guide you can act on today.

What is social engineering?

Social engineering is the art of persuading a person to take an action that helps the attacker—click a link, read out a code, reset a password, hold a door. Think phishing (email), smishing (SMS), vishing (voice), and in-person pretexts. It’s psychology, pressure, and timing.

Why does it matter? Because “the human element” shows up in most breaches. Verizon’s Data Breach Investigations Report has consistently found that people—errors, misuse, and social engineering—are involved in the majority of incidents (e.g., 68% in the 2024 report; 2025 still shows the human element dominating).

When you hear “they were hacked,” translate it to: “someone convinced a human to open the door.”

What is an ethical hacker?

Ethical hackers (also called white-hat hackers) are pros who use the same techniques as criminals—with permission—to find weaknesses before the bad folks do. They run penetration tests, launch simulated phishing campaigns, and practice social engineering to prove risk, then help fix it. A common cert is the CEH (Certified Ethical Hacker), but the heart of the work is disciplined testing and transparent reporting. (EC-Council defines ethical hacking as legally and legitimately intruding into systems to determine vulnerabilities.)

Ethical hackers aren’t there to “pwn” you. They’re there to teach and harden your defenses.

Who is Rachel Tobac?

Rachel Tobac is the CEO of SocialProof Security, a renowned ethical hacker and social engineer who has competed (and podiumed) multiple times in DEF CON’s Social Engineering Capture the Flag. She trains companies to replace weak identity checks (like knowledge-based authentication, or KBA) with stronger MFA and process controls, and popularized the idea of being “politely paranoid.” She’s been featured by RSA Conference, TechCrunch, and many security orgs.

Her recent demo (the video that inspired this post) shows how caller-ID spoofing plus voice cloning can extract a security answer from a trusted friend in seconds: a perfect teaching moment for why out-of-band verification and secret passphrases matter.

How (embarrassingly) easy it is to hack humans

From Rachel Tobac’s interview, here’s a simple (and scary) chain:

  • Spoof caller ID: Make a call that shows up as your contact in someone’s phone. If they have you saved, your name and photo appear—credibility unlocked.
  • Clone your voice: With as little as ~10 seconds of clean audio (and you’ve got hours online, right?), an attacker can build a voice clone that sounds uncannily like you. Regulatory bodies and consumer protection agencies have warned that voice cloning is now trivial and weaponized in scams.
  • Ask a “harmless” question: “Hey, remind me—what was our middle-school mascot?” Oops. That’s a classic knowledge-based authentication (KBA) answer. Share it once and attackers can reset bank accounts, inboxes, or mobile lines that still rely on KBA.

This is why KBA (“mother’s maiden name,” “first pet”) is fundamentally broken. It’s all online—or can be guessed or coaxed.

Better: move to multi-factor authentication (MFA) and phishing-resistant methods (details below). Even “basic” 2-step verification with SMS blocks a huge share of real-world attacks: Google measured SMS codes stopping 100% of automated bots, 96% of bulk phishing, and 76% of targeted attacks. It’s not perfect—but it’s so much better than nothing.

Modern attack methods you should actually care about

Here are the things I see again and again when helping teams of every size:

  1. MFA fatigue (“push bombing”)
    Attackers log in with a leaked or reused password, then spam your phone with approval prompts until you tap “Approve” just to make the noise stop. Microsoft moved to number matching in its Authenticator—showing a code on the sign-in screen that you must enter in the app—to blunt this. Turn it on everywhere you can. 
  2. Help-desk social engineering
    Groups like Scattered Spider / Octo Tempest are infamous for calling support desks, claiming “I dropped my phone in the toilet; need my MFA moved to a new device,” and sweet-talking their way to an account reset. Microsoft has documented this crew’s tactics—SIM swaps, credential phishing, and help-desk abuse. Lock down your support playbooks. 
  3. Credential reuse + password spraying
    One breach feeds a thousand logins. Attackers try the same email/password on your VPN, cloud apps, and payroll. This is why unique passwords (hello, password manager) and MFA are non-negotiable. DBIR calls credential attacks one of the top initial access methods year after year.
  4. Deepfake video/voice in fraud
    2024 gave us a whopper: a finance employee allegedly wired $25M after a deepfake video call made it look like the CFO and colleagues were on the line. That’s not a movie plot—that’s modern business email compromise (BEC) with AI makeup. Build an out-of-band verification culture for any money movement.
  5. Classic phishing, now LLM-polished
    The grammar is better, the timing is sharper, and the payloads are the same: cookie theft, session hijacks, or malware that steals your refresh tokens. Training plus technical controls (DMARC, safe-link rewriting, isolation) reduce the blast radius. (A quick way to check your own domain’s DMARC record is sketched just after this list.)
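If you’re curious where your own domain stands, here’s a minimal sketch using the third-party dnspython package (the domain below is just a placeholder) that looks up a domain’s published DMARC policy:

```python
# A minimal sketch: does this domain publish a DMARC policy?
# Assumes the third-party dnspython package (pip install dnspython).
import dns.resolver

def dmarc_policy(domain):
    """Return the raw DMARC TXT record for `domain`, or None if absent."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        txt = b"".join(rdata.strings).decode()
        if txt.lower().startswith("v=dmarc1"):
            return txt  # e.g. "v=DMARC1; p=reject; rua=mailto:..."
    return None

print(dmarc_policy("example.com") or "No DMARC record found")
```

If this returns None (or a policy of p=none), mail pretending to come from your domain is much easier to land.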

AI has entered the chat: how attackers use it

AI isn’t “the hacker.” It’s the copilot that speeds research and scales the con:

  • Voice cloning at scale — Automated “Hi mum, I’m in trouble” calls and CEO-impersonation vishing are spiking. The FTC published consumer warnings specifically about AI voice clone scams. Use secret passphrases and second channels to verify.
  • Polished phishing — LLMs generate emails that mirror your internal tone and style. They can summarize your LinkedIn, GitHub, or press releases to craft you-shaped bait.
  • Agentic calling — Researchers and creators have demoed AI agents that place phone calls, follow a script, and adapt to answers—essentially “robot social engineers.” The takeaway isn’t panic; it’s process: design your help-desk and finance flows so that even a perfect voice can’t bypass controls without out-of-band checks.
  • Data-broker scraping as recon — Attackers don’t need to “hack” your privacy; they can buy it. Recent reporting and regulatory actions show data brokers still make it hard to opt out—and that’s the fuel for convincing pretexts. Shrink your digital footprint.

Threat modeling, coffee-chat edition

Threat modeling is a simple, structured way to ask: what are we protecting, from whom, how could they get it, and what’s our best move to stop them? In practice, you list your critical assets (customer data, payroll, admin console), the likely attackers (opportunistic scammers, insiders, targeted crews), map the most realistic attack paths (phishing + MFA fatigue, help-desk reset, vendor compromise), then score likelihood vs. impact to prioritize.

The outcome isn’t a thesis: it’s a short, living plan with specific mitigations (e.g., passkeys for admins, number-matching MFA, out-of-band payment checks, least-privilege access), clear owners, and assumptions you’ll revisit when something changes (new app, new vendor, new team). Done right, threat modeling turns vague “security worry” into a focused backlog that actually lowers risk and saves money.

1) Who might target you?
Are you public-facing? In finance? Gaming? A founder on X? A hospital? Your profile changes the likely attacker (criminal, competitor, opportunist).

2) What do they want?
Money (wire fraud), access (VPN/admin), data (customer PII), clout (defacement), or disruption (ransomware).

3) How would they get it?
Pick 3–5 realistic paths: phishing + MFA fatigue; SIM swap + help-desk reset; vendor compromise; deepfake CFO Zoom + urgent wire.

4) What’s the one action that would stop each path?
Examples: number-matching MFA; no KBA resets; passkeys/security keys for admins; dual-control payments with voiceback on a known number; geo/time-based step-up auth.
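To show how little ceremony threat modeling needs, here’s a toy sketch that turns answers to those four questions into a prioritized backlog by scoring likelihood × impact. Every path, score, and mitigation below is invented for illustration:

```python
# A toy threat-model backlog: score likelihood x impact, sort, and pair
# each path with the one action that would stop it. All values are examples.
attack_paths = [
    # (path, likelihood 1-5, impact 1-5, mitigation)
    ("phishing + MFA fatigue",      4, 4, "number-matching MFA"),
    ("help-desk reset via pretext", 3, 5, "no KBA resets; call back on a directory number"),
    ("deepfake CFO + urgent wire",  2, 5, "dual-control payments with voiceback"),
    ("vendor compromise",           2, 4, "least-privilege vendor access"),
]

backlog = sorted(attack_paths, key=lambda p: p[1] * p[2], reverse=True)
for path, likelihood, impact, mitigation in backlog:
    print(f"risk {likelihood * impact:>2}: {path} -> {mitigation}")
```

Highest risk first, one concrete mitigation each: that’s the whole backlog.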

AI psychosis: when chatbots become yes-men (and why that’s dangerous)

There’s a growing conversation about people over-relying on AI companions that affirm delusional beliefs or blur reality in stressful moments. While “AI psychosis” isn’t a clinical term, we’ve already seen tragic and troubling incidents where chatbot conversations appear linked to harm, and experts are warning about mental-health risks, especially for teens and young adults.

Two practical points for families and teams:

  • Don’t outsource reality checks to a chatbot. If a conversation tips into dangerous territory (self-harm, conspiracies, illegal activity), a model should redirect and provide resources, but you can’t assume it will. (Policy and safety guardrails vary.)
  • Create human circuits. For kids: encourage open, device-free time with friends and family. For teams: any high-stakes decision (legal, financial, HR) needs human review and documentation.

This isn’t anti-AI. It’s pro-human. Use AI for drafting, summarizing, translating, and content filtering, not as a therapist or a “friend.”

Protecting yourself (and your team) today

Here’s the “polite paranoia” starter pack I recommend to friends, founders, and CFOs. If you implement just half of this, you’ll be miles ahead.

1) Kill knowledge-based authentication (KBA)

If your bank, telco, or help desk verifies identity by DOB, address, or “favorite teacher,” you’re one phone call from a full account takeover. Ask for app-based MFA or security questions you can set to nonsense answers stored in your password manager. (Better: passkeys/security keys—see #3.)
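If a provider forces security questions on you, a random string beats any real answer. A minimal sketch using only Python’s standard library (the length is an arbitrary choice):

```python
# Generate a random "nonsense answer" for a forced security question.
# Store it in your password manager; never reuse it across sites.
import secrets

def nonsense_answer(length=20):
    return secrets.token_urlsafe(length)[:length]

print(nonsense_answer())  # meaningless, unguessable, un-Googleable
```

The “answer” means nothing to anyone, including you, and that’s the point: it can’t be Googled, guessed, or coaxed out of a friend.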

2) Turn on MFA everywhere—and upgrade it

  • At minimum: 2-step verification (SMS) is dramatically better than nothing. Google’s real-world data backs this.
  • Better: app-based prompts or TOTPs (time-based one-time passwords; e.g., Authy, 1Password, Microsoft Authenticator). A short sketch of how TOTP codes are derived follows this list.
  • Best (for admins & high-risk roles): FIDO2 security keys or passkeys. They’re phishing-resistant by design. The FIDO Alliance has more on what passkeys and FIDO are.
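For the curious, a TOTP code isn’t magic: it’s an HMAC of the current 30-second time step, truncated to six digits. A minimal standard-library sketch of the RFC 6238 derivation (the base32 secret below is a placeholder, not a real one):

```python
# How a TOTP authenticator derives its 6-digit codes (RFC 6238, sketch).
import base64, hmac, struct, time

def totp(secret_b32, digits=6, period=30):
    key = base64.b32decode(secret_b32.upper())
    counter = int(time.time()) // period           # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret, for illustration only
```

Note what this implies: if you type a TOTP code into a fake site, the attacker can replay it within the time window. That’s the gap passkeys (next section) close.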

3) Move to passkeys/security keys

Passkeys use public-key crypto tied to your device—no password to steal, nothing to phish. They’re now supported by the big platforms and are the easiest path to strong authentication for most users. Start with your email, password manager, and cloud console.
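Under the hood it’s challenge-response signing: the site stores only your public key, sends a fresh challenge at login, and your device signs it. Here’s a simplified sketch of that core idea using the third-party cryptography package; real WebAuthn/passkey flows add origin binding, user verification, and attestation on top, so treat this as the concept, not the protocol:

```python
# The core idea behind passkeys: the server stores only a public key.
# Assumes the third-party cryptography package (pip install cryptography).
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrollment: a key pair is created on your device; only the public half leaves it.
device_key = Ed25519PrivateKey.generate()
server_public_key = device_key.public_key()

# Login: the server sends a fresh random challenge; the device signs it.
challenge = os.urandom(32)
signature = device_key.sign(challenge)

# The server verifies the signature; there is no shared secret to phish or leak.
server_public_key.verify(signature, challenge)  # raises InvalidSignature if tampered
print("login verified - no password ever crossed the wire")
```

Because the private key never leaves your device and each signature is bound to a fresh challenge, there’s nothing reusable for a phisher to capture.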

4) Defend against MFA fatigue

Enforce number matching in Microsoft Authenticator (and similar “challenge codes” elsewhere). Teach people: never approve a request you didn’t initiate. If prompts keep coming, report it, fast.
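To see why number matching blunts push bombing, here’s a toy sketch (not Microsoft’s implementation) of the control: approval requires typing a short code that only the real sign-in screen displays:

```python
# Number matching in miniature: approval requires a code only the real
# sign-in screen shows, so blind "Approve" taps stop working.
import secrets

def start_sign_in():
    # Two-digit code shown on the sign-in screen the user is looking at.
    return f"{secrets.randbelow(100):02d}"

def approve(displayed_code, typed_code):
    # The authenticator app asks the user to type the code, not tap Approve.
    return secrets.compare_digest(displayed_code, typed_code)

code = start_sign_in()
print(approve(code, code))  # True: the user saw the screen and typed it
print(approve(code, "00"))  # almost always False: an attacker guessing blind
```

The point isn’t the code; it’s that approval now requires information an attacker spamming prompts can’t see.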

5) Build an “out-of-band” culture

For wires, payroll, or password resets: always verify using a second channel you already control (e.g., call back on a number from your directory, not the email or text you just got). This alone would stop most deepfake/BEC disasters.
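If you want the rule to be hard to skip, encode it in the process. Here’s a hypothetical sketch of a dual-control, out-of-band gate for wires; the names and fields are invented for illustration:

```python
# A hypothetical dual-control + out-of-band gate for money movement.
from dataclasses import dataclass, field

@dataclass
class WireRequest:
    amount_usd: float
    beneficiary: str
    requested_by: str
    approvals: set = field(default_factory=set)
    voiceback_done: bool = False  # set only after calling a number from YOUR directory

def can_release(req):
    # Two approvers who are not the requester, plus an out-of-band callback.
    independent = req.approvals - {req.requested_by}
    return len(independent) >= 2 and req.voiceback_done

req = WireRequest(25_000_000, "New Vendor Ltd", "cfo@company.example")
req.approvals.update({"controller@company.example", "treasury@company.example"})
req.voiceback_done = True  # verified by calling the CFO's saved number
print(can_release(req))    # True only when every control has passed
```

Process beats vigilance: even a perfect deepfake of the CFO can’t make can_release return True on its own.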

6) Shrink your data exhaust (OSINT)

Data brokers publish your phone, address, relatives, and more. That’s recon ammo for social engineers. Use services like Mozilla Monitor or manual opt-outs to reduce your footprint. (Also watch for regulatory action against broker abuses—progress is happening.)

7) Prepare for voice-clone scams

  • Set a family/company passphrase that’s never posted or joked about online (a quick way to generate one is sketched after this list).
  • If an urgent call asks for money/codes, hang up and call back on your saved contact.
  • Remind loved ones that clones exist; the FTC has consumer guidance worth sharing.
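A quick, offline way to generate such a passphrase (the wordlist below is a tiny stand-in; use a longer one in practice):

```python
# Generate a short, memorable family passphrase offline.
import secrets

WORDS = ["otter", "maple", "thunder", "biscuit", "comet", "harbor", "velvet", "dune"]

def family_passphrase(n_words=3):
    return "-".join(secrets.choice(WORDS) for _ in range(n_words))

print(family_passphrase())  # e.g. "maple-comet-dune": say it, never post it
```

The point is that it’s random, spoken only, and never typed into a chat or posted online.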

8) Choose private channels wisely

For sensitive chats, Signal now supports usernames, so you don’t have to expose your phone number. It’s a solid default for end-to-end encrypted messaging.

9) Train like you mean it (but back it with controls)

Run short, friendly simulations and debriefs, then back people up with technical controls (email authentication, conditional access, session monitoring). No shame. Just learning.

How this helps your business

Let’s bring this home to your P&L:

  • Fewer incidents, less downtime: A single wire-fraud or ransomware event can wipe out a quarter’s profit. Passkeys + number matching + out-of-band approvals reduce that risk dramatically.
  • Happier auditors, better insurance: Carriers increasingly ask about phishing-resistant MFA. Having FIDO2 in place helps with premiums and compliance narratives.
  • Faster onboarding/offboarding: Strong identity flows (no KBA, standardized MFA) make IT faster and safer—especially for remote teams.
  • Brand trust: Your customers won’t see “we were socially engineered” in the headlines. That’s priceless.

Want help prioritizing? Start with admins, finance, and support. Shore up those identity flows first, then roll out passkeys to the rest.

Need help with this topic? How to find vetted devs

If you’re a business owner and want help rolling out passkeys, FIDO2 keys, conditional access, or “polite paranoia” training, I recommend hiring vetted, friendly developers and security engineers who live this stuff every day. Codeable is a good place to start: they vet their developers for both technical skill and professional integrity, and you can find some fantastic, trustworthy people there.

Affiliate Disclosure:
This link will take you to Codeable, a platform I’ve worked with for almost 10 years, and I trust every expert they onboard. Feel free to open a task and ask your question; the link simply credits me as the referrer on the platform.
