CSUSB students are part of a new generation navigating an online world where artificial intelligence can mimic reality — from fake emails and text messages to cloned voices that sound like someone they trust. Cyber experts warn that vigilance, not fear, is the new digital survival skill.

A new Google Threat Intelligence report says hackers are now using artificial intelligence to build “adaptive” malware that can change itself in real time and to create highly personal phishing scams. The report explains that this malware can rewrite its own code to hide from antivirus software. Cybernews journalist Stefanie Schappert calls it “a new operational phase of AI abuse” and warns that the barrier to entry is dropping as criminal toolkits spread online.

For years, the golden rule of online safety was simple: Don’t click on strange links. People were told to watch out for bad spelling, broken English, or messages from people they didn’t know. But that advice no longer works. The new wave of AI-powered scams looks perfectly normal. Emails are written in flawless grammar, fake websites look exactly like real ones, and text messages sound like they’re from someone you know.

Now imagine getting a text from your mom’s number saying, “Hey, did you see that post from your aunt Sarah? Anyway, I need a favor; can you respond to her right away?” It uses your mom’s tone, includes the name of someone you actually know, and then sends you a link that looks like a photo album or a class project. You wouldn’t think twice before clicking. Or imagine you receive an email from the university’s financial aid office mentioning the exact type of loan you have, warning that there’s a “problem with your disbursement.” The email uses real university logos and professional wording, and it links to a page that looks just like the CSUSB myCoyote portal. For a student waiting on their financial aid, it feels urgent and totally believable.

This is the new reality. Hackers are now using AI to study people’s lives online, reading their social media posts, analyzing public profiles, and even scanning leaked data from past breaches, to design scams that feel personal. These attacks don’t stand out; they blend in. The old warning signs we were taught to look for (bad spelling, weird phrasing, obvious fake names) don’t apply anymore. The modern scam doesn’t feel suspicious; it feels familiar. It uses your relationships, habits, and responsibilities against you. That’s what makes campuses like California State University, San Bernardino especially vulnerable.

Most CSUSB students are first-generation college students managing studies, jobs, and family responsibilities, often all at once. They rely heavily on email and text communication for everything from class announcements to financial aid updates. With limited time and many distractions, they are the perfect targets for scams that use urgency to trick people into acting fast. An AI-written message about tuition holds or scholarship opportunities can easily slip through because it sounds real, looks real, and shows up right when you expect it.

Experts say these new scams are so effective because they target emotion rather than logic. When people are anxious, tired, or rushed, they respond automatically. Attackers know this. That’s why AI-generated phishing messages often create a sense of panic or curiosity—like “Your account has been locked,” or “You’ve won a scholarship, click to claim.” The goal is to make the victim react before they think.

The Google report goes even further, warning about malware that doesn’t just trick people but can also change itself to stay hidden. These new “adaptive” viruses can rewrite their code on the fly, helping them bypass traditional antivirus programs. Think of it like a thief who can instantly change their clothes, fingerprints, and voice every few minutes, so no one can identify them. This makes cybersecurity much harder, especially for schools and organizations that manage thousands of student and staff accounts.

For CSUSB, the potential risks go beyond individual students. A single compromised student account could expose sensitive data from financial aid, research, or the student newsroom. For example, if a student worker receives a fake “system update” email and installs malicious software, it could spread across shared drives, shutting down services or leaking personal information. Even the Coyote Chronicle could be targeted with a fake press release or deepfake video meant to discredit the publication or mislead readers.

So what can students and staff do in this new landscape? Experts say we need a new mindset. The rule isn’t “Don’t click strange links” anymore; it’s “Verify, don’t just trust.” That means if a message feels urgent, emotional, or even just slightly off, pause and check it through another source. If your bank sends an alert, don’t click the link; open your bank’s app or website directly. If a friend texts asking for money, call them to confirm. If the university says there’s a problem with your aid, contact the financial aid office directly using the number on CSUSB’s official website.

This shift matters because AI is getting better at blending in. The most powerful defense now is slowing down. Most scams succeed not because people are careless, but because they are busy and trust what seems familiar. Learning to take an extra moment to double-check before acting can make the difference between safety and regret.
