A lecture by Professor Xiaoming Wang
Recorded and compiled by Tang Jianhua
In recent years, the financial losses caused by online fraud have risen at an alarming rate. According to the FBI's Internet Crime Complaint Center (IC3), reported losses in the United States from cybercrime exceeded $16.6 billion in 2024, a 33% increase over 2023. The rise of artificial intelligence (AI) has undoubtedly opened a new chapter in fraud.
In this technological age, how can we identify new fraud tactics and protect our property and personal information? The Houston Cornerstone Center invited Dr. Xiaoming Wang, professor of criminal justice at the University of Houston, to deliver a lecture at the center on September 6, 2025, which was also livestreamed online and received enthusiastic responses from the audience.
1. Current Situation of Fraud in the United States
Fraud in the United States has exploded in the past few years. According to the Federal Trade Commission (FTC), American consumers lost more than $10 billion to fraud in 2023, a record high. This figure echoes the FBI-IC3 report, painting a grim picture. The losses stem primarily from several areas:
- Investment scams account for the greatest losses, particularly amid the cryptocurrency boom. Scammers use AI to create fake investment platforms, celebrity-endorsement ads, and even convincing virtual-currency trading websites, fooling victims into believing they are making legitimate, high-yield investments. They typically let victims earn a small "profit" first to build trust, then abscond with the funds once the victim has invested a significant amount.
- Impersonation scams: Scammers exploit victims' fear of authority or trust in family members to extort money. For example, they may pose as Department of Homeland Security or IRS officials, falsely claim that victims have unpaid taxes or are involved in illegal activities, and demand immediate remittance to avoid legal action.
- Technical support scams: These scams often target individuals less familiar with computer technology, particularly the elderly. Scammers impersonate technicians from tech giants like Microsoft and Apple, claiming the victim's computer is infected with a virus or has a security vulnerability. They then trick victims into downloading remote access software, which allows them to steal bank information or transfer funds directly.
In the United States, people aged 60 and over suffer the highest losses from fraud. They are generally more trusting of authority figures and less familiar with complex digital payments, making them vulnerable to investment and tech support scams. Emotional blackmail and impersonation scams also target older adults in particular, for example scammers posing as grandchildren or relatives who falsely claim to be in an emergency and need money. Younger generations are not immune, however. While they may be more technologically savvy than their elders, they also share more personal information online and are more susceptible to "get-rich-quick" schemes, making them potential victims of social media scams, job scams, and investment scams.
Faced with rampant fraud, various U.S. government agencies are actively responding. The FTC, the primary agency protecting American consumers, is responsible for collecting fraud reports, issuing consumer alerts, and providing consumer prevention guides. The FBI and the National White Collar Crime Center also partnered in 2000 to establish the IC3, creating an official platform for reporting cybercrime and coordinating cybercrime investigations in the United States. Meanwhile, the Department of Justice, working with federal and local law enforcement agencies, is actively prosecuting members of fraud rings to deter criminals.
2. The Nature of Fraud: Attacking the Heart
Fraud isn't a modern invention; its history dates back to biblical times. Genesis 27:1-35 records a scheme orchestrated by Jacob's mother to secure for him his father Isaac's blessing. Technological advances, however, have given fraudsters ever more powerful tools. Smartphones allow fraud to occur anywhere, anytime; the internet vastly expands the pool of potential victims; and today, artificial intelligence (AI) has become the latest weapon in the fraudster's arsenal.
The core of every successful scam lies in gaining trust, which is the origin of the term "con artist" (short for "confidence artist"). Another key element is that scammers are often "part of the community." For example, Charles Ponzi, after whom the Ponzi scheme is named, and the American financier Bernard Madoff both built fake high-level investment communities to attract trusting investors. Scams impersonating the Chinese Consulate General often involve Chinese fraud rings exploiting the Chinese community's respect for and trust in government agencies, pretending to offer assistance while actually defrauding victims of personal information or money.
Regardless of the type of fraud, the core routines usually fall into two categories. The first, get-rich-quick schemes, exploits human greed: announcements such as "you have been selected" or "you have won a big prize" push victims to make decisions within a short time. The second, authority imposter schemes, exploits people's fear of authority: scammers impersonate the IRS or the Social Security Administration and claim "you are involved in illegal activity" to apply pressure.
The common feature of both methods is that the victim is given no time to think. Scammers create a sense of urgency and demand an immediate decision. Even when victims ask for more information, scammers may provide fake websites or arrange for accomplices to pose as "satisfied customers" to make the scam appear seamless.
In summary, modern fraudsters use electronic devices to transcend physical boundaries and attack people's minds anytime and anywhere. As long as they "sound convincing", they can succeed. AI can not only help fraudsters "sound convincing" but also "look convincing" visually.
3. The Organization and Division of Labor of Modern Fraud Groups
Modern fraud syndicates are highly organized criminal enterprises with a detailed division of labor and tightly interlocking roles; they often operate across national borders, which makes law enforcement far more difficult. The main roles are as follows:
- The ring leader is responsible for overall planning, command, and behind-the-scenes control. These ring leaders are often located overseas, using anonymizing tools such as cryptocurrency to manage funds and commanding their subordinates through encrypted communication software.
- Call Center Agents/Cyber Actors: They interact directly with victims and execute the fraudulent script. They are usually professionally trained, fluent in multiple languages, and use generative AI to customize dialogue, making scam conversations more convincing.
- Account Mule Recruiters: Responsible for collecting bankbooks, ATM cards, and passwords for so-called "dummy accounts." These accounts are the first stop in the money-laundering chain. Fraudsters use online advertisements, social media, or job-search websites, promising "high pay for easy work," to lure financially disadvantaged or inexperienced individuals into handing over their bank account information.
- Money Laundering Hubs: Responsible for complex fund transfers and laundering that obscure the money trail. Once a victim's funds enter a dummy account, the laundering hubs immediately disperse them across multiple other accounts or convert them directly into cryptocurrency, making them difficult for law enforcement to trace.
- Cash-Out Runners: Responsible for collecting cash on the front lines or collecting payments from victims in person. They are often recruited locally by the fraud ring and bear the highest risk: some unwittingly become accomplices (believing they are running legitimate errands), while others knowingly take the risk for the reward despite being involved in a crime.
- Money Collectors: Collect the stolen money from the runners and report to higher-ups, pooling the funds and handing them over to the mastermind.
In the United States, anyone involved in any of these activities, regardless of whether they directly contact the victim, may face serious federal prosecution. These crimes are typically not prosecuted as a single crime, but rather involve multiple federal offenses due to their interstate or international nature.
4. Applications and Cases of AI in Fraud
Before discussing how AI contributes to fraud, we must first understand what AI is. Simply put, AI is the science of enabling computers to mimic human thinking and learning. It is not a single technology but a collection of branches, and the one most relevant to fraud is generative AI. Generative AI is like a magician that can "create" content out of thin air: by learning from large amounts of data, it generates new, realistic text, images, audio, and video.
For example, if you ask a generative AI to write an urgent notification letter from a bank, it draws on the millions of real bank letters it has learned from, including their writing style, tone, and common vocabulary, and produces a grammatically smooth letter that looks just like the real thing. This technology was originally intended for creative, educational, and commercial purposes, but in the hands of criminals it has become a powerful tool for producing fake content, making fraudulent material increasingly difficult to distinguish from the real thing.
The most common AI applications in fraud are: (1) Voice-cloning scams: AI is used to clone the voices of relatives and friends, mimicking their tone and speech habits, and to fabricate emergencies such as "a car accident," "an arrest," or "a broken phone" so that victims remit money urgently. (2) Phishing messages: large language models (LLMs) generate highly personalized, grammatically fluent, flawless scam emails or text messages. (3) Deepfake video fraud: deepfake technology creates lifelike videos and voices of celebrities, experts, or corporate executives, which are then used in fake advertisements or video conferences. In February 2024, a finance employee at a multinational company in Hong Kong was instructed by a fake "CFO" during a video conference to transfer $26 million. The fraud ring used deepfake technology to make the fake CFO appear lifelike on the video call, dispelling the employee's initial suspicions.
5. Preventing AI Fraud
Facing AI fraud, we must establish a systematic preventive mindset of "three don'ts and five musts." The three don'ts are: don't trust, don't click, don't transfer money. Even if you hear a "familiar" voice, do not trust it completely; do not click on links or attachments from unknown sources; and be vigilant about any request for immediate remittance or transfer. The five musts are: hang up, verify, set a "password," protect your personal information, and improve your anti-fraud knowledge. If you receive a suspicious call, hang up immediately; verify by contacting relatives, friends, or official agencies through known and secure channels (such as the original phone number); establish a unique secret verification word with your family to confirm identity; reduce the sharing of personal voice and video data in public; and regularly check anti-fraud information released by official agencies such as the FTC.
Despite advances in AI technology, some deepfakes still have flaws. Even the most advanced AI models are limited by the quality and quantity of their training data and by the complexity of the generation model itself. When producing content that requires fine detail or involves complex physical laws, AI is prone to inconsistencies and unnatural artifacts, such as a face out of proportion with the body, stiff or unnatural expressions, color mismatches between the neck and face, and odd or flickering backgrounds. Of course, AI technology is constantly improving, and these flaws will become increasingly difficult to detect, but understanding these potential weaknesses can still help us stay vigilant. When faced with any suspicious digital content, a little extra observation and verification can greatly reduce the risk of being deceived.
If you are unfortunate enough to fall victim to a scam, stay calm and act promptly. If fraudulent content appears on social media, contact the platform immediately to request its removal. If the scam seriously threatens your reputation or involves a crime, report it to law enforcement (for example, through the FBI's IC3) and seek legal recourse. Even if you have not lost money, report the scam to consumer protection agencies such as the FTC to help them track and combat the crime.
The wave of artificial intelligence is sweeping the world at unprecedented speed. It is a powerful engine of social progress, but it also hands criminals new tools. In this digital age, fraudulent tactics are no longer crude scripts; they have evolved into highly customized, hard-to-identify "digital disguises." In this silent war against fraud, the most fundamental line of defense is each of our own hearts, because the essence of fraud is to "attack the heart first." The Bible tells us, "Above all else, guard your heart, for it is the source of your life" (Proverbs 4:23). Vigilance, rationality, and caution are the best "spiritual shield." May God bless us, so that when faced with temptation and deception, we can all put on spiritual armor and firmly say "no" to scammers.
