The Growing Problem of Social Engineering in APAC and How GenAI Can Help
Social engineering, the malicious practice of tricking people into sharing private information, is becoming a greater threat because of generative AI (GenAI). With GenAI tools that can create convincing forgeries in a fraction of the time traditional methods require, bad actors can now target many victims at once.
In the Asia-Pacific (APAC) region specifically, social engineers have been wreaking havoc on governments and banks, supported by easy-to-use GenAI tools. To protect against these advanced attacks, businesses need equally (or more) advanced technologies.
Social Engineering in APAC: Cybercrime Statistics
A recent survey of over 4,000 IT leaders found that 78% reported experiencing at least one breach in the past year, with governments and banks among the most common targets. Four in five of the breached organizations reported at least four breaches, with losses averaging over $1 million USD per incident.
Highlighting the cross-border nature of the social engineering threat, Vietnam’s 44% year-over-year surge in cybercrime echoes a trend threatening the entire APAC region. This further emphasizes the urgent need for innovative solutions to combat the rise of increasingly sophisticated attacks.
Countries Hardest Hit
- China suffered the most stolen data at 39%, largely through breaches of government systems and large enterprises.
- India followed at 22%, with hackers stealing millions of bank customer files and key infrastructure data.
- Over 300 million Indonesian identity records were taken through hacked admin accounts.
- Thailand has reported a rise in hacks stemming from the sale of stolen credentials on the darknet.
- Malaysia plans to train 20,000 more cybersecurity professionals by 2025 to close a talent gap that, in the meantime, leaves a wider attack surface open to social engineering attempts.
Reasons for Rising Numbers
There are a variety of causes behind the climbing social engineering attack rates in APAC. Among them is the fact that many of the region’s financial services have moved online in recent years. Although the introduction of an online, open banking system has enabled substantial economic growth in the region, such a system also broadens the potential cyberattack surface.
In addition, many accounts still rely on simple passwords without multifactor authentication (MFA), making them easier to compromise.
Many people in the region have limited access to (or understanding of) technology and struggle to identify social engineering attacks. Criminals capitalize on that knowledge gap, using hard-to-detect machine learning (ML) techniques and other powerful tools to sharpen their social engineering.
High rates of smartphone ownership and social media participation, especially among younger and less tech-savvy people, make it easier for social engineers to craft targeted attacks.
More employees working remotely also raises concerns, from unsafe public Wi-Fi networks to unaddressed remote security loopholes. The number of workplace entry points has grown substantially as the adoption of collaboration tools and cloud services outpaces that of improvements to protection plans.
Common Examples of Social Engineering: How GenAI Helps and Hinders Bad Actors
GenAI is a powerful tool—whether in your hands or in the hands of cybercriminals—and can play a decisive role in any social engineering process. The key to organizational security is to have a more advanced and more robust understanding of GenAI than your attackers.
GenAI in Phishing Attempts
Phishing involves impersonating trusted individuals or brands through fake emails, texts, or other messages. Oftentimes, the impersonator will send a message containing malicious links or attachments that lead to the unwanted installation of remote access tools, malware, or ransomware.
- Bad actors can use large language models (LLMs) to rapidly create convincing texts in whatever style and tone the impersonator wants, adding credibility to their deception.
- Cybersecurity teams can use similar AI technologies to counter these threats by developing advanced detection systems that analyze the writing style and other metadata of such communications—with little to no human direction required.
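As a rough illustration of the defensive side, the sketch below shows how a team might score incoming messages for phishing-style language with an off-the-shelf text classifier. The toy training data and the 0.7 threshold are assumptions for demonstration only; a production detector would train on a large labeled corpus and fold in metadata signals such as sender reputation and link targets.

```python
# Minimal illustrative sketch: a text classifier that scores messages for
# phishing-style language. The toy training data is purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Your account has been suspended. Verify your password immediately.",
    "Urgent: confirm your banking details within 24 hours or lose access.",
    "Hi team, attaching the agenda for Thursday's planning meeting.",
    "Lunch is on me Friday if the release goes out on time.",
]
train_labels = [1, 1, 0, 0]  # 1 = phishing-like, 0 = benign

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

incoming = "Please verify your password now to avoid account suspension."
score = model.predict_proba([incoming])[0][1]
print(f"Phishing likelihood: {score:.2f}")
if score > 0.7:  # threshold would be tuned against real traffic
    print("Flag for review and warn the recipient.")
```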
GenAI in Pretexting Attempts
Pretexting is a form of social engineering attack where the attacker creates a fabricated scenario (the pretext) to engage with their victim in a manner that leads the victim to disclose information or perform actions they wouldn’t ordinarily do.
This sort of attack relies heavily on the attacker’s ability to appear legitimate and trustworthy—or to stoke fear—often by assuming an identity or role that necessitates the requested information or action.
- Bad actors can use GenAI to enhance the believability of their pretexts, synthesizing realistic-looking documents, emails, or even voice and video calls to support the attacker’s fabricated narrative.
- Cybersecurity teams can use GenAI to counter pretexting by developing automated systems that recognize or flag potential social engineering attempts by, for example, analyzing communication patterns.
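To make the "communication patterns" idea concrete, here is a minimal rule-based sketch. The message fields, trusted-domain list, and scoring thresholds are illustrative assumptions rather than any real product's schema.

```python
# Minimal rule-based sketch of the pattern checks an automated system might
# run on an inbound request. Field names and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class Message:
    sender_domain: str
    claimed_role: str          # e.g., "IT support", "CFO"
    requests_credentials: bool
    prior_messages_from_sender: int
    urgency_keywords: int      # count of words like "urgent", "immediately"

def pretext_risk(msg: Message, trusted_domains: set[str]) -> int:
    """Score a message 0-4; higher means more likely to be a pretext."""
    score = 0
    if msg.sender_domain not in trusted_domains:
        score += 1                       # unfamiliar or spoofed domain
    if msg.requests_credentials:
        score += 1                       # asks for secrets outright
    if msg.prior_messages_from_sender == 0 and msg.claimed_role in {"CFO", "IT support"}:
        score += 1                       # first contact, high-authority claim
    if msg.urgency_keywords >= 2:
        score += 1                       # pressure tactics
    return score

msg = Message("it-helpdesk-support.example", "IT support", True, 0, 3)
if pretext_risk(msg, {"corp.example"}) >= 3:
    print("Hold message and alert the security team.")
```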
GenAI in Baiting Attempts
Baiting is a type of social engineering attack that involves offering something enticing (bait) to a victim in exchange for their login credentials or other sensitive information. Examples of bait include free software licenses, promotional rates on credit cards, or some other form of limited-time-only benefit that demands immediate attention.
This kind of attack can occur both online (through malicious email links or downloads) and in person (through infected USB drives or face-to-face deals).
- Bad actors can use GenAI to create more convincing bait, potentially including video or voice deepfakes from trusted individuals. By analyzing vast amounts of data on user behavior and preferences, GenAI also enables them to tailor their baits to each specific target, increasing the likelihood of success.
- Cybersecurity teams can use GenAI to counteract baiting attacks by implementing advanced detection routines that learn and adapt to new threats in real time. These AI-driven systems can analyze patterns and anomalies in communications and proactively take action, like notifying the would-be victim before it’s too late.
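One hedged way to picture such an adaptive system is an unsupervised anomaly detector over simple message features, as sketched below. The features and the synthetic baseline traffic are assumptions for illustration; a real deployment would learn from live mail flow and retrain continuously.

```python
# Illustrative sketch only: an unsupervised anomaly detector over simple
# message features. The feature set and synthetic baseline are assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic "normal traffic" baseline: few links, few offer keywords, office hours.
baseline = np.column_stack([
    rng.poisson(1, 500),          # links per message
    rng.poisson(0.2, 500),        # attachments
    rng.poisson(0.1, 500),        # bait keywords ("free", "limited time", ...)
    rng.integers(8, 18, 500),     # hour of day sent
])

detector = IsolationForest(contamination=0.02, random_state=0).fit(baseline)

suspicious = np.array([[6, 1, 4, 2]])   # link-heavy, offer-laden, sent at 2 a.m.
if detector.predict(suspicious)[0] == -1:
    print("Anomalous message: notify the recipient before they take the bait.")
```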
GenAI in Quid Pro Quo Attempts
Quid pro quo (which is Latin for “something for something”) attacks are a type of social engineering wherein the attacker promises a benefit in exchange for information. This could be as straightforward as promising IT services in exchange for admin permissions or more complex, like offering free software in exchange for installing files that secretly contain malware.
- Bad actors can use GenAI, specifically text and image generators, to create more believable and contextually relevant offers, like rebates, discounts, or gifts. Coupled with personalized messages that resonate with specific targets, this can significantly increase the chances of successful deception.
- Cybersecurity teams can use GenAI to simulate potential quid pro quo scenarios, training their systems and personnel to recognize and respond to these threats more effectively. GenAI can also help in creating cybersecurity rules and policies that the business’s system can automatically enact when certain situations arise.
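The sketch below illustrates what machine-enforceable policy rules of this kind could look like. The rule names, conditions, and actions are hypothetical examples, not a standard policy format.

```python
# Sketch of automatically enforceable policy rules. Rule conditions and
# actions here are illustrative assumptions.
from typing import Callable

Rule = tuple[str, Callable[[dict], bool], str]

RULES: list[Rule] = [
    ("offer-for-credentials",
     lambda m: m["contains_offer"] and m["requests_credentials"],
     "quarantine"),
    ("unsolicited-software",
     lambda m: m["has_executable_attachment"] and not m["known_sender"],
     "require_it_approval"),
]

def enforce(message: dict) -> list[str]:
    """Return the actions triggered by a message under the policy rules."""
    return [action for name, condition, action in RULES if condition(message)]

msg = {"contains_offer": True, "requests_credentials": True,
       "has_executable_attachment": False, "known_sender": False}
print(enforce(msg))   # ['quarantine']
```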
GenAI in Hoaxing
Hoaxing is a type of social engineering that involves bad actors spreading false information or creating fake threats to deceive individuals or organizations. This method heavily relies on the attacker’s ability to manipulate the victim’s fear, surprise, or other emotions in order to get them to take a certain action, like exposing confidential information or submitting payment.
- Bad actors can use GenAI to create incredibly realistic images, videos, and even voice recordings that support their false claims. This technology allows them to tailor the hoax to their target audience—making it more difficult to tell fact from fiction—and can even automate the distribution of their hoaxes.
- Cybersecurity teams can use GenAI to detect and neutralize hoaxes by analyzing patterns and inconsistencies in the content that might not be easily noticeable to the human eye. GenAI tools can sift through vast amounts of data at high speeds and generate recommendations or alerts whenever potential hoaxes are identified.
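As a simple, hedged illustration of consistency checking, the sketch below compares an inbound claim against a small set of trusted reference statements using word overlap. The trusted feed and the similarity threshold are assumptions; a production system would rely on semantic embeddings and much richer corroborating sources.

```python
# Purely illustrative sketch: cross-check an inbound claim against trusted
# reference statements via token overlap. Data and threshold are assumptions.
def token_overlap(a: str, b: str) -> float:
    """Jaccard similarity over lowercase word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

trusted_feed = [
    "scheduled maintenance window for the payments api on saturday",
    "quarterly security patch rollout completed with no incidents",
]

claim = "URGENT: the payments API was breached, wire your response fee immediately"
corroboration = max(token_overlap(claim, ref) for ref in trusted_feed)

if corroboration < 0.3:   # weakly supported, high-pressure claim
    print(f"Possible hoax (corroboration={corroboration:.2f}): route to analysts.")
```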
Challenges Facing APAC’s Financial Services Industry
The rise of social engineering in APAC’s financial sector continues to highlight vulnerabilities in the region. The rapid advancement in digital payments and online banking, for example, has inadvertently increased the total attack surface for social engineers, contributing to a 449% annual rise in hacking incidents.
The use of malicious bots and ransomware further complicates the situation. Only 41% of APAC financial institutions are confident in their ability to thwart these threats. Couple that with the region’s reliance on traditional verification methods like one-time passwords, and the need for improvements becomes easier to see.
In 2023, India experienced a substantial rise in UPI-related frauds and QR code scams, with losses exceeding 302.5 billion rupees ($3.6 billion USD). As criminals gain greater technical prowess, partly through openly available GenAI tools, the only way to stop them will be by gaining a better understanding of the technologies driving their malicious actions.
How to Protect Your Business from GenAI-Enabled Social Engineering
Fighting social engineering requires a step up in technology, policies, and education. APAC’s financial institutions should adopt a zero-trust architecture (ZTA) that assumes every network access request could potentially be malicious. ZTA, supported by AI-empowered security systems, would go a long way toward safeguarding sensitive information.
AI systems are instrumental in analyzing activity data and detecting behavioral anomalies in online user activity. Flagging actions that deviate from a user's established patterns as potential security risks is just one proactive way to limit the damage of social engineering attacks.
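A minimal sketch of that idea, assuming only a per-user history of daily login counts, might look like the following. The z-score threshold and the step-up response are illustrative choices, not prescriptions.

```python
# Minimal sketch of behavioral anomaly detection on per-user activity,
# assuming a history of daily login counts. Thresholds are illustrative.
import statistics

def is_anomalous(history: list[int], today: int, z_threshold: float = 3.0) -> bool:
    """Flag today's activity if it deviates strongly from the user's baseline."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0   # avoid division by zero
    return abs(today - mean) / stdev > z_threshold

login_history = [4, 5, 3, 6, 4, 5, 4, 5]        # typical daily logins
if is_anomalous(login_history, today=42):
    print("Unusual activity: require step-up authentication and alert the SOC.")
```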
To further fortify defenses against these sophisticated threats, a collaborative effort involving public-private partnerships is needed. Pooling resources and intelligence across governmental bodies, the financial sector, and academic research communities would significantly enhance the APAC region's collective cybersecurity posture.
Corporate education also plays a pivotal role in mitigating the risks associated with GenAI-enabled social engineering. Continuous awareness programs are essential to keep both employees and customers informed about the evolving landscape of cyberthreats. These programs should focus on equipping individuals with the knowledge to identify and respond to potential social engineering tactics, ultimately fostering a culture of cybersecurity mindfulness.
In the post-COVID era of remote and hybrid work, adopting principle-based approaches to identity verification can also help. Regular vulnerability assessments, third-party due diligence, and simulated cyberattack exercises (such as red teaming) are critical for identifying and addressing security weaknesses, especially across supply chains.
Moreover, the adoption of continuous improvement philosophies for cybersecurity measures—through regular updates and preparedness to tackle novel, unforeseen threats—is vital in staying ahead of cybercriminals. Global coordination among cybersecurity entities can enhance the speed at which intelligence and solutions are shared.
Next Steps in Combatting Social Engineering in APAC
The fight against GenAI-enabled social engineering attacks demands a comprehensive and proactive adoption of GenAI-literate cybersecurity policies. By turning to cutting-edge technologies, fostering regional collaboration, and promoting cybersecurity literacy, the APAC financial sector can significantly enhance its defenses against these ever-evolving threats.
Through such concerted efforts, banking and financial institutions could not only protect their assets and customer data but also build a more secure and trustworthy digital ecosystem for decades to come.