
AI weaponization becomes a hot topic on underground forums

WBOY | 2024-03-29


Conventionally, a drive-by attack is defined as the automatic download of malicious files from a compromised website without user interaction. In the majority of cases reviewed during the reporting period, however, user action was involved, and it facilitated initial access in more than 30% of incidents.

Threat actors use AI to automate attacks

The use of artificial intelligence to accelerate these attacks is receiving increasing attention on major cybercrime forums, and interest in weaponizing the technology is growing.

In the specialized AI and machine learning sections of these forums, researchers found criminal alternatives to mainstream chatbots, such as FraudGPT and WormGPT, along with discussions hinting at their use to develop simple malware and distributed denial-of-service (DDoS) tooling.

AI systems can now clone voices from short samples, and deepfakes on video calls are giving threat actors another social engineering lever. Researchers have also observed more threat actors automating individual stages of their attacks, or the entire attack chain, particularly in exploitation of the Citrix Bleed vulnerability (CVE-2023-4966).

However, while attackers are taking advantage of AI-driven automation, it is also delivering a quantum leap in enterprises’ defense capabilities.

Criminals Prioritize Financial Theft in 2023

Financial theft stood out as the primary motive for criminals in 2023, driving 88% of customer incidents. Ransomware activity increased by 74%, with ransomware groups naming a record 4,819 victim organizations on data-leak sites; LockBit alone accounted for more than 1,000 of them.

ReliaQuest also highlights significant threats from suspected state-sponsored actors using so-called living-off-the-land (LotL) techniques. In these incidents, threat actors hide their activity through defense evasion, such as clearing PowerShell and other event logs. In an intrusion observed in April 2023, a Chinese state-backed threat group relied primarily on LotL commands to blend into the company's environment, and its covert LotL activity allowed it to retain access for more than a month.
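The log clearing described above leaves its own trail: Windows records Event ID 1102 when the Security log is cleared and Event ID 104 when other event logs, including the PowerShell operational log, are wiped. The sketch below is a minimal illustration of hunting for those events with the built-in wevtutil utility; it is not taken from the ReliaQuest report, and the log names and event count are assumptions to adapt to your environment.

```python
# Minimal sketch: flag recent log-clearing events that often accompany
# living-off-the-land (LotL) defense evasion. Assumes a Windows host and
# the built-in wevtutil utility; Event IDs 1102 (Security log cleared)
# and 104 (another event log cleared) are standard, but the list of logs
# checked here is an illustrative assumption.
import subprocess

CHECKS = [
    ("Security", "*[System[(EventID=1102)]]"),  # the audit log was cleared
    ("System",   "*[System[(EventID=104)]]"),   # an event log was cleared
]

def query_log(log_name: str, xpath: str, max_events: int = 5) -> str:
    """Return the newest matching events as text, or an empty string."""
    cmd = [
        "wevtutil", "qe", log_name,
        f"/q:{xpath}", "/f:text", f"/c:{max_events}", "/rd:true",
    ]
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
        return result.stdout.strip()
    except (OSError, subprocess.TimeoutExpired):
        return ""

if __name__ == "__main__":
    for log_name, xpath in CHECKS:
        hits = query_log(log_name, xpath)
        if hits:
            print(f"[!] Possible log clearing in '{log_name}':\n{hits}\n")
        else:
            print(f"[ ] No recent log-clearing events found in '{log_name}'.")
```

In practice, a detection like this would run continuously from a SIEM or EDR rule rather than as an ad-hoc script, but the underlying signal is the same.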

"As threats continue to evolve, defenders must remain agile, using AI and automation to keep up with the latest attack techniques. Time is the enemy of cybersecurity. To proactively protect against these risks, companies should maximize and visibility beyond the endpoint, leveraging AI and automation to better understand and use your own data and equip your teams with the latest threat intelligence. Taking this approach, we expect to leverage our AI and automation over the next year Capable customers will be able to contain threats in 5 minutes or less," said Michael McPherson, ReliaQuest's senior vice president of technology operations.

Cybersecurity in 2024 will be significantly shaped by generative AI, malicious AI models, and widespread automation in cyberattacks, all of which expand the capabilities of threat actors. Automated, dynamic playbooks will give even less-skilled attackers sophisticated ways to accelerate operations, shortening the time from breach to impact.


Statement: This article is reproduced from 51cto.com. If there is any infringement, please contact admin@php.cn to request deletion.