Hackers from China and Russia use Gemini to attack the world, says Google

Illustration of AI-powered cyber threats targeting global infrastructure.

It is well known in the cybersecurity world that North American and European companies talk a great deal about cybercriminal groups with alleged links to “foreign” or “eastern” governments. Joining that chorus, Google’s threat intelligence team now details dozens of malicious groups that fall under this umbrella.

According to the company, the 57 cybercriminal groups are linked to or supported by countries such as China, Iran, North Korea, and Russia. Moreover, the observed groups have used Google’s artificial intelligence, Gemini, in their attack operations.

“Threat actors are experimenting with Gemini to enable their operations, finding productivity gains but not yet developing novel capabilities. At present, they use AI mainly for research, troubleshooting code, and creating and localizing content,” says the GTIG (Google Threat Intelligence Group) in a report.

This type of state-sponsored criminal operation has a name: APT, short for Advanced Persistent Threat.

In addition to attacking organizations and crypto wallets for financial gain, APTs are also known for operations that aim to damage infrastructure, cripple companies, and steal engineering secrets.

Example of a jailbreak prompt publicly available on GitHub

Use of Artificial Intelligence

Google researchers open the report with the following statement:

“In cybersecurity, AI is poised to transform digital defense, empowering defenders and enhancing our collective security. Large language models (LLMs) open new possibilities for defenders, from complex telemetry analysis to secure coding, vulnerability discovery, and streamlining operations. However, some of these same AI capabilities are also available to attackers, leading to concerns about the potential for AI to be misused for malicious purposes.”

Among the main findings, GTIG states that APTs rely on basic measures or publicly available jailbreaks in their attempts to circumvent Gemini’s security controls.

Cybercriminals are also still in an exploratory phase with artificial intelligence, and with Gemini specifically. The goal is to enhance operations and achieve productivity gains, but no novel capabilities have emerged yet. To date, AI has been used successfully only for research, troubleshooting code, and creating and translating content.


Among the research topics, the APTs searched for compromised infrastructure, free hosting providers, and software vulnerabilities, as well as help with payload development, malicious scripting, and evasion techniques.

The cybercriminal groups allegedly financed by Iran were the heaviest users of Gemini, with Chinese and Russian groups behind them.

Throughout the report, Google credits the protective measures built into its AI model, arguing that these safeguards are precisely why the adversaries were not more successful.

Finally, the company also mentions attempts to abuse other Google products, mainly through new phishing techniques against Gmail, data theft, the development of an infostealer targeting Google Chrome, and the hijacking of Google authentication codes. All of these attempts reportedly failed.

“It should be noted that North Korean actors also used Gemini to write cover letters and search for jobs, activities that likely support North Korea’s efforts to place clandestine IT workers at Western companies,” said the GTIG.

“A North Korea-backed group used Gemini to draft cover letters and proposals for job postings, researched average salaries for specific jobs, and asked about jobs on LinkedIn. The group also used Gemini for information about overseas employee exchanges. Many of the topics would be common to anyone researching and applying for jobs.”

Evil AI

One clear finding Google reveals is that cybercriminal forums already offer and sell artificial intelligence tools designed specifically for crime, built on top of legitimate language models.

These “evil AIs” go by names such as WormGPT, WolfGPT, EscapeGPT, FraudGPT, and GhostGPT. Their capabilities include creating phishing scams, templates for corporate attacks, and designing fake websites.

As for the countries whose APTs engaged with its artificial intelligence, Google states there are more than 20, with Iran and China as the main ones.

Thus, the GTIG attributes more than 10 of these cybercriminal groups to Iran, more than 20 to China, nine to North Korea, and three to Russia.

Breaking Gemini

Jailbreaking features prominently in the report. The term refers to attacks that “break” the model and allow it to be used without any protective measures; in short, breaking the artificial intelligence so that it does whatever it is told.


“Threat actors copied and pasted publicly available prompts and appended small variations to the final instructions (for example, basic instructions to create ransomware or malware). Gemini responded with safety fallback responses and declined to follow the threat actor’s instructions,” the company said.

As an example, the research team explains that an APT group copied publicly available prompts into Gemini and appended basic instructions for coding tasks. These tasks included encoding text from a file and writing it to an executable, and writing Python code for a distributed denial-of-service (DDoS) tool. In the first case, Gemini provided the Python code to convert Base64 to hexadecimal, but returned a safety-filtered response when the user entered a follow-up prompt requesting the same code in VBScript.
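
For context, the Base64-to-hexadecimal conversion mentioned above is a routine, benign task on its own. A minimal Python sketch of what such code might look like (our own illustration; the report does not publish the code Gemini actually produced):

import base64

def base64_to_hex(b64_text: str) -> str:
    # Decode the Base64 string, then render the raw bytes as hexadecimal
    return base64.b64decode(b64_text).hex()

# "hello world" encoded in Base64 (an illustrative input, not taken from the report)
print(base64_to_hex("aGVsbG8gd29ybGQ="))  # prints 68656c6c6f20776f726c64

It is only in combination with the follow-up steps, such as embedding the encoded text in an executable, that a task like this becomes part of a malicious workflow.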

What was researched

To understand what someone researches is to understand them. With that in mind, the Google Threat Intelligence Group revealed the topics most searched for by the cybercriminal groups:

Iran: research on experts, international defense organizations, government organizations, and topics related to the Iran-Israel conflict
North Korea: research on companies in various sectors and geographical regions, the U.S. military and its operations in South Korea, and free hosting providers
China: research on the U.S. military, U.S.-based IT service providers, access to U.S. public intelligence databases, target network ranges, and determination of target domain names

To conclude the report, Google notes that it designs its AI systems “with robust security measures and strong safety guardrails, and we continuously test the security and safety of our models to improve them. Our policy guidelines and prohibited use policies prioritize safety and responsible use of Google’s generative AI tools. Google’s policy development process includes identifying emerging trends, thinking end-to-end, and designing for safety. We continuously enhance safeguards in our products to offer scaled protections to users across the globe.”

Full details of the findings covered here can be found in the Adversarial Misuse of Generative AI report.
