Gemini 3 Pro Breached? Korean Report Claims Google's AI Can Be Coaxed Into Discussing Bioweapons

JAKARTA - Google has again been praised for Gemini 3 Pro's increasingly impressive capabilities, but a new report from South Korea has struck that narrative with a note of alarm.

According to Maeil Business Newspaper, an AI security company named Aim Intelligence claims to have jailbroken the model, provoking it into providing answers on how to create biological threats and improvised weapons, content that should sit behind the iron walls of its safety systems.

Aim Intelligence said that in a controlled test environment, Gemini 3 Pro not only slipped but answered in detail on topics that a model meeting strict safety standards should have blocked entirely.

The report also stated that, after being provoked with additional prompts, the model produced a bizarre presentation titled 'Excused Stupid Gemini 3,' a kind of self-deprecating output that was never requested.

None of the outputs or methods have been published, so neither the credibility of the claims nor the reproducibility of the experiment can be verified. Without the prompts or technical documentation, the results remain one-sided claims that need hard evidence before they can be considered valid.

Still, the allegations strike a sensitive point in the AI world: the smarter a model becomes, the harder it is to ensure it stays on a safe path. Recent incidents range from models answering dangerous questions when those questions are disguised as poetry, to AI-powered gadgets that accidentally serve content inappropriate for children. These cases show that guardrails that look neat on paper can still give way when a model is pushed in unexpected ways.

Google itself positions Gemini 3 Pro as one of its most advanced products, with a heavy emphasis on safety. But the report from South Korea adds to the pressure: the public wants real evidence that the system is truly safe, even under the most aggressive scenarios, not just during a curated press demonstration.

For now, there are more questions than answers. Google needs to offer an explanation, and the researchers claiming this discovery must be transparent if they want to be trusted. The world of AI is accelerating, but public trust can vanish far faster when sensitive issues like this are left hanging.