DICT vs Grok: A ban that creates the illusion of safety
DECODED: TECH, TRUTH, AND THREATS
By Art Samaniego
DICT Secretary Henry Aguda’s move to ban Grok is not leadership. It is a reflex. It is the digital equivalent of banning cars because one driver caused an accident, instead of fixing road rules, enforcing licenses, and holding reckless drivers accountable.
Banning Grok does not solve the problem of AI misuse. It only proves that regulators are still more comfortable with prohibition than governance. The abuse of AI images is a failure of safeguards, accountability, and enforcement, not a failure of the tool's existence. Removing the tool sidesteps the harder work of building real regulatory muscle.
Aguda’s move also exposes a deeper weakness. He is treating AI as a dangerous platform instead of a permanent infrastructure. AI is no longer a product category that can simply be banned away. It is now embedded in search, phones, cameras, messaging, and government systems. Singling out Grok looks decisive on paper, but in reality it is symbolic, selective, and ultimately ineffective.
More troubling is the message it sends to developers and researchers. Instead of encouraging safer design, transparency standards, and audit requirements, the ban tells innovators that experimentation will be punished while other platforms quietly continue operating with similar risks.
The move also shifts blame away from platform responsibility. The real issue is not that Grok exists. The issue is that X released and operated it without sufficient consent protections, image abuse detection, victim reporting pathways, and independent oversight. A ban lets the platform escape deeper structural accountability.
There is also a policy contradiction. The Philippine government is pushing for AI regulation, not AI disappearance. The goal is responsible deployment, not digital erasure. By choosing a ban over proper governance, the DICT risks sidelining the country from serious global discussions on AI policy and accountability.
Most importantly, the victims of image abuse gain no lasting protection from this move. The same harm can be recreated using dozens of other AI tools, many of which are harder to monitor. The ban creates an illusion of safety while the ecosystem remains just as dangerous.