AI in the driver’s seat: Are we ready for the responsibility?
- #AI
Melgorithm
By Mel Migriño
I had an eventful trip to Singapore, balancing work responsibilities with personal time off. It was indeed good to be back in a place that is close to my heart: being able to see good friends again, do the morning stroll while thinking through what I would like to do in the future, and drive through the night façade with all the bright colors giving off a different vibe; practically just being you, away from the noise of the community.
During this trip, I happened to chat with one of the entrepreneurs who attended the Global Anti-Scam Summit Asia (GASS) 2025. He shared the strong influence and uptake of AI in his company, and how this is making waves: reaping high social media engagement and improving their customer journey, thus yielding a good push in terms of revenue.
Indeed inspiring, and the appetite to grow this adoption is huge while trying to maintain the “just enough” human element that makes every touchpoint personal. Of course, it may seem easy and highly engaging on the surface, but behind it is a colossus of technology and data that boosts its power.
For all its technical complexity, artificial intelligence (AI) is ultimately a reflection of humanity. We are the ones who design the algorithms, feed the data, and set the parameters that govern AI’s actions. This makes the responsible use of AI not just a technical challenge, but a profound human one.
As these systems become more powerful, they raise critical questions we must answer together: How do we ensure fairness and prevent bias? How do we uphold privacy in an age of constant and uncontrollable data collection? And how do we preserve human autonomy and creativity when machines can do so much? These are times of both excitement and fear.
These aren’t abstract questions to be debated in a distant future; they are the core ethical dilemmas of this digital age. To build a world where AI serves humanity, we must first look inward and define the values we want to encode into the technology we create.
The promise of AI is immense, offering the potential to tackle some of humanity’s most complex challenges—from accelerating medical breakthroughs, boosting efficiency in utilities, and optimizing global resource distribution to enhancing our creativity and making our daily lives more efficient. Yet, this utopian vision is not an inevitable outcome.
To ensure AI genuinely serves humanity, we must move beyond passive acceptance and embrace a proactive approach. This involves building robust ethical frameworks, implementing clear and fair policies, and fostering a culture of transparency and accountability. It is this intentional work—the deliberate and collective effort to align these powerful systems with our deepest values—that will determine whether AI becomes a tool for widespread human flourishing or a source of new and unforeseen challenges.
Central to this effort is the need to address the inherent biases present in AI systems. Because these models are trained on vast datasets often reflecting existing societal prejudices, they can unwittingly learn and amplify human biases related to race, gender, and socioeconomic status. This “garbage in, garbage out” dynamic means that without careful oversight, an AI designed for tasks like hiring or loan applications could perpetuate and even worsen systemic inequities.
Let us understand in simple terms what each component in the AI Governance Principles of the AI Verify Testing Framework means. If you are a fan of COBIT, this is something you will enjoy reading and will surely find useful.
Transparency provides visibility into the intended use and impact of the AI system. It complements existing privacy and data governance measures. It is crucial to design and implement an in-house policy, aligned with governing regulations, on communication to consumers and relevant stakeholders that articulates the principles for transparency, such as defining the purpose and context of communication to determine how and what to communicate.
Explainability is a principle whose essence is sometimes overlooked. It is about ensuring that AI-driven decisions can be explained and understood by those directly using the system to enable or carry out a decision, to the extent possible. The degree to which explainability is needed also depends on its objective, including the context, the needs of stakeholders, the types of understanding sought, the mode of explanation, as well as the severity of the consequences of erroneous or inaccurate output on human beings.
Repeatability is essential in achieving system resilience. With software systems, the ability to reproduce an outcome or error is key to identifying and isolating the root cause. This principle focuses on logging capabilities to monitor the AI system, tracking the journey of a data input through the AI lifecycle, and reviewing the input and output of the AI system.
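To make this less abstract, here is a minimal sketch in Python of what such logging could look like. Everything here is illustrative and hypothetical, not part of the AI Verify framework itself: the idea is simply that each decision is recorded with the model version and a fingerprint of the input, so an outcome can later be reproduced and its root cause isolated.

```python
import json
import hashlib
from datetime import datetime, timezone

# Hypothetical audit logger: records everything needed to reproduce a prediction.
def log_prediction(log_file, model_version, features, output):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,                # which model produced this
        "input_hash": hashlib.sha256(                  # fingerprint of the input
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "features": features,                          # the raw input itself
        "output": output,                              # what the model returned
    }
    with open(log_file, "a") as f:                     # append-only audit trail
        f.write(json.dumps(record) + "\n")
    return record

# Example: one loan decision logged end to end (names and values are made up).
record = log_prediction("audit.jsonl", "credit-model-v1.2",
                        {"income": 52000, "tenure_months": 18}, "approve")
```

Replaying the logged features against the logged model version should reproduce the logged output; if it does not, the discrepancy itself points to the root cause.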
Safety means ensuring that AI does not result in harm to humans and that measures are put in place to mitigate harm. Safety is achieved by reducing risks to a tolerable level. The higher the perceived risks of a system causing harm, the higher the demands on risk mitigation.
Security is the protection of AI systems, their data, and the associated infrastructure from unauthorized access, disclosure, modification, destruction, or disruption. AI systems that can maintain confidentiality, integrity, and availability through protection mechanisms that prevent unauthorized access and use may be said to be secure.
Robustness requires that AI systems maintain their level of performance under any circumstances, including potential changes in their operating environment or the presence of other agents, human or artificial, that may interact with the AI system in an adversarial manner.
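As a toy illustration of this idea (a deliberately simplified sketch, not a real robustness test), consider a threshold classifier and a check that small perturbations of the input do not flip its decision:

```python
# Hypothetical robustness probe for a toy threshold-based classifier.
def classify(score, threshold=0.5):
    """Toy model: approve when the score clears the threshold."""
    return "approve" if score >= threshold else "deny"

def is_robust(score, epsilon=0.01):
    """Decision is robust if it stays the same under +/- epsilon perturbations."""
    baseline = classify(score)
    return all(classify(score + d) == baseline for d in (-epsilon, epsilon))

stable = is_robust(0.8)      # far from the decision boundary: stable
fragile = is_robust(0.505)   # near the boundary: a tiny nudge flips it
```

Real robustness testing probes far richer perturbations (noise, distribution shift, adversarial inputs), but the underlying question is the same: does the output stay stable when the input wobbles?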
Fairness means that AI should not result in unintended and inappropriate discrimination against individuals or groups on the basis of gender, race, political beliefs, religion, or any other sensitive attribute. The AI system owner, with appropriate stakeholder consultation, decides on the fairness parameters, especially for sensitive features.
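One simple way to quantify this, and only one of many possible fairness metrics, is demographic parity: comparing approval rates across groups. The sketch below uses made-up data purely for illustration.

```python
from collections import defaultdict

# Hypothetical loan decisions as (group, decision) pairs; the data is invented.
decisions = [
    ("group_a", "approve"), ("group_a", "approve"), ("group_a", "deny"),
    ("group_b", "approve"), ("group_b", "deny"),    ("group_b", "deny"),
]

def approval_rates(decisions):
    """Approval rate per group: approvals / total decisions for that group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, decision in decisions:
        totals[group] += 1
        if decision == "approve":
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
# Demographic parity gap: a large gap flags potential unintended discrimination.
gap = max(rates.values()) - min(rates.values())
```

What counts as an acceptable gap, and which metric to use at all, is exactly the kind of decision the system owner must make with stakeholder consultation.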
Accountability is about having clear internal governance mechanisms for proper management oversight of the AI system’s development and deployment. It is imperative to identify acceptable, unacceptable, and illegal uses of the AI model and its output, including criteria for the AI model.
Quite a number of principles to remember, but these are all essential for embarking on and implementing projects and initiatives powered by AI. At the bottom layer of these principles are the governing policies and standards that ensure successful implementation and alignment with industry regulations. These principles are monitored and enhanced through appropriate oversight and control measures, with humans in the loop at the appropriate junctures.
In the end, while AI is poised to be the engine of innovation, humanity remains at the steering wheel. The true measure of our success won’t be in how powerful our algorithms become, but in how wisely and responsibly we use them in achieving our desired outcomes. Our continued vigilance, ethical consideration, and commitment to human values are what will ensure that this transformative technology elevates society rather than undermines it.
The human element is not a bug to be fixed in the AI system, but its most crucial feature.
