May 12, 2026 | 09:30

When AI lacks moral comprehension

Diep Linh

AI can handle many tasks, but people will likely retain a key role in ensuring the output is what was actually sought.

While AI may be generating more code than ever, software developers are at no risk of becoming obsolete. Rather, the profession is being reshaped around a new set of priorities: judgment, accountability, architecture, and the ability to work effectively with increasingly capable machines.

That was the key takeaway from a panel discussion at the recent DevDay 2026 in Da Nang, where technology executives and academics examined how AI is changing software engineering, team structures, and enterprise strategy. The panel reflected a growing consensus across the industry that coding itself is becoming easier and faster, but delivering reliable business outcomes remains a human responsibility.

Addressing the event, Mr. Sebastian Sussmann, CIO of Axon Active Vietnam, recalled a prediction he made a year ago: “I think AI will not replace a software developer, but will replace a software developer who is not using AI.” A year on, with AI tools now deeply embedded in development workflows, that statement still seems highly relevant.

Easier coding, harder responsibility

One of the themes emerging from the discussion was that AI is changing what companies pay developers to do. In the past, technical talent was often measured by the ability to produce code quickly and efficiently. Today, large language models can generate boilerplate functions, documentation, test scripts, and prototypes in seconds, reducing the scarcity value of coding alone.

What remains scarce is the ability to decide whether the output is correct, secure, maintainable, and aligned with business needs. Mr. Talal Dib, Managing Director of Open Web Technology Vietnam, said the measure of a developer has shifted accordingly. "Today, this has changed," he said. "Developers are no longer only responsible for producing code; they are responsible for the outcome."

That shift is significant, because AI-generated code does indeed require expert review. Models can hallucinate logic, introduce security vulnerabilities, misunderstand context, or produce solutions that technically function but fail commercially. Engineers are therefore moving into higher-value roles that involve validating outputs, designing systems, and taking ownership of results.

Mr. Dib said reviewing AI-generated code requires more than checking syntax. “You need to fully understand what was produced, because if there is a problem, you have to fix it, and you are responsible for it,” he added. In practical terms, this means the future software engineer may spend less time writing code from scratch and more time acting as reviewer, architect, product thinker, and risk manager.

For his part, Mr. Phan Van Binh, Deputy General Director of MGM Technology Partners Vietnam, noted that concerns over AI unpredictability should be kept in perspective, as human developers are hardly flawless themselves. “Humans are also inconsistent,” he told the gathering, arguing that unclear requirements often produce inconsistent outputs from junior and senior engineers alike. In some cases, AI may even be easier to manage because it can be guided through explicit constraints and detailed instructions.

Raising productivity and pressure

The second major theme was that AI is improving productivity, but not necessarily reducing workloads. Instead, it is shifting where time and pressure are concentrated inside organizations.

Mr. Tai Huynh, CEO and Founder of Kyanon Digital, said AI has accelerated delivery cycles to the point that managers now face heavier review burdens. Faster coding means teams can submit deliverables more quickly, but approvals, architecture checks, quality control, and integration still need human attention. That dynamic is becoming increasingly common in enterprise environments. AI can compress production timelines dramatically, but governance processes rarely move at the same speed. As a result, bottlenecks often shift from engineering output to managerial oversight.

Another pressure point is cost management. Many companies initially adopted cloud AI tools with enthusiasm, only to discover that token-based pricing can scale quickly if usage is not carefully controlled. Mr. Huynh described internal examples of paid AI accounts being consumed within days without generating equivalent business value.

That experience is pushing companies toward more disciplined deployment models. Instead of relying entirely on premium cloud systems, some are experimenting with hybrid strategies that combine paid frontier models for high-value tasks with open source or locally-hosted models for internal workloads. The goal is to improve return on investment while maintaining flexibility and data control.
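The hybrid strategy described above can be illustrated with a minimal routing sketch. The model names, the task attributes, and the token threshold below are all hypothetical assumptions for illustration, not details given by the panelists:

```python
# Hypothetical sketch of a hybrid model router: high-value or customer-facing
# tasks go to a paid frontier model, while routine internal workloads run on
# a locally hosted open-source model to control token spend.
from dataclasses import dataclass


@dataclass
class Task:
    name: str
    customer_facing: bool   # does the output reach a paying client?
    estimated_tokens: int   # rough size of the request


def route(task: Task, token_budget: int = 50_000) -> str:
    """Pick a model tier for a task.

    Customer-facing or unusually large tasks justify the premium model;
    everything else defaults to the cheaper local model.
    """
    if task.customer_facing or task.estimated_tokens > token_budget:
        return "frontier-cloud-model"   # per-token, pay-as-you-go pricing
    return "local-open-model"           # fixed hosting cost, data stays internal


print(route(Task("summarize internal meeting notes", False, 2_000)))
# -> local-open-model
```

The design choice here mirrors the goal stated in the article: improve return on investment while keeping flexibility and data control, by making the expensive tier an explicit, justified exception rather than the default.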

This reflects a broader market shift. The first wave of enterprise AI adoption focused on experimentation and novelty. The second wave, meanwhile, is focused on efficiency, governance, and measurable business outcomes. Companies are no longer asking only what AI can do. They are asking what it should do, what it costs, and where it genuinely creates value.

More accountable than machines

Despite optimism about AI's progress, panelists repeatedly emphasized that machines still cannot be subject to legal, commercial, or ethical accountability. Mr. Dib summarized the issue: "You can have a contract with a human being. A human being is a legal entity. You cannot have a contract with AI."

That distinction matters in real-world deployments. If software fails, customer data is exposed, or automated decisions cause financial damage, liability does not fall on the model. It falls on the company, its managers, its vendors, and its technical teams. AI may generate output, but it does not sign contracts, answer to regulators, or repair damaged client relationships.

For that reason, many enterprises are designing safeguards into AI systems rather than granting full autonomy. Confidence thresholds, human approval loops, fallback workflows, and audit trails are becoming standard features of responsible deployment. Mr. Dib argued that these controls need to be considered at the architecture stage, not bolted on later.

Professor Anand Nayyar, IoT Lab Director at Duy Tan University, said AI systems are improving rapidly but are not yet ready to replace human oversight. “AI is booming, and is changing,” he said, adding that the industry may still need several more years before much more predictable systems emerge.

Even then, trust will need to be earned through testing and verification. “We have to retest and rerun the checks before we finally deliver a working prototype,” he said. That caution reflects the gap between consumer excitement around AI and enterprise expectations, where systems must be reliable, secure, and repeatable.

He also argued that AI is narrowing the traditional divide between academia and industry. Researchers and companies now have access to many of the same foundational models, while sectors such as healthcare increasingly bring doctors, engineers, and academics together to train models and improve reliability. In his view, AI is not only changing jobs, but also changing who collaborates to build technology.

The overall conclusion was not that developers are disappearing, nor that AI is overhyped. Rather, the profession is entering a new phase in which coding becomes more automated while human value shifts upward. The developers most likely to thrive will be those who can combine technical depth with judgment, communication, and the ability to direct intelligent tools responsibly. 

