According to the Ministry of Public Security and the National Cybersecurity Association (NCA), losses from online scams in Vietnam in 2025 still amounted to hundreds of millions of US dollars, as Artificial Intelligence (AI), deepfake technology, and leaked personal data continue to make cyberattacks more sophisticated and widespread.
Against this backdrop, the Digital Trust in Finance 2026 Forum, themed “Building Digital Trust in Finance in the AI Era,” was held in Hanoi on May 12. The event was organized by the Digital Trust Alliance in collaboration with the Department of Cybersecurity and High-Tech Crime Prevention, the NCA and MoMo, under the patronage of the Ministry of Public Security, the State Bank of Vietnam, and the Ministry of Finance.
Speaking at the event, Deputy Minister of Public Security Pham The Tung emphasized that amid the rapid advancement of AI and digital technologies, digital trust is becoming a core foundation for the sustainable development of the digital economy and the modern financial system. Within this context, the Digital Trust in Finance initiative has been identified as a key strategy to realize the goal of user protection.
According to Mr. Tung, building digital trust is not solely the responsibility of regulators or businesses, but requires close coordination among ministries, sectors, financial institutions, technology companies, and society as a whole to establish a safe, reliable, and sustainable digital financial ecosystem in the AI era.
Representing the fintech sector, Mr. Nguyen Manh Tuong, Co-founder, Co-chairman and Chief Executive Officer of MoMo, said AI can help optimize decision-making, detect fraud, and enhance digital financial experiences, but trust remains the decisive factor driving user engagement and long-term loyalty. He noted that the adoption of AI must go hand in hand with principles of transparency, accuracy, fairness, and risk control.
AI intensifies both opportunities and cyber risks
During the discussion session, industry experts identified three major challenges emerging as AI increasingly takes on a decision-making role in financial transactions:
The first concerns how much authority should be delegated to AI to optimize operations while ensuring system security. The second relates to accountability when algorithms cause financial risks. The third revolves around the roadmap Vietnam needs to avoid falling behind in the “trust economy” race.
According to Mr. Tran Cong Quynh Lan, Deputy Chief Executive Officer of VietinBank, the rapid development of AI is making cyberattacks more sophisticated, harder to detect, and significantly more dangerous than before.
Sharing the same view, Mr. Vu Duy Hien, Deputy Secretary General and Chief of Office of the NCA, said AI is creating a clear paradox by simultaneously improving operational efficiency while enabling cybercriminals to increase the speed and sophistication of attacks.
“In the past, hackers needed extensive preparation and significant resources to carry out cyberattacks. Now, with AI, the preparation time has been drastically shortened, attack methods have become far more sophisticated, and detection has become much more difficult,” Mr. Hien said.
He also warned that the rise of deepfake technology is making impersonation scams increasingly dangerous. Within seconds, AI can generate fake videos or voices impersonating bank executives, financial consultants, or even family members to request money transfers. Fake websites and phishing emails can also now be created at a much faster pace than before.
From the perspective of state management, Dr. Nguyen Hong Quan, Deputy Director of the Department of Cybersecurity and High-Tech Crime Prevention under the Ministry of Public Security, said that while these risks create fear among the public, for businesses and regulators they represent a responsibility. "The responsibility is to reduce that fear," Mr. Quan stressed.
According to Mr. Quan, as technology and data continue to evolve, the right to personal data protection is increasingly recognized worldwide as a new fundamental right. In Vietnam, the legal framework has been established through Decree 13/2023/ND-CP, which took effect on July 1, 2023, and more recently, the Law on Personal Data Protection, which came into effect on January 1, 2026.
Under the regulations, clear boundaries have been established for how businesses can collect and utilize data. Specifically, the use of data must not threaten national security. At the same time, businesses are required to respect users’ data autonomy rights, including the right to know how their data is processed, within what scope, as well as the right to give or withdraw consent. In addition, companies bear responsibility for safeguarding customer data once collected, in order to prevent unauthorized access and data leaks.
According to Mr. Quan, the widespread sale of personal data online has become an increasingly alarming issue. Stolen data can enable fraudsters to personalize attack methods, making scam scenarios far more convincing and difficult to detect. “The responsibility of businesses is to strictly comply with personal data protection laws,” Mr. Quan stressed.
Multi-layer protection and human oversight
From a technical perspective, Mr. Thai Tri Hung, Senior Vice President and CTO of MoMo, said combating deepfakes cannot rely on a single layer of authentication. Effective measures currently include liveness detection techniques such as light, motion, and facial reflection analysis to distinguish real individuals from manipulated videos.
However, MoMo has adopted a “multi-layer protection” approach, in which decisions are not based solely on whether deepfake technology is involved, but also on behavioral patterns, transaction habits, cash flow, and relationships between senders and recipients. “It is not only about identity verification, but also about behavioral assessment,” Mr. Hung said.
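The multi-layer idea described above can be illustrated with a toy risk-scoring sketch: a liveness check is combined with behavioral signals (transaction habits, cash flow, sender-recipient relationship), so no single layer decides alone. All names, weights, and thresholds below are hypothetical illustrations, not MoMo's actual system.

```python
# Toy sketch of "multi-layer protection": combine an identity-layer
# liveness score with behavioral signals into one risk score.
# Weights and thresholds are invented for illustration only.
from dataclasses import dataclass

@dataclass
class TransactionSignals:
    liveness_score: float        # 0.0 (likely deepfake) .. 1.0 (likely live person)
    matches_usual_pattern: bool  # consistent with the user's transaction habits
    known_recipient: bool        # prior relationship between sender and recipient
    amount_vs_typical: float     # ratio of amount to the user's typical transfer

def risk_score(s: TransactionSignals) -> float:
    """Return a 0..1 risk score; higher means more likely fraud."""
    risk = 1.0 - s.liveness_score      # identity layer (liveness detection)
    if not s.matches_usual_pattern:
        risk += 0.3                    # behavioral layer
    if not s.known_recipient:
        risk += 0.2                    # relationship layer
    if s.amount_vs_typical > 3.0:
        risk += 0.2                    # cash-flow layer
    return min(risk, 1.0)

def decide(s: TransactionSignals, threshold: float = 0.6) -> str:
    """Flag the transaction for human review when combined risk is high."""
    return "flag_for_review" if risk_score(s) >= threshold else "allow"
```

With this structure, a transfer with a marginal liveness score, an unknown recipient, and an unusually large amount gets flagged even though no single signal is conclusive on its own, which is the point of layering identity verification with behavioral assessment.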
Regarding AI governance, Mr. Hung stated that the most important principle is that “tasks AI excels at should be handled by AI, while humans must remain responsible for what AI cannot do.”
According to him, AI can read documents, summarize information, and analyze data faster than humans, but it cannot take responsibility for final decisions. “No matter whether it is AI or any other tool, the final decision must always belong to humans. AI will never replace humans, but rather help people make faster decisions and better protect customers,” he stressed.
Mr. Nguyen Lam Thanh, Director of TikTok Vietnam, said TikTok is among the companies actively applying AI, offering one of the largest suites of AI-powered tools available both globally and in Vietnam.
However, Mr. Thanh emphasized that TikTok approaches AI cautiously and primarily views it as a tool to support content creation and service delivery. According to him, important decisions should not rely solely on an individual or a single AI system, but instead require the involvement of multiple parties to reduce errors and enhance verification. Final responsibility, he stressed, must still rest with humans.
“For content uploaded by users on the platform, we require clear labeling if the content is generated by AI, so that ordinary users can recognize that it was created using AI,” Mr. Thanh said.
Concluding the discussion, Mr. Tran Cong Quynh Lan said digital transformation is bringing significant convenience to consumers while simultaneously increasing concerns over financial losses and personal data exposure.
He noted that although AI amplifies risks, technology itself will also be the tool used to combat those risks. To achieve this, businesses must invest more heavily in cybersecurity while ensuring that data remains “clean, live, accurate, and sufficient.”
More importantly, he stressed that no single organization can solve the challenge alone. “We cannot operate independently. We must form an alliance to share risk data, detect fraud, and jointly strengthen public trust,” Mr. Lan said.