The wave of Artificial Intelligence (AI) is sweeping the globe at an astonishing pace, bringing disruptive changes to all industries. AI has evolved from assisting humans with existing tasks to autonomously completing cross-platform work and seamlessly integrating into human life. It has fundamentally altered how we interact with technology. AI applications are no longer merely tools for enhancing efficiency; they compel enterprises to re-examine their own positioning and value. While rooted in technological innovation, this evolution will ultimately reshape core corporate functions and operations.
This article examines the impact of AI’s growing prevalence on internal audit and internal control, and how audit functions can help enterprises strengthen compliance with laws and regulations. It also charts the future of auditing: addressing challenges, embracing opportunities, and managing the risks associated with AI, with the goal of transitioning from a role focused on “oversight and compliance” to one focused on “strategic consulting and value co-creation.” From the author’s perspective, enterprises must swiftly establish AI management systems, enhance employee AI literacy and usage training, and cultivate critical thinking and humanistic values among staff. With corresponding risk management mechanisms, AI-related risks, such as erroneous content generation, lack of contextual understanding, information security breaches, privacy violations, bias, and ethical lapses, can be effectively controlled, ensuring that AI applications are safe and conform to ethical norms.
Strengthening AI Legal and Regulatory Compliance and Implementing Compliance Assessments
The Organization for Economic Co-operation and Development (OECD) provided recommendations on AI development as early as 2019 and released a forward-looking governance framework for emerging technologies this year. The framework offers a reference for countries formulating proactive AI policies, focusing on core values such as human-centricity, fairness, transparency, explainability, security, and accountability. National requirements for AI will inevitably translate these values into concrete measures. Taiwan’s Financial Supervisory Commission (FSC) likewise issued the Guidelines for Artificial Intelligence (AI) Applications in the Financial Industry in June 2024, proposing six major principles: establishing governance and accountability mechanisms, prioritizing fairness and human-centric values, protecting privacy and customer rights, ensuring system robustness and security, implementing transparency and explainability, and promoting sustainable development.
I believe that both AI system users and providers are responsible for understanding the AI development process, identifying the risks generated at each stage of its life cycle, and establishing control measures. Audit units, in particular, must proactively follow the laws, regulations, international standards, related guidelines, and technological developments surrounding AI systems so they can design appropriate audit items and procedures. For example, how should we audit whether an AI system poses risks of unfairness, bias, or discrimination toward specific groups?

First, we can start at the data level by analyzing the AI system’s training data to determine whether it accurately reflects real-world diversity, encompassing data from various genders, races, age groups, and cultural backgrounds. We can assess whether a bias review mechanism is in place during data input and examine whether the training data was appropriately tested and cleansed of potential biases during processing, with any inequalities in the data corrected promptly.

Next, we can look at the model design and development level. We can evaluate the algorithms the AI system uses and determine whether specialized fairness-aware algorithms, which consider fairness metrics for different groups while optimizing model performance, are employed to train the model. We can also check whether fairness indicators, such as group equality or equal opportunity, are established and monitored to verify fair output across all groups.

Finally, we look at the monitoring level. After deployment, we can review whether the system’s performance across different groups is continuously monitored and whether adequate response and adjustment mechanisms exist for abnormal events. We can also confirm that user feedback channels and human review processes are in place so that affected groups can report unfair behavior by the AI system.
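To make the fairness indicators mentioned above concrete, the following is a minimal illustrative sketch, not drawn from any cited guideline, of how an audit team might compute per-group positive-prediction rates (a demographic-parity check) and true-positive rates (an equal-opportunity check) from a model’s labelled output. The record fields (`group`, `actual`, `predicted`) are assumptions for demonstration only.

```python
# Hypothetical fairness screen an auditor might run over sampled model
# output. Field names are assumptions, not part of any cited guideline.
from collections import defaultdict

def fairness_metrics(records):
    """Compute per-group positive-prediction rate (demographic parity)
    and true-positive rate (equal opportunity).

    Each record is a dict: {"group": str, "actual": 0 or 1, "predicted": 0 or 1}.
    """
    stats = defaultdict(lambda: {"n": 0, "pred_pos": 0, "actual_pos": 0, "tp": 0})
    for r in records:
        s = stats[r["group"]]
        s["n"] += 1
        s["pred_pos"] += r["predicted"]
        if r["actual"] == 1:
            s["actual_pos"] += 1
            s["tp"] += r["predicted"]  # true positive: actual 1, predicted 1
    report = {}
    for group, s in stats.items():
        report[group] = {
            # Share of the group receiving a positive decision.
            "positive_rate": s["pred_pos"] / s["n"],
            # Share of truly qualified members the model approves.
            "true_positive_rate": (s["tp"] / s["actual_pos"]
                                   if s["actual_pos"] else None),
        }
    return report
```

An auditor would then compare these rates across groups: a large gap in either metric is a signal to escalate for human review, though what counts as an acceptable gap is a policy decision, not a property of the code.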
In addition to Guidelines for Artificial Intelligence (AI) Applications in the Financial Industry, enterprises can reference the Artificial Intelligence Risk Management Framework (AI RMF) published by the U.S. National Institute of Standards and Technology (NIST) to help deliberate on and manage AI risks, evaluating the system’s reliability, explainability, and fairness. Furthermore, for issues concerning personal data and privacy protection, guidelines such as Singapore’s Advisory Guidelines on the Use of Personal Data in AI Recommendation and Decision Systems can be referenced.
Charting the Internal Audit Vision and Transformation Strategy
Given that traditional audit models struggle to cope effectively with the risks posed by AI systems, auditors must adopt a more forward-looking mindset, transitioning from passive ex-post review to proactive risk detection and disaster prevention. In this process, leveraging AI to drive the transformation of the audit function is imperative. This is not merely a technical upgrade but a reinvention of the role and an enhancement of its value, fundamentally changing the audit unit’s mindset and workflow. AI empowerment frees auditors from tedious daily tasks, allowing them to dedicate more effort to high-value risk analysis and strategic consulting. They can use AI for high-risk anomaly detection and continuous auditing, and collaborate more promptly with the first and second lines of defense to adjust control measures.
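As a simple illustration of the kind of screen a continuous-auditing pipeline might apply, the sketch below flags transaction amounts that deviate sharply from the population mean. This is an assumed example, not the author’s actual method; the threshold and data shape are placeholders for demonstration.

```python
# Illustrative continuous-auditing screen: flag values far from the mean
# as candidates for auditor follow-up. Threshold is an assumption.
import statistics

def flag_anomalies(amounts, z_threshold=3.0):
    """Return indexes of values more than z_threshold population standard
    deviations from the mean."""
    mean = statistics.fmean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []  # no variation, nothing to flag
    return [i for i, a in enumerate(amounts)
            if abs(a - mean) / stdev > z_threshold]
```

In practice such rules run on a schedule against live transaction feeds, and flagged items are routed to auditors for judgment rather than acted on automatically, keeping a human in the loop.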
Our department has been actively participating in international internal audit annual conferences in recent years, continuously following the latest global trends and applications in auditing and internal control. In the first quarter of this year, we completed a proof of concept (PoC) for a next-generation audit platform. The platform facilitates centralized collaboration and integrates resources from various systems, connecting information such as auditing data, ESG metrics, regulatory compliance, and cybersecurity intelligence. This not only enhances interaction and data transfer with regulatory bodies, third-party vendors, and subsidiaries but also improves the credibility and authenticity of audit reports. Furthermore, it enables timely communication with relevant units and integrates governance, risk, and compliance processes. This allows TDCC to implement precise audit operations, proactive risk management, and effective control measures. Simultaneously, the platform enables real-time observation and analysis of internal control status, performs risk assessments, creates audit analysis workflows, implements continuous monitoring, and tracks improvements. By applying AI, the platform proactively identifies potential risks for the company and root causes of control gaps, driving further refinement and innovation in internal audit and control.
Conclusion
According to the Global Risks Report published by the World Economic Forum this year, the adverse outcomes caused by AI technology are expected to escalate and become a significant technological risk in the next decade. This is also why jurisdictions such as the EU, the U.S., and Japan have uniformly emphasized AI transparency, trustworthiness, fairness, ethics, and accountability when formulating AI-related regulations. However, the risks associated with AI systems do not originate from the technology itself, but rather from its developers, trainers, and the data sources from which AI learns. In my view, enterprises should adopt a compliance perspective, referencing both domestic and international AI laws and regulations to properly formulate control objectives and measures for AI systems, ensuring they remain human-centric and deliver user-friendly services.
The next generation of auditors must be proficient not only in accounting, finance, and regulations, but also in data science, information security, and AI-related knowledge, to effectively address the multidimensional risks presented by AI systems and communicate efficiently with technical personnel. In a world where AI drives ever-greater automation, cultivating independent thought and humanistic values is paramount. The auditor’s human intelligence, including critical thinking, communication, coordination, and ethical judgment, is an invaluable asset that machines cannot replace. These qualities will help enterprises strike a balance between embracing innovation and managing risk. The audit function has already transitioned from the passive ex-post role of “catching errors” to the proactive roles of “empowerment” and “guidance.”
