
This article provides an update on Singapore's fairness framework for the adoption of artificial intelligence in finance.

Artificial intelligence and data analytics ("AIDA") technology is increasingly employed for its ability to optimise decision-making processes. AIDA removes human decision-making as a variable and replaces it with a data-driven approach. The adoption of AIDA by Financial Services Institutions ("FSI") has been observed in areas involving internal-process automation and risk management, in the form of credit scoring and fraud detection.

In response to the plethora of risks associated with the adoption of AIDA in finance, regulators across the globe have developed their own guidelines to address what they identify as the major risk categories. In a research study of 36 guidelines on ethics and principles for artificial intelligence, the team at the Berkman Klein Center found the theme of "fairness and non-discrimination" to be featured in all of the guidelines studied, the Monetary Authority of Singapore's ("MAS") FEAT principles being one of them.

The effectiveness of artificial intelligence is fundamentally predicated on the data it analyses. It follows that AIDA technology is limited by both latent biases within the data and the algorithmic perpetuation of the same. Context is of particular significance, as such latent biases can impede the system's ability to process the data. To counter such risks, it is essential to identify the context of the data being utilised and to understand how such data is relevant to the end product. Such latent biases can be observed from the following example:

