London: A free tool has been launched to support the safe use of AI applications amid the rapid expansion and adoption of the technology across a wide range of sectors, a UK university has said.
The tool has been developed by the PHAWM (Participatory Harm Auditing Workbenches and Methodologies) project led by the University of Glasgow to help organisations, policymakers, and the public make the most of AI apps while identifying their potential harms.
The University of Strathclyde, a partner in the project, said in a statement that the tool aims to address the urgent need for rigorous assessments of AI risks. It is also designed to support the aims of regulations such as the EU’s AI Act, introduced in 2024, which seeks to balance AI innovation with protections against unintended negative consequences.
It will enable ordinary users to conduct in-depth audits of the strengths and weaknesses of any AI-driven app, and it will actively involve audiences who are usually excluded from the audit process.