(Image: Getty)

The use of facial recognition technology would need to be publicly registered and assessed for potential harms in many cases and, in some cases, banned under regulation proposed by a group of leading academics.

Today the University of Technology Sydney’s Human Technology Institute will publish “Facial recognition technology: towards a model law”, which lays out a framework for regulating facial recognition technology in Australia.

While acknowledging the possible benefits of the technology, the report proposes a “risk-based approach” that would require users and sellers of facial recognition to consider and publish how it is used. Additionally, the model law would ban some high-risk uses.

As it stands, regulation of facial recognition happens through the federal Privacy Act, which covers the collection of biometric data, including face data. There are many exemptions to the act (including for small business) and it does not specifically address the unique challenges of facial recognition data.

In the report’s foreword, the experts argue Australian law does not effectively regulate facial recognition technology: “Our law does not reliably uphold human rights, nor does it incentivise positive innovation.”

The academics include former Australian human rights commissioner Edward Santow, who in that role called for a moratorium on the use of the technology in high-risk situations.

A model for regulating the technology

The model law proposes requiring developers and most users of facial recognition (called “deployers” in the report) to complete an assessment of the potential harms — including risks to human rights — and how they can minimise these harms.

A “facial recognition impact assessment” would consider factors such as where the technology is being used, how it’s used, how well it works, what decisions are being made using the data, and whether individuals are giving consent. The report also includes guidance on the potential risks associated with some of these factors; for example, use of facial recognition in a public space is considered riskier than use in a private space.

These assessments would be registered, published publicly and updated as necessary. Individuals using facial recognition for non-commercial purposes in a way covered by a previous assessment would not need to assess or register the technology; those deploying it for commercial use would need to register their use, but could rely on an existing assessment if the technology is used in the same way.

The model law would ban any use of facial recognition assessed as high risk unless it is for national security or law enforcement, for academic research, or where the regulator grants an exemption. Failing to comply would carry civil penalties, and injunctions could be granted against unauthorised use. The Office of the Australian Information Commissioner is named as the relevant regulator, and the report calls for adequate funding for the office.

In addition to the law, the report proposes a technical standard for facial recognition technology. Such a standard could require certain levels of security, audit logging, data quality and performance testing.

“Australia needs a dedicated facial recognition law. This report urges the federal attorney-general to lead this pressing and important reform process,” the report says.

Are you worried about the increasing use of facial recognition technology? Let us know your thoughts by writing to letters@crikey.com.au. Please include your full name to be considered for publication. We reserve the right to edit for length and clarity.