According to a Carnegie Endowment study, 75 out of the 176 countries surveyed are actively using AI for video surveillance. AI is being used to analyze video feeds and reduce human error in monitoring both public and private premises. It can even detect faces in real time and track individuals' movements, allowing operators to respond to crime the moment it occurs or even prevent it from occurring in the first place.
But are governments and public agencies even lawfully allowed to do this under data protection laws and regulations, given that such surveillance can infringe upon citizens' privacy? AI provides numerous capabilities to transform the way video surveillance takes place, but these capabilities need to be used carefully while ensuring regulatory compliance.
This article discusses how AI can be used for video surveillance while remaining compliant with these regulations.
Before we discuss facial recognition and its compliance requirements, it is important to understand the general guidelines for video surveillance. Under the GDPR, data collected through video surveillance is considered personal data, and the rules for processing personal data apply.
Any organization monitoring its premises through video surveillance is required to comply with the transparency principle by providing information about the surveillance. This is usually done by posting a notice on the premises informing people that they are being monitored; the notice must contain details such as contact information and the purpose of the surveillance, as required by Article 13 and Article 14.
Such disclosure in itself is not enough for video surveillance to take place under the GDPR; the surveillance must also rest on at least one of the six lawful bases for processing personal data (consent, contract, protection of vital interests, legitimate interests, public task or legal obligation).
How can law enforcement agencies look for a particular suspect within large amounts of video surveillance footage obtained from public areas? Without AI capabilities, they would need to review every recording manually, which is time- and resource-consuming. With AI-based facial recognition, a suspect can be identified automatically within large amounts of footage, as sketched in the example below.
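The following is a minimal, hedged sketch of how such a search could work. It assumes a single reference photo of the suspect, the open-source face_recognition and OpenCV (cv2) Python libraries, and hypothetical file names; it is an illustration only, not a description of any specific product's implementation.

```python
# Hypothetical sketch: scan surveillance footage for frames that match a
# reference photo of a suspect, and record only the matching timestamps.
# Assumes the open-source face_recognition and opencv-python packages.
import cv2
import face_recognition

# Reference image of the suspect (hypothetical file name).
suspect_image = face_recognition.load_image_file("suspect.jpg")
suspect_encoding = face_recognition.face_encodings(suspect_image)[0]

video = cv2.VideoCapture("surveillance.mp4")   # hypothetical footage file
fps = video.get(cv2.CAP_PROP_FPS) or 30.0
match_timestamps = []                          # seconds where the suspect appears
frame_index = 0

while True:
    ok, frame = video.read()
    if not ok:
        break
    # Sample roughly one frame per second to keep processing tractable.
    if frame_index % int(fps) == 0:
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        locations = face_recognition.face_locations(rgb)
        encodings = face_recognition.face_encodings(rgb, locations)
        for encoding in encodings:
            if face_recognition.compare_faces([suspect_encoding], encoding, tolerance=0.6)[0]:
                match_timestamps.append(frame_index / fps)
                break
    frame_index += 1

video.release()
print("Suspect appears around (seconds):", match_timestamps)
```

Restricting the output to match timestamps, rather than exporting entire clips, also supports the data minimization obligations discussed below.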
However, doing so is not as easy as it sounds, given the regulatory complications involved. On one hand, facial recognition helps reduce crime and serve justice; on the other, it can infringe upon the privacy of the public.
In the US, the use of facial recognition technology by federal law enforcement is addressed by the Facial Recognition Technology Warrant Act, which requires agencies to obtain a warrant based on probable cause of criminal activity before using facial recognition for ongoing surveillance. The technology can then be used to track a suspect for up to 30 days.
More importantly, law enforcement agencies need to minimize the acquisition, retention, and dissemination of information about individuals outside the warrant's purview. This is challenging: agencies should use AI only to obtain the exact points in the footage where the suspect appears and take care to ensure that the privacy of other individuals is not infringed upon. Such footage needs to be stored securely, and faces and other personally identifiable information (PII) need to be redacted before the footage is used in court. We discuss redaction later in this article.
Under the GDPR, facial data is recognized as sensitive (biometric) data, and processing such data is prohibited under Article 9 unless the person has given explicit consent or special circumstances allow the processing. Law enforcement agencies can use facial recognition only if it is necessary for the establishment, exercise or defense of legal claims.
Moreover, when using facial recognition technology through AI, law enforcement agencies are required to carry out a Data Protection Impact Assessment (DPIA) to identify and minimize the risks of processing facial data. As per Article 83(4), failure to do so can lead to a fine of up to €10 million or 2% of the organization's global annual turnover, whichever is higher.
AI can, in fact, help with regulatory compliance. A number of regulations require faces and other identifying objects to be redacted in video surveillance footage. Let's first look at these regulations in the US and the EU.
Several US federal and state laws require organizations and law enforcement agencies to redact personally identifiable information (PII) from data. Law enforcement agencies, when using video surveillance as evidence in court, need to redact the faces and PII of people other than the suspects.
To ensure such compliance, organizations and law enforcement agencies can carry out redaction manually or save time using AI capabilities.
Under the GDPR, those being recorded have the right to obtain certain information from the data controller (the person or entity operating the video surveillance system), as laid down in Article 15. This includes the right to obtain a copy of the video footage. However, when providing such footage, organizations need to ensure that information about other people in the footage is redacted. This redaction can be done manually or automatically through AI.
When using AI for redaction, organizations need to be careful about compliance, since analyzing faces to identify individuals may not be allowed under these regulations. What AI can do instead is detect the faces in surveillance footage, separate them from other objects, and blur or mask them. This way the biometric data in the faces is not analyzed for identification, and the regulations are not violated. A minimal sketch of this approach follows.
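As a rough illustration of that approach (a sketch under stated assumptions, not any vendor's actual pipeline), the snippet below uses OpenCV's bundled Haar cascade detector to locate faces and blur them; it never computes or compares biometric templates, so no individual is identified. File names are hypothetical.

```python
# Hypothetical sketch: blur all detected faces in a clip before sharing it.
# Uses OpenCV's bundled Haar cascade face detector; faces are only located,
# never matched against anyone, so no biometric identification takes place.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

video = cv2.VideoCapture("footage_to_share.mp4")        # hypothetical input file
fps = video.get(cv2.CAP_PROP_FPS) or 30.0
width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter(
    "footage_redacted.mp4",                             # hypothetical output file
    cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height)
)

while True:
    ok, frame = video.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect face bounding boxes and replace each region with a heavy blur.
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(frame[y:y + h, x:x + w], (51, 51), 0)
    writer.write(frame)

video.release()
writer.release()
```

In practice, a stronger detector combined with tracking would typically be used so that faces stay masked between detections, but the principle of detecting and masking without identifying remains the same.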
VIDIZMO provides the capability to redact faces in videos while ensuring compliance.
When analyzing public video surveillance through AI facial recognition capabilities, law enforcement agencies need to make sure they have a warrant to identify a suspect. For organizations monitoring private premises, it is important to obtain consent before using any form of facial recognition technology and to provide a notice informing people that they are being monitored.
At VIDIZMO, with 20 years of experience in the video industry, we specialize in the ethical use of AI, video surveillance software and its compliance requirements. Our video streaming and content management system and digital evidence management system allow you to ingest video surveillance footage into a centralized repository. We also offer AI capabilities to automate business processes related to video and audio.
We process and store content while ensuring compliance with various regulations, including GDPR and HIPAA.
Disclaimer: This article is for information purposes only. We recommend you perform further due diligence by doing your own research and going over the official GDPR articles.