Artificial intelligence (AI) is widely regarded as one of the defining innovations of the 21st century. Fiction has long imagined AI paving the way for endless possibilities in computer technology. From face-detection software to auto-transcribing tools, smart chatbots and the massive world of the Internet of Things (IoT), AI has come a long way. But as of this writing, we are still in the early phases of exploring it.
Source: Sipior, J. C. (2020), International Journal of Information Management
In this article, we will explore where AI stands today and how accurate it really is. We will analyze the accuracy of the AI-based algorithms available to us and walk you through the possibilities and limitations of present-day technology. So, let us begin!
Artificial Intelligence And Today's World
Artificial intelligence has been incorporated into all major aspects of today's technology. We can find AI in many forms everywhere, from cars to kitchens. It also operates at the corporate and commercial level, providing solutions that were previously difficult or simply impossible.
Where Is AI Being Used?
Major applications of AI that we often hear about include smart cars, multilingual translation, transcription and correction tools, smart household devices, online customer care chatbots, face detection and, more broadly, object detection software. The law enforcement, healthcare and education industries also benefit from AI-based practices and technologies.
Many platforms incorporate AI into their systems to make their solutions smarter and more efficient for their users. VIDIZMO is among them, integrating AI into its products, EnterpriseTube and Digital Evidence Management System (DEMS).
Accuracy In AI Today
The advent of artificial intelligence has opened several doors for humanity. But we know for a fact that it is still a technology under development. It will need to mature over the coming decades before it comes close to delivering the idealized results we expect based on how AI is portrayed in fiction.
Even amid all that growth and evolution, AI cannot promise everything. While it is efficient at overcoming constraints and accomplishing tasks, it still cannot match human intellect. A human's rational decision-making is often better at working with the data presented than a machine's algorithm. Thus, the accuracy of AI becomes a factor to consider.
A simple example is language translation. Translation software is often imprecise when working with context-dependent text, whereas a human translator easily grasps the intended meaning. This can be observed in products such as Google Translate and Grammarly, market-leading names in language translation and correction that still fall short in many places where human intervention is necessary. Thus, so far, the process cannot be fully automated.
How Does AI Work?
To understand why AI falls short in several areas, and why we have not yet been able to have it automate everything the way The Jetsons depicted, we first need to know how it works. In other words, we need to understand how the element of intelligence is put into a machine before we can fully grasp what affects AI accuracy.
Welcome To Machine Learning
Machine learning, or ML, is the branch of AI-based programming in which a machine learns to solve new problems by making use of existing data. That data, referred to as a data set, serves as the basis of the machine's intelligence. The benefit of machine learning is that the computer makes its calculations on its own account, free from human intervention and error.
A simple example is a program meant to tell whether a picture shows a cat or a dog. A data set is created containing several hundred pictures of both cats and dogs, each labeled accordingly. The program uses this data set as its source data, which enables it to identify whether an input picture is that of a cat or a dog.
To make the model's job easier, you should use proper data labeling techniques.
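To make this concrete, below is a minimal sketch of such a classifier in Python, using TensorFlow's Keras API. The folder path, image size and layer sizes are illustrative assumptions, not a recipe:

```python
import tensorflow as tf

# Load labeled images from disk; "data/train" is a hypothetical folder with
# one subfolder per class ("cats/", "dogs/"), which becomes the label.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(128, 128), batch_size=32)

# A small convolutional network: just enough to tell two classes apart.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),            # normalize pixels to [0, 1]
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # one probability: dog vs. cat
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)                        # learn from the labeled data set
```

The sigmoid output maps every picture to a single probability: values near 1 mean "dog" and values near 0 mean "cat".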
You are free to create your own data set, or you can use a ready-made one available for free on the internet. TensorFlow is a notable provider of data sets and libraries for novice and advanced programmers alike.
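For instance, assuming the tensorflow-datasets package is installed, a public cat-and-dog data set can be loaded in a few lines:

```python
import tensorflow_datasets as tfds

# Downloads and prepares roughly 23,000 labeled cat/dog photos on first use.
ds, info = tfds.load("cats_vs_dogs", split="train",
                     as_supervised=True, with_info=True)

print(info.features)                      # image and label specification
print(info.splits["train"].num_examples)  # how many labeled examples we get
```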
Data Sets – Bigger Is Better
Data sets are the key to getting results out of machine learning algorithms. A model delivers more accurate results when provided with a larger data set, simply because it has more data to learn from and compare against.
Returning to our example of cats and dogs, let us consider a more difficult problem. Suppose a picture of a chihuahua, a small dog breed with a petite body and pointy ears, is fed into the machine. Here, the model would have trouble deciding which category the picture falls into, given how similar a chihuahua's physical features are to those of a typical cat breed.
However, if the data set were increased in size, say to around 100,000 pictures each of cats and dogs, the algorithm would have a better chance of knowing where a picture of a chihuahua belongs. It would be working with far more data to compare the input against, and would thus make more accurate decisions.
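A rough way to see this effect yourself, sketched below under the same assumptions as the earlier example, is to train one model on a small slice of a data set and another on a larger slice, then compare their accuracy on a held-out split. The exact numbers will vary from run to run:

```python
import tensorflow as tf
import tensorflow_datasets as tfds

def build_model():
    # A small CNN along the lines of the earlier sketch.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

def prepare(ds, n):
    # Keep the first n labeled examples, then resize, normalize and batch them.
    return (ds.take(n)
              .map(lambda img, lbl: (tf.image.resize(img, (128, 128)) / 255.0, lbl))
              .batch(32))

train_raw = tfds.load("cats_vs_dogs", split="train[:80%]", as_supervised=True)
test_raw = tfds.load("cats_vs_dogs", split="train[80%:]", as_supervised=True)
test_ds = prepare(test_raw, 2000)

for n in (1000, 10000):  # a small training slice vs. a ten-times-larger one
    model = build_model()
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(prepare(train_raw, n), epochs=5, verbose=0)
    _, acc = model.evaluate(test_ds, verbose=0)
    print(f"trained on {n} images -> held-out accuracy {acc:.2f}")
```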
However, the tradeoff for accuracy is speed. With larger data sets being processed for each comparison, more processing power is called for, along with more time to process it. If the hardware prerequisites are not met, the output is generated far slower than anticipated.

Moreover, AI can never be fully on par with human intellect. It has the capability to learn, but not beyond what it is programmed to learn. Suppose the model in the example above is given a picture of a mouse or a fox. The machine cannot classify it as anything other than a cat or a dog; it will try to fit the image into one of the two categories it was programmed for. In similar scenarios, AI will treat gibberish, garbage values and useless data as part of its legitimate input, which degrades its accuracy.
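To illustrate the second point, here is a hedged sketch of how a two-class model handles an out-of-category picture. Since the model has no "neither" answer, one common (if imperfect) workaround is to refuse to label inputs the model is not confident about; the threshold value below is an arbitrary assumption:

```python
import numpy as np

def classify(model, image, threshold=0.9):
    """Label an image, but refuse to guess when the model is unsure."""
    # The model outputs one sigmoid probability: near 1.0 = dog, near 0.0 = cat.
    p = float(model.predict(image[np.newaxis, ...], verbose=0)[0, 0])
    confidence = max(p, 1.0 - p)  # distance from the undecided midpoint of 0.5
    if confidence < threshold:
        return "unknown"          # likely neither a cat nor a dog
    return "dog" if p >= 0.5 else "cat"
```

A fox photo fed to an unguarded model would still come back as "cat" or "dog"; the confidence check at least gives the system a way to admit it does not know.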
AI Accuracy As Perceived By The Media
The mainstream public assumes artificial intelligence to be a magical innovation. This assumption has led to both inflated expectations of the technology and irrational fear of it. People assume it will deliver accurate results from data that is insufficient or irrelevant, which is impossible. Adding to the hysteria, journalists have taken things to an extreme, broadcasting AI and machine learning claims and studies that themselves hold no scientific ground.
Instances of such events include a 2017 Newsweek article covering a Stanford University study. The report described an AI-based predictive model that could supposedly ascertain a male individual's sexual orientation from facial images. The study behind the project, labeled "gaydar", stated that the model had 91% accuracy. However, the claim is patently false, as facial image data does not correlate with a person's sexuality or sexual identity.
Similarly, in 2017, the Global Times of China reported that Professor Wu Xiaolin and his team had developed an AI that could predict an individual's criminality from their facial features. The study was also covered by MIT Technology Review and The Telegraph, which quoted the model as having 90% accuracy. Wu and his student Zhang Xi said they trained their AI on a data set of 1,856 images of individuals between the ages of 18 and 55, and that it could discriminate between potential criminals and non-criminals based on a select few facial features. However, as in the previous case, the claims are factually false: facial features do not determine whether an individual will become a criminal.
The Fact And The Fiction
To know the full potential of AI, one must understand how it works. Someone aware of what the machine must do to deliver its output will have realistic expectations of the technology. A person whose expectations are shaped by how AI is represented in mainstream media, on the other hand, will expect a solution provider to deliver the same, which is highly unlikely and most probably impossible.
Case In Point: The Accuracy Of Our Platform's AI
Here at VIDIZMO, the platform uses Azure Cognitive Services and AWS to power its artificial intelligence features, such as AI-generated automatic transcription, with the client free to choose between the two. According to publications from March 2020, Azure Cognitive Services supports trilingual audio with an accuracy of 79%, while AWS has an accuracy of 73%. Both companies keep working to make their services more accurate, using neural networks that continue learning from input data. Regardless, the accuracy is more than sufficient for high-quality audio with distinct speakers. It tends to drop as the number of speakers increases and the audio quality falls, at which point the AI starts to treat noise and gibberish as interpretable words and transcribes them as well.
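For a sense of what such an integration looks like, below is a minimal transcription sketch using the Azure Speech SDK for Python. The key, region and file name are placeholders, and this is an illustrative sketch, not VIDIZMO's actual implementation:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials; a real deployment reads these from configuration.
speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY",
                                       region="YOUR_REGION")
audio_config = speechsdk.audio.AudioConfig(filename="recording.wav")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config,
                                        audio_config=audio_config)

result = recognizer.recognize_once()  # transcribe a single utterance
print(result.text)
```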
Concluding Our Analysis
AI, an emerging, ever-expanding and revolutionary technology, holds enormous potential waiting to be explored across the vast world of computer engineering. Much of that technology, however, is still in the works and will take time to reach the broader market. It is just as important to understand what may never come as what is yet to arrive. Fiction has filled our minds with unrealistic ideals for AI that present-day technology cannot achieve, and some may remain out of reach even decades ahead unless an extraordinary breakthrough in ML occurs along the way. Therefore, as users (or potential users) of the technology, we must acquaint ourselves with its reality.
For Additional Reading:
Sipior, J. C. (2020). Considerations for development and use of AI in response to COVID-19. International Journal of Information Management, 55, 102170.
Posted by Muhammad Nabeel Ali
Nabeel is an Associate Product Marketing Manager at VIDIZMO and an expert in evidence management technologies. He actively researches innovative trends in this domain, such as artificial intelligence. For any queries, feel free to reach out to websales@vidizmo.com