Artificial Intelligence (AI) involves using computers to do things that traditionally require human intelligence. It is the overarching discipline that covers anything related to making machines smart.
The term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem-solving”.
Artificial Intelligence is a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation.
Computer science defines AI research as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.
Whether it’s a robot, a refrigerator, a car, or a software application, if you are making it smart, then it’s AI. Below we attempt to explain the important parts of artificial intelligence and how they fit together.
Regular programs define all possible scenarios and only operate within those defined scenarios. AI ‘trains’ a program for a specific task and allows it to explore and improve on its own. A good AI ‘figures out’ what to do when met with unfamiliar situations.
To apply AI, you need data, lots of it. AI algorithms are trained using large data sets so that they can identify patterns, make predictions, and recommend actions, much like a human would, just much faster.
This means creating algorithms to classify, analyze, and draw predictions from data. It also involves acting on data, learning from new data, and improving over time.
Machine Learning (ML) refers to systems that can learn from experience. Machine learning algorithms identify patterns and/or predict outcomes.
Machine Learning is the science of getting a computer to act without being explicitly programmed: the capability of Artificial Intelligence systems to learn by extracting patterns from data. It is an approach, and a subset, of Artificial Intelligence based on the idea that machines can be given access to data along with the ability to learn from it.
Machine Learning is a method where the target (goal) is defined and the steps to reach that target are learned by the machine itself through training (gaining experience).
There are three types of machine learning algorithms:
Supervised learning: Data sets are labeled so that patterns can be detected and used to label new data sets
Unsupervised learning: Data sets aren’t labeled and are sorted according to similarities or differences
Reinforcement learning: Data sets aren’t labeled but, after performing an action or several actions, the AI system is given feedback
For example, consider identifying a simple object such as an apple or an orange. The target is achieved not by explicitly specifying the object’s details in code; instead, the machine goes through a process similar to a child learning from pictures of the object, and defines for itself the steps needed to identify an apple or an orange.
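The apple-or-orange idea can be sketched as a tiny supervised learner. This is a minimal 1-nearest-neighbor classifier; the features (weight in grams, a surface-texture score) and all the numbers are invented for illustration, not real measurements:

```python
# A minimal sketch of supervised learning: a 1-nearest-neighbor classifier
# that labels fruit as "apple" or "orange" from two hypothetical features,
# weight in grams and a surface-texture score (0 = smooth, 1 = bumpy).

def nearest_neighbor(train, query):
    """Return the label of the training example closest to the query."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(train, key=lambda ex: dist(ex[0], query))
    return best[1]

# Labeled training data: (features, label) pairs.
training_data = [
    ((150, 0.1), "apple"),   # smooth skin, typical apple weight
    ((170, 0.2), "apple"),
    ((140, 0.8), "orange"),  # bumpy peel
    ((160, 0.9), "orange"),
]

print(nearest_neighbor(training_data, (155, 0.15)))  # apple-like fruit
print(nearest_neighbor(training_data, (142, 0.85)))  # orange-like fruit
```

Instead of hand-coding rules ("oranges are bumpy"), the program labels new fruit by comparing it to labeled examples, which is the essence of supervised learning.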
Many organizations sit on huge data sets related to customers, business operations, or financials. Human analysts have limited time and brainpower to process and analyze this data. Therefore, machine learning can be used to:
- Predict outcomes given input data, like regression analysis but on much larger scales and with multiple variables. A perfect example is algorithmic trading, where the trading model must analyze vast amounts of input data and recommend profitable trades. As the model keeps working with real-world data, it can even ‘improve’ itself and adapt its trading strategies to market conditions.
- Find insights or patterns in large data sets that human eyes sometimes miss. For example, a company can study how its customer purchase patterns are evolving and use the findings to modify their product lines.
- Do a lot more in less time. Goodbye grunt work.
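The "predict outcomes" bullet above is, at its simplest, regression. The sketch below fits a straight line to past data with ordinary least squares and extrapolates; the data points are made up for illustration:

```python
# A toy sketch of ML-style prediction: fit a line y = a*x + b to past
# observations with ordinary least squares, then predict an outcome for
# a new input. The numbers are invented for illustration.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Past observations, e.g. ad spend (x) vs. resulting sales (y).
xs = [1, 2, 3, 4, 5]
ys = [2, 4, 6, 8, 10]   # a perfect y = 2x relationship

a, b = fit_line(xs, ys)
print(a * 6 + b)  # predicted sales for x = 6 -> 12.0
```

Real ML models do the same thing at a vastly larger scale: many variables, noisy data, and models far more expressive than a single line.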
Many AI methodologies, including neural networks, deep learning, and evolutionary algorithms, are related to machine learning.
Netflix applies machine learning to your viewing history to personalize the movie and TV show recommendations you see. Netflix also analyzes what you and people with similar preferences watched in the past, and even auto-generates personalized thumbnails and artwork for movie titles, to entice you to click on a title that you’d otherwise ignore.
Artificial Neural Network
Artificial Neural Networks (ANN) refer to models of human neural networks that are designed to help computers learn. Neural networks are inspired by the human brain and copy its working process. They are based on a collection of connected units or nodes called artificial neurons or perceptrons. The objective of this approach is to solve problems in the same way that a human brain does, e.g., brain modeling, time series prediction, classification, etc.
A good example of an Artificial Neural Network is Amazon’s Recommendation Engine. Amazon uses Artificial Neural Networks to generate recommendations for its customers, suggesting products by showing you “customers who viewed this item also viewed” and “customers who bought this item also bought”. Amazon assimilates data from all its users’ browsing experiences and uses that information to make effective product recommendations.
Neural Networks are built to replicate the web of neurons in the human brain. By analyzing vast amounts of digital data, these neural nets can learn all sorts of useful tasks, like identifying photos, recognizing commands spoken into a smartphone, and, as it turns out, responding to Internet search queries. In some cases, they can learn a task so well that they outperform humans. They can do it better. They can do it faster. And they can do it at a much larger scale.
A neural network tries to replicate the human brain’s approach to analyzing data. They can identify, classify and analyze diverse data, deal with many variables, and find patterns that are too complex for human brains to see.
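The basic unit of such a network is a single artificial neuron. The sketch below shows one: it weighs its inputs, adds a bias, and "fires" if the total crosses a threshold. The weights here are hand-picked (so the neuron computes logical AND) rather than learned, purely for illustration:

```python
# A minimal sketch of one artificial neuron (perceptron): it weighs its
# inputs, adds a bias, and fires (outputs 1) if the sum exceeds zero.
# These weights are hand-picked so the neuron computes logical AND.

def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

weights = [1.0, 1.0]
bias = -1.5  # both inputs must be 1 for the weighted sum to exceed zero

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], weights, bias))
```

In a real network, training adjusts the weights and bias automatically from data; connecting many such units is what gives the network its power.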
Deep Learning (DL) refers to systems that learn from experience on large data sets. Deep learning is a subset of machine learning.
When applied to a neural network, it allows the network to learn without human supervision from unstructured data (data that isn’t classified or labeled). This is perfect for analyzing ‘big data’ that organizations collect. These big data sets include different data formats such as text, images, video and sound.
Neural networks are frequently combined with machine learning, deep learning, and computer vision (training computers to derive meaning from pictures). That’s why people talk about ‘deep neural networks’, which are basically neural networks with more than two layers. More layers = more analytical power.
Deep neural networks can be trained to identify and classify objects. A cool use is facial recognition — identifying unique faces in photos and videos. Neural networks also learn over time. For instance, they get better at classifying objects and identifying faces as they are fed more data.
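Why do extra layers matter? A single neuron cannot compute XOR (output 1 when exactly one input is 1), but stacking neurons in layers can. The weights below are hand-picked for illustration, not learned:

```python
# A sketch of why depth helps: stacking neurons in layers lets a network
# compute functions a single neuron cannot, such as XOR. The weights are
# hand-picked for illustration rather than learned from data.

def step(x):
    return 1 if x > 0 else 0

def xor_net(a, b):
    # Hidden layer: one OR-like neuron and one AND-like neuron.
    h1 = step(a + b - 0.5)   # fires if at least one input is 1
    h2 = step(a + b - 1.5)   # fires only if both inputs are 1
    # Output layer: fires when OR is on but AND is off.
    return step(h1 - h2 - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

Deep networks used for faces and photos follow the same principle at a vastly larger scale, with millions of learned weights instead of four hand-picked ones.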
This approach, called Deep Learning, is rapidly reinventing so many of the Internet’s most popular services, from Facebook to Twitter, and even Google.
A subset of machine learning, evolutionary algorithms are inspired by biological evolution and use mechanisms that imitate the evolutionary concepts of reproduction, mutation, recombination, and selection. Evolutionary computation techniques can produce highly optimized solutions in a wide range of problem settings, e.g., genetic algorithms and genetic programming.
Evolutionary algorithms self-improve over time. They create a population of algorithms and preserve the ones most successful at predicting outcomes. Applying the ‘survival of the fittest’ principle, the best algorithms are kept alive and the losers are discarded. Sections of code from the winning algorithms are used to create a new population of algorithms, and the selection process repeats.
Evolutionary algorithms are well suited to optimization tasks where there are a lot of variables and a dynamic environment. Basically: find a way to the best possible result.
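The selection-recombination-mutation loop described above can be sketched as a toy genetic algorithm. The fitness function here (count the 1s in a bit-string) is deliberately trivial and chosen only for illustration:

```python
# A toy genetic algorithm: evolve bit-strings toward the one with the
# most 1s. Selection keeps the fittest half, recombination splices two
# parents, and mutation randomly flips a bit. Parameters are arbitrary.
import random

random.seed(0)  # deterministic runs for the example

def fitness(bits):
    return sum(bits)

def evolve(length=12, pop_size=20, generations=40):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # selection
        children = []
        while len(children) < pop_size - len(survivors):
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, length)      # recombination
            child = p1[:cut] + p2[cut:]
            i = random.randrange(length)           # mutation
            child[i] ^= random.randint(0, 1)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # close to the maximum possible fitness of 12
```

A trading variant would use the same loop, with "fitness" replaced by simulated trading profit and bit-strings replaced by rule parameters.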
Evolutionary algorithms in action: Stock selection and trading
Evolutionary algorithms can be built into neural network models to pick stocks and identify trades. Trading rules are set up as parameters and the algorithm works to maximize trading profit. Small changes are introduced into the model over time, and the changes that have the largest desirable impact are kept for the next generation. The model improves with time.
These trading models are popular among institutional quantitative traders. Individuals can also access them through software packages on the market.
Natural Language Processing
Natural Language Processing (NLP) refers to systems that can understand language.
In the field of natural language processing, we mainly focus on the interactions between human language and computers. NLP is a way for computers to analyze, understand, and derive meaning from human language in a smart and useful way. One of the older and best-known examples of NLP is spam detection, which looks at the subject line and the text of an email and decides if it’s junk. By utilizing NLP, developers can organize and structure knowledge to perform tasks such as automatic summarization, translation, named entity recognition, relationship extraction, sentiment analysis, speech recognition, and topic segmentation. Current approaches to NLP are based on machine learning.
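The spam-detection example above can be sketched in a few lines. This keyword-scoring approach is a deliberately crude stand-in: the word list and threshold are invented, whereas real systems learn such weights from labeled data:

```python
# A minimal sketch of NLP-style spam detection: score an email by counting
# hypothetical spam-indicator words. Real filters learn these weights from
# labeled examples instead of hard-coding a word list.

SPAM_WORDS = {"free", "winner", "prize", "urgent", "click"}

def spam_score(text):
    words = text.lower().split()
    return sum(1 for w in words if w.strip(".,!?") in SPAM_WORDS)

def is_spam(text, threshold=2):
    return spam_score(text) >= threshold

print(is_spam("URGENT! You are a winner, click for your free prize"))
print(is_spam("Meeting moved to 3pm, see agenda attached"))
```

Even this crude version shows the core NLP move: turning raw text into features (here, word counts) that a decision rule can act on.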
Automated Speech Recognition
Automated Speech Recognition (ASR) refers to the use of computer hardware and software-based techniques to identify and process the human voice.
The principal underlying technologies are Automated Speech Recognition (ASR) and Natural Language Processing (NLP). ASR is the processing of speech to text, whereas NLP is the processing of the text to understand meaning. Because humans speak with informalities and abbreviations, it takes extensive computer analysis of natural language to produce accurate outputs.
Automated Speech Recognition and Natural Language Processing fall under Artificial Intelligence. Machine Learning and Natural Language Processing overlap, as Machine Learning is often used for Natural Language Processing tasks. Automated Speech Recognition also overlaps with Machine Learning and has historically been a driving force behind many machine learning techniques.
Machine Vision
Machine vision is the science of allowing computers to see. This technology captures and analyzes visual information using a camera, analog-to-digital conversion, and digital signal processing. It is often compared to human eyesight, but machine vision isn’t bound by biology and can be programmed even to see through walls. It is used in a range of applications, from signature identification to medical image analysis.
Robotics
Robotics is a field of engineering focused on the design and manufacturing of robots. Robots are often used to perform tasks that are difficult for humans to perform or to perform consistently.
Examples include car assembly lines, hospital work, office cleaning, serving and preparing food in hotels, patrolling farm areas, and even acting as police officers or helping NASA move large objects in space. Robots are artificial agents that behave like humans and are built to manipulate objects by perceiving, picking, moving, and modifying their physical properties, freeing people from repetitive work without getting bored, distracted, or exhausted. Robotics is not only part of Computer Science; Mechanical and Electrical Engineering also play big roles:
a) AI robots have a mechanical construction and form designed to accomplish a particular task; this is the domain of Mechanical Engineering.
b) Robots have electrical components that power and control the machinery; this is the domain of Electrical Engineering.
c) Robots also contain some level of computer programming, which determines what, when, and how a robot does something; this is where Computer Science comes in.
Implementations of Artificial Intelligence can be found in almost all sectors of society:
- AI in business
Robotic process automation is being applied to highly repetitive tasks normally performed by humans. Machine learning algorithms are being integrated into analytics and CRM platforms to uncover information on how to better serve customers.
Chatbots have been incorporated into websites to provide immediate service to customers. Automation of job positions has also become a talking point among academics and IT analysts.
- AI in Education
AI can automate grading, giving educators more time. AI can assess students and adapt to their needs, helping them work at their own pace. AI tutors can provide additional support to students, ensuring they stay on track. AI could change where and how students learn, perhaps even replacing some teachers.
- AI in Finance
AI in personal finance applications, such as Mint or TurboTax, is disrupting financial institutions. Applications like these collect personal data and provide financial advice. Other programs, such as IBM Watson, have been applied to the process of buying a home. Today, software performs much of the trading on Wall Street.
- AI in Law
The discovery process in law, sifting through documents, is often overwhelming for humans. Automating this process is a more efficient use of time. Startups are also building question-and-answer computer assistants that can answer questions by examining the taxonomy and ontology associated with a database.
- AI in Manufacturing
This is an area that has been at the forefront of incorporating robots into the workflow. Industrial robots used to perform single tasks and were separated from human workers, but as technology advanced that changed.
- AI in Healthcare
The biggest bets are on improving patient outcomes and reducing costs. Companies are applying machine learning to make better and faster diagnoses than humans. One of the best-known healthcare technologies is IBM Watson. It understands natural language and is capable of responding to questions asked of it. The system mines patient data and other available data sources to form a hypothesis, which it then presents with a confidence scoring schema.
Other AI applications include chatbots (computer programs used online to answer questions and assist customers) that help schedule follow-up appointments or guide patients through the billing process, and virtual health assistants that provide basic medical feedback.
- AI in Automobiles
This area of AI has gathered a lot of attention. The list of vehicles includes cars, buses, trucks, trains, ships, submarines, autopilot flying drones, and more. These use a combination of computer vision, image recognition, and deep learning to build automated skill at piloting a vehicle while staying in a given lane and avoiding unexpected obstructions, such as pedestrians. Proponents claim that early models are already safer than human drivers.
Even the best AI today cannot match up to the human brain just yet. While some AI is designed to mimic the human brain, AI today is only good at a relatively narrow range of tasks.
AI can apply massive computing power to a narrow set of data and methods. The brain, on the other hand, applies medium computing power to a much wider set of data and methods. We can apply our brains to almost anything, while AI currently specializes in particular areas.
Many futurists believe that we will become one with machines
According to futurist Ray Kurzweil, if the technological singularity happens, then there won’t be a machine takeover. Instead, we’ll be able to co-exist with AI in a world where machines reinforce human abilities.
Kurzweil predicts that by 2045, we will be able to multiply our intelligence a billionfold by linking wirelessly from our neocortex to a synthetic neocortex in the cloud. This will essentially cause a melding of humans and machines. Not only will we be able to connect with machines via the cloud, but we’ll also be able to connect to another person’s neocortex. This could enhance the overall human experience and allow us to discover various unexplored aspects of humanity.
Dr. Ben Goertzel, the founder and CEO of SingularityNET, a blockchain-based AI marketplace, is one of the premier pioneers in the AI field.
Security and Ethical Concerns
The application of AI in the realm of self-driving cars raises security as well as ethical concerns. Cars can be hacked, and when an autonomous vehicle is involved in an accident, liability is unclear. Autonomous vehicles may also be put in a position where an accident is unavoidable, forcing the programming to make an ethical decision about how to minimize damage.
Another major concern is the potential for abuse of AI tools. Hackers are starting to use sophisticated machine learning tools to gain access to sensitive systems, complicating the issue of security beyond its current state.
Deep learning-based video and audio generation tools also present bad actors with the tools necessary to create so-called “deepfakes”, convincingly fabricated videos of public figures saying or doing things that never took place.
Regulation of AI technology
Despite these potential risks, there are few regulations governing the use of AI tools, and where laws do exist, they typically pertain to AI only indirectly. For example, federal Fair Lending regulations require financial institutions to explain credit decisions to potential customers, limiting the extent to which lenders can use deep learning algorithms, which are by nature opaque. Europe’s GDPR puts strict limits on how enterprises can use consumer data, which impedes the training and functionality of many consumer-facing AI applications.
In 2016, the National Science and Technology Council issued a report examining the potential role governmental regulation might play in AI development, but it did not recommend specific legislation be considered. Since that time the issue has received little attention from lawmakers.