THE AGE OF ARTIFICIAL INTELLIGENCE (AI)

Valentine Enedah
14 min read · Mar 19, 2022


ORIGIN OF AI

For a long time, people have imagined inanimate objects coming to life as intelligent beings. Intelligent machines were the stuff of myth for the ancient Greeks, Chinese, and Egyptians, whose engineers also built early automatons.
The origins of modern artificial intelligence can be traced back to classical philosophers’ attempts to describe human thinking as a symbolic system. However, the field of artificial intelligence was not formally established until 1956, at a conference at Dartmouth College in Hanover, New Hampshire, where the term “artificial intelligence” was coined.

A postulated interior of the Duck of Vaucanson (1738–1739)

It wasn’t easy to create an artificially intelligent being. Following several reports criticizing AI progress, government funding and interest in the field declined — a period known as the “AI winter” from 1974 to 1980. The field was later revived in the 1980s when the British government began funding it again, in part to compete with Japanese efforts.
From 1987 to 1993, the field experienced another major winter, which coincided with the collapse of the market for specialized AI hardware and further cuts to government funding.

After that, research resumed, and in 1997, IBM’s Deep Blue became the first computer to defeat a reigning world chess champion, beating Russian grandmaster Garry Kasparov. In 2011, IBM’s Watson question-answering system defeated reigning champions Brad Rutter and Ken Jennings on the game show “Jeopardy!”
Eugene Goostman, a talking computer “chatbot,” made headlines in 2014 for duping judges into thinking he was a real flesh-and-blood human during a Turing test, a test of machine intelligence proposed by British mathematician and computer scientist Alan Turing in 1950. However, the achievement was met with criticism, with artificial intelligence experts pointing out that only one-third of the judges were duped and that the bot avoided some questions by claiming to be an adolescent who spoke English as a second language.

IBM’s Deep Blue
IBM’s Watson

WHAT IS AI?

Artificial intelligence, like all suitcase terms, is notoriously difficult to define precisely. A useful starting point is to drop the word “artificial” and first pin down what we mean by “intelligence.”
Intelligence is a set of abilities that includes perception, memory, language skills, quantitative skills, planning, abstract reasoning, decision-making, creativity, and emotional depth, among others.

Given how multi-dimensional human intelligence is, it stands to reason that artificial intelligence cannot be limited to a single function or technology. After all, AI is essentially humanity’s attempt to mimic its own cognitive powers in machines. This leads to possibly the finest one-sentence description of artificial intelligence (AI): the study of “intelligent agents,” that is, any system that perceives its environment and takes actions to maximize its chances of achieving its objectives. Computer vision, speech recognition, natural language processing, language translation, manipulation of physical objects, navigation through physical environments, logical reasoning, game-playing, prediction, long-term planning, and continuous learning are just a few of the capabilities that have fallen under the umbrella of artificial intelligence over the years. A related point is that the definition of artificial intelligence is always evolving; practitioners refer to this phenomenon (often with annoyance) as the “AI effect.”
In general, society considers it most natural to refer to a capability as “AI” only if it has yet to be solved.
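
To make the “intelligent agent” framing concrete, here is a minimal sketch in Python of the perceive-then-act loop that definition describes. The `Environment` and `ThermostatAgent` classes are hypothetical illustrations invented for this post, not part of any particular AI library.

```python
# A minimal agent-environment loop: the agent perceives the environment's
# state and chooses actions that push it toward a goal (a target temperature).

class Environment:
    """A hypothetical room whose temperature drifts downward each step."""
    def __init__(self, temperature=15.0):
        self.temperature = temperature

    def percept(self):
        return self.temperature

    def apply(self, action):
        # "heat" raises the temperature; "wait" lets it drift back down.
        self.temperature += 1.5 if action == "heat" else -0.5


class ThermostatAgent:
    """An agent whose objective is to keep the room near a target temperature."""
    def __init__(self, target=21.0):
        self.target = target

    def act(self, percept):
        return "heat" if percept < self.target else "wait"


env, agent = Environment(), ThermostatAgent()
for step in range(20):
    action = agent.act(env.percept())   # perceive, then decide
    env.apply(action)                   # act on the environment
print(f"Final temperature: {env.temperature:.1f}")
```

Trivial as it is, the loop captures the definition: percepts in, goal-directed actions out.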

The most notable distinction between artificial intelligence and human intelligence is that AI has no clear upper bound. Rather, with each passing year, its bounds continue to expand. Nobody knows for sure where this will lead. Alan Turing, one of the first and most influential AI thinkers, had a provocative viewpoint. In 1951, Turing observed: “It is typical to offer a grain of consolation in the guise of a statement that some particularly human feature could never be imitated by a machine. I am unable to provide such consolation because I feel that no such boundaries can be established.”

TYPES OF AI

There are four basic AI categories, according to the current classification system: reactive, limited memory, theory of mind, and self-aware.

1. Reactive Machines

Reactive machines carry out basic operations; this is the most basic level of artificial intelligence. These models produce an output in response to some input, with no learning involved, and every A.I. system begins here. A simple reactive machine takes a human face as input and outputs a box around it to identify it as a face. The model doesn’t save any data and doesn’t learn anything.
Static machine learning models are reactive machines: their architecture is the simplest, and they are available in GitHub repositories all over the internet. These models are simple to download, trade, pass around, and import into a developer’s toolbox. A popular example of a reactive AI machine is IBM’s Deep Blue, the machine that beat chess grandmaster Garry Kasparov in 1997.
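
As a concrete illustration of the face-in, box-out behaviour described above, here is a minimal sketch using OpenCV’s bundled Haar cascade detector; the file names are placeholders. The detector is pre-trained and stateless: it keeps no data between calls and learns nothing from the images it sees.

```python
import cv2  # pip install opencv-python

# Load a pre-trained, static face detector (no learning happens at run time).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

img = cv2.imread("photo.jpg")                 # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Same input, same output, every time: the detector is purely reactive.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("photo_with_boxes.jpg", img)
```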

2. Limited Memory

The capacity of an A.I. to keep past data and/or predictions, and to use that data to make better predictions, is referred to as the limited memory type. With limited memory, machine learning architecture becomes a little more sophisticated. Every machine learning model requires some memory of past data to be built, but it may then be deployed as a reactive machine. Three major kinds of machine learning models achieve this limited memory type:

i. Reinforcement learning: these models learn to make better predictions through many cycles of trial and error. Computers have been taught to play games like chess, Go, and Dota 2 using this technique (a minimal sketch follows this list).

ii. Long Short-Term Memory networks (LSTMs): researchers reasoned that using past data to predict the next item in a sequence, especially in language, would be beneficial, so they devised the Long Short-Term Memory architecture. When predicting the next element of a sequence, an LSTM uses learned gates to decide which earlier information to keep and which to discard, rather than treating everything in the past as equally important.

iii. Evolutionary Generative Adversarial Networks (E-GANs): an E-GAN has memory in the sense that it carries what worked well from one generation to the next as it evolves. Because training is stochastic rather than exact, the candidates it produces do not all follow the same route; a chance mutation may discover a better path, one with less resistance, and the next generation then evolves in the direction its predecessor stumbled upon. In this respect the E-GAN runs a process loosely analogous to biological evolution, where each successful generation is a little better positioned than its parent.
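
As promised under the reinforcement learning item above, here is a minimal tabular Q-learning sketch on a made-up five-cell corridor where the agent is rewarded only for reaching the rightmost cell. The Q-table is the “limited memory”: value estimates accumulated from past trial and error that improve later decisions.

```python
import random

N_STATES, ACTIONS = 5, ("left", "right")   # a tiny corridor; the goal is the rightmost cell
alpha, gamma, epsilon = 0.1, 0.9, 0.1      # learning rate, discount factor, exploration rate

# The Q-table is the agent's "limited memory": value estimates built up from past episodes.
Q = {s: {a: 0.0 for a in ACTIONS} for s in range(N_STATES)}

def step(state, action):
    """Hypothetical environment: move left or right, reward 1 for reaching the last cell."""
    nxt = min(state + 1, N_STATES - 1) if action == "right" else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(200):
    state, done = 0, False
    while not done:
        # Explore with probability epsilon (or while the stored values don't yet
        # discriminate); otherwise exploit what the memory already says is best.
        if random.random() < epsilon or len(set(Q[state].values())) == 1:
            action = random.choice(ACTIONS)
        else:
            action = max(Q[state], key=Q[state].get)
        nxt, reward, done = step(state, action)
        # Fold the observed outcome back into the stored estimate for this state-action pair.
        Q[state][action] += alpha * (reward + gamma * max(Q[nxt].values()) - Q[state][action])
        state = nxt

print({s: max(Q[s], key=Q[s].get) for s in range(N_STATES - 1)})   # learned policy: always "right"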

Limited Memory Types in practice
While every machine learning model is created with limited memory (it needs past data to train on), not every model keeps that memory once deployed. Limited memory A.I. works in two ways:

i. A team continuously trains a model on new data.

ii. The A.I. environment is built so that models are automatically retrained and refreshed based on how they are used and how they behave.

For an infrastructure to support a limited memory type, machine learning must be built into its structure. Active learning is becoming more widespread in the ML lifecycle. There are six steps in the ML active learning cycle (a minimal sketch follows the list):

i. Train Data: An ML model requires data to train on.
ii. Create an ML Model: The model is built.
iii. Predictions from the model: The model provides predictions.
iv. Feedback: Human or environmental inputs provide feedback to the model on its predictions.
v. Feedback becomes data: The data repository receives the feedback and stores it.
vi. Repeat step 1: This cycle should be repeated.
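
Here is a minimal sketch of that six-step cycle using scikit-learn. The two-cluster dataset and the “oracle” labels standing in for human feedback are invented for illustration; a real deployment would route step iv through people or the production environment.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical data: two noisy clusters (class 0 around (0, 0), class 1 around (3, 3)).
X = rng.normal(loc=np.repeat([[0.0, 0.0], [3.0, 3.0]], 100, axis=0), scale=1.0)
y_true = np.repeat([0, 1], 100)                        # stands in for human/environment feedback
labelled = [0, 1, 2, 3, 4, 100, 101, 102, 103, 104]    # a small starting training set
pool = [i for i in range(len(X)) if i not in labelled]

model = LogisticRegression()
for cycle in range(5):
    # Steps i-iii: train on the current data and predict over the unlabelled pool.
    model.fit(X[labelled], y_true[labelled])
    proba = model.predict_proba(X[pool])[:, 1]
    # Step iv: ask for feedback on the most uncertain predictions (closest to 0.5).
    uncertain = np.argsort(np.abs(proba - 0.5))[:10]
    newly_labelled = [pool[i] for i in uncertain]
    # Steps v-vi: the feedback becomes training data, and the cycle repeats.
    labelled.extend(newly_labelled)
    pool = [i for i in pool if i not in newly_labelled]
    print(f"cycle {cycle}: {len(labelled)} labelled examples")
```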

3. Theory of Mind

We haven’t yet reached the level of artificial intelligence known as the Theory of Mind. These are still in their early stages, but examples include self-driving cars. A.I. begins to engage with human thoughts and emotions in this sort of A.I. At the moment, machine learning models can help a person complete a task a lot. Alexa and Siri, for example, have a one-way connection with A.I. and kowtow to every command. When you cry angrily at Google Maps to take you somewhere else, it does not offer emotional support or remark, “This is the fastest route.” “Who should I call to let you know I’ll be late?” Instead, Google Maps continues to display the same traffic reports and ETAs that it previously showed, seemingly unconcerned by your plight. A Theory of Mind A.I. will be a better companion.

4. Self-Aware

This is the final stage of AI development, and it exists only in theory at the moment. Self-aware AI, as the name implies, is AI that has progressed to the point where it is so similar to the human brain that it has developed self-awareness. The ultimate goal of all AI research is, and will always be, to develop this form of AI, which is decades, if not centuries, away from becoming a reality. Not only would this form of AI be able to understand and evoke emotions in the people it interacts with, it would also have emotions, needs, beliefs, and perhaps goals of its own. And this is the kind of AI that skeptics of the technology are concerned about. Although the growth of self-awareness could accelerate our progress as a civilization, it could also lead to disaster, because a self-aware AI could develop goals such as self-preservation, outmaneuver any human mind, and devise sophisticated schemes that directly or indirectly spell the end of humanity.

An alternative system of classification, more commonly used in tech jargon, groups the technology into Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Superintelligence (ASI).

5. Artificial Narrow Intelligence (ANI)

This form of artificial intelligence encompasses all currently existing AI, including the most complex and capable systems ever devised. Artificial narrow intelligence refers to AI systems that can perform only a single task autonomously while emulating human skills. These machines have a very limited or narrow range of capabilities, since they can only do what they are programmed to do. Under the classification above, these systems correspond to all reactive and limited memory AI. Even the most complex AI that uses machine learning and deep learning to teach itself falls under ANI.

6. Artificial General Intelligence (AGI)

Artificial General Intelligence refers to an AI agent’s ability to learn, observe, understand, and function in the same way that a person does. These systems will be able to build various competencies on their own, as well as make linkages and generalizations across domains, significantly reducing training time. By mimicking our multi-functional capacities, AI systems will be just as capable as humans.

7. Artificial Superintelligence (ASI)

Artificial Superintelligence will very likely mark the pinnacle of AI research, since it will be by far the most capable form of intelligence on the planet. Beyond replicating the many facets of human intelligence, an ASI would be overwhelmingly better at everything it does, thanks to vastly greater memory, faster data processing and analysis, and superior decision-making. The emergence of AGI and then ASI is what is commonly called the singularity. While the prospect of having such powerful machines at our disposal may seem tempting, they could also pose a threat to our survival, or at the very least to our way of life.

APPLICATIONS OF AI

  1. Business: Businesses are adopting AI to improve their customer relationships and to handle work that people would normally do but that robotic process automation can finish faster. Websites also employ machine learning algorithms to determine how best to serve customers, and chatbots are integrated into sites to give clients rapid service, one of the many applications of Artificial Intelligence in the commercial sector. AI can increase sales, power predictive analysis, improve customer relationships by improving the overall experience, and create efficient and productive work procedures.
  2. Education: Artificial intelligence (AI) has transformed traditional teaching methods in the realm of education. Assignments can be graded using digital technology, and smart information can be delivered through online study materials, e-conferencing, and other methods. Additionally, admission portals like Leverage Edu are successfully leveraging AI to aid students in identifying the best-fit courses and universities based on their interests and career goals. Online courses, learning platforms, digital applications, intelligent AI tutors, online career counseling, and virtual facilitators, to name a few, are all examples of AI in education.
  3. Healthcare: Artificial Intelligence has transformed medical equipment, diagnosis, and research, among other things, and has proven to be an important and useful technology for the healthcare sector. Apart from leveraging computational technology to improve and speed up disease diagnosis, Artificial Intelligence has a wide range of applications, as complex algorithms can be used to simulate human cognition for the study and interpretation of complex medical and healthcare data. AI systems can handle larger volumes of data and analyze them to recommend the best treatment options. Many healthcare organizations have created digital applications, such as Lybrate and WebMD, where users may record their symptoms and seek medical help from doctors.
  4. Banking: Artificial intelligence applications in the financial sector are also rapidly expanding. Several banks across the world already use artificial intelligence to detect credit card fraud and to facilitate online banking. Almost every bank now offers its customers web apps for managing account activity and making online payments, while AI works behind the scenes to flag money-laundering and payment-fraud patterns. Well-known companies such as MasterCard and RBS WorldPay rely on AI and deep learning.
  5. Finance: Artificial intelligence is playing a crucial role in predicting future market patterns in the financial sector. Here the main objective of AI-based technology is to analyze the dynamics of stock trading. The finance industry is weaving adaptive intelligence, algorithmic trading, and machine learning into financial processes using various AI technologies, helping people make better judgments by predicting market values.
  6. Agriculture: Many challenges related to environmental change are hurting farmers’ livelihoods and agricultural productivity. To address these crises, a variety of AI-based equipment, such as robots and algorithms, is at the vanguard, helping farmers achieve sustainable agricultural production and develop more effective weed-control strategies. Blue River Technology’s ‘See & Spray’ machines are an excellent illustration of how Artificial Intelligence can be used effectively in agriculture: they use computer vision to distinguish weeds from cotton plants and spray herbicide only on the weeds, sparing the crop.
  7. Gaming: Artificial Intelligence applications in the game industry are also worth mentioning; AI has exploded in the gaming sector. Using machine learning and algorithms, a surreal gaming world can be brought to life for players, and through augmented and virtual reality, AI can turn even the most basic video games into dynamic, interesting, and interactive experiences with realistic worlds. DeepMind’s AI-based AlphaGo software is regarded as one of AI’s most productive and spectacular achievements: it defeated Lee Sedol, a world champion Go player. There are numerous other examples of artificial intelligence in the gaming business, including F.E.A.R. (First Encounter Assault Recon).
  8. Autonomous Vehicles: Smart cars are among the best-known examples of artificial intelligence in the automotive industry. Waymo, for example, has run extensive test drives to ensure that its product works; its vehicles combine range-finding sensors, cloud services, GPS, and cameras to generate and act on control signals for the vehicle. Tesla’s Autopilot-equipped cars, whose driver-assistance features are heavily based on artificial intelligence, are another prominent example.

Other applications include: Astronomy, Travel & Tourism, E-Commerce, Social Media, Data Security, Lifestyle, Navigation, Human Resources, and Chatbots.

LIMITATIONS OF AI

  1. Access to Data

Data is required for correctly training prediction or decision models. As many have stated, data has surpassed oil as one of the most sought-after commodities. It has taken on the status of a new currency. Huge amounts of data are currently in the hands of large corporations. These corporations have a built-in advantage, making it unfair to small startups who are just getting started with AI development. If nothing is done, the power dynamic between big tech and startups will become much more polarized.

2. Computing Time

Even though technical improvements have been rapid in recent years, we still have to work around hardware limits such as restricted processing resources (RAM and GPU cycles). Given the price of building such specialized and precise hardware, established enterprises have a major advantage here as well.

3. Cost

Mining, storing, and analyzing data is extremely energy- and hardware-intensive. Training the GPT-3 model was estimated to cost $4.6 million, and other estimates have put the training cost of a brain-like model significantly higher, at roughly $2.6 billion.

Furthermore, because skilled engineers in these sectors are now in short supply, hiring them will eat into these organizations’ profits. In this case, too, newer and smaller businesses are at a disadvantage.

4. Bias

The various ways biases can seep into data-modeling procedures (which feed AI) are worrisome enough, to say nothing of the authors’ own underlying, known or unrecognized, preconceptions. Biased AI is a far bigger problem than polluted data alone: bias can sneak into many phases of the deep-learning process, and our existing design techniques aren’t built to detect it.

As an MIT Technology Review article points out, our current approach to building AI algorithms isn’t geared toward uncovering and retrospectively correcting biases. Because most of these algorithms are tested only for performance, a lot of unwanted bias slips through: prejudiced data, a lack of social context, and questionable notions of fairness, to name a few.
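
One practical, if partial, way to surface such problems is to report model performance per group rather than as a single aggregate number. The sketch below assumes you already have predictions, ground-truth labels, and a sensitive attribute for each example; the arrays here are invented for illustration.

```python
import numpy as np

# Hypothetical evaluation data: model predictions, ground truth, and a
# sensitive attribute (e.g. an age bracket or region) for each example.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0])
group  = np.array(["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"])

print(f"overall accuracy: {(y_true == y_pred).mean():.2f}")

# A single aggregate number can hide very different error rates per group.
for g in np.unique(group):
    mask = group == g
    acc = (y_true[mask] == y_pred[mask]).mean()
    positive_rate = y_pred[mask].mean()          # how often the model predicts "1"
    print(f"group {g}: accuracy={acc:.2f}, predicted-positive rate={positive_rate:.2f}")
```

In this toy example the overall accuracy looks respectable while one group fares noticeably worse, which is exactly the kind of gap a performance-only test never reports.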

5. No Consensus on Safety, Ethics, and Privacy

There is considerable work to be done in determining the boundaries within which AI can be used. The current limitations underline the necessity of AI safety, which must be addressed quickly. Furthermore, AI’s skeptics disagree about the ethics of deploying it, not only in terms of how it makes privacy a forgotten idea but also philosophically.

We believe our intelligence is essentially human and one-of-a-kind, and giving up that exclusivity can feel counterintuitive. One of the most frequently asked questions is whether robots deserve human rights if they can achieve everything humans do and so become our equals. If so, how far do you go in defining the rights of these robots? There are no definitive answers here. Given the recent development of AI, the field of AI philosophy is still in its infancy, and I’m quite interested to see how it evolves.

6. Adversarial Attacks

Because AI isn’t human, it can’t always adapt to changes in the environment. Applying tape on the wrong side of the road, for example, can lead an autonomous car to swerve into the wrong lane and crash. The recording may go unnoticed or unreacted to a human. While the driverless vehicle may be significantly safer in normal circumstances, it is these outlier occurrences that we should be concerned about.

This inability to adapt reveals a serious security issue that has yet to be adequately addressed. While ‘fooling’ these data models can be amusing and harmless in some circumstances (such as tricking a classifier into labeling a banana as a toaster), it can be dangerous in others (such as in defense applications).
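
To make “fooling a data model” more concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one well-known way to craft adversarial inputs, written with PyTorch. The `model` (a trained image classifier), `x` (an input batch scaled to [0, 1]), and `y` (its true labels) are placeholders assumed to exist already, not references to any specific system.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """Perturb x by eps in the direction that most increases the model's loss."""
    model.eval()
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # A tiny, often imperceptible nudge per pixel can be enough to flip the prediction.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Usage (assuming model, x, y exist):
#   x_adv = fgsm_attack(model, x, y)
#   print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))
```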

It’s truly difficult to imagine what our world will be like when more advanced AI emerges. However, we still have a long way to go, as AI development is currently at a rudimentary stage compared to where it is expected to go. For those who are pessimistic about AI’s future, this suggests it’s a little early to be concerned about the singularity, and there’s still time to secure AI’s safety. The fact that we’ve only scratched the surface of AI research makes the future much more intriguing for those who are positive about the future of AI.

Sophia the robot

If you enjoyed this article, please share it with your friends and colleagues!
