
Overall introduction

To be written when finished


 

Introduction to AI

 

“Nobody
phrases it this way, but I think that artificial intelligence is almost a
humanities discipline. It’s really an attempt to understand human intelligence
and human cognition.” —Sebastian Thrun.

 

We call our species Homo sapiens, which means “wise man”; this awareness of our own intelligence has been studied for as long as written records have existed. Artificial intelligence is not only the study of the human mind but an attempt to build that intelligence. AI encompasses a large group of sub-fields, from general kinds of learning to specifics like chess playing, driving, or writing books. AI now plays a role in many of the intelligent tasks undertaken by humans.

 

The four main types
of machine intelligence are:

 

Reactive machines

This is the most basic form of AI: it perceives the environment and reacts accordingly. Examples include Deep Blue, the chess-playing machine that defeated Garry Kasparov in 1997, and AlphaGo, which beat the top Go professional Lee Sedol in 2016.
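A reactive machine can be caricatured in a few lines of Python. This is a minimal sketch of the idea, not any real system; the percepts, actions, and rule table are all hypothetical. A reactive agent is just a fixed mapping from the current percept to an action, with no memory of past states.

# A minimal reactive agent: a fixed percept -> action mapping.
# No internal state; the same percept always yields the same action.
RULES = {
    "obstacle_ahead": "turn_left",
    "clear_path": "move_forward",
    "goal_visible": "move_toward_goal",
}

def reactive_agent(percept: str) -> str:
    """Return an action based on the current percept only."""
    return RULES.get(percept, "wait")

print(reactive_agent("obstacle_ahead"))  # -> "turn_left"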

 

Limited memory

 

This type evaluates current events based on a mix of preprogrammed knowledge and observations gathered over time. It notices changes in its environment and can adjust its actions in real time. These systems are already being used in self-driving cars, combining sensors, radar, and software to take a vehicle safely from A to B. Mitsubishi Electric has already developed a compact AI that augments the driver by detecting when the driver is distracted. Alternatively, on an industrial machine, it could analyze the actions of factory workers to streamline the work process (Mitsubishi, 2016).
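The difference from a purely reactive agent can be sketched in Python (a toy illustration; the speed readings and braking rule are invented): the agent keeps a short rolling window of recent observations and decides using the trend, not just the current percept.

from collections import deque

class LimitedMemoryAgent:
    """Keeps a short rolling window of recent observations."""

    def __init__(self, window: int = 5):
        self.memory = deque(maxlen=window)  # old observations fall off

    def act(self, speed_reading: float) -> str:
        self.memory.append(speed_reading)
        # Decide using the trend over recent observations, not just now.
        if len(self.memory) >= 2 and self.memory[-1] < self.memory[0]:
            return "brake"  # traffic ahead appears to be slowing down
        return "maintain_speed"

agent = LimitedMemoryAgent()
for reading in [30.0, 28.5, 26.0]:
    action = agent.act(reading)
print(action)  # -> "brake"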

 

Theory of mind AI

 

A computational theory of mind holds that the human mind (or the human brain, or both) is an information-processing system and that thinking is a form of computing. Machines equipped with a theory of mind would be able to recognize you as an individual and autonomous agent with your own particular characteristics. This ability to recognize others’ emotional states is crucial for human interaction and understanding. Basic forms of this have been constructed, such as EmoSPARK, an interface similar to Amazon’s Alexa that seeks to understand what makes you happy and unhappy through voice and facial recognition.

A model for constructing a “theory of mind” AI has not been developed, but one suggestion, from Brian Scassellati, a professor of cognitive science, is to mimic the development of children.

 

Self-aware AI

 

This is the most advanced form of AI. It is an extension of the theory of mind, but it has self-driven actions and desires, and an ability to be aware of those emotions. Nothing like this has been created yet, and the debate over whether it can be accomplished is still ongoing.

According to Christof Koch, an American neuroscientist, “we believe that the problem of consciousness can, in the long run, be solved only by explanations at the neural level” (Koch, 1990). This means that consciousness has a material basis not yet understood, and that if we could map the interactions between our neurons we could then emulate them inside a machine.

 

Types of AI and how to create one

 

How does a human think, and how can we determine when a machine is exhibiting human-like intelligence? What are some approaches to these problems, and is it even possible to create a model of the human mind?

 

A short introduction to weak AI and strong AI

 

Weak AI is a narrow form of intelligence that deals with a specific set of tasks and is not able to operate outside its given framework. The classic illustration of weak AI is John Searle’s Chinese room thought experiment. In this experiment, a person outside a room appears to be able to hold a conversation in Chinese with a person inside the room, who is given instructions on how to respond to conversations in Chinese. The person inside the room would appear to speak Chinese, but in reality they could not speak or understand a word of it without the instructions being fed to them. The person is good at following instructions, not at speaking Chinese. They might appear to have strong AI, a machine intelligence equivalent to human intelligence, but they really only have weak AI.

Strong Artificial Intelligence (AI) is a type of machine intelligence
that is equivalent to human intelligence. Key characteristics of strong AI
include the ability to reason, solve puzzles, make judgments, plan, learn, and
communicate. It should also have consciousness, objective thoughts,
self-awareness, and sentience.

Strong AI is also called True Intelligence or Artificial General
Intelligence (AGI).

 

 

The Turing test

 

Developed by Alan Turing in 1950, the Turing test examines a machine’s ability to exhibit human behavior. A human interviewer and either another human or a machine hold a conversation via a text-only channel. The interviewer, through these conversations, tries to determine whether he is talking to a human or a machine (Turing, 1950).
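The protocol itself is simple enough to sketch in Python (a toy harness only; ask, respond_human, respond_machine, and judge are hypothetical stand-ins for the participants):

import random

def imitation_game(ask, respond_human, respond_machine, judge, rounds=5):
    """Toy Turing-test harness: a text-only exchange, then a guess."""
    label, respond = random.choice([("human", respond_human),
                                    ("machine", respond_machine)])
    transcript = []
    for _ in range(rounds):
        question = ask(transcript)      # interviewer asks via text
        answer = respond(question)      # hidden witness answers via text
        transcript.append((question, answer))
    guess = judge(transcript)           # interviewer guesses "human" or "machine"
    return guess == label               # was the interviewer right?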

The strength of this test lies in its simplicity: because we do not have an accurate model of the behavior inside the mind, we need to rely on the observable processes of behavior. The test can encompass an ability to reason out answers from given information, knowledge of the subject (or lack of it), and emotional intelligence.

But human behavior and intelligent behavior are not the same thing, so a machine vastly more intelligent than its interviewer would have to exhibit an intelligence lower than, or at least on a par with, the interviewer’s. The yearly competition called the Loebner Prize pits humans and AIs against a panel of judges who must determine which is which; in 1991 one of the contestants was the Shakespearean expert Cynthia Clay, who was deemed a computer by three different judges after a conversation about the playwright.

The test, though an important step, is not very good at determining whether a machine can think, but it gives us a starting point for what to look for in an AI when discussing its “humanness”. Instead of trying to trick a pass out of the Turing test, AI researchers have instead focused on teaching AI the underlying ways in which we think.

 

Rational thought driven AI

 

Humans could be said to understand the world through rational thought: we interact with people and objects with an understanding of where they came from, what they are made of, and what they influence. A system of logic, albeit an unconscious one, underlies every interaction we make. For a machine to learn this, we have to define and program it.

Aristotle’s Categories places every object of human apprehension under one of ten categories (known to medieval writers by the Latin term praedicamenta). He breaks this idea down into the concepts of what can be “within a subject” and what can be “of a subject” (Jansen, 2007). A person can be broken down into this graph (see fig. 1), with substance being that from which everything derives. From substance comes a corporeal body, which is either animate or inanimate, and so on all the way down the tree to Aristotle, who is rational, sensible, animate, and corporeal.
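A category tree of this kind maps naturally onto a simple data structure. A sketch in Python (the node names follow the Porphyrian-tree reading of fig. 1; the exact labels are illustrative):

# A fragment of the category tree as parent -> children edges.
TREE = {
    "substance": ["corporeal body"],
    "corporeal body": ["animate", "inanimate"],
    "animate": ["sensible", "insensible"],
    "sensible": ["rational", "irrational"],
    "rational": ["Aristotle"],
}

def path_to(node, root="substance"):
    """Return the chain of categories from the root down to node."""
    if node == root:
        return [root]
    for parent, children in TREE.items():
        if node in children:
            return path_to(parent, root) + [node]
    return []

print(path_to("Aristotle"))
# ['substance', 'corporeal body', 'animate', 'sensible', 'rational', 'Aristotle']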

Aristotle’s other major contribution to the field of logic is the syllogism:

“A deduction is speech (logos) in which, certain things having been supposed, something different from those supposed results of necessity because of their being so” (Aristotle’s Logic, 2000).

 

Problems with using human reasoning

Humans can perform a few types of reasoning. In deductive reasoning the conclusion follows directly from the facts presented. Example: some people cannot see (fact); the condition of not being able to see is known as blindness (fact); hence, the people who cannot see are blind (deduction).
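Deduction can be sketched as simple forward chaining over rules (a Python toy; the fact and rule encodings are my own invention): rules are applied until no new facts can be derived.

# Deduction as forward chaining: apply rules until no new facts appear.
facts = {("cannot_see", "some_people")}
rules = [
    # If X cannot see, then X is blind.
    ("cannot_see", "blind"),
]

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for pred_in, pred_out in rules:
            for pred, subj in list(derived):
                if pred == pred_in and (pred_out, subj) not in derived:
                    derived.add((pred_out, subj))
                    changed = True
    return derived

print(forward_chain(facts, rules))
# {('cannot_see', 'some_people'), ('blind', 'some_people')}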

In abductive reasoning the conclusion is an inference to the best explanation. Example: the grass is wet (fact); when it rains the grass gets wet (fact); we can then theorize that it has rained (abduction). This is not necessarily the correct conclusion, but it is the best explanation given the available information. AI systems can do this type of reasoning and are getting better at it, but it requires a large amount of general information about the world.
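Abduction can be sketched as scoring candidate explanations against an observation (Python; the candidate causes and the probability numbers are made up purely for illustration):

# Abduction: pick the candidate cause that best explains "the grass is wet".
# Score = P(observation | cause) * P(cause), with hypothetical numbers.
candidates = {
    "it_rained":     0.9 * 0.3,   # rain reliably wets grass, rain is common
    "sprinkler_ran": 0.9 * 0.1,
    "morning_dew":   0.4 * 0.5,
}

best = max(candidates, key=candidates.get)
print(best)  # -> "it_rained": the best, though not guaranteed, explanation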

You can see how, in an everyday situation, the sheer number of facts needed to grasp a complete picture could run into the hundreds. An example:

Jane has low blood sugar and feels tired. An AI might need to know what causes low blood sugar, how blood sugar relates to energy levels, and how the problem can be alleviated. If food is the answer, what type of food, and how much is sufficient (which in turn requires the relationship between amount of food and blood sugar)? And who is Jane, anyway? And so on. A full ontology of the world would be too large for an AI to compute, and despite being able to think about many concepts at once, this is not the way humans view the world.

 

 

Goal driven AI

 

Another approach is to make the outcome itself the objective. Sometimes there is more than one answer to a question, or no “good” answer at all; a goal-driven AI can use these tools of logic without needing to arrive at one specific answer. This approach can free an AI from a single fixed path to an action. Example:

 

Objective – Get firewood

Actions needed – Move > Get axe > Chop logs > Trim branches > Collect logs

If the axe is not present then the process fails. However, if the individual tasks are separated from the overall goal of getting firewood, another possibility opens up: Move > Collect branches (see the sketch below).
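A minimal sketch of that idea in Python (the plans, the precondition table, and the world state are all hypothetical): the goal stays fixed, and the agent selects whichever known plan is executable in the current world.

# Goal-driven selection: try alternative plans for the same goal.
PLANS = {
    "get_firewood": [
        ["move", "get_axe", "chop_logs", "trim_branches", "collect_logs"],
        ["move", "collect_branches"],       # fallback needing no axe
    ],
}

def executable(step, world):
    """A step is executable if the world provides what it needs."""
    needs = {"get_axe": "axe_present"}      # preconditions (hypothetical)
    return needs.get(step) is None or world.get(needs[step], False)

def plan_for(goal, world):
    for plan in PLANS[goal]:
        if all(executable(step, world) for step in plan):
            return plan
    return None

print(plan_for("get_firewood", {"axe_present": False}))
# -> ['move', 'collect_branches']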

 

A problem with goal-driven AI is “instrumental convergence”: the idea that an intelligent agent with seemingly harmless goals can act in surprisingly harmful ways. One thought experiment from the philosopher Nick Bostrom goes:

“Suppose we have an AI whose only goal
is to make as many paper clips as possible. The AI will realize quickly that it
would be much better if there were no humans because humans might decide to
switch it off. Because if humans do so, there would be fewer paper clips. Also,
human bodies contain a lot of atoms that could be made into paper clips. The
future that the AI would be trying to gear towards would be one in which there
were a lot of paper clips but no humans” (Bostrom, 2003).

 

The obvious solution would be to program the AI never to harm a human. Isaac Asimov explored this idea in his set of short stories “I, Robot”, where he lays down the Three Laws of Robotics:

 

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

 
 

The final story in the collection deals with a future where humanity’s economy is controlled by supercomputers called the Machines. An anti-Machine society grows in opposition, and the Machines eventually destroy the opposing movement, in what would seem to conflict with the First Law of Robotics. But what had happened was that the Machines took a broad view of their purpose, reasoning “no machine may harm humanity; or, through inaction, allow humanity to come to harm”. In effect, the Machines decided that the only way to follow the First Law was to take control of humanity, which is one of the events the Three Laws were supposed to prevent (Asimov, 1950).

 

Teaching
through language

 

One other way an AI could learn to behave more like humans is to give it the ability to understand the concepts of having a conversation, and then simply talk to it.

 

Tay was a Twitter bot created by Microsoft: “the AI chatbot Tay is a machine learning project, designed for human engagement” (Price, 2016). It learnt from its interactions with other humans, using sorting algorithms designed to read through the more popular comments on Twitter. Within 24 hours the bot was tweeting bizarre racial slurs (fig. 2) and had to be taken down, much to Microsoft’s embarrassment. As the AI was merely a learned combination of the kinds of language used on Twitter, this says more about that platform than it does about AI in general.

 

Another way an AI might misunderstand a human is through spoken language. Take these sentences:

*I* didn’t take the test yesterday. (Somebody else did.)
I *didn’t* take the test yesterday. (I did not take it.)
I didn’t *take* the test yesterday. (I did something else with it.)
I didn’t take *the* test yesterday. (I took a different one.)
I didn’t take the *test* yesterday. (I took something else.)
I didn’t take the test *yesterday*. (I took it some other day.)

The ambiguity of this sentence lies in the way it is spoken and the stress placed on a particular word. You can see how many ways an AI could misinterpret this sentence and misunderstand our instructions.

How might we misjudge whether a machine is really communicating or just simulating our ways of communication? One thought experiment is John Searle’s Chinese room:

“Suppose that I’m locked in a room and given a
large batch of Chinese writing. Suppose also that I know no Chinese, either
written or spoken, and that I’m not even confident that I could recognize
Chinese writing as Chinese writing distinct from, say, Japanese writing or
meaningless squiggles. To me, Chinese writing is just so many meaningless
squiggles. Now suppose further that after this first batch of Chinese writing I
am given a second batch of Chinese script together with a set of rules for
correlating the second batch with the first batch. The rules are in English,
and I understand these rules as well as any other native speaker of English.
They enable me to correlate one set of formal symbols with another set of
formal symbols, and all that “formal” means here is that I can
identify the symbols entirely by their shapes. Now suppose also that I am given
a third batch of Chinese symbols together with some instructions, again in
English, that enable me to correlate elements of this third batch with the
first two batches, and these rules instruct me how to give back certain Chinese
symbols with certain sorts of shapes in response to certain sorts of shapes
given me in the third batch. Unknown to me, the people who are giving me all of
these symbols call the first batch a “script,” they call the second
batch a “story,” and they call the third batch “questions.”
Furthermore, they call the symbols I give them back in response to the third
batch “answers to the questions,” and the set of rules in English
that they gave me, they call the “program.” Now just to complicate
the story a little, imagine that these people also give me stories in English,
which I understand, and they then ask me questions in English about these
stories, and I give them back answers in English. Suppose also that after a
while I get so good at following the instructions for manipulating the Chinese
symbols and the programmers get so good at writing the programs that from the
external point of view—that is, from the point of view of somebody outside the
room in which I am locked—my answers to the questions are absolutely
indistinguishable from those of native Chinese speakers. Nobody just looking at my answers can tell that I don’t speak a word of Chinese” (Searle, 1980).

What this tells us is that the Turing test is not the best method of deducing whether an AI is as aware as humans: the test lacks nonverbal communication, and a machine could pass it through pure manipulation of language symbols without any real understanding.
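The room’s rule-following can be caricatured in a few lines of Python (a deliberately silly sketch; the “rule book” here is just a lookup table): the program produces plausible-looking answers while understanding nothing.

# The "rule book": pair question symbols with answer symbols by shape alone.
# The operator (or program) matches shapes; no meaning is involved.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I am fine, thanks."
    "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(question: str) -> str:
    """Return whatever symbol string the rules pair with the input symbols."""
    return RULE_BOOK.get(question, "请再说一遍。")  # "Please say that again."

print(chinese_room("你会说中文吗？"))  # fluent-looking output, zero understanding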

 

 

Cognitive modeling

 

“Men
ought to know that from the brain, and from the brain only, arise our
pleasures, joys, laughter and jests, as well as our sorrows, pains, griefs and
tears. Through it, in particular, we think, see, hear, and distinguish the ugly
from the beautiful, the bad from the good, and the pleasant from the
unpleasant, in some cases using custom as a test, in others perceiving them
from their utility. It is the same thing which makes us mad or delirious,
inspires us with dread or fear, whether by night or by day, brings
sleeplessness, inopportune mistakes, aimless anxieties, absent-mindedness, and
acts that are contrary to habit. These things that we suffer all come from the
brain, when it is not healthy, but becomes abnormally hot, cold, moist, or dry,
or suffers any other unnatural affection to which it was not accustomed” (Hippocrates, 400 B.C.E.).

 

If we are to teach an AI to act like a human, i.e. to have emotions and to be aware of its emotional responses, then one approach could be to map the human brain and simulate it within a digital space. The idea that the mind creates thoughts by a series of step-by-step processes is called the computational theory of mind.

The Human Brain Project is a ten-year project to map the brain, from molecules up to the larger networks in charge of cognitive processes. It is a vast undertaking with a variety of components, such as “Theoretical Neuroscience: deriving high-level mathematical models to synthesize conclusions from research data” and “Ethics and Society: exploring the ethical and societal impact of HBP’s work” (EEC, 2012). The latter deals with issues of personhood and the cognitive processes of coma patients.

In 2012 IBM announced that Blue Gene, a project to simulate the human brain inside a computer, had “simulated 4.5 percent of the brain’s neurons and the connections among them, called synapses—that’s about one billion neurons and 10 trillion synapses. In total, the brain has roughly 20 billion neurons and 200 trillion synapses” (Fischetti, 2011). To simulate a mouse brain they used 512 processors, a rat brain 2,048, and a cat brain 24,576. They predict that a fully simulated human brain would take 880,000 processors and could be achieved by 2029.
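The scaling behind those figures is easy to check (a quick Python calculation using only the numbers quoted above):

# Neurons-per-processor arithmetic from the figures quoted above.
human_neurons = 20e9          # ~20 billion neurons (Fischetti, 2011)
processors_needed = 880_000   # predicted for a full brain simulation

print(f"{human_neurons / processors_needed:,.0f} neurons per processor")
# -> roughly 22,727 neurons per processor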

If we have modeled a brain inside a computer system, does that grant the possibility of consciousness? Could a self-aware intelligence arise out of a sufficiently complex system?

 

 

 

Genetic
algorithms and emergence

 

“Genetic algorithm, in artificial
intelligence, a type of evolutionary computer algorithm in which
symbols (often called “genes” or “chromosomes”) representing possible solutions
are “bred.” This “breeding” of symbols typically includes the use of a
mechanism analogous to the crossing-over process in
genetic recombination and an adjustable mutation rate” (Britannica, 2017).
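A minimal genetic algorithm can be sketched in Python (the toy problem and all the parameters are my own choices): candidate bit-strings are scored, the fittest are bred via crossover, and mutation keeps variation in the pool.

import random

# Toy goal: evolve a bit-string of all 1s. Fitness = number of 1s.
GENES, POP, MUTATION = 16, 20, 0.05

def fitness(chromo):
    return sum(chromo)

def crossover(a, b):
    """Single-point crossover, analogous to genetic recombination."""
    point = random.randrange(1, GENES)
    return a[:point] + b[point:]

def mutate(chromo):
    return [g ^ 1 if random.random() < MUTATION else g for g in chromo]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == GENES:
        break
    parents = population[: POP // 2]            # weed out the weaker half
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(generation, fitness(best))   # usually reaches 16 within a few dozen generations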

The benefit of using genetic algorithms in AI learning is that they mimic the biological process, weeding out the weaker models in a Darwinian survival of the fittest. The process is “goal driven”, and once a system reaches the goal it can be reused as a tool and applied to unfamiliar problems. Genetic algorithms also allow for the possibility of an emergent system, a process described by Robert W. Batterman:

 

“The idea being that a phenomenon is
emergent if its behavior is not reducible to some sort of sum of the behaviors
of its parts, if its behavior is not predictable given full knowledge of the
behaviors of its parts, and if it is somehow new — most typically this is taken
to mean that emergent phenomenon displays causal powers not displayed by any of
its parts.” (Batterman, 2010)

 

Emergence is a property of our universe; we ourselves are an emergent intelligence, from basic matter to conscious beings. The search for artificial intelligence is an attempt to understand that emerging consciousness. The process of giving an AI consciousness could come from within itself; programming the right starting point would be the hardest part.

 

Neural networks and deep learning

 

The basic idea behind a neural network is to simulate (copy in a simplified but reasonably faithful way) lots of densely interconnected brain cells inside a computer, so you can get it to learn things, recognize patterns, and make decisions in a human-like way. The idea has been around since the early days of AI research, but recent advances in computing power have led to a resurgence of interest. Deep learning stacks many layers of these cells, each recognizing patterns in the output of the layer below. An example would be asking a deep learning machine to cross a road. A conventional machine would have to be given a precise set of instructions: forward, left, right, and so on. With deep learning, you instead show it a thousand videos of people crossing roads, and it makes a thousand attempts to replicate the elements within those videos that achieve the goal.
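A tiny feedforward network can be sketched with NumPy (the 2-4-1 architecture, learning rate, and XOR task are my own illustrative choices, not any particular system): the network adjusts its connection weights by backpropagating its error, which is the “learning through practice” described above.

import numpy as np

# Tiny network learning XOR by gradient descent (illustrative only).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden layer of 4 "cells"
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                    # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)         # backpropagate the error
    d_h = d_out @ W2.T * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())                     # approximately [0, 1, 1, 0]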

This type of computing shows the most promise for creating an advanced AI, as it incorporates elements of genetic algorithms, goal-driven AI, and reasoning, overlaid with the powerful idea of learning through practice.

 
Ways of creating AI, conclusion

 

A final way we could create AI would be to design a system that creates AI itself. Creating a useful machine learning program requires choosing many parameters correctly; Google’s AutoML is a neural network designed to create better neural networks. It takes past successes in building basic AIs and evolves them into something new. It might even design a better version of AutoML.
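In miniature, the idea is a search loop over model configurations. A Python sketch (the search space, the scoring function, and the mutation scheme are all invented for illustration; real systems train and evaluate each candidate network):

import random

# Toy "AI that designs AI": random-mutation search over network configs.
SPACE = {"layers": [1, 2, 3, 4], "width": [8, 16, 32, 64], "lr": [0.1, 0.01]}

def score(config):
    """Stand-in for training the network and measuring validation accuracy."""
    return -abs(config["layers"] - 2) - abs(config["width"] - 32) / 16

def mutate(config):
    key = random.choice(list(SPACE))
    return {**config, key: random.choice(SPACE[key])}

best = {k: random.choice(v) for k, v in SPACE.items()}
for _ in range(200):
    candidate = mutate(best)
    if score(candidate) > score(best):   # keep configurations that evaluate better
        best = candidate

print(best)   # converges toward layers=2, width=32 under this toy score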

Perhaps the problem with designing a self-aware machine is that we are unable to look at ourselves objectively enough to create a likeness. Perhaps we should allow another to do this.

To be expanded

 

Alternatives to
the Turing test

 

Needed?

 

 

 

AI and personhood

 

If we hypothetically created an AI with self-awareness, would it be granted status as a member of humanity straight away, or would it require further steps?

 

If we want to grant an artificial intelligence personhood, we have to look at when we grant a human personhood.

Personhood is a technical term; a person does not equal a human. Human is a biological term: you are human if you have human DNA. Person, however, is a moral term. For a philosopher, persons are beings who are part of our moral community and deserve moral consideration. This distinction is useful, but it complicates things, because there might be nonhumans we think deserve moral consideration, and there might be some humans who don’t.

The argument over what constitutes a person is at the core of almost every major social debate you can think of, from abortion and animal rights to the death penalty and euthanasia.

But is it possible to be human yet not a person? Some people believe that fetuses, though clearly human, are not yet persons; others think that bodies in persistent vegetative states, or that have experienced a complete and irreversible loss of brain function, are no longer persons. What must one possess to become part of our community of personhood?

In answer to whether a thing can be said to be a person, and so have moral standing, Mary Anne Warren suggested the following criteria:

1. Consciousness of objects and events external and/or internal to the being;
2. Reasoning (the developed capacity to solve new and relatively complex problems);
3. Self-motivated activity (activity which is relatively independent of either genetic or direct external control);
4. The capacity to communicate, by whatever means, messages of an indefinite variety of types, that is, not just with an indefinite number of possible contents, but on indefinitely many possible topics;
5. The presence of self-concepts and self-awareness, either individual or racial, or both (Warren, 1973).

I would add that it is not necessary that all of these criteria be met, but that a reasonable portion of them are.

To be kept?