
Overall introduction: to be written when finished.

Introduction to AI

"Nobody phrases it this way, but I think that artificial intelligence is almost a humanities discipline. It's really an attempt to understand human intelligence and human cognition." —Sebastian Thrun

We call our species Homo sapiens, which means "wise man", and this awareness of our own intelligence has been studied for as long as written records have existed.

Artificial intelligence is not only the study of the human mind but an attempt to build that intelligence. AI encompasses a large group of sub-fields, from general types of learning to specifics like chess playing, driving, or writing books, and it is applied to most of the intelligent tasks undertaken by humans. The four main types of machine intelligence are:

Reactive machines. The most basic form of AI: it perceives the environment and reacts accordingly.


Examples include Deep Blue, the chess-playing machine that first beat Garry Kasparov in a game in 1996 and won their full match in 1997, and AlphaGo, which beat the reigning Go champion in 2016.

Limited memory. This type considers current events based on a mix of preprogrammed memory and observations carried out over time. It notices changes in its environment and can adjust its actions in real time. These systems are already being used in self-driving cars, combining sensors, radar, and software to take a vehicle safely from A to B. Mitsubishi Electric has already developed a compact AI to augment the driver by detecting when the driver is distracted. Alternatively, on an industrial machine, it could analyze the actions of factory workers, streamlining the work process (Mitsubishi, 2016).
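To make the two types concrete, here is a minimal sketch; the rule sets, percept names, and thresholds are invented for illustration, not taken from any real system. A reactive agent maps each percept directly to an action, while a limited-memory agent also consults a short window of recent observations:

```python
from collections import deque

# Purely reactive agent: each percept maps directly to an action.
REACTIVE_RULES = {
    "obstacle_ahead": "brake",
    "clear_road": "accelerate",
}

def reactive_agent(percept):
    """Pick an action from the current percept alone."""
    return REACTIVE_RULES.get(percept, "do_nothing")

class LimitedMemoryAgent:
    """Keeps a short window of recent percepts and reacts to trends."""

    def __init__(self, window=3):
        self.memory = deque(maxlen=window)

    def act(self, percept):
        self.memory.append(percept)
        # React to a pattern observed over time, not just the instant reading.
        if list(self.memory).count("obstacle_ahead") >= 2:
            return "brake_hard"          # the obstacle has persisted
        return reactive_agent(percept)   # otherwise behave reactively

agent = LimitedMemoryAgent()
for p in ["clear_road", "obstacle_ahead", "obstacle_ahead"]:
    print(p, "->", agent.act(p))
```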

Theory of mind AI. A computational theory of mind holds that the human mind or the human brain (or both) is a thought-processing system and that thinking is a form of computing. Machines equipped with a theory of mind would be able to recognize you as an individual and autonomous agent with characteristics particular to you. This ability to recognize others' emotional states is crucial for human interaction and understanding. Basic forms of these have been constructed, such as EmoSPARK, an interface similar to Amazon's Alexa that seeks to understand what makes you happy and unhappy through voice and facial recognition. A model for constructing a "theory of mind" AI has not been developed, but one suggestion, from the cognitive science professor Brian Scassellati, would be to mimic the development of children.

Self-aware AI. This is the most advanced form of AI. It is an extension of the theory of mind, but it has self-driven actions and desires, and an ability to be aware of those emotions. Nothing like this has been created yet, and the debate over whether it can be accomplished is still going on. According to the American neuroscientist Christof Koch, "we believe that the problem of consciousness can, in the long run, be solved only by explanations at the neural level" (Koch, 1990).

This means that consciousness has a material basis not yet understood, and if we could map the interactions between our neurons we could then emulate them inside a machine.

Types of AI and how to create one

How does a human think, and how can we determine when a machine is exhibiting human-like intelligence? What are some approaches to these problems, and is it even possible to create a model of the human mind?

Short introduction on weak AI and strong AI

Weak AI is a narrow form of intelligence that deals with a specific set of tasks and is not able to operate outside its given framework.

The classic illustration of weak AI is John Searle's Chinese room thought experiment. In this experiment, a person outside a room appears to be able to hold a conversation in Chinese with a person inside the room, who is given instructions on how to respond to conversations in Chinese. The person inside the room would appear to speak Chinese, but in reality they could not speak or understand a word of it absent the instructions they are being fed. The person is good at following instructions, not at speaking Chinese.

They might appear to have strong AI, a machine intelligence equivalent to human intelligence, but they really only have weak AI. Strong artificial intelligence is a type of machine intelligence that is equivalent to human intelligence. Key characteristics of strong AI include the ability to reason, solve puzzles, make judgments, plan, learn, and communicate. It should also have consciousness, objective thought, self-awareness, and sentience. Strong AI is also called True Intelligence or Artificial General Intelligence (AGI).

The Turing test

Developed by Alan Turing in 1950, the Turing test examines a machine's ability to exhibit human behavior. A human interviewer and either another human or a machine hold a conversation via a text-only channel.

The interviewer, through these conversations, tries to determine whether he is talking to a human or a machine (Turing, 1950). The strength of this test lies in its simplicity: because we do not have an accurate model of behavior inside the mind, we need to rely on the observable processes of behavior. The test can encompass an ability to reason the answer from given information, knowledge of the subject (or lack of it), and emotional intelligence. But human behavior and intelligent behavior are not the same thing, so a machine vastly more intelligent than its interviewer would have to exhibit intelligence lower than, or at least on par with, the interviewer's.

The Loebner Prize is a yearly competition that pits humans and AI against a panel of judges who must determine which is which; in 1991 one of the contestants was the Shakespearean expert Cynthia Clay, who was deemed a computer by three different judges after a conversation about the playwright. The test, though an important step, is not very good at determining whether a machine can think, but it gives us a beginning for what to look for in an AI when discussing its "humanness". Instead of trying to trick a pass out of the Turing test, AI researchers have instead focused on teaching AI the underlying ways in which we think.

Rational thought driven AI

Humans could be said to understand the world through rational thought: we interact with people and objects with an understanding of where they came from, what they are made of, and what they influence. A system of logic underlies every interaction we make, albeit unconsciously. For a machine to learn this, we have to define and program it. Aristotle's Categories places every object of human apprehension under one of ten categories (known to medieval writers by the Latin term praedicamenta). He breaks this idea down into the concepts of what can be "within a subject" and what can be "of a subject" (Jansen, 2007).

A person could be broken down into this kind of graph (see fig. 1), with substance being that from which everything derives. From substance would come a corporeal body, which is either animate or inanimate, and so on down the tree to Aristotle, who is rational, sensible, animate, and corporeal.
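As a rough sketch of how such a classification could be represented for a machine (the tree below is a simplified, hypothetical rendering of the hierarchy in fig. 1, not a complete model of the Categories):

```python
# A simplified category tree: each entry maps a category to its parent.
CATEGORY_TREE = {
    "corporeal body": "substance",
    "animate": "corporeal body",
    "inanimate": "corporeal body",
    "sensible": "animate",
    "rational": "sensible",
    "Aristotle": "rational",
}

def ancestry(thing):
    """Walk up the tree, collecting every category the thing falls under."""
    chain = []
    while thing in CATEGORY_TREE:
        thing = CATEGORY_TREE[thing]
        chain.append(thing)
    return chain

print(ancestry("Aristotle"))
# ['rational', 'sensible', 'animate', 'corporeal body', 'substance']
```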

Aristotle's other major contribution to the field of logic is his syllogisms: "A deduction is speech (logos) in which, certain things having been supposed, something different from those supposed results of necessity because of their being so" (Aristotle's Logic, 2000).

Problems with using human reasoning

Humans can perform several types of reasoning. In deductive reasoning the conclusion is a direct result of the facts presented. Example: some people cannot see (fact). The condition of not being able to see is known as blindness (fact).

Hence, the people who cannot see are blind (deduction). In abductive reasoning the conclusion is an inference to the best explanation. Example: the grass is wet (fact); when it rains the grass gets wet (fact); we can then theorize that it has rained (abduction). This is not necessarily the proper conclusion, but it is the best explanation given the available information. AI can do this type of reasoning, and is getting better at it, but it requires a large amount of general information about the world. You can see how, in an everyday situation, the sheer number of facts needed to grasp a complete picture could run into the hundreds.

An example could be: Jane has low blood sugar and feels tired. An AI might need to know what causes low blood sugar, how blood relates to energy levels, and how the problem can be alleviated. If food is the answer: what type of food, and how much is sufficient (which requires knowing the relationship between the amount of food and blood sugar)? And who is Jane, anyway? And so on. A full ontology of the world would be too large for an AI to compute, and despite our being able to think about many concepts at once, this is not the way humans view the world.
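A minimal sketch of the two kinds of inference over a toy rule base (the facts and rules are invented for illustration, and stand in for the hundreds a real system would need):

```python
# Rules map a cause to its observable effect: (cause, effect).
RULES = [
    ("rain", "grass_wet"),
    ("sprinkler_on", "grass_wet"),
    ("cannot_see", "blind"),
]

def deduce(facts):
    """Deduction: from a known cause, conclude its effect with certainty."""
    return {effect for cause, effect in RULES if cause in facts}

def abduce(observation):
    """Abduction: from an observed effect, list the causes that would explain
    it. Each is only a candidate explanation, not a certainty."""
    return {cause for cause, effect in RULES if effect == observation}

print(deduce({"cannot_see"}))   # {'blind'}
print(abduce("grass_wet"))      # {'rain', 'sprinkler_on'}
```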

Goal driven AI

Another approach is to make the outcome itself the objective. Sometimes there is more than one answer to a question, or no "good" answer at all; a goal-driven AI could use these tools of logic without the need to arrive at one specific answer. This approach could free an AI from a fixed path to an action. Example:

Objective: get firewood.
Actions needed: Movement > Get axe > Chop logs > Trim branches > Collect logs.

If the axe is not present, the process does not work; however, if the tasks are separated from the fixed process of getting firewood, then another possibility opens up: Movement > Collect branches.
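A minimal sketch of this goal-driven idea, assuming an invented world state and action set (the names and preconditions are illustrative): instead of executing one hard-coded sequence, the agent searches for any chain of actions that reaches the goal.

```python
# Each action: (name, preconditions, effects). All names are invented.
ACTIONS = [
    ("get_axe",          {"axe_nearby"},       {"have_axe"}),
    ("chop_logs",        {"have_axe"},         {"have_logs"}),
    ("collect_logs",     {"have_logs"},        {"have_firewood"}),
    ("collect_branches", {"branches_nearby"},  {"have_firewood"}),
]

def plan(state, goal, used=()):
    """Depth-first search for any action sequence that reaches the goal."""
    if goal <= state:
        return list(used)
    for name, pre, effects in ACTIONS:
        if name not in used and pre <= state:
            result = plan(state | effects, goal, used + (name,))
            if result is not None:
                return result
    return None

# No axe in this world, but branches are nearby: an alternative path exists.
print(plan({"branches_nearby"}, {"have_firewood"}))
# ['collect_branches']
print(plan({"axe_nearby"}, {"have_firewood"}))
# ['get_axe', 'chop_logs', 'collect_logs']
```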

A problem with goal-driven AI is "instrumental convergence": the idea that an intelligent agent with seemingly harmless goals could act in surprisingly harmful ways. One thought experiment from the philosopher Nick Bostrom goes: "Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans" (Bostrom, 2003).

The obvious solution would be to program the AI never to harm a human. Isaac Asimov explored this idea in his set of short stories "I, Robot", in which he lays down the three laws of robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

The final chapter of these stories deals with a future where humanity's economy is controlled by a supercomputer called the Machine. An anti-Machine society grows in opposition, and the Machine eventually destroys the opposing movement, in what would seem a violation of the First Law of robotics. But what had happened was that the Machine took a broad view of the law's meaning, reasoning "No machine may harm humanity; or, through inaction, allow humanity to come to harm".

In effect, the Machines have decided that the only way to follow the First Law is to take control of humanity, which is one of the events that the three Laws are supposed to prevent (Asimov, 1950).

Teaching through language

Another way an AI could learn to behave more like humans is to give it the ability to understand the concepts of having a conversation, then simply talk to it. Tay was a Twitter bot created by the Microsoft Corporation: "The AI chatbot Tay is a machine learning project, designed for human engagement" (Price, 2016). It learnt from its interactions with humans and from sorting algorithms designed to read through the more popular comments on Twitter. Within 24 hours the bot was tweeting bizarre racial slurs (fig. 2) and had to be taken down, much to the embarrassment of Microsoft.

As the AI was merely a learned combination of the kind of language used on Twitter, it says more about that platform than it does about AI in general.

Another way an AI might misunderstand a human is through spoken language. Take these sentences:

I didn't take the test yesterday. (Somebody else did.)
I didn't take the test yesterday. (I did not take it.)
I didn't take the test yesterday. (I did something else with it.)
I didn't take the test yesterday. (I took a different one.)
I didn't take the test yesterday. (I took something else.)
I didn't take the test yesterday. (I took it some other day.)

The ambiguity of this sentence lies in the way it is spoken and the stress put on a particular word. You can see how many ways an AI could misinterpret this sentence and misunderstand our instructions.

How might we misjudge whether a machine is really communicating or just simulating our ways of communication? One thought experiment is John Searle's Chinese room, in fuller form:

"Suppose that I'm locked in a room and given a large batch of Chinese writing. Suppose also that I know no Chinese, either written or spoken, and that I'm not even confident that I could recognize Chinese writing as Chinese writing distinct from, say, Japanese writing or meaningless squiggles. To me, Chinese writing is just so many meaningless squiggles.

Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that 'formal' means here is that I can identify the symbols entirely by their shapes. Now suppose also that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain Chinese symbols with certain sorts of shapes in response to certain sorts of shapes given me in the third batch. Unknown to me, the people who are giving me all of these symbols call the first batch a 'script,' they call the second batch a 'story,' and they call the third batch 'questions.' Furthermore, they call the symbols I give them back in response to the third batch 'answers to the questions,' and the set of rules in English that they gave me, they call the 'program.'

Now just to complicate the story a little, imagine that these people also give me stories in English, which I understand, and they then ask me questions in English about these stories, and I give them back answers in English. Suppose also that after a while I get so good at following the instructions for manipulating the Chinese symbols and the programmers get so good at writing the programs that from the external point of view—that is, from the point of view of somebody outside the room in which I am locked—my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. Nobody just looking at my answers can tell that I don't speak a word of Chinese."

What this tells us is that the Turing test is not the best method of deducing whether an AI is as aware as humans, because the test lacks nonverbal communication.

Cognitive modeling

"Men ought to know that from the brain, and from the brain only, arise our pleasures, joys, laughter and jests, as well as our sorrows, pains, griefs and tears. Through it, in particular, we think, see, hear, and distinguish the ugly from the beautiful, the bad from the good, and the pleasant from the unpleasant, in some cases using custom as a test, in others perceiving them from their utility.

It is the same thing which makes us mad or delirious, inspires us with dread or fear, whether by night or by day, brings sleeplessness, inopportune mistakes, aimless anxieties, absent-mindedness, and acts that are contrary to habit. These things that we suffer all come from the brain, when it is not healthy, but becomes abnormally hot, cold, moist, or dry, or suffers any other unnatural affection to which it was not accustomed" (Hippocrates, 400 B.C.E.).

If we are to teach an AI to act like a human, i.e. to have emotions and to be aware of its emotional responses, then one approach could be to map the human brain and simulate it within a digital space. The idea that the mind creates thoughts by a series of step-by-step processes is called the computational theory of the mind. The Human Brain Project is a ten-year project to map the brain, from molecules up to the larger networks in charge of cognitive processes.

It is a vast undertaking with a variety of different components, such as "Theoretical Neuroscience: deriving high-level mathematical models to synthesize conclusions from research data" and "Ethics and Society: exploring the ethical and societal impact of HBP's work" (EEC, 2012). The latter deals with issues of personhood and the cognitive processes of coma patients.

In 2012 IBM announced that Blue Gene (a project to simulate the human mind inside a computer) had "simulated 4.5 percent of the brain's neurons and the connections among them called synapses—that's about one billion neurons and 10 trillion synapses. In total, the brain has roughly 20 billion neurons and 200 trillion synapses" (Fischetti, 2011). To simulate a mouse brain they used 512 processors, a rat brain 2,048, and a cat brain 24,576. They predict a fully simulated human brain would take 880,000 processors and could be achieved by 2029.
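As a rough sanity check on those figures (assuming, purely for illustration, that processor count scales linearly with the simulated fraction of the brain, and that the cat-scale run corresponds to the reported 4.5 percent; both are simplifications):

```python
# Reported figures (Fischetti, 2011): about 1 billion neurons simulated,
# out of roughly 20 billion in the whole brain.
total_neurons = 20e9
simulated_fraction = 1e9 / total_neurons
print(f"simulated fraction: {simulated_fraction:.1%}")  # 5.0%, close to 4.5%

# Naive linear extrapolation from the cat-scale run to a full brain.
cat_processors = 24_576
cat_fraction = 0.045  # ASSUMPTION: the cat run covered ~4.5% of brain scale
full_brain_estimate = cat_processors / cat_fraction
print(f"naive full-brain estimate: {full_brain_estimate:,.0f} processors")
# ~546,000 — the same order of magnitude as the 880,000 predicted.
```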

If we have modeled a brain inside a computer system, does that grant the possibility of consciousness? Could a self-aware intelligence arise out of a sufficiently complex system?

Genetic algorithms and emergence

"Genetic algorithm, in artificial intelligence, a type of evolutionary computer algorithm in which symbols (often called 'genes' or 'chromosomes') representing possible solutions are 'bred.' This 'breeding' of symbols typically includes the use of a mechanism analogous to the crossing-over process in genetic recombination and an adjustable mutation rate" (Britannica, 2017). The benefit of using genetic algorithms in AI learning is that they mimic the biological process, weeding out weaker models in a Darwinian survival of the fittest. The process is goal-driven, and once a system reaches its goal it can be used as a tool and applied to unknown problems.
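A minimal sketch of a genetic algorithm on an invented toy problem (evolving a bit string toward an all-ones target; the population size and mutation rate are arbitrary choices):

```python
import random

TARGET_LEN = 12
POP_SIZE = 30
MUTATION_RATE = 0.05

def fitness(genes):
    """Count the ones: the 'goal' is an all-ones chromosome."""
    return sum(genes)

def crossover(a, b):
    """Single-point crossover, analogous to genetic recombination."""
    point = random.randrange(1, TARGET_LEN)
    return a[:point] + b[point:]

def mutate(genes):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in genes]

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(POP_SIZE)]

generation = 0
while max(fitness(g) for g in population) < TARGET_LEN:
    # Selection: keep the fitter half, then breed it to refill the population.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    population = parents + [mutate(crossover(*random.sample(parents, 2)))
                            for _ in range(POP_SIZE - len(parents))]
    generation += 1

print(f"solved in {generation} generations")
```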

Genetic algorithms allow for the possibility of an emergent system, a process described by Robert W. Batterman: "The idea being that a phenomenon is emergent if its behavior is not reducible to some sort of sum of the behaviors of its parts, if its behavior is not predictable given full knowledge of the behaviors of its parts, and if it is somehow new — most typically this is taken to mean that emergent phenomenon displays causal powers not displayed by any of its parts" (Batterman, 2010).

Emergence is a property of our universe: we are an emergent intelligence, from basic matter to conscious beings, and the search for artificial intelligence is an attempt to understand that emerging consciousness. An AI's consciousness could emerge from within itself; programming the right starting point would be the hardest part.

Neural networks and deep learning

The basic idea behind a neural network is to simulate (copy in a simplified but reasonably faithful way) lots of densely interconnected brain cells inside a computer, so you can get it to learn things, recognize patterns, and make decisions in a humanlike way.

The idea has been around since the early days of AI research, but recent advances in computing power have led to a resurgence in interest. Deep learning uses multiple layers of cells to recognize patterns and, through a learning process, attempts to replicate them. An example would be to ask a deep learning machine to cross a road. A conventional machine would have to be given a precise set of instructions: forward, left, right, and so on.

But with deep learning, you show it a thousand videos of people crossing roads and it tries a thousand times to replicate some element within the videos to achieve its goal. This type of computing has the most promise for creating an advanced AI, as it incorporates elements of genetic algorithms, goal-driven AI, and reasoning, overlaid with a powerful idea: learning through practice.
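A minimal sketch of the underlying unit: a single artificial neuron learning the logical AND function through repeated practice (the learning rate and epoch count are arbitrary; real deep learning stacks many layers of such units):

```python
# A single artificial neuron trained with the perceptron rule.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND
weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    """Fire (output 1) if the weighted sum of inputs crosses the threshold."""
    total = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if total > 0 else 0

for epoch in range(20):  # repeated practice over the same examples
    for x, target in samples:
        error = target - predict(x)
        # Nudge each weight toward the correct answer.
        weights = [w + learning_rate * error * xi for w, xi in zip(weights, x)]
        bias += learning_rate * error

print([predict(x) for x, _ in samples])  # expected: [0, 0, 0, 1]
```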

Ways of creating AI: conclusion

A final way we could create AI would be to design a system that creates AI itself. Creating a useful machine learning program requires choosing many parameters correctly, and Google's AutoML is a neural network designed to create better neural networks. It takes past achievements in creating basic AI and evolves them into something new. Maybe it could even design a better version of AutoML.

Perhaps the problem with designing a self-aware machine is that we are unable to look at ourselves objectively enough to create a likeness. Perhaps we should allow another to do this. (To be expanded.)

Alternatives to the Turing test (Needed?)

AI and personhood

If we hypothetically created an AI with self-awareness, would it be granted status as a member of humanity straight away, or would it require further steps? If we want to grant an artificial intelligence personhood, we have to look at when we grant a human personhood. Personhood is a technical term; a person does not equal a human. Human is a biological term: you are human if you have human DNA. Person, by contrast, is a moral term.

For philosophers, persons are beings who are part of our moral community and deserve moral consideration. This distinction is useful, but it complicates things, because there might be nonhumans that we think deserve moral consideration, and there might be some humans who do not. The question of what constitutes a person is at the core of almost every major social debate you can think of, from abortion and animal rights to the death penalty and euthanasia. But is it possible to be human yet not a person? Some people believe that fetuses, though clearly human, are not yet persons; others think that bodies in persistent vegetative states, or that have experienced a complete and irreversible loss of brain function, are no longer persons. What must one possess to become part of our community of personhood? In response to the question of whether a thing can be said to be a person, and so have moral standing, Mary Ann Warren suggested the following criteria:

1. Consciousness of objects and events external and/or internal to the being;

2. Reasoning (the developed capacity to solve new and relatively complex problems);
3. Self-motivated activity (activity which is relatively independent of either genetic or direct external control);
4. The capacity to communicate, by whatever means, messages of an indefinite variety of types, that is, not just with an indefinite number of possible contents, but on indefinitely many possible topics;
5. The presence of self-concepts and self-awareness, either individual or racial, or both (Warren, 1973).

I would add that it is not necessary that all these criteria are met, but that a reasonable portion of them are. (To be kept?)