The question of
whether an inanimate object could hold any kind of conscious state has long
occupied modern culture. An early example of restoring consciousness to an
object no longer capable of thought appears in Mary Shelley's Frankenstein,
which shone a spotlight on science's ability to manipulate humanity's
environment and play a God-like role in creation. However, even before this question was engraved into the
popular imagination in the form of science fiction, a variation of it had
already been posed by René Descartes, and it has been debated for over 300
years since. Descartes questioned whether machines (animals) hold the
same reasoning faculties as men, a discussion found in Part Five of his Discourse on the Method for Conducting One's
Reason Well and for Seeking the Truth in the Sciences. Whether machines (animals) hold
any mode of consciousness was a question he pondered at length, and he proposed
two ways to differentiate whether machines hold the same faculties as men.
This paper seeks to determine whether
these two methods are applicable to evaluating artificial intelligence in
today's terms and perspectives, and whether these two rules still function when we
look at them through the lens of today's technological advancements.
In 1637,
René Descartes published the Discourse on the Method
for Conducting One's Reason Well and for Seeking the Truth in the Sciences,
and in 1641 his Meditations. Meditation Three: Concerning God, That
He Exists illuminates the way for the modern era by introducing the
notion that men can regard their thoughts as a mental environment whose
modes of thought they can examine, where Descartes states that "I am a thing that
thinks" (Ariew 47). From this Cartesian
perspective, knowing that I myself am a conscious and thinking being raises
the question of whether it is possible for me to know that my fellow man,
or the animals that appear to have inherited the same Earth as myself, are also
conscious, thinking, and rational beings. This paper uses Descartes's model,
set out in the Discourse on the Method for Conducting One's Reason Well and for
Seeking the Truth in the Sciences, for testing whether such a system holds a
state of self-awareness, asking whether that method still applies to
present-day machines and, if not, which aspects of it still hold true and which
components of his method have failed due to the advancements in technology of
the twenty-first century.
René Descartes
first developed, in the Discourse on the
Method for Conducting One's Reason Well and for Seeking the Truth in the
Sciences, a method to distinguish between machine (animal) and human
consciousness. His method creates
two categorical rules that divide beings into two
categories: those considered conscious, namely humans, and those that are
simply composed of mechanical parts constituting a living entity without a
soul. However, Descartes's abstraction of animals as machines need not be
taken literally in today's context. The point is not that animals are machines,
but that we are now developing computer systems modeled on neural networks that
could one day take on a form of consciousness autonomous from the code
created by humanity (Krogh 195-197). The two rules are as follows:
“First is that they could never use words or
other signs, or put them together as we do in order to declare our thoughts to
others” (Ariew 33).
"The second means is that, although they
might perform many tasks very well or perhaps better than any of us, such
machines would inevitably fail in other tasks; by this means one would discover
that they were acting, not through knowledge, but only through the disposition
of their organs" (Ariew 33).
“For while reason is a universal instrument
that can be of help in all sorts of circumstances, these organs require some
particular disposition for each particular action; consequently, it is for all
practical purposes impossible for there to be enough different organs in a
machine to make it act in all the contingencies of life in the same way as our
reason makes us act” (Ariew 33).
To
paraphrase René Descartes: if there were a "machine" that resembled
our bodily structure and imitated our actions, then we
would have two methods of differentiation by which we could separate our fellow
humans from such machines. The first method states that this machine
could not use any abstract signs beyond those already pre-programmed into
its being; it would be unable to produce original sentences or
words that have not already been thought up by another man. The second method implies that there is
not enough room for the organs, or components, of the machine to fit inside one
machine entity to give it the ability to reason through every contingent event
the machine might encounter. Thus, it would be impossible for such a machine to
act in all the contingencies of life with the same capacity that our reason allows us
to act in those same events.
Thus, to René Descartes, fitting
every possible mechanism and component into a machine was technologically
impossible. However, Descartes could not have known the technological laws that
govern our modern society and fuel today's technological revolutions and
innovations. These laws have been the driving force of technological advancement in
the 20th and 21st centuries, and they rest on the
fundamental principle of exponential growth, now formalized as the Law of
Accelerating Returns. To
understand the Law of Accelerating Returns, the reader needs a basic
grasp of what it means for something to grow exponentially. Consider the story
of the Chinese emperor's favorite game, chess, and his reward to the inventor of the
game. The story goes something like this: the emperor loved the game of
chess so much that he wanted to show his gratitude to the inventor. Thus, he
said to the inventor, "I will give you anything in my kingdom. Just ask, and it
shall be yours." The inventor replied, "All that I ask is that you place one
grain of rice on the first square of the chessboard, two grains
on the second square, four on the third, and so on, doubling the
number until all 64 squares of the chessboard are filled." The emperor,
thinking it a modest request, granted it. After 63 doublings the emperor was
bankrupt, and the inventor was owed roughly 18
million trillion grains of rice, enough to require rice fields covering the
surface of the Earth twice over, oceans included.
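The arithmetic behind the story is easy to verify. A short Python sketch (my own addition, not part of the original essay) confirms the roughly 18 million trillion figure:

```python
# One grain on square 1, doubling on each of the 64 squares: 1, 2, 4, ..., 2**63.
grains_per_square = [2 ** i for i in range(64)]
total = sum(grains_per_square)  # equals 2**64 - 1

print(f"Final square alone: {grains_per_square[-1]:,} grains")
print(f"Total owed: {total:,} grains")  # 18,446,744,073,709,551,615 grains, about 18 million trillion
```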
Now
we can apply the same concept of exponential growth to the growth of
computer systems.[1] To
understand the Law of Accelerating Returns and how it applies to the exponential
growth of computer systems, we first need to see where it
originated in the biological context. The Law of Accelerating Returns by
Ray Kurzweil states that:
1. Evolution
applies positive feedback in that the more capable methods resulting from one
stage of evolutionary progress are used to create the next stage.
2. As a result, the
rate of progress of an evolutionary process increases exponentially over time.
Over time, the “order” of the information embedded in the evolutionary process
(i.e., the measure of how well the information fits a purpose, which in
evolution is survival) increases.
3. A correlate of the
above observation is that the “returns” of an evolutionary process (e.g., the
speed, cost-effectiveness, or overall “power” of a process) increase
exponentially over time.
4. In another positive
feedback loop, as a particular evolutionary process (e.g., computation) becomes
more effective (e.g., cost effective), greater resources are deployed toward
the further progress of that process. This results in a second level of
exponential growth (i.e., the rate of exponential growth itself grows
exponentially).
While
there is more to the Law of Accelerating Returns, for this paper we only need
the first four points. The first states that the evolution of
each organism builds upon the evolution of its predecessors. Thus,
without the evolution of a past predecessor, the evolution of the future
organism could not continue or, in some cases, even begin. The easiest way to
picture this is the construction of a skyscraper: if you
remove the concrete from the construction, you have neither a foundation nor
columns to support the weight of the building. The same applies to the
Law of Accelerating Returns; remove one building block and the whole system
fails. The second and third points can be condensed into one
explanation: as the complexity of an organism increases, new evolutionary
milestones are reached within ever shorter periods of time, the process
accelerating with every evolutionary step.
To
summarize Kurzweil: the evolution of life took billions of years
for the first building blocks to form; primitive cells followed, and the
process slowly began to accelerate as single-celled organisms became
multicellular, up to the Cambrian explosion, which took
only tens of millions of years. Humanoids then developed over a
period of millions of years and, finally, mankind over the last hundreds of
thousands of years (Kurzweil). The fourth point states that once an
evolutionary process becomes effective enough, greater resources are deployed
toward furthering it, creating a second level of
exponential growth; in other words, the rate of the original exponential
growth itself begins to double.
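The difference between ordinary and second-level exponential growth can be illustrated with a toy model (my own sketch, not Kurzweil's actual data): compare a quantity whose doubling time stays fixed with one whose doubling time itself halves every ten years.

```python
def growth(years, doubling_time):
    """Capability after `years` if it doubles every `doubling_time` years."""
    return 2 ** (years / doubling_time)

# First-level exponential: the doubling time stays fixed at 2 years.
fixed = growth(20, 2.0)

# Second-level exponential: the doubling time itself halves every 10 years,
# so the rate of exponential growth grows exponentially (Kurzweil's point 4).
capability, doubling_time = 1.0, 2.0
for year in range(20):
    capability *= 2 ** (1 / doubling_time)  # one year of growth at the current rate
    if (year + 1) % 10 == 0:
        doubling_time /= 2                  # a paradigm shift: progress doubles faster

print(fixed, capability)  # the second-level process ends far ahead
```

After 20 years the fixed-rate process has grown 1,024-fold, while the accelerating one has grown 32,768-fold; the gap only widens with time.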
Now
that we have a basic understanding of the Law of Accelerating Returns
from an evolutionary standpoint, it becomes easier to understand how
accelerating returns apply to technology in the twenty-first century. The
first technologies man developed were basic stone tools, fire, and the wheel,
and their growth remained fairly constant. You could compare this growth to the
evolutionary growth of the first organisms: very slow and time-consuming,
developing the building blocks that helped form modern-day technology. This
pace held until around 1000 A.D., when paradigm shifts began arriving within a
century or two of one another (Kurzweil); by the nineteenth century, after the
discovery of electricity in the 1800s, the exponential growth of technology had
truly begun to manifest itself.
Finally,
when the Internet was first developed, the fourth stage of Kurzweil's Law of
Accelerating Returns began to apply to technology, doubling the rate at which
technology itself doubles (see the fourth point above). This
is where, I believe, the comparison to the evolution of mankind on the
timescale of evolutionary events becomes apt. However, there is one final
evolutionary step that we have not yet discussed: the point of Singularity.
Before we dive into the 'what if' of the Singularity, there is one last fact
about exponential growth that we need to know. As we learned from the story of
the Chinese emperor and the inventor of chess, repeated doubling (2^63 grains
by the last square of the board) quickly produces
extremely large numbers. According to the Law of Accelerating
Returns, the same applies to human knowledge, which compounds exponentially
with the number of years. Thus, as the amount of human knowledge
grows, so does the speed at which it grows; the number of scientific
breakthroughs becomes a snowball rolling downhill, gathering size exponentially,
and the downhill is time. Over the next 100 years of the twenty-first century
we will experience the equivalent of 20,000 years of technological growth (Kurzweil).
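Kurzweil's figure can be sanity-checked with a rough model (my own back-of-the-envelope assumption, not his exact calculation): if the rate of progress doubles every decade, then calendar year t delivers 2^(t/10) "year-2000-equivalent" years of progress.

```python
# Assume the rate of progress doubles every 10 years; year t then delivers
# 2**(t/10) "year-2000-equivalent" years of progress.
equivalent_years = sum(2 ** (t / 10) for t in range(100))
print(round(equivalent_years))  # on the order of 10,000+ equivalent years
```

This simple sum lands in the low tens of thousands, the same order of magnitude as Kurzweil's 20,000-year estimate.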
As
for the point of Singularity, Ray Kurzweil believes that technology will reach
a point where it surpasses human intelligence. We have already seen what I
believe to be two milestones toward computers surpassing human intelligence.
The first was when Deep Blue, a computer, beat the world chess champion Garry
Kasparov (Computer chess 1). The second was the creation of
Watson, an artificial intelligence that beat the world's top Jeopardy! players
(IBM). However, computers surpassing human intelligence is not the full
picture of the Singularity. Kurzweil believes that the point of Singularity is
when artificial intelligence becomes integrated with human intelligence,
creating another stage in human and machine evolution in which the two become
fused together and indistinguishable from one another.
However,
in today's context we already have human-machine integration, from basic
bionic arms for wounded soldiers to basic communication devices for people
with diseases such as Lou Gehrig's disease. One might think that these technologies
represent the point of Singularity, but they are only its horizon.
Until then, we will continue to see smaller technologies packed
into more confined spaces, giving these systems more processing power
and the ability to interact with humans on a human level. These
technological laws, and our ability to place ever more transistors into a
smaller space, let us pack more processing power into a much smaller surface
area, ultimately granting humans the theoretical ability to create a device
that could hold all of the components needed to equal or
exceed human faculties. Because of the Law of Accelerating Returns, we can
say that René Descartes's second method for identifying machines, their
inability to hold enough components or mechanisms, is an invalid test for
identifying artificial intelligences in the 21st century. Returning to René Descartes's first
rule for identifying a machine, he states that
“First is that they could never use words or
other signs, or put them together as we do in order to declare our thoughts to
others” (Ariew 33).
Thus,
from this line of thought, a machine could not create a novel piece
of work, nor develop a new type of word, that has not already been
developed by man and embedded within the machine's code as
the pieces of information we can think of as ideas. However, René Descartes was not the only
one who believed that developing novel ideas lay outside a machine's
operating parameters. Lady Lovelace's
Objection states:
“The Analytical Engine has no pretensions to originate
anything. It can do whatever we know how to order it to perform” (Turing
450).
There
is also a simplified variant of Lady
Lovelace's Objection, which states:
“A machine could ‘never do anything really new’” (Turing 450).
The
objection holds that machines are incapable of independent
learning, which can equally be thought of as incapacity for independent thinking:
everything the machine has 'learned' has been programmed onto its hard
drive(s), giving it only the ability to follow a set algorithm that dictates what the
machine's next move will be, whether in calculation or, in
robotic systems, in movement. For
example, a rudimentary algorithm for a robotic system to pick up a glass of
water might look like this:
Step 1: Locate glass of water in space (If said
glass of water is located then Step 2, if not repeat Step 1.)
Step 2: Calculate
distance and then extend arm until hand is 1.5 inches away from glass. (If hand
is 1.5 inches away from glass then Step 3, if not repeat Step 1.)
Step 3: Contract
hand and lift from table. (If glass is in hand then Step 4, if not repeat Step
1.)
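The three steps above can be sketched as a simple control loop. This is a hypothetical illustration (all names are my own), with a retry cap added so the demo terminates where the naive controller would repeat Step 1 forever:

```python
MAX_ATTEMPTS = 5  # guard so the demo halts; the naive algorithm has no such limit

def pick_up_glass(world):
    """Follow the three fixed steps; any failure sends control back to Step 1."""
    for _ in range(MAX_ATTEMPTS):
        if not world.get("glass_located"):   # Step 1: locate glass of water in space
            continue
        if not world.get("hand_in_range"):   # Step 2: extend arm to 1.5 inches away
            continue
        if world.get("glass_broken"):        # Step 3: grip fails if the glass shattered
            continue
        return "glass lifted"
    return "stuck repeating Step 1"

print(pick_up_glass({"glass_located": True, "hand_in_range": True}))
print(pick_up_glass({"glass_located": True, "hand_in_range": True, "glass_broken": True}))
```

Note that nothing in the loop lets the system recover from a broken glass: no branch exists for fetching a new glass or improvising a substitute.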
However, suppose
the robot reaches Step 3, applies too much force to the glass, and
breaks it. The machine would then repeat Step 1 indefinitely,
never fulfilling the algorithm or being allowed to move on to the next
task. It would also be impossible
for this system to exhibit any type of learning or 'out of the box' thinking
that would give it the capability to either (a) pick up the broken pieces of
glass and fetch a new glass, or (b) come up with a creative solution, such as using
another object as a glass.
Because René Descartes developed his method for separating machines and
humans over 300 years ago, some aspects of his method no longer apply to our
society, due to technological growth that could not have been foreseen. Yet the
first rule developed by René Descartes still holds a certain amount of
applicability today, and deserves recognition for its potential application in
determining whether a machine has the ability to think creatively and
learn abstract meanings that are not embedded in the machine's code. Even though René Descartes's whole method
is not fully valid today, it is still important that we remember the
first part of his contribution and how, even 300 years later, the question of
whether a machine could obtain the ability to think as freely as men still
holds weight in an ever-advancing modern society.
Bibliography
Ariew, Roger,
and Eric Watkins. Modern Philosophy: An
Anthology of Primary Sources. 2nd ed. Indianapolis: Hackett Publishing
Company, Inc., 2009. Print. Includes René Descartes, Discourse on the
Method for Conducting One's Reason Well and for Seeking the Truth in the
Sciences (1637).
Krogh, Anders.
"What Are Artificial Neural Networks?" Nature Biotechnology 26.2 (2008): 195-197. Print.
Kurzweil, Ray.
"The Law of Accelerating Returns." Kurzweil
Accelerating Intelligence. N.p., 7 Mar. 2001. Web. 7 Apr. 2011.
<http://www.kurzweilai.net/the-law-of-accelerating-returns>.
Turing, A. M.
"Computing Machinery and Intelligence." Mind 59.236 (1950): 433-460.
Print. <http://www.jstor.org/stable/2251299>.
[1] It is important to remember that the first transistor was
created in 1947, 297 years after
René Descartes' death in 1650.