Tuesday, July 30, 2013
The Extinction Of The Pure Human
There is no such thing as the Human, only the Transhuman.
We are all transhuman. Where we fall on the transhumanist spectrum varies, and so does the way the transhumanist philosophy plays out in everyday life. To understand why we are all transhuman, and why most of our species has already shed its pure human form, we first need the current working definition of transhumanism from Max More.
“Transhumanism is a class of philosophies of life that seek the
continuation and acceleration of the evolution of intelligent life beyond its
currently human form and human limitations by means of science and technology,
guided by life-promoting principles and values.” (Max More 1990) (1).
According to this definition, we use science and technology as catalysts to accelerate human evolution and push intelligence past human limitations. We are also able to manipulate our environment and enhance our genetic composition. Thus, any enhancement that improves the quality of life by pushing the boundaries of human limitation ultimately falls within the transhumanist philosophy.
Using this definition of transhumanism, at what point do we make the transition from pre-programmed evolutionary mechanisms to the transhuman being? Do we become transhuman when our parents place the first electronic gadget in our hands? Perhaps when we receive our first “life-promoting” vaccines, such as hepatitis A, hepatitis B, or influenza? My answer is no. We become transhuman much earlier: we shed our pure human form in the womb, through genetically engineered foods, iodized salt, and prenatal pills (2).
All of these standardized health practices are forms of technology that strip away our pure evolutionary form and convert us into transhuman beings. Today we tend to consider people transhuman only if they carry some sort of advanced gadgetry or ingest advanced neural-enhancing pills. But those are extreme cases, and as technology continues to merge with biological beings, the extremes of the spectrum will be pushed further out. The transhuman practices of today will become the standard practices of future societies. This gives us the ability to take our evolutionary destiny into our own hands, creating a being that is conscious of its own evolutionary development.
Because the transition into worldwide transhumanism was such a subtle metamorphosis, most people do not consider themselves transhuman. We should expect a similar pattern when contemplating the progression toward posthumanism. As humans, we naturally strive to improve the quality of life for ourselves and our children; survival is our most basic biological need. It is therefore reasonable to suspect that the same subconscious steps will be taken in the move from the transhuman state to the posthuman state.
Sources:
(1) http://humanityplus.org/philosophy/transhumanist-faq/#answer_19
(2) http://blogs.discovermagazine.com/crux/2013/07/23/how-adding-iodine-to-salt-boosted-americans-iq/
Thursday, July 25, 2013
The History and Future of Computer Input.
As technology increases in complexity, with more servers crunching more data, shouldn’t the way humans interact with computers also grow in capability, while improving to the point where complex objectives can be executed without equally complicated input?
As vast, interconnected computer systems become ingrained in ever more complex systems, the way we input data into them needs to evolve in step, to compensate for the amount of human effort that operating these future systems would otherwise require.
Historically, the way our species has interacted with computer systems experienced little to no growth in either its complexity or its simplicity – until recently.
Engineers and computer scientists are now making great strides in the development and implementation of natural-language systems, neural input, and artificial intelligence, and these technologies are merging to create a new way to interact with technology.
The objective of this article is to give the reader an understanding of the history of computer input and where it is heading in the coming years and decades.
One example of the stagnation in the evolution of input is the QWERTY keyboard, which has been the gold standard of data input since the mid-1900s. However, in the last 5 to 10 years we have started to witness the emergence of two new types of data input.
(Each data point on the graph represents a new type of input or a substantial evolution in an existing input technology, plotted in chronological order. The underlying data is provided at the end of the article.)
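For readers who want to reproduce the chart, here is a minimal Python sketch of the approach; the milestone list below is a placeholder subset drawn from this article, and the full dataset lives in the spreadsheet linked at the end.

```python
import matplotlib.pyplot as plt

# Hypothetical subset of input milestones; the full dataset is in the
# linked spreadsheet. Each entry is (year, technology).
milestones = [
    (1866, "Keypunch / teletype"),
    (1936, "Voder speech synthesizer"),
    (1946, "ENIAC punched-card reader"),
    (1948, "BINAC typewriter input"),
    (1971, "DARPA SUR program"),
    (1982, "Dragon Systems founded"),
    (2000, "Tellme voice portal"),
    (2005, "BrainGate implant trial"),
]

# Sort chronologically and plot the cumulative count of input milestones,
# which approximates the "rate of input change" curve.
milestones.sort(key=lambda m: m[0])
years = [year for year, _ in milestones]
counts = list(range(1, len(milestones) + 1))

plt.step(years, counts, where="post")
plt.xlabel("Year")
plt.ylabel("Cumulative input milestones")
plt.title("Rate of Input Change Throughout History")
plt.show()
```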
The graph Rate of Input Change Throughout History begins in 1866, when the first teletype machines, or keypunches, were invented as a means to program software. These keypunches were slow, made errors difficult to correct, and required armies of office workers to hole-punch cards in order to create software with limited functionality. Growth remained fairly flat until 1946, when the first computer keyboards were adapted from punch-card and early teletype machines.
In 1946 the ENIAC computer used a punched-card reader as its input and output device. Then, in 1948, the BINAC computer used an electromechanically controlled typewriter to write data directly onto magnetic tape, both to feed the computer and to print its results.
Now that we have established a limited part of the history of input from a tactile perspective, which I will call the first wave of technological development, it is important to note that the second wave does not begin at the end of the first, but in its middle. This overlap allows for constant advancement without letting technological development stagnate and falter.
By 1936, AT&T Bell Labs had created the first electronic speech synthesizer, the “Voder,” built by Dudley, Riesz, and Watkins. This allowed the first, tactile phase of computer input to continue to the top of its S-curve, where a technology has matured and experiences little significant growth, while the second phase of technological growth began its exponential cycle and took over from the first, giving rise to a more efficient technology. One could compare this type of development to Darwin’s theory of evolution. (To learn more about the evolution of technology, see Methods of Futuring part 2.)
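To make the S-curve idea concrete, here is a minimal sketch of the logistic growth model commonly used to describe technology maturation; the growth rates and midpoints below are illustrative assumptions, not fitted data.

```python
import math

def logistic(t, ceiling=1.0, growth_rate=0.5, midpoint=0.0):
    """Classic S-curve: slow start, exponential middle, mature plateau."""
    return ceiling / (1.0 + math.exp(-growth_rate * (t - midpoint)))

# Two overlapping technology waves: the second begins its exponential
# climb while the first is still maturing, as described above.
for year in range(1900, 2001, 10):
    wave1 = logistic(year, growth_rate=0.15, midpoint=1930)  # tactile input
    wave2 = logistic(year, growth_rate=0.15, midpoint=1970)  # voice input
    print(f"{year}: tactile={wave1:.2f}, voice={wave2:.2f}")
```

Around mid-century, the first wave is near its plateau while the second is still in its steep exponential stretch, which is exactly the hand-off the paragraph above describes.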
By 1971, DARPA (the Defense Advanced Research Projects Agency) had established the Speech Understanding Research (SUR) program, with the objective of developing a computer system that could understand continuous speech; it received $3 million per year in government funding for five years. Several project groups grew out of this initiative, including CMU, SRI, MIT Lincoln Laboratory, System Development Corporation (SDC), and Bolt, Beranek and Newman (see graph for sources).
(The graph above shows the growth of voice technology and neural implants from 1930 to 2000.)
Fast-forward 11 years to 1982: Dragon Systems, founded by Drs. Jim and Janet Baker, released its first language technology. Just 13 years later, it released dictation speech-recognition technology, allowing the public for the first time to dictate natural language to a computer system.
In 2000 the first worldwide voice portal was created by Tellme, and just three years later healthcare was radically impacted by highly accurate speech recognition. This continuous growth of voice-recognition software leads to the present-day technologies of Watson, Siri, and Google’s voice recognition.
This leaves us with highly integrated computer systems that can execute extremely complicated mathematical calculations and deliver rapid explanations that are easy for humans to digest. Within the next few years we could have a cloud service that lets us speak naturally to our devices and issue commands that would otherwise take several minutes, if not hours, to program or research for a simple answer.
To give an example of the kind of functionality such a system could execute, imagine a user speaking to their computer: “Computer, what are the top three diseases associated with my family’s medical history that are neurological, and what is my likelihood of developing one of these mental disorders?”
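As a thought experiment, such a request might decompose into speech-to-text, intent parsing, and a query against a family-history database. Below is a minimal sketch of that final step; every function, record, and count here is a hypothetical placeholder, not a real medical API.

```python
from dataclasses import dataclass

@dataclass
class Condition:
    name: str
    category: str          # e.g. "neurological"
    relatives_affected: int

# Hypothetical family-history records; a real system would pull these
# from a medical database, not a hard-coded list.
FAMILY_HISTORY = [
    Condition("Alzheimer's disease", "neurological", 2),
    Condition("Parkinson's disease", "neurological", 1),
    Condition("Migraine", "neurological", 3),
    Condition("Type 2 diabetes", "metabolic", 2),
]

def top_conditions(category: str, n: int = 3) -> list[Condition]:
    """Rank conditions in a category by how many relatives were affected."""
    matches = [c for c in FAMILY_HISTORY if c.category == category]
    return sorted(matches, key=lambda c: c.relatives_affected, reverse=True)[:n]

# The speech-recognition front end would reduce the spoken question to a
# structured intent like ("neurological", 3); we start from that point.
for condition in top_conditions("neurological"):
    print(f"{condition.name}: {condition.relatives_affected} relatives affected")
```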
By 1978 the third phase of input development had started to gain traction, when Dobelle’s first prototype was implanted into “Jerry,” a man blinded in adulthood. A single-array BCI containing 68 electrodes was implanted onto Jerry’s visual cortex and succeeded in producing phosphenes, the sensation of seeing light. Twenty years after Jerry, Johnny Ray (1944-2002), left with locked-in syndrome by a brain-stem stroke in 1997, underwent surgery in 1998 to receive a brain implant that allowed him to control a computer cursor with pure thought.
Just two years after Ray’s implant, a team of researchers succeeded in building a BCI that reproduced a monkey’s movements while the monkey operated a joystick or reached for food. The BCI operated in real time and could also control a separate robot remotely over the Internet. However, the monkeys were not able to see the arm moving and did not receive any feedback from it.
In 2005 Matt Nagle became the first person to control an artificial hand using a BCI, as part of the first nine-month human trial of Cyberkinetics’ BrainGate chip implant. The 96-electrode implant was placed in Nagle’s right precentral gyrus (the area of the motor cortex responsible for arm movement) and allowed him to control a robotic arm by thinking about moving his hand, along with other electronic devices such as a computer cursor, lights, and a TV.
One year later, the Altran Foundation for Innovation developed a brain-computer interface with electrodes placed on the surface of the skull rather than directly in the brain, removing the need for surgery.
Fast-forward seven years: we now have caps equipped with EEG sensors sensitive enough to detect EEG waves through the cranium and determine a user’s directional commands (left, right, up, down), giving the person the ability to control a helicopter with their mind.
(The graph above covers 1995 to 2013, showing primarily the growth of neural implants and EEG readers that function as input devices.)
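In broad strokes, EEG-cap demos like the helicopter experiment map band-limited EEG power to a handful of discrete commands. Here is a toy sketch of that idea; the channel count, the mu-band feature, and the argmax rule are illustrative assumptions, not the actual system used in that work.

```python
import numpy as np

COMMANDS = ["left", "right", "up", "down"]

def band_power(signal: np.ndarray, fs: int, lo: float, hi: float) -> float:
    """Average spectral power of one EEG channel in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return float(power[mask].mean())

def classify(window: np.ndarray, fs: int = 256) -> str:
    """Pick the command whose channel shows the strongest mu-band (8-12 Hz)
    activity. Real systems train a classifier rather than using argmax."""
    scores = [band_power(window[ch], fs, 8.0, 12.0) for ch in range(4)]
    return COMMANDS[int(np.argmax(scores))]

# Demo on one second of synthetic 4-channel EEG noise.
rng = np.random.default_rng(0)
window = rng.normal(size=(4, 256))
print("Decoded command:", classify(window))
```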
FUTURE IMPLICATIONS: THE INVISIBLE FOURTH TECHNOLOGICAL PARADIGM OF INPUT
The fourth phase of technological growth is happening in parallel with phases two and three, combining natural language, neural implants, and AI. Natural language and neural implants will increase our ability to accomplish complex tasks, but the resulting data will be incomprehensible to our biological brains. Instead, we will merge our biology and our synthetic implants with a God-like artificial intelligence to accomplish ever more sophisticated commands in a simple way that our biological intelligence can understand.
This leads to the final phase of computer input: an artificial intelligence integrated not only into our brains but also into our minds, giving humans the ability to program complex systems by pure thought. Technology like this, however, will not be available until all three earlier phases of computer input have fully matured, and our understanding of the brain and mind has evolved into a more concrete and fully understood science.
Sources: All source information and the data behind the graphs can be found in the Google Doc where I compiled my data. The document is open to the public, so you can add and manipulate data points to build a more in-depth picture of where the future of input is heading. Please feel free to add your predictions to the spreadsheet.
Link to locked document: https://docs.google.com/spreadsheet/ccc?key=0AlCsYSuSapuZdGFEOHo4dzJoY1JGRXVCdUl0dzhhdWc&usp=sharing
Link to open document: https://docs.google.com/spreadsheet/ccc?key=0AlCsYSuSapuZdExHQTJxQldUbk1ReWJpNVlmcnhTd3c&usp=sharing

Saturday, July 20, 2013
What if Google bought Detroit?
What if Google bought Detroit? Is it financially possible, and if so, what would Google do with an entire city?
According to Yahoo! Finance, Google’s market cap is approximately $297.46 billion as of July 20, 2013. An article on usatoday.com describes Detroit’s bankruptcy as one of the largest of its kind in U.S. history. With a population of approximately 700,000, Detroit’s debts and liabilities could reach as high as $20 billion.
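Taking those two figures at face value, the scale of the gap is easy to check with back-of-the-envelope arithmetic, using only the numbers quoted above:

```python
google_market_cap = 297.46e9  # USD, per Yahoo! Finance, July 20, 2013
detroit_liabilities = 20e9    # USD, upper estimate of Detroit's debts

# Detroit's entire debt load as a share of Google's market capitalization.
share = detroit_liabilities / google_market_cap
print(f"Detroit's liabilities are ~{share:.1%} of Google's market cap")
# -> Detroit's liabilities are ~6.7% of Google's market cap
```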
Google most definitely has enough purchasing power to bail out the city of Detroit, but what would it do with a whole city? Historically, Detroit has been the heart of car manufacturing; it is where Henry Ford invented what we consider the modern assembly line. Google could therefore convince the existing car manufacturers, or for that matter any manufacturing company, to produce its driverless cars at a premium rate.
What would be appealing to Google is the ability to enact city-wide legislation that let it use the entire city of Detroit as a real-life testing ground for all of its technologies, without being constrained by the usual laws and regulations. This would allow Google to test cutting-edge technologies in everyday scenarios. It would also provide the authority needed to re-imagine how a city operates on an information level: not only testing driverless cars, but also products such as mobile commerce, free public internet, and free public transportation.
Most importantly, Detroit could become an example to other cities across the United States of how to develop a sustainable city using groundbreaking technology that would normally get stuck in standard bureaucratic processes. It might also radically change our perspectives on education, transportation, green energy, and public policy.
A city like this would draw leading minds from all around the world, including scientists, engineers, coders, IT experts, and green architects. These individuals could present groundbreaking ideas and test new technologies at a laboratory scale that is historically unprecedented. Taking Detroit from the automotive center of the United States to an innovative technological hub could challenge Silicon Valley’s standing as the most technologically inventive region in the world.
What do you think? Share your opinion below in the comments section.
*This article is a hypothetical scenario, and not a call to action.*