As technology increases in complexity, with more servers crunching more data, shouldn’t the way humans interact with computers also grow in sophistication? And shouldn’t it, at the same time, mature toward a state where complex objectives can be executed without equally complicated input?
As computer systems grow ever vaster and more interconnected, the way we input data into them must evolve in step, compensating for the amount of human effort that would otherwise be required to operate these future systems.
The way our species has historically interacted with computer systems has seen little to no growth, in either its complexity or its simplicity, until recently.
Engineers and computer scientists are now making great strides in the development and implementation of natural language systems, neural input, and artificial intelligence. These technologies are merging to create a new way to interact with technology.
The objective of this article is to give the reader an
understanding of the history of input and where the future of input is heading
in the coming years and decades.
One example of the stagnation in the evolution of input is the QWERTY keyboard, which has been the gold standard of data input since the mid-1900s. However, in the last 5 to 10 years we have started to witness the emergence of two new types of data input.
(Each data point on the graph represents a new type of input or a substantial evolution in an existing input technology, plotted in chronological order. The data in the chart is provided at the end of the article.)
In the graph Rate of Input Change Throughout History, the first data point falls in 1866, when the first teletype machines, or keypunches, were invented as a means of programming software. These keypunches were slow, made errors difficult to correct, and required armies of office workers to hole-punch cards to create software with limited functionality. Growth remained fairly consistent until 1946, when the first computer keyboards were adapted from the punch card and early teletype machines.
In 1946 the ENIAC computer used a punched card reader as its input and output device. Then, in 1948, the BINAC computer used an electromechanically controlled typewriter to input data directly onto magnetic tape, both to feed the computer data and to print the results.
Now that we have established a limited part of the history of input from a tactile perspective, which I will identify as the first wave of technological development, it is important to note that the second wave does not begin at the end of the first phase, but in its middle. This overlap allows for constant advancement without letting technological development stagnate and falter.
By 1936 AT&T Bell Labs had created the first electronic speech synthesizer, the “Voder,” built by Dudley, Riesz, and Watkins. This allowed the first, tactile phase of computer input to continue onto the top of its S-curve, where a technology is ‘mature’ and experiences little to no significant growth, while the second phase of technological growth began its exponential cycle and overtook the first. This rise of a more efficient technology can be compared to Darwin’s theory of evolution. (To learn more about the evolution of technology, see Methods of Futuring part 2.)
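To make the hand-off between waves concrete, it can be sketched as a pair of overlapping logistic (S-shaped) curves, with the second wave beginning its climb while the first is still maturing. The sketch below is purely illustrative; the midpoints and growth rates are assumed values, not the actual data behind the article’s graphs.

```python
import numpy as np

def s_curve(t, midpoint, rate):
    """Logistic S-curve: slow start, exponential middle, mature plateau."""
    return 1.0 / (1.0 + np.exp(-rate * (t - midpoint)))

years = np.arange(1860, 2020)

# Assumed, illustrative parameters: tactile input matures around 1950,
# while voice input begins its exponential phase mid-way through that wave.
tactile = s_curve(years, midpoint=1950.0, rate=0.08)
voice = s_curve(years, midpoint=1995.0, rate=0.10)

# Instantaneous growth of each logistic curve (its derivative).
tactile_growth = 0.08 * tactile * (1 - tactile)
voice_growth = 0.10 * voice * (1 - voice)

# The year the second wave's growth overtakes the first's: the hand-off point.
crossover = years[np.argmax(voice_growth > tactile_growth)]
print(f"Illustrative hand-off year: {crossover}")
```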
By 1971 DARPA (the Defense Advanced Research Projects Agency) had established the Speech Understanding Research (SUR) program, with the objective of developing a computer system that could understand continuous speech; it received $3 million per year of government funding for five years. From this initiative several project groups were established, including CMU, SRI, MIT Lincoln Laboratory, System Development Corporation (SDC), and Bolt, Beranek and Newman (see graph for sources).
(The graph above shows the growth of voice technology and neural implants from 1930 to 2000.)
Fast forward 11 years to 1982: Dragon Systems, founded by Drs. Jim and Janet Baker, released its first language technology. Just 13 years later it released dictation speech recognition, allowing the public for the first time to dictate natural language to a computer system.
In 2000 the first worldwide voice portal was created by Tellme, and just three years later healthcare was radically impacted by highly accurate speech recognition. This continuous growth of voice recognition software has given us the present-day technologies of Watson, Siri, and Google voice recognition.
This leaves us with highly integrated computer systems that can execute extremely complicated mathematical calculations and deliver rapid explanations that are easy for humans to digest. In the next few years, we could plausibly have a cloud service that lets us speak naturally to our devices and issue commands that would otherwise take several minutes, if not hours, to program or research.
To give an example of the type of functionality such a system could execute:
(User speaking to computer:) “Computer, what are the top three neurological diseases associated with my family’s medical history, and what is my likelihood of developing one of these disorders?”
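As a rough sketch of what the front end of such a voice interface might look like with today’s tools, the snippet below captures a spoken question and hands the transcript to a placeholder answering function. It uses the open-source Python speech_recognition package (with its Google Web Speech backend); answer_health_query is a hypothetical stand-in for the cloud reasoning service imagined above.

```python
import speech_recognition as sr

def answer_health_query(question: str) -> str:
    # Hypothetical placeholder: a real system would consult medical records
    # and family history through a cloud reasoning service.
    return f"(the cloud service would now answer: {question!r})"

recognizer = sr.Recognizer()
with sr.Microphone() as source:          # requires PyAudio to be installed
    print("Ask your question...")
    audio = recognizer.listen(source)    # capture the spoken audio

try:
    question = recognizer.recognize_google(audio)  # speech -> text
    print(answer_health_query(question))
except sr.UnknownValueError:
    print("Sorry, I couldn't understand that.")
```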
By 1978 the third phase of new input technology had started to gain traction. Dobelle’s first prototype was implanted into “Jerry,” a man blinded in adulthood: a single-array BCI containing 68 electrodes was implanted onto Jerry’s visual cortex and succeeded in producing phosphenes, the sensation of seeing light. Twenty years later, Johnny Ray (1944-2002), who had been left with locked-in syndrome by a brain-stem stroke in 1997, underwent surgery in 1998 to receive a brain implant that allowed him to control a computer cursor with pure thought.
Just two years after Ray’s implant, a team of researchers succeeded in building a BCI that reproduced a monkey’s movements while the monkey operated a joystick or reached for food. The BCI operated in real time and could also control a separate robot remotely over the Internet. However, the monkeys were not able to see the arm moving and did not receive any feedback from it.
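At its core, the decoding behind such experiments is a mapping from neural firing rates to limb position. The sketch below fits a simple linear decoder by least squares, in the spirit of the linear population-decoding methods of that era; the data is synthetic and every parameter here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: firing rates of 64 recorded neurons over
# 500 time steps, paired with the 2-D joystick position the monkey produced.
n_steps, n_neurons = 500, 64
true_weights = rng.normal(size=(n_neurons, 2))
rates = rng.poisson(lam=5.0, size=(n_steps, n_neurons)).astype(float)
positions = rates @ true_weights + rng.normal(scale=0.5, size=(n_steps, 2))

# Fit the decoder: positions ~= rates @ W, solved by least squares.
W, *_ = np.linalg.lstsq(rates, positions, rcond=None)

# At run time, each new vector of firing rates yields a decoded position
# that can drive a robotic arm (locally or over a network) in real time.
new_rates = rng.poisson(lam=5.0, size=n_neurons).astype(float)
print("decoded position:", new_rates @ W)
```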
In 2005 Matt Nagle became the first person to control an artificial hand using a BCI, as part of the first nine-month human trial of Cyberkinetics’ BrainGate chip implant. The chip was placed in Nagle’s right precentral gyrus (the area of the motor cortex responsible for arm movement). The 96-electrode BrainGate implant allowed Nagle to control a robotic arm by thinking about moving his hand, along with other electronic devices such as a computer cursor, lights, and a TV.
One year later the Altran Foundation for Innovation developed a brain-computer interface with electrodes located on the surface of the skull instead of directly in the brain, eliminating the need for surgery.
Fast forward seven years: we now have caps equipped with EEG sensors sensitive enough to detect EEG waves through the cranium and determine which command a user intends (left, right, up, or down), giving the wearer the ability to control a helicopter with their mind.
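In practice, those four commands are typically recovered by extracting band-power features from short EEG windows and feeding them to a classifier. The sketch below is a minimal, assumption-laden version using synthetic signals and scikit-learn; a real cap would also need filtering, artifact rejection, and per-user calibration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
COMMANDS = ["left", "right", "up", "down"]

def band_power_features(window):
    """Crude per-channel power estimate: mean squared amplitude."""
    return (window ** 2).mean(axis=1)

# Synthetic calibration data: 8-channel EEG windows of 200 samples each,
# with a per-command amplitude bias standing in for real brain patterns.
X, y = [], []
for label in range(len(COMMANDS)):
    for _ in range(50):
        window = rng.normal(size=(8, 200)) + 0.3 * label
        X.append(band_power_features(window))
        y.append(label)

clf = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))

# Classify a fresh window into one of the four helicopter commands.
test_window = rng.normal(size=(8, 200)) + 0.3 * 2   # simulated "up"
pred = clf.predict([band_power_features(test_window)])[0]
print("decoded command:", COMMANDS[pred])
```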
(The graph above runs from 1995 to 2013, showing primarily the growth of neural implants and EEG readers that function as input devices.)
FUTURE IMPLICATIONS: THE INVISIBLE FOURTH TECHNOLOGICAL PARADIGM OF INPUT
The fourth phase of technological growth is happening in parallel with phases two and three, combining natural language, neural implants, and A.I. Natural language and neural implants will increase our ability to accomplish complex tasks, but the resulting data will be incomprehensible to our biological brains. Instead, we will merge our biology and our synthetic implants with a God-like artificial intelligence, accomplishing ever more sophisticated commands in a simple form that our biological intelligence can understand.
This leads to the final phase of computer input, in which an artificial intelligence is integrated not only into our brains but also into our minds, giving humans the ability to program complex systems from pure thought. However, technology such as this will not be available until all three earlier phases of computer input have fully matured, and our understanding of the brain and mind has evolved into a more concrete and fully understood science.
Sources: All source information and data can be found in the Google Doc containing the graph data, where I compiled all of my research.
This document will be left open to the public to add and manipulate data points, to create a more in-depth picture of where the future of input is heading.
Please feel free to add your predictions to the spreadsheet.
Link to locked document: https://docs.google.com/spreadsheet/ccc?key=0AlCsYSuSapuZdGFEOHo4dzJoY1JGRXVCdUl0dzhhdWc&usp=sharing
Link to open document: https://docs.google.com/spreadsheet/ccc?key=0AlCsYSuSapuZdExHQTJxQldUbk1ReWJpNVlmcnhTd3c&usp=sharing