Friday, August 30, 2013

DNA Nanorobots: A New Method For Treating Cancer

Kevin Russell and I had the opportunity to interview Dr. Ido Bachelet of the Bar-Ilan Institute of Nanotechnology and Advanced Materials. Dr. Bachelet and his team are developing a new cancer drug delivery system that has the potential to eradicate cancerous tissue from the body without damaging healthy cells.
However, before I begin, it’s important to understand that the technologies we are going to discuss are not science fiction; they are science reality.
DNA origami is a technique that lets scientists use DNA molecules as programmable building blocks, exploiting the programmable molecular recognition of complementary DNA cohesion to assemble designed structures. Starting with a single long strand of DNA, scientists can dictate how it folds and self-assembles into predetermined shapes. To do this, they use software similar to CAD, which specifies where the strand should fold back and forth to produce the desired shape or pattern.
Image 1: CAD-programmed DNA
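To make the idea of ‘programming’ DNA a little more concrete, here is a minimal Python sketch of the core trick behind such design software: a ‘staple’ strand is simply the reverse complement of the scaffold regions it is meant to bind, so complementary base pairing pulls those regions together into a designed fold. The sequences and helper functions below are invented for illustration; this is not Rothemund’s or Dr. Bachelet’s actual software.

```python
# Illustrative sketch only: a "staple" strand is the reverse complement of
# the scaffold regions it binds, so base pairing (A-T, G-C) pulls two
# distant scaffold regions together into a designed fold.
# Sequences and positions are made up for demonstration.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(sequence: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    return "".join(COMPLEMENT[base] for base in reversed(sequence))

def staple_for(scaffold: str, region_a: slice, region_b: slice) -> str:
    """Design a staple whose two halves bind two scaffold regions,
    forcing those regions to fold next to each other."""
    return reverse_complement(scaffold[region_a]) + reverse_complement(scaffold[region_b])

# Toy scaffold (real designs use the ~7,000-base genome of the M13 virus).
scaffold = "ATGCGTACGTTAGCCGATCGATCGTTACGGCTAAGCTAGCT"
print("Staple strand:", staple_for(scaffold, slice(0, 8), slice(30, 38)))
```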
Almost seven years after the original DNA origami technique was developed by Paul Rothemund at the California Institute of Technology, Dr. Ido Bachelet and his team have evolved the concept into a radical new drug delivery system. In Dr. Bachelet’s recent publication, ‘Designing a bio-responsive robot from DNA origami’, his team used the genome of a virus as the primary building block of the structure, creating a cage-like scaffold capable of housing therapeutic drugs such as antibiotics and chemotherapy medicines.
These nanorobots do more than house powerful medicines, however; they can also deliver the drugs to the precise location that requires healing.
The current version of these nanorobots floats freely through the bloodstream by the billions, remaining neutral until it encounters a location that requires assistance. The nanorobots know they have reached the proper location through molecular cues they are programmed to recognize, which switch them from their closed, neutral state to their open state (see image 2 below). These molecular cues act as the key that activates a neutralized nanorobot into combat-ready mode, telling it to treat the site and deliver its drugs directly to the cancerous spot or site of infection.
Image 2: A nanorobot switching from its closed, neutral state to its open state
Currently, one of the primary problems with chemotherapy is that the drugs injected into the patient kill not only the rogue cancerous cells but healthy cells as well. By taking a sample of the cancerous cells, or by knowing the specific molecular markers of the rogue cells, scientists can program the nanorobots to attack only the enemy cells with a specific payload.
The idea is that the nanorobots never excrete or release the drug. Instead, they make it accessible or inaccessible, turning it on and off. Because the drug stays linked to the robot, one could think of the pair as a sword and its wielder. As the nanorobot closes in on the cell it was programmed to destroy, it draws its sword (the drug), strikes the cell, and then sheathes the drug again, leaving the healthy cells around the infection site unaffected by the potent chemotherapy drugs. One could also think of this technology as a Predator drone that homes in on and wipes out enemy insurgents while leaving the civilian population untouched by the combat.
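For readers who think in code, here is a minimal sketch of that sword-and-wielder gating logic. The marker names, and the assumption that every programmed cue must be present before the gate opens, are illustrative only; they are not the published lock design.

```python
# Minimal sketch of the gating logic described above. Marker names and
# the all-cues-required rule are illustrative assumptions.

class NanorobotGate:
    def __init__(self, required_cues):
        self.required_cues = set(required_cues)  # molecular markers of the target cell
        self.open = False                        # closed / neutral state by default

    def sense(self, surface_markers):
        """Open only when every programmed cue is detected; otherwise stay shut."""
        self.open = self.required_cues.issubset(surface_markers)

    def payload_accessible(self):
        """The drug is never released; it is merely exposed while the gate is open."""
        return self.open

robot = NanorobotGate(required_cues={"marker_X", "marker_Y"})  # hypothetical markers
robot.sense({"marker_X"})                  # healthy cell: gate stays closed
print(robot.payload_accessible())          # False
robot.sense({"marker_X", "marker_Y"})      # target cell: gate opens
print(robot.payload_accessible())          # True
```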
I’m sure some of you are asking, ‘What happens when these nanorobots have achieved their objective? I don’t want millions, maybe even billions, of nanorobots loaded with powerful chemo drugs floating in my body.’ The nanorobots have a half-life of an hour or two, though scientists can modify them to last up to three days before disintegration begins. The breakdown is carried out by enzymes, which slowly reduce each nanorobot to aggregates about half a micron in size (roughly the size of a bacterium). As the enzymes dismantle the nanorobot, its payload is gradually released into the body at non-lethal doses, until disassembly is complete and the body is left free of both the cancer and the nanorobots.
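As a rough back-of-the-envelope illustration of those lifetimes, the surviving fraction of nanorobots follows a simple half-life calculation. The numbers below are only the ones quoted above, treating the ‘up to three days’ version, as an assumption, as a roughly 36-hour half-life; they are not experimental data.

```python
# With a half-life t_half, the fraction of nanorobots still intact after
# time t is 0.5 ** (t / t_half). Numbers are taken from the text above,
# not from experimental data.

def surviving_fraction(hours_elapsed: float, half_life_hours: float) -> float:
    """Fraction of nanorobots still intact after hours_elapsed."""
    return 0.5 ** (hours_elapsed / half_life_hours)

# A ~1-hour half-life means the default robots are essentially gone within a day...
print(f"{surviving_fraction(24, 1.0):.2e}")   # ~6e-08 of the original dose remains
# ...while a version tuned to persist ~3 days (assumed ~36 h half-life) is not.
print(f"{surviving_fraction(24, 36.0):.3f}")  # ~0.630 still intact after 24 hours
```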
FUTURE IMPLICATIONS
The current model of nanorobot is extremely efficient at disengaging certain types of cells or delivering payloads to specific sites in the body. However, for diseases such as Alzheimer’s or Parkinson’s, where the body suffers damage at the molecular level, these nanorobots are ineffective. In the future, we may well see an all-in-one nanorobot package: nanorobots that can not only destroy cells but also promote the rejuvenation of cells, without increasing the likelihood of tumors or cancers.
Another functionality we will see in coming nanorobot versions is the ability to direct or steer nanoparticles to the precise location that requires treatment. In effect, this would create a new kind of surgeon: the Nanorobot Surgeon. These doctors would be able to cut, stitch, and sample cells without ever performing what we consider modern-day surgery. Dr. Bachelet and his team have already connected these nanorobots to an Xbox controller, which acts as the conductor of a symphony of nanorobots working in unison to eradicate cancerous cells. These control systems will grow in complexity and sophistication, completely changing the face of healthcare around the world.

Saturday, August 17, 2013

Your next GUI will be a BUI

When the common language of computers was first being established, engineers had to agree that, for example, a piece of code such as 1101 was the equivalent of an A. This collaboration created a standard working model that current and future developers could build on. As computers continued to grow in sophistication, so did the standardized models that let developers push new software forward without having to reinvent the wheel.
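The 1101 above is only a placeholder, but the agreement it stands for is real: in the ASCII standard that grew out of exactly this kind of collaboration, the capital letter A is code 65, or 1000001 in binary, on every conforming machine. A quick check in Python:

```python
# In ASCII, the capital letter "A" is code point 65 (binary 1000001);
# every machine that follows the standard agrees on this mapping.
print(ord("A"))                 # 65
print(format(ord("A"), "b"))    # 1000001
print(chr(0b1000001))           # A
```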

After the standard model of computer architecture was firmly established in the computing community, a new method for operating computers began to manifest itself in the form of the Graphical User Interface (GUI). The GUI was a new and exciting development, and one of the major launching points of the personal computer. However, these first GUIs would be consumed by a mass audience that more than likely had never seen a computer, let alone a GUI. Developers had to create a standard graphical model that gave end users, no matter which GUI they were operating, a shared subconscious model of how the system worked: an idea of how one GUI relates to another, and how to complete simple tasks with little cognitive strain.

As the GUI integrated itself into society and took concrete form, new touch technologies were launched for mass consumption, repeating the same process as the GUI. This created a standard model of touch input that set a precedent for which hand gestures represent which input commands, allowing the standardized gestures to be adopted across all touch technology. For example, the thumb and index finger coming together represents a close or zoom-out command for the device.
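In software terms, such a shared gesture vocabulary boils down to a common table mapping gestures to commands. The gesture and command names in this sketch are invented for illustration; the point is only that every device agrees on the same mapping.

```python
# Toy illustration of a standardized gesture vocabulary: once everyone
# agrees on what each gesture means, any application can translate it
# into the same command. Names are invented for the example.
GESTURE_COMMANDS = {
    "pinch_in": "zoom_out",      # thumb and index finger coming together
    "pinch_out": "zoom_in",
    "swipe_left": "next_page",
    "double_tap": "select",
}

def handle_gesture(gesture: str) -> str:
    """Translate a recognized gesture into a standardized command."""
    return GESTURE_COMMANDS.get(gesture, "ignore")

print(handle_gesture("pinch_in"))   # zoom_out
```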

With the development of EEGs and new brain-to-computer technologies, a new standardized model needs to be developed for how we think in order to operate these emerging devices. For example, to operate the thought-guided helicopter that Professor Bin He and his team created, your EEG patterns, the thoughts you think to maneuver the device, must first be calibrated to the computer. Before you can begin to fly the helicopter, the computer has to register what you are thinking for up, down, left, and right. Normally, people think of an object or a color: green for take-off, red for stop, and another object or color for left and right. Some calibration process of this kind needs to take place for the device to understand what you are trying to convey. This is why a Brain User Interface (BUI) needs to be developed as a standard, natural model for translating what we want to communicate to our devices. A standard operating procedure like this would help standardize how we think at and control our technologies with our minds until we are able to develop true mind-reading technologies. It would lay down a foundation, similar to the gesture and GUI models, that a mass audience could adopt, applying the same ‘thought principles’ to all EEG devices. It would create out-of-the-box devices that require little EEG calibration and operate on the same thought principles as every other EEG device.
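To give a feel for what that calibration step looks like computationally, here is a minimal sketch that assumes nothing about Professor Bin He’s actual pipeline: a few labeled EEG feature vectors are recorded for each imagined command, and new brain patterns are then matched to the nearest stored example. The feature values are made up for illustration.

```python
# Minimal sketch of a BUI calibration step: store labeled EEG feature
# vectors for each imagined command, then decode new patterns by
# nearest-neighbor matching. Feature values are made up for illustration.
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class BUICalibrator:
    def __init__(self):
        self.examples = []   # list of (feature_vector, command) pairs

    def calibrate(self, features, command):
        """Record one labeled EEG feature vector for an imagined command."""
        self.examples.append((features, command))

    def decode(self, features):
        """Map a new EEG pattern to the closest calibrated command."""
        return min(self.examples, key=lambda ex: distance(features, ex[0]))[1]

bui = BUICalibrator()
bui.calibrate([0.9, 0.1], "up")      # e.g. imagining the color green
bui.calibrate([0.1, 0.9], "down")    # e.g. imagining the color red
print(bui.decode([0.8, 0.2]))        # -> "up"
```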

Friday, August 16, 2013

"Singularity Terrorism: Military Meta-Strategy in Response to Terror and Technology"

"Can the same strategies and tactics, coupled with radically empowering and decentralized technologies, be put to use by militaries to similarly disrupt the patterns of terrorists themselves?" -- Read the full article here

Tuesday, August 6, 2013

The Psychology of Failed Predictions

Psychology plays a role in every decision we make, from choosing a mate and starting a family to the advertisements that persuade us to purchase the latest electronics. But what role does psychology play when it comes to our ability to predict world events or the latest emerging technologies? Is it possible that certain psychological barriers hinder our ability to reasonably survey and forecast future events? A new study from the Machine Intelligence Research Institute (MIRI), ‘How We’re Predicting AI – or Failing To’ by Stuart Armstrong and Kaj Sotala, shows that there is substantial evidence for suspicion when it comes to our ability to predict future events. According to the data in the study, it is possible that forecasters struggle with subconscious psychological predispositions that hinder their ability to generate successful forecasts. The study was not limited to expert forecasts; it also included the forecasts of non-experts, and cites data showing that the predictions of expert and non-expert forecasters are essentially indistinguishable.

Figure 1: “Median estimate for human-level AI, graphed against date of prediction” (1).

One of the most common mistakes that expert and non-expert forecasters make is captured by the so-called Maes-Garreau Law, formulated by Kevin Kelly, which states that forecasters tend to predict that a given event will happen within their own lifetime. “In this case, the rise of AI that will save them from their own deaths, akin to a Rapture of the Nerds” (1). A second common mistake forecasters make is the claim that ‘event X is within 20 years’.
Figure 2: “Difference between the predicted time to AI and the predictor’s life expectancy, graphed against the predictor’s age” (1).

In order to compile and execute more accurate forecasts and predictions, it is important that we understand the psychological predispositions that can obstruct our vision of the future. Doing so could allow us to peer into the future with unbiased eyes and a fresh perspective on what is logically possible according to our data, rather than blindly projecting our egos onto our scenarios and forecasts.

Source: (1) http://intelligence.org/files/PredictingAI.pdf