A few years ago, eye tracking might have seemed like something for spies and secret agents. However, with new technology, there are far more practical uses for eye tracking. Let's check it out!
First, how does eye tracking work? Well, typically there are two components: a light source and a camera. The light source creates a reflection on the eye, and the camera detects this reflection along with other features of the eye, like the pupil. Together, the camera and computer track the rotation of the eye, the direction of the gaze, blink frequency, and pupil dilation.
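To make that mapping step concrete, here is a minimal Python sketch (with invented numbers) of how a calibrated tracker might turn a pupil-to-reflection offset into on-screen coordinates. Real trackers fit richer models from many calibration points; this toy uses a linear fit per axis from just two.

```python
# Toy gaze calibration: map a pupil-to-reflection offset (in camera
# units) to screen coordinates, using one linear fit per axis built
# from two known calibration points. All numbers are invented.

def calibrate(p0, s0, p1, s1):
    """p0, p1: pupil-glint offsets; s0, s1: the known screen points
    the user was looking at during calibration."""
    def axis(a0, b0, a1, b1):
        scale = (b1 - b0) / (a1 - a0)
        return lambda a: b0 + (a - a0) * scale
    fx = axis(p0[0], s0[0], p1[0], s1[0])
    fy = axis(p0[1], s0[1], p1[1], s1[1])
    return lambda p: (fx(p[0]), fy(p[1]))

# Calibrate on the top-left and bottom-right of a 1920x1080 screen:
to_screen = calibrate((-10, -6), (0, 0), (10, 6), (1920, 1080))
print(to_screen((0, 0)))  # (960.0, 540.0) -- a centered gaze
```

Commercial trackers typically fit polynomial models from nine or more calibration points, but the idea is the same: known targets anchor the mapping from eye geometry to pixels.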
What do they do with this data? The most common approach is to analyze the visual path. Each gaze sample is translated into a set of pixel coordinates, which are compiled and analyzed to find what features were seen, what captures attention, how quickly the eyes move, and so on. All of these things say something about cognitive function and can give insight into industries like advertising, entertainment, web, medicine, and automotive.
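As a rough illustration of how raw pixel coordinates become "features that were seen", here is a toy dispersion-threshold fixation detector (the classic version of this is known as I-DT): samples that stay inside a small spatial window long enough are grouped into one fixation. The thresholds and sample data are invented.

```python
# Toy dispersion-threshold (I-DT) fixation detector.
# Input: gaze samples as (x, y) pixel coordinates at a fixed rate.
# Samples that stay within a small spatial window for long enough are
# grouped into one "fixation"; everything else counts as movement.

def dispersion(window):
    xs = [x for x, _ in window]
    ys = [y for _, y in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=25, min_samples=5):
    fixations = []
    start = 0
    while start + min_samples <= len(samples):
        end = start + min_samples
        if dispersion(samples[start:end]) <= max_dispersion:
            # Grow the window while the eye stays put.
            while end < len(samples) and dispersion(samples[start:end + 1]) <= max_dispersion:
                end += 1
            window = samples[start:end]
            cx = sum(x for x, _ in window) / len(window)
            cy = sum(y for _, y in window) / len(window)
            fixations.append((cx, cy, len(window)))  # center + duration
            start = end
        else:
            start += 1
    return fixations

# A steady cluster of samples followed by a rapid jump (a saccade):
gaze = [(100, 100), (102, 101), (101, 99), (100, 102), (103, 100),
        (400, 300)]
print(detect_fixations(gaze))  # one fixation near (101, 100)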
As we all know, technology is ALWAYS changing and advancing! The University of Copenhagen has applied this technology to language and reading. They are working on software that analyzes eye movement to reveal which words cause readers problems, then provides translations and definitions for the words a reader's eyes linger on. What is new and different about this technology is an algorithm tailored to the individual reader. In the past, experts had to annotate texts by hand, identifying parts of speech or words that could be omitted. With the new general model, that human labor is no longer necessary: the eye-tracking data effectively annotates the text in real time. This is super helpful in education, assisting students of any age with reading and learning. Eye tracking isn't just for spies anymore! So cool!
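A toy sketch of the reading application: given fixations mapped onto word positions, total up the dwell time per word and flag the ones the reader lingers on. The layout, durations, and threshold below are all made up.

```python
# Toy sketch: flag the words a reader's eyes linger on, given
# fixations mapped to word positions on one line of text.
# All coordinates, durations, and the threshold are invented.

def flag_difficult_words(words, fixations, threshold_ms=400):
    """words: list of (word, x_start, x_end) on one line of text.
    fixations: list of (x, duration_ms). Returns words whose total
    dwell time exceeds the threshold."""
    dwell = {w: 0 for w, _, _ in words}
    for x, duration in fixations:
        for word, x0, x1 in words:
            if x0 <= x < x1:
                dwell[word] += duration
                break
    return [w for w, t in dwell.items() if t > threshold_ms]

line = [("the", 0, 40), ("ubiquitous", 40, 160), ("cat", 160, 200)]
fixes = [(20, 150), (90, 300), (100, 250), (180, 180)]
print(flag_difficult_words(line, fixes))  # ['ubiquitous']
```

A real system could then pop up a definition or translation for each flagged word, which is the behavior the Copenhagen researchers describe.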
References:
http://www.eyetracking.com/About-Us/What-Is-Eye-Tracking
http://phys.org/news/2016-10-eyetracking-language-technology-readers.html
Friday, October 28, 2016
Friday, October 21, 2016
New Fetal Scan Algorithm!
As we all know, fetal development is usually monitored through ultrasounds. This is an inexpensive, portable method that assesses blood flow in the placenta. But what if there was another method that had more diagnostic benefits?
Well, turns out there is! Researchers from MIT's Computer Science and Artificial Intelligence Laboratory are working with researchers from Boston Children's Hospital and Massachusetts General Hospital on a new project using MRIs to evaluate fetal health. The MRI can measure oxygen absorption rates in the placenta and identify placental disorders that could put the fetus in danger.
An ultrasound looks at fetal growth and velocities of waveforms in the umbilical arteries, but these do not directly measure the function of the placenta. Researchers are attempting to come up with a method to assess spatiotemporal function of the placenta so that if they have to intervene, they can do so before the placenta fails.
The researchers have created an algorithm that analyzes MRI scans and corrects for the small motions that occur during a scan, taking into account the possible movement of the baby. It tracks organs through a sequence of MRI scans. An MRI image is made up of hundreds of 2D cross sections of the body, which together form a 3D image. To measure the chemical changes, scientists analyze sequences of about 300 of these 3D images frame by frame. But doctors cannot control the movements of the fetus during the MRI, and the fetus can shift dramatically between frames, which makes frame-by-frame comparison hard. This is where the new algorithm comes in...
The algorithm creates a mathematical function that maps the pixels of the first frame onto the second frame, then onto the third, and so on. The end result is a series of functions that describe the movement across the scan. Once the computer has calculated them, a human draws boundaries around organs and other regions of interest in the first frame only, and the computer uses those functions to track the boundaries through all the following frames. This saves humans a lot of time and really tough work! One of the major problems with MRIs is dealing with motion, so the ability to account for it and still use MRI analysis is a huge step!
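To make the pixel-mapping idea concrete, here is a heavily simplified Python sketch: estimate how far one frame shifted relative to the previous one, then carry a hand-drawn boundary along by the same amount. The real algorithm handles far richer deformations than whole-pixel translation; this toy only conveys the flavor.

```python
# Minimal sketch of frame-to-frame mapping: find the translation that
# best aligns frame b to frame a, then move a hand-drawn boundary by
# the same amount. Frames are plain 2D lists; all data is invented.

def frame_shift(a, b, search=3):
    """Find (dy, dx) such that b[y+dy][x+dx] best matches a[y][x],
    minimizing summed squared differences (out-of-bounds reads as 0)."""
    h, w = len(a), len(a[0])
    best_ssd, best = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ssd = 0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    bv = b[yy][xx] if 0 <= yy < h and 0 <= xx < w else 0
                    ssd += (a[y][x] - bv) ** 2
            if best_ssd is None or ssd < best_ssd:
                best_ssd, best = ssd, (dy, dx)
    return best

def move_boundary(points, shift):
    """Carry a boundary drawn on frame a over to frame b."""
    dy, dx = shift
    return [(y + dy, x + dx) for y, x in points]

# A bright spot at (1, 1) in frame a has moved to (2, 3) in frame b:
a = [[0] * 5 for _ in range(5)]; a[1][1] = 9
b = [[0] * 5 for _ in range(5)]; b[2][3] = 9
s = frame_shift(a, b)
print(s, move_boundary([(1, 1)], s))  # (1, 2) [(2, 3)]
```

The point of the real work is exactly this division of labor: the computer finds the mappings, and the human only annotates the first frame.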
Check out this MRI clip of a fetus! The algorithm made it possible to correct for small movements of the fetus! So rad! --> https://www.youtube.com/watch?v=djJnsC_CddI&feature=youtu.be
References:
http://phys.org/news/2016-10-algorithm-fetal-scans-interventions-warranted.html
https://kingsimaging.wordpress.com/page/2/
Friday, October 14, 2016
IBM Watson
My dad works for IBM, so when 60 Minutes did a segment on IBM Watson, he made the whole family gather around the TV to watch. He was being a big time nerd about it, but it actually was really cool.
So what can Watson do? Well, this computer can answer questions, quickly identify key info from any document, and find patterns and relationships across data. It can even analyze unstructured data, which makes up 80% of all data and includes things like news articles and social media posts. This is what makes Watson unique: it uses natural language processing to understand grammar and context (huge deal) in order to analyze unstructured data.
How does Watson learn all of this? To learn a subject, all related material is loaded into Watson: Word documents, PDFs, and webpages. Watson is trained through question-and-answer pairs and is updated as new information is published.
This body of information is called the "corpus", and it is curated with the help of humans. Watson pre-processes the data, building indices and other metadata to create a knowledge graph, which makes working with the content more efficient. Once the data has been organized, Watson continues to learn through ongoing interactions with users. This is called "machine learning".
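A knowledge graph at its simplest is just entities connected by labeled relations. Here is a toy Python version, with invented medical entities, traversed breadth-first; Watson's real graph is vastly larger and richer.

```python
# Toy knowledge graph: entities linked by labeled relations, stored
# as an adjacency structure. Entities and relations are invented.
from collections import deque

graph = {
    "melanoma": [("is_a", "cancer"), ("treated_by", "immunotherapy")],
    "immunotherapy": [("has_side_effect", "fatigue")],
    "cancer": [("has_symptom", "weight loss")],
}

def related(entity, depth=2):
    """Walk outgoing relations up to `depth` hops, breadth-first,
    collecting every (source, relation, target) fact found."""
    seen, facts = {entity}, []
    queue = deque([(entity, 0)])
    while queue:
        node, d = queue.popleft()
        if d == depth:
            continue
        for relation, target in graph.get(node, []):
            facts.append((node, relation, target))
            if target not in seen:
                seen.add(target)
                queue.append((target, d + 1))
    return facts

print(related("melanoma"))
```

Indexing facts this way is what lets a system hop from a disease to its treatments to their side effects without re-reading the source documents.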
Watson parses the parts of speech in a question and generates a hypothesis. Then, it searches millions of documents to find thousands of possible answers to support or refute the hypothesis. Next, Watson uses an algorithm to rate the quality of the evidence it finds. These steps sound pretty similar to the scientific method we use to answer a question, right? Except Watson does it much, much faster than a human ever could.
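The generate-and-score loop can be caricatured in a few lines of Python: propose candidate answers, then rate each by how much "evidence" supports it. The documents and scoring rule below are invented toys, nothing like Watson's actual algorithms.

```python
# Toy generate-and-score loop: each candidate answer is scored by how
# many "documents" (plain strings here) mention both the candidate and
# a key term from the question. Purely illustrative.

def score_candidates(question_terms, candidates, documents):
    scores = {}
    for cand in candidates:
        evidence = 0
        for doc in documents:
            text = doc.lower()
            if cand.lower() in text and any(t.lower() in text for t in question_terms):
                evidence += 1
        scores[cand] = evidence
    # The candidate with the most supporting evidence wins.
    return max(scores, key=scores.get), scores

docs = [
    "Paris is the capital of France.",
    "France borders Spain.",
    "Lyon is a city in France.",
]
best, scores = score_candidates(["capital"], ["Paris", "Lyon"], docs)
print(best, scores)  # Paris {'Paris': 1, 'Lyon': 0}
```

Watson's real evidence scoring combines hundreds of statistical features, but the shape of the loop, hypothesize then weigh evidence, is the same.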
Why does this matter? Who cares if a computer can beat some humans in Jeopardy? Well, Watson has a much more important role, and that is in the field of medicine. Health care data doubles every two years because new studies and trials are always being done. Doctors would have to read 29 hours a workday to keep up with the new information. Watson can read 200 million pages of text in 3 seconds. That is incredible. Then, it can use learning algorithms to find patterns in that massive amount of data. This is huge, particularly for cancer patients. Watson has tons of information about cancer, which it can trace through its indices to find types of cancer, symptoms, treatments, side effects, and more, and match the treatment that best fits a specific patient. In 99% of cases, Watson chose the same treatment the doctors would have, but much faster. And in 33% of those cases, Watson found something new. This is a huge, and very exciting, development that could change everything in the world of medicine.
Check out the 60 Minutes segment here:
http://www.cbsnews.com/news/60-minutes-artificial-intelligence-charlie-rose-robot-sophia/
References:
http://www.ibm.com/watson/what-is-watson.html
http://www.cbsnews.com/news/60-minutes-artificial-intelligence-charlie-rose-robot-sophia/
Friday, October 7, 2016
Are You a Gamer?
Calling all gamers! Have you ever wondered how video games actually work? If so, this is the blog for you!
Whether you're playing Call of Duty, Super Mario Brothers, or Cooking Mama, your game is running on a "game loop": a piece of code that runs over and over again, possibly hundreds of times per second, to tell the hardware what to draw on the screen.
The loop has three main stages: update player input, update the game world, and tell the graphics card what to render.
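Those three stages can be sketched as a minimal Python loop. The input, world, and render stages are stubs here, and the 60 FPS figure is just the common target, not anything a particular engine mandates.

```python
# Bare-bones game loop: read input, update the world, render, then
# sleep off the rest of the frame to hold a target frame rate.
# Real engines do the same dance with a real window and GPU; here
# each stage is a stub and the "world" is a tick counter.
import time

TARGET_FPS = 60
FRAME_BUDGET = 1.0 / TARGET_FPS  # ~0.0167 seconds per frame

def process_input(state): ...          # poll controller/keyboard
def update_world(state, dt): state["ticks"] += 1
def render(state): ...                 # hand draw calls to the GPU

def run(frames):
    state = {"ticks": 0}
    for _ in range(frames):
        start = time.perf_counter()
        process_input(state)
        update_world(state, FRAME_BUDGET)
        render(state)
        elapsed = time.perf_counter() - start
        if elapsed < FRAME_BUDGET:
            time.sleep(FRAME_BUDGET - elapsed)  # wait out the frame
    return state

print(run(10)["ticks"])  # 10 -- one world update per frame
```

The key constraint is visible in the `elapsed` check: everything the game does each frame has to fit inside that ~16.7 ms budget, or the frame rate drops.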
The human brain perceives images as separate frames at roughly 10 per second; any faster, and they blur together into continuous motion. At 30 frames per second, the computer has about 0.0333 seconds to produce each frame; at 60, only about 0.0167 seconds. How do video games produce images so quickly? While one image is being shown, the computer is already drawing the next one on the fly! These images are drawn into a framebuffer, a grid of pixel values in memory that gets copied to the screen. And rendering is only part of the job: within that same frame budget, the computer also has to detect and respond to player input, produce sounds, and handle collision detection. Computers work at a speed so much faster than the human brain that they can pull all of this off 60 times a second, which is what makes video game graphics so realistic! So cool!
References:
http://howtomakeanrpg.com/a/how-do-video-games-work-basic-architecture.html