Friday, November 25, 2016

Google Loons

What the heck is a Google Loon? Sounds pretty loony, right? Well, that's where part of the name comes from, because the idea is pretty loony. It's also called Loon because it's based on balloons. What in the world? What is Google up to now?



In order to provide Internet to more rural areas of the world that do not have access, Google has created balloons with equipment for wireless networks that float up in the stratosphere. Users have an internet antenna attached to their building. The balloons communicate with each other through the balloon network, then with equipment on the ground connected to an Internet service provider, then finally to the global Internet. Each balloon is said to provide internet to people on the ground who are within 25 miles, with hundreds of people being able to connect at a time. The data coverage is said to be on par with LTE 4G networks.




These balloons have been designed to withstand many different weather conditions and are solar powered. The Loons also contain GPS tracking devices as well as sensors that monitor the environmental conditions of the atmosphere. Google has developed an algorithm that predicts wind patterns and then steers the balloon accordingly. Google also has an operations system called Mission Control, which sends directions to the balloons every 15 minutes and can alter the path every minute. They use these directions to alter their altitude in the stratosphere so that they can catch winds that are moving in the direction they need to go. So in a way, they are solar and wind powered! Pretty neat!
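
If you're curious what "catching the right wind" might look like in code, here's a tiny Python sketch of the idea: given a made-up forecast of wind directions at a few altitudes, pick the altitude whose wind blows closest to the direction you want the balloon to go. This is just an illustration of the concept, not Google's actual algorithm!

```python
def pick_altitude(wind_by_altitude, target_bearing_deg):
    """Pick the altitude layer whose wind blows closest to the
    direction we want the balloon to travel.

    wind_by_altitude: {altitude_in_meters: wind_bearing_in_degrees}
    """
    def angular_diff(a, b):
        # smallest difference between two compass bearings
        d = abs(a - b) % 360
        return min(d, 360 - d)

    return min(wind_by_altitude,
               key=lambda alt: angular_diff(wind_by_altitude[alt],
                                            target_bearing_deg))

# Made-up wind forecast: wind bearing (degrees) at three stratospheric altitudes
winds = {18000: 90, 19500: 180, 21000: 270}
best = pick_altitude(winds, 175)  # we want to drift roughly south -> 19500
```

In this toy forecast, a balloon that needs to drift south (bearing 175°) would climb or descend to the 19,500 m layer, where the wind happens to blow almost exactly that way.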


References:
https://en.wikipedia.org/wiki/Project_Loon
http://computer.howstuffworks.com/google-loon5.htm

Friday, November 18, 2016

Fake Facebook?

If you have been perusing Facebook at all this election, you may have seen some pretty bizarre articles like "Pope Francis Shocks the World, Endorses Donald Trump for President" or "WikiLeaks CONFIRMS Hillary Sold Weapons to ISIS". Pretty crazy right? Why are these fake articles popping up on your newsfeed?


Well, to start off, companies like Facebook tailor what they share with you based on what they think you like or may be interested in. How would they know what you like? How would they know that what you NEED right now is an online quiz that will tell you what breed of dog you would be? Facebook uses trackers, which "share information intended to record, profile, or share your online activity." There can be as many as 228 trackers watching your internet activity at any time. When you go to a page, a company like Facebook requests to see your activity, and a piece of JavaScript lets Facebook run code in your browser and track you. That code can write cookies and make more requests for information.
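
To make the cookie idea concrete, here's a toy Python simulation of a third-party tracker. Everything here is made up (the class, the page names); it just shows how one cookie ID can tie together your visits across completely different sites.

```python
import uuid

class Tracker:
    """Toy third-party tracker: hands each browser one cookie ID and
    logs every page that embeds its script against that ID."""
    def __init__(self):
        self.visits = {}  # cookie_id -> list of pages seen

    def on_request(self, cookie_id, page_url):
        # First visit from this browser? Set a new cookie.
        if cookie_id is None:
            cookie_id = str(uuid.uuid4())
        self.visits.setdefault(cookie_id, []).append(page_url)
        return cookie_id  # the browser stores this cookie

tracker = Tracker()
cookie = None  # your browser starts with no cookie for this tracker
cookie = tracker.on_request(cookie, "shoes-shop.example/sneakers")
cookie = tracker.on_request(cookie, "social-site.example/feed")

# One cookie ID now links your activity on both sites
profile = tracker.visits[cookie]
```

That single `profile` list is exactly why the sneakers you looked at on one site follow you to another.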

There are a few problems with the way this Facebook algorithm works based on "engagement". First, many think this is an invasion of privacy. It's a little creepy when you are shopping for sneakers online and Facebook gives you a bunch of ads for the same exact sneakers you were just looking at on a different site. However, this is also an incredible way to advertise. If there is an algorithm that can provide companies with information they can use to advertise their product to a specific individual who they know would be interested, why would they not take advantage of that?

Another problem with this "engagement" algorithm is that it creates something called "filter bubbles". Because Facebook can track what you read, they will continue to feed you things they know you will like. As a result, you are only being exposed to information, articles, and opinions you agree with. These "filter bubbles" perpetuate your own biases. If you mostly click on conservative articles, Facebook will slowly stop showing you liberal posts, and vice versa. Also, if you click on one fake article that catches your eye, they might keep feeding you fake articles. Sites that create this fake content continue to do so because they know people are clicking on it-- in fact, they're tracking it.

Spooky!!




References:  
http://www.businessinsider.com/this-is-how-facebook-is-tracking-your-internet-activity-2012-9#curious-how-else-you-are-being-tracked-online-11
https://www.scientificamerican.com/article/facebook-s-problem-is-more-complicated-than-fake-news/
https://www.ted.com/talks/eli_pariser_beware_online_filter_bubbles

Friday, November 11, 2016

GPS

If anyone appreciates a good GPS, it's me. I've lived in my hometown for most of my life and I still need directions sometimes! Some may call it pathetic but I just consider it taking advantage of some pretty cool technology! Woah! How does this technology work?



GPS stands for Global Positioning System. The GPS is actually the 27 satellites that orbit the Earth and the thing we use for directions is the GPS receiver.  The receiver uses a mathematical technique called trilateration, in which it finds at least 4 of the satellites, finds the distance from each, and figures out its location in 3-dimensional space. It creates spheres around designated points, based on where the satellites above you are and how far away from them you are. Where the spheres intersect is where you are (see below). The receiver figures this out by using radio signals from the satellites.



To calculate the distance between the GPS receiver and a GPS satellite, the two devices use timing. At a specific time, the satellite starts transmitting a pseudo-random code, and at the same exact time the receiver starts running the same exact code. When the satellite's code gets to the receiver, it will be lagging behind the receiver's copy. The receiver takes the length of this lag, multiplies it by the speed of light, and determines how far the signal traveled. 
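
Here's a Python sketch of both ideas from the last two paragraphs: turning the signal lag into a distance, and intersecting the distance "spheres" to solve for a position. The satellite coordinates are made up and everything is idealized (no clock error, which real GPS has to correct for), but the math is the real trilateration idea.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lag_to_distance(lag_seconds):
    # the signal traveled at the speed of light for `lag_seconds`
    return C * lag_seconds

def trilaterate(sats, dists):
    """Solve for the receiver position given 4 satellite positions and
    the measured distances to each (idealized: no clock error)."""
    (x0, y0, z0), d0 = sats[0], dists[0]
    n0 = x0*x0 + y0*y0 + z0*z0
    A, b = [], []
    for (xi, yi, zi), di in zip(sats[1:4], dists[1:4]):
        # subtracting one sphere equation from another leaves a
        # linear equation in (x, y, z)
        A.append([2*(xi - x0), 2*(yi - y0), 2*(zi - z0)])
        b.append(d0*d0 - di*di + (xi*xi + yi*yi + zi*zi) - n0)

    def det3(m):
        return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
              - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
              + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

    D = det3(A)
    pos = []
    for col in range(3):  # Cramer's rule, one coordinate at a time
        M = [row[:] for row in A]
        for r in range(3):
            M[r][col] = b[r]
        pos.append(det3(M) / D)
    return tuple(pos)

# Made-up satellite positions (meters) and a receiver secretly at (1, 2, 3)
sats = [(10, 0, 0), (0, 10, 0), (0, 0, 10), (10, 10, 10)]
receiver = (1, 2, 3)
dists = [math.dist(receiver, s) for s in sats]
estimate = trilaterate(sats, dists)  # recovers (1, 2, 3)
```

The receiver only ever sees the four distances, yet the intersection of the four spheres pins down exactly one point.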

The receiver has the predicted location of the satellites at any given time stored in an almanac.  Once the receiver has calculated your location, it plugs the latitude and longitude into map files stored in its memory, making GPS usage more user-friendly. GPS receivers can also track your location as you move, constantly in communication with the satellites. This can provide you with information about your speed, how far you've traveled, how long you've been traveling, your ETA, etc. How rad is it that we can communicate so quickly with satellites that are orbiting the Earth?! The answer is super rad!!



References: 
http://electronics.howstuffworks.com/gadgets/travel/gps.htm
http://stackoverflow.com/questions/33637/how-does-gps-in-a-mobile-phone-work-exactly


Friday, November 4, 2016

Twitter

These days, everyone is connected through some sort of social media. Unless you've been living under a rock, you probably know what Twitter is. This social network allows its users to share their thoughts and feelings in 140 characters with all of their followers. Pretty rad huh? Keep reading to find out how Twitter works to keep us connected!


Twitter's API is based on the Representational State Transfer (REST) architecture. This architecture is more of a philosophy than a strict, written-out plan; it doesn't describe a specific arrangement of resources. It is a collection of methods for addressing and accessing data that allows it to work with most Web syndication formats. Twitter is most compatible with Really Simple Syndication (RSS) and Atom Syndication Format (Atom). This means that the app gathers information from one source and distributes it to various locations; like how it collects data from you (your tweet) and distributes it to other locations (your followers' newsfeeds).

These Web syndication formats only have a few lines of code, which can be embedded in the code of a website. Users can subscribe to receive updates in their "feed" and whenever the administrator updates the web page, the users are notified. Twitter uses this strategy so that you, the user, can receive updates whenever other users update their web page, or tweet. Something that makes Twitter unique is that "by allowing third-party developers partial access to its API, Twitter allows them to create programs that incorporate Twitter's services." Very cool!!
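
To see what "subscribing to a feed" looks like from the code side, here's a short Python sketch that parses a tiny, made-up Atom feed (the entries and handle are invented) and pulls out the updates a subscriber would see:

```python
import xml.etree.ElementTree as ET

# A minimal, made-up Atom feed of the kind a syndication client subscribes to
ATOM = """
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>@example's tweets</title>
  <entry><title>Just learned how REST works!</title></entry>
  <entry><title>Blogging about Twitter today</title></entry>
</feed>"""

NS = {"atom": "http://www.w3.org/2005/Atom"}

root = ET.fromstring(ATOM)
feed_title = root.find("atom:title", NS).text
# each <entry> is one update pushed out to subscribers' feeds
updates = [e.find("atom:title", NS).text
           for e in root.findall("atom:entry", NS)]
```

When the author posts again, a new `<entry>` appears in the feed and every subscriber's client picks it up on the next fetch.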

References:
http://computer.howstuffworks.com/internet/social-networking/networks/twitter2.htm





Friday, October 28, 2016

Eyetracking Data Technology

A few years ago, eye tracking might have seemed like something for spies and secret agents. However, with new technology, there are much more realistic uses for eye tracking. Let's check it out!


First, how does eye tracking work? Well, typically there are 2 components: a light source and a camera. The light source creates a reflection on the eye and the camera detects this reflection, as well as other features of the eye like the pupil. The camera and computer track the rotation of the eye, the direction of the gaze, blink frequency and pupil dilation.

What do they do with this data? The most common thing to do is analyze the visual path, which you can see below. Each piece of data is translated into a set of pixel coordinates, which are compiled and analyzed to find what features were seen, what captures attention, speed of eye movement, etc. All of these things say something about cognitive function and can give insight into industries like advertising, entertainment, web, medicine, and automotive.
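
Here's a toy Python version of one common analysis step: finding "fixations", the spots where the gaze lingers, from a stream of pixel coordinates. The threshold numbers are made up and real eye-tracking software is far more sophisticated; this just shows the flavor of the computation.

```python
def find_fixations(samples, max_dispersion=25, min_length=3):
    """Very simplified dispersion-threshold fixation detector.

    samples: list of (x, y) pixel coordinates, one per camera frame.
    A run of samples that stays inside a small region (dispersion below
    `max_dispersion` pixels) for at least `min_length` frames counts as
    one fixation, reported as its centroid.
    """
    def dispersion(pts):
        xs, ys = [p[0] for p in pts], [p[1] for p in pts]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    def centroid(pts):
        return (sum(p[0] for p in pts) / len(pts),
                sum(p[1] for p in pts) / len(pts))

    fixations, window = [], []
    for pt in samples:
        window.append(pt)
        if dispersion(window) > max_dispersion:
            # the gaze just jumped; close out the run that ended
            if len(window) - 1 >= min_length:
                fixations.append(centroid(window[:-1]))
            window = [pt]
    if len(window) >= min_length:
        fixations.append(centroid(window))
    return fixations

# Gaze lingers near (100, 100), jumps, then lingers near (400, 300)
gaze = [(100, 100), (102, 101), (99, 103), (98, 100),
        (400, 300), (401, 298), (399, 301), (402, 300)]
fixations = find_fixations(gaze)  # two fixations detected
```

Those two centroids are exactly the kind of data points that get plotted as a visual path.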


As we all know, technology is ALWAYS changing and advancing! The University of Copenhagen has applied this technology to language and reading. They are working on technology that analyzes eye movement to reveal which words cause readers problems. The software could provide students with translations and definitions for words that their eyes linger on. However, what is new and different about this technology is that they have created an algorithm that is tailored to the individual reader. In the past, experts would have to annotate the texts by hand and identify parts of speech or words that can be omitted. With the new general model method, this human labor is no longer necessary because the eye tracking technology is like annotating in real time. This is super helpful in education, assisting students of any age with reading and learning. Eye tracking isn't just for spies anymore! So cool!

References:
http://www.eyetracking.com/About-Us/What-Is-Eye-Tracking
http://phys.org/news/2016-10-eyetracking-language-technology-readers.html



Friday, October 21, 2016

New Fetal Scan Algorithm!

As we all know, fetal development is usually monitored through ultrasounds. This is an inexpensive, portable method that assesses blood flow in the placenta. But what if there was another method that had more diagnostic benefits?


Well, turns out there is! Researchers from MIT's Computer Science and Artificial Intelligence Laboratory are working with researchers from Boston Children's Hospital and Massachusetts General Hospital on a new project using MRIs to evaluate fetal health. The MRI can measure oxygen absorption rates in the placenta and identify any placental disorders that could put the fetus in danger.
An ultrasound looks at fetal growth and velocities of waveforms in the umbilical arteries, but these do not directly measure the function of the placenta. Researchers are attempting to come up with a method to assess spatiotemporal function of the placenta so that if they have to intervene, they can do so before the placenta fails.   

The researchers have created an algorithm that analyzes MRI scans and corrects for very small motions that can occur during an MRI, taking into account the possible movement of the baby. It tracks organs through a sequence of MRI scans. An MRI image is made up of hundreds of 2D cross sections of the body, which put together become a 3D image. To measure the chemical changes, scientists analyze sequences of about 300 of these 3D images frame by frame, image by image. Because the fetus can move dramatically between frames in ways doctors cannot control, the frames are not easy to compare directly. This is where the new algorithm comes in...


The algorithm creates a mathematical function that maps the pixels from the first frame to the second frame, then to the third, etc. The end result is a series of mathematical functions that describe the movement of the scan. After the computer has calculated these functions, a human draws boundaries around organs and things of interest in the first frame only; the computer then uses the functions to carry those boundaries through all the following frames. This saves humans a lot of time and really tough work! One of the major problems with MRIs is dealing with motion, so the ability to account for that and still be able to use MRI analysis is a huge step! 
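
Here's a very simplified Python sketch of that last idea: if the computer has recovered a mapping between each pair of consecutive frames (here just made-up translations; real fetal motion needs much richer, non-rigid mappings), it can carry a boundary drawn on the first frame through all the later frames automatically.

```python
def compose(f, g):
    # mapping from an earlier frame to a later one: apply f, then g
    return lambda p: g(f(p))

def make_translation(dx, dy):
    # stand-in for the per-frame mapping the registration algorithm
    # recovers; real fetal motion is non-rigid, not a simple shift
    return lambda p: (p[0] + dx, p[1] + dy)

# Hypothetical recovered motions between consecutive MRI frames
frame_maps = [make_translation(2, 0), make_translation(0, -1),
              make_translation(1, 1)]

# A clinician outlines an organ once, on the first frame only
boundary = [(10, 10), (12, 10), (12, 13), (10, 13)]

# The computer carries that outline through every later frame
outlines = [boundary]
mapping = lambda p: p  # identity: frame 0 maps to itself
for fmap in frame_maps:
    mapping = compose(mapping, fmap)
    outlines.append([mapping(p) for p in boundary])
```

One hand-drawn outline, four frames' worth of boundaries: that's the time savings in miniature.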

Check out this MRI clip of a fetus! The algorithm made it possible to correct for small movements of the fetus! So rad! --> https://www.youtube.com/watch?v=djJnsC_CddI&feature=youtu.be

References: 
http://phys.org/news/2016-10-algorithm-fetal-scans-interventions-warranted.html
https://kingsimaging.wordpress.com/page/2/

Friday, October 14, 2016

IBM Watson

My dad works for IBM, so when 60 Minutes did a segment on IBM Watson, he made the whole family gather around the TV to watch. He was being a big time nerd about it, but it actually was really cool.


So what can Watson do? Well, this computer can answer questions, quickly identify key info from any document, and find patterns and relationships across data. It can even analyze unstructured data, which makes up 80% of all data and includes things like news articles and social media posts. This is what makes Watson unique: it uses natural language processing to understand grammar and context (huge deal) to analyze this unstructured data.

How does Watson learn all of this? To learn a subject, all related material is loaded into Watson: word documents, PDFs and webpages. Watson is trained through question and answer pairs and is automatically updated as new information is published.

This body of information is called the "corpus", which is created with the help of humans. The data is pre-processed by Watson; it builds indices and other metadata to create a knowledge graph (see below), which makes working with the content more efficient. Once the data has been organized, Watson continues to learn through ongoing interactions with users. This is called "machine learning".



Watson takes parts of speech in a question and generates a hypothesis. Then, it searches millions of documents to find thousands of possible answers to support or refute the hypothesis. Next, Watson uses an algorithm to rate the quality of the evidence it finds. These steps sound pretty similar to the scientific method we use to answer a question right? Except Watson does it much, much faster than a human ever could.
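
Here's a toy Python sketch of the hypothesis-and-evidence idea. It is nothing like Watson's real scorers, which run many algorithms at once over millions of documents; this just shows the shape of the loop: generate candidate answers, gather supporting documents, score the evidence, pick the winner.

```python
def score_candidates(question, candidates, documents):
    """Score each candidate answer by how well the documents that
    mention it overlap with the question's keywords. (A made-up,
    drastically simplified stand-in for real evidence scoring.)"""
    keywords = set(question.lower().split())
    scores = {}
    for answer in candidates:
        score = 0
        for doc in documents:
            words = set(doc.lower().split())
            if answer.lower() in words:
                # evidence quality ~ keyword overlap of the supporting doc
                score += len(keywords & words)
        scores[answer] = score
    best = max(scores, key=scores.get)
    return best, scores

# A tiny made-up corpus
docs = [
    "aspirin is a common treatment for headache pain",
    "headache pain often responds to rest and hydration",
    "aspirin thins the blood",
]
best, scores = score_candidates("treatment for headache",
                                ["aspirin", "rest"], docs)
```

Even in this tiny example, "aspirin" wins because its supporting document overlaps the question on three keywords while "rest" overlaps on only one.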



Why does this matter? Who cares if a computer can beat some humans in Jeopardy? Well, Watson has a much more important role, and that is in the field of medicine. Health care data doubles every 2 years because there are always new studies and trials being done. Doctors would have to read 29 hours a workday to keep up with the new information. Watson can read 200 million pages of text in 3 seconds. That is incredible. It can read a million books per second. Then, it can use learning algorithms to find patterns in that massive amount of data. This is huge, particularly for cancer patients. Watson has tons of information about cancer which it can trace through its indices (see below) to find different types of cancer, symptoms, treatments, side effects, etc. to find the treatment that best fits a specific patient. In 99% of cases, Watson chose the same treatment that doctors would've, but much faster. And in 33% of those cases, Watson found something new. This is a huge, and very exciting, development that could change everything in the world of medicine.



Check out the 60 minutes segment here:
 http://www.cbsnews.com/news/60-minutes-artificial-intelligence-charlie-rose-robot-sophia/

References:
http://www.ibm.com/watson/what-is-watson.html
http://www.cbsnews.com/news/60-minutes-artificial-intelligence-charlie-rose-robot-sophia/
       

Friday, October 7, 2016

Are You a Gamer?

Calling all gamers! Have you ever wondered how video games actually work? If so, this is the blog for you!

Whether you're playing Call of Duty, Super Mario Brothers, or Cooking Mama, your game is running on a "game loop". This is a piece of code that runs over and over again, possibly hundreds of times per second, to tell the hardware what to draw on the screen. The image below shows how the computer updates the screen:



The three main stages of the loop are: update player input, update the game world, and tell the graphics card what to render. An image of this code is shown below:
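
In Python, a bare-bones version of those three stages might look like this (all the functions are made-up stand-ins, not any real engine's code):

```python
def get_player_input():
    # stand-in: a real game polls the keyboard/controller here
    return {"jump": False}

def update_world(state, inputs, dt):
    # advance physics, AI, scores... here we just move the player rightward
    state["x"] += state["speed"] * dt
    return state

def render(state):
    # stand-in: a real game hands draw calls to the graphics card here
    pass

def game_loop(frames=60, fps=30):
    state = {"x": 0.0, "speed": 10.0}
    dt = 1.0 / fps  # ~0.0333 s budget per frame at 30 fps
    for _ in range(frames):
        inputs = get_player_input()              # 1. update player input
        state = update_world(state, inputs, dt)  # 2. update the game world
        render(state)                            # 3. render the frame
    return state

final = game_loop()  # 60 frames at 30 fps = 2 simulated seconds of movement
```

A real engine also measures how long each pass actually took and sleeps or catches up accordingly, but the skeleton is exactly this.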


The human brain and eye perceive images individually at 10-20 frames per second. Any more, and the images blend into continuous motion and the different frames are not perceived! At 30 frames per second, the computer has .0333 seconds to produce each frame. At 60 frames per second, the computer only has .01666 seconds.  How do video games produce images so quickly? While one image is being shown, the computer is producing the next image on the fly! These images are drawn into something called a framebuffer, a large grid of pixels in memory that gets displayed on the screen.  When the computer is spending most of its 60-frames-a-second budget on graphics, that can leave only about .005 seconds to detect and respond to player input, produce sounds, and work on collision detection. Computers work at a speed so much faster than the human brain, which allows computers to make video game graphics so realistic! So cool!

 References:
http://howtomakeanrpg.com/a/how-do-video-games-work-basic-architecture.html

Friday, September 30, 2016

Animation


No matter how old we get, we always love a good Disney/Pixar movie. Animation has come a long way over the years, but how does this process work? How do we get our favorite characters like Woody, Mike Wazowski, and Nemo?



First, an artist draws the characters. Once the characters have been drawn, animators use the computer to draw a skeleton inside of the character's body. When they move the skeleton, the software makes the body move with it. Initially, the characters are just a wireframe, without color or texture, made up of individual cubes and spheres. Then they are given as many as 100 hinges, called "avars", that animators use to make the character move, which you can see below. Animators use computer software that allows them to move the characters, almost like puppets, into key positions or poses. Then the computer creates the frames in between the key frames to connect them. The characters' movements are programmed into the computer, transforming the 2D still pictures into 3D moving characters. It starts as a rough cut and then the images get smoothed out to become more fluid.
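
That "computer creates the frames in between" step is classic interpolation. Here's a tiny Python sketch with made-up avar values: two key poses go in, the in-between frames come out.

```python
def lerp(a, b, t):
    # linear interpolation between two values, t in [0, 1]
    return a + (b - a) * t

def inbetween(key_a, key_b, steps):
    """Generate the frames between two key poses.
    A pose is a dict of avar name -> value (e.g. a joint angle)."""
    frames = []
    for i in range(1, steps + 1):
        t = i / (steps + 1)  # fraction of the way from pose A to pose B
        frames.append({avar: lerp(key_a[avar], key_b[avar], t)
                       for avar in key_a})
    return frames

# Two hypothetical key poses for an arm wave, three in-between frames
pose_wave_down = {"shoulder": 0.0, "elbow": 10.0}
pose_wave_up   = {"shoulder": 40.0, "elbow": 90.0}
tween = inbetween(pose_wave_down, pose_wave_up, 3)
```

Production software uses smooth curves rather than straight lines so motion eases in and out, but the idea is the same: the animator sets the endpoints, the computer fills the gap.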



There is also specialized software that synchronizes the character's mouth with the dialogue where the technician works one syllable at a time to choose the mouth shape that best fits. They also cut and paste the characters onto the backgrounds. Once they create the original image, they use shading, lighting, and other techniques to make the animation more realistic. You can see the differences below.

This is the original animation:















Followed by the shading:















Then finally the lighting:














Rendering is when all the information that makes up a shot (the lighting, texture, etc.) is translated to make a frame. Pixar's program for rendering is called RenderMan (creative!). This program "draws" the finished image by computing every pixel of the image. It takes approximately 6 hours to render a single frame. In a Pixar film, there are an average of 24 frames per second (fps). For an average film, there are almost 130,000 frames, which is why it takes 3-4 years to create these animated films.
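
The arithmetic here is fun to check in Python. Note that the 780,000 figure is machine-hours; studios spread the work across a big render farm of many computers running in parallel, which is how it fits into 3-4 years at all.

```python
# Back-of-the-envelope numbers from the post
hours_per_frame = 6
fps = 24
frames = 130_000

total_render_hours = frames * hours_per_frame  # 780,000 machine-hours
runtime_minutes = frames / fps / 60            # roughly 90 minutes of film
```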

Check out this cool video that shows the steps in the animation process!
https://www.youtube.com/watch?v=Z_V752_-8F0&feature=youtu.be

References:
https://www.youtube.com/watch?v=0g1eb8O9j1M
http://pixar-animation.weebly.com/pixars-animation-process.html

Friday, September 23, 2016

Streaming Video

In today's day and age, video streaming is very popular. People watch Netflix on their computers and surf YouTube for funny Vines. How do these videos stream to your laptop and phone?

The process of streaming video is time-based and multi-part. There are three parts: the encoded bits (compressed with a codec such as AAC for audio), the container that holds the encoded video data together (such as FLV or MP4), and the transport that moves it from the server to the player (such as RTMP). Because video files are so large, they are broken down into smaller pieces and sent individually to their specific location, and a playlist file (M3U8) tells the player the order in which to play the stream. The data gets where it needs to be using rules called protocols, which say how the data will travel from one device to another, for example HTTP.


Video starts out as raw files: high-quality digital files that have not been compressed or distorted in any way. However, as I said, video files are very large. While they break the data into parts, they also make the files smaller. Two ways this can be done are:
1. Making the picture smaller so it doesn't fill the whole screen. You can see this when you are watching a video; you lose quality when you make it full screen. Below you can see Dwight from The Office, both in the smaller frame and stretched to full screen. Although it is not super apparent in these photos, this stretching to full screen does affect the quality when you're watching.
2. Reducing the frame rate. A video is a series of still images, so you can reduce the number of total images so that it takes less data to recreate them. Sometimes you can see videos flicker because your eyes and brain can sense the transition between pictures.




Making files smaller requires a codec, compression/decompression software, which discards all unnecessary data and lowers the resolution. This reduction of quality depends on a number of factors, one of which is bitrate, the speed of transfer from the server to the computer. For example, the bitrate of a TV broadcast is 240,000 kilobits per second, whereas dial-up internet is 56 kilobits per second. You can also create files that stream differently at different transfer rates, which is called multibitrate encoding. After going through this complex, efficient process of streaming, your computer discards the data as you watch. Who would've thought all this work goes into streaming The Office on Netflix!
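
A quick Python sanity check on why bitrate matters so much. The 4,000 kbps "HD" figure is just an illustrative number I picked; the 56 kbps dial-up figure is from above.

```python
def size_megabytes(bitrate_kbps, seconds):
    # bits -> bytes (divide by 8), kilobytes -> megabytes (divide by 1000)
    return bitrate_kbps * seconds / 8 / 1000

# A 2-minute clip at a hypothetical 4,000 kbps HD encode...
clip_hd = size_megabytes(4000, 120)
# ...versus what a 56 kbps dial-up line could deliver in the same 2 minutes
clip_dialup = size_megabytes(56, 120)
```

That's 60 MB versus well under 1 MB for the same two minutes, which is exactly why a multibitrate encoder prepares several versions and the player picks the one your connection can keep up with.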

References:
http://computer.howstuffworks.com/internet/basics/streaming-video-and-audio2.htm
https://www.youtube.com/watch?v=AeJzoqtuf-o

Friday, September 16, 2016

Google


So, you want to search for pictures of puppies. Easy, you just type into Google "pics of cute puppies" and then it gives you tons of results, showing you the cutest puppies you've ever seen in your life. But how does Google work?

Well, Google uses an algorithm to scan through information and find keywords. The programs that do this are called spiders (lol Go Spiders!) or crawlers. Search engines in general use these spiders to create indexes of keywords. A spider will scan a page, then follow links to other pages with the same keywords, and keep tracking the pages it finds to build an index. Indexes are built with a method called hashing, a formula that assigns a numerical value to each word that is indexed. This process of scanning pages and following their links is called Web crawling. The spiders start at more popular sites, then branch out from there to other links.
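
Here's a miniature Python version of keyword indexing. The page names and text are made up, and Python's dict is itself hash-based, which nicely echoes the hashing idea: each word hashes to a bucket listing the pages that contain it.

```python
def build_index(pages):
    """Tiny inverted index: maps each word to the set of pages
    containing it. (The dict lookup is the 'hashing' step.)"""
    index = {}
    for url, text in pages.items():
        for word in set(text.lower().split()):
            index.setdefault(word, set()).add(url)
    return index

# A made-up, three-page web
pages = {
    "puppies.example": "cute puppies playing in the park",
    "dogs.example": "dog training tips for cute puppies",
    "cats.example": "cats napping in the sun",
}
index = build_index(pages)
results = index["puppies"]  # every page matching the keyword, instantly
```

Answering a query is then just a dictionary lookup, which is why a search over millions of pages can come back in a fraction of a second.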


Search engines like Google index hundreds of millions of pages and respond to tens of millions of queries every day. What makes Google unique is that it ranks the results based on how many times keywords show up and how long the webpage has existed. At the beginning, Google's system used 3 spiders at once, each of which could keep 300 connections to web pages open at one time. Using 4 spiders, Google could go through 100 pages per second, generating 600 kilobytes of data each second. Incredible! Because content on the internet is always changing, the spiders are always crawling. Computers are rad!

References:
http://computer.howstuffworks.com/internet/basics/search-engine1.htm
http://computer.howstuffworks.com/internet/basics/google1.htm

Friday, September 9, 2016

Fingerprinting

We use our fingerprints so many times a day to unlock our iPhones (or Droids, if you're into that sort of thing). But how does this really work? How can you just put your finger on the home button for a few seconds and then gain access to everything in your phone? We have the world at our fingertips!


Well, the oldest method of fingerprint scanning is optical scanning. This method essentially takes a digital photograph of your fingertip and then uses algorithms to find patterns. The programs look for light and dark areas of the image to identify ridges and lines. The scanner uses LED lights to brighten the image and analyze the data.


The second method is capacitive scanning which is most commonly used today. The capacitors in this method go through a change of charge when a ridge of the fingerprint is pressed against it. Air gaps, or valleys, do not change the charge. The changes in charge are tracked and recorded.

So how can the computer compare and check your fingerprint? Well, each print is analyzed for features called minutiae, where lines in our fingerprint end or split in two. The computer does something similar to connect the dots and measures the distances and angles between the minutiae, creating something like this:

The computer takes this data and uses an algorithm to transform it into a numeric code. It then compares the codes to see if the current fingerprint matches the stored one. To decrease the necessary processing power, the program does not compare the whole finger but rather several minutiae. This allows the process to work more quickly and allows it to work despite smudging or off-centered fingerprints. If the codes match, you gain access! How rad!
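
Here's a toy Python version of that matching step, with made-up minutiae coordinates. Real matchers also compare ridge angles and handle rotation and partial prints; this only shows the distance-checking idea.

```python
import math

def matches(stored, scanned, tolerance=5.0):
    """Crude minutiae check: every stored minutia must have some
    scanned minutia within `tolerance` pixels of it."""
    for sx, sy in stored:
        if not any(math.dist((sx, sy), (qx, qy)) <= tolerance
                   for qx, qy in scanned):
            return False
    return True

# Hypothetical minutiae (x, y) extracted from the enrolled print...
enrolled = [(30, 40), (55, 80), (90, 25)]
# ...a later scan of the same finger, slightly shifted by a pixel or two...
scan_ok = [(32, 41), (54, 78), (91, 27)]
# ...and a scan of somebody else's finger entirely
scan_bad = [(10, 10), (200, 150), (90, 25)]
```

The tolerance is what lets the real thing shrug off smudges and slightly off-center presses: the points don't have to land exactly, just close enough.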


References:
http://www.androidauthority.com/how-fingerprint-scanners-work-670934/
http://computer.howstuffworks.com/fingerprint-scanner4.htm
http://www.explainthatstuff.com/fingerprintscanners.html

Friday, September 2, 2016

Robotic Surgery

Robotic surgery is becoming more and more common in the medical field. This is because the machines are more precise and flexible than the human hand. Robotic surgery reduces chance of infection, decreases recovery time, and is less invasive, leaving the patient with much less scarring and discomfort.




The robots function with the da Vinci Surgical System, which provides a magnified vision system and gives surgeons a 3D, HD, 360-degree view inside of the patient’s body; a view that they wouldn’t be able to get with the naked eye. It allows the surgeon’s hand movements to be translated into smaller, more precise motions controlling the small surgical instruments inside the body. One of these instruments is a camera with a light on the end, which sends the image to a video monitor in the operating room. The camera and other mechanical arms with dime-sized tools are controlled by the surgeon at a computer console next to the operating table. The surgeon controls the tools with hand and foot controls that move the robotic arms attached to the surgical instruments, while another surgeon is at the operating table to ensure the correct placement of the instruments. The robotic arms are much more steady, and are able to reach places human hands wouldn’t.
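
Two of the ideas above, scaling big hand motions down into tiny tool motions and steadying out hand tremor, are easy to sketch in Python. All the numbers here are made up for illustration; real systems use far more sophisticated filtering.

```python
def scale_and_smooth(hand_positions, scale=0.2, window=3):
    """Turn a surgeon's hand path into a tool path: shrink every
    movement by `scale` and smooth jitter with a moving average."""
    scaled = [(x * scale, y * scale) for x, y in hand_positions]
    smoothed = []
    for i in range(len(scaled)):
        chunk = scaled[max(0, i - window + 1): i + 1]
        sx = sum(p[0] for p in chunk) / len(chunk)
        sy = sum(p[1] for p in chunk) / len(chunk)
        smoothed.append((sx, sy))
    return smoothed

# A hand path moving right with a little up-and-down tremor in it
hand = [(0, 0), (10, 1), (20, -1), (30, 1), (40, 0)]
tool = scale_and_smooth(hand)  # one-fifth the size, tremor averaged away
```

A 10-unit hand sweep becomes a 2-unit tool move, and the wobble in the y values nearly cancels out: smaller, steadier, more precise.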




This video shows a surgical robot peeling a grape and then stitching it back together! Such precision! How cool that computer science helped make robots that are better at doing surgery than we are!
https://www.youtube.com/watch?v=0XdC1HUp-rU


References:
http://www.mayoclinic.org/tests-procedures/robotic-surgery/basics/definition/prc-20013988