Created by Matt Richardson
The Descriptive Camera works a lot like a regular camera: point it at a subject and press the shutter button to capture the scene. However, instead of producing an image, this prototype outputs a text description of the scene. Modern digital cameras capture gobs of parsable metadata about photos, such as the camera's settings, the location of the photo, and the date and time, but they don't output any information about the content of the photo. The Descriptive Camera outputs only the metadata about the content.
As we amass an incredible number of photos, it becomes increasingly difficult to manage our collections. Imagine if descriptive metadata about each photo could be appended to the image on the fly: information about who is in each photo, what they're doing, and their environment could become incredibly useful for searching, filtering, and cross-referencing our photo collections. Of course, we don't yet have the technology to make this practical, but the Descriptive Camera explores these possibilities.
This device is pretty cool: a simple camera mount that tracks your movement using a tag placed on your body. It could also be used in the home to keep an eye on small children or the elderly, not just when you don't have an extra person to film you, as they suggest in the video.
To continue the theme started at the Jam session: using an ordinary object to produce sound. This is a piece of bamboo with a slot cut where the phone sits, and the audio resonates through the bamboo. I can't tell the sound quality from the video, but it's a cool idea. (Don't buy it. Make your own!)
An interesting article about a DoD project to help soldiers focus on both a HUD and distant objects. It's a cool idea that allows only screens emitting polarized light to be viewed from short distances in front of the eye. The problem I have with this is their claim that you can focus on the near ground and background at the same time. They make it seem like soldiers will be able to look at their HUD and something out in the battlefield simultaneously. I think this is a limitation of human capabilities; it may just be me, but I can only focus on a small area at a time. Therefore, the helpfulness of the contact lenses comes from the soldier not having to refocus their eyes between a near and a far object, which for me takes about as much time as shifting my eyes from point A to point B.
Granted, the contact lens idea is really cool, because it allows you to view a screen at the distance between your eyes and your sunglasses, which, for a normal human eye, is impossible to focus on.
The idea behind this device is light painting: drawing with a light source while a camera takes a long exposure. The acrylic tube houses a 2-meter LED strip controlled from a PC via XBee. The LEDs flash in a pre-programmed sequence as the camera's shutter opens, and the artist walks at a steady pace holding the tube until the shutter closes. The result is seen above. Also, the creator, The Mechatronics Guy, has made it all open source so you can build your own.
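The creator's actual code isn't reproduced here, but the core idea, stepping through the columns of a pre-programmed image one LED-strip frame at a time while the artist walks, can be sketched roughly like this (the strip size, timing, and transport are all hypothetical stand-ins for the real XBee-linked build):

```python
import time

# Hypothetical sketch of the light-painting idea: each column of a small
# "image" becomes one frame on the LED strip. While the camera's shutter
# is open, the strip steps through the columns as the artist walks, so
# the long exposure accumulates the image in mid-air.

NUM_LEDS = 60          # LEDs on the strip (assumed)
FRAME_DELAY = 0.02     # seconds per column (assumed walking pace)

def image_to_frames(image):
    """Turn a list of columns (each a list of per-LED RGB tuples)
    into the sequence of frames to send, one per time step."""
    return [list(column) for column in image]

def send_frame(frame, transport):
    """Send one frame to the strip. 'transport' is a placeholder for
    the serial/XBee link used in the real build."""
    transport.append(frame)  # stands in for a serial write

def paint(image, transport):
    """Step through the image's columns at a fixed pace."""
    for frame in image_to_frames(image):
        send_frame(frame, transport)
        time.sleep(FRAME_DELAY)

# A tiny 3-column "image": red, green, and blue stripes.
image = [
    [(255, 0, 0)] * NUM_LEDS,
    [(0, 255, 0)] * NUM_LEDS,
    [(0, 0, 255)] * NUM_LEDS,
]
sent = []
paint(image, sent)
print(len(sent))  # one frame per column
```

In the real device the frames would go out over the XBee serial link to the strip's controller; here they just accumulate in a list so the pacing logic can be seen on its own.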
This article is about a new phone composed of three screens that can form a kind of open-ended triangular prism, with a keyboard in there somewhere. The phone can also be folded into the standard phone shape, or manipulated to show two viewing screens. (This is easier to see in the pictures.) The thing I find coolest about this new product is the pliable screens. How could this technology be used once it becomes the norm? Mobile screens that you can roll or fold up and put in your pocket, or wearable screens that are much more advanced than current wearable technology.
This paper was about adding sensors to the infrastructure of the home to do motion sensing, in particular adding them to the air filter of home HVAC systems. The authors installed five sensors to measure changes in total static pressure, which is a measure of the resistance on the HVAC blower. The static pressure is affected by changes in the airflow returning to the unit. Through this, the sensors were able to detect changes in the home such as doors opening and closing and people standing in doorways.
I chose this paper for a few reasons. First of all, this paper was assigned as an additional reading in Mobile Ubiquitous Computing last semester, and I wasn’t able to read it. Another reason was that I like the idea of adding sensors to the infrastructure of an environment instead of to the objects in the environment.
The paper is well written and presented, but there are some obvious limitations to the practical use of the system. It can sense the pressure changes caused by doors being opened and closed, and by adults standing in doorways, but sensing movement within the rooms themselves isn't feasible.
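To give a feel for the kind of signal processing involved, here is a toy sketch (not the paper's actual method, and with arbitrary units and a made-up threshold) of how an event like a door closing might show up as a deviation from a running baseline in static-pressure readings:

```python
# Toy event detection on a static-pressure signal: flag samples that
# deviate from the mean of the previous few readings by more than a
# threshold. The real system fused five pressure sensors and used its
# own classification; this only illustrates the basic idea.

def detect_events(readings, threshold=0.5, window=5):
    """Return indices where a reading deviates from the mean of the
    previous 'window' samples by more than 'threshold'."""
    events = []
    for i in range(window, len(readings)):
        baseline = sum(readings[i - window:i]) / window
        if abs(readings[i] - baseline) > threshold:
            events.append(i)
    return events

# Steady pressure, then a dip when a "door" closes around index 8.
pressure = [10.0] * 8 + [8.5, 8.6, 10.0, 10.0, 10.0]
print(detect_events(pressure))  # the dip at index 8 is flagged
```

Even this crude version shows why room-level motion is hard to sense this way: a person walking around a room barely perturbs the return airflow, so their signal would sit well below any threshold that still rejects noise.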