New camera technologies: vision

I earlier wrote about new camera technologies, concentrating on light field photography. There are many other new camera technologies that deserve to be mentioned.

Is ‘vision’ the next-gen must-have user interface? article tells that IMS Research recently issued a press release questioning whether Apple and the iPad are falling behind competitors in user interface technologies. The industry still wants to know: where will the battle lines be drawn for the next-generation user interface, beyond touch? Will it be gesture, motion, or voice? A growing number of FPGA, DSP and processor companies are now betting their futures on embedded vision. Jeff Bier, president of Berkeley Design Technology, Inc., said, “Thanks to Microsoft’s Kinect (used in Xbox 360), we now have ‘existence proof’ for embedded vision. We now know it works.” Embedded vision is in fact a “classic long-tail story”: there are thousands of applications, and its market is extremely diverse. IMS Research said the market for intelligent automotive camera modules alone was estimated at around $300 million in 2011 and is forecast to grow at an average annual rate of over 30% to 2015. The article also mentions some interesting application examples that are worth going through.

The next killer app: Machines that see article asks: Do embedded processors shape applications, or is it the other way around?
In reality, it works both ways. This is particularly evident in digital-signal-processing-intensive applications, such as wireless communications and video compression. These applications became feasible on a large scale only after the emergence of processors with adequate performance and sufficiently low prices and power consumption. And once those processors emerged, these applications started to take off. Then, the growing market attracted competition and investment.

Image processing has evolved to the point where many things earlier thought to be science fiction, or available only to intelligence agencies, are nowadays widely available. Facebook Facial Recognition: Its Quiet Rise and Dangerous Future article tells that the new facial recognition technology used to identify your friends in photos could have some interesting applications–and some scary possibilities. Facebook uses facial recognition to suggest the names of friends who appear in newly uploaded photos. Fake ID holders beware: facial recognition service Face.com can now detect your age article tells that fake IDs might not fool anyone for much longer, because Face.com claims its new application programming interface (API) can be used to detect a person’s age by scanning a photo. With its facial recognition system, Face.com has built two Facebook apps that can scan photos and tag them for you. The company also offers an API for developers to use its facial recognition technology in the apps they build. Its latest update to the API can scan a photo and supposedly determine a person’s minimum age, maximum age, and estimated age.
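
Face.com has not published how its recognizer works, but every such pipeline starts with plain face detection. Here is a minimal sketch of that first step, using OpenCV’s stock Haar cascade (the input file name is hypothetical); age estimation would then run a separate classifier on each detected face region:

    import cv2

    # Locate faces with OpenCV's bundled Haar cascade -- only the first
    # stage of any tagging or age-estimation pipeline, not Face.com's
    # proprietary recognizer.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    img = cv2.imread("party_photo.jpg")          # hypothetical input file
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    print("found %d face(s)" % len(faces))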

Image editing is nowadays easy, and images are everywhere. Digg pointed to the article Verifeyed uses a camera’s ‘mathematical fingerprint’ to find manipulated images, which tells that image-analysis startup Verifeyed wants to bring a new sense of legitimacy to the world of digital images. Image editing tools like Adobe Photoshop easily allow the creation of fake images with just a few clicks, so as a result, digital images have lost their trustworthiness. Verifeyed plans to solve the problem using its patent-pending technology that is able to certify the originality (or absence of modification) of digital images taken from any device. It uses math (a lot of it) — a product of the founders’ specialty as PhD researchers in the area of applied mathematics. This could be valuable, for example, to insurance companies authenticating claims.
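
Verifeyed’s method is patent pending and unpublished, but the general idea behind “mathematical fingerprint” forensics can be sketched: a camera’s sensor leaves a faint, statistically consistent noise pattern, and spliced-in regions tend to break that consistency. A simplified illustration (block size and outlier threshold are arbitrary choices, and the input file is hypothetical):

    import cv2
    import numpy as np

    # High-pass the image to expose the sensor-noise residual, then flag
    # blocks whose noise level deviates strongly from the image-wide norm.
    img = cv2.imread("claim_photo.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)
    residual = img - cv2.GaussianBlur(img, (5, 5), 0)

    h, w = residual.shape
    B = 64                                     # block size in pixels
    stats = []
    for y in range(0, h - B + 1, B):
        for x in range(0, w - B + 1, B):
            stats.append(residual[y:y+B, x:x+B].std())
    stats = np.array(stats)

    # Blocks more than 3 sigma from the mean noise level are candidates
    # for local manipulation.
    outliers = np.abs(stats - stats.mean()) > 3 * stats.std()
    print("suspicious blocks: %d of %d" % (outliers.sum(), stats.size))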

Are gestures suitable to be used as camera user interface? What if framing a scene with your fingers actually caused photos to be created? Air Camera Concept Shoots When You Pretend to Take a Picture article tells about a clever camera concept “Air Camera” by designer Yeon Su Kim that would make that idea a reality. It consists of two components: a ring-like camera worn on the thumb, and a tension-sensing device worn on the forefinger. If the tension unit senses that you’re making a camera gesture, it triggers the camera to snap a photo. Make a video camera gesture, and it begins recording video!

Actually this idea isn’t very new. It was mentioned a few years back in a TED talk discussing SixthSense technology (time 6:28). Prototype Camera Lets You Shoot Photos by Framing Scenes with Your Fingers article tells that the Air Camera concept may soon become a reality. Researchers at IAMAS in Japan have developed a tiny camera called Ubi-Camera that captures photos as you position your fingers in the shape of a frame. The shutter button is triggered with your opposite hand’s thumb, and the “zoom” level is determined by how far the camera is from the photographer’s face.
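
Neither project documents its firmware, but the control loop is easy to sketch: poll the gesture sensor and grab a frame when a threshold is crossed. A toy Python stand-in, with the tension sensor simulated:

    import random
    import cv2

    TENSION_THRESHOLD = 0.95   # hypothetical normalized trigger level

    def read_tension():
        # Stand-in for the forefinger tension sensor; a real build would
        # read an ADC channel on the ring's microcontroller.
        return random.random()

    cam = cv2.VideoCapture(0)
    shots = 0
    while shots < 3:                          # take a few demo shots, then stop
        if read_tension() > TENSION_THRESHOLD:
            ok, frame = cam.read()            # "camera gesture" detected
            if ok:
                cv2.imwrite("air_shot_%d.jpg" % shots, frame)
                shots += 1
    cam.release()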

78 Comments

  1. Tomi Engdahl says:

    This DIY Camera Takes Short Stories, Not Pictures
    http://www.pcworld.com/article/254452/this_diy_camera_takes_short_stories_not_pictures.html?cid=Newsletter+-+EDN+Fun+Friday

    There’s an old saying that a picture says a thousand words, but one clever modder took that saying to the next level by making it literal. Meet Descriptive Camera, a DIY camera capable of capturing a scene with text rather than a photo.

    Descriptive Camera by Matt Richardson also uses metadata, but it looks at the actual content of the image to describe the scene textually instead of visually.

    Matt made the camera as a way of finding better methods to catalog and search images once they are filed. Now, you will probably be wondering how the heck a camera can translate an image into text and what technology is behind it, but actually, most of the tricky work is done by humans.

    Wait, what?

    Using Amazon Mechanical Turk, every time a photo is taken by the DIY camera, some code sends the image data to a willing worker, who looks at the photo and writes up a description of what is happening in the frame. The text is then sent back in six minutes or less and printed out on a thermal printer, Polaroid style. While the image-to-text transformation is processing, an amber LED light flashes above the camera.
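
    The Mechanical Turk leg can be sketched with today’s boto3 client (the 2012 build used different tooling, and the task URL and reward below are hypothetical):

        import boto3

        # Post the captured photo as a small description task; a worker's
        # answer comes back via list_assignments_for_hit() and goes to the
        # thermal printer.
        mturk = boto3.client("mturk", region_name="us-east-1")

        question = """<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
          <ExternalURL>https://example.com/describe?photo=123</ExternalURL>
          <FrameHeight>400</FrameHeight>
        </ExternalQuestion>"""

        hit = mturk.create_hit(
            Title="Describe this photo in one paragraph",
            Description="Look at the photo and write what is happening in it.",
            Question=question,
            Reward="0.25",
            MaxAssignments=1,
            LifetimeInSeconds=3600,
            AssignmentDurationInSeconds=600,
        )
        print("HIT id:", hit["HIT"]["HITId"])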

    Descriptive Camera, 2012
    http://mattrichardson.com/Descriptive-Camera/

    “The Descriptive Camera is one of our favorite little ‘photography’ projects: instead of insta-printing an image, the ‘camera’ prints a textual description of the captured scene.”
    -Popular Science

  2. Tomi Engdahl says:

    Samsung Galaxy S III announced: 4.8-inch 720p display, eye tracking, available later this month
    http://www.theverge.com/2012/5/3/2996619/samsung-galaxy-s-iii

    Samsung’s touting some fascinating software customizations — an emphasis on “natural interaction,” it says — that’ll debut in the S III. First off, Smart Stay uses the front-facing camera to monitor your eyes and maintain a “bright display for continued viewing pleasure.” S Voice is a Siri-like customized voice recognition system that allows the phone to recognize a variety of commands at any time — as an example, Samsung says you can simply say “snooze” to shut off your alarm when it goes off. S Beam, meanwhile, is an enhanced version of the Android Beam system that launched in Android 4.0 and allows large files to be transferred between phones quickly — a 1GB file within 3 minutes, for instance.

  3. Tomi Engdahl says:

    The Making Of Touchy: A Wearable Human Camera Activated By Touch
    http://thecreatorsproject.com/blog/the-making-of-touchy-a-wearable-human-camera-activated-by-touch

    The merging of human and camera has finally happened, conceptually at least, with a wearable camera called Touchy. Hong Kong artist Eric Siu has come up with a device that goes over the user’s head like a helmet, rendering the wearer blind by having closed shutters over the eyes. The only way they can alleviate this predicament is through human touch—once human contact is made for 10 seconds, the camera is activated, the shutters open, and a picture is taken.

    The project looks at how technology can distance us from each other when it becomes the only means of communication—human contact is replaced with the virtual, the touch of human skin replaced with keyboards and touchscreens. It also re-imagines the idea of the camera, so instead of being a machine that just coldly and distantly captures different environments through its mechanical eye, it becomes humanized.

  4. Tomi Engdahl says:

    IEEE preps cameraphone image-quality test
    http://www.edn.com/article/521715-IEEE_preps_cameraphone_image_quality_test.php?cid=EDNToday_20120507

    The IEEE aims to release within two years a test suite to help consumers assess the picture quality of cellphone cameras. The suite will consist of a variety of metrics, probably simplified to a single score.

    The CPIQ (Camera Phone Image Quality) effort, now IEEE P1858, started in 2007 as a project at the I3A (International Imaging Industry Association). The IEEE acquired the project and related assets from I3A and officially re-launched the project in March.

    To date, the I3A identified fundamental attributes for a test suite, as well as existing standards related to them. The P1858 aims to define methods for measuring and communicating those features to consumers.

    The image attributes the group aims to measure may include depth of field, glare, color consistency and white balance, said an IEEE spokesman. “The group wants to create something like a five-star rating system to let people know what quality a camera phone can deliver—they want to make it real simple for consumers,” he said.
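
    The P1858 metrics were still being defined at the time of writing, but the shape of a single-score system is easy to sketch: measure a few attributes and fold them into one star rating. A toy version with made-up weightings and thresholds, not the CPIQ methodology:

        import cv2
        import numpy as np

        img = cv2.imread("test_chart.jpg")    # hypothetical test shot
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

        # Sharpness: variance of the Laplacian (edge energy).
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()

        # White balance: gray-world deviation between channel means.
        b, g, r = [c.mean() for c in cv2.split(img.astype(np.float32))]
        wb_error = np.std([b, g, r]) / np.mean([b, g, r])

        score = min(sharpness / 200.0, 1.0) * 2.5          # up to 2.5 stars
        score += (1.0 - min(wb_error / 0.2, 1.0)) * 2.5    # up to 2.5 stars
        print("rating: %.1f / 5 stars" % score)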

  6. Tomi Engdahl says:

    Group to test direct-to-brain bionic eye on human patients
    Novel approach from team out of Australia has led to unprecedented success in a developing field
    http://www2.electronicproducts.com/Group_to_test_direct_to_brain_bionic_eye_on_human_patients-article-fajb_bionic_eye_may2012-html.aspx

    The team believes that their solution will effectively provide vision to nearly 85% of those declared clinically blind. This is important because there’s already close to 300 million visually impaired people in the world, so research into how to restore vision for this growing group has long since passed the critical point.

    Bionic eyes are nothing new — they’ve actually been around for quite some time.

    The direct-to-brain bionic eye

    This is what makes the MVG’s approach so unique: The doctors are completely bypassing the complexities that come with trying to reconstruct the workings of the human eye. Instead, what they’ve developed is a pair of glasses that have tiny, high-resolution cameras in them to serve as the eye’s retina. Video recorded by these cameras is sent to a pocket-worn digital processing unit that converts the camera’s recorded video into electrical signals which, in turn, get sent to a microchip implanted directly on the surface of the patient’s visual cortex (located at the back of the brain).

    More specifically, specially written algorithms transform the camera’s image data to a pattern that gets wirelessly transmitted to the micro-sized electrodes on the brain implant chip. Upon receiving these signals, the chip responds with the appropriate voltage, current, and timing to stimulate an image within the visual cortex of the brain.
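
    MVG’s algorithms are unpublished, but the last step of such a chain can be sketched as reducing each camera frame to one stimulation level per electrode (assuming, purely for illustration, a 32 x 32 implant grid with 16 intensity levels):

        import cv2
        import numpy as np

        ELECTRODES = (32, 32)                 # assumed implant grid

        frame = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)
        # One averaged brightness sample per electrode site.
        pattern = cv2.resize(frame, ELECTRODES, interpolation=cv2.INTER_AREA)

        # Quantize 0-255 brightness to 16 stimulation levels.
        levels = (pattern.astype(np.float32) / 255.0 * 15).round().astype(np.uint8)
        print(levels.shape, levels.min(), levels.max())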

    The results right now are rudimentary, black-and-white images, but many believe that this is the start of a very promising, alternative approach to providing sight to the visually impaired.

  7. Tomi Engdahl says:

    Klik App Does Mobile Facial Recognition in Real Time
    http://allthingsd.com/20120510/klik-app-does-mobile-facial-recognition-in-real-time/

    Is taking pictures of your friends on your phone, tagging their names and uploading them just too darn hard? Now there’s a free helper app for that, called Klik, which launches out of testing today on the iPhone.

    Using facial recognition, Klik can identify people even before a photo is taken — if you hold up your phone to take a picture of someone, Klik will guess who it is by hovering that person’s first name over the person’s head. If the app doesn’t get it right, it will give you its top choices and you can teach it to improve. Then Klik helps users share the tagged photos on Facebook, Twitter, email and its own public social network.

    The app is made by Tel Aviv-based Face.com, which already offers a facial-recognition API to 45,000 developers to enable them to do things like unlock users’ computers by recognizing their faces.

    Facebook touched off a privacy backlash last year, especially in Europe, when it enabled automatic photo tag suggestions. And Google has sworn it won’t do mobile facial recognition. Google built such technology, but decided never to release it because of the potential for abuse, Google chairman Eric Schmidt said at D9 last year.

  8. Tomi Engdahl says:

    Surveillance Camera System Searches Through 36 Million Faces In One Second
    http://www.diginfo.tv/v/12-0040-r-en.php

    This surveillance camera system can search through data on 36 million faces in one second. Developed by Hitachi Kokusai Electric, the system can automatically detect a face from either surveillance footage or a regular photo, and search for it.

    The search results are displayed immediately, showing thumbnail images of potential candidates. When a thumbnail is selected, the associated recorded surveillance footage can be viewed, so users can quickly review the person’s actions before and after the image was taken.

    With this system, it’s assumed that faces are turned within around 30 degrees in the horizontal and vertical directions from the camera, and that the faces are at least 40 x 40 pixels in size.
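
    Hitachi’s engine is proprietary, but the headline speed is plausible with brute-force matching of compact face descriptors, which reduces to a single matrix-vector product. A toy sketch with random vectors standing in for real face embeddings:

        import numpy as np

        N, D = 1_000_000, 128                 # gallery size, descriptor length
        gallery = np.random.randn(N, D).astype(np.float32)
        gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

        probe = np.random.randn(D).astype(np.float32)
        probe /= np.linalg.norm(probe)

        scores = gallery @ probe              # cosine similarity to every face
        top = np.argsort(scores)[-10:][::-1]  # ten best candidates
        print(top, scores[top])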

    “We think this system is suitable for customers that have a relatively large-scale surveillance system, such as railways, power companies, law enforcement, and large stores.”

    “We plan to release this system next fiscal year.”

  9. Tomi Engdahl says:

    Overexposed? Thanks to SceneTap, San Francisco bars are now profiling you
    http://venturebeat.com/2012/05/13/scenetap-is-watching/

    Imagine this. You and your girlfriend walk into a neighborhood bar, order a cocktail, and, unbeknownst to you both, a camera above is scanning your faces to determine your age and gender. Your deets are combined with data on other bar patrons and then spit out to looky-loo mobile application users trolling for a good-time venue with the right genetic make-up.

    SceneTap is a maker of cameras that pick up on facial characteristics to determine a person’s approximate age and gender. The company works with venues to install these cameras and track customers. It also makes web and mobile applications that allow random observers to find out, in real-time, the male-to-female ratio, crowd size, and average age of a bar’s patrons. And no one goes unnoticed. “We represent EVERYONE in the venue,” SceneTap proudly proclaims on its website.

    The technology walks the thin line between creepy and fascinating.

    What’s the harm in a little anonymous profiling?

    We do know that SceneTap hopes to put its facial detection cameras to work in other markets. The startup already has its eyes on the retail sector.

  11. Tomi Engdahl says:

    IIDC2–A new dawn for industrial digital video cameras
    http://www.eetimes.com/design/industrial-control/4373102/IIDC2-A-new-dawn-for-industrial-digital-video-cameras-

    The true sign of a successful specification is revisions driven by the industry leaders in concert with the open source community.

    The Industrial Digital Camera 2 specification (IIDC2) is a major revamping of the industry-standard IIDC 1.3* and Digital Camera (DCAM) specifications.

    Digital video camera functionality generally has been bi-modal: consumer digital camcorders generate compressed audio/video streams and follow the Audio Video Control (AV/C, IEC-6188*) specifications. In contrast, instrumentation and industrial digital video cameras generate uncompressed video streams (no audio) and follow the DCAM and IIDC 1.3* specifications. The DCAM and IIDC 1.3* specifications include extensive camera controls, for example, brightness, frame rate, shutter speed and white balance, none of which are included in the AV/C specifications.

    Uncompressed video key for real-time applications

    Digital video cameras for instrumentation and industrial applications are unique because of their focus on uncompressed video, raw frame rate and high resolution. The ability to operate with uncompressed video is critical for real-time applications like security systems and automotive back-up cameras, where latency cannot be tolerated. Latency is introduced by the video compression routines used in video camcorders, webcams and cell phone cameras. For safety-critical applications, which are the most stringent, a maximum latency of 5 milliseconds can be tolerated.
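
    Some quick arithmetic shows why uncompressed streams demand FireWire-class bandwidth and why the latency budget is so tight (example figures, not taken from the specification):

        # A modest monochrome industrial stream:
        width, height, fps, bits_per_pixel = 640, 480, 30, 8
        mbit_per_s = width * height * fps * bits_per_pixel / 1e6
        print("raw stream: %.1f Mbit/s" % mbit_per_s)   # ~73.7 Mbit/s

        # At 30 fps a frame takes 33.3 ms just to arrive, so a 5 ms
        # end-to-end budget leaves no room for compression delay.
        print("frame time: %.1f ms" % (1000.0 / fps))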

    The 1394 Trade Association developed the first digital video camera specification in 1996 as IIDC 1.04, and it was updated to IIDC 1.32 in 2008. The transport mechanism chosen for the early IIDC specifications was IEEE 1394 (FireWire)

    IIDC 1.32 became the basis for many digital video camera open source (Linux) community projects

    In 2009, the Japan Industrial Imaging Association (JIIA) and the 1394 TA initiated a development effort to update the IIDC 1.32 specification to a more “modern” standard which, in contrast, groups all elements of a feature into a contiguous register space that can be implemented in products with less effort (cost). The IIDC2 standard was created to simplify the design of industrial video cameras and to make it easier for personal computers to detect the individual features of a particular digital video camera when it is connected to a PC. The IIDC2 specification is not backward compatible with the IIDC 1.32 specification.

    IIDC2 products will first use the IEEE-1394 transport mechanism at 800 Mbit/s for data transport and power distribution. Future implementations of IIDC2 digital video cameras may make use of Ethernet or USB transport mechanisms.

  12. Tomi Engdahl says:

    Facial Recognition Cameras Peering Into Some SF Nightspots
    http://yro.slashdot.org/story/12/05/20/2346250/facial-recognition-cameras-peering-into-some-sf-nightspots

    “On Friday, a company called SceneTap flipped the on switch enabling cameras installed in around 20 bars to monitor how full the venues are, the mix of men and women, their ages — and to make all this information available live via an iPhone or Android app. Privacy advocates are unimpressed, though, as the only hint that people are being monitored is via tiny stickers on the windows.”

  13. Tomi Engdahl says:

    Why the Leap Is the Best Gesture-Control System We’ve Ever Tested
    http://www.wired.com/gadgetlab/2012/05/why-the-leap-is-the-best-gesture-control-system-weve-ever-tested/

    On Monday, Leap Motion wowed technology enthusiasts with a video of its new gesture-control platform. The video showcased a system of incredible speed and precision, but controlled demos can sometimes oversell a technology’s real-world capabilities.

    Would the Leap 3-D gesture device disappoint us during a real-world hands-on? No — far from it. We were somewhat surprised to discover the Leap is everything portrayed in the Leap Motion video.

    Like the Kinect, the peripheral tracks human body gestures, and translates this movement into corresponding motions on a video display. According to Leap Motion, its input device is 200 times more precise than Kinect or anything else on the market. It’s a bold claim that’s difficult to test. So we sat down with Leap Motion co-founders Michael Buckwald and David Holz to wiggle our fingers at the new device.

  14. Tomi Engdahl says:

    Minority Report-style swishery demoed with cheap webcam
    You’ll be sorry if you flip it off, though
    http://www.theregister.co.uk/2012/05/24/touch_free_computing/

    HPC blog New tech at GTC12 lets punters pretend to be Tom Cruise in Minority Report: opening windows, moving them, closing them, and essentially acting like a cool, futuristic cop.

    Eyesight Mobile Technologies performed an interesting demonstration on the exhibit floor. In the video, marketing director Liat Rostock shows us how the firm’s software, using just a cheap consumer webcam, allows you to control a laptop with hand gestures.

    More details:
    http://www.eyesight-tech.com/

  15. Tomi Engdahl says:

    Mega 12ft interactive electro-whiteboard lures GTC12 punters
    Uses crunchy Nvidia graphics cards
    http://www.theregister.co.uk/2012/05/25/mega_display_hooks_viewers_at_gtc12/

    HPC blog While wandering the exhibit floor at GTC12, my attention was captured by what looked like a massive (12ft x 4ft, 3.66m x 1.22m) electronic whiteboard with fast-moving screens portraying information in lots of different forms. Each window was being created, resized, moved, then closed at high speed without lag or distracting video artifacts. The demonstrator was also able to handwrite callouts and notes without missing a beat. With the hook firmly set in my fish-like mouth, I had to find out more.

    I’ve seen and used a few, older, electronic whiteboard-like things and found them to be on the slow side and a bit clumsy (which, coincidentally, is how I’m usually described). This one uses an interactive camera approach that is very well-executed. You can see a brief video of him putting it through its paces for me here.

    First off, the display surface isn’t magic at all; it’s just a hunk of white material. The images are projected onto the board by projectors suspended above, which are accompanied by cameras that track the movements of the pen over the material.

    The secret sauce is in the software, says Adam Biehler, senior account manager with Scalable Display Technologies. It’s the software that projects the image and, through use of its calibration tools, ensures that the aspect ratios are preserved and the angles are correct.

    Their software also blends the output of two to more than six projectors, making sure that the edges are properly aligned and that overlaps are seamless. It also gives you full control over applications and allows you to capture and preserve screen contents at any point.
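
    The article doesn’t detail Scalable Display’s algorithms, but a standard building block for this kind of calibration is a homography mapping projector pixels to camera-observed positions, so keystone and angle errors can be corrected in software. A minimal OpenCV sketch with illustrative corner points:

        import cv2
        import numpy as np

        # Where the projector's frame corners actually landed, as seen by
        # the calibration camera (illustrative coordinates).
        proj_pts = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])
        cam_pts = np.float32([[102, 87], [1795, 64], [1832, 1012], [75, 990]])

        H, _ = cv2.findHomography(proj_pts, cam_pts)

        # Pre-distort content with the inverse mapping so the projected
        # image lands square on the board.
        frame = np.zeros((1080, 1920, 3), np.uint8)   # content to project
        warped = cv2.warpPerspective(frame, np.linalg.inv(H), (1920, 1080))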

    The software runs on any reasonably configured Windows PC, but it performs best with NVIDIA professional-level Quadro graphics cards (a 5000 model is recommended)

  16. Tomi Engdahl says:

    Facebook rumored to buy facial recognition tech startup Face.com for up to $100 million
    http://thenextweb.com/facebook/2012/05/28/facebook-rumored-to-buy-facial-recognition-tech-startup-face-com-for-up-to-100-million/

    In fact, Face.com has long been rumored to be an acquisition target for Facebook, even way before there was talk about the social networking company going public.

    Founded in 2007, Face.com offers accurate facial recognition software that could help Facebook users identify people in photos faster, both on desktop and mobile.

  17. Tomi Engdahl says:

    Computer program can read human expressions better than humans can
    http://io9.com/5913704/computer-program-can-read-human-expressions-better-than-humans-can

    Ehsan Hoque, a graduate student in MIT’s Affective Computing Group, led a new study designed to improve the way computers read and understand human faces.

    As simple as this study is, it provides data that improves the capacity of computers to read emotion. When updated with this information, the group’s computers can distinguish between delighted smiles and frustrated smiles better than a human can.

    The hope is that, as the researchers gather more data and program the computers with more information on human expressions, these programs can help people with autism or others who have difficulty recognizing facial expressions understand the emotions flitting across other people’s faces. Perhaps, though, if these programs really do understand expressions better than we do, they can help us all.
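
    The study’s actual features (smile dynamics over time) are richer than this, but the classification step can be sketched with any standard learner; the feature vectors and labels below are random stand-ins, not MIT’s data:

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 8))     # per-clip features (stand-ins)
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # 1=delighted, 0=frustrated

        # Train on 150 clips, test on the remaining 50.
        clf = SVC(kernel="rbf").fit(X[:150], y[:150])
        print("held-out accuracy:", clf.score(X[150:], y[150:]))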

  19. Tomi Engdahl says:

    ‘DIY streetview’ camera lets you be Google
    http://news.cnet.com/8301-17938_105-57449205-1/diy-streetview-camera-lets-you-be-google/

    Camera kit from German company Streetview Technology lets you create your own Street View-style images and maps and post them to your Web site.

    http://www.diy-streetview.com/

  20. Tomi Engdahl says:

    PRODUCT FOCUS – Looking to the Future of Vision
    http://www.vision-systems.com/articles/print/volume-16/issue-12/features/looking-to-the-future-of-vision.html

    Thinking ahead to standards updates and the evolving needs of machine-vision systems helps developers and component vendors get a jump on innovation

  21. Tomi Engdahl says:

    Facebook acquires Face.com – stock up 4.7 percent
    http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2012/06/18/BUHV1P413T.DTL

    Facebook Inc., the world’s largest social-networking website, has acquired Face.com, adding technology that enables facial recognition in photos.

    Terms of the deal, announced in a blog post by Face.com on Monday, weren’t disclosed.

    “This transaction simply brings a world-class team and a longtime technology vendor in house.”

  23. Tomi Engdahl says:

    Image sensors evolve to meet emerging embedded vision needs – Part 1
    http://www.edn.com/design/consumer/4375300/Image-sensors-evolve-to-meet-emerging-embedded-vision-needs?cid=EDNToday

    Part 1 looks at examples of embedded vision, and at how the technology transition from elementary image capture to more robust image analysis, interpretation and response has led to the need for more capable image sensor subsystems.

    In Part 2, “HDR processing for embedded vision,” by Michael Tusch of Apical Limited, an EVA member, we discuss the dynamic range potential of image sensors, and the various technologies being employed to extend the raw image capture capability.

  24. Tomi Engdahl says:

    How Many Computers to Identify a Cat? 16,000
    http://www.nytimes.com/2012/06/26/technology/in-a-big-network-of-computers-evidence-of-machine-learning.html?_r=1&pagewanted=all

    Inside Google’s secretive X laboratory, known for inventing self-driving cars and augmented reality glasses, a small group of researchers began working several years ago on a simulation of the human brain.

    There Google scientists created one of the largest neural networks for machine learning by connecting 16,000 computer processors, which they turned loose on the Internet to learn on its own.

    The neural network taught itself to recognize cats

    “You learn to identify a friend through repetition,” said Gary Bradski, a neuroscientist at Industrial Perception, in Palo Alto, Calif.

    While the scientists were struck by the parallel emergence of the cat images, as well as human faces and body parts in specific memory regions of their computer model, Dr. Ng said he was cautious about drawing parallels between his software system and biological life.

    And Google researchers are not alone in exploiting the techniques, which are referred to as “deep learning” models. Last year Microsoft scientists presented research showing that the techniques could be applied equally well to build computer systems to understand human speech.

    “This is the hottest thing in the speech recognition field these days,”
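
    At toy scale, the unsupervised principle behind such “deep learning” systems can be shown with a one-hidden-layer autoencoder that learns to reconstruct unlabeled patches; random data stands in for images here, and Google’s network was of course vastly larger and deeper:

        import numpy as np

        rng = np.random.default_rng(1)
        X = rng.random((1000, 64))           # stand-in 8x8 image patches

        W1 = rng.normal(0, 0.1, (64, 16)); b1 = np.zeros(16)   # encoder
        W2 = rng.normal(0, 0.1, (16, 64)); b2 = np.zeros(64)   # decoder
        lr = 0.1

        for epoch in range(200):
            h = np.tanh(X @ W1 + b1)         # encode
            out = h @ W2 + b2                # decode
            err = out - X                    # reconstruction error
            # Backpropagate the squared-error loss.
            dW2 = h.T @ err / len(X); db2 = err.mean(0)
            dh = err @ W2.T * (1 - h ** 2)
            dW1 = X.T @ dh / len(X); db1 = dh.mean(0)
            W1 -= lr * dW1; b1 -= lr * db1
            W2 -= lr * dW2; b2 -= lr * db2

        print("final reconstruction MSE: %.4f" % (err ** 2).mean())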

  25. Tomi says:

    Microsoft Mulls a Stylus for Any Screen
    http://www.technologyreview.com/news/428521/microsoft-mulls-a-stylus-for-any-screen/

    Adding a camera to a stylus could let it interact with any device—even one without a touch screen.

    Microsoft is considering whether to release a stylus that could, after a software upgrade, interact with almost any existing display or device. Researchers at the company’s Silicon Valley site designed the stylus to good internal reviews and are waiting to hear if the company will continue its development with an eye on testing its potential as a product.

    While styluses are available that work with any touch-screen device, such as an iPad or iPhone, they are relatively inaccurate. True stylus support requires an extra layer of sensors built into a device’s display, which adds costs. If the new Microsoft stylus concept were to become available, it would allow precise stylus use on any display, even on those that aren’t already touch-sensitive.

    Andreas Nowatzyk and colleague Anoop Gupta hit upon the idea of using the grid of pixels that make up a digital display as a navigational system for their backwards-compatible stylus. In their design, a small camera inside the stylus looks down at the display and counts off pixels as they pass by to track its movement. That is fed back to the device via a wireless link, much as a wireless mouse reports its motion to a computer. The way the stylus tracks its motion is similar to the way “smart pens” such as the LiveScribe, a device for aiding note-taking, use a camera to track dots on special paper

    However, for the stylus to work, it also needs to know precisely where on the screen it is at any time. The Microsoft researchers’ solution was to have the related software “massage” the color of the blue pixels in a display so that their pattern of brightness encodes their position; the stylus then knows where it is. “Blue is chosen because the human eye doesn’t have many blue cones in the fovea,” the area of the retina used for our central vision, says Nowatzyk.
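
    Microsoft’s actual pattern design is not public, but the idea can be sketched: give each small tile of the display a unique code, carried as barely visible brightness nudges in the blue channel, which the stylus camera decodes to recover its absolute position. A NumPy illustration with an assumed 16-pixel tile and 14-bit code:

        import numpy as np

        H, W, TILE = 1080, 1920, 16
        frame = np.full((H, W, 3), 128, np.uint8)   # stand-in screen content

        for ty in range(H // TILE):
            for tx in range(W // TILE):
                code = (ty << 7) | tx               # unique 14-bit tile id
                for i in range(14):
                    bit = (code >> i) & 1
                    # +/-2 brightness nudge on one blue pixel per bit
                    # (channel 0 is blue in OpenCV's BGR order).
                    frame[ty * TILE, tx * TILE + i, 0] += 2 if bit else -2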

    The design Nowatzyk and colleagues sketched out should be workable on stand-alone and mobile displays, he says, even very high resolution ones on tablets and phones

    However, researchers would need a new type of image sensor to actually test prototypes. A good quality wireless mouse now uses a compact image sensor with a resolution of 30 by 30 pixels. To work, the new stylus design would require one with a resolution of 512 by 512 pixels to see details as small as a tenth of a millimeter, and to capture images at a relatively high rate to track motion smoothly.

  26. Tomi says:

    Stratfor emails reveal secret, widespread TrapWire surveillance system
    http://rt.com/usa/news/stratfor-trapwire-abraxas-wikileaks-313/

    Former senior intelligence officials have created a detailed surveillance system more accurate than modern facial recognition technology — and have installed it across the US under the radar of most Americans, according to emails hacked by Anonymous.

    Every few seconds, data picked up at surveillance points in major cities and landmarks across the United States are recorded digitally on the spot, then encrypted and instantaneously delivered to a fortified central database center at an undisclosed location to be aggregated with other intelligence. It’s part of a program called TrapWire

    According to a press release (pdf) dated June 6, 2012, TrapWire is “designed to provide a simple yet powerful means of collecting and recording suspicious activity reports.” A system of interconnected nodes spot anything considered suspect and then input it into the system to be “analyzed and compared with data entered from other areas within a network for the purpose of identifying patterns of behavior that are indicative of pre-attack planning.”

    Since its inception, TrapWire has been implemented in most major American cities at selected high value targets (HVTs) and has appeared abroad as well.

  28. Tomi Engdahl says:

    Infrared-Camera Algorithm Could Scan for Drunks in Public
    http://www.wired.com/wiredscience/2012/09/infrared-camera-algorithm/

    Computer scientists have published a paper detailing how two algorithms could be used in conjunction with thermal imaging to scan for inebriated people in public places.

    Alcohol causes blood-vessel dilation at the skin’s surface, so by using this principle as a starting point, the two began to compare data gathered from thermal-imaging scans. One algorithm compares a database of these facial scans of drunk and sober individuals against pixel values from different sites on a subject’s face. A similar method has been used in the past to detect infections, such as SARS, at airports.

    The pair found that, when inebriated, an individual’s nose tends to become warmer while their forehead remains far cooler.
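
    The published algorithms compare per-site pixel values against a trained database; the headline heuristic alone can be sketched in a few lines (the frame is random stand-in data, and the ROI coordinates and threshold are made up):

        import numpy as np

        thermal = np.random.rand(240, 320).astype(np.float32)  # stand-in frame

        # Mean "temperature" over rough nose and forehead regions.
        nose = thermal[120:140, 150:170].mean()
        forehead = thermal[40:60, 140:180].mean()

        # Warm nose relative to forehead -> flag for screening.
        if nose - forehead > 0.15:          # made-up decision threshold
            print("flag for screening")
        else:
            print("no flag")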

    Thermal imaging is already used to spy on potential criminals

  29. Tomi Engdahl says:

    FBI launches $1 billion face recognition project
    http://www.newscientist.com/article/mg21528804.200-fbi-launches-1-billion-face-recognition-project.html

    The Next Generation Identification programme will include a nationwide database of criminal faces and other biometrics

    “FACE recognition is ‘now’,” declared Alessandro Acquisti of Carnegie Mellon University in Pittsburgh in a testimony before the US Senate in July.

    It certainly seems that way. As part of an update to the national fingerprint database, the FBI has begun rolling out facial recognition to identify criminals.

    A handful of states began uploading their photos as part of a pilot programme this February and it is expected to be rolled out nationwide by 2014. In addition to scanning mugshots for a match, FBI officials have indicated that they are keen to track a suspect by picking out their face in a crowd.

    Another application would be the reverse: images of a person of interest from security cameras or public photos uploaded onto the internet could be compared against a national repository of images held by the FBI. An algorithm would perform an automatic search and return a list of potential hits for an officer to sort through and use as possible leads for an investigation.

  30. Tomi Engdahl says:

    Topographic Light Painting Maps Rooms and People in 3-D
    http://www.wired.com/rawfile/2012/08/topographic-light-painting/

    Parviainen’s topographic light paintings circumscribe surfaces and people throughout his house, creating captivating 3-D models in the process. He first started in 2007 by using small LED lights to trace human bodies.

    Light painting, the act of tracing shapes or designs with a light-source during a long camera exposure, is nothing new, of course. Picasso dabbled in it and today people are using light painting in increasingly inventive ways.

    While many light painting projects can become gimmicky, Parviainen’s images build on the format with his own techniques and perspective. He uses a Sony Alpha DSLR-A200 to make the shots and says there is absolutely no post processing. He purposely shoots both JPEG and Raw at the same time so that he can post the JPEGs online without converting them.

  31. Tomi Engdahl says:

    Dark-Energy Camera Starts Taking Pictures
    http://www.wired.com/wiredscience/2012/09/dark-energy-survey/

    The latest, greatest hunt for dark energy has begun, with a massive camera installed on a Chilean mountaintop returning the first of millions of photographs that should help astronomers learn more about the strange forces driving our universe’s evolution.

    The photos were released Sept. 12 by the Dark Energy Survey Collaboration, operators of the 570-megapixel Dark Energy Camera, the most powerful astronomical imager ever built.

    “It works like other digital cameras, only it’s much larger, much more sensitive, and mounted on a large telescope,”

  32. Tomi Engdahl says:

    MIT’s Picard demos emotion monitoring at DESIGNEast, and creeps me out
    http://www.edn.com/electronics-blogs/other/4396719/MIT-s-Dr–Picard-demos-emotion-monitoring-at-DESIGNEast–and-creeps-me-out?cid=Newsletter+-+EDN+Weekly

    Inversion: The signs are looking at you now
    Picard was introduced on stage by Jeff Bier, founder of the Embedded Vision Alliance and president of BDTI. The founding principle of the Alliance is that we’ve only scratched the surface of how embedded vision can be applied.

    In her keynote, Picard demonstrated this clearly, showing how a webcam can be used to detect heart rate by discerning changes in facial color due to the pulsing of the heartbeat.
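
    The principle of webcam pulse detection is well documented: average the green channel over the face region frame by frame, then find the dominant frequency in the heart-rate band. A self-contained sketch with a synthesized 72 bpm signal in place of real video:

        import numpy as np

        fps, seconds = 30.0, 20
        t = np.arange(int(fps * seconds)) / fps
        # Synthetic mean-green trace: 1.2 Hz (72 bpm) pulse plus noise.
        signal = 0.02 * np.sin(2 * np.pi * 1.2 * t)
        signal += np.random.normal(0, 0.01, t.size)

        spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
        freqs = np.fft.rfftfreq(signal.size, d=1.0 / fps)

        band = (freqs > 0.7) & (freqs < 4.0)    # 42-240 bpm
        bpm = 60.0 * freqs[band][np.argmax(spectrum[band])]
        print("estimated heart rate: %.0f bpm" % bpm)   # ~72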

    In another example, Picard demonstrated Affdex, an application of visual analysis that reads emotional states such as liking and attention from facial expressions using a webcam. This gives marketers faster, more accurate insight into consumer response to brands and media.

    One of these ‘intelligent’ things to be Internet-ed is digital signage. Now, instead of advertisers and retailers guessing how many people may have looked at their digital signage ad, they can get measurable results using Intel’s Audience Impression Metrics Suite (Intel AIM Suite).

  33. Tomi Engdahl says:

    Facebook suspends photo tag tool in Europe
    http://www.bbc.com/news/technology-19675172

    Facebook has suspended the facial-recognition tool that suggests when registered users could be tagged in photographs uploaded to its website.

    The move follows a review of Facebook’s efforts to implement changes recommended by the Data Protection Commissioner of Ireland last year.

    Billy Hawkes, who did not request the tool’s total removal, said he was encouraged by the decision to switch it off for users in Europe by 15 October.

    It is already unavailable to new users.

    In December 2011 the Data Protection Commissioner (DPC) gave Facebook six months to comply with its recommendations.

    “Our intention is to reinstate the tag-suggest feature, but consistent with new guidelines. The service will need a different form of notice and consent.”

  34. Tomi Engdahl says:

    How camera makers are getting their design groove on
    http://news.cnet.com/8301-11386_3-57517481-76/how-camera-makers-are-getting-their-design-groove-on/

    A wave of experimentation is sweeping the camera industry as it grapples with the second phase of the digital revolution. Be careful: today’s bold design could be tomorrow’s evolutionary dead end.

    COLOGNE, Germany — A decade ago, a cataclysm rocked the photography business as digital image sensors replaced film.

    It turns out that was just the beginning.

    At the Photokina show here, it was clear a second wave of change is sweeping through the industry. Cameras produced during the first digital photography revolution looked and worked very similarly to their film precursors, but now designers have begun liberating them from the old constraints.

    Three big developments are pushing the changes: a new class of interchangeable-lens cameras, the arrival of smartphones with wireless networking, and the sudden enthusiasm for full-frame sensors for high-end customers.

    Sure, plenty of things remain unchanged. A digital SLR looks much the same as a high-end film SLR, and it accommodates the same lenses. The rules of focal length, aperture, and shutter speed are still in effect.

    But just about everything else is in play — even the question of whether Canon’s dominance will continue. Camera makers, no doubt educated by Kodak’s disastrous inability to cope with the first digital revolution, are pulling out all the stops.

    Ecosystem wars

    Cameras have long had an ecosystem element of their own: the proprietary lens mount used to attach lenses to camera bodies. Today’s SLR (single-lens reflex) cameras continue with the same mounts introduced in the 35mm film era. Two companies, Canon and Nikon, overwhelmingly dominate the SLR market.

    There is an explosion of new lens mounts as camera makers venture beyond the traditional SLR design. New “mirrorless” cameras are spreading across the industry as camera makers try to marry the smaller sizes of compact cameras with the lens flexibility, image quality, and profit margins of SLRs.

    It’s not a coincidence that Nikon and Canon, with the most to lose from anything that eats into SLR sales, were the last to join the mirrorless party.

    The smartphone crisis
    For ordinary people, the biggest change in photography is the arrival of smartphones with respectable if not stellar cameras. Early phone cameras suffered from dismal image quality and performance, but each passing year has shown improvements in resolution, low-light performance, lens quality, and speed. And as the smartphone market has expanded, more people get access to those capabilities.

    Because people always carry their phones, those cameras are the ones increasingly used to document people’s lives photographically. As the phones’ cameras improve, there’s less and less reason to carry a regular camera even for special occasions. And that’s just the first reason camera makers need to be worried.

    The second is that people do things with their smartphone photos — post them to Facebook, edit and share them with Instagram, annotate Evernote documents, digitize business cards so contact information is synchronized with the cloud, feed them into Google to translate a sign in a foreign language. That’s all possible because smartphones, unlike most cameras, are connected to the Internet.

    But wireless networking in the camera industry in general has been conspicuous by its absence, isolating cameras from people’s in-the-moment sharing activities.

    Full-frame frenzy
    As my colleague Lori Grunin has noted, full-frame sensors are suddenly fashionable. These are sensors the size of a frame of 35mm film, 36x24mm, and they’re consequently expensive to build.

    Even as mirrorless cameras push what can be done with smaller sensors and threaten sales of lower-end SLRs, a small but lucrative and prestigious higher-end SLR market has grown.

  35. Tomi Engdahl says:

    Meet the Nimble-Fingered Interface of the Future
    http://www.technologyreview.com/news/429426/meet-the-nimble-fingered-interface-of-the-future/

    A startup uses 3-D cameras to keep track of hands and fingers, enabling more complex gesture control.

    Microsoft’s Kinect, a 3-D camera and software for gaming, has made a big impact since its launch in 2010.

    Now a San Francisco-based startup called 3Gear has developed a gesture interface that can track fast-moving fingers. Today the company will release an early version of its software to programmers. The setup requires two 3-D cameras positioned above the user to the right and left.

  36. Tomi Engdahl says:

    How to Look Inside Any Building in the World
    http://gizmodo.com/5948653

    People take Instagram photos everywhere. Even—especially—in places you can’t easily get into. That’s why it seems like the most natural thing in the world for something like Worldcam to exist.

    Worldcam’s premise is simple: You can search for the locations of buildings or public addresses in any city, and it will show you a list of all the Instagram photos tagged there.

    Worldcam
    http://worldc.am/

  37. Tomi Engdahl says:

    Google confirms it’s buying facial recognition firm Viewdle
    http://news.cnet.com/8301-1023_3-57525666-93/google-confirms-its-buying-facial-recognition-firm-viewdle/

    It’s official — Google’s Motorola Mobility acquires the Ukrainian maker of facial recognition technology that automatically tags photos.

    “Motorola Mobility today announced that it has acquired Viewdle, a leading imaging & gesture recognition company,” a Motorola spokesperson told CNET today.

    Facebook also has made a play in this space, earlier this year buying Face.com along with its Photo Tagger auto-tagging app.

  38. Tomi says:

    Visually control your computer
    http://www2.electronicproducts.com/Visually_control_your_computer-article-FANE_Fujitsu_eye_tracking_Oct2012-html.aspx

    Tired of using a mouse to control your computer screen? New eye-tracking technology developed by Fujitsu could allow PC users to manipulate their computer screens using just their eyes.

    Fujitsu Laboratories discovered a new method for noncontact computer interfacing by using compact, inexpensive cameras and light-emitting diodes (LED) embedded into PCs.

    The system comprises a near-infrared LED, a general-purpose camera already found in a PC, an eye-tracking module with a width of 7 mm, and processing software.

    From the image processing, the pupil and the corneal reflection are detected and their positional relationship is determined; from this, the direction of sight is calculated.

    The corneal reflection method employed uses a camera and an LED that emits near-infrared light.
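
    Fujitsu’s calibration and geometry are more involved, but the corneal-reflection principle can be sketched: locate the glint (the LED’s reflection, the brightest spot) and the pupil (a dark blob), then use the vector between them as the raw gaze measurement. An OpenCV sketch with a hypothetical infrared eye image:

        import cv2

        eye = cv2.imread("eye_ir.png", cv2.IMREAD_GRAYSCALE)

        # Glint: brightest point after light smoothing.
        _, _, _, glint = cv2.minMaxLoc(cv2.GaussianBlur(eye, (5, 5), 0))

        # Pupil: centroid of the dark thresholded region.
        _, dark = cv2.threshold(eye, 40, 255, cv2.THRESH_BINARY_INV)
        m = cv2.moments(dark)
        pupil = (m["m10"] / m["m00"], m["m01"] / m["m00"])

        # The pupil-glint offset changes with gaze direction; a per-user
        # calibration would map it to screen coordinates.
        gaze_vec = (pupil[0] - glint[0], pupil[1] - glint[1])
        print("pupil-glint vector:", gaze_vec)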

  39. Tomi Engdahl says:

    Pillcam uses latest in wireless, imaging, packaging to expose your innards
    http://www.edn.com/design/medical/4397880/Pillcam-uses-latest-in-wireless–imaging–packaging-to-expose-your-innards

    Anyone who’s ever had to endure an endoscopy or colonoscopy knows those tests can be, so to speak, a bitter pill to swallow. Given Imaging’s Pillcam Colon 2 ingestible capsule offers a friendlier alternative for getting a good look at your innards, from stomach to lower intestine, with less chance of mortality. Now there’s an incentive.

    A bit larger than your average vitamin, without the option to blend it in a shake, the Pillcam Colon 2 is a classic exercise in low-power wireless system design, with advanced imaging and novel packaging techniques.

    To use the device, the subject swallows the capsule. The images it captures are sent to a sensor array that is worn strapped to the subject’s chest and that connects to a data recorder. The subject returns the recorder to the doctor the next day and excretes the use-once Pillcam within a couple of days.

  40. Tomi Engdahl says:

    New interactive system detects touch and gestures on any surface
    http://www.eurekalert.org/pub_releases/2012-10/pu-nis100912.php

    People can let their fingers – and hands – do the talking with a new touch-activated system that projects onto walls and other surfaces and allows users to interact with their environment and each other.

    The system identifies the fingers of a person’s hand while touching any plain surface. It also recognizes hand posture and gestures, revealing individual users by their unique traits.

    “Imagine having giant iPads everywhere, on any wall in your house or office, every kitchen counter, without using expensive technology,” said Niklas Elmqvist, an assistant professor of electrical and computer engineering at Purdue University. “You can use any surface, even a dumb physical surface like wood. You don’t need to install expensive LED displays and touch-sensitive screens.”

    “Basically, it might be used for any interior surface to interact virtually with a computer,”

    “We project a computer screen on any surface, just a normal table covered with white paper,” Ramani said. “The camera sees where your hands are, which fingers you are pressing on the surface, tracks hand gestures and recognizes whether there is more than one person working at the same time.” The Kinect camera senses depth, making it possible to see how far each 3-D pixel is from the camera. The researchers married the camera with a new computer model for the hand.
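
    The core trick can be sketched in a few lines: with a depth camera, a fingertip counts as touching when its measured depth matches the pre-captured surface depth within a small tolerance. The frames below are synthetic stand-ins for Kinect data (values in millimeters):

        import numpy as np

        background = np.full((480, 640), 1200, np.float32)  # bare table depth
        frame = background.copy()
        frame[300:310, 320:330] = 1195      # finger ~5 mm above the table

        diff = background - frame
        # Closer than the surface, but within 10 mm of it -> touch.
        touching = (diff > 0) & (diff < 10)
        print("touch pixels:", int(touching.sum()))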

    Patents are pending on the concept.

  41. Tomi Engdahl says:

    Magic Finger input device is a camera on your finger tip
    http://hackaday.com/2012/10/16/magic-finger-input-device-is-a-camera-on-your-finger-tip/

    What if we could do away with mice and just wear a thimble as a control interface? That’s the concept behind Magic Finger. It adds a movement-tracking sensor and an RGB camera to your fingertip.

    The concept video found after the break shows off a lot of cool tricks used by the device. Our favorite is the tablet PC controlled by moving your finger on the back side of the device, instead of interrupting your line of sight and leaving fingerprints by touching the screen.
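
    A fingertip camera can track motion the way an optical mouse does: estimate the shift between consecutive tiny frames. Phase correlation is one standard way to do it; the frames below are synthetic (a texture circularly shifted by 3 pixels) so the sketch runs stand-alone:

        import cv2
        import numpy as np

        rng = np.random.default_rng(2)
        prev = rng.random((64, 64)).astype(np.float32)
        curr = np.roll(prev, shift=3, axis=1)   # simulate the finger moving

        # Reports the ~3-pixel horizontal displacement between the frames.
        (dx, dy), _ = cv2.phaseCorrelate(prev, curr)
        print("estimated motion: dx=%.1f dy=%.1f" % (dx, dy))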

  42. Tomi says:

    3D hand controls electronics
    http://www2.electronicproducts.com/3D_hand_controls_electronics-article-FANE_digits_microsoft_sensor_Oct2012-html.aspx

    Imagine playing video games without controllers, turning an imaginary knob to raise the volume on your radio, or dialing a telephone number on an imaginary keypad to make a call. Well, achieving such interactions is not a distant possibility anymore.

    A team of Microsoft researchers led by David Kim of Newcastle University, U.K., recently unveiled a prototype of a gloveless, freehand 3D computer interaction called “Digits” at the ACM Symposium on User Interface Software and Technology in Cambridge, MA.

    The Digits prototype is created entirely from off-the-shelf hardware and comprises an infrared camera and laser line generator, diffuse illuminator, and an inertial-measurement unit to track hand movements.

    When the user wears the device, a laser beams onto the fingers and creates the reflections that are used to determine a change in position. The camera attached to the sensor records and feeds the image to the self-contained software and a 3D model is then constructed.

  43. Tomi Engdahl says:

    U.S. looks to replace human surveillance with computers
    http://news.cnet.com/8301-1009_3-57540826-83/u.s-looks-to-replace-human-surveillance-with-computers/

    Security cameras that watch you, and predict what you’ll do next, sound like science fiction. But a team from Carnegie Mellon University says their computerized surveillance software will be capable of “eventually predicting” what you’re going to do.

    Computer software programmed to detect and report illicit behavior could eventually replace the fallible humans who monitor surveillance cameras.

    The U.S. government has funded the development of so-called automatic video surveillance technology by a pair of Carnegie Mellon University researchers who disclosed details about their work this week — including that it has an ultimate goal of predicting what people will do in the future.

    “The main applications are in video surveillance, both civil and military,” Alessandro Oltramari, a postdoctoral researcher at Carnegie Mellon who has a Ph.D. from Italy’s University of Trento, told CNET yesterday.

    Think of it as a much, much smarter version of a red light camera: the unblinking eye of computer software that monitors dozens or even thousands of security camera feeds could catch illicit activities that human operators — who are expensive and can be distracted or sleepy — would miss. It could also, depending on how it’s implemented, raise similar privacy and civil liberty concerns.

  44. Tomi Engdahl says:

    What’s In A Picture? The Descriptive Camera Will Tell You
    http://singularityhub.com/2012/04/28/whats-in-a-picture-the-descriptive-camera-will-tell-you/

    What if a camera could not only take a picture but describe the scene, identify objects, and list the names of people within it?

    The artificial intelligence necessary to perform such a feat may seem to be a distant reality, but a communications project by a New York University grad student has ignited interest using a clever workaround. Matt Richardson’s creation is called the Descriptive Camera. Take a picture, wait approximately 3-6 minutes, and the camera prints out a description of what’s in the scene. Inside the camera is a webcam, BeagleBone board, and a thermal printer.

    The camera connects to the Internet through an ethernet cable (ideally it would be Wi-Fi) and sends the image from the webcam to Amazon’s Mechanical Turk, a service that allows a requester to post small tasks that workers, many overseas, can complete for minimal payments. So once someone describes the image, it is transmitted back to the camera for printing.

    While some may see the Descriptive Camera as a bunch of hype, Richardson sees it as a window into the future. That’s probably why, along with getting class credit, he was willing to invest $200 in parts and many hours of programming to get it to work. One way to look at this camera is that it’s a device for collecting searchable data.

  45. Tomi Engdahl says:

    Image-Guided Surgery
    http://www.edn.com/design/medical/4399519/Image-Guided-Surgery?cid=Newsletter+-+EDN+on+Analog

    Image-guided surgery technology has revolutionized traditional surgical techniques by providing surgeons with a way to navigate through the body using three-dimensional (3D) images as their guide. Furthermore, those images can be changed, manipulated and merged to provide a level of detail not seen before in the operating room. The technology is similar to that used by today’s global positioning satellite systems, which can track the exact location and direction of vehicles at any point on the globe. Because the view is so precise and so controllable, a surgeon can actually see where healthy tissue ends and a brain tumor begins, or precisely where on the spine to place a pedicle screw to maximize patient mobility.

  46. Tomi Engdahl says:

    Video: Wearable Sensor Builds Maps on the Fly
    http://www.designnews.com/author.asp?section_id=1386&doc_id=253316&cid=NL_Newsletters+-+DN+Daily

    The same MIT researchers who are helping the US military create robots that can autonomously generate 3D maps of their immediate location have developed similar technology humans can wear to navigate new and potentially dangerous environments.

    Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have built a wearable system that senses the environment of its wearer and builds a digital map of the area as the person moves through it.

  47. Tomi Engdahl says:

    Xbox team’s ‘consumer detector’ would dis-Kinect freeloading TV viewers
    http://www.geekwire.com/2012/microsoft-diskinect-freeloading-tv-viewers/

    The patent application, filed under the heading “Content Distribution Regulation by Viewing User,” proposes to use cameras and sensors like those in the Xbox 360 Kinect controller to monitor, count and in some cases identify the people in a room watching television, movies and other content. The filing refers to the technology as a “consumer detector.”

    In one scenario, the system would then charge for the television show or movie based on the number of viewers in the room. Or, if the number of viewers exceeds the limits laid out by a particular content license, the system would halt playback unless additional viewing rights were purchased.

    The system could also take into account the age of viewers, limiting playback of mature content to adults, for example.

    The patent application, made public this week, was originally submitted in April 2011.

