New camera technologies: vision

I wrote earlier about new camera technologies, concentrating on light field photography. There are many other new camera technologies that deserve a mention.

Is ‘vision’ the next-gen must-have user interface? article tells that IMS Research recently issued a press release questioning whether Apple and the iPad are falling behind competitors in user interface technologies. The industry still wants to know where the battle lines will be drawn for the next-generation user interface beyond touch: will it be gesture, motion, or voice? A growing number of FPGA, DSP and processor companies are now betting the future on embedded vision. Jeff Bier, president of Berkeley Design Technology, Inc., said, “Thanks to Microsoft’s Kinect (used in Xbox 360), we now have ‘existence proof’ for embedded vision. We now know it works.” Embedded vision is in fact a “classic long-tail story”: there are thousands of applications, and the market is extremely diverse. IMS Research estimated the market for intelligent automotive camera modules alone at around $300 million in 2011 and forecasts it to grow at an average annual rate of over 30% through 2015. The article also mentions some interesting application examples which are worth going through.

The next killer app: Machines that see article asks: Do embedded processors shape applications, or is it the other way around?
In reality, it works both ways. This is particularly evident in digital-signal-processing-intensive applications, such as wireless communications and video compression. These applications became feasible on a large scale only after the emergence of processors with adequate performance and sufficiently low prices and power consumption. And once those processors emerged, these applications started to take off. Then, the growing market attracted competition and investment.

Image processing has evolved to the point that many things earlier thought to be science fiction, or available only to intelligence agencies, are nowadays widely available. Facebook Facial Recognition: Its Quiet Rise and Dangerous Future article tells that the new facial recognition technology used to identify your friends in photos could have some interesting applications, and some scary possibilities. Facebook has started using facial recognition to suggest the names of friends who appear in newly uploaded photos. Fake ID holders beware: facial recognition service Face.com can now detect your age article tells that fake IDs might not fool anyone for much longer, because Face.com claims its new application programming interface (API) can be used to detect a person’s age by scanning a photo. With its facial recognition system, Face.com has built two Facebook apps that can scan photos and tag them for you. The company also offers an API for developers to use its facial recognition technology in the apps they build. Its latest update to the API can scan a photo and supposedly determine a person’s minimum age, maximum age, and estimated age.
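
Face.com’s hosted API is a black box, but the first step of any such pipeline, finding the faces in a photo, is easy to try locally. Here is a minimal sketch using OpenCV’s bundled Haar cascade; this is not Face.com’s service, age estimation would need a separately trained model on top, and the file names are placeholders:

```python
# Minimal face detection sketch with OpenCV's bundled Haar cascade.
# This only *finds* faces; age estimation needs a trained model on top.
import cv2

img = cv2.imread("photo.jpg")                       # placeholder input file
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:                          # mark each detected face
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
print(f"Found {len(faces)} face(s)")
cv2.imwrite("photo_faces.jpg", img)
```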

Image editing is nowadays easy and images are everywhere. Digg pointed to the article Verifeyed uses a camera’s ‘mathematical fingerprint’ to find manipulated images, which tells that analysis startup Verifeyed wants to bring a new sense of legitimacy to the world of digital images. Image editing tools like Adobe Photoshop easily allow the creation of fake images with just a few clicks, and as a result, digital images have lost their trustworthiness. Verifeyed plans to solve the problem using its patent-pending technology, which is able to certify the originality (or absence of modification) of digital images taken from any device. It uses math (a lot of it), a product of the founders’ specialty as PhD researchers in applied mathematics. This could be valuable, for example, to insurance companies authenticating claims.
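
Verifeyed’s patent-pending math is not public, but one classic and much simpler tamper cue, error level analysis (ELA), is easy to sketch: recompress a JPEG once and look at where the difference against the original is unusually large, since regions pasted in from another source often recompress differently. A minimal version with Pillow (file names are placeholders):

```python
# Error level analysis (ELA): a simple classic tamper cue, not Verifeyed's
# patented method. Recompress a JPEG once and inspect where the difference
# against the original is large; spliced-in regions often stand out.
import io
from PIL import Image, ImageChops

original = Image.open("suspect.jpg").convert("RGB")   # placeholder input

buf = io.BytesIO()
original.save(buf, "JPEG", quality=90)                # recompress once
buf.seek(0)
recompressed = Image.open(buf)

ela = ImageChops.difference(original, recompressed)
extrema = ela.getextrema()                            # per-channel (min, max)
scale = 255.0 / max(max(hi, 1) for _, hi in extrema)  # stretch for visibility
ela = ela.point(lambda v: min(255, int(v * scale)))
ela.save("suspect_ela.png")                           # bright = suspicious
```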

Are gestures suitable for use as a camera user interface? What if framing a scene with your fingers actually caused photos to be created? Air Camera Concept Shoots When You Pretend to Take a Picture article tells about a clever camera concept, “Air Camera” by designer Yeon Su Kim, that would make that idea a reality. It consists of two components: a ring-like camera worn on the thumb, and a tension-sensing device worn on the forefinger. If the tension unit senses that you’re making a camera gesture, it triggers the camera to snap a photo. Make a video camera gesture, and it begins recording video!

Actually this idea isn’t very new. It was mentioned a few years back in a TED talk discussing SixthSense technology (at 6:28). Prototype Camera Lets You Shoot Photos by Framing Scenes with Your Fingers article tells that the Air Camera concept may soon become a reality. Researchers at IAMAS in Japan have developed a tiny camera called Ubi-Camera that captures photos as you position your fingers in the shape of a frame. The shutter button is triggered with your opposite hand’s thumb, and the “zoom” level is determined by how far the camera is from the photographer’s face.
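
Neither the Air Camera nor the Ubi-Camera ships as a product, but the trigger logic they describe is simple enough to sketch. In the toy loop below, read_tension() is a hypothetical stand-in for the finger-worn sensor, and a webcam stands in for the thumb-mounted camera:

```python
# Toy sketch of the Air Camera idea: poll a (mocked) finger tension sensor
# and snap a webcam frame whenever a "framing" gesture is detected.
import time
import cv2

TRIGGER_LEVEL = 0.8   # assumed normalized gesture threshold

def read_tension() -> float:
    """Hypothetical stand-in for the finger-worn tension sensor (0.0-1.0)."""
    raise NotImplementedError("wire up the real sensor here")

cam = cv2.VideoCapture(0)   # webcam standing in for the thumb camera
try:
    while True:
        if read_tension() > TRIGGER_LEVEL:
            ok, frame = cam.read()
            if ok:
                cv2.imwrite(f"shot_{int(time.time())}.jpg", frame)
        time.sleep(0.05)    # poll at roughly 20 Hz
finally:
    cam.release()
```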

Comments

  1. Tomi Engdahl says:

    Implementing vision capabilities in embedded systems
    http://www.edn.com/design/systems-design/4402000/Implementing-vision-capabilities-in-embedded-systems?cid=EDNToday

    We use the term “embedded vision” to refer to the use of computer vision technology in embedded systems. Stated another way, “embedded vision” refers to embedded systems that extract meaning from visual inputs. Similar to the way that wireless communication has become pervasive over the past 10 years, we believe that embedded vision technology will be very widely deployed in the next 10 years.

    It’s clear that embedded vision technology can bring huge value to a vast range of applications. Two examples are Mobileye’s vision-based driver assistance systems, intended to help prevent motor vehicle accidents, and MG International’s swimming pool safety system, which helps prevent swimmers from drowning. And for sheer geek appeal, it’s hard to beat Intellectual Ventures’ laser mosquito zapper, designed to prevent people from contracting malaria.

    Just as high-speed wireless connectivity began as an exotic, costly technology, embedded vision technology has so far typically been found in complex, expensive systems.

    Similarly, advances in digital chips are now paving the way for the proliferation of embedded vision into high-volume applications. Like wireless communication, embedded vision requires lots of processing power – particularly as applications increasingly adopt high-resolution cameras and make use of multiple cameras. Providing that processing power at a cost low enough to enable mass adoption is a big challenge. This challenge is multiplied by the fact that embedded vision applications require a high degree of programmability.

    With embedded vision, we believe that the industry is entering a “virtuous circle” of the sort that has characterized many other digital signal processing application domains. Although there are few chips dedicated to embedded vision applications today, these applications are increasingly adopting high-performance, cost-effective processing chips developed for other applications, including DSPs, CPUs, FPGAs, and GPUs. As these chips continue to deliver more programmable performance per dollar and per watt, they will enable the creation of more high-volume embedded vision products. Those high-volume applications, in turn, will attract more attention from silicon providers, who will deliver even better performance, efficiency, and programmability.

  2. Tomi Engdahl says:

    Heterogeneous Multicore & the Future of Vision Systems
    http://www.designnews.com/author.asp?section_id=1365&doc_id=254891&itc=dn_analysis_element&

    It’s tricky writing about the future of vision systems because the performance of vision systems has largely been overpromised and underdelivered. Setting unrealistic expectations for what vision systems are truly capable of can hinder the growth of the industry.

    Rather than getting into the easily imagined future that science fiction is continually depicting, the focus here is on the processing technology that will improve the vision systems of today and turn them into the advanced vision systems of tomorrow.

    In many applications, the thing preventing the use of vision systems is the difficulty of achieving high accuracy in real-world conditions.

    Face recognition is a good example of something that works well in the lab, but a quick look at the number of conditions needed to get the best results shows that it operates best under conditions fairly removed from the real world.

    In the field of video security, making sure a security system can execute its proper duties (sound alarms, record events, etc.) when someone is breaching it is relatively easy. What is extremely difficult is triggering only when someone is breaching the system and not generating a false positive when there isn’t a real incident. These false positives are holding vision systems back, and it’s not an easy problem to solve.
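
    One common first line of defense against such false positives is to require that a detected change be large enough to matter. A minimal sketch with OpenCV background subtraction and a minimum blob size; the thresholds and file name are assumptions to be tuned per scene:

    ```python
    # Minimal intrusion trigger: background subtraction plus a minimum contour
    # area, a crude but common way to suppress small false positives (noise,
    # leaves, rain) before sounding an alarm.
    import cv2

    MIN_AREA = 5000    # assumed pixel-area threshold; tune per scene
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

    cam = cv2.VideoCapture("security_feed.mp4")   # placeholder source
    while True:
        ok, frame = cam.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]  # no shadows
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if any(cv2.contourArea(c) > MIN_AREA for c in contours):
            print("ALARM: large moving object detected")
    cam.release()
    ```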

    There are several different ways to improve the accuracy of a vision system:

    Increasing the amount of information;
    Improving the way the information is used;
    Overlapping approaches.

    All three of these improvements are pushing the need for heterogeneous multicore processors.

    While some areas are able to get all the information they need from a CIF or QVGA resolution image, the trend is toward higher-megapixel sensors and increased resolution. With each step from CIF to D1 to 720p to 1080p and beyond, the amount of data at least doubles, mapping directly to a similar increase in needed processing capability.

    Resolution is just one dimension increasing the amount of data. Other dimensions are:

    Temporal: increasing frame rates;
    Color: from grayscale to color;
    Number of vision inputs: from mono to stereo to multi-view vision;
    Mode of inputs: from vision-only to multi-modal that can combine audio input with video input.

    As vision systems’ inputs expand along each of these dimensions, the demands on computation will only continue to increase.
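
    A back-of-envelope calculation makes these scaling dimensions concrete (uncompressed rates; real systems compress, but processing load tracks the raw pixel rate):

    ```python
    # Rough raw data rates for the dimensions listed above (uncompressed).
    def raw_rate_mbps(width, height, fps, bytes_per_px=1, cameras=1):
        return width * height * fps * bytes_per_px * cameras * 8 / 1e6

    for name, w, h in [("CIF", 352, 288), ("D1", 720, 576),
                       ("720p", 1280, 720), ("1080p", 1920, 1080)]:
        mono = raw_rate_mbps(w, h, 30)           # grayscale mono, 30 fps
        multi = raw_rate_mbps(w, h, 60, 3, 2)    # color stereo, 60 fps
        print(f"{name:6s} {mono:8.1f} Mbit/s   vs   {multi:8.1f} Mbit/s")
    ```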

    Once you have the image, you can improve a vision system by extracting better information from it. This is the art of vision processing: taking image data and returning useful information.

    Heterogeneous multicore
    The trend toward increasing information, using advanced algorithms, and requiring redundancy within a system all point toward the processing need continuing to grow in vision systems.

    Heterogeneous multicore goes a necessary step further by increasing the efficiency of the processing by using a mix of different processing cores so that each type of core handles the part of the system for which it is best.

    A digital signal processor (DSP) is tailor-made for implementing vision functions. The DSP specializes in real-time signal processing of math-intensive functions, resulting in high performance and predictable latency.

    However, a RISC processor is more efficient at putting together the information returned from the DSP(s) and at running the high-level OS and control code.

    An ideal computational platform for vision systems would consist of both RISC and DSP cores.
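
    As a software analogy of that split (only an analogy, not an embedded implementation), the sketch below lets a vectorized numeric kernel play the DSP role while plain branching control code plays the RISC role:

    ```python
    # Software analogy of the RISC+DSP partitioning: a dense, regular numeric
    # kernel (the "DSP" job) feeding results to branching control code (the
    # "RISC" job). Thresholds and frame sizes are arbitrary for illustration.
    import numpy as np

    def dsp_kernel(frame: np.ndarray) -> float:
        """Math-heavy, per-pixel work: a crude gradient-energy measure."""
        f = frame.astype(np.float32)
        return float(np.abs(np.diff(f, axis=1)).sum() +
                     np.abs(np.diff(f, axis=0)).sum())

    def control_loop(frames, threshold=1e6):
        """Decisions, bookkeeping, and I/O around the kernel's results."""
        for i, frame in enumerate(frames):
            energy = dsp_kernel(frame)        # hand off the heavy math
            if energy > threshold:            # act on the result
                print(f"frame {i}: busy scene, energy={energy:.0f}")

    control_loop(np.random.randint(0, 255, (5, 480, 640), dtype=np.uint8))
    ```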

    Without a well thought out architecture that provides enough bandwidth, memory, and efficient communication, bottlenecks can severely limit the performance of the device.

    Vision systems will move from the labs to the real world. The high performance delivered from heterogeneous multicore devices will enable improvements in system accuracy by providing the processing capability needed by the increase in information.

  3. Tomi Engdahl says:

    Implants to restore vision
    http://www.edn.com/electronics-blogs/design-rx-blog/4403079/Implants-to-restore-vision?cid=Newsletter+-+EDN+on+Systems+Design

    MIT Technology Review recently had an article by Megan Scudellari that discussed retinal implants.

    Soon, retinal implants that fit entirely inside the eye will use nanoscale electronic components to dramatically improve vision quality for the wearer, according to two research teams developing such devices.

    Retinal prostheses on the market today, such as Second Sight’s Argus II, allow patients to distinguish light from dark and make out shapes and outlines of objects, but not much more.

    This device was the first “bionic eye” to reach commercial markets. It contains an array of 60 electrodes, akin to 60 pixels, that are implanted behind the retina to stimulate the remaining healthy cells. The implant is connected to a camera, worn on the side of the head, that relays a video feed.

    A similar implant, made by Bionic Vision Australia, has just 24 electrodes.
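
    It is easy to get a feel for what a 60-electrode percept might convey by downsampling an ordinary image to a 10×6 grid. This is a deliberately crude simulation; real electrode layouts and percepts are far more complex, and the file name is a placeholder:

    ```python
    # Crude simulation of a 60-"pixel" implant view: collapse an image to a
    # 10x6 grid of brightness levels, then enlarge it for inspection.
    import cv2

    img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder input
    electrodes = cv2.resize(img, (10, 6), interpolation=cv2.INTER_AREA)
    preview = cv2.resize(electrodes, (500, 300),
                         interpolation=cv2.INTER_NEAREST)
    cv2.imwrite("sixty_electrode_view.png", preview)
    ```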

  4. Tomi Engdahl says:

    Researchers compare time of flight cameras
    http://www.vision-systems.com/articles/2012/11/researchers-compare-time-of-flight-cameras.html?cmpid=EnlVSDDecember132012

    Italian researchers from the Politecnico di Torino (Torino, Italy) have conducted a study comparing the performance of two Time-of-Flight (ToF) cameras.

    More specifically, the researchers chose to evaluate the SR-4000 camera from Mesa Imaging (Zurich, Switzerland) and the CamCube3.0 by PMD Technologies (Siegen, Germany).

  5. Tomi Engdahl says:

    Smart camera helps the wheels go ’round and ’round
    http://www.matrox.com/imaging/en/press/feature/automotive/IBG_Automation/?utm_campaign=Img_VSD_automation&utm_medium=Web&utm_source=VSD

    Machine vision-based assembly system fits and mounts wheels onto cars in continuous operation

  6. Tomi Engdahl says:

    Cell phone camera detects allergens in food
    http://www.vision-systems.com/articles/2012/12/cell-phone-camera-detects-allergens-in-food.html?cmpid=EnlVSDDecember272012

    Food allergies are an emerging public concern, affecting as many as eight percent of young children and two percent of adults. While consumer-protection laws regulate the labeling of ingredients in pre-packaged foods, cross-contaminations can still occur during processing, manufacturing and transportation.

    A team of researchers from the UCLA Henry Samueli School of Engineering and Applied Science (Los Angeles, CA, USA) has developed a lightweight device called the iTube which attaches to a common cell phone to detect allergens in food samples.

    The iTube attachment uses the cell phone’s built-in camera, along with an accompanying smart-phone application that runs a test with the same high level of sensitivity a laboratory would.
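
    The UCLA team’s exact pipeline isn’t spelled out here, but the core of phone-camera colorimetry can be sketched: compare the light transmitted through a sample tube against a control tube and convert the ratio to absorbance via the Beer-Lambert relation A = -log10(I/I0). File names below are placeholders:

    ```python
    # Core idea of phone-camera colorimetry (not UCLA's actual iTube code):
    # absorbance from transmitted intensity, Beer-Lambert A = -log10(I / I0).
    import numpy as np
    import cv2

    control = cv2.imread("control_tube.jpg", cv2.IMREAD_GRAYSCALE)
    sample = cv2.imread("sample_tube.jpg", cv2.IMREAD_GRAYSCALE)

    i0 = float(np.mean(control)) + 1e-6   # mean transmitted light, blank tube
    i = float(np.mean(sample)) + 1e-6     # mean transmitted light, sample tube
    print(f"absorbance = {-np.log10(i / i0):.3f}")
    # A calibration curve would map absorbance to allergen concentration.
    ```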

  7. Tomi Engdahl says:

    Intel will offer tiny Kinect-like interface for PCs, partner with OEMs for ‘computer senses’
    http://www.theverge.com/2013/1/7/3848012/intel-nuance-voice-face-interface

  8. Tomi Engdahl says:

    Navy’s Next-Gen Binoculars Will Recognize Your Face
    http://www.wired.com/dangerroom/2013/02/binocular-face-scan/

    Take a close look, because the next generation of military binoculars could be doing more than just letting sailors and soldiers see from far away. The Navy now wants binoculars that can scan and recognize your face from 650 feet away.

    That’s according to a Jan. 16 contract announcement from the Navy’s Space and Naval Warfare Systems Command, which is seeking a “Wireless 3D Binocular Face Recognition System.” During a testing period of 15 months, the plan is to improve “stand-off identification of uncooperative subjects” during daylight, using binoculars equipped with scanners that can read your mug from “100 to 200 meters” away, or about 328 to 650 feet. After scanning your mug, the binoculars then transmit the data to a database over a wireless network, where the data is then analyzed to determine a person’s identity. The no-bid contract, for an unspecified amount of money, went to California biometrics firm StereoVision Imaging.

    “High level, it’s a surveillance and identification system,”

    Depending on how well the binoculars work (and there’s reason to be cautious), it could give the Navy the ability to take advanced facial recognition into a much more portable and long-distance form than many current systems. Facebook uses the technology to match faces when users upload new photos. Google has its own version as well for its Picasa photo service, and Apple has been researching face recognition as a way to unlock smartphones. (There are apps for iOS that do this, too.)
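
    Whatever StereoVision Imaging’s system does internally, the database-matching step of any such pipeline reduces to nearest-neighbor search over face feature vectors. A sketch of that step alone, with the embedding model left out as an assumption:

    ```python
    # The matching back end, reduced to its core: cosine nearest-neighbor
    # search over face feature vectors. How the 128-D vectors are produced
    # (some face-embedding model) is assumed and out of scope here.
    import numpy as np

    def best_match(query, gallery, names):
        q = query / np.linalg.norm(query)
        g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
        scores = g @ q                    # cosine similarity against everyone
        i = int(np.argmax(scores))
        return names[i], float(scores[i])

    gallery = np.random.randn(1000, 128)  # stand-in for enrolled subjects
    names = [f"subject_{i}" for i in range(1000)]
    print(best_match(np.random.randn(128), gallery, names))
    ```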

  9. Tomi Engdahl says:

    Cameras keep drivers on track
    http://www.edn.com/electronics-blogs/automotive-innovation/4406913/Cameras-keep-drivers-on-track

    With the rise of small cameras and local image processing, lane change detection and vehicle avoidance are becoming part of both OEM vehicles and the aftermarket. There are several approaches to the lane change issue, with multiple cameras on the side of the car or a single camera in the front being the most common.

    The challenge for the system is to distinguish a curve in the road from line anomalies, and to determine where there are adjacent vehicles.

    These side-facing systems typically employ downward-facing cameras that look for the lines designating the lane edge markers.

    These downward-facing systems minimize the impact of direct sun glare on image quality; however, they are more susceptible to image blockage from road debris, dirt, and reflected light in foggy conditions.
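
    The front-camera variant of lane marking detection is a textbook computer vision exercise: edge detection followed by a probabilistic Hough transform. A minimal sketch with a placeholder input frame; production systems add lane models, tracking, and the curve-versus-anomaly logic discussed above:

    ```python
    # Classic minimal lane-line detection: Canny edges + probabilistic Hough
    # transform on the lower half of the frame.
    import cv2
    import numpy as np

    frame = cv2.imread("road.jpg")               # placeholder input frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)
    edges[: gray.shape[0] // 2] = 0              # ignore sky and background

    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=60, maxLineGap=20)
    if lines is not None:
        for x1, y1, x2, y2 in lines.reshape(-1, 4):
            cv2.line(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
    cv2.imwrite("road_lanes.jpg", frame)
    ```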

  10. Tomi Engdahl says:

    USB3 Vision: High Bandwidth Yet Simple Connectivity
    http://www.visiononline.org/vision-standards-details.cfm?type=11

    First draft of USB 3.0 vision standard published
    http://www.vision-systems.com/articles/2012/05/first-draft-of-usb-3-vision-standard-published.html

    USB3 Vision – New Camera Interface Standard for Machine Vision
    http://spectronet.de/portals/visqua/story_docs/vortraege_2011/111108_vision/111110_14_00_dierks_basler.pdf

    NI Announces Support for USB3 Vision Standard for Easy Usability and Powerful Image Acquisition in NI LabVIEW
    http://digital.ni.com/worldwide/bwcontent.nsf/websearch/6603c0413954e79e86257aaf0045500d
    · NI vision software and LabVIEW to support the newly adopted USB3 Vision camera standard based on the USB 3.0 protocol with numerous advantages compared to existing standards

  11. Tomi Engdahl says:

    New MRI ‘fingerprinting’ could spot diseases in seconds
    http://news.cnet.com/8301-17938_105-57574960-1/new-mri-fingerprinting-could-spot-diseases-in-seconds/

    Researchers say the tech, which could help spot heart disease, multiple sclerosis, specific cancers, and more, may make MRI scans a standard procedure during annual exams.

    Our body tissue, not to mention diseases, each have their own unique “fingerprint,” which can in turn be examined to diagnose various health issues at very early stages.

    Now, researchers at Case Western Reserve University in Cleveland say that after a decade of work they’ve developed a new MRI (magnetic resonance imaging) technique that can scan for those diseases very quickly. In just 12 seconds, for instance, it may be possible to differentiate white matter, gray matter, and cerebrospinal fluid in the brain; in a matter of minutes, a full-body scan would provide far more data, making diagnostics considerably easier and less expensive than today’s scans.

    “The overall goal is to specifically identify individual tissues and diseases, to hopefully see things and quantify things before they become a problem,”
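
    The pattern-matching heart of MR fingerprinting is easy to illustrate: every candidate tissue gets a simulated signal evolution in a dictionary, and each measured voxel is assigned the entry with the highest normalized inner product. A toy version, with simple exponential decays standing in for real Bloch-equation simulations:

    ```python
    # Toy MR-fingerprinting matching step: build a dictionary of simulated
    # signal time courses and match a noisy voxel signal by inner product.
    # Real dictionaries come from Bloch simulation, not this bare T2 decay.
    import numpy as np

    t = np.linspace(0, 3.0, 200)                   # acquisition times (s)
    t2_values = np.linspace(0.02, 0.5, 100)        # candidate T2 values (s)
    dictionary = np.exp(-t[None, :] / t2_values[:, None])
    dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

    voxel = np.exp(-t / 0.13) + 0.05 * np.random.randn(t.size)  # "measured"
    voxel /= np.linalg.norm(voxel)

    best = int(np.argmax(dictionary @ voxel))
    print(f"estimated T2 = {t2_values[best] * 1000:.0f} ms (true: 130 ms)")
    ```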

  12. Tomi Engdahl says:

    Nvidia’s skirt-chasing chips ogle babes, eyeball Twitter and YouTube
    Like what that stranger’s wearing? Snap a pic for image-matching GPU
    http://www.theregister.co.uk/2013/03/21/nvidia_frocks/

    Keynote addresses like this are usually chock-full of demonstrations showing where we are in terms of state-of-the-art graphics, scientific and technical computing, entertainment, and now: finding dresses. In this demonstration, Huang leafed through the latest edition of In Style magazine.

    Using this GPU-accelerated technology, shoppers can snap pics from magazines, newspapers, clothes people on the bus are wearing, stuff on the shelves in the high street, or anything they take a fancy to; and then pull up similar or matching products to purchase online.

    In your correspondent’s view, this technology isn’t confined only to dresses.

    The company also demonstrated that it’s possible to capture a particular pattern and then search for clothing that has the same, or a similar, look. To my untrained eye, it looked to do a pretty good job.

    So – aside from everyone who likes to shop for clothes – who will use this technology? The online shops that want to make it quicker and easier for potential customers to comb through their vast inventories of goods.
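
    Nvidia’s demo leaned on GPU-accelerated feature matching, but the shape of the retrieval loop can be shown with something far cruder, such as color-histogram similarity (file names are placeholders):

    ```python
    # Skeleton of visual product search: rank catalog images by similarity to
    # a snapped photo. Color-histogram correlation is a crude stand-in for
    # the GPU-accelerated features in Nvidia's demo.
    import cv2

    def hist(path):
        img = cv2.imread(path)
        h = cv2.calcHist([img], [0, 1, 2], None, [8, 8, 8],
                         [0, 256, 0, 256, 0, 256])
        return cv2.normalize(h, h).flatten()

    query = hist("snapped_dress.jpg")
    catalog = ["dress_a.jpg", "dress_b.jpg", "dress_c.jpg"]
    ranked = sorted(catalog, reverse=True,
                    key=lambda p: cv2.compareHist(hist(p), query,
                                                  cv2.HISTCMP_CORREL))
    print("best matches first:", ranked)
    ```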

  13. Tomi Engdahl says:

    Lasers capture 3D images from a kilometre away
    Not even distance will save you
    http://www.theregister.co.uk/2013/04/05/laser_3d_distance_imaging/

    It’s not a completely new idea, of course. It is, in fact, quite close to how we use airborne LIDAR to get high-resolution digital elevation models. However, using infrared “ToF” or time-of-flight imaging for photography poses challenges, most particularly in getting a decent reflection from clothing and other soft materials.

    The scanner uses 1,560 nm pulses, which the researchers say work well in the atmosphere and don’t get drowned out by sunlight.

    Applications could include “target identification”, a wonderful euphemism for “working out if there’s someone you want to shoot hidden in the foliage”, as well as remote vegetation monitoring and watching the movement of rock faces if they’re in danger of collapse.
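
    The arithmetic behind pulsed time-of-flight ranging is worth seeing once: distance is half the round-trip time at the speed of light, which is why kilometre-range, centimetre-resolution imaging means resolving picosecond-scale timing:

    ```python
    # Pulsed time-of-flight ranging arithmetic: distance = c * t / 2.
    C = 299_792_458.0                        # speed of light, m/s

    def distance_m(round_trip_s: float) -> float:
        return C * round_trip_s / 2

    print(f"{distance_m(6.671e-6):.1f} m")   # a ~1 km echo takes ~6.7 us
    print(f"1 cm of depth = {2 * 0.01 / C * 1e12:.1f} ps of timing")
    ```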

  14. Tomi says:

    Glasses that read to the blind
    http://www.electronicproducts.com/Videos/Glasses_that_read_to_the_blind.aspx

    What began as a project for a student competition may possibly result in a breakthrough for the 39 million people suffering from blindness worldwide.

    The invention consists of a pair of eyeglasses, two micro HD cameras, an earphone, and a 4 GB hard drive. On either side of the eyeglasses you will find the two micro HD cameras, which capture images of the text. The text is then quickly processed by the hardware and software, creating an immediate playback to the wearer through the earphone. Using simple components, the team wishes to make this a cost-effective solution for the vision impaired. Currently, there is no affordable way for the blind to read printed material, which places many limitations on their lives.
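
    The described pipeline (camera capture, OCR, speech synthesis) can be approximated today with commodity libraries. A sketch assuming opencv-python, pytesseract (plus the Tesseract engine) and pyttsx3 are installed; this is not the team’s actual software:

    ```python
    # Camera -> OCR -> speech: the glasses' pipeline on commodity parts.
    import cv2
    import pytesseract
    import pyttsx3

    cam = cv2.VideoCapture(0)
    ok, frame = cam.read()                    # grab one frame from the camera
    cam.release()
    if ok:
        text = pytesseract.image_to_string(frame)   # OCR the captured image
        if text.strip():
            tts = pyttsx3.init()
            tts.say(text)                     # read the recognized text aloud
            tts.runAndWait()
    ```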

  15. Tomi Engdahl says:

    CurvACE gives robots a bug’s eye view
    http://www.gizmag.com/curvace-robot-compund-eye/27625/

    Robots are getting down to the size of insects, so it seems only natural that they should be getting insect eyes. A consortium of European researchers has developed the Curved Artificial Compound Eye (CurvACE), which reproduces the architecture of the eyes of insects and other arthropods. The aim isn’t just to provide machines with an unnerving bug-eyed stare, but to create a new class of sensors that exploit the wide field of vision and motion-detecting properties of the compound eye.

    If the resolution is so bad, why compound eyes? The answer is that they have their own strengths. Compound eyes have a very large field of vision. The cross section of a compound eye is also thin, so it can wrap around an animal’s head without sacrificing the interior. And it’s extremely good at detecting motion.

    Leaving aside metaphysics, it’s probable that what a compound eye sees is a single, blurry, pixelated image. That may seem like a disadvantage, but such a low-resolution pixelated image highlights movement beautifully, making compound eyes very good for motion detection.

    CurvACE isn’t the first attempt to exploit the architecture of the compound eye, but it aims at a much deeper emulation, combined with fast digital image processing.
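
    The trade-off is easy to demonstrate in software: collapse each camera frame to a coarse “ommatidia” grid and difference successive grids. Almost no detail survives, yet motion anywhere in the field jumps out (grid size and threshold are arbitrary choices):

    ```python
    # Compound-eye trade-off demo: a coarse grid loses detail but makes
    # frame-to-frame motion trivially detectable.
    import cv2
    import numpy as np

    GRID = (20, 15)                  # assumed "ommatidia" layout (w, h)
    cam = cv2.VideoCapture(0)
    prev = None
    for _ in range(300):
        ok, frame = cam.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        cells = cv2.resize(gray, GRID,
                           interpolation=cv2.INTER_AREA).astype(np.int16)
        if prev is not None:
            strength = int(np.abs(cells - prev).max())
            if strength > 40:        # assumed per-cell change threshold
                print("motion detected, strength", strength)
        prev = cells
    cam.release()
    ```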

  16. tomi says:

    I have had trouble with hackers.

    The things that should be done to avoid big problems:
    - make sure that you always have your data properly backed up (decide how much you are prepared to lose versus how much work you put into backups)
    - make sure that the backups you have made really work (that you can restore data from them when you really need to; non-working backups are just a waste of time)
    - keep your WordPress installation up to date (update quickly when serious security issues are reported)
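
    As a concrete starting point, here is a minimal sketch of an automated WordPress backup. The paths, database name, and credentials are placeholders, and it assumes the mysqldump tool is available; treat it as an outline, not a finished solution:

    ```python
    # Minimal WordPress backup sketch: archive the site files and dump the
    # database with a date stamp, so old copies can be kept and test-restored.
    # Paths, DB name, and credentials below are placeholders.
    import datetime
    import subprocess
    import tarfile

    stamp = datetime.date.today().isoformat()
    site_dir = "/var/www/wordpress"                  # placeholder path

    # 1) archive the WordPress files
    with tarfile.open(f"/backups/site-{stamp}.tar.gz", "w:gz") as tar:
        tar.add(site_dir, arcname="wordpress")

    # 2) dump the database (assumes mysqldump is installed)
    with open(f"/backups/db-{stamp}.sql", "wb") as out:
        subprocess.run(["mysqldump", "--user=backup", "--password=SECRET",
                        "wordpress"], stdout=out, check=True)
    ```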
