Audio and video trends for 2017

Here are some audio and video trends picks for the year 2017:

It seems that the 3D craze is over. So long, 3DTV – we won’t miss you. BBC News reports that at this year’s CES trade show there was barely a whimper of 3D TV, compared to just two years ago when it was being heralded as the next big thing. In the cinema, 3D was milked for all it was worth, and even James Cameron, who directed Avatar, is fed up with 3D. There are currently no major manufacturers making 3D TVs, as Samsung, LG and Sony have now stopped making 3D-enabled televisions. According to CNet’s report, TV makers are instead focusing on newer technologies such as HDR.

360-degree virtual reality video is hot now. Movie studios are pouring resources into virtual reality storytelling. The 360-Degree Video Playback Coming to VLC, VR Headset Support Planned for 2017 article tells that the VLC media player previews 360° video and photo support in its desktop apps, that the feature will come to mobile soon, and that dedicated VLC apps for VR headsets are due in 2017.

4K and 8K video resolutions are hot. Test broadcasting of 8K started in August 2016 in Japan, and full service is scheduled for 2018. According to the Socionext Introduces 8K HEVC Real-Time Encoder Solution press release, virtual reality technology, which is seeing rapid growth in the global market, requires 8K resolution because the current 4K resolution cannot support a full 360-degree wraparound view with adequate detail.
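A quick back-of-envelope check makes the 8K claim plausible. A 360-degree equirectangular video spreads its horizontal pixels around the full circle, so a headset showing roughly a 100-degree field of view (an assumed, typical figure) only ever sees a fraction of them:

```python
# Back-of-envelope check: angular resolution of 360-degree video.
# A headset with a ~100-degree horizontal field of view (assumed,
# typical figure) sees only a slice of the full wraparound frame.

def pixels_per_degree(horizontal_pixels, degrees=360):
    """Pixels available per degree of the wraparound view."""
    return horizontal_pixels / degrees

def effective_width(horizontal_pixels, fov_degrees=100):
    """Pixels actually visible inside the headset's field of view."""
    return pixels_per_degree(horizontal_pixels) * fov_degrees

for name, width in [("4K (3840 px)", 3840), ("8K (7680 px)", 7680)]:
    print(f"{name}: {pixels_per_degree(width):.1f} px/deg, "
          f"{effective_width(width):.0f} px visible in a 100-deg FOV")
```

At 4K, only about 1,066 horizontal pixels land inside the visible field of view, less than a 1080p display, which is the gap 8K closes.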

Fake News Is About to Get Even Scarier than You Ever Dreamed article tells that advancements in audio and video technology are becoming so sophisticated that they will be able to replicate real news – real TV broadcasts, for instance, or radio interviews – in unprecedented, and truly indecipherable, ways. Adobe showed off a new product, nicknamed “Photoshop for audio”, that lets you type words and have them spoken in the exact voice of someone you have a recording of. Technologists can also record video of someone talking and then change their facial expressions in real time. Digital avatars can be almost indistinguishable from real people – in the latest Star Wars movie it is hard to tell which actors are real and which are computer-generated.

Antique audio formats seem to be making a comeback. By now, it isn’t news that vinyl albums continue to sell. It is interesting that UK vinyl sales reach 25-year high to the point that Vinyl Records Outsold Digital Downloads In the UK, at least for one week.

I would not have guessed that Cassettes Are Back, and Booming. But a new report says that sales of music on cassette are up 140 percent. The antiquated format is being embraced by everyone from indie musicians to Eminem and Justin Bieber. For some strange reason it turns out there’s a place for archaic physical media of questionable audio fidelity, even in the Spotify era.

Enhance! RAISR Sharp Images with Machine Learning. The Google RAISR Intelligently Makes Low-Res Images High Quality article tells that with Google’s RAISR machine learning-driven image enhancement technique, images can be up to 75% smaller without losing their detail.
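As a toy illustration of what upscaling means here, the sketch below is plain nearest-neighbour upsampling in pure Python. This is only the naive baseline; RAISR’s actual contribution, per Google, is applying filters learned from pairs of low- and high-resolution images to restore detail that this simple approach cannot. The code is not RAISR itself:

```python
# Toy upscaler: nearest-neighbour interpolation on a grayscale image
# represented as a list of rows. This is the naive baseline that
# learned super-resolution techniques such as RAISR improve on.

def upscale_nearest(img, factor):
    """Upscale a 2-D grayscale image by an integer factor."""
    out = []
    for row in img:
        wide = [px for px in row for _ in range(factor)]  # repeat columns
        for _ in range(factor):                           # repeat rows
            out.append(list(wide))
    return out

small = [[10, 200],
         [90,  40]]
big = upscale_nearest(small, 2)  # each source pixel becomes a 2x2 block
```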

Improving Multiscreen Services article tells that operators have discovered challenges as they try to meet subscribers’ requirements for any content on any device. Operators must choose from a variety of options for preparing and delivering video on multiple screens. And unlike the purpose-built video networks of the past, in multiscreen OTT distribution there are no well-defined quality standards such as IPTV’s SCTE-168.

2017: Digital Advertising to overtake TV Advertising in US this year article tells that according to PricewaterhouseCoopers, “Ad Spend” on digital advertising will surpass TV ads for the first time in 2017. For all these years, television put up a really tough fight against the internet with respect to ad spend, but online advertising is set to decisively take over the market in 2017. For details check How TV ad spending stacks up against digital ad spending in 4 charts.

Embedded vision, hyperspectral imaging, and multispectral imaging among trends identified at VISION 2016.


624 Comments

  1. Tomi Engdahl says:

    Television’s Most Infamous Hack Is Still a Mystery 30 Years Later
    https://entertainment.slashdot.org/story/17/11/22/1756241/televisions-most-infamous-hack-is-still-a-mystery-30-years-later?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Slashdot%2Fslashdot%2Fto+%28%28Title%29Slashdot+%28rdf%29%29

    It has been 30 years since the Max Headroom hack, arguably the creepiest hack in television history, took place. Caroline Haskins writes about the incident for Motherboard:

    The Mystery of the Creepiest Television Hack
    https://motherboard.vice.com/en_us/article/pgay3n/headroom-hacker

    It was like any other Sunday night at Chicago’s WGN-TV. And then the signal flickered into darkness.

    A squat, suited figure sputtered into being, and bounced around maniacally. Wearing a ghoulish rubbery mask with sunglasses and a frozen grin, the mysterious intruder looked like a cross between Richard Nixon and the Joker. Static hissed through the signal; behind him, a slab of corrugated metal spun hypnotically. This was not part of the regularly scheduled broadcast.

    “Well, if you’re wondering what’s happened,” he said, chuckling nervously, “so am I.”

    Reply
  2. Tomi Engdahl says:

    5 ways we’re toughening our approach to protect families on YouTube and YouTube Kids
    https://youtube.googleblog.com/2017/11/5-ways-were-toughening-our-approach-to.html

    Reply
  3. Tomi Engdahl says:

    Charlie Warzel / BuzzFeed:
    YouTube details steps it’s taking in ongoing effort to clamp down on videos that exploit children, will ramp up removal of offensive material, halt monetization — Across YouTube, an unsettling trend has emerged: Accounts are publishing disturbing and exploitative videos aimed …

    YouTube Is Addressing Its Massive Child Exploitation Problem
    https://www.buzzfeed.com/charliewarzel/youtube-is-addressing-its-massive-child-exploitation-problem

    After BuzzFeed News provided YouTube with dozens of examples of videos — with millions of views — that depict children in disturbing and abusive situations, the company is cracking down.

    Reply
  4. Tomi Engdahl says:

    Harold Faltermeyer’s Favorite Synths
    http://daily.redbullmusicacademy.com/2015/07/harold-faltermeyer-favorite-synths

    Harold Faltermeyer dominated the ’80s by composing two of the most enduring pieces of movie-inspired synth madness: “Axel F” (the theme to Beverly Hills Cop) and the “Top Gun Anthem.” When he wasn’t soundtracking your favorite cop car and fighter jet chases, Harold Faltermeyer was Giorgio Moroder’s right-hand man, arranging and producing for the likes of Donna Summer, Cheap Trick, and the Pet Shop Boys.

    Reply
  5. Tomi Engdahl says:

    Camera design: Sony marks transition from CCD to CMOS with new camera release
    http://www.vision-systems.com/articles/print/volume-22/issue-10/departments/technology-trends/camera-design-sony-marks-transition-from-ccd-to-cmos-with-new-camera-release.html?cmpid=enl_vsd_vsd_newsletter_2017-11-27

    Sony Europe’s Image Sensing Solutions has announced a new series of SXGA camera modules which are positioned to enable users to move from CCD to global shutter CMOS image sensors.

    The first camera available in the series is the XCG-CG160, which is available in color and monochrome, and is based on the 1/3″ IMX273 global shutter CMOS image sensor – a sensor Sony notes is an ideal replacement for cameras using the popular ICX445 CCD sensor.

    Reply
  6. Tomi Engdahl says:

    Product Focus: Fiber emerges as an alternative camera-to-computer interface
    http://www.vision-systems.com/articles/print/volume-22/issue-5/features/product-focus-fiber-emerges-as-an-alternative-camera-to-computer-interface.html?cmpid=enl_vsd_vsd_newsletter_2017-11-27

    Boasting long-distance image transfer, fiber is emerging as an alternative camera-to-computer interface

    In choosing cameras for machine vision applications, systems integrators are faced with a number of options ranging from the resolution of the image sensor, the data rate required and the types of lenses needed. These choices are compounded by the number of different and emerging standards that exist for camera-to-computer connections, each of which has its own price/performance trade-offs. Such standards include networking based interfaces such as Gigabit Ethernet, bus-based designs such as USB3 Vision or point-to-point protocols such as Camera Link, Camera Link HS (CLHS) and CoaXPress (CXP).

    Reply
  7. Tomi Engdahl says:

    Dual camera system checks automotive electronic assemblies
    http://www.vision-systems.com/articles/print/volume-22/issue-7/features/dual-camera-system-checks-automotive-electronic-assemblies.html?cmpid=enl_vsd_vsd_newsletter_2017-11-27

    Semi-automated imaging system analyzes component placement and Data Matrix data to inspect automotive electronic assemblies at speeds of five parts per minute.

    Reply
  8. Tomi Engdahl says:

    3D microscope enables viewing of 3D objects in virtual meeting room
    http://www.vision-systems.com/articles/2017/06/3d-microscope-enables-viewing-of-3d-objects-in-virtual-meeting-room.html

    To allow team members working in different locations to view a 3D object simultaneously in virtual meeting rooms, Octonus developed the 3DDM, a flexible 3D digital microscope.

    Octonus’ 3DDM is based on a Leica Microsystems M205a stereomicroscope, which is a standard Leica platform fitted with a few additions: an object holder mounted on a motorized stage, a custom LED illumination system, and a pair of industrial cameras from FLIR (formerly Point Grey). To create an image, an operator mounts a sample on the 3DDM’s object holder, where the operator can rotate the sample under the cameras’ field of view using a standard device such as a mouse, keyboard, or 3D joystick. Adjustments to a focusing drive fitted to the cameras and the optical aperture of the system enable the capture of 3D video images, according to FLIR.

    As the illuminated sample rotates, the cameras capture a live video stream. The cameras used in the microscope are Grasshopper GS3- U3-23S6C-C color cameras. This color camera features the 2.3 MPixel Sony IMX174 global shutter CMOS image sensor, which has a 5.86 µm pixel size and can reach speeds of up to 163 fps. Additionally, the camera features a USB3 Vision interface and a C-Mount lens mount.
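    The quoted numbers imply this camera runs close to the limit of its interface. Assuming 8-bit raw output at the IMX174’s full 1920 x 1200 resolution (an assumption; the pixel format is not stated above), a rough bandwidth estimate looks like this:

```python
# Rough bandwidth estimate for the camera described above, assuming
# 8-bit raw output (one byte per pixel) at the IMX174's full
# 1920 x 1200 resolution; the USB3 figure is a nominal usable rate,
# not a spec value.

WIDTH, HEIGHT, FPS = 1920, 1200, 163
BYTES_PER_PIXEL = 1                      # 8-bit raw (assumed)

data_rate = WIDTH * HEIGHT * FPS * BYTES_PER_PIXEL   # bytes/second
usb3_usable = 400e6                      # ~400 MB/s usable on USB3

print(f"{data_rate / 1e6:.0f} MB/s of ~{usb3_usable / 1e6:.0f} MB/s usable")
```

    At roughly 376 MB/s, the camera at full rate nearly saturates a single USB3 link, which is why still faster cameras move to interfaces such as CLHS or CXP.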

    Reply
  9. Tomi Engdahl says:

    Cameras and camera interfaces span the bandwidth spectrum
    http://www.vision-systems.com/articles/print/volume-22/issue-2/features/cameras-and-camera-interfaces-span-the-bandwidth-spectrum.html?cmpid=enl_vsd_vsd_newsletter_2017-11-27

    Numerous cameras are now available that are tailored specifically for industrial machine vision, medical and scientific applications. Many of these cameras are offered with interfaces ranging from bus-based designs such as USB and FireWire, to network-based systems built on Gigabit Ethernet, to point-to-point, more deterministic interfaces such as Camera Link, Camera Link HS and CoaXPress (CXP).

    After considering which camera and camera interface is the best possible solution for a given application, systems designers are also faced with numerous types of cables and connectors that will best fit their application. To fully comprehend the numerous options that are available, all these considerations must be taken into account.

    Camera interfaces

    As digital camera interfaces have replaced their analog counterparts, numerous standards have been proposed and adopted. These include computer-interface peripheral standards such as Firewire (1394), USB, PCIExpress and Thunderbolt, computer networking standards such as Gigabit Ethernet, 10 Gigabit Ethernet and NBASE-T and those such as Camera Link, Camera Link HS and CoaXPress that have been especially developed for the machine vision industry.

    While each standard presents its own price/performance tradeoffs, the need to tailor many of these to meet the needs of machine vision systems has led the AIA (Ann Arbor, MI USA; http://www.visiononline.org) to standardize GigE Vision, Camera Link, Camera Link HS and USB3 Vision, while the Japan Industrial Imaging Association’s Technical Committee (JIIA, Tokyo, Japan; http://jiia.org) is responsible for the maintenance of the CXP standard.

    One of the main factors when choosing such an interface is the bandwidth and the camera-to-cable distances that can be achieved. However, comparing the different standards available can be confusing.
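    One way to make the comparison concrete is to line up nominal single-link throughputs and check them against a camera’s data rate. The figures below are approximate usable rates (MB/s) commonly quoted for each standard, not official spec values:

```python
# Approximate usable single-link throughput of common machine vision
# interfaces (MB/s). Ballpark figures for comparison only; real rates
# depend on protocol overhead, cabling and implementation.

interfaces = {
    "GigE Vision (1 GbE)":  115,
    "USB3 Vision":          400,
    "CoaXPress (CXP-6)":    600,
    "Camera Link (Full)":   680,
    "10 GigE Vision":      1150,
}

def fits(mb_per_s_needed):
    """Interfaces able to carry the given camera data rate, slowest first."""
    return [name
            for name, bw in sorted(interfaces.items(), key=lambda kv: kv[1])
            if bw >= mb_per_s_needed]

# Example: a 5 MPixel, 8-bit camera at 60 fps produces ~300 MB/s
print(fits(5_000_000 * 60 / 1e6))
```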

    Reply
  10. Tomi Engdahl says:

    Integration of vision in embedded systems
    http://www.vision-systems.com/articles/print/volume-22/issue-1/features/integration-of-vision-in-embedded-systems.html?cmpid=enl_vsd_vsd_newsletter_2017-11-27

    Embedded vision architectures enable smaller, more efficient vision solutions optimized for price and performance
    Embedded computer systems usually come into play when space is limited and power consumption has to be low. Typical examples are mobile devices, from mobile test equipment in factory settings to dental scanners. Embedded vision is also a great solution for robotics, especially when a camera has to be integrated into the robot’s arm.

    Furthermore, the embedded approach allows reducing system costs compared to the classic PC-based setup. Let’s say you spend $1,700 on a system with a classic camera, a lens, a cable and a PC. An embedded system with the same throughput would cost $300, because each piece of hardware is cheaper.

    So whether it is smart industrial wearables, automated parking systems, or people counting applications, there are several embedded system architectures available for integrating cameras into your embedded vision system.

    Camera integration into the embedded world

    In the machine vision world, a typical camera integration works with a GigE or USB interface, which is more or less a plug-and-play solution connected to a PC (or IPC). Together with a manufacturer’s software development kit (SDK), it is easy to get access to the camera, and this principle can be transferred to an embedded system.

    Utilizing a single-board computer (SBC), this basic integration principle remains the same (Figure 3). Low-cost and easy-to-obtain SBCs contain all the parts of a computer on a single circuit board: SoC, RAM, storage slots and I/O ports (USB 3.0, GigE, etc.).

    Popular single-board computers like Raspberry Pi or Odroid have compatible interfaces (USB/ Ethernet). There are also industry-proven single-board computers available from companies such as Toradex (Horw, Switzerland; http://www.toradex.com) or Advantech (Taipei, Taiwan; http://www.advantech.com) that provide these standard interfaces.

    More and more camera manufacturers also provide their software development kit (SDK) in a version that works on an ARM platform, so that users can integrate a camera in the same familiar way as on a Windows PC.
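    The plug-and-play pattern described here can be sketched as a simple acquisition loop. The camera class below is a stub standing in for a vendor SDK object; its class and method names are hypothetical, not any real SDK’s API. The point is that the same loop runs unchanged whether the SDK sits on a Windows PC or on an ARM single-board computer:

```python
# Sketch of the open/grab/close acquisition pattern exposed by typical
# camera SDKs. StubCamera is a stand-in with hypothetical names, not
# any real vendor's API.

class StubCamera:
    def open(self):
        self.frame_id = 0

    def grab_frame(self):
        """Return one fake frame, as a vendor SDK's grab call might."""
        self.frame_id += 1
        return {"id": self.frame_id, "data": bytes(16)}  # placeholder pixels

    def close(self):
        pass

def acquire(camera, n_frames):
    """Open the camera, grab n frames, close it, return the frames."""
    camera.open()
    frames = [camera.grab_frame() for _ in range(n_frames)]
    camera.close()
    return frames

frames = acquire(StubCamera(), 3)
print([f["id"] for f in frames])  # -> [1, 2, 3]
```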

    Specialized embedded systems

    Embedded systems can be specialized to an even higher level when the processing technology needs to be even more stripped down for certain applications. That is why many systems are based on a system on module (SoM). These very compact board computer modules contain only a processor (to be precise: typically a system on chip, SoC), microcontrollers, memory and other essential components.

    Special image data transfer

    A direct camera-to-SoC connection for image data transfer can be achieved via an LVDS-based link or via the MIPI CSI2 standard. Neither method is clearly standardized on the hardware side: no connectors are specified, and not even the number of lanes within the cable is fixed. As a result, in order to connect a specific camera, a matching connector must usually be designed in on the carrier board and is not available in standardized form on an off-the-shelf single-board computer.

    CSI2, a standard coming from the mobile device industry, describes signal transmission and a software protocol standard. Some SoCs have CSI interfaces, and there are drivers available for selected camera modules and dedicated SoCs. However, these do not work in a unified manner and there are no generic drivers. As a result, the driver may need to be individually modified, and the data connection to the driver can require further adaptation on the application software side in order to enable image data collection. So CSI2 is not a ready-to-use solution that works immediately after installation.

    Camera configuration

    Another aspect of these board-to-board connections is camera configuration. Control signals can be exchanged between the SoC and the camera via various bus systems, e.g. CAN, SPI or I²C. As yet, no standard has been set for this functionality.

    Embedded vision can be an interesting solution for certain applications; several applications based on GigE or, more typically, on USB can be developed using single-board computers. Given that these types of hardware are popular and offer a broad range in price, performance and compliance with quality standards (consumer and business), this is a reasonable option for many cases.

    For a more direct interface, LVDS or CSI2-based camera-to-SoC connections are possible for image data transfer.

    Reply
  11. Tomi Engdahl says:

    Stationhead allows anyone to become a streaming radio DJ, with live listener calls
    https://techcrunch.com/2017/11/29/stationhead/?utm_source=tcfbpage&sr_share=facebook

    Streaming services like Spotify have turned playlists into one of the main ways to discover new music, but I’d argue that they’re missing some of the personality of traditional radio — the kind of radio where I knew not just the names of my favorite DJs

    That’s the experience that Stationhead is trying to bring to the streaming music world. The smartphone app allows anyone to turn their playlists into a personal radio station.

    While your station will continue playing automatically when you’re not around, Star told me that the live experience is key. You’re not just sharing a playlist but hosting a broadcast where you introduce the songs and talk about anything else that’s on your mind.

    It integrates with existing streaming services – it launched with Spotify and recently added Apple Music.

    Reply
  12. Tomi Engdahl says:

    Amazon’s AWS DeepLens is an AI camera for developers
    https://techcrunch.com/2017/11/29/amazons-aws-deeplens-is-an-ai-camera-for-developers/

    Posted 13 hours ago by Brian Heater (@bheater)

    Here’s a little surprise from today’s AWS re:Invent keynote. In an event peppered with talk of containers and bizarre musical interludes, Amazon introduced its AWS DeepLens camera. The device functions similarly to Google’s recently announced Clips camera, utilizing AI to grab better shots, only Amazon’s version is targeted specifically at developers.

    The video camera was designed as a way to help developers get up to speed with Amazon’s various forays into AI, IoT and serverless computing, according to the company.

    Reply
  13. Tomi Engdahl says:

    Adi Robertson / The Verge:
    Google announces $45 AIY Vision Kit for Raspberry Pi, touting it as a cheap and simple computer vision system that doesn’t require access to cloud processing

    Google is making a computer vision kit for Raspberry Pi
    https://www.theverge.com/2017/11/30/16720322/google-aiy-vision-kit-raspberry-pi-announce-release

    Google is offering a new way for Raspberry Pi tinkerers to use its AI tools. It just announced the AIY Vision Kit, which includes a new circuit board and computer vision software that buyers can pair with their own Raspberry Pi computer and camera. (There’s also a cute cardboard box included, along with some supplementary accessories.) The kit costs $44.99 and will ship through Micro Center on December 31st.

    The AIY Vision Kit’s software includes three neural network models: one that recognizes a thousand common objects; one that recognizes faces and expressions; and “a person, cat and dog detector.” Users can train their own models with Google’s TensorFlow machine learning software.

    Reply
  14. Tomi Engdahl says:

    YouTube deletes 150,000 videos as it cleans up kids’ content
    https://www.cnet.com/news/youtube-deletes-150000-videos-following-boycott/

    The Google-owned video site faces controversy over inappropriate videos and comments aimed at children — and major brands have reportedly pulled ads.

    Reply
  15. Tomi Engdahl says:

    HEVC and the Windows 10 Fall Creators Update
    by Ganesh T S on November 30, 2017 11:55 PM EST
    https://www.anandtech.com/show/12106/hevc-and-the-windows-10-fall-creators-update

    The Windows 10 Fall Creators Update (FCU) came with a host of welcome changes. One of the aspects that didn’t get much coverage in the tech press was the change in Microsoft’s approach to the bundled video decoders. There had been complaints regarding the missing HEVC decoder when the FCU was in the Insider Preview stage. It turned out to be even more puzzling when FCU was released to the stable ring. This has led to plenty of erroneous speculation in the user community. We reached out to Microsoft to clear things up.

    The missing HEVC decoder is not a factor for users playing back media through open source applications such as Kodi, MPC-HC, or VLC. However, users of the Movies and TV app built into Windows 10 FCU or software relying on the OS decoders such as Plex will find HEVC videos playing back with a blank screen. While this is a minor inconvenience at best, a more irritating issue is the one for users with systems capable of Netflix 4K playback. Instead of 4K, FCU restricts them to 1080p streams at 5.8 Mbps (as those are encoded in AVC).
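    Some rough numbers show what the 1080p fallback costs in coding efficiency. The 5.8 Mbps AVC figure is from the article; the ~16 Mbps HEVC 4K rate is a commonly cited ballpark for Netflix 4K streams, not a figure from the article:

```python
# Bits-per-pixel comparison: the 5.8 Mbps AVC 1080p rate mentioned
# above versus an assumed ~16 Mbps HEVC 4K rate (a commonly cited
# ballpark for Netflix 4K, not a figure from the article).

def bits_per_pixel(mbps, width, height, fps=24):
    """Average coded bits spent per pixel per frame."""
    return mbps * 1e6 / (width * height * fps)

avc_1080p = bits_per_pixel(5.8, 1920, 1080)
hevc_4k = bits_per_pixel(16.0, 3840, 2160)

print(f"AVC 1080p: {avc_1080p:.3f} bpp, HEVC 4K: {hevc_4k:.3f} bpp")
```

    HEVC delivers four times the pixels for well under four times the bitrate, which is exactly the efficiency the missing decoder leaves on the table.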

    Reply
  16. Tomi Engdahl says:

    Google is making a computer vision kit for Raspberry Pi
    https://www.theverge.com/2017/11/30/16720322/google-aiy-vision-kit-raspberry-pi-announce-release

    The world’s first deep learning enabled video camera for developers
    https://aws.amazon.com/deeplens/

    AWS DeepLens helps put deep learning in the hands of developers, literally, with a fully programmable video camera, tutorials, code, and pre-trained models designed to expand deep learning skills.

    Reply
  17. Tomi Engdahl says:

    Anti-piracy firm’s scaremongering attack on Kodi boxes should make you angry
    https://betanews.com/2017/12/04/scaremongering-attack-on-kodi/

    You can’t have failed to notice, but copyright holders and anti-piracy groups are waging war on Kodi — and “fully loaded” Kodi boxes in particular — at the moment. And as is the case in all wars, the first casualty is truth.

    A new video from the Hollywood-backed Digital Citizens Alliance is so full of lies and nonsense it will have you shaking your head in wonderment. Does anyone truly believe this propaganda anymore (if they ever did)? Clearly the DCA thinks they do.

    The video, which you can see below, talks about “disreputable devices” which have “slipped through the cracks,” while showing a Kodi box which TorrentFreak rightly points out appears to be a cased Raspberry Pi.

    Reply
  18. Tomi Engdahl says:

    Forecast: Pay TV to Lose 26% of Subs by 2030
    http://www.broadbandtechreport.com/articles/2017/11/forecast-pay-tv-to-lose-26-of-subs-by-2030.html?cmpid=enl_btr_btr_video_technology_2017-12-04

    According to the Diffusion Group, the traditional residential pay TV industry will lose 26% of its subscribers by 2030. The research house foresees virtual and over-the-top (OTT) operators such as Sling TV and DirecTV Now competing with traditional cable, satellite and telco pay TV providers for a larger slice of a declining market.

    TDG expects the penetration of live multi-channel pay TV services to decline from 85% of U.S. households in 2017 to 79% in 2030. While statistically a loss of only 7%, it nonetheless illustrates the ongoing decline of a once healthy market space, TDG says. TDG predicts that by 2030, roughly 30 million U.S. households will live without a multichannel video programming distributor (MVPD) service of any kind, be it virtual or legacy.

    NPD: OTT Driving Increases in TV Viewing
    http://www.broadbandtechreport.com/articles/2017/11/npd-ott-driving-increases-in-tv-viewing.html?cmpid=enl_btr_btr_video_technology_2017-12-04

    Research from the NPD Group indicates a shift in how much time Americans spend watching TV and movie content and how they’re watching it. As subscriptions to Netflix, Hulu and other over-the-top (OTT) subscription video-on-demand (SVOD) services continue to increase, so too does the amount of time viewers watch television and movies on their TVs, personal computers and mobile devices. Consumers viewed one more hour per week of TV and movie content in August 2017 than they did the previous year.

    The research house says viewing is shifting from live TV to subscription services, as more streaming content is made available.

    “Digital content continues to reshape the video landscape,”

    Reply
  19. Tomi Engdahl says:

    Orange Releases Open Source Multiscreen Software
    http://www.broadbandtechreport.com/articles/2017/12/orange-releases-open-source-multiscreen-software.html?cmpid=enl_btr_btr_video_technology_2017-12-04

    Orange (Euronext Paris:ORAN) has announced the open source release of its OCast multiscreen video software.

    OCast is designed to let users use a smartphone to play videos on devices including TV set-top boxes, TV sticks or TV sets and control playback of the video (pause, fast forward, rewind, etc.). It can also play and control slideshows, playlists and web apps. Users can browse and explore their content libraries via their preferred interface, smartphone, tablet or TV and watch in or out of the home.

    OCast is designed to be integrated into service provider set-top boxes with no specific development. Developers of mobile applications that incorporate long videos can offer VOD and SVOD content providers access to the big screen TV.

    All the code is published, without license fees, and is designed for easy integration into operators’ set-top boxes and equipment, as well as in the applications of video services providers. Operators retain control over the applications authorized to operate on their set-tops.

    Orange subsidiary Viaccess-Orca will integrate the OCast technology in its range of solutions for TV operators.

    Orange announces the Open Source release of its OCast software technology
    https://www.orange.com/en/Press-Room/press-releases/press-releases-2017/Orange-announces-the-Open-Source-release-of-its-OCast-software-technology

    Reply
  20. Tomi Engdahl says:

    BuzzFeed:
    YouTube says moderation team will have 10K+ people by end-2018, up 25% according to sources, and pledges new approach to ads to curb child exploitation problem — The company plans to have over 10,000 content moderators on staff by the end of 2018, YouTube CEO, Susan Wojcicki said.

    Here’s What YouTube Is Doing To Stop Its Child Exploitation Problem
    https://www.buzzfeed.com/charliewarzel/youtube-will-add-more-human-moderators-to-stop-its-child

    The company plans to have over 10,000 content moderators on staff by the end of 2018, YouTube CEO, Susan Wojcicki said.

    Reply
  21. Tomi Engdahl says:

    Sony’s Super Slo-Mo Cellphone Camera
    https://spectrum.ieee.org/tech-talk/consumer-electronics/gadgets/sonys-super-slomo-cellphone-camera

    This past summer, Sony debuted a high end cellphone, the Xperia XZ, that can take slow-motion videos at frame rates over an order of magnitude faster than its competitors’ handsets. The phone’s camera can capture the flapping of birds’ wings or a skateboard trick at a rate of 960 frames per second. By contrast, the iPhone X offers a maximum of 60 frames per second at 4K (ultra-high definition); the Samsung Galaxy S8 offers half that frame rate at 4K, and up to 60 fps when recording in high definition.

    This week, at the International Electron Devices Meeting in San Francisco, Sony presented details about how it made this speedy camera work within the space and power constraints of a cellphone. The key is an unusual 3D-stacked design that sandwiches a layer of DRAM between a CMOS image sensor and a layer of logic.

    Camera speeds are typically limited by the time it takes to transfer data off of individual pixels

    The 19.3-million-pixel image sensing chip, a logic layer, and a layer comprising 1 gigabit of DRAM are fabricated on separate wafers, then bonded together, thinned, and connected through interlayer links called through-silicon vias. At about 130 micrometers thick, this stacked sensor is still small enough for a cellphone
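    A rough calculation shows why the on-chip DRAM matters for the 960 fps mode. The readout resolution and bit depth below (720p at 10 bits per pixel) are assumptions for illustration, not figures from the presentation:

```python
# Why the stacked DRAM enables the burst: frames are parked in the
# 1 Gbit on-chip buffer faster than they could leave the sensor over
# a normal interface. Readout resolution and bit depth are assumed
# (720p at 10 bits per pixel), not figures from the presentation.

DRAM_BITS = 1e9                        # 1 gigabit buffer layer
frame_bits = 1280 * 720 * 10           # one 720p frame at 10 bits/px

frames_buffered = DRAM_BITS / frame_bits
burst_seconds = frames_buffered / 960  # filled at 960 frames/second

print(f"~{frames_buffered:.0f} frames, ~{burst_seconds * 1000:.0f} ms of burst")
```

    Roughly a hundred frames, i.e. on the order of a tenth of a second of real time per burst, which then plays back as several seconds of slow motion.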

    Reply
  22. Tomi Engdahl says:

    Finnish company Specim Spectral Imaging has introduced the world’s first easily portable hyperspectral camera, the Specim IQ. The 1.3 kg device lets its users analyze material samples in seconds, anywhere.

    Specim IQ can give its users the basis for critical decisions and rapid reactions. For example, the police could find evidence at a crime scene instantly, or in the art world the immediate detection of counterfeits could become a routine part of trading. Industry, food and health are also future application areas.

    Up to now, the use of hyperspectral imaging has been limited by the complexity, large size and lack of real-time output of competing devices. The new device is promised to be clearly easier to use.

    Source: https://www.uusiteknologia.fi/2017/12/07/suomalaisyritykselta-kannettava-hyperspektrikamera/

    Reply
  23. Tomi Engdahl says:

    EU Cable Industry Hits €23.5 Billion
    http://www.broadbandtechreport.com/articles/2017/12/eu-cable-industry-hits-23-5-billion.html?cmpid=enl_btr_weekly_2017-12-05

    According to IHS Markit (NASDAQ:INFO) and Cable Europe, the European cable industry continued to show steady growth in 2016, increasing 4% from the prior year, to €23.5 billion.

    Among the findings:

    The number of unique cable homes in the EU continued to climb steadily, reaching 65.1 million – or 30.5% of total TV households – at the end of 2016.
    Reflecting trends in consumer behavior, Internet revenue continues to rise, now comprising 34% of western European cable operator revenue.
    Germany remained the largest EU market, with more than three times more unique cable homes than the next biggest markets – Romania, the UK and Poland – each of which had just over 5 million unique subscribers, compared to Germany’s 18.6 million.

    Reply
  24. Tomi Engdahl says:

    Camera Design: Modular lens and lighting components boost smart camera flexibility
    http://www.vision-systems.com/articles/print/volume-22/issue-8/departments/technology-trends/camera-design-modular-lens-and-lighting-components-boost-smart-camera-flexibility.html?cmpid=enl_vsd_vsd_newsletter_2017-12-07

    Smart cameras can now perform machine vision applications ranging from barcode reading and presence/absence detection to glue-bead inspection, optical character reading and package inspection. Whether the device is called an image-based barcode reader, a vision sensor or a smart camera, such products typically include at least an image sensor coupled with on-board processing and I/O capability.

    Smart camera popularity is on the rise because such devices can save development time and money, especially in machine vision applications that can be precisely defined. Instead of having to specify the individual camera, frame grabber, PC, and software components often associated with machine vision system design, developers can simplify deployment by leveraging integrated lighting, lenses, and processors, and focusing mainly on the software development required to perform a particular task.

    Having said that, such all-in-one solutions that are shipped pre-built with integrated optics and illumination may offer limited flexibility, especially if application requirements change between the evaluation, integration and testing phases of the project.

    Reply
  25. Tomi Engdahl says:

    Lucas Shaw / Bloomberg:
    Sources: YouTube to introduce paid, on-demand streaming music service in March; Warner has signed on, talks continue with Sony, Universal, and Merlin. Artists have been asked to help promote the service, internally called Remix.

    YouTube to Launch New Music Subscription Service in March
    https://www.bloomberg.com/news/articles/2017-12-07/youtube-is-said-to-plan-new-music-subscription-service-for-march

    Reply
  26. Tomi Engdahl says:

    The world’s first deep learning enabled video camera for developers
    https://aws.amazon.com/deeplens/

    AWS DeepLens helps put deep learning in the hands of developers, literally, with a fully programmable video camera, tutorials, code, and pre-trained models designed to expand deep learning skills.

    Reply
  27. Tomi Engdahl says:

    TechCrunch:
    Sources: Apple to acquire music recognition app Shazam; one source puts the deal at about $400M; Shazam’s post-money valuation after its 2015 funding was $1.02B

    Sources: Apple is acquiring music recognition app Shazam
    https://techcrunch.com/2017/12/08/sources-apple-is-acquiring-music-recognition-app-shazam/

    As Spotify continues to inch towards a public listing, Apple is making a move of its own to step up its game in music services. Sources tell us that the company is close to acquiring Shazam, the popular app that lets people identify any song, TV show, film or advert in seconds, by listening to an audio clip or (in the case of, say, an ad) a visual fragment, and then takes you to content relevant to that search.

    We have heard that the deal is being signed this week, and will be announced on Monday, although that could always change.

    Reply
  28. Tomi Engdahl says:

    VESA Announces DisplayHDR Specification: Defining HDR Capabilities In Performance Tiers
    by Nate Oh on December 11, 2017 12:00 PM EST
    https://www.anandtech.com/show/12144/vesa-announces-displayhdr-spec-and-tiers

    Today, VESA is announcing the first version of their DisplayHDR specification, a new open standard for defining LCD high dynamic range (HDR) performance. Best thought of as a lightweight certification standard, DisplayHDR is meant to set performance standards for HDR displays and how manufacturers can test their products against them. The ultimate goal is to help VESA’s constituent monitor and system vendors clearly display and promote the HDR capabilities of their displays and laptops according to one of three different tiers.

    The core of the DisplayHDR standard is a performance test suite specification and associated performance tiers. The three tiers have performance criteria related to HDR attributes such as luminance, color gamut, bit depth, and rise time, corresponding to new trademarked DisplayHDR logos. Initially aiming at LCD laptop displays and PC desktop monitors, DisplayHDR permits self-certification by VESA members, as well as end-user testing, for which VESA is also developing a publicly available automated test tool.

    For consumers, the three new logos of DisplayHDR-400 (low-end), DisplayHDR-600 (mid-range), and DisplayHDR-1000 (high-end) represent discrete and publicly defined levels of HDR capabilities.

    Summary of DisplayHDR Specs
    https://displayhdr.org/performance-criteria/
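    As a rough illustration, the tier numbers correspond to peak-luminance levels in cd/m². A minimal sketch checking only that one axis (real certification also tests color gamut, bit depth, and rise time) might look like this:

```python
# Simplified sketch: map a display's peak luminance (cd/m^2) to the
# DisplayHDR tier whose number it meets. This checks luminance only;
# real certification covers several more criteria.

def displayhdr_tier(peak_luminance: float) -> str:
    """Return the highest DisplayHDR tier whose peak-luminance floor is met."""
    for floor, name in [(1000, "DisplayHDR-1000"),
                        (600, "DisplayHDR-600"),
                        (400, "DisplayHDR-400")]:
        if peak_luminance >= floor:
            return name
    return "uncertified"

print(displayhdr_tier(650))  # DisplayHDR-600
```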

    Reply
  29. Tomi Engdahl says:

    AI-Assisted Fake Porn Is Here and We’re All Fucked
    https://motherboard.vice.com/amp/en_us/article/gydydm/gal-gadot-fake-ai-porn

    Someone used an algorithm to paste the face of ‘Wonder Woman’ star Gal Gadot onto a porn video, and the implications are terrifying.

    There’s a video of Gal Gadot having sex with her stepbrother on the internet. But it’s not really Gadot’s body, and it’s barely her own face. It’s an approximation, face-swapped to look like she’s performing in an existing incest-themed porn video.

    The video was created with a machine learning algorithm, using easily accessible materials and open-source code that anyone with a working knowledge of deep learning algorithms could put together.

    It’s not going to fool anyone who looks closely. Sometimes the face doesn’t track correctly and there’s an uncanny valley effect at play, but at a glance it seems believable. It’s especially striking considering that it’s allegedly the work of one person—a Redditor who goes by the name ‘deepfakes’—not a big special effects studio.

    Like the Adobe tool that can make people say anything, and the Face2Face algorithm that can swap a recorded video with real-time face tracking, this new type of fake porn shows that we’re on the verge of living in a world where it’s trivially easy to fabricate believable videos of people doing and saying things they never did. Even having sex.

    “This is no longer rocket science.”

    According to deepfakes—who declined to give his identity to me to avoid public scrutiny—the software is based on multiple open-source libraries, like Keras with TensorFlow backend. To compile the celebrities’ faces, deepfakes said he used Google image search, stock photos, and YouTube videos. Deep learning consists of networks of interconnected nodes that autonomously run computations on input data. In this case, he trained the algorithm on porn videos and Gal Gadot’s face. After enough of this “training,” the nodes arrange themselves to complete a particular task, like convincingly manipulating video on the fly.

    Artificial intelligence researcher Alex Champandard told me in an email that a decent, consumer-grade graphics card could process this effect in hours, but a CPU would work just as well, only more slowly, over days.

    “This is no longer rocket science,” Champandard said.

    Reply
  30. Tomi Engdahl says:

    Google just launched three new photography apps
    Part of the company’s new ‘Appsperiments’ program
    https://www.theverge.com/2017/12/11/16763544/google-appsperiments-storyboard-selfissimo-scrubbies-apps-photography-motion-stills

    Google just announced three new photography applications: Storyboard (Android only), Selfissimo! (iOS and Android), and Scrubbies (iOS only). The releases are part of a new “appsperiments” program that Google just launched.

    The new program was inspired by the company’s Motion Stills app, which was released last year with the goal of taking technology in development from Google and turning it into actual apps. Like Motion Stills, the trio of new apps are fully functional software in their own right, but “built on experimental technology” that Google will continue to build out over time.

    It’s an approach that sounds similar to Microsoft’s Garage program, which offers developers at the Redmond company a chance to create similarly small, experimental apps.

    Next is the somewhat ridiculously named Selfissimo!, which is kind of like an automated black and white photo booth on your phone. Once you tap the screen to start a shoot, Selfissimo! will snap a picture every time you pose. The idea is to move around into different poses, with the app taking a picture every time you stop moving.

    Reply
  31. Tomi Engdahl says:

    Google Photos: One year, 200 million users, and a whole lot of selfies
    https://blog.google/products/photos/google-photos-one-year-200-million/

    Reply
  32. Tomi Engdahl says:

    Hybrid sensor sees sharply in the dark, too

    The IT-EMCCD image sensor, which combines two different technologies, performs equally well in night darkness and bright daylight. Thanks to its wide dynamic range, a single camera can accurately capture scenes that contain both very dark and very bright areas.

    For example, at 30 frames per second a standard image sensor can capture images at lighting levels of a few lux, corresponding to the brightness of a full moon or twilight. The other dominant parameter is the sensor’s dynamic range, which defines the span of light levels, from the smallest to the largest, that can be measured at one time. Capturing images in demanding lighting conditions, such as a dark alley at night, requires both good sensitivity and wide dynamic range: sensitivity to reach into the deepest shadows, and dynamic range to record brightly lit spots without losing image detail.

    The requirements set by such objects can be met by an image sensor utilizing the IT-EMCCD composite structure. It combines the best of the Interline Transfer CCD image sensor and electron multiplication CCD (EMCCD).

    Source: http://www.etn.fi/index.php/13-news/7294-yhdistelmakenno-nakee-tarkasti-myos-pimeassa
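    Dynamic range is commonly quoted in decibels as the ratio of the largest measurable signal to the smallest distinguishable one. A quick sketch with illustrative numbers (not taken from any IT-EMCCD datasheet):

```python
import math

def dynamic_range_db(full_well_electrons: float, read_noise_electrons: float) -> float:
    """Dynamic range of an image sensor in dB: ratio of the largest signal
    (full-well capacity) to the smallest distinguishable one (read noise)."""
    return 20 * math.log10(full_well_electrons / read_noise_electrons)

# Illustrative numbers only: a 20,000 e- full well with 2 e- read noise.
print(round(dynamic_range_db(20_000, 2), 1))  # 80.0 dB
```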

    Reply
  33. Tomi Engdahl says:

    YouTuber Convicted For Publishing Video Piracy ‘Tutorials’
    https://torrentfreak.com/youtuber-convicted-for-publishing-video-piracy-tutorials-171212/

    A YouTuber in Brazil has been prosecuted and fined for publishing videos that explain how people can pirate content online using IPTV devices. A TV industry group took exception to the man’s tutorials and the Court agreed they served no other purpose than to help people infringe copyright.

    While piracy-focused tutorials have been around for many years, the advent of streaming piracy coupled with the rise of the YouTube star created a perfect storm online.

    Even a cursory search on YouTube now turns up thousands of Kodi addon and IPTV-focused channels, each vying to become the ultimate location for the latest and hottest piracy tips. While these videos don’t appear to be a priority for copyright holders, a channel operator in Brazil has just discovered that they aren’t without consequences.

    Reply
  34. Tomi Engdahl says:

    Is there any sense in making taking photos “retro” harder:

    This camera app requires users to wait three days for pictures to ‘develop’
    https://www.digitaltrends.com/mobile/gudak-brings-back-old-school-photography/

    Thanks to the convenience of smartphones, snapping photos has become second nature. Anyone can grab a phone and take as many pics as they want. Even more convenient is the fact that you can instantly see what the photos look like.

    The 99-cent app is called Gudak Cam and has become very popular in South Korea and Japan, especially among high school girls. The app came out earlier this year, and is meant to simulate the look and feel of using a Kodak disposable camera (remember those?). The app requires you to fill up a “roll of film,” which contains 24 shots. Once you’ve finished with that roll, you can have it developed in a process that takes three days, at the end of which you can view the photos on your phone.

    All of this may sound incredibly inconvenient and, to a large extent, it is. However, the appeal of Gudak Cam lies in the fact that it forces users to slow down and really think about each shot.

    Whether Gudak is just a passing fad or the start of something bigger remains to be seen. However, it already has more than 1.3 million users, so it seems to be doing something right.

    Reply
  35. Tomi Engdahl says:

    Time to Rethink Computer Vision
    https://www.eetimes.com/author.asp?section_id=36&doc_id=1332710&

    Today, computers are fast becoming the world’s largest consumers of images, and yet, this is not reflected in the way that images are captured.

    It’s time to rethink how machines look at the world, using inspirations drawn from human vision to reshape computer vision and enable a new generation of vision-enhanced products and services.

    What’s the issue with the way that computer vision works now, and what do we do about it? Simply put, digital cameras have worked the same way for decades – all the pixels in an array measure the light they receive at the same time, and then report their measurements to the supporting hardware. Do this once and you have a stills camera. Repeat it rapidly enough and you have a video camera – an approach that hasn’t changed much since Eadweard Muybridge accidentally created cinema while exploring animal motion in the 1880s.

    This approach made sense when cameras were mainly used to take pictures of people for people. Today, computers are fast becoming the world’s largest consumers of images, and yet this is not reflected in the way that images are captured. Essentially, we’re still building selfie-cams for supercomputers.

    Events not images
    What does our sensor do differently? The most obvious difference is that its array of pixels doesn’t have a common frame rate. In fact, there are no frames at all. Instead, each pixel only outputs the intensity data it has measured once the light falling upon it has changed by a set amount. If the incident light isn’t changing (for example, in the background of a security camera’s field of view) then the pixel stays silent. If the scene is changing (for example, a car drives through it), the affected pixels report the change. If many cars pass, all the affected pixels report a sequence of changes.

    This approach has intriguing advantages. Motion blur becomes a thing of the past, because the faster the image changes the faster each affected pixel reports that change. Conversely, static parts of the image don’t keep diligently reporting their unchanging status, reducing the amount of redundant data being processed.

    Probably the most important aspect of our event-driven sensor, though, is the way it changes how we think about computer vision. If looking at a conventional video is like being handed a sequence of postcards by a friend and being asked to work out what is changing by flicking through them, an event-driven sensor’s output is more like looking at a single postcard while that friend uses a highlighter to mark every change in the scene as it happens – no matter the lighting conditions in the scene.
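    The per-pixel behavior described above can be sketched in a few lines. The threshold and sample values below are illustrative, not taken from any real event sensor:

```python
# Minimal sketch of the event-driven idea: a pixel stays silent until the
# light falling on it changes by a set threshold, then reports the change.

def pixel_events(samples, threshold=0.2):
    """Yield (index, new_value) only when intensity moves by >= threshold."""
    events = []
    last_reported = samples[0]
    for i, value in enumerate(samples[1:], start=1):
        if abs(value - last_reported) >= threshold:
            events.append((i, value))
            last_reported = value
    return events

# A static background produces no events; the step change produces one.
print(pixel_events([0.5, 0.5, 0.5, 0.9, 0.9]))  # [(3, 0.9)]
```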

    Reply
  36. Tomi Engdahl says:

    Disney details overarching direct-to-consumer plan possible through Fox deal
    https://techcrunch.com/2017/12/14/disney-details-overarching-direct-to-consumer-plan-possible-through-fox-deal/?utm_source=tcfbpage&sr_share=facebook

    Disney CEO Bob Iger positioned his company’s acquisition of Fox’s TV and movie businesses as a way for the company to prepare for a future in which streaming and direct-to-consumer dominate media consumption, on a conference call this morning to discuss the $52 billion deal. He noted that while they’re still planning to support cable channels and external distribution channels, this will also set them up to be ready to “flip a switch and distribute those programs and channels direct to consumer through platforms we’ve created.”

    Reply
  37. Tomi Engdahl says:

    Large-area OLED microdisplays miniaturize VR glasses
    http://www.laserfocusworld.com/articles/2017/12/large-area-oled-microdisplays-miniaturize-vr-glasses.html?cmpid=enl_lfw_lfw_enewsletter_2017-12-14

    Virtual reality (VR) glasses are increasingly popular, but they have usually been heavy and oversized–until now. Large-area organic light-emitting diode (OLED) microdisplays make it possible to produce ergonomic and lightweight VR glasses and reach very high frame rates and high resolutions with “extended full HD”.

    Because OLEDs are self-illuminating, they are energy-efficient and yield very high contrast ratios > 10,000:1. In addition, the fact that there is no need for a backlight means that they can be constructed in a simpler fashion, with fewer optical components.

    Within LOMID, Fraunhofer FEP is responsible for designing the circuit on the silicon chip, creating OLED prototypes, and coordinating the whole project.

    “Our goal is to develop a new generation of OLED displays that provide outstanding picture quality and make it possible to produce VR glasses in a compact format. We aim to achieve that by means of a specially designed OLED microdisplay.” The microdisplays achieve extended full HD, which means they have a resolution of 1920 x 1200 pixels (WUXGA). The diagonal screen size is about one inch, and the frame rate is around 120 Hz.
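    For scale, WUXGA resolution on a roughly one-inch diagonal works out to an extremely high pixel density:

```python
import math

# WUXGA panel on an assumed one-inch diagonal, as quoted for the microdisplay.
width_px, height_px, diagonal_in = 1920, 1200, 1.0
diagonal_px = math.hypot(width_px, height_px)
ppi = diagonal_px / diagonal_in
print(round(ppi))  # ~2264 pixels per inch
```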

    Especially with regard to microdisplays in consumer-facing augmented reality glasses, the researchers still see some as yet unresolved challenges that they wish to tackle in the future. These challenges include: very high levels of luminance and efficiency (which will necessitate removing the color filters used until now, and replacing these with directly structured emitters); a high yield for a large (chip) area; curved surfaces for more compact optics; circular light panels; irregular pixel matrices at even higher pixel density; integrated eye tracking; and transparent substrates.

    23 | 2017 – Lightweight, compact VR glasses made possible by large-area microdisplays
    Dresden / 1.12.2017
    https://www.fep.fraunhofer.de/en/press_media/23_2017.html?utm_campaign=pm1723en

    Reply
  38. Tomi Engdahl says:

    Murdoch’s Fox empire is set to become a literal Mickey Mouse outfit
    Disney buys Fox empire for $66bn, news divs to be spun off
    https://www.theregister.co.uk/2017/12/14/disney_buys_fox_66bn_dollars/

    Most of Rupert Murdoch’s 21st Century Fox empire is being flogged to Disney for $66bn, including large chunks of the film and telly businesses.

    An announcement from the Walt Disney Company, as old Walt’s empire is formally known, said that a “definitive agreement” had been entered into between 21st Century Fox and Disney.

    Reply
  39. Tomi Engdahl says:

    Ultra-Thin Camera Says Good-Bye to the Lens
    https://www.techbriefs.com/component/content/article/tb/news/news/27133?utm_source=TBnewsletter&utm_medium=email&utm_campaign=20171214_Main_Insider&eid=376641819&bid=1950539

    A new proof-of-concept design retires one of the most familiar parts of a traditional camera: the lens. By swapping out the glass lens with a tiny array of light receivers, a California Institute of Technology team believes the thinner, lighter model supports a new wave of ubiquitous imaging.

    A conventional camera’s curved lens bends, or refracts, light that can then be focused onto film or a sensor. The rounded size and shape, however, has prevented manufacturers from creating truly flat imagers — even on the latest (and thinnest) iPhones.

    Instead of a lens, the Caltech researchers, led by Professor Ali Hajimiri, used an ultra-thin optical phased array (OPA) to manipulate incoming light and capture an image.

    Phased arrays, commonly employed in wireless communication and radar applications, are collections of individual transmitters – each sending out the same signal. By staggering the timing of transmissions made at various points across the device, the array’s tightly focused signal beam can be steered in desired directions.

    Caltech’s optical phased array receiver uses a similar principle. Light signals received by the array’s various transmitters form a focused, controllable “gaze” where the waves are amplified. By adding a tightly controlled timed delay to the light being received, an operator can selectively change the camera’s direction and focus.

    With today’s camera options, switching from a large focal point to a small focal point requires a swapping of lenses.

    By adjusting the delays to detect different points in space, users of the Caltech device can quickly scan an entire surface or select a small fraction of the field of view, instantaneously shifting from “fish-eye” to telephoto modes respectively.
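    The timing trick described above can be illustrated with the textbook uniform-linear-array steering formula. The element count, spacing, and wavelength below are assumptions for illustration, not Caltech's actual parameters:

```python
import math

def steering_phases(n_elements, spacing_m, wavelength_m, angle_deg):
    """Per-element phase delays (radians) that steer a uniform linear phased
    array toward angle_deg off broadside: phi_n = 2*pi*n*d*sin(theta)/lambda."""
    theta = math.radians(angle_deg)
    return [2 * math.pi * n * spacing_m * math.sin(theta) / wavelength_m
            for n in range(n_elements)]

# Illustrative: 4 receivers at half-wavelength spacing, steered 30 degrees.
wavelength = 1.55e-6  # an assumed telecom-band wavelength
phases = steering_phases(4, wavelength / 2, wavelength, 30)
print([round(p, 3) for p in phases])  # [0.0, 1.571, 3.142, 4.712]
```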

    Reply
  40. Tomi Engdahl says:

    Light Camera Founder Explains Delays, Software Bugs, and Slow Data Transfer
    https://spectrum.ieee.org/view-from-the-valley/consumer-electronics/gadgets/light-camera-founder-explains-delays-planned-software-fixes-and-why-data-transfer-takes-so-long

    Light, the company that aims to revolutionize photography by digitally combining the output of dozens of small, low-cost camera modules with plastic lenses to create professional-quality images, started filling pre-orders last July, after about a year of delays.

    Reviews, to date, have been less than enthusiastic, dinging the device on its low light performance, slow transfer rates, focusing issues, and spotty resolution with artifacts. The company promises, however, that these problems are solvable—and will be fixed quickly.

    Inside the Development of Light, the Tiny Digital Camera That Outperforms DSLRs
    https://spectrum.ieee.org/consumer-electronics/gadgets/inside-the-development-of-light-the-tiny-digital-camera-that-outperforms-dslrs

    Reply
  41. Tomi Engdahl says:

    Plexamp, Plex’s spin on the classic Winamp player, is the first project from new incubator Plex Labs
    https://techcrunch.com/2017/12/18/plexamp-plexs-spin-on-the-classic-winamp-player-is-the-first-project-from-new-incubator-plex-labs/

    Media software maker Plex, which has been enjoying a bit of growth following its newer focus on DIY cord-cutters, today announced a new incubator and community resource called Plex Labs. The idea here is to help the company’s internal passion projects gain exposure, along with those from Plex community members. Plex Labs will also offer in-depth technical writing about Plex, the company says.

    Today, Plex Labs is also unveiling its first project: a music player called Plexamp.

    The player’s name is a nod to the long-lost Winamp, which it’s designed to replace. The player was built by several Plex employees in their free time, and is meant for those who use Plex for music.

    As the company explains in its announcement, the goal was to build a small player that sits unobtrusively on the desktop and can handle any music format. The team limited itself to a single window, making Plexamp the smallest Plex player to date, in terms of pixel size.

    Under the hood, Plexamp uses the open source audio player Music Player Daemon (MPD), along with a combination of ES7, Electron, React, and MobX technologies.

    The end result is a player that runs on either macOS or Windows and works like a native app.

    Introducing Plexamp
    https://medium.com/plexlabs/introducing-plexamp-9493a658847a

    Reply
  42. Tomi Engdahl says:

    HD thermal imager from Sierra-Olympic is capable of 1080p output
    http://www.laserfocusworld.com/articles/2017/10/hd-thermal-imager-from-sierra-olympic-is-capable-of-1080p-output.html?cmpid=enl_lfw_lfw_detectors_and_imaging_newsletter_2017-12-19

    The Vayu HD is an uncooled thermal camera with true HD resolution (1920 × 1200 pixels) that is capable of 1080p output. Its longwave-infrared (LWIR) spectral response of 8 to 14 μm suits security, military imaging, and wide-area surveillance applications. The standalone camera uses a VOx microbolometer sensor with more than 2.3 million pixels on a 12 μm pixel pitch.

    https://www.sierraolympic.com/
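    A quick sanity check of those figures: 1920 × 1200 on a 12 μm pitch gives the quoted pixel count and implies the sensor's active area:

```python
# Check the quoted Vayu HD figures: pixel count and implied sensor size.
width_px, height_px = 1920, 1200
pitch_um = 12.0

pixels = width_px * height_px
sensor_w_mm = width_px * pitch_um / 1000
sensor_h_mm = height_px * pitch_um / 1000

print(pixels)                    # 2304000 -> "more than 2.3 million pixels"
print(sensor_w_mm, sensor_h_mm)  # 23.04 14.4 (mm)
```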

    Reply
  43. Tomi Engdahl says:

    Telia will build a fiber network that brings Finnish Ice Hockey League (Liiga) matches from ice rinks around Finland to Helsinki. Telia has promised to revolutionize the hockey viewing experience from the beginning of the season. 4K high-definition and 5G transmissions also require fast fiber connections.

    At the moment, Liiga broadcasts are traditionally produced on site, with the production company’s outside-broadcast trucks parked next to the ice rink. In the solution from Telia and Streamteam Nordic, everything is done as a remote production from Helsinki.

    “According to our agreement, the old production equipment will not stay in use. In practice, all devices will be replaced by new cameras,” says Olli-Pekka Takanen, who is in charge of the Liiga’s media rights.

    The new setup allows up to seven parallel games to be produced from a single location. 4K readiness is also included from the beginning.

    “The 4K Remote Production Center will change the making of Finnish TV to a whole new level,”

    Remote production is more efficient and faster for Telia, since there is no need to move equipment or allocate resources to different locations.

    Source: https://www.uusiteknologia.fi/2017/12/20/kuitu-tuo-maakunnan-jaahallit-helsinkiin/

    Reply
  44. Tomi Engdahl says:

    Analyzing The Losses In Visually Lossless Compression Algorithms
    https://semiengineering.com/analyzing-the-losses-in-visually-lossless-compression-algorithms/

    Methods to assess algorithm quality, and observations about VESA Display Stream Compression using these algorithms.

    Over the past few years there has been a remarkable progress in the quality of display devices, with 4K displays becoming the norm, and 8K and 10K displays following closely. However, this increase in quality has led to a tremendous increase in the amount of data being transmitted over display links. To meet these demands most display interfaces are now making use of compression.

    https://www.synopsys.com/cgi-bin/verification/dsdla/pdfr1.cgi?file=vesadisplay-vip-wp.pdf
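    To see why display links need compression at all, a back-of-the-envelope calculation of uncompressed bandwidth helps. The sketch below assumes 10-bit RGB (30 bits per pixel) and ignores blanking and link overhead:

```python
def raw_link_gbps(width, height, fps, bits_per_pixel=30):
    """Uncompressed video bandwidth in Gbit/s (30 bpp = 10-bit RGB),
    ignoring blanking intervals and link-layer overhead."""
    return width * height * fps * bits_per_pixel / 1e9

print(round(raw_link_gbps(3840, 2160, 60), 1))  # 14.9 Gbit/s for 4K60
print(round(raw_link_gbps(7680, 4320, 60), 1))  # 59.7 Gbit/s for 8K60
```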

    Reply
  45. Tomi Engdahl says:

    REPORT: Kids in ‘Netflix Only’ Homes are Being Saved from 230 Hours of Commercials a Year
    http://exstreamist.com/report-kids-in-netflix-only-homes-are-being-saved-from-230-hours-of-commercials-a-year/

    The average child watches 2.68 hours of television a day, or almost 980 hours a year
    One hour of television contains 14.25 minutes of commercials, or 24% of airtime
    ‘Netflix Only’ homes are saving kids 230 hours of commercials a year, or 9.6 days of ads
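    The report's headline figures can be reproduced with a few lines of arithmetic:

```python
# Reproduce the report's arithmetic from its own stated inputs.
hours_per_day = 2.68
ad_minutes_per_hour = 14.25

hours_per_year = hours_per_day * 365
ad_hours_per_year = hours_per_year * ad_minutes_per_hour / 60

print(round(hours_per_year))            # 978 hours of TV a year
print(round(ad_hours_per_year))         # 232 -> roughly the reported 230 hours
print(round(ad_hours_per_year / 24, 1))  # 9.7 -> roughly the reported 9.6 days
```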

    TV viewership has completely changed for children. While most families have watched plenty of television ever since its invention and mass adoption, the ways in which we consume entertainment continues to evolve. From one box sitting in the living room with four channels to choose from, to multiple devices strewn around the house all utilizing different options like cable, streaming services, and internet entertainment.

    In 2017, instead of watching traditional children’s television like Saturday morning cartoons or after-school specials, more kids than ever are using streaming services like Netflix for their entertainment, with a “for kids” section, and zero commercials.

    Reply
  46. Tomi Engdahl says:

    AnyDVD Supports UHD Blu-Ray Ripping, While Devices Patch Security Holes
    https://yro.slashdot.org/story/17/12/21/225243/anydvd-supports-uhd-blu-ray-ripping-while-devices-patch-security-holes?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Slashdot%2Fslashdot%2Fto+%28%28Title%29Slashdot+%28rdf%29%29

    The controversial ripping tool AnyDVD has released a new beta version that allows users to decrypt and copy UHD Blu-Ray discs. The software makes use of the leaked keys that came out recently and appears to work well. Meanwhile, disc drive manufacturers are patching security holes.

    https://torrentfreak.com/anydvd-supports-uhd-blu-ray-ripping-while-devices-patch-security-holes-171220/

    For a long time UHD Blu-Ray discs have been the holy grail for movie rippers.

    Protected by the ‘unbreakable’ AACS 2.0 encryption, pirates were left with regular HD releases. While that’s fine for most people, it didn’t sit well with the real videophiles.

    This year there have been some major developments on this front. First, full copies of UHD discs started to leak online, later followed by dozens of AACS 2.0 keys. Technically speaking AACS 2.0 is not confirmed to be defeated yet, but many discs can now be ripped.

    This week a popular name jumped onto the UHD Blu-Ray bandwagon. In its latest beta release, AnyDVD now supports the format, relying on the leaked keys.

    The involvement of AnyDVD is significant because it previously came under legal pressure from decryption licensing outfit AACS LA. This caused former parent company Slysoft to shut down last year, but the software later reappeared under new management.

    Based on reports from several AnyDVD users, the UHD ripping works well for most people. Some even claim that it’s faster than the free alternative, MakeMKV.

    The question is, however, how long the ripping party will last. TorrentFreak has learned that not all supported Blu-Ray disc drives will remain “UHD-friendly.”

    According to one source’s information, which we were unable to independently verify, device manufacturers have recently been instructed to patch the holes through firmware updates.

    Reply
  47. Tomi Engdahl says:

    Scientists can match photos to individual smartphones
    http://hexus.net/ce/news/mobile-phones/113405-scientists-can-match-photos-individual-smartphones/

    Researchers at the University at Buffalo NY have discovered that it is possible to identify individual smartphones from just a single photo taken by the device. The technique is compared directly to ‘barrel matching’ or identifying a gun which has fired a particular bullet. In the case of smartphones, each one takes photos with a telltale “pattern of microscopic imaging flaws that are present in every picture they take”. Specifically, the manufacturing imperfections creating tiny variations in each camera’s sensor is referred to as its photo-response non-uniformity (PRNU).

    Explaining why there are differences in recorded photos from these mass produced products, the UB Blog says that while camera modules and lenses are built for identical performance, manufacturing imperfections cause tiny variations and “these variations can cause some of sensors’ millions of pixels to project colours that are slightly brighter or darker than they should be.”

    The differences between outputs from different smartphones, especially shots of the same scene taken with the same device model, are not easy to see with the naked eye, if they are visible at all. However, the lack of uniformity in mass production “forms a systemic distortion in the photo called pattern noise”. Extracted by special filters, the pattern is unique for each camera and can be saved as its PRNU.

    In tests, scientists identified which of 30 different iPhone 6s smartphones and 10 different Galaxy Note 5 smartphones took each of 16,000 images in a database, with 99.5 per cent accuracy.

    Beyond the obvious implications of the comparison between smartphone camera output and gun barrels and bullets, there are other uses for this tech. The UB team suggests that you could register your PRNU with a bank or retailer, for example, adding an extra layer of security to ID verification. The researchers think the tech could potentially defeat three of the most common tactics used by cybercriminals: fingerprint forgery attacks, man-in-the-middle attacks, and replay attacks.

    Your smartphone’s next trick? Fighting cybercrime.
    http://www.buffalo.edu/news/releases/2017/12/013.html

    Like bullets fired from a gun, photos can be traced to individual smartphones, opening up new ways to prevent identity theft

    “Like snowflakes, no two smartphones are the same. Each device, regardless of the manufacturer or make, can be identified through a pattern of microscopic imaging flaws that are present in every picture they take,” says Kui Ren, the study’s lead author. “It’s kind of like matching bullets to a gun, only we’re matching photos to a smartphone camera.”

    The new technology, to be presented in February at the 2018 Network and Distributed Systems Security Conference in California, is not yet available to the public. However, it could become part of the authentication process — like PIN numbers and passwords — that customers complete at cash registers, ATMs and during online transactions.

    Digital cameras are built to be identical. However, manufacturing imperfections create tiny variations in each camera’s sensors. These variations can cause some of sensors’ millions of pixels to project colors that are slightly brighter or darker than they should be.
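    As a loose illustration of the pipeline the researchers describe (their exact method is not given in these articles), here is a toy one-dimensional sketch: extract a noise residual from each photo, average residuals into a camera fingerprint, then match a query photo by normalized correlation. All numbers and the simple moving-average "denoiser" are made up; real PRNU extraction uses wavelet denoising on full images.

```python
import random

def residual(signal):
    """Noise residual: the signal minus a crude 3-tap moving-average 'denoiser'."""
    n = len(signal)
    denoised = [(signal[max(i - 1, 0)] + signal[i] + signal[min(i + 1, n - 1)]) / 3
                for i in range(n)]
    return [s - d for s, d in zip(signal, denoised)]

def fingerprint(images):
    """Average residual over several photos from the same camera."""
    residuals = [residual(img) for img in images]
    return [sum(r[i] for r in residuals) / len(residuals)
            for i in range(len(residuals[0]))]

def correlation(a, b):
    """Normalized cross-correlation of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0

# Simulate a camera whose sensor adds a fixed pattern noise to every photo.
random.seed(0)
pattern = [random.gauss(0, 0.3) for _ in range(256)]
shoot = lambda scene: [s + p for s, p in zip(scene, pattern)]

training = [shoot([random.random() for _ in range(256)]) for _ in range(12)]
fp = fingerprint(training)

same_cam = shoot([random.random() for _ in range(256)])   # photo from this camera
other_cam = [random.random() for _ in range(256)]          # photo from another camera
print(correlation(fp, residual(same_cam)), correlation(fp, residual(other_cam)))
```

    The photo carrying the camera's pattern noise should correlate with the fingerprint far more strongly than the unrelated photo does.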

    Reply
