Audio and video trends 2015

MEMS mics are taking over. Almost every mobile device has ditched the old-fashioned electret microphone, invented back in 1962 at Bell Labs. Expect new piezoelectric MEMS microphones in 2015, promising unheard-of signal-to-noise ratios (SNR) of up to 80 dB (versus 65 dB in the best current capacitive microphones). MEMS microphone sales are growing like gangbusters.
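For context, the jump from 65 dB to 80 dB SNR is easy to quantify. A quick back-of-the-envelope check (my own arithmetic, not from the article):

```python
# Rough arithmetic: what a 15 dB SNR improvement means in linear terms.
# 80 dB (claimed for piezoelectric MEMS) vs 65 dB (best capacitive MEMS).
delta_db = 80 - 65

# dB is logarithmic: every 20 dB is a 10x amplitude ratio,
# every 10 dB is a 10x power ratio.
amplitude_ratio = 10 ** (delta_db / 20)   # noise amplitude relative to signal
power_ratio = 10 ** (delta_db / 10)       # noise power relative to signal

print(round(amplitude_ratio, 1), round(power_ratio, 1))   # 5.6 31.6
```

In other words, roughly 5.6 times less noise amplitude, or about 32 times less noise power, for the same signal level.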

Analysts and veterans of the International CES expect to see plenty of 4K ultra-high-definition televisions, new smartwatch uses, and a large section of the show floor dedicated to robotics. 2015 will be the first year CES gets behind 4K in a big way, as lower price points make the technology more attractive to consumers. Samsung, Sony, Sharp, and Toshiba will be big players in the 4K arena. OEMs must solve the problem of intelligence and connectivity before 4K really takes off. CES attendees may also see 4K TVs optimized for certain tasks, along with a variety of sizes. There will be 10-inch, 14-inch, and 17-inch UHD displays.

Is 4K not enough anymore? Will 8K finally come true? Korean giant LG has promised to introduce an 8K TV at the CES 2015 exhibition in January. 8K means a total of 33.2 million pixels, or 7680 x 4320 resolution. The fate of 4K video material is still uncertain, and 8K video will almost certainly not be widely available for a long time.
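A quick sanity check of the pixel counts quoted above:

```python
# Pixel counts for the UHD resolutions mentioned above.
uhd_4k = 3840 * 2160      # "4K" UHD
uhd_8k = 7680 * 4320      # 8K UHD

print(uhd_8k)             # 33177600 -- the "33.2 million pixels" figure
print(uhd_8k // uhd_4k)   # 4 -- 8K carries four times the pixels of 4K
```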

Sound bars will be a big topic at shows. One problem with new TVs: the thinner they are, the harder it is to get good sound out of them.

Open file formats Matroska Video (MKV) and Free Lossless Audio Codec (FLAC) will become more widely used, as Windows 10 is set to feature native support for both MKV and FLAC.

Watching shows online is more common now, and more people are watching videos on smaller screens. You can use a tablet as a personal TV. Phablets and portable televisions have taken off in China, Japan, and Korea, where many people watch videos during long commutes. Tablets have become so ubiquitous and inexpensive that you can buy one for a specific application. Much of the innovation will be in software rather than hardware: tuning the tablets to boot up like a television instead of an Android tablet.

We’re all spending more time with smartphones and tablets. So much so that the “second screen” may now be the “first screen,” depending on the data you read. It seems inevitable that smartphones and tablets will replace the television in terms of time spent. Many metrics firms, including Nielsen, report on the rapid increase of mobile device usage—especially when it comes to apps. Half of YouTube’s views now come from phones and tablets.

Qualcomm will push broadcast LTE this year. It will be picked up by more and more tablet vendors, so tablets can receive broadcast TV signals without requiring a generic LTE data connection.

There will be lots of talk about traditional TV versus new streaming services, especially about who gets which program material and at what price. While it’s possible to create a TV platform that doesn’t deal with live channels, smart TVs and game consoles alike generally try to integrate that content as best they can.

Netflix’s new strategy to take on cable involves becoming best friends with cable to get its app included on set-top boxes of cable, fiber and satellite TV operators. Roughly 90 million U.S. households subscribe to cable or other forms of pay TV, and more than 73 million subscribe to the biggest five operators alone. That’s why Netflix has been working hard to team up with one of these major operators.

Google intends to integrate content as best it can, and has published the “Live Channels for Android TV” app in the Play Store. The app is unsurprisingly incompatible with phones and tablets, perhaps because those markets are being intentionally kept separate.

Virtual reality video is trying to get into the spotlight. According to the article “Samsung’s new Milk VR to round up 360-degree videos for Gear VR”, Milk VR will provide the videos for free as Samsung hopes to goose interest in virtual reality. The Milk VR service will provide free 360-degree videos to anyone using a Gear VR virtual-reality headset (which uses the Galaxy Note 4). Samsung wants to jump-start the virtual-reality movement, as the company is looking at VR as a potential growth engine at a time when one of its key traditional revenue sources, smartphones, has slowed down. The videos will also serve as a model for future filmmakers and artists looking to take advantage of the virtual-reality medium, as well as build up an ecosystem and viewership for VR content.

Although digital video is increasing in popularity, analog video remains in use in many applications.

1,154 Comments

  1. Tomi Engdahl says:

    Engineers reveal record-setting flexible phototransistor
    http://phys.org/news/2015-10-reveal-record-setting-flexible-phototransistor.html

    Inspired by mammals’ eyes, University of Wisconsin-Madison electrical engineers have created the fastest, most responsive flexible silicon phototransistor ever made.

    The innovative phototransistor could improve the performance of myriad products—ranging from digital cameras, night-vision goggles and smoke detectors to surveillance systems and satellites—that rely on electronic light sensors. Integrated into a digital camera lens, for example, it could reduce bulkiness and boost both the acquisition speed and quality of video or still photos.

    While many phototransistors are fabricated on rigid surfaces, and therefore are flat, Ma and Seo’s are flexible, meaning they more easily mimic the behavior of mammalian eyes.

    “We actually can make the curve any shape we like to fit the optical system,” Ma says. “Currently, there’s no easy way to do that.”

    One important aspect of the success of the new phototransistors is the researchers’ innovative “flip-transfer” fabrication method, in which their final step is to invert the finished phototransistor onto a plastic substrate. At that point, a reflective metal layer is on the bottom.

    “In this structure—unlike other photodetectors—light absorption in an ultrathin silicon layer can be much more efficient because light is not blocked by any metal layers or other materials,”

    Read more at: http://phys.org/news/2015-10-reveal-record-setting-flexible-phototransistor.html#jCp

    Reply
  2. Tomi Engdahl says:

    Make your own LED video wall
    http://www.edn.com/electronics-blogs/led-zone/4440656/Make-your-own-LED-video-wall

    Have you ever looked at a wall in your house, apartment, or dorm room and thought, “Hey, this could really use an eye-blistering LED wall”? The folks at Adafruit Labs have those sorts of urges all the time and have found just the thing to scratch that itch. Their pro-grade video controller board provides the perfect foundation for building your own LED video wall using their 16×32 LED matrix panels. The controller’s transmitter/decoder board converts standard DVI video to an Ethernet data stream with resolutions up to 1280×1024 pixels. The companion receiver board can control 8 strings of panels, up to 32 panels long.

    LED Video Wall Controller Set/Programmed for Adafruit LED Panels
    http://www.adafruit.com/products/1453

    Reply
  3. Tomi Engdahl says:

    Alex Weprin / Politico:
    By making Star Trek exclusive to All Access, CBS shows it is serious about competing with Netflix, Amazon, others with its own streaming service — CBS is serious about taking on Netflix — There’s no question about it anymore: CBS is serious in its effort to take on Netflix, Amazon …

    CBS is serious about taking on Netflix
    http://www.capitalnewyork.com/article/media/2015/11/8581486/cbs-serious-about-taking-netflix

    Reply
  4. Tomi Engdahl says:

    Eriq Gardner / Hollywood Reporter:
    MPAA claims responsibility for shutting down popcorntime.io and torrent site YTS — MPAA Touts Big Legal Success Against Popcorn Time — The studios trade association has scored injunctions in Canada and New Zealand over PopcornTime.io and the torrent outfit YTS.

    MPAA Touts Big Legal Success Against Popcorn Time
    http://www.hollywoodreporter.com/thr-esq/mpaa-touts-big-legal-success-836329

    The Motion Picture Association of America is hailing a success in the ongoing fight against piracy with word that the trade association has won the shutdown of the “official” Popcorn Time fork as well as torrent outfit YTS.

    In an announcement on Tuesday, the MPAA says that Popcorntime.io closed after a court order in Canada. The shutdown of the main Popcorn Time fork was first reported in late October and attributed at the time to in-fighting by some of its core developers.

    In the past few weeks, the MPAA appears to have ramped up its international efforts on the legal front.

    Reply
  5. Tomi Engdahl says:

    Leslie Picker / New York Times:
    When asked if Netflix would create a live evening newscast, Netflix CEO Reed Hastings says he doesn’t “invest in things that are dying”

    Netflix’s Reed Hastings Sees Need for More Content
    http://www.nytimes.com/2015/11/04/business/dealbook/netflixs-reed-hastings-sees-need-for-more-content.html?_r=0

    Attention cable cord-cutters: Do not expect to see live sports or evening newscasts on Netflix anytime soon.

    When asked whether Netflix would ever create a live evening newscast, Reed Hastings, the company’s chief executive and one of its founders, responded: “You don’t want to invest in things that are dying.” He also said that he did not see a day when the site would offer live sporting events.

    He did concede that making high-quality shows was “very challenging.”

    Reply
  6. Tomi Engdahl says:

    Jack Nicas / Wall Street Journal:
    Maker of Polaroid cameras sues GoPro, claiming Hero4 Session violates its patent for the Polaroid Cube camera

    Polaroid Maker Sues GoPro Over Tiny, Cubical Camera
    C&A Marketing says GoPro’s Hero4 Session violates its patent for the Polaroid Cube camera
    http://www.wsj.com/article_email/polaroid-maker-sues-gopro-over-tiny-cubical-camera-1446563105-lMyQjAxMTE1ODA0MzAwNTM4Wj

    The maker of Polaroid cameras is suing wearable-camera maker GoPro Inc. for allegedly infringing on its patent for cube-shaped cameras, a dispute between old and new in the industry that poses another problem for GoPro’s latest product.

    GoPro said several European Union patents for the Session and a U.S. patent for the Session’s plastic case, all issued in March, show “that GoPro was working on Hero4 Session well before the competitor filed for its patent, which covers its own product—not GoPro’s.”

    The year-old Polaroid Cube and the four-month-old GoPro Session look like cubes with rounded corners. Both have a camera lens on the front side and large control button on the top. The Session is slightly larger than the 1.4-cubic-inch Cube.

    A federal jury now might have to decide whether those similarities mean GoPro infringed on C&A’s design patent, which has just a 12-word claim: “The ornamental design for a cubic action camera, as shown and described.”

    If a jury decides the Session infringes on the patent, what does that mean for other makers of cube-shaped cameras?

    The lawsuit is the latest Session-related headache for GoPro.

    Reply
  7. Tomi Engdahl says:

    Sennheiser announces €50,000 headphones (we checked, no typos)
    Michelangelo’s preferred marble pressed into service for rich audiophiles
    http://www.theregister.co.uk/2015/11/04/sennheiser_announces_50000_headphones/

    Sennheiser has announced a new pair of headphones it says will cost “around €50,000” (£35,886 or US$55,165).

    The forthcoming “Orpheus” model boasts silver-plated copper cable and “gold-vaporized ceramic electrodes and platinum-vaporized diaphragms … exactly 2.4 µm thick, the result of extensive research that shows that any thinner or thicker would be sub-optimal.” The accompanying amplifier’s marble housing “comes from Carrara in Italy and is the same type of marble that Michelangelo used to create his sculptures.”

    The tubes “rise from the base and start to glow”, and “the control elements, each of which are crafted from a single piece of brass and then plated with chrome … slowly extend from the marble housing.”

    Products with this kind of price tag can appear ridiculous, but there’s clearly a market for this stuff.

    Reply
  8. Tomi Engdahl says:

    Sony prepares to lop off semiconductor biz
    Devices division set for overhaul to also include batteries and storage
    http://www.theregister.co.uk/2015/10/06/sony_spinoff/

    Sony is reorganizing its devices division, and spinning off its semiconductor business.

    The Japanese electronics giant said that on April 1, 2016, Sony Semiconductor Solutions will begin operating as its own business with manufacturing, R&D, and sales operations. The new business will continue to operate as part of the Sony Group of companies.

    “The aim of this new structure is to enable each of the three main businesses within this segment, namely the semiconductor, battery, and storage media businesses, to more rapidly adapt to their respective changing market environments and generate sustained growth,” Sony said in announcing the move.

    Central to the new business will be Sony’s imaging sensors operation. The wildly successful sensor business develops the image sensors for cameras – ranging from smartphones to surveillance equipment – and has become a cash cow.

    Sony is estimated to have a 40 per cent share of the entire imaging sensor market.

    Reply
  9. Tomi Engdahl says:

    What Your Photos Know About You
    http://yro.slashdot.org/story/15/11/03/1725231/what-your-photos-know-about-you

    Sandra Henry-Stocker became curious about how much more complex the jpg format had become since she first did a deep dive into it more than twenty years ago, so she dug into how much information is stored and where. “This information is quite extensive — depending on the digital camera you’re using,” says Henry-Stocker.

    What do your photos know about you?
    http://www.itworld.com/article/2999967/personal-technology/what-do-your-photos-know-about-you.html

    Ever wonder about how much information is stored in your image files beyond the pixels that comprise the images themselves? And what that data might tell about you, your camera, your photographic style, and your location? It just might be a lot more than you ever imagined.

    To begin, all the metadata regarding the photos that we take with our digital cameras and phones is stored in the EXIF (exchangeable image file format) data that is incorporated into jpg (and tiff) photos. This information can be quite extensive, depending on the digital camera you’re using, containing details such as the make and model of the camera, whether a flash was used, the focal length, light value, and the shutter speed used when the photo was taken. And if your phone or camera has geotagging turned on, it will also include the altitude, longitude, and latitude of the place where the photo was taken.

    In addition, when I update an image with Gimp, say to crop it or rotate it 90 degrees, Gimp also adds information to the image file.

    Geotagging causes the longitude and latitude to be captured.

    Sometimes you might want to remove, change, or simply examine this data. For example, if you’re a very serious photographer, you might not want to share all the details of how you made the shot; these might comprise your own kind of “photography trade secret”. Or you might not want anyone to know where a photo was shot, or the date and time it was taken. Or you just might not want the extra data adding bulk to your files. You might also sometimes want to add data, for example to insert a copyright notice into your image files. All of these things can easily be done with exiftool.
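    As a sketch of where this metadata lives: EXIF data sits in the JPEG file’s APP1 segment, right after the start-of-image marker. The stdlib-only scanner below is illustrative (exiftool does the real parsing); it only locates the segment, without decoding the tags inside it.

```python
import struct

def find_exif_segment(jpeg_bytes):
    """Walk the JPEG segment markers and return (offset, length) of the
    APP1/Exif segment, or None if there isn't one. A sketch, not a full
    EXIF parser."""
    if jpeg_bytes[:2] != b"\xff\xd8":            # SOI marker
        return None
    pos = 2
    while pos + 4 <= len(jpeg_bytes):
        if jpeg_bytes[pos] != 0xFF:
            return None                          # lost marker sync
        marker = jpeg_bytes[pos + 1]
        if marker == 0xDA:                       # SOS: compressed data follows
            return None
        # Segment length is big-endian and includes its own two bytes.
        (seg_len,) = struct.unpack(">H", jpeg_bytes[pos + 2:pos + 4])
        if marker == 0xE1 and jpeg_bytes[pos + 4:pos + 10] == b"Exif\x00\x00":
            return pos, seg_len
        pos += 2 + seg_len
    return None

# A minimal synthetic JPEG: SOI followed by a tiny Exif APP1 segment.
fake = b"\xff\xd8" + b"\xff\xe1" + struct.pack(">H", 8) + b"Exif\x00\x00"
print(find_exif_segment(fake))   # (2, 8)
```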

    Reply
  10. Tomi Engdahl says:

    Digging HDMI Out Of UDP Packets
    http://hackaday.com/2015/11/04/digging-hdmi-out-of-udp-packets/

    [Danman] was looking for a way to get the HDMI output from a camera to a PC so it could be streamed over the Internet. This is a task usually done with HDMI capture cards, either PCI or even more expensive USB 3.0 HDMI capture boxes. In his searches, [danman] stumbled across an HDMI extender that transmitted HDMI signals over standard Ethernet. Surely there must be a way to capture this data and turn it back into video.

    Reverse engineering Lenkeng HDMI over IP extender
    https://danman.eu/blog/reverse-engineering-lenkeng-hdmi-over-ip-extender/

    A short time ago, I was searching for a way to get the HDMI output from a camera to a PC (and then stream it on the Internet). There are PCI-x HDMI input cards on the market, but they cost $100+. Then I found a device that transmits HDMI signals over an IP network for half that price, so I took the chance. The specification said something about MJPEG, so I thought it might be possible.

    When the package arrived, the first thing I did was test whether it really works as described, and it did: audio and video from a DVD player were transmitted through a common Ethernet switch to my TV. The second thing I did, of course, was start Wireshark and sniff the data.

    The highest bitrate was on port 2068, which indicated it was the video stream. After a while I found a packet with a JFIF header: great! The data is not encrypted or further compressed; it contains plain JPEGs.

    So I searched Google for a Python multicast listener and modified it to save the data to .jpeg files.

    After looking at the Wireshark packets again, I found out that there are some bytes at the end of each packet that are not part of the UDP payload.

    So I googled a raw socket listener (thanks, Silver Moon) and manually parsed the IP and UDP headers to match the correct packets and extract the UDP payload with the trailing bytes, and voila! It worked; I got correct JPEG frames.

    Thank you, Chinese engineers! Because of the wrong length in the IP header (1044), I have to listen on a raw socket!
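    A minimal sketch of that parsing step, assuming the packet bytes have already been read from a raw socket (which needs root on Linux). The port number 2068 comes from the post; the function names and the synthetic test packet are illustrative, not [danman]’s actual code.

```python
import struct

def udp_payload(ip_packet, port=2068):
    """Parse an IPv4 packet captured from a raw socket and return the UDP
    payload if it is addressed to the given destination port, else None.
    Deliberately ignores the (reportedly wrong) IP total-length field and
    takes everything after the 8-byte UDP header, as the blog describes."""
    ihl = (ip_packet[0] & 0x0F) * 4              # IP header length in bytes
    if ip_packet[9] != 17:                       # protocol 17 = UDP
        return None
    src_port, dst_port = struct.unpack(">HH", ip_packet[ihl:ihl + 4])
    if dst_port != port:
        return None
    return ip_packet[ihl + 8:]                   # skip the UDP header

def looks_like_jpeg(payload):
    """The extender's video packets carry plain JPEG (JFIF) data,
    which starts with the FFD8 start-of-image marker."""
    return payload[:2] == b"\xff\xd8"

# Synthetic packet: 20-byte IP header + UDP header to port 2068 + JPEG SOI.
ip_hdr = bytes([0x45, 0, 0, 0, 0, 0, 0, 0, 64, 17, 0, 0]) + b"\x00" * 8
udp_hdr = struct.pack(">HHHH", 40000, 2068, 0, 0)
pkt = ip_hdr + udp_hdr + b"\xff\xd8\xff\xe0rest-of-jpeg"
print(looks_like_jpeg(udp_payload(pkt)))   # True
```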

    All this was done while both sender and receiver were plugged into the network.

    I was searching for a solution for live video mixing from cameras with HDMI output. They can be far from the mixing computer, connected via switches and cheap Ethernet technology. With this knowledge I have built a prototype of mixing software in Qt, separating the input video streams by source IP and mixing/switching them. It’s still in a beta phase, but you can see it on my GitHub.
    https://github.com/danielkucera/Mixer-gui

    As requested by readers, today I did a speed/quality test. I took a 1080×1920 JPEG image and let it play on a DVD player.
    The transmitter was streaming 1080p at 18 fps with a 90 Mbps bitrate.

    Together with Tom Warren, we modified the script so that it now plays correctly in VLC via a pipe.

    Reply
  11. Tomi Engdahl says:

    Reverse Engineering an HDMI Extender
    http://hackaday.com/2014/01/25/reverse-engineering-an-hdmi-extender/

    There’s a number of devices out there that extend HDMI over IP. You connect a video source to the transmitter, a display to the receiver, and link the two with a CAT5/5e/6 cable. These cables are much cheaper than HDMI cables, and can run longer distances.

    [Daniel] didn’t care about extending HDMI, instead he wanted a low cost HDMI input for his PC. Capture cards are a bit expensive, so he decided to reverse engineer an IP HDMI extender.

    After connecting a DVD player and TV, he fired up Wireshark and started sniffing the packets. The device was using IP multicast on two ports. One of these ports had a high bitrate, and contained JPEG headers. It looked like the video stream was raw MJPEG data.

    Reverse engineering Lenkeng HDMI over IP extender
    https://danman.eu/blog/reverse-engineering-lenkeng-hdmi-over-ip-extender/

    Reply
  12. Tomi Engdahl says:

    NewsON Brings Your Local News Stations To iOS, Android And Roku
    http://techcrunch.com/2015/11/04/newson-brings-your-local-news-stations-to-ios-android-and-roku/

    A company called NewsON launched today, offering cord cutters and mobile consumers access to local news with an app that works on iOS, Android, and Roku devices. The service, which is financially backed by a cohort of TV stations, claims to offer local news coverage across 75 percent of the U.S., including video content from a total of 118 stations in 90 markets.

    The launch comes at a time when more consumers are dropping their cable and satellite TV subscriptions in favor of streaming services and other over-the-top offerings, like those from Netflix, Hulu, HBO and Amazon, for example. But when you cut the cord, one thing that often goes missing is the extensive news coverage you had previously through pay TV and its many channels.

    Various companies are attempting to solve this problem in diverse ways, whether that’s news brands offering standalone news apps they built in-house; news brands partnering with streaming services, like VICE did with HBO; subscription services like Dish’s Sling TV bringing live cable TV news to customers; startups offering their own video aggregators that work on mobile, like Haystack TV or Watchup; DVR makers that record over-the-air programming captured by antennas, like TiVo; and so on.

    Reply
  13. Tomi Engdahl says:

    It’s a Bird. It’s Another Bird!
    http://www.linuxjournal.com/content/its-bird-its-another-bird?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+linuxjournalcom+%28Linux+Journal+-+The+Original+Magazine+of+the+Linux+Community%29

    So, what does my obsession with bird watching have to do with Linux? Well, obsession demands that either I stare out my window all day and lose my job, or I figure out some way to watch my birds while staring at a computer screen. Enter: BirdCam. I needed a way to stream a live video feed of BirdTopia, without spending any more money. (The “not spend money” part was implied by my wife.)

    Because I don’t share an office with anyone, my camera options didn’t have to be pretty. I considered a USB Webcam, but all the Webcams I have are really low quality. Thankfully, I have a drawer full of old cell phones that have been replaced with newer models.

    I purchased a $5 application called iWebcamera, which turns an iOS device into an IP camera with a built-in Web server.

    Next up was my Galaxy S2 phone with the cracked screen. Obviously the crack didn’t matter, and the camera is much nicer. Also, the Google Play store has an app called IP Webcam that is completely free and completely awesome. The application puts a big-ugly ad on the screen of the phone, but the remotely viewed video has no ads at all.

    Both the iOS app and the Android app have a built-in Web server that allows for direct viewing of the video stream.

    The built-in Web server on the phone is probably sufficient if you just want to watch from one or two computers on your network. For me, however, it wasn’t enough.

    I also wanted to be able to share my BirdCam with the world, but I wanted to serve everything myself, rather than depend on a service like Ustream.

    Although my business Internet connection here in my home office has 5Mbit upload speeds, it turns out that streaming multiple video feeds will saturate that type of bandwidth very quickly. I also had the problem of taxing the embedded Web server on the phone with more than one or two connections.

    I still hadn’t given up on full streaming, so my first attempt at “Global BirdCam” was to re-encode the phone’s video on my Linux server, which would be able to handle far more connections than an old Android handset.

    Thankfully, VLC will run headlessly and happily rebroadcast a video stream. Getting just the right command-line options to stream mjpeg properly proved to be a challenge, but in the end, this long one-liner did the trick.

    Although the VLC solution did work, it didn’t really fit my needs: I couldn’t stream to the Internet due to lack of bandwidth.

    I figured if I took a high-res photo every second, I could get a far better image and also save boatloads of bandwidth. I still wanted a video-like experience, so I concocted a handful of scripts and learned some JavaScript to make a sort of “flipbook video” stream on a regular Web page. This was a two-part process. I had to get constantly updated photos, plus I had to build a Web page to display them properly.
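    The “flipbook video” idea can be sketched as a page whose JavaScript swaps in a freshly fetched still once per second, with a cache-busting query string so the browser actually refetches. The image URL and refresh interval below are assumptions, not from the article; the page is generated from Python here just to keep the sketch self-contained.

```python
def flipbook_page(image_url="/birdcam/latest.jpg", interval_ms=1000):
    """Generate a minimal 'flipbook video' viewer page. A script on the
    server is assumed to keep overwriting the image at image_url."""
    return f"""<!DOCTYPE html>
<html><body>
<img id="cam" src="{image_url}">
<script>
setInterval(function () {{
  // Append a timestamp so the browser refetches instead of caching.
  document.getElementById("cam").src = "{image_url}?t=" + Date.now();
}}, {interval_ms});
</script>
</body></html>"""

page = flipbook_page()
print("setInterval" in page, "Date.now()" in page)   # True True
```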

    Reply
  14. Tomi Engdahl says:

    The invention converts a 2D television image to three dimensions

    Researchers at the Massachusetts Institute of Technology (MIT) and the Qatar Computing Research Institute (QCRI) have developed a way to convert an ordinary two-dimensional television image into a three-dimensional one. The converted video can be viewed on any 3D television screen or with virtual-reality glasses such as the Oculus Rift.

    Three-dimensional movies are usually made by hand afterwards, with various kinds of post-processing steps.

    A sports broadcast, by contrast, needs to create a three-dimensional image instantly and automatically.

    The researchers investigated how Microsoft’s FIFA 13 football game produces a real-time three-dimensional image, and the same algorithm is used to model a real two-dimensional football broadcast in three dimensions. The algorithm searches the game’s footage for frames that match the 2D image of the real football match, then uses the matched frame to form a stereo pair with the original, giving the sports fan an artificial three-dimensional viewing experience. The processing delay is one-third of a second.

    Source: http://www.tivi.fi/Kaikki_uutiset/keksinto-muuntaa-2d-televisiokuvan-kolmiulotteiseksi-6063104

    Reply
  15. Tomi Engdahl says:

    Stephen Pulvirent / Bloomberg Business:
    Lytro debuts its Immerge video camera for virtual reality filmmaking, available in 2016 for between $250K and $500K

    This UFO-Shaped Mega-Camera Might Be the Future of Virtual Reality
    The Lytro Immerge is a next-generation VR camera rig for professionals that will start at $250,000.
    http://www.bloomberg.com/news/articles/2015-11-05/this-ufo-shaped-mega-camera-might-be-the-future-of-virtual-reality

    Virtual reality entertainment is slowly approaching the mainstream. And it’s doing so in what looks like a UFO.

    Lytro first burst on the scene in 2012 pushing a small $150 camera that let you refocus pictures after you took them. Advancing that idea a step, Lytro then released the Illum in 2014, a more robust take on the original Lytro with a zoom lens, bigger sensor, and more advanced features. The core of these cameras is something called light field image sensing. To risk oversimplifying a bit, this basically means the image sensor is capturing the color, intensity, and direction of light beams (most camera sensors omit direction), letting it create 3D representations of whatever it’s receiving.

    The new Immerge rig is built on the same idea but is a far cry from the original consumer Lytro cameras.

    The body is made of a spherical array of high-definition video cameras fitted into rings, then mounted on a three-legged base. It’s meant to simulate a human head, but one that’s looking in all possible directions at once. This means it sees 360 degrees around the center, as well as above and below the unit, still capturing color, intensity, and depth. Lytro calls the new capture method a “light field volume.”

    I haven’t seen any footage taken with the Immerge, but Lytro suggests that the difference between this experience and what’s currently being shot with other 3D VR rigs is dramatic.

    There are some major caveats with the Immerge. The tripod you see here is only part of the actual camera rig. Coming out from between the legs will be a bundle of cables not seen in these renderings. That will have to be plugged into a proprietary server unit that can store an hour of footage before it needs to offload the data to a more permanent storage solution. From there, editors can work with the footage in whatever application they’re accustomed to using (Lytro opted for plug-ins instead of creating its own proprietary editor) and publish in formats that will work with any commercially available VR headset, including Oculus Rift and Samsung Gear VR.

    All this means the Immerge is much better suited to shooting in studio or urban settings, since power sources aren’t readily available for a setup like this in the middle of the desert (or anywhere to hide the server from the camera, for that matter).

    Lytro says they should have working prototypes ready for user testing in early 2016, with final units ready soon after that.

    Reply
  16. Tomi Engdahl says:

    Drone Maker DJI Takes Minority Stake In Iconic Swedish Camera Company Hasselblad
    http://techcrunch.com/2015/11/06/drone-maker-dji-takes-minority-stake-in-iconic-swedish-camera-company-hasselblad/

    On the heels of a $75 million investment from Accel, Chinese drone king DJI is putting some of its funding to use by making some investments of its own. The company this week announced that it would be taking a stake in Hasselblad, a camera maker from Sweden that focuses (!) on high quality equipment, often used in more challenging environments. This is a minority stake and neither company disclosed the amount when contacted by TechCrunch. But it is large enough to give DJI a seat on Hasselblad’s board of directors.

    The companies say that while they will continue to build their businesses independently of each other, the investment and partnership will also mean that they will be working on products together, with collaborations using DJI’s expertise in unmanned flying vehicles, and Hasselblad’s technical imaging know-how, specifically for the professional market.

    “We are honored to be partnering with DJI, the clear technology and market leader in its segment,” said Perry Oosting, Hasselblad’s CEO in a statement.

    DJI making strategic investments in the wider ecosystem of components that (literally and figuratively) make the drone market fly may be a way of building the greater market, but it is also a way of locking in the more promising companies that work in the space into DJI’s own ecosystem in a more direct way, above that of its competitors.

    To be clear, the Canonical partnership also announced this week, which will see a new Ubuntu computer called Manifold embedded on to DJI drones, “was purely a partnership based on technology,”

    DJI Teams Up With Canonical To Launch Embedded Computer For Drones
    http://techcrunch.com/2015/11/02/dji-teams-up-with-canonical-to-launch-embedded-computer-for-drones/#.ojwuxm:MEn8

    Drone manufacturer DJI and Canonical, the corporate entity behind the Ubuntu Linux distribution, today announced the launch of Manifold, a small embedded computer that’s optimized for building applications for drones.

    The Manifold only fits on top of DJI’s Matrice 100 platform, so don’t expect to put this one on your phantom drone. The $3,300 Matrice 100 is essentially DJI’s flying developer platform, with the ability to carry hardware like the Manifold and customizable sensors.

    Inside, the Manifold features a quad-core ARM Cortex A-15 processor and an NVIDIA Kepler-based GPU. The GPU is obviously not there to render graphics, but to make use of its image processing and parallel computing power.

    This, DJI argues, will enable “new artificial intelligence applications such as computer vision and deep learning.” The computer also features standard USB and Ethernet ports for attaching infrared cameras, atmospheric research devices, surveying equipment, and other sensors. There’s also an HDMI port for connecting monitors.

    Reply
  17. Tomi Engdahl says:

    Tim Bradshaw / Financial Times:
    Snapchat says it now has 6B daily video views, up from 4B in September, and 2B in May — Snapchat triples video traffic as it closes the gap with Facebook — Snapchat is closing the gap with Facebook in the social networks’ battle for scale in video. The number of videos viewed …

    Snapchat triples video traffic as it closes the gap with Facebook
    http://www.ft.com/intl/cms/s/0%2Fa48ca1fc-84e7-11e5-8095-ed1a37d1e096.html

    Reply
  18. Tomi Engdahl says:

    Coming set-top box mandate may help break pay TV firms’ hold over viewers
    http://www.latimes.com/business/technology/la-fi-set-top-box-future-20151109-story.html

    That unsightly and costly metal box that funnels cable or satellite service into your TV might be going the way of the black rotary-dial telephone — in the technology trash heap.

    A holdover from the early days of pay television, the set-top box is an energy-inhaling contraption that also sucks money from Americans’ wallets each month.

    About 99% of the nation’s 100 million pay TV subscribers lease a set-top box, with the average household paying $231 a year in rental fees.

    Those costs are one reason a growing number of so-called cord cutters are dropping their conventional pay TV service and now are streaming programming over the Internet directly through smart TVs or via much smaller devices, such as Roku, Chromecast and Apple TV, that they can purchase instead of rent.

    Set-top boxes typically cost less than $10 a month, but the average customer rents about 2.6 set-top boxes to cover multiple TVs.
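    The two figures quoted above are easy to reconcile with a quick back-of-the-envelope calculation, using only the article's own numbers:

```python
# Reconciling the article's figures: $231/year per household in rental
# fees, spread across an average of 2.6 leased set-top boxes.
ANNUAL_FEES = 231.00   # average household rental fees per year
BOXES_PER_HOME = 2.6   # average number of leased boxes per household

per_box_monthly = ANNUAL_FEES / BOXES_PER_HOME / 12
print(f"${per_box_monthly:.2f} per box per month")  # $7.40 -- under $10, as stated
```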

    Still, Time Warner Cable Inc., and some other providers are experimenting with their own customized apps that would enable customers to ditch the set-top box and access their programming on a variety of devices. Pay TV companies are warning against adding new federal mandates as video options rapidly evolve.

    Reply
  19. Tomi Engdahl says:

    Pinterest’s New Tool Lets You Do Searches Using Pictures
    http://www.wired.com/2015/11/pinterests-new-tool-lets-you-do-searches-using-pictures/

    Pinterest is diving into wordless searches.

    The site’s new visual search tool is Pinterest’s latest effort to help its 100 million users discover the things they didn’t even know they liked, leveraging the vast repository of 1 billion “boards” and 50 billion “pinned” images now on the social scrapbooking site. The new tool lets users zoom in on a specific object in an image that could contain multiple elements—say, if a user were looking at a picture-perfect living room showing off a lamp, table and couch—to see more of a specific object that looks visually similar. A user could choose to see more lamps, for instance, with similar colors, shapes, and patterns.

    The company says that it has indexed about a billion images on its social network for the new search engine with the help of the Berkeley Vision and Learning Center, known for its expertise in “deep learning” techniques.

    “Pinterest is about discovery, and visual search is another dimension of discovery,”

    Yes, Google still rules search. But Pinterest has its own edge. Because the very act of pinning is supposed to signal a user’s preference, that makes it a potentially valuable record of consumer desire.
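    The article doesn't describe Pinterest's internals, but the core idea behind this kind of visual search can be sketched in a few lines: represent each image as a feature vector (in a real system produced by a deep network, here just toy numbers) and rank indexed images by cosine similarity to the query. The names and vectors below are made up for illustration.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two feature vectors: 1.0 = identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def most_similar(query_vec, index, top_k=2):
    """Rank indexed images by cosine similarity to the query embedding."""
    ranked = sorted(index.items(),
                    key=lambda item: cosine_similarity(query_vec, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

# Toy 3-D "embeddings"; a production system would use high-dimensional CNN features.
index = {
    "brass_lamp": [0.9, 0.1, 0.2],
    "oak_table":  [0.1, 0.9, 0.3],
    "blue_couch": [0.2, 0.2, 0.9],
}
print(most_similar([0.85, 0.15, 0.25], index))  # a lamp-like query ranks brass_lamp first
```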

    Reply
  20. Tomi Engdahl says:

    Two Turntables and No Microphone
    http://hackaday.com/2015/11/09/two-turntables-and-no-microphone/

    It used to be that you had to spend real money to get an alternative controller for your electronic musical arsenal. These days, with cheap microcontrollers and easily-accessible free software libraries, you can do something awesome for pocket change. But that doesn’t mean that you can’t make a sexy, functional piece of art along the way!

    Wooden sensor box w/ 2 rotary disks
    A homebuilt wooden sensor box i made, mainly for controlling PureData.
    https://hackaday.io/project/8371-wooden-sensor-box-w-2-rotary-disks

    A homebuilt wooden sensor box using different kinds of sensors and the Teensy 3.1 microcontroller.
    For sensing the disk movement, I’m making use of IR LEDs and phototransistors and a technique called “Quadrature Encoding”. I hacked two old hard drives for the motors and the hard drive platters.

    Currently I’m receiving raw data. I’m writing PD patches to find reasonable applications for music. I use Processing for any kind of visual representation. It’s not meant for DJing in the first place, but I am interested in using jog wheels as sensors, for whatever case, I’ll see.
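    The project's actual Teensy code isn't shown, but the quadrature-encoding technique it mentions can be sketched with a standard state-transition table: the two 90°-offset channels (A and B) form a 2-bit Gray code, and each valid transition between successive states adds or subtracts one count.

```python
# Index = (previous_state << 2) | current_state, where state = (A << 1) | B.
# +1 = one step clockwise, -1 = one step counter-clockwise, 0 = no/invalid move.
TRANSITION = [ 0, +1, -1,  0,
              -1,  0,  0, +1,
              +1,  0,  0, -1,
               0, -1, +1,  0]

def decode(samples):
    """Accumulate a position count from a sequence of (A, B) logic-level samples."""
    position = 0
    prev = (samples[0][0] << 1) | samples[0][1]
    for a, b in samples[1:]:
        curr = (a << 1) | b
        position += TRANSITION[(prev << 2) | curr]
        prev = curr
    return position

# One full clockwise Gray-code cycle: 00 -> 01 -> 11 -> 10 -> 00 = 4 counts.
cw_cycle = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]
print(decode(cw_cycle))  # 4
```

    Running the same samples in reverse yields -4, which is how the decoder distinguishes the disk's direction of rotation from just two phototransistor signals.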

    Reply
  21. Tomi Engdahl says:

    The first OLED screen you can see through

    Planar Systems has developed an organic LED panel that needs neither a backlight nor a casing. This makes possible the world’s first OLED displays that you can genuinely see through.

    The Planar display can be used to view videos, pictures and text on a virtually frameless screen. The transparency can be exploited, for example, by showing the panel’s content together with whatever is behind it. With this innovation, Planar creates a whole new category of displays.

    The 55-inch transparent OLED panel is self-emissive and lets 45 per cent of ambient light pass through, so the impression of transparency is genuinely authentic.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=3578:ensimmainen-oled-naytto-jonka-lapi-voi-nahda&catid=13&Itemid=101

    Reply
  22. Tomi Engdahl says:

    China may have just made it harder for its citizens to ever get Spotify or Apple Music
    http://qz.com/545395/china-may-have-just-made-it-harder-for-its-citizens-to-ever-get-spotify-or-apple-music/

    China doesn’t think its online censorship laws are strict enough. So government officials, in an effort to continue purging potentially subversive content from the country, are now turning their attention toward an unexpected target—music.

    Starting next year, companies that offer online music in China will be ordered to filter their libraries for “harmful” content before making any music available to the public, Reuters and several other news agencies reported Monday (Nov. 9). China’s Ministry of Culture announced the rule on its website.

    Three of the biggest web service sites in the country—Alibaba, Baidu, and Tencent—offer music streaming services and will more than likely be subject to the change. These companies already have to censor their web content, and many of them employ large teams of people to find and erase sensitive online material. Under the new rules, the size of those teams will probably have to increase.

    China looking to scrub its Internet of offensive songs
    http://www.cnet.com/news/china-reaches-into-online-music-to-squelch-offensive-songs/#ftag=CADf328eec

    The People’s Republic is taking aim on music, ordering all music streaming companies to examine songs before they’re released to the public to ensure they’re not violent, overly sexual or inappropriate.

    China’s Internet is one of the most censored in the world, and now could face even fiercer censorship thanks to new regulations set to come into effect next January.

    Under the new rules, announced on Monday, companies that provide or host music will need to examine what’s being made available before it’s posted to ensure it’s appropriate for public consumption, as per China’s Ministry of Culture.

    The result could be particularly unfortunate for Chinese fans of hip hop, with authorities having blacklisted dozens of rap songs in August, claiming they promote violence and obscenity.

    The tightening control over online music is the latest attempt by the Chinese government to keep the Internet clean of offensive, pornographic and culturally inappropriate content. The People’s Republic is already well known for having a heavily censored Internet, with sites like Facebook, Google and Twitter being blocked behind what’s referred to as The Great Firewall of China.

    Even with strong censorship of content, Chinese companies have been aggressive in offering streaming entertainment, with the country having a massive 480 million Internet users who listen to online music, according to the China Internet Network Information Center.

    Reply
  23. Tomi Engdahl says:

    20th Century Fox’s Danny Kaye talks mobile entertainment
    http://www.edn.com/electronics-blogs/catching-waves/4440746/20th-Century-Fox-s-Danny-Kaye-talks-mobile-entertainment-?_mc=NL_EDN_EDT_EDN_today_20151110&cid=NL_EDN_EDT_EDN_today_20151110&elq=a71fb7cf88404f23aaa6987b8b9e3ee8&elqCampaignId=25640&elqaid=29181&elqat=1&elqTrackId=5bf2e54e7ff443979be5c27ff781eb5c

    EDN: I’m sure there are business challenges, but what are the biggest technical challenges that remain to offer fuller media experiences on mobile or automotive devices?
    Kaye: There are several. First, we have to give the consumer both quality and convenience in the same content and devices. Then, we have to deliver next-gen content and experiences to devices over networks that are challenged by traffic and bandwidth constraints. And finally, we have to provide seamless integration between the local device and cloud storage.

    EDN: How might the movie industry change as a result of the latest mobile and virtual reality technologies?
    Kaye: Movies may be produced with virtual reality in mind, and virtual reality productions might be integrated directly into film production.

    Reply
  24. Tomi Engdahl says:

    Dawn Chmielewski / Re/code:
    T-Mobile unveils Binge On free streaming service from 24 providers excluding YouTube; exempts video from data caps, resolution lowered to DVD-like quality — T-Mobile Will Let Customers Stream HBO, Netflix and ESPN Without Racking Up Data Charges — T-Mobile will allow some subscribers …

    T-Mobile Will Let Customers Stream HBO, Netflix and ESPN Without Racking Up Data Charges
    http://recode.net/2015/11/10/t-mobile-announces-free-video-streaming/

    T-Mobile will allow some subscribers to stream video from 24 popular services without burning through their data caps.

    The nation’s third-largest wireless carrier is looking to gain competitive advantage over rivals Sprint, AT&T and Verizon by giving its customers the ability to stream videos on their smartphones and tablets without generating data charges. Subscribers can choose among popular streaming services including Netflix, HBO Now, HBO Go, Watch ESPN, Fox Sports and Hulu.

    Notable omissions from the list include YouTube, the world’s biggest video site, and Facebook and Snapchat, both of which have made big pushes into video in the last year.

    “Video streams free,” T-Mobile CEO John Legere said Tuesday. “Binge on. Start watching your shows, stop watching your data.” Legere’s offer applies to customers who pay for at least three gigabytes of data a month.

    Reply
  25. Tomi Engdahl says:

    Google Chromecast 2015: Puck-on-a-string fun … why not, for £30?
    Streamer, you know you are a streamer
    http://www.theregister.co.uk/2015/10/06/review_google_chromecast_2015/

    OK, so we all know what Google’s Chromecast is, yes? Someone at the back – why are they always at the back – seems unsure. In a sentence, then, Chromecast is a small Wi-Fi-connected slug that you slip into a spare HDMI port on your TV, and which plays video and audio under the direction of a remote control app.

    Google introduced Chromecast a couple of years ago, but it took a further nine months to get here. There’s no such latency with the latest version: unveiled less than a week ago, it’s already on sale in the UK.

    The update centres on improved Wi-Fi and a new physical design, but that’s about it. Changes have been made to the software, but they’re being rolled out to the first-generation Chromecast too.

    If your telly lacks a USB port or it isn’t up to snuff, Google includes an external PSU to power the Chromecast. As before, the new Chromecast needs an external power source, either a bundled 5V/1A wall wart or a correct-spec USB port on the TV itself.

    How well a TV’s USB port will work will also depend on whether the TV keeps it powered while in standby. Mine does, so thanks to Chromecast’s support for HDMI-CEC, I can just fire up BBC iPlayer, tap the Chromecast icon and my TV will turn on and switch input automatically. If everything is turned off physically, Chromecast takes about 20 seconds to boot up before an app can detect it.

    Reply
  26. Tomi Engdahl says:

    Video With Sensor Data Overlay Via Arduino Mega
    http://hackaday.com/2015/11/11/video-with-sensor-data-overlay-via-arduino-mega/

    If you haven’t been paying attention, big wheel trikes are a thing. There are motor-driven versions as well as OG pedal-pushing types. [Flux Axium] is of the OG (you only get one link, now it’s on you) flavor and has written an instructable that shows how to achieve some nice-looking on-screen data that he syncs up with the video for a professional-looking finished product, which you can see in the video after the break.

    [Flux Axium] is using an Arduino Mega in his setup along with a cornucopia of sensors and all their data is being logged onto an SD card. All the code used in his setup is available in his GitHub repository.

    Sadly [Flux Axium] uses freedom hating software for combining the video and data, Race Render 3 is his current solution and he is pleased with the results.

    Build a ‘blackbox’ datalogger for adding on screen display gauges to your videos by fluxaxiom
    http://www.instructables.com/id/Build-a-blackbox-datalogger-for-adding-on-screen-d/

    Arduino Mega based datalogger for use in video on screen display
    https://github.com/fluxaxiom/Stat_Cache

    Create Amazing Videos with RaceRender 3!
    http://racerender.com/RR3/Features.html

    Powerful Features Made Easy – Quickly create amazing videos with custom data overlays, GPS telemetry, multiple camera picture-in-picture, logo overlays, and more. Impress your fans with high-tech video of you in action!

    Your Video + Your Data – Use the cameras and data equipment that you already have! Works with GoPro, Sony ActionCam, Garmin VIRB, Contour, and many other cameras. Visualizes data from a huge selection of GPS devices, data loggers, and apps.

    Available for Microsoft Windows® and Apple Mac OS X®
    Try RaceRender Today for Free!

    Reply
  27. Tomi Engdahl says:

    Facebook conjures up a trap for the unwary: scanning your camera for your friends
    Auto-spam your friends with Photo Magic
    http://www.theregister.co.uk/2015/11/10/facebook_scans_camera_for_your_friends/

    Facebook has decided it doesn’t pester its users enough, so it’s going to use its facial recognition technology as the basis of a new nag-screen.

    The ad network is testing a feature in its Android app that will scan a user’s recent images for photos that look like their friends. If it spots a match, it’ll ask if the photos should be shared with other people in them.

    The feature is being tested on Australian users first, with iOS to arrive by the end of the week, and if they don’t grab pitchforks and torches, The Social Network™ promises to take it to the US soon.

    The pic-scanning isn’t restricted to photos you’ve already uploaded to Facebook – the app scans your phone’s photo collection for new images, and will raise a dialogue asking if you want to post it to your friends.

    There will be an opt-out, just in case you don’t want a careless selfie in flagrante delicto with a partner’s friend turning up on your feed because the evening involved a lot of booze and not much good sense, and Facebook users who can navigate the baroque maze of its privacy settings can already opt out of having their faces detected in other users’ photos.

    Reply
  28. Tomi Engdahl says:

    Oculus Originator Gives Away MEMS Version
    Kickstarter campaign co-founder claims to have better device
    http://www.eetimes.com/document.asp?doc_id=1328210&

    After making millions from co-founding the Oculus Virtual Reality (VR) Kickstarter campaign (which Facebook bought for $2.4 billion), Jack McCauley is now giving away the license to a better VR headset that beats the top two, Oculus and Valve, by using a microelectromechanical system (MEMS) laser “head” finder.

    “If you are a MEMS manufacturer and you are not involved in VR, then you should be,” McCauley said to the capacity crowd at the MEMS Executive Congress (MEC). McCauley was already a millionaire from inventing Guitar Hero and co-owning a factory in China before the Oculus sale.

    “I’ve already given away all my money from the Facebook buyout to my favorite charity and now I’m inventing a VR headset that will be light years ahead of them all and that you can license for free–if you agree to donate some of your profits to my favorite charity,” McCauley told EE Times in an exclusive interview.

    McCauley, now president of McCauley Labs (Livermore, Calif.), and his staff of five are busy re-inventing VR, using MEMS inertial sensors to detect head orientation and adding MEMS micro-mirrors to create a super-small infrared “lighthouse” that frees the user to walk around experiencing virtual realities as high-resolution as the real world, with no trace of the “simulator sickness” that the Facebook Oculus models can cause, McCauley told EE Times.

    Today Valve Corp. (Bellevue, Wash.) has a VR headset that is “10 generations ahead of anything available today,” according to McCauley, achieved by replacing the camera Oculus uses with a rotating, motor-driven line-laser for more accurate head tracking.

    Reply
  29. Tomi Engdahl says:

    Apple Music Comes To Android As An Emissary
    http://techcrunch.com/2015/11/10/apple-music-comes-to-android/#.b5imzi:IzCD

    Today, Apple Music comes to Android phones. It’s the first user-centric app that Apple has created for Android (though not its first Android app).

    As people download and dissect it, they’ll doubtless be looking at how Apple builds on Android, what features are ported over from iOS and what Apple’s pan-operating-system Music philosophy looks like in the mobile age. In advance of the launch, I spoke with Apple’s SVP of Internet Software and Services, Eddy Cue, about exactly those things.

    “We’ve obviously been really excited about the response we’ve gotten to Apple Music. People love the human curation aspects of it, discovery, radio,”

    “So if you’ve got another device with Apple Music and you’ve got your whole music library in the cloud you can access it from Android,” says Cue. “If you haven’t, but you’ve purchased music from iTunes in the past, if you use the same Apple ID when you join on Android it’ll read all the music you’ve purchased.”

    Apple Music is a beta on Android, which means it’s missing a couple of features.

    Reply
  30. Tomi Engdahl says:

    TV Networks Cutting Back On Commercials
    http://entertainment.slashdot.org/story/15/11/11/1624211/tv-networks-cutting-back-on-commercials?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Slashdot%2Fslashdot%2Fto+%28%28Title%29Slashdot+%28rdf%29%29

    Cable providers aren’t the only ones feeling pressure from cord cutters. The TV networks themselves are losing viewers the same way. A lot of those viewers are going to Netflix and other streaming services, which are often ad-free, or have ad-free options. Now, in an effort to win back that audience (and hang on to the ones who are still around), networks are beginning to cut back on commercial time during their shows.

    Fox says the shorter ads, which require viewers to engage with them online, are more effective because they guarantee the audience’s full attention.

    Why TV Networks Are Cutting Back on Commercials
    http://www.bloomberg.com/news/articles/2015-11-10/tv-networks-cut-commercial-time-as-viewers-become-numb-to-ads

    Hate commercials? Good news: You may see fewer of them.

    Media companies, including Time Warner Inc., 21st Century Fox Inc. and Viacom Inc., have started cutting back on commercials after years of squeezing in as many ads as possible.

    The new strategy is an attempt to appeal to younger viewers, who are more accustomed to watching shows ad-free on online streaming services like Netflix Inc., and to advertisers concerned their messages are being ignored amid all the commercial clutter.

    Time Warner’s truTV will cut its ad load in half for prime-time original shows starting late next year, Chief Executive Officer Jeff Bewkes said last week on an earnings call. Viacom has recently slashed commercial minutes at its networks, which include Comedy Central and MTV. Earlier this month, Fox said it will offer viewers of its shows on Hulu the option to watch a 30-second interactive ad instead of a typical 2 1/2-minute commercial break. Fox says the shorter ads, which require viewers to engage with them online, are more effective because they guarantee the audience’s full attention.

    ‘Training’ Consumers

    “We know one of the benefits of an ecosystem like Netflix is its lack of advertising,” Howard Shimmel, chief research officer at Time Warner’s Turner Broadcasting, said in an interview. “Consumers are being trained there are places they can go to avoid ads.”

    By airing fewer ads, the theory goes, the remaining ones become more memorable, and thus more valuable to advertisers, letting programmers charge higher rates. Yet, by shrinking the commercials, they run the risk of leaving money on the table at a time when the industry is under growing pressure from consumers ditching pay-TV services for cheaper online alternatives.

    “The customer can avoid a whole number of other ads, so we’ll charge a higher price,” Fox CEO James Murdoch said at a conference in September.

    Reply
  31. Tomi Engdahl says:

    The headphone market has grown exponentially in the last few years. At the same time, the question of where to keep the headphones has emerged, and for some time now trade fairs have presented a variety of imaginative headphone stands.

    Source: http://www.hifimaailma.fi/uutiset/kuuloketeline-suomalaisittain/

    Reply
  32. Tomi Engdahl says:

    Low Power Full HD Transcoder
    http://www.eeweb.com/company-news/socionext/low-power-full-hd-transcoder/

    The MB86H57 and MB86H58 are full HD transcoders that can convert between MPEG-2 and H.264 video data. They use Fujitsu Semiconductor’s proprietary transcode technology, which can also transcode between different audio formats while featuring low power consumption. The devices also support abundant interfaces for improved connectivity.

    These new ICs are targeted to support the growing number of electronic equipment that can record digital broadcasts. By employing Fujitsu’s proprietary transcode technology, Fujitsu Semiconductor realized industry-leading low power consumption of only 1.0 watt (W), including the in-package memory.

    H.264/MPEG-2 bi-directional transcoder functionality
    In addition to transcoding from MPEG-2 to H.264, as did the previous MB86H52 product, these new ICs also transcode from H.264 to SD MPEG-2, thus being able to handle the various formats likely encountered during use. Furthermore, with audio transcoding included, the high quality audio needs of discerning customers can be met.

    Reply
  33. Tomi Engdahl says:

    MOST® Linux Driver for Network Interface Controllers
    http://www.eeweb.com/company-news/microchip/most-linux-driver-for-network-inteface-controllers/

    Microchip Technology Inc. announced that the MOST® Linux Driver supporting Microchip’s MOST network interface controllers has been incorporated into the staging section of the Linux Mainline Kernel 4.3 operating system. The increasing demand for reliable and simple solutions to support audio, video and data communications in cars is driving the trend toward using Linux and open-source software in combination with MOST technology, the de-facto standard for high-bandwidth automotive multimedia networking.

    “We appreciate the Linux Foundation’s support in making the MOST Linux Driver part of the Linux Mainline Kernel 4.3,” said Dan Termer, vice president of Microchip’s Automotive Information Systems Division. “Incorporating this driver will make it easy for designers who are innovating the cars of the future to combine MOST technology with Linux, thereby significantly shortening time-to-market and reducing development costs.”

    MOST technology is a time-division-multiplexing (TDM) network that transports different data types on separate channels at low latency and high quality-of-service. The Linux Driver allows for the transport of audio data over the MOST network’s synchronous channel, which can be seamlessly delivered by the Advanced Linux Sound Architecture (ALSA) subsystem, providing system designers the ability to easily transmit audio over MOST technology using a standard soundcard. Additionally, this driver enables the transport of video data with guaranteed bandwidth, by using the MOST network’s isochronous channel and the Video for Linux 2 (V4L2) interface. This feature results in the ability to seamlessly connect standard multimedia frameworks and players over the Linux Driver to a MOST network.

    MOST® Linux Driver for Microchip’s MOST Network Interface
    Controllers Added to Open Source Linux Mainline Kernel 4.3
    http://www.microchip.com/pagehandler/en-us/press-release/most-linux-driver-for-microchi.html?utm_source=eeweb&utm_medium=tech_community&utm_term=news&utm_content=microchip&utm_campaign=source

    Allows Designers to Quickly and Easily Tap Into the Growing Adoption of Linux-Based
    Devices, Using the MOST Specification for High-Bandwidth Automotive Infotainment Applications

    Reply
  34. Tomi Engdahl says:

    Welcome to the security party, pal
    http://www.edn.com/electronics-blogs/now-hear-this/4440818/Welcome-to-the-security-party–pal?_mc=NL_EDN_EDT_EDN_today_20151112&cid=NL_EDN_EDT_EDN_today_20151112&elq=800f5e14c6c94569a99ed99dbf80d1a8&elqCampaignId=25702&elqaid=29251&elqat=1&elqTrackId=5dabf1d531f9424da0891cfb17caa0dd

    Security, the still-forming all-important lynch pin to IoT’s future, is also all-important in most other aspects of tech. That includes media and content viewing or distribution technologies, as illustrated in the ARM TechCon keynote presentation by Danny Kaye, executive vice president, global research and technology strategy for Twentieth Century Fox Home Entertainment, which produced the Die Hard series, among many other movies.

    Kaye started his presentation with a series of Fox movie clips, including one from Die Hard that focused on the Fox headquarters building, or Nakatomi Plaza as it is known in the movie. He then outlined VIDITY, the consumer brand created by the Secure Content Storage Association that aims to deliver high-quality video playback across a range of options spanning 4K Ultra HD home theater to smartphones with no Internet connectivity needed once downloaded.

    Part of VIDITY’s potential is shared content — not only across devices but shared from person to person, not necessarily only viewable on the buyer’s devices. “All of this is made possible only by next-generation security,” Kaye said. “We don’t wish to continue distributing our best content only for it to be pirated.”

    ARM’s TrustZone will provide the security needed, said Kaye, to allow Fox to provide its best premium content in such an open and shared way. ARM, one of many partners in work for VIDITY, has already established TrustZone in billions of chips and this week announced TrustZone for ARMv8-M, extending the technology to microcontrollers.

    “It’s TrustZone that’s used to protect the key derivation process, decryption of protected content, as well as enabling the implantation of a secure video path when we are processing the content,” Kaye said, adding that other protections can be added through TrustZone.

    “In short, we are really pleased and excited that ARM has deployed a solution that is a widely deployed one on multiple platforms supporting multiple DRMs [digital rights managements].”

    Reply
  35. Tomi Engdahl says:

    Nokia draws Slush crowds to view its virtual reality camera

    Nokia’s Ozo camera is a big hit with visitors at the Slush event, now in its second day at the Helsinki Fair Centre.

    The Ozo is a so-called virtual reality camera: its eight video cameras shoot in all directions at the same time. The images produced by the individual cameras are stitched together and then presented to the viewer.

    The video can be watched on commercial virtual reality glasses from all the major manufacturers.
    The viewer can, for example, turn his head to look backwards at any time.

    Nokia has aimed the camera primarily at film directors.

    Based on a quick test, the experience offered by the camera is impressive, although in parts of the demo the seams where the images from the individual cameras are joined together were visible.

    Source: http://www.tivi.fi/Kaikki_uutiset/nokia-on-slushin-yleisomagneetti-takyna-ei-kuitenkaan-toimi-puhelin-6064733

    Reply
  36. Tomi Engdahl says:

    Welcome to the Future — Dynamic Adverts
    http://www.eetimes.com/author.asp?section_id=216&doc_id=1328250&

    It may be that — in the not-so-distant future — everyone watching a TV program in the comfort of their own homes sees different advertising objects seamlessly embedded in the scenes.

    Just when you thought you’d seen it all, something new comes along that makes your mind go all wobbly around the edges. And what has put me in this contemplative mood? Well…

    …as a starting point, let’s take American football games as presented on television. In the not-so-distant past, all we could hope to see on the screen was the field, the players and referees, and the ball. By comparison, these days we are also presented with an imaginary yellow line superimposed on the image reflecting the targeted first down, and this is accompanied by an imaginary blue line to mark the line of scrimmage.

    This is actually really clever — the way they arrange it so that these lines seem to be painted on the field and they don’t appear in front of any of the players and suchlike takes an incredible amount of technology and computing power.

    Or you might access an on-demand version of the program stripped of any adverts. As you can imagine, this doesn’t exactly cheer the advertising agencies up at all.

    One way around this from the advertising point of view is to move the adverts into the television show or movie itself. This form of advertising, known as product placement, can be quite subtle

    The reason I’m waffling on about this here is that I’m currently visiting the UK and I’ve just seen the most amazing demonstration of next-generation advertising technology. This is similar to the yellow and blue lines being superimposed on an American football game on TV, except that it involves integrating products into TV shows and movies.

    Take the examples I presented earlier — now the director can add computer-generated items into the fridge, or computer-generated posters on a wall, or computer-generated cars at the side of a street, and so forth. Why is this better than simply using real objects for product placement? Well, the thing is that you can change the objects depending on the target market.

    Suppose someone is holding a soft drink can in their hands, for example. Now, the actual soft drink seen by the viewer can vary depending on the country in which the media is being presented.

    I can envisage a not-too-distant future in which everyone who is watching a program on television in the comfort of their own homes sees different advertising objects seamlessly interwoven into the scene.

    Reply
  37. Tomi Engdahl says:

    Josh Constine / TechCrunch:
    Facebook brings 360-degree videos to iOS, opens up format to brands with first ads from AT&T, Samsung, Corona, Walt Disney World, others — Facebook Unleashes VR-Style 360 Videos For Ads And iOS — Zuck says VR is the future and Facebook is wasting no time building it into the News Feed and starting to make money off it.

    Facebook Unleashes VR-Style 360 Videos For Ads And iOS
    http://techcrunch.com/2015/11/12/vmarketing/

    Zuck says VR is the future and Facebook is wasting no time building it into the News Feed and starting to make money off it. By embracing the format, Facebook can stay fresh for consumers by offering the most vivid way to connect with places you can’t go. Meanwhile, attracting organic VR videos will provide cover so it can slip VmaRketing into the feed.

    Facebook brought 360 video viewing support to web and Android in September, and you can now watch these VR-style videos on iOS. Facebook is also opening up the format to advertisers, the first “immersive stories” coming from brands including AT&T, Corona, Nescafe, Ritz Crackers, Samsung and Walt Disney World.

    These videos can be watched by tap-and-dragging around the screen, or for a true virtual reality experience, starting today on the Samsung Gear VR. Meanwhile, to get more people sharing 360 content to the News Feed, Facebook is working with 360 camera makers like Theta, Giroptic and IC Real Tech to add “publish to Facebook” buttons to their apps. That means there’s no need to fiddle with uploading the 360 video to your computer first.
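    For a sense of how the tap-and-drag viewing works: a 360 player typically tracks a yaw/pitch view direction and samples an equirectangular frame accordingly. A rough sketch of both steps (the drag sensitivity `deg_per_px` is an arbitrary assumption, not Facebook's actual value):

    ```python
    import math

    def drag_to_view(yaw_deg, pitch_deg, dx_px, dy_px, deg_per_px=0.25):
        """Update the view direction from a tap-and-drag gesture.
        Yaw wraps around 360 degrees; pitch is clamped at the poles."""
        yaw = (yaw_deg - dx_px * deg_per_px) % 360.0
        pitch = max(-90.0, min(90.0, pitch_deg + dy_px * deg_per_px))
        return yaw, pitch

    def view_to_equirect(yaw_deg, pitch_deg, width, height):
        """Map a view direction to the pixel it faces in an equirectangular frame."""
        x = int((yaw_deg / 360.0) * width) % width
        y = int(((90.0 - pitch_deg) / 180.0) * height)
        return x, min(y, height - 1)
    ```

    A headset viewer replaces `drag_to_view` with orientation data from the phone's gyroscope, but the frame sampling is the same.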

    Finally, to stoke 360 content creation, Facebook launched a microsite dedicated to providing filmmakers best practices, upload guidelines, and FAQs.

    360 Video
    A stunning way for content creators to share immersive stories, places, and experiences with their fans.
    https://360video.fb.com/

    Reply
  38. Tomi Engdahl says:

    Quantum-Dot Image Sensor Launch Threatens Silicon
    http://www.eetimes.com/document.asp?doc_id=1328254&

    After nine years of work and having raised more than $100 million in venture capital, InVisage has launched a 13-megapixel sensor that uses a quantum-dot film rather than silicon to capture images.

    The company is shooting for the smartphone market to begin with, but claims it beats CMOS image sensors on many specifications, and therefore in most if not all applications.

    InVisage Technologies Inc. (Menlo Park, Calif.) was founded in 2006 to develop a light-sensitive material to replace silicon. On Nov. 11 in Beijing, the company launched the world’s first electronic image sensor that uses quantum-dot material rather than silicon to capture light.

    If quantum-dot based light sensing is superior to that of silicon photodiodes—to the degree that InVisage claims—this could represent the beginning of the end for CMOS image sensors, which have built up a considerable market on the strength of camera sales into mobile phones. InVisage claims that its Quantum13 sensor will drive “silicon image sensors into obsolescence.”

    The quantum-dot material is a II-VI metal-chalcogenide type broadband light absorber, Lee said.

    Only about 0.5-micron depth of QuantumFilm is required compared with 2 or 3 microns depth of silicon photodiode, Lee said.

    As a result the Quantum13 embodies a number of advantages over silicon. The 13Mpixel, 1.1-micron-pixel sensor fits in an 8.5mm by 8.5mm module. Light absorption takes place eight times faster than in silicon, allowing for the use of a global electronic shutter. Using 0.5-micron thin films—rather than the high aspect ratio wells used for silicon photodiodes—allows much higher incident angles of light, resulting in a camera module height of 4mm.

    Quantum13 also has a single-shot high dynamic range (HDR) mode called QuantumCinema that provides up to three additional stops of dynamic range compared to conventional CMOS image sensors, according to InVisage.
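    As a sanity check on that dynamic-range claim: each photographic stop doubles the linear signal range, which is about 6.02 dB, so three extra stops correspond to roughly 18 dB of additional range:

    ```python
    import math

    def stops_to_db(stops):
        """Each stop doubles the linear range; in dB that is 20*log10(2) ~ 6.02 dB per stop."""
        return 20.0 * math.log10(2.0 ** stops)
    ```

    So `stops_to_db(3)` gives about 18.06 dB; how much that matters in practice depends on the baseline sensor InVisage is comparing against, which the announcement does not specify.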

    “The launch of Quantum13 marks a new era for the smartphone camera industry,” said Jess Lee, CEO of InVisage, in a statement. “For the first time, smartphones will capture images on an entirely new medium. Not silicon, not film, QuantumFilm.”

    The CMOS image sensor market is said to be worth about $10 billion in 2015 and is expected to grow at a compound annual growth rate (CAGR) of 10.6 percent from 2014 to 2020, according to market research firm Yole Developpement.

    The 13-megapixel camera sensor market is expected to increase from 408 million units in 2015 to 995 million units in 2020, according to Techno Systems Research.

    Reply
  39. Tomi Engdahl says:

    Apple to shut down Beats Music on Nov. 30
    All subscriptions will be canceled on that day
    http://www.pcworld.com/article/3005041/apple-to-shut-down-beats-music-on-nov-30.html

    Apple is pulling the plug on Beats Music on Nov. 30, shortly after launching the beta of an Android app for its Apple Music streaming service.

    The company is now encouraging both Android and iOS users of Beats Music to transition to the Apple Music streaming service, which was launched by the company in June.

    Reply
  40. Tomi Engdahl says:

    Claire Wardle / Tow Center for Digital Journalism:
    Frontline virtual reality case study: narrative remains central, production quality requires long-turnaround time, various journalistic uses should be explored — New Report: Virtual Reality Journalism — After decades of research and development, virtual reality appears to be on the cusp of mainstream adoption.

    New Report: Virtual Reality Journalism
    http://towcenter.org/new-report-virtual-reality-journalism/

    After decades of research and development, virtual reality appears to be on the cusp of mainstream adoption. For journalists, the combination of immersive video capture and dissemination via mobile VR players is particularly exciting. It promises to bring audiences closer to a story than any previous platform.

    Two technological advances have enabled this opportunity: cameras that can record a scene in 360-degree, stereoscopic video and a new generation of headsets. This new phase of VR places the medium squarely into the tradition of documentary—a path defined by the emergence of still photography and advanced by better picture quality, color, film, and higher-definition video. Each of these innovations allowed audiences to more richly experience the lives of others. The authors of this report wish to explore whether virtual reality can take us farther still.

    To answer this question, we assembled a team of VR experts, documentary journalists, and media scholars to conduct research-based experimentation.

    Finally, we make the following recommendations for journalists seeking to work in virtual reality:

    Journalists must choose a place on the spectrum of VR technology. Given current technology constraints, a piece of VR journalism can be of amazing quality, but with that comes the need for a team with extensive expertise and an expectation of a long turnaround—demands that require a large budget, as well as timeline flexibility.

    Draw on narrative technique. Journalists making VR pieces should expect that storytelling techniques will remain powerful in this medium. The temptation when faced with a new medium, especially a highly technical one, is to concentrate on mastering the technology—often at the expense of conveying a compelling story.

    The whole production team needs to understand the form, and what raw material the finished work will need, before production starts.

    More research, development, and theoretical work are necessary, specifically around how best to conceive of the roles of journalists and users—and how to communicate that relationship to users. Virtual reality allows the user to feel present in the scene. Although that is a constructed experience, it is not yet clear how journalists should portray the relationship between themselves, the user, and the subjects of their work.

    Journalists should aim to use production equipment that simplifies the workflow. Simpler equipment is likely to reduce production and post-production efforts, bringing down costs and widening the swath for the number of people who can produce VR. This will often include tradeoffs: In some cases simpler equipment will have reduced capability, for example cameras which shoot basic 360-degree video instead of 360-degree, stereoscopic video.

    As VR production, authoring, and distribution technology is developed, the journalism industry must understand and articulate its requirements, and be prepared to act should it appear those needs aren’t being met. The virtual reality industry is quickly developing new technology, which is likely to rapidly reduce costs, give authors new capabilities, and reach users in new ways.

    The industry should explore (and share knowledge about) many different journalistic applications of VR, beyond highly produced documentaries.

    Choose teams that can work collaboratively. This is a complex medium, with few standards or shared assumptions about how to produce good work. In its current environment, most projects will involve a number of people with disparate backgrounds who need to share knowledge, exchange ideas, make missteps and correct them. Without good communication and collaboration abilities, that will be difficult.

    Reply
  41. Tomi Engdahl says:

    Home> Analog Design Center > How To Article
    12 Gbps SDI cable performance estimation
    http://www.edn.com/design/analog/4440765/12-Gbps-SDI-cable-performance-estimation?_mc=NL_EDN_EDT_EDN_analog_20151105&cid=NL_EDN_EDT_EDN_analog_20151105&elq=57cfa83056d547389a0000b5a15ab187&elqCampaignId=25579&elqaid=29104&elqat=1&elqTrackId=4f9ab201c84f427dbe69cb06665137c9

    Transporting video throughout a broadcast studio or live sporting venue is a difficult proposition. Complicating matters is the market shift towards higher bandwidth required for resolutions such as 4K ultra-high definition (UHD). Moving UHD video via traditional serial digital interface (SDI) at 12 Gbps while reusing an existing coax infrastructure imposes additional challenges.

    Modern-day electronic cable equalizers solve some of these challenges, though performance varies markedly with the choice of coax cable. Examining cable loss characteristics enables a comparative analysis across multiple cable types with a given equalizer.

    Video is traditionally transported throughout a broadcast studio using 75Ω coax cable. The equipment’s physical location within the studio may demand cable runs that exceed 30-50m, thus requiring additional electronics to condition the signal at the receive end. Cable equalizers greatly extend the usable working length between two pieces of video equipment.

    A front-page data sheet specification for cable equalizers is maximum cable length. This spec typically assumes ideal conditions with no additional loss due to crosstalk, additional connectors in the signal path, or long lengths of FR4 between the receiver’s BNC connector and the cable equalizer. Not all cables are created equal however, and cable choice has a pronounced effect on the maximum working cable length of a system.

    The most common cable types in the industry are Belden 1694A and 1855A. Belden 1855A is common for rack-to-rack connections, while 1694A is used almost everywhere else. In addition to these examples, boutique cables tailored for very low attenuation can dramatically impact the maximum working cable length; Belden 1794A and Canare L-8CHD are two examples. While they are technically superior cables, cost and availability preclude them from nearly all installations. Low-loss cables are sometimes physically thicker and pose unique challenges when installations require bending cables around tight corners.

    While there is no substitute for bench testing, the calculations performed here are a good starting point to obtain approximate comparative reach across numerous cable types with a given cable equalizer.
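    As a rough illustration of such an estimate: at these data rates coax loss is dominated by skin effect, so attenuation in dB scales approximately with the square root of frequency, and usable reach follows from the equalizer's loss budget. The reference loss figure and equalizer budget below are illustrative placeholders, not datasheet values; always check the actual cable datasheet and equalizer spec:

    ```python
    import math

    def scale_attenuation(loss_db_ref, f_ref_mhz, f_mhz):
        """Skin-effect-dominated coax attenuation scales roughly with sqrt(frequency)."""
        return loss_db_ref * math.sqrt(f_mhz / f_ref_mhz)

    def max_reach_m(eq_budget_db, loss_db_per_100m):
        """Cable length an equalizer can recover, given its loss budget in dB."""
        return 100.0 * eq_budget_db / loss_db_per_100m
    ```

    For example, a hypothetical cable losing 20 dB/100 m at 2.97 GHz would lose about 28 dB/100 m at 5.94 GHz (the Nyquist frequency of a 12 Gbps signal), so a 40 dB equalizer budget would reach on the order of 140 m under ideal conditions.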

    Reply
  42. Tomi Engdahl says:

    Stop Calling Google Cardboard’s 360-Degree Videos ‘VR’
    http://www.wired.com/2015/11/360-video-isnt-virtual-reality/

    I love seeing people get excited about their first taste of VR. The sooner more people experience the transformative power of VR, the better. But if the high-powered, desktop headsets that are coming next year are the main course for virtual reality, then viewing 360-degree video using Google Cardboard is an amuse-bouche at best. It’s a decent first taste, but 360 video is as far from real VR as seeing the Grand Canyon through a Viewmaster is from standing at the edge of the canyon’s South Rim.

    With technology as potentially polarizing as VR, I worry that the slightest hiccup will have a negative impact on people’s perception—and adoption—of that tech.

    The Golden Rule of VR

    At the lowest level, VR uses an array of sensors to precisely track the movement of your head. The computer then perfectly maps your head’s real-world movement onto your view of a virtual world. If you turn your head to the left in the real world, the computer exactly mimics your movement in the rendered world. When executed perfectly, VR tricks your brain into thinking that what you see is real, on both a conscious and subconscious level.

    It sounds simple, but perfecting the execution has proved difficult. Most people are highly sensitive to the slightest dissonance between the movement detected by their inner ear and the motion that they see with their eyes. The human brain is sensitive below the level of conscious perception. If your VR game or application consistently shows frames of animation that are off by a few milliseconds, many people will feel ill effects.
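    The scale of the problem is easy to put in numbers: the angular error you perceive is roughly your head's angular velocity times the motion-to-photon latency. With an assumed head turn of 200 degrees per second and 20 ms of latency, the image is already 4 degrees behind where your head points:

    ```python
    def tracking_error_deg(head_speed_deg_s, latency_ms):
        """Angular gap between where your head points and what the display shows,
        for a given motion-to-photon latency."""
        return head_speed_deg_s * latency_ms / 1000.0
    ```

    This is why the high-end headsets chase single-digit-millisecond latency through high refresh rates and prediction, rather than just faster rendering.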

    The good news is the high-end headsets have solved the motion sickness problem for most people.

    The Problem with 360 Video

    The bad news for applications like the NYT VR application and 360 video as a whole is that it’s impossible to avoid breaking this rule with 360 video. 360 video is inherently limited, and its problems are exacerbated by the other limitations of phone-based platforms like Cardboard. But even on more capable desktop platforms, which support higher frame rates and positional tracking, you won’t be able to get up and walk around in a 360 video. The cameras just can’t capture the data required to allow that.

    Even if the director of a 360 film avoids doing something inexcusable like moving the camera, the slight lateral movements that happen when you move your head to look around can be enough to trigger motion sickness.

    How long is too long for 360 video? In my informal tests, between 10 and 20 minutes, depending on the individual’s reported sensitivity to motion sickness.

    This jibes nicely with the Times’ report that the average user spends 14 minutes and 27 seconds in the NYT VR app.

    Good, Fast, or Easy: Pick Two

    In the short term, 360 video offers a relatively cheap bridge to the new medium. It’s fast: you can draft off of the many existing player infrastructures. And it’s easy: creating 360 video adds just a few steps to the existing toolchain for video production.

    It isn’t really good, though. This is just the latest example of content creators shoehorning old formats into new technologies.

    Reply
  43. Tomi Engdahl says:

    Pandora:
    Rdio to shut down the service, file for bankruptcy, with Pandora acquiring key assets for $75M in cash — Pandora to Acquire Key Assets from Rdio — Adding technology, IP and talent to accelerate development of new capabilities; Pandora hosting investor call to outline growth strategy at 1:30 p.m. PT today

    Pandora to Acquire Key Assets from Rdio
    http://press.pandora.com/phoenix.zhtml?c=251764&p=irol-newsArticle&ID=2112860

    Reply
  44. Tomi Engdahl says:

    How a computer repair guy turned into a massive YouTube star
    http://www.techinsider.io/unbox-therapy-youtube-lewis-hilsenteger-2015-7

    Lewis Hilsenteger made huge waves last year when a video of him bending a brand new iPhone 6 Plus with his bare hands went viral. Before “Bendgate,” though, Hilsenteger was already making a name for himself with his hugely popular YouTube channel — Unbox Therapy.

    https://www.youtube.com/user/unboxtherapy/

    Reply
  45. Tomi Engdahl says:

    Margaret Sullivan / New York Times:
    The need to restage scenes for filming virtual reality means the tool may not be appropriate for some journalistic purposes — The Tricky Terrain of Virtual Reality — The arrival of a small cardboard box with last Sunday’s Times represented, in its unobtrusive way, a collision of cultures.

    The Tricky Terrain of Virtual Reality
    http://www.nytimes.com/2015/11/15/public-editor/new-york-times-virtual-reality-margaret-sullivan-public-editor.html

    The arrival of a small cardboard box with last Sunday’s Times represented, in its unobtrusive way, a collision of cultures.

    Here was a piece of cutting-edge journalism — promising virtual reality, no less — arriving the old-fashioned way, hand delivered with the print newspaper. The box itself (when assembled, it looked like a Fresh Direct container for three jumbo eggs) struck me as an almost instant anachronism: ready for its place on a historical timeline of the digital age’s evolution. This is what happened in 2015.

    But at the moment, this is new. The Times has leapt into this technology with fanfare and has gathered acclaim. The goggles contained within the cardboard, when combined with a downloaded app on a smartphone, give viewers a 360-degree immersion into an 11-minute film called “The Displaced,” the stories of three children — from Lebanon, Ukraine and South Sudan — torn from their homes by war.

    Writing in Fortune, Rick Broida described his reaction: “Five seconds into the film, I was struck by the immediacy — and the intimacy — of the images. These aren’t computer-generated faces and landscapes; they’re real people in real places, and I felt like I was standing there myself, not just observing from afar.”

    Well before The Times’s experiment, Tom Kent, the standards editor at The Associated Press, wrote on Medium that the nexus of journalism and V.R. technology means working through the challenges: “Common understandings of what techniques are ethically acceptable and what needs to be disclosed to viewers can go a long way toward guarding the future of V.R. as a legitimate journalistic tool.”

    “There is a whole host of ethical considerations and standards issues that have to be grappled with,”

    “Since V.R. films a scene in 360 degrees, in every direction at the same time, there is no place for the photographer or filmmaker to stand unless they become a constant character in the scene. In traditional photo or video, they stand behind their camera and craft scenes so they do not appear to be present.” So, he said, “we had to hide.”

    Mr. Corbett told me that “it would be crazy to think that all the implications, questions and issues have been settled and determined, or that we have a fully formed set of rules.” After all, he said, “It took decades to develop a body of best practices in news photography.”

    An ethical reality check for virtual reality journalism
    https://medium.com/@tjrkent/an-ethical-reality-check-for-virtual-reality-journalism-8e5230673507

    Virtual reality journalism is with us to stay, and will become even more realistic and immersive as technology improves. Already, virtual reality headsets and vivid sound tracks can put a viewer into stunning, 360-degree scenes of a bombed-out town in Syria. They can drop him onto a dark street in Sanford, Florida, as George Zimmerman surveils Trayvon Martin.
    It’s only a matter of time until VR simulation looks more and more like the actual event. The slightly chubby, Lego-like characters that populate some of today’s VR will likely begin to look much more like the actual newsmakers — perhaps indistinguishably so.
    The power of VR transforms the news viewer’s experience from just learning about events to being in them. It has the potential to attract young viewers to the news as never before.

    Viewers need to know how VR producers expect their work to be perceived, what’s been done to guarantee authenticity and what part of a production may be, frankly, supposition.

    Here are some building blocks to consider for these disclosures and codes:
    ● What’s real? At the Associated Press, our interactive team painstakingly mapped a series of luxury locales in high-resolution imagery for a VR piece on high-end hotels, cruise liners and air travel. Everything in a project like that can be photographed precisely.
    But things can become more complicated if a VR modeler wants to re-create a moving, active news event in 3-D on the basis of 2-D photos or video taken at the time.

    ● Image integrity. How much modification of images should be allowed?

    ● Are there competing views of what happened? Often there’s controversy over how a newsworthy event unfolded.

    ● What’s the goal of the presentation? VR technology has a powerful ability to inspire empathy for those depicted in their productions. TechCrunch called VR “the empathy machine.”

    In traditional media, too, the desire to paint a cause or a person in sympathetic tones can conflict with impartial, hard-headed reporting. But the potential for empathy is even greater in the VR world, since viewers can bond far more easily with a 3-D character they’re practically touching. Music can also be used to evoke specific emotions among VR viewers. VR producers would do well to make clear to their audiences what the fundamental goal of their journalism is.

    ● What’s happening beyond the VR scene? A VR world is a controlled environment. When a viewer straps on a VR headset and starts walking, there’s a powerful impression of roaming freely through the virtual world.

    But they’re not. The limits of the VR world are circumscribed by the imagery the producer chooses to include, just as 2-D photography depends on the angles photographers select.

    ● And more. Many other ethical issues arise with VR. Must the VR rendition of an event cover the same amount of time as the original?

    VR is becoming a powerful technique to hold and influence news audiences. But if producers focus solely on optimizing the technology or creating empathy for their characters, VR’s journalistic credibility will be threatened. Common understandings of what techniques are ethically acceptable and what needs to be disclosed to viewers can go a long way toward guarding the future of VR as a legitimate journalistic tool.

    Reply
  46. Tomi Engdahl says:

    3D Printed Lens Gears for Pro-Grade Focus Pulling
    http://hackaday.com/2015/11/17/3d-printed-lens-gears-for-pro-grade-focus-pulling/

    Key Grip, Gaffer, Best Boy – any of us who’ve sat through every last minute of a Marvel movie to get to the post-credits scene – mmm, schawarma! – have seen the obscure titles of folks involved in movie making. But “Focus Puller”? How hard can it be to focus a camera?

    Turns out there’s a lot to the job, and in many cases it makes sense to mechanize the task. Pro cinematic cameras have geared rings for just that reason, and now your DSLR lens can have them too with customized, 3D printed follow-focus gears.

    Unwilling to permanently modify his DSLR camera lens and dissatisfied with after-market lens gearing solutions, [Jaymis Loveday] learned enough OpenSCAD to generate gears from 50mm to 100mm in diameter in 0.5mm increments for a snug friction fit.
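    For context on why the gears come in such specific diameters: cine follow-focus gears conventionally use the 0.8 metric module, where tooth count equals pitch diameter divided by module, so each printable size must snap to a whole number of teeth. A sketch of that constraint (assuming the standard 0.8 module):

    ```python
    def teeth_for_diameter(pitch_diameter_mm, module_mm=0.8):
        """For the cine-standard 0.8 module, tooth count = pitch diameter / module.
        Tooth count must be an integer, so the actual diameter snaps to the
        nearest realizable value."""
        teeth = round(pitch_diameter_mm / module_mm)
        actual_diameter = teeth * module_mm
        return teeth, actual_diameter
    ```

    A 60 mm lens barrel thus gets a 75-tooth gear exactly, while intermediate sizes land on the nearest 0.8 mm step, which is presumably why generating a dense range of diameters parametrically beats a fixed set of off-the-shelf sizes.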

    3D Printable, Seamless, Friction-fit Lens Gears for Follow Focus
    http://jaymis.com/2015/11/3d-printable-seamless-friction-fit-lens-gears-for-follow-focus/

    Reply
  47. Tomi Engdahl says:

    Ericsson’s Mobility Report forecasts that, for example, North American smartphone data traffic will grow from the current 3.8 GB to 22 GB per month.

    By 2021, mobile video will account for 70 percent of all bits transferred over mobile networks. According to Ericsson’s report, YouTube’s share of mobile video traffic is currently 70 percent, and Netflix’s share will reach 20 percent in many of the countries where the service is available.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=3615:eurooppa-jalkijunassa-5g-kayttoonotossa&catid=13&Itemid=101
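    Assuming the report’s usual forecast horizon of roughly six years (an assumption; the excerpt above does not state it), growth from 3.8 GB to 22 GB per month implies about 34 percent compound annual growth:

    ```python
    def cagr(start, end, years):
        """Compound annual growth rate implied by growth from `start` to `end`."""
        return (end / start) ** (1.0 / years) - 1.0

    # 3.8 GB -> 22 GB per month over an assumed six-year horizon
    growth = cagr(3.8, 22.0, 6)
    ```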

    Reply
  48. Tomi Engdahl says:

    Large mobile phone photos are best processed in the cloud

    Now that mobile phone camera sensors capture up to 20 million pixels, and sometimes more, each image is a massive data packet. American researchers have presented a technique that lets even large images be processed in the cloud while saving bandwidth and reducing power consumption.

    If large image files are processed directly on the smartphone, the device’s battery drains quickly under the load. Researchers from MIT, Stanford University, and Adobe Systems have developed a solution in which the image can be processed in the cloud without the large picture files needing to be moved back and forth over the network.

    The technique, presented at the Siggraph conference, reduced the bandwidth needed for the cloud round trip by 98.5 percent compared with shuttling the full-size image to the server and back. Power consumption for the operation was 85 percent lower. How is this possible?

    The smartphone sends the server a heavily compressed image, and the server sends back an even smaller file containing simple instructions for editing the original image. The phone then quickly applies the edits itself using intelligent algorithms.

    The best thing about the technique is that only about one hundredth of the bits needed to move the original image file back and forth actually travel over the network. The smartphone has to do a little more computation, but that is a minor inconvenience compared to shuttling the entire massive 20-megapixel original.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=3621:iso-kannykkakuva-kannattaa-kasitella-pilvessa&catid=13&Itemid=101
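    The round trip can be sketched in miniature. The single per-channel gain used as the “recipe” below is a drastic simplification of the per-tile transform models in the actual research, but it shows the shape of the pipeline and the bandwidth arithmetic:

    ```python
    import numpy as np

    def client_compress(image, factor=8):
        """Client side: send only a heavily downsampled proxy of the photo."""
        return image[::factor, ::factor]

    def server_recipe(proxy, edited_proxy):
        """Server side: instead of returning the edited image, return a compact
        'recipe'. Here it is just one gain per color channel, the simplest
        possible stand-in for the real per-tile transform models."""
        return edited_proxy.mean(axis=(0, 1)) / np.maximum(proxy.mean(axis=(0, 1)), 1e-6)

    def client_apply(image, gains):
        """Client side: re-create the edit on the full-resolution original."""
        return np.clip(image * gains, 0, 255)
    ```

    With an 8x downsample in each dimension, the proxy carries 1/64 of the original pixels, so only a small fraction of the full image's bits ever crosses the network, in the spirit of the roughly one-hundredth figure quoted above.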

    Reply
  49. Tomi Engdahl says:

    Google Photos
    https://plus.google.com/+JohnElstone/posts/8yFHSgSdtDW

    Helping users free up storage space on their Android device:
    On the Settings screen, users will now see a “Free Up Space” button. Clicking on the button will prompt the user to bulk-delete copies of photos that have already been backed up from their device. To prevent device copies from being accidentally deleted, we’re asking users to double-confirm their intent during the ‘Free Up Space’ flow.

    Downgrade previously uploaded photos from “Original quality” to “High quality”
    When users choose to backup their photos and videos to Google Photos, we allow photos to be uploaded in two ways:
    “Original quality” (large file, full resolution). These photos count against a user’s Google storage quota.
    “High quality” (smaller file, compressed file). These photos don’t count against a user’s Google storage quota.

    If a user joined Google Photos and selected the “Original quality” setting for their photos, but changed their mind, they could have future media backed up in “High quality”.

    Reply

Leave a Comment

Your email address will not be published. Required fields are marked *

*

*